Probabilistic Program Equivalence for NetKAT∗
STEFFEN SMOLKA, Cornell University, USA
PRAVEEN KUMAR, Cornell University, USA
NATE FOSTER, Cornell University, USA
JUSTIN HSU, Cornell University, USA
DAVID KAHN, Cornell University, USA
DEXTER KOZEN, Cornell University, USA
ALEXANDRA SILVA, University College London, UK
We tackle the problem of deciding whether two probabilistic programs are equivalent in Probabilistic NetKAT,
a formal language for specifying and reasoning about the behavior of packet-switched networks. We show
that the problem is decidable for the history-free fragment of the language by developing an effective decision
procedure based on stochastic matrices. The main challenge lies in reasoning about iteration, which we address
by designing an encoding of the program semantics as a finite-state absorbing Markov chain, whose limiting
distribution can be computed exactly. In an extended case study on a real-world data center network, we
automatically verify various quantitative properties of interest, including resilience in the presence of failures,
by analyzing the Markov chain semantics.
1 INTRODUCTION
Program equivalence is one of the most fundamental problems in Computer Science: given a pair
of programs, do they describe the same computation? The problem is undecidable in general, but it
can often be solved for domain-specific languages based on restricted computational models. For
example, a classical approach for deciding whether a pair of regular expressions denote the same
language is to first convert the expressions to deterministic finite automata, which can then be
checked for equivalence in almost linear time [32]. In addition to the theoretical motivation, there
are also many practical benefits to studying program equivalence. Being able to decide equivalence
enables more sophisticated applications, for instance in verified compilation and program synthesis.
Less obviously—but arguably more importantly—deciding equivalence typically involves finding
some sort of finite, explicit representation of the program semantics. This compact encoding can
open the door to reasoning techniques and decision procedures for properties that extend far
beyond straightforward program equivalence.
With this motivation in mind, this paper tackles the problem of deciding equivalence in Probabilistic NetKAT (ProbNetKAT), a language for modeling and reasoning about the behavior of
packet-switched networks. As its name suggests, ProbNetKAT is based on NetKAT [3, 9, 30], which
is in turn based on Kleene algebra with tests (KAT), an algebraic system combining Boolean predicates and regular expressions. ProbNetKAT extends NetKAT with a random choice operator and
a semantics based on Markov kernels [31]. The framework can be used to encode and reason
about randomized protocols (e.g., a routing scheme that uses random forwarding paths to balance
load [33]); describe uncertainty about traffic demands (e.g., the diurnal/nocturnal fluctuation in
access patterns commonly seen in networks for large content providers [26]); and model failures
(e.g., switches or links that are known to fail with some probability [10]).
However, the semantics of ProbNetKAT is surprisingly subtle. Using the iteration operator
(i.e., the Kleene star from regular expressions), it is possible to write programs that generate
continuous distributions over an uncountable space of packet history sets [8, Theorem 3]. This makes
reasoning about convergence non-trivial, and requires representing infinitary objects compactly
∗ This is a preliminary draft from March 28, 2018.
in an implementation. To address these issues, prior work [31] developed a domain-theoretic
characterization of ProbNetKAT that provides notions of approximation and continuity, which can
be used to reason about programs using only discrete distributions with finite support. However,
that work left the decidability of program equivalence as an open problem. In this paper, we settle
this question positively for the history-free fragment of the language, where programs manipulate
sets of packets instead of sets of packet histories (finite sequences of packets).
Our decision procedure works by deriving a canonical, explicit representation of the program
semantics, for which checking equivalence is straightforward. Specifically, we define a big-step
semantics that interprets each program as a finite stochastic matrix—equivalently, a Markov chain
that transitions from input to output in a single step. Equivalence is trivially decidable on this
representation, but the challenge lies in computing the big-step matrix for iteration—intuitively,
the finite matrix needs to somehow capture the result of an infinite stochastic process. We address
this by embedding the system in a more refined Markov chain with a larger state space, modeling
iteration in the style of a small-step semantics. With some care, this chain can be transformed to an
absorbing Markov chain, from which we derive a closed form analytic solution representing the
limit of the iteration by applying elementary matrix calculations. We prove the soundness of this
approach formally.
Although the history-free fragment of ProbNetKAT is a restriction of the general language, it
captures the input-output behavior of a network—mapping initial packet states to final packet
states—and is still expressive enough to handle a wide range of problems of interest. Many other
contemporary network verification tools, including Anteater [22], Header Space Analysis [15],
and Veriflow [17], are also based on a history-free model. To handle properties that involve paths
(e.g., waypointing), these tools generate a series of smaller properties to check, one for each hop
in the path. In the ProbNetKAT implementation, working with history-free programs can reduce
the space requirements by an exponential factor—a significant benefit when analyzing complex
randomized protocols in large networks.
We have built a prototype implementation of our approach in OCaml. The workhorse of the
decision procedure computes a finite stochastic matrix—representing a finite Markov chain—given
an input program. It leverages the sparse linear solver UMFPACK [5] as a back-end to compute
limiting distributions, and incorporates a number of optimizations and symbolic techniques to
compactly represent large but sparse matrices. Although building a scalable implementation would
require much more engineering (and is not the primary focus of this paper), our prototype is already
able to handle programs of moderate size. Leveraging the finite encoding of the semantics, we
have carried out several case studies in the context of data center networks; our central case study
models and verifies the resilience of various routing schemes in the presence of link failures.
Contributions and outline. The main contribution of this paper is the development of a decision
procedure for history-free ProbNetKAT. We develop a new, tractable semantics in terms of stochastic
matrices in two steps; we establish the soundness of the semantics with respect to ProbNetKAT’s
original denotational model, and we use the compact semantics as the basis for building a prototype
implementation with which we carry out case studies.
In Section 2 and Section 3 we introduce ProbNetKAT using a simple example and motivate the
need for quantitative, probabilistic reasoning.
In Section 4, we present a semantics based on finite stochastic matrices and show that it fully
characterizes the behavior of ProbNetKAT programs on packets (Theorem 4.1). In this big-step
semantics, the matrices encode Markov chains over the state space 2Pk . A single step of the chain
models the entire execution of a program, going directly from the initial state corresponding to the
set of input packets to the final state corresponding to the set of output packets. Although this reduces
program equivalence to equality of finite matrices, we still need to provide a way to explicitly
compute them. In particular, the matrix that models iteration is given in terms of a limit.
In Section 5 we derive a closed form for the big-step matrix associated with p ∗ , giving an
explicit representation of the big-step semantics. It is important to note that this is not simply
the calculation of the stationary distribution of a Markov chain, as the semantics of p ∗ is more
subtle. Instead, we define a small-step semantics, a second Markov chain with a larger state space
such that one transition models one iteration of p ∗ . We then show how to transform this finer
Markov chain into an absorbing Markov chain, which admits a closed form solution for its limiting
distribution. Together, the big- and small-step semantics enable us to analytically compute a finite
representation of the program semantics. Directly checking these semantics for equality yields an
effective decision procedure for program equivalence (Corollary 5.8). This is in contrast with the
previous semantics [8], which merely provided an approximation theorem for the semantics of
iteration p ∗ and was not suitable for deciding equivalence.
In Section 6, we illustrate the practical applicability of our approach by exploiting the representation of ProbNetKAT programs as stochastic matrices to answer a number of questions of interest
in real-world networks. For example, we can reduce loop termination to program equivalence: the
fact that the while loop below terminates with probability 1 can be checked as follows:
while ¬(f=0) do (skip ⊕r f←0)  ≡  f←0
We also present real-world case studies that use the stochastic matrix representation to answer
questions about the resilience of data center networks in the presence of link failures.
We discuss obstacles in extending our approach to the full ProbNetKAT language in Section 7, including a novel automata model encodable in ProbNetKAT for which equivalence seems challenging
to decide. We survey related work in Section 8 and conclude in Section 9.
2 OVERVIEW
This section introduces the syntax and semantics of ProbNetKAT using a simple example. We will
also see how various properties, including program equivalence and also program ordering and
quantitative computations over the output distribution, can be encoded in ProbNetKAT. Each of
the analyses in this section can be automatically carried out in our prototype implementation.
As our running example, consider the network shown in Figure 1. It connects Source and
Destination hosts through a topology with three switches. Suppose we want to implement the
following policy: forward packets from the Source to the Destination. We will start by building a
straightforward implementation of this policy in ProbNetKAT and then verify that it correctly
implements the specification embodied in the policy using program equivalence. Next, we will
refine our implementation to improve its resilience to link failures and verify that the refinement is
more resilient. Finally, we characterize the resilience of both implementations quantitatively.
2.1 Deterministic Programming and Reasoning
We will start with a simple deterministic program that forwards packets from left to right through
the topology. To a first approximation, a ProbNetKAT program can be thought of as a random
function from input packets to output packets. We model packets as records, with fields for standard
headers such as the source address (src) and destination address (dst) of a packet, as well as two
fields switch (sw) and port (pt) identifying the current location of the packet. The precise field
names and ranges turn out not to be so important for our purposes; what is crucial is that the
number of fields and the size of their domains must be finite.
NetKAT provides primitives f ←n and f =n to modify and test the field f of an incoming packet.
A modification f ←n returns the input packet with the f field updated to n. A test f =n either
[Figure: Switch 1 (port 1 to the Source), Switch 2 (port 2 to the Destination), and Switch 3, interconnected by links between numbered ports.]
Fig. 1. Example network.
returns the input packet unmodified if the test succeeds, or returns the empty set if the test fails.
There are also primitives skip and drop that behave like a test that always succeeds and fails,
respectively. Programs p, q are assembled to larger programs by composing them in sequence (p ; q)
or in parallel (p & q). NetKAT also provides the Kleene star operator p ∗ from regular expressions to
iterate programs. ProbNetKAT extends NetKAT with an additional operator p ⊕r q that executes
either p with probability r , or q with probability 1 − r .
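To make this packet-set reading concrete, the following small Python sketch (ours, not the paper's OCaml prototype; all names are illustrative) models packets as frozen field-to-value maps and the deterministic fragment of the language as functions from packet sets to packet sets.

    # Sketch (ours): deterministic ProbNetKAT primitives as functions on packet sets.
    from typing import Callable, FrozenSet, Tuple

    Packet = FrozenSet[Tuple[str, int]]   # a packet is a frozen set of (field, value) pairs
    PacketSet = FrozenSet[Packet]
    Policy = Callable[[PacketSet], PacketSet]

    def test(f: str, n: int) -> Policy:    # f=n : keep exactly the matching packets
        return lambda a: frozenset(pi for pi in a if dict(pi).get(f) == n)

    def modify(f: str, n: int) -> Policy:  # f<-n : update field f of every packet
        return lambda a: frozenset(frozenset({**dict(pi), f: n}.items()) for pi in a)

    skip: Policy = lambda a: a             # skip passes all packets through
    drop: Policy = lambda a: frozenset()   # drop returns the empty set
    seq = lambda p, q: (lambda a: q(p(a)))        # p ; q
    par = lambda p, q: (lambda a: p(a) | q(a))    # p & q : union of both results

    # Example: switch 1's policy guarded by its location test, on one input packet.
    pkt = frozenset({"sw": 1, "pt": 1}.items())
    p1 = seq(test("sw", 1), modify("pt", 2))      # sw=1 ; pt<-2
    print(sorted(dict(next(iter(p1(frozenset({pkt}))))).items()))  # [('pt', 2), ('sw', 1)]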
Forwarding. We now turn to the implementation of our forwarding policy. To route packets from
Source to Destination, all switches can simply forward incoming packets out of port 2:
p1 ≜ pt←2
p2 ≜ pt←2
p3 ≜ pt←2
This is achieved by modifying the port field (pt). Then, to encode the forwarding logic for all
switches into a single program, we take the union of their individual programs, after guarding the
policy for each switch with a test that matches packets at that switch:
p ≜ (sw=1 ; p1) & (sw=2 ; p2) & (sw=3 ; p3)
Note that we specify a policy for switch 3, even though it is unreachable.
Now we would like to answer the following question: does our program p correctly forward
packets from Source to Destination? Note however that we cannot answer the question by inspecting
p alone, since the answer depends fundamentally on the network topology.
Topology. Although the network topology is not programmable, we can still model its behavior
as a program. A unidirectional link matches on packets located at the source location of the link,
and updates their location to the destination of the link. In our example network (Figure 1), the
link ℓij from switch i to switch j ≠ i is given by
ℓij ≜ sw=i ; pt=j ; sw←j ; pt←i
We obtain a model for the entire topology by taking the union of all its links:
t ≜ ℓ12 & ℓ13 & ℓ32
Although this example uses unidirectional links, bidirectional links can be modeled as well using a
pair of unidirectional links.
Network Model. A packet traversing the network is subject to an interleaving of processing steps
by switches and links in the network. This is expressible in NetKAT using Kleene star as follows:
M(p, t) ≜ (p ; t)∗ ; p
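For instance (a worked example of ours, using the definitions above), inject a single packet π with π.sw = 1 and π.pt = 1. One round of (p ; t) followed by the final p already delivers it:
{π} —p→ {π[pt:=2]} —ℓ12→ {π[sw:=2][pt:=1]} —p→ {π[sw:=2][pt:=2]}
so M(p, t) moves the packet from Switch 1, port 1 to Switch 2, port 2.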
However, the model M(p, t) captures the behavior of the network on arbitrary input packets,
including packets that start at arbitrary locations in the interior of the network. Typically we
are interested only in the behavior of the network for packets that originate at the ingress of the
network and arrive at the egress of the network. To restrict the model to such packets, we can
define predicates in and out and pre- and post-compose the model with them:
in ; M(p, t) ; out
For our example network, we are interested in packets originating at the Source and arriving at the
Destination, so we define
in ≜ sw=1 ; pt=1
out ≜ sw=2 ; pt=2
With a full network model in hand, we can verify that p correctly implements the desired network
policy, i.e. forward packets from Source to Destination. Our informal policy can be expressed
formally as a simple ProbNetKAT program:
teleport ≜ sw←2 ; pt←2
We can then settle the correctness question by checking the equivalence
in ; M(p, t) ; out ≡ in ; teleport
Previous work [3, 9, 30] used NetKAT equivalence with similar encodings to reason about various
essential network properties including waypointing, reachability, isolation, and loop freedom, as
well as for the validation and verification of compiler transformations. Unfortunately, the NetKAT
decision procedure [9] and other state of the art network verification tools [15, 17] are fundamentally
limited to reasoning about deterministic network behaviors.
2.2 Probabilistic Programming and Reasoning
Routing schemes used in practice often behave non-deterministically—e.g., they may distribute
packets across multiple paths to avoid congestion, or they may switch to backup paths in reaction
to failures. To see these sorts of behaviors in action, let’s refine our naive routing scheme p to make
it resilient to random link failures.
Link Failures. We will assume that switches have access to a boolean flag upi that is true if and
only if the link connected to the switch at port i is transmitting packets correctly.1 To make the
network resilient to a failure, we can modify the program for Switch 1 as follows: if the link ℓ12 is
up, use the shortest path to Switch 2 as before; otherwise, take a detour via Switch 3, which still
forwards all packets to Switch 2.
p̂1 ≜ (up2=1 ; pt←2) & (up2=0 ; pt←3)
As before, we can then encode the forwarding logic for all switches into a single program:
p̂ ≜ (sw=1 ; p̂1) & (sw=2 ; p2) & (sw=3 ; p3)
Next, we update our link and topology encodings. A link behaves as before when it is up, but
drops all incoming packets otherwise:
ℓ̂ij ≜ upj=1 ; ℓij
For the purposes of this example, we will consider failures of links connected to Switch 1 only:
t̂ ≜ ℓ̂12 & ℓ̂13 & ℓ32
¹ Modern switches use low-level protocols such as Bidirectional Forwarding Detection (BFD) to maintain healthiness information about the link connected to each port [4].
We also need to assume some failure model, i.e. a probabilistic model of when and how often links
fail. We will consider three failure models:
f0 ≜ up2←1 ; up3←1
f1 ≜ f0 @ 1/2  ⊕  (up2←0) & (up3←1) @ 1/4  ⊕  (up2←1) & (up3←0) @ 1/4
f2 ≜ (up2←1 ⊕0.8 up2←0) ; (up3←1 ⊕0.8 up3←0)
Intuitively, in model f 0 , links never fail; in f 1 , the links ℓ12 and ℓ13 can fail with probability 25%
each, but at most one fails; in f 2 , the links can fail independently with probability 20% each.
Finally, we can assemble the encodings of policy, topology, and failures into a refined model:
M̂(p, t, f) ≜ var up2←1 in
             var up3←1 in
             M((f ; p), t)
The refined model M̂ wraps our previous model M with declarations of the two local variables up2
and up3, and it executes the failure model at each hop prior to switch and topology processing. As
a quick sanity check, we can verify that the old model and the new model are equivalent in the
absence of failures, i.e. under failure model f0:
M(p, t) ≡ M̂(p, t̂, f0)
Now let us analyze our resilient routing scheme p̂. First, we can verify that it correctly routes
packets to the Destination in the absence of failures by checking the following equivalence:
in ; M̂(p̂, t̂, f0) ; out ≡ in ; teleport
In fact, the scheme p̂ is 1-resilient: it delivers all packets as long as no more than 1 link fails. In
particular, it behaves like teleport under failure model f1. In contrast, this is not true for our naive
routing scheme p:
in ; M̂(p̂, t̂, f1) ; out ≡ in ; teleport ≢ in ; M̂(p, t̂, f1) ; out
Under failure model f2, neither of the routing schemes is fully resilient and equivalent to teleportation. However, it is reassuring to verify that the refined routing scheme p̂ performs strictly better
than the naive scheme p,
M̂(p, t̂, f2) < M̂(p̂, t̂, f2)
where p < q means that q delivers packets with higher probability than p.
Reasoning using program equivalences and inequivalences is helpful to establish qualitative
properties such as reachability properties and program invariants. But we can also go a step further,
and compute quantitative properties of the packet distribution generated by a ProbNetKAT program.
For example, we may ask for the probability that the schemes deliver a packet originating at Source
to Destination under failure model f 2 . The answer is 80% for the naive scheme, and 96% for the
resilient scheme. Such a computation might be used by an Internet Service Provider (ISP) to check
that it can meet its service-level agreements (SLA) with customers.
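These percentages can be reproduced by hand from the failure model (our own back-of-the-envelope check): under f2 the links ℓ12 and ℓ13 fail independently, each with probability 0.2. The naive scheme p delivers the packet exactly when ℓ12 is up, i.e. with probability 0.8. The resilient scheme p̂ additionally succeeds when ℓ12 is down but ℓ13 is up, giving 0.8 + 0.2 · 0.8 = 0.96.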
In Section 6 we will analyze a more sophisticated resilient routing scheme and see more complex
examples of qualitative and quantitative reasoning with ProbNetKAT drawn from real-world data
center networks. But first, we turn to developing the theoretical foundations (Sections 3 to 5).
3 BACKGROUND ON PROBABILISTIC NETKAT
In this section, we review the syntax and semantics of ProbNetKAT [8, 31] and basic properties of
the language, focusing on the history-free fragment. A synopsis appears in Figure 2.
3.1 Syntax
A packet π is a record mapping a finite set of fields f1 , f2 , . . . , fk to bounded integers n. Fields include
standard header fields such as source (src) and destination (dst) addresses, as well as two logical
fields for the switch (sw) and port (pt) that record the current location of the packet in the network.
The logical fields are not present in a physical network packet, but it is convenient to model them
as if they were. We write π .f to denote the value of field f of π and π [f :=n] for the packet obtained
from π by updating field f to n. We let Pk denote the (finite) set of all packets.
ProbNetKAT expressions consist of predicates (t, u, . . .) and programs (p, q, . . .). Primitive predicates include tests (f =n) and the Boolean constants false (drop) and true (skip). Compound predicates
are formed using the usual Boolean connectives of disjunction (t & u), conjunction (t ; u), and
negation (¬t). Primitive programs include predicates (t) and assignments (f ←n). Compound programs are formed using the operators parallel composition (p & q), sequential composition (p ; q),
iteration (p ∗ ), and probabilistic choice (p ⊕r q). The full version of the language also provides a
dup primitive, which logs the current state of the packet, but we omit this operator from the
history-free fragment of the language considered in this paper; we discuss technical challenges to
handling full ProbNetKAT in Section 7.
The probabilistic choice operator p ⊕r q executes p with probability r and q with probability 1 −r ,
where r is rational, 0 ≤ r ≤ 1. We often use an n-ary version and omit the r ’s as in p1 ⊕ · · · ⊕ pn ,
which is interpreted as executing one of the pi chosen with equal probability. This can be desugared
into the binary version.
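For instance, a uniform choice among three programs can be encoded as nested binary choices (one possible desugaring):
p1 ⊕ p2 ⊕ p3 ≜ p1 ⊕1/3 (p2 ⊕1/2 p3)
so that each pi runs with probability 1/3.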
Conjunction of predicates and sequential composition of programs use the same syntax (t ; u
and p ; q, respectively), as their semantics coincide. The same is true for disjunction of predicates
and parallel composition of programs (t & u and p & q, respectively). The negation operator (¬)
may only be applied to predicates.
The language as presented in Figure 2 only includes core primitives, but many other useful
constructs can be derived. In particular, it is straightforward to encode conditionals and while
loops:
if t then p else q ≜ t ; p & ¬t ; q
while t do p ≜ (t ; p)∗ ; ¬t
These encodings are well known from KAT [19]. Mutable and immutable local variables can also
be desugared into the core calculus (although our implementation supports them directly):
var f ←n in p ≜ f ←n ; p ; f ←0
Here f is an otherwise unused field. The assignment f ←0 ensures that the final value of f is
“erased” after the field goes out of scope.
3.2 Semantics
In the full version of ProbNetKAT, the space 2H of sets of packet histories2 is uncountable, and
programs can generate continuous distributions on this space. This requires measure theory and
Lebesgue integration for a suitable semantic treatment. However, as programs in our history-free
fragment can generate only finite discrete distributions, we are able to give a considerably simplified
presentation (Figure 2). Nevertheless, the resulting semantics is a direct restriction of the general
semantics originally presented in [8, 31].
² A history is a non-empty finite sequence of packets modeling the trajectory of a single packet through the network.
Syntax
Naturals        n ::= 0 | 1 | 2 | · · ·
Fields          f ::= f1 | · · · | fk
Packets         Pk ∋ π ::= {f1 = n1, . . . , fk = nk}
Probabilities   r ∈ [0, 1] ∩ Q

Predicates      t, u ::= drop          False
                       | skip          True
                       | f=n           Test
                       | t & u         Disjunction
                       | t ; u         Conjunction
                       | ¬t            Negation

Programs        p, q ::= t             Filter
                       | f←n           Assignment
                       | p & q         Union
                       | p ; q         Sequence
                       | p ⊕r q        Choice
                       | p∗            Iteration

Semantics       JpK ∈ 2^Pk → D(2^Pk)
JdropK(a) ≜ δ(∅)
JskipK(a) ≜ δ(a)
Jf=nK(a) ≜ δ({π ∈ a | π.f = n})
Jf←nK(a) ≜ δ({π[f:=n] | π ∈ a})
J¬tK(a) ≜ D(λb. a − b)(JtK(a))
Jp & qK(a) ≜ D(∪)(JpK(a) × JqK(a))
Jp ; qK(a) ≜ JqK†(JpK(a))
Jp ⊕r qK(a) ≜ r · JpK(a) + (1 − r) · JqK(a)
Jp∗K(a) ≜ ⨆_{n∈N} Jp^(n)K(a)    where p^(0) ≜ skip, p^(n+1) ≜ skip & p ; p^(n)

(Discrete) Probability Monad D
Unit    δ : X → D(X),    δ(x) ≜ δ_x
Bind    −† : (X → D(Y)) → D(X) → D(Y),    f†(µ)(A) ≜ Σ_{x∈X} f(x)(A) · µ(x)

Fig. 2. ProbNetKAT core language: syntax and semantics.
Proposition 3.1. Let L−M denote the semantics defined in [31]. Then for all dup-free programs p
and inputs a ∈ 2Pk , we have JpK(a) = LpM(a), where we identify packets and histories of length one.
Proof. The proof is given in Appendix A. □
For the purposes of this paper, we work in the discrete space 2Pk , i.e., the set of sets of packets.
An outcome (denoted by lowercase variables a, b, c, . . . ) is a set of packets and an event (denoted by
uppercase variables A, B, C, . . . ) is a set of outcomes. Given a discrete probability measure on this
space, the probability of an event is the sum of the probabilities of its outcomes.
Programs are interpreted as Markov kernels on the space 2Pk . A Markov kernel is a function
2Pk → D(2Pk ) in the probability (or Giry) monad D [11, 18]. Thus, a program p maps an input set
of packets a ∈ 2Pk to a distribution JpK(a) ∈ D(2Pk ) over output sets of packets. The semantics uses
the following probabilistic primitives:3
• For a discrete measurable space X , D(X ) denotes the set of probability measures over X ; that
is, the set of countably additive functions µ : 2X → [0, 1] with µ(X ) = 1.
• For a measurable function f : X → Y , D(f ) : D(X ) → D(Y ) denotes the pushforward along
f ; that is, the function that maps a measure µ on X to
D(f )(µ) ≜ µ ◦ f −1 = λA ∈ ΣY . µ({x ∈ X | f (x) ∈ A})
which is called the pushforward measure on Y .
• The unit δ : X → D(X ) of the monad maps a point x ∈ X to the point mass (or Dirac
measure) δ x ∈ D(X ). The Dirac measure is given by
δ x (A) ≜ 1[x ∈ A]
³ The same primitives can be defined for uncountable spaces, as would be required to handle the full language.
That is, the Dirac measure is 1 if x ∈ A and 0 otherwise.
• The bind operation of the monad,
−† : (X → D(Y )) → D(X ) → D(Y )
lifts a function f : X → D(Y ) with deterministic inputs to a function f † : D(X ) → D(Y )
that takes random inputs. Intuitively, this is achieved by averaging the output of f when the
inputs are randomly distributed according to µ. Formally,
f†(µ)(A) ≜ Σ_{x∈X} f(x)(A) · µ(x).
• Given two measures µ ∈ D(X ) and ν ∈ D(Y ),
µ × ν ∈ D(X × Y )
denotes their product measure. This is the unique measure satisfying:
(µ × ν)(A × B) = µ(A) · ν (B)
Intuitively, it models distributions over pairs of independent values.
With these primitives at our disposal, we can now make our operational intuitions precise.
Formal definitions are given in Figure 2. A predicate t maps (with probability 1) the set of input
packets a ∈ 2Pk to the subset of packets b ⊆ a satisfying the predicate. In particular, the false
primitive drop simply drops all packets (i.e., it returns the empty set with probability 1) and the
true primitive skip simply keeps all packets (i.e., it returns the input set with probability 1). The
test f =n returns the subset of input packets whose f -field contains n. Negation ¬t filters out the
packets returned by t.
Parallel composition p & q executes p and q independently on the input set, then returns the union
of their results. Note that packet sets do not model nondeterminism, unlike the usual situation in
Kleene algebras—rather, they model collections of packets traversing possibly different portions of
the network simultaneously. Probabilistic choice p ⊕r q feeds the input to both p and q and returns
a convex combination of the output distributions according to r . Sequential composition p ; q can
be thought of as a two-stage probabilistic experiment: it first executes p on the input set to obtain
a random intermediate result, then feeds that into q to obtain the final distribution over outputs.
The outcome of q needs to be averaged over the distribution of intermediate results produced by
p. It may be helpful to think about summing over the paths in a probabilistic tree diagram and
multiplying the probabilities along each path.
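The following Python sketch (ours, not part of the paper's implementation; all names are illustrative) spells out this discrete probability monad and the probabilistic composition operators; a distribution is a finite dictionary from packet sets to probabilities.

    # Sketch (ours): the discrete probability monad over packet sets.
    from typing import Callable, Dict, FrozenSet, Tuple

    Packet = FrozenSet[Tuple[str, int]]
    PacketSet = FrozenSet[Packet]
    Dist = Dict[PacketSet, float]                 # finite support: outcome -> probability
    Kernel = Callable[[PacketSet], Dist]          # the type 2^Pk -> D(2^Pk)

    def dirac(a: PacketSet) -> Dist:              # unit: point mass at a
        return {a: 1.0}

    def bind(mu: Dist, f: Kernel) -> Dist:        # f^dagger(mu): average f over mu
        out: Dist = {}
        for a, pr in mu.items():
            for b, qr in f(a).items():
                out[b] = out.get(b, 0.0) + pr * qr
        return out

    def seq(p: Kernel, q: Kernel) -> Kernel:      # [[p ; q]](a) = q^dagger([[p]](a))
        return lambda a: bind(p(a), q)

    def choice(p: Kernel, r: float, q: Kernel) -> Kernel:   # [[p (+)_r q]]
        def run(a: PacketSet) -> Dist:
            out: Dist = {}
            for b, pr in p(a).items():
                out[b] = out.get(b, 0.0) + r * pr
            for b, pr in q(a).items():
                out[b] = out.get(b, 0.0) + (1.0 - r) * pr
            return out
        return run

    def par(p: Kernel, q: Kernel) -> Kernel:      # [[p & q]]: union of independent outcomes
        def run(a: PacketSet) -> Dist:
            out: Dist = {}
            for b1, p1 in p(a).items():
                for b2, p2 in q(a).items():
                    out[b1 | b2] = out.get(b1 | b2, 0.0) + p1 * p2
            return out
        return run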
We say that two programs are equivalent, denoted p ≡ q, if they denote the same Markov kernel,
i.e. if JpK = JqK. As usual, we expect Kleene star p ∗ to satisfy the characteristic fixed point equation
p ∗ ≡ skip & p ; p ∗ , which allows it to be unrolled ad infinitum. Thus we define it as the supremum of
its finite unrollings p (n) ; see Figure 2. This supremum is taken in a CPO (D(2Pk ), ⊑) of distributions
that is described in more detail in § 3.3. The partial ordering ⊑ on packet set distributions gives
rise to a partial ordering on programs: we write p ≤ q iff JpK(a) ⊑ JqK(a) for all inputs a ∈ 2Pk .
Intuitively, p ≤ q iff p produces any particular output packet π with probability at most that of q
for any fixed input.
A fact that should be intuitively clear, although it is somewhat hidden in our presentation of the
denotational semantics, is that the predicates form a Boolean algebra:
Lemma 3.2. Every predicate t satisfies JtK(a) = δ_{a∩t̂} for a certain packet set t̂ ⊆ Pk, defined by
• t̂ = ∅ for t = drop,
• t̂ = Pk for t = skip,
• t̂ = {π ∈ Pk | π.f = n} for t = (f=n),
• t̂ = Pk − û for t = ¬u,
• t̂ = û1 ∪ û2 for t = u1 & u2, and
• t̂ = û1 ∩ û2 for t = u1 ; u2.
Proof. For drop, skip, and f=n, the claim holds trivially. For ¬t, t & u, and t ; u, the claim follows
inductively, using that D(f)(δ_b) = δ_{f(b)}, δ_b × δ_c = δ_{(b,c)}, and that f†(δ_b) = f(b). The first and last
equations hold because ⟨D, δ, −†⟩ is a monad. □
3.3 The CPO (D(2^Pk), ⊑)
The space 2Pk with the subset order forms a CPO (2Pk , ⊆). Following Saheb-Djahromi [27], this
CPO can be lifted to a CPO (D(2Pk ), ⊑) on distributions over 2Pk . Because 2Pk is a finite space, the
resulting ordering ⊑ on distributions takes a particularly easy form:
µ ⊑ ν  ⇐⇒  µ({a}↑) ≤ ν({a}↑) for all a ⊆ Pk
where {a}↑ ≜ {b | a ⊆ b} denotes upward closure. Intuitively, ν produces more outputs than µ. As
was shown in [31], ProbNetKAT satisfies various monotonicity (and continuity) properties with
respect to this ordering, including
a ⊆ a ′ =⇒ JpK(a) ⊑ JpK(a ′)
and
n ≤ m =⇒ Jp (n) K(a) ⊑ Jp (m) K(a).
As a result, the semantics of p ∗ as the supremum of its finite unrollings p (n) is well-defined.
While the semantics of full ProbNetKAT requires domain theory to give a satisfactory characterization of Kleene star, a simpler characterization suffices for the history-free fragment:
Lemma 3.3 (Pointwise Convergence). Let A ⊆ 2^Pk. Then for all programs p and inputs a ∈ 2^Pk,
Jp∗K(a)(A) = lim_{n→∞} Jp^(n)K(a)(A).
Proof. See Appendix A. □
This lemma crucially relies on our restrictions to dup-free programs and the space 2Pk . With
this insight, we can now move to a concrete semantics based on Markov chains, enabling effective
computation of program semantics.
4 BIG-STEP SEMANTICS
The Scott-style denotational semantics of ProbNetKAT interprets programs as Markov kernels
2Pk → D(2Pk ). Iteration is characterized in terms of approximations in a CPO (D(2Pk ), ⊑) of
distributions. In this section we relate this semantics to a Markov chain semantics on a state space
consisting of finitely many packets.
Since the set of packets Pk is finite, so is its powerset 2Pk . Thus any distribution over packet sets
is discrete and can be characterized by a probability mass function, i.e. a function
f : 2^Pk → [0, 1],    Σ_{b⊆Pk} f(b) = 1
It is convenient to view f as a stochastic vector, i.e. a vector of non-negative entries that sums to
1. The vector is indexed by packet sets b ⊆ Pk with b-th component f (b). A program, being a
function that maps inputs a to distributions over outputs, can then be organized as a square matrix
indexed by 2^Pk in which the stochastic vector corresponding to input a appears as the a-th row.
Thus we can interpret a program p as a matrix BJpK ∈ [0, 1]^(2^Pk × 2^Pk) indexed by packet sets,
where the matrix entry BJpKab denotes the probability that program p produces output b ∈ 2Pk
on input a ∈ 2Pk . The rows of BJpK are stochastic vectors, each encoding the output distribution
BJpK ∈ S(2^Pk)
BJdropKab ≜ 1[b = ∅]
BJskipKab ≜ 1[a = b]
BJf=nKab ≜ 1[b = {π ∈ a | π.f = n}]
BJf←nKab ≜ 1[b = {π[f := n] | π ∈ a}]
BJ¬tKab ≜ 1[b ⊆ a] · BJtK_{a,a−b}
BJp & qKab ≜ Σ_{c,d} 1[c ∪ d = b] · BJpK_{a,c} · BJqK_{a,d}
BJp ; qK ≜ BJpK · BJqK
BJp ⊕r qK ≜ r · BJpK + (1 − r) · BJqK
BJp∗Kab ≜ lim_{n→∞} BJp^(n)Kab
Fig. 3. Big-Step Semantics: BJpKab denotes the probability that program p produces output b on input a.
corresponding to a particular input set a. Such a matrix is called (right-)stochastic. We denote by
S(2Pk ) the set of right-stochastic square matrices indexed by 2Pk .
The interpretation of programs as stochastic matrices is largely straightforward and given
formally in Figure 3. At a high level, deterministic program primitives map to simple (0, 1)-matrices,
and program operators map to operations on matrices. For example, the program primitive drop is
interpreted as the stochastic matrix
∅ b2 ... bn
1
.. ..
BJdropK = . .
an 1
∅
0 · · · 0
.. . . ..
. .
.
0 · · · 0
a2
1
..
.
an
a1 = ∅
1
(1)
1
that moves all probability mass to the ∅-column, and the primitive skip is the identity matrix. The
formal definitions are given in Figure 3 using Iverson brackets: 1[φ] is defined to be 1 if φ is true,
or 0 otherwise.
As suggested by the picture in (1), a stochastic matrix B ∈ S(2Pk ) can be viewed as a Markov chain
(MC), a probabilistic transition system with state space 2Pk that makes a random transition between
states at each time step. The matrix entry Bab gives the probability that, whenever the system
is in state a, it transitions to state b in the next time step. Under this interpretation, sequential
composition becomes matrix product: a step from a to b in BJp ; qK decomposes into a step from a
to some intermediate state c in BJpK and a step from c to the final state b in BJqK with probability
BJp ; qKab = Σ_c BJpKac · BJqKcb = (BJpK · BJqK)ab .
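The matrix view can be prototyped in a few lines of numpy (a sketch of ours for a toy packet space; not the paper's symbolic, sparse implementation): deterministic primitives become (0, 1)-matrices, probabilistic choice a convex combination, and sequencing a matrix product.

    # Sketch (ours): big-step matrices over 2^Pk for a single field f ranging over {0, 1}.
    import itertools
    import numpy as np

    packets = [0, 1]                              # a packet is the value of the single field f
    states = [frozenset(s) for r in range(len(packets) + 1)
              for s in itertools.combinations(packets, r)]   # all subsets of Pk
    index = {s: i for i, s in enumerate(states)}

    def matrix(f):
        """Build the (0,1)-matrix of a deterministic function f : 2^Pk -> 2^Pk."""
        B = np.zeros((len(states), len(states)))
        for a in states:
            B[index[a], index[frozenset(f(a))]] = 1.0
        return B

    B_drop  = matrix(lambda a: frozenset())               # all mass on the empty set
    B_skip  = matrix(lambda a: a)                         # identity matrix
    B_test0 = matrix(lambda a: {p for p in a if p == 0})  # f=0
    B_mod1  = matrix(lambda a: {1 for p in a})            # f<-1

    B_choice = 0.5 * B_test0 + 0.5 * B_mod1               # (f=0) (+)_0.5 (f<-1)
    B_seq    = B_test0 @ B_mod1                           # (f=0) ; (f<-1)
    assert np.allclose(B_seq.sum(axis=1), 1.0)            # rows remain stochastic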
4.1 Soundness
The main theoretical result of this section is that the finite matrix BJpK fully characterizes the
behavior of a program p on packets.
Theorem 4.1 (Soundness). For any program p and any sets a, b ∈ 2Pk , BJp ∗ K is well-defined,
BJpK is a stochastic matrix, and BJpKab = JpK(a)({b}).
Proof. It suffices to show the equality BJpKab = JpK(a)({b}); the remaining claims then follow by
well-definedness of J−K. The equality is shown using Lemma 3.3 and a routine induction on p:
For p = drop, skip, f =n, f ←n we have
JpK(a)({b}) = δc ({b}) = 1[b = c] = BJpKab
for c = ∅, a, {π ∈ a | π .f = n}, {π [f := n] | π ∈ a}, respectively.
For ¬t we have
BJ¬tKab = 1[b ⊆ a] · BJtK_{a,a−b}
        = 1[b ⊆ a] · JtK(a)({a − b})                 (IH)
        = 1[b ⊆ a] · 1[a − b = a ∩ t̂]                (Lemma 3.2)
        = 1[b ⊆ a] · 1[a − b = a − (Pk − t̂)]
        = 1[b = a ∩ (Pk − t̂)]
        = J¬tK(a)({b})                                (Lemma 3.2)
For p & q, letting µ = JpK(a) and ν = JqK(a), we have
Jp & qK(a)({b}) = (µ × ν)({(b1, b2) | b1 ∪ b2 = b})
               = Σ_{b1,b2} 1[b1 ∪ b2 = b] · (µ × ν)({(b1, b2)})
               = Σ_{b1,b2} 1[b1 ∪ b2 = b] · µ({b1}) · ν({b2})
               = Σ_{b1,b2} 1[b1 ∪ b2 = b] · BJpK_{a,b1} · BJqK_{a,b2}            (IH)
               = BJp & qKab
where we use in the second step that b ⊆ Pk is finite, thus {(b1, b2) | b1 ∪ b2 = b} is finite.
For p ; q, let µ = JpK(a) and νc = JqK(c) and recall that µ is a discrete distribution on 2^Pk. Thus
Jp ; qK(a)({b}) = Σ_{c∈2^Pk} νc({b}) · µ({c}) = Σ_{c∈2^Pk} BJqK_{c,b} · BJpK_{a,c} = BJp ; qKab .
For p ⊕r q, the claim follows directly from the induction hypotheses.
Finally, for p∗, we know that BJp^(n)Kab = Jp^(n)K(a)({b}) by induction hypothesis. The key to
proving the claim is Lemma 3.3, which allows us to take the limit on both sides and deduce
BJp∗Kab = lim_{n→∞} BJp^(n)Kab = lim_{n→∞} Jp^(n)K(a)({b}) = Jp∗K(a)({b}). □
Together, these results reduce the problem of checking program equivalence for p and q to
checking equality of the matrices produced by the big-step semantics, BJpK and BJqK.
Corollary 4.2. For programs p and q, JpK = JqK if and only if BJpK = BJqK.
Proof. Follows directly from Theorem 4.1.
□
Unfortunately, BJp ∗ K is defined in terms of a limit. Thus, it is not obvious how to compute the
big-step matrix in general. The next section is concerned with finding a closed form for the limit,
resulting in a representation that can be effectively computed, as well as a decision procedure.
5 SMALL-STEP SEMANTICS
This section derives a closed form for BJp∗K, allowing us to compute BJ−K explicitly. This yields an
effective mechanism for checking program equivalence on packets.
In the “big-step” semantics for ProbNetKAT, programs are interpreted as Markov chains over
the state space 2Pk , such that a single step of the chain models the entire execution of a program,
going directly from some initial state a (corresponding to the set of input packets) to the final state
b (corresponding to the set of output packets). Here we will instead take a “small-step” approach
and design a Markov chain such that one transition models one iteration of p ∗ .
To a first approximation, the states (or configurations) of our probabilistic transition system are
triples ⟨p, a, b⟩, consisting of the program p we mean to execute, the current set of (input) packets
a, and an accumulator set b of packets output so far. The execution of p ∗ on input a ⊆ Pk starts
from the initial state ⟨p ∗ , a, ∅⟩. It proceeds by unrolling p ∗ according to the characteristic equation
[Figure 4: ⟨p∗, a, b⟩ —1→ ⟨skip & p ; p∗, a, b⟩ —1→ ⟨p ; p∗, a, b ∪ a⟩ —BJpK_{a,a′}→ ⟨p∗, a′, b ∪ a⟩]
Fig. 4. The small-step semantics is given by a Markov chain whose states are configurations of the form
⟨program, input set, output accumulator⟩. The three dashed arrows can be collapsed into the single solid arrow,
rendering the program component superfluous.
p ∗ ≡ skip & p ; p ∗ with probability 1:
⟨p∗, a, ∅⟩ —1→ ⟨skip & p ; p∗, a, ∅⟩
To execute a union of programs, we must execute both programs on the input set and take the
union of their results. In the case of skip & p ; p ∗ , we can immediately execute skip by outputting
the input set with probability 1, leaving the right hand side of the union:
⟨skip & p ; p∗, a, ∅⟩ —1→ ⟨p ; p∗, a, a⟩
To execute the sequence p ; p ∗ , we first execute p and then feed its (random) output into p ∗ :
∀a′ :    ⟨p ; p∗, a, a⟩ —BJpK_{a,a′}→ ⟨p∗, a′, a⟩
At this point the cycle closes and we are back to executing p ∗ , albeit with a different input set a ′
and some accumulated outputs. The structure of the resulting Markov chain is shown in Figure 4.
At this point we notice that the first two steps of execution are deterministic, and so we can
collapse all three steps into a single one, as illustrated in Figure 4. After this simplification, the
program component of the states is rendered obsolete since it remains constant across transitions.
Thus we can eliminate it, resulting in a Markov chain over the state space 2Pk × 2Pk . Formally, it
can be defined concisely as
SJpK ∈ S(2^Pk × 2^Pk),    SJpK_{(a,b),(a′,b′)} ≜ 1[b′ = b ∪ a] · BJpK_{a,a′}
As a first sanity check, we verify that the matrix SJpK indeed defines a Markov chain:
Lemma 5.1. SJpK is stochastic.
Proof. For arbitrary a, b ⊆ Pk, we have
Σ_{a′,b′} SJpK_{(a,b),(a′,b′)} = Σ_{a′,b′} 1[b′ = a ∪ b] · BJpK_{a,a′}
                              = Σ_{a′} Σ_{b′} 1[b′ = a ∪ b] · BJpK_{a,a′}
                              = Σ_{a′} BJpK_{a,a′} = 1
where, in the last step, we use that BJpK is stochastic (Theorem 4.1). □
Next, we show that steps in SJpK indeed model iterations of p ∗ . Formally, the (n + 1)-step of
SJpK is equivalent to the big-step behavior of the n-th unrolling of p ∗ in the following sense:
Proposition 5.2. BJp^(n)K_{a,b} = Σ_{a′} SJpK^{n+1}_{(a,∅),(a′,b)}
Proof. Naive induction on the number of steps n ≥ 0 fails, because the hypothesis is too weak.
We must first generalize it to apply to arbitrary start states in SJpK, not only those with empty
accumulator. The appropriate generalization of the claim turns out to be:
Lemma 5.3. Let p be a program. Then for all n ∈ N and a, b, b′ ⊆ Pk,
Σ_{a′} 1[b′ = a′ ∪ b] · BJp^(n)K_{a,a′} = Σ_{a′} SJpK^{n+1}_{(a,b),(a′,b′)}
Proof.
By induction on n ≥ 0. For n = 0, we have
Σ_{a′} 1[b′ = a′ ∪ b] · BJp^(0)K_{a,a′} = Σ_{a′} 1[b′ = a′ ∪ b] · BJskipK_{a,a′}
                                        = Σ_{a′} 1[b′ = a′ ∪ b] · 1[a = a′]
                                        = 1[b′ = a ∪ b]
                                        = 1[b′ = a ∪ b] · Σ_{a′} BJpK_{a,a′}
                                        = Σ_{a′} SJpK_{(a,b),(a′,b′)}
In the induction step (n > 0),
Σ_{a′} 1[b′ = a′ ∪ b] · BJp^(n)K_{a,a′}
  = Σ_{a′} 1[b′ = a′ ∪ b] · BJskip & p ; p^(n−1)K_{a,a′}
  = Σ_{a′} 1[b′ = a′ ∪ b] · Σ_c 1[a′ = a ∪ c] · BJp ; p^(n−1)K_{a,c}
  = Σ_c (Σ_{a′} 1[b′ = a′ ∪ b] · 1[a′ = a ∪ c]) · Σ_k BJpK_{a,k} · BJp^(n−1)K_{k,c}
  = Σ_{c,k} 1[b′ = a ∪ c ∪ b] · BJpK_{a,k} · BJp^(n−1)K_{k,c}
  = Σ_k BJpK_{a,k} · Σ_{a′} 1[b′ = a′ ∪ (a ∪ b)] · BJp^(n−1)K_{k,a′}
  = Σ_k BJpK_{a,k} · Σ_{a′} SJpK^n_{(k,a∪b),(a′,b′)}                              (IH)
  = Σ_{a′} Σ_{k1,k2} 1[k2 = a ∪ b] · BJpK_{a,k1} · SJpK^n_{(k1,k2),(a′,b′)}
  = Σ_{a′} Σ_{k1,k2} SJpK_{(a,b),(k1,k2)} · SJpK^n_{(k1,k2),(a′,b′)}
  = Σ_{a′} SJpK^{n+1}_{(a,b),(a′,b′)}                                             □
Proposition 5.2 then follows by instantiating Lemma 5.3 with b = ∅. □
5.1 Closed form
Let (an , bn ) denote the random state of the Markov chain SJpK after taking n steps starting from
(a, ∅). We are interested in the distribution of bn for n → ∞, since this is exactly the distribution
of outputs generated by p ∗ on input a (by Proposition 5.2 and the definition of BJp ∗ K). Intuitively,
the ∞-step behavior of SJpK is equivalent to the big-step behavior of p ∗ . The limiting behavior of
finite state Markov chains has been well-studied in the literature (e.g., see [16]), and we can exploit
these results to obtain a closed form by massaging SJpK into a so called absorbing Markov chain.
A state s of a Markov chain T is called absorbing if it transitions to itself with probability 1
(formally: T_{s,s′} = 1[s = s′]). A Markov chain T ∈ S(S) is called absorbing if each state can reach
an absorbing state:
∀s ∈ S. ∃s′ ∈ S, n ≥ 0. T^n_{s,s′} > 0 and T_{s′,s′} = 1
The non-absorbing states of an absorbing MC are called transient. Assume T is absorbing with nt
transient states and na absorbing states. After reordering the states so that absorbing states appear
before transient states, T has the form
T =  ( I   0 )
     ( R   Q )
where I is the na × na identity matrix, R is an nt × na matrix giving the probabilities of transient
states transitioning to absorbing states, and Q is an nt ×nt square matrix specifying the probabilities
of transient states transitioning to transient states. Absorbing states never transition to transient
states, thus the na × nt zero matrix in the upper right corner.
No matter the start state, a finite state absorbing MC always ends up in an absorbing state
eventually, i.e. the limit T ∞ ≜ limn→∞ T n exists and has the form
T∞ =  ( I   0 )
      ( A   0 )
for an nt × na matrix A of so called absorption probabilities, which can be given in closed form:
A = (I + Q + Q 2 + . . . ) R
That is, to transition from a transient state to an absorbing state, the MC can first take an arbitrary
number of steps between transient states, before taking a single and final step into an absorbing
state. The infinite sum X ≜ Σ_{n≥0} Q^n satisfies X = I + QX, and solving for X we get
X = (I − Q)^{−1}    and    A = (I − Q)^{−1} R                                     (2)
(We refer the reader to [16] or Lemma A.2 in Appendix A for the proof that the inverse must exist.)
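Equation (2) amounts to a single linear solve. The following numpy sketch (ours, with a made-up toy chain) computes the absorption probabilities for an absorbing Markov chain given in the block form above.

    # Sketch (ours): absorption probabilities A = (I - Q)^{-1} R.
    import numpy as np

    # toy chain: two transient states and two absorbing states
    Q = np.array([[0.2, 0.3],      # transient -> transient
                  [0.0, 0.5]])
    R = np.array([[0.5, 0.0],      # transient -> absorbing
                  [0.1, 0.4]])

    nt = Q.shape[0]
    A = np.linalg.solve(np.eye(nt) - Q, R)    # same as (I - Q)^{-1} R
    print(A)                                  # each row sums to 1: absorption is certain
    assert np.allclose(A.sum(axis=1), 1.0)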
Before we apply this theory to the small-step semantics SJ−K, it will be useful to introduce some
MC-specific notation. Let T be an MC. We write s →_T^n s′ if s can reach s′ in precisely n steps, i.e.
if T^n_{s,s′} > 0; and we write s →_T s′ if s can reach s′ in any number of steps, i.e. if T^n_{s,s′} > 0 for some
n ≥ 0. Two states are said to communicate, denoted s ↔_T s′, if s →_T s′ and s′ →_T s. The relation
↔_T is an equivalence relation, and its equivalence classes are called communication classes. A
communication class is called absorbing if it cannot reach any states outside the class. We sometimes
write Pr[s →_T^n s′] to denote the probability T^n_{s,s′}. For the rest of the section, we fix a program p and
abbreviate BJpK as B and SJpK as S.
Of central importance are what we will call the saturated states of S:
Definition 5.4. A state (a, b) of S is called saturated if the accumulator b has reached its final
value, i.e. if (a, b) →_S (a′, b′) implies b′ = b.
Once we have reached a saturated state, the output of p ∗ is determined. The probability of ending
up in a saturated state with accumulator b, starting from an initial state (a, ∅), is
lim_{n→∞} Σ_{a′} S^n_{(a,∅),(a′,b)}
and indeed this is the probability that p∗ outputs b on input a by Proposition 5.2. Unfortunately, a
saturated state is not necessarily absorbing. To see this, assume there exists only a single field f
ranging over {0, 1} and consider the program p∗ = (f←0 ⊕1/2 f←1)∗. Then S has the form of a chain
over the states (0, ∅), (0, 0), (1, 0), (0, {0, 1}), and (1, {0, 1}), where all edges are implicitly labeled
with 1/2, 0 denotes the packet with f set to 0 and 1 denotes the packet with f set to 1, and we omit
states not reachable from (0, ∅). The two states with accumulator {0, 1} are saturated; but they
communicate and are thus not absorbing.
We can fix this by defining the auxiliary matrix U ∈ S(2Pk × 2Pk ) as
U_{(a,b),(a′,b′)} ≜ 1[b′ = b] · ( 1[a′ = ∅] if (a, b) is saturated, and 1[a′ = a] otherwise )
It sends a saturated state (a, b) to the canonical saturated state (∅, b), which is always absorbing;
and it acts as the identity on all other states. In our example, the modified chain SU looks as follows:
[Diagram: the chain from above, except that the two saturated states (0, {0, 1}) and (1, {0, 1}) now
transition with probability 1 to the new absorbing state (∅, {0, 1}).]
To show that SU is always an absorbing MC, we first observe:
Lemma 5.5. S, U, and SU are monotone in the following sense: (a, b) →_S (a′, b′) implies b ⊆ b′
(and similarly for U and SU).
Proof. For S and U the claim follows directly from their definitions. For SU the claim then follows
compositionally. □
Now we can show:
Proposition 5.6. Let n ≥ 1.
(1) (SU)^n = S^n U
(2) SU is an absorbing MC with absorbing states {(∅, b) | b ⊆ Pk}.
Proof.
(1) It suffices to show that U SU = SU. Suppose that Pr[(a, b) →_{USU}^1 (a′, b′)] = p > 0. It suffices
to show that this implies Pr[(a, b) →_{SU}^1 (a′, b′)] = p. If (a, b) is saturated, then we must have
(a′, b′) = (∅, b) and
Pr[(a, b) →_{USU}^1 (∅, b)] = 1 = Pr[(a, b) →_{SU}^1 (∅, b)]
If (a, b) is not saturated, then (a, b) →_U^1 (a, b) with probability 1 and therefore
Pr[(a, b) →_{USU}^1 (a′, b′)] = Pr[(a, b) →_{SU}^1 (a′, b′)]
(2) Since S and U are stochastic, clearly SU is an MC. Since SU is finite state, any state can reach an
absorbing communication class. (To see this, note that the reachability relation →_{SU} induces
a partial order on the communication classes of SU. Its maximal elements are necessarily
absorbing, and they must exist because the state space is finite.) It thus suffices to show that
a state set C ⊆ 2^Pk × 2^Pk in SU is an absorbing communication class iff C = {(∅, b)} for some
b ⊆ Pk.
“⇐”: Observe that ∅ →_B^1 a′ iff a′ = ∅. Thus (∅, b) →_S^1 (a′, b′) iff a′ = ∅ and b′ = b, and likewise
(∅, b) →_U^1 (a′, b′) iff a′ = ∅ and b′ = b. Thus (∅, b) is an absorbing state in SU as required.
“⇒”: First observe that by monotonicity of SU (Lemma 5.5), we have b = b′ whenever (a, b) ↔_{SU}
(a′, b′); thus there exists a fixed bC such that (a, b) ∈ C implies b = bC.
Now pick an arbitrary state (a, bC) ∈ C. It suffices to show that (a, bC) →_{SU} (∅, bC), because
that implies (a, bC) ↔_{SU} (∅, bC), which in turn implies a = ∅. But the choice of (a, bC) ∈ C
was arbitrary, so that would mean C = {(∅, bC)} as claimed.
To show that (a, bC) →_{SU} (∅, bC), pick arbitrary states such that
(a, bC) →_S (a′, b′) →_U^1 (a′′, b′′)
and recall that this implies (a, bC) →_{SU} (a′′, b′′) by claim (1). Then (a′′, b′′) →_{SU} (a, bC)
because C is absorbing, and thus bC = b′ = b′′ by monotonicity of S, U, and SU. But (a′, b′)
was chosen as an arbitrary state S-reachable from (a, bC), so (a, bC) and by transitivity (a′, b′)
must be saturated. Thus a′′ = ∅ by the definition of U. □
Arranging the states (a, b) in lexicographically ascending order according to ⊆ and letting
n = |2^Pk|, it then follows from Proposition 5.6.2 that SU has the form
SU =  ( I_n   0 )
      ( R     Q )
where the rows of R and Q correspond to the states (a, b) with a ≠ ∅. Moreover, SU converges and
its limit is given by
(SU)^∞ ≜ lim_{n→∞} (SU)^n =  ( I_n              0 )
                              ( (I − Q)^{−1} R   0 )                               (3)
We can use the modified Markov chain SU to compute the limit of S:
Theorem 5.7 (Closed Form). Let a, b, b′ ⊆ Pk. Then
lim_{n→∞} Σ_{a′} S^n_{(a,b),(a′,b′)} = (SU)^∞_{(a,b),(∅,b′)}                       (4)
or, using matrix notation,
lim_{n→∞} Σ_{a′} S^n_{(−,−),(a′,−)} =  ( I_n             )  ∈ [0, 1]^{(2^Pk × 2^Pk) × 2^Pk}      (5)
                                        ( (I − Q)^{−1} R  )
In particular, the limit in (4) exists and it can be effectively computed in closed-form.
Proof. Using Proposition 5.6.1 in the second step and equation (3) in the last step,
lim_{n→∞} Σ_{a′} S^n_{(a,b),(a′,b′)} = lim_{n→∞} Σ_{a′} (S^n U)_{(a,b),(a′,b′)}
                                     = lim_{n→∞} Σ_{a′} (SU)^n_{(a,b),(a′,b′)}
                                     = Σ_{a′} (SU)^∞_{(a,b),(a′,b′)} = (SU)^∞_{(a,b),(∅,b′)}
(SU)^∞ is computable because S and U are matrices over Q and hence so is (I − Q)^{−1} R. □
Corollary 5.8. For programs p and q, it is decidable whether p ≡ q.
Proof. Recall from Corollary 4.2 that it suffices to compute the finite rational matrices BJpK and
BJqK and check them for equality. But Theorem 5.7 together with Proposition 5.2 gives us an
effective mechanism to compute BJ−K in the case of Kleene star, and BJ−K is straightforward to
compute in all other cases.
To summarize, we repeat the full chain of equalities we have deduced:
Jp∗K(a)({b}) = BJp∗K_{a,b} = lim_{n→∞} BJp^(n)K_{a,b} = lim_{n→∞} Σ_{a′} SJpK^n_{(a,∅),(a′,b)} = (SU)^∞_{(a,∅),(∅,b)}
(From left to right: Theorem 4.1, the definition of BJ−K, Proposition 5.2, and Theorem 5.7.) □
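To make the whole pipeline concrete, here is a self-contained numpy sketch (ours, not the OCaml prototype) that carries out exactly this chain of equalities for the example program p = (f←0 ⊕1/2 f←1) over a single bit-valued field: it builds B, the small-step chain S, the matrix U that redirects saturated states, and reads BJp∗K off the closed form.

    # Sketch (ours): computing B[[p*]] for p = (f<-0 (+)_1/2 f<-1) via S, U and eq. (2).
    import itertools
    import numpy as np

    packets = [0, 1]                     # a packet is just the value of the single field f
    sets = [frozenset(s) for r in range(3) for s in itertools.combinations(packets, r)]
    idx = {s: i for i, s in enumerate(sets)}

    # big-step matrix of p: any non-empty input becomes {0} or {1} with probability 1/2
    B = np.zeros((len(sets), len(sets)))
    for a in sets:
        if a:
            B[idx[a], idx[frozenset({0})]] += 0.5
            B[idx[a], idx[frozenset({1})]] += 0.5
        else:
            B[idx[a], idx[a]] = 1.0      # no packets in, no packets out

    # small-step chain S over pairs (a, b): S[(a,b),(a',b')] = [b' = b u a] * B[a,a']
    pairs = [(a, b) for a in sets for b in sets]
    pidx = {ab: i for i, ab in enumerate(pairs)}
    S = np.zeros((len(pairs), len(pairs)))
    for a, b in pairs:
        for a2 in sets:
            S[pidx[(a, b)], pidx[(a2, b | a)]] += B[idx[a], idx[a2]]

    def saturated(a, b):
        # (a, b) is saturated iff no state with a different accumulator is reachable
        seen, todo = set(), [(a, b)]
        while todo:
            a1, b1 = todo.pop()
            if (a1, b1) in seen:
                continue
            seen.add((a1, b1))
            if b1 != b:
                return False
            todo += [(a2, b1 | a1) for a2 in sets if B[idx[a1], idx[a2]] > 0]
        return True

    # U sends saturated states to the canonical absorbing state (empty, b)
    U = np.zeros_like(S)
    for a, b in pairs:
        tgt = (frozenset(), b) if saturated(a, b) else (a, b)
        U[pidx[(a, b)], pidx[tgt]] = 1.0

    SU = S @ U
    absorbing = [i for i, (a, b) in enumerate(pairs) if not a]   # the states (empty, b)
    transient = [i for i in range(len(pairs)) if i not in absorbing]
    Q = SU[np.ix_(transient, transient)]
    R = SU[np.ix_(transient, absorbing)]
    A = np.linalg.solve(np.eye(len(transient)) - Q, R)          # (I - Q)^{-1} R

    # B[[p*]]_{a,b} = probability of absorption in (empty, b) starting from (a, empty)
    start = transient.index(pidx[(frozenset({0}), frozenset())])
    for j, i in enumerate(absorbing):
        if A[start, j] > 1e-9:
            print(sorted(pairs[i][1]), A[start, j])              # expect [0, 1] 1.0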
6 CASE STUDY: RESILIENT ROUTING
We have built a prototype based on Theorem 5.7 and Corollary 5.8 in OCaml. It implements
ProbNetKAT as an embedded DSL and compiles ProbNetKAT programs to transition matrices using
symbolic techniques and a sparse linear algebra solver. A detailed description and performance
evaluation of the implementation is beyond the scope of this paper. Here we focus on demonstrating
the utility of such a tool by performing a case study with real-world datacenter topologies and
resilient routing schemes.
Recently proposed datacenter designs [1, 13, 14, 21, 24, 29] utilize a large number of inexpensive
commodity switches, which improves scalability and reduces cost compared to other approaches.
However, relying on many commodity devices also increases the probability of failures. A recent
measurement study showed that network failures in datacenters [10] can have a major impact on
application-level performance, leading to a new line of work exploring the design of fault-tolerant
datacenter fabrics. Typically the topology and routing scheme are co-designed, to achieve good
resilience while still providing good performance in terms of throughput and latency.
6.1 Topology and routing
Datacenter topologies typically organize the fabric into multiple levels of switches.
FatTree. A FatTree [1], which is a multi-level, multi-rooted tree, is perhaps the most common
example of such a topology. Figure 5 shows a 3-level FatTree topology with 20 switches. The bottom
level, edge, consists of top-of-rack (ToR) switches; each ToR switch connects all the hosts within a
rack (not shown in the figure). These switches act as ingress and egress for intra-datacenter traffic.
[Figure: a three-level topology with a core layer (switch C), an aggregation layer (switches A, A′, A′′),
and an edge layer (switches s1–s8); crosses mark the failures referenced in the text.]
Fig. 5. A FatTree topology with 20 switches.
[Figure: the same 20 switches rewired into type A and type B subtrees, with edge switches s1–s8.]
Fig. 6. An AB FatTree topology with 20 switches.
The other two levels, aggregation and core, redundantly interconnect the switches from the edge
layer.
The redundant structure of a FatTree naturally lends itself to forwarding schemes that locally
route around failures. To illustrate, consider routing from a source (s7) to a destination (s1) along
shortest paths in the example topology. Packets are first forwarded upwards, until eventually there
exists a downward path to s1. The green links in the figure depict one such path. On the way up,
there are multiple paths at each switch that can be used to forward traffic. Thus, we can route
around failures by simply choosing an alternate upward link. A common routing scheme is called
equal-cost multi-path routing (ECMP) in the literature, because it chooses between several paths
all having the same cost—e.g., path length. ECMP is especially attractive as it can provide better
resilience without increasing the lengths of forwarding paths.
However, after reaching a core switch, there is a unique shortest path to the destination, so
ECMP no longer provides any resilience if a switch fails in the aggregation layer (cf. the red cross
in Figure 5). A more sophisticated scheme could take a longer (5-hop) detour going all the way to
another edge switch, as shown by the red lines in the figure. Unfortunately, such detours inflate
the path length and lead to increased latency and congestion.
AB FatTree. FatTree’s unpleasantly long backup routes on the downward paths are caused by
the symmetric wiring of aggregation and core switches. AB FatTrees [21] alleviate this flaw by
skewing the symmetry of the wiring. The design defines two types of subtrees, differing in their wiring to higher
levels. To illustrate, Figure 6 shows an example which rewires the FatTree from Figure 5 to make it
an AB FatTree. It contains two types of subtrees:
i) Type A: switches depicted in blue and wired to core using dashed lines, and
ii) Type B: switches depicted in red and wired to core using solid lines.
Type A subtrees are wired in a way similar to FatTree, but type B subtrees differ in their connections
to core switches (see the original paper for full details [21]).
This slight change in wiring enables shorter detours to route around failures in the downward
direction. Consider again a flow involving the same source (s7) and destination (s1). As before, we
have multiple options going upwards when following shortest paths (e.g., the one depicted in green),
but we have a unique downward path once we reach the top. But unlike FatTree, if the aggregation
switch on the downward path fails, we find that there is a short (3-hop) detour, as shown in blue.
This backup path exists because the core switch, which needs to reroute traffic, is connected to
aggregation switches of both types of subtrees. More generally, aggregation switches of the same
type as the failed switch provide a 5-hop detour (as in a standard FatTree); but aggregation switches
of the opposite type can provide a more efficient 3-hop detour.
// F10 without rerouting
f10_0 :=
  // ECMP, but don't use inport
  fwd_on_random_shortest_path

// F10 with 3-hop rerouting
f10_3 :=
  f10_0;
  if at_down_port then 3hop_rr

// F10 with 3-hop & 5-hop rerouting
f10_3_5 :=
  if at_ingress then (default <- 1);
  if default = 1 then (
    f10_3;
    if at_down_port then (5hop_rr; default <- 0)
  ) else (
    default <- 1; // back to default forwarding
    fwd_downward_uniformly_at_random
  )

Fig. 7. ProbNetKAT implementation of F10 in three refinement steps.
6.2 ProbNetKAT implementation
Now we will see how to encode several routing schemes using ProbNetKAT and analyze their
behavior in each topology under various failure models.
Routing. F10 [21] provides a routing algorithm that combines the three routing and rerouting
strategies we just discussed (ECMP, 3-hop rerouting, 5-hop rerouting) into a single scheme. We
implemented it in three steps (see Figure 7). The first scheme, F10_0, implements an approach similar
to ECMP:⁴ it chooses a port uniformly at random from the set of ports connected to minimum-length
paths to the destination. We exclude the port at which the packet arrived from this set; this
eliminates the possibility of forwarding loops when routing around failures.
Next, we improve the resilience of F100 by augmenting it with 3-hop rerouting if the next hop
aggregation switch A along the downward shortest path from a core switch C fails. To illustrate,
consider the blue path in Figure 6. We find a port on C that connects to an aggregation switch
A′ of the opposite type than the failed aggregation switch, A, and forward the packet to A′. If
there are multiple such ports that have not failed, we choose one uniformly at random. Normal
routing continues at A′, and ECMP will know not to send the packet back to C. F103 implements
this refinement.
Note that if the packet is still stuck at a port whose adjacent link is down after executing F103, it
must be that all ports connecting to aggregation switches of the opposite type are down. In this
case, we attempt 5-hop rerouting via an aggregation switch A′′ of the same type as A. To illustrate,
consider the red path in Figure 6. We begin by sending the packet to A′′. To let A′′ know that
it should not send the packet back up as normally, we set a flag default to false in the packet,
telling A′′ to send the packet further down instead. From there, default routing continues. F103,5
implements this refinement.
Failure and Network model. We define a family of failure models f_k^p in the style of Section 2. Let k ∈ N ∪ {∞} denote a bound on the maximum number of link failures that may occur simultaneously, and assume that links otherwise fail independently with probability 0 ≤ p < 1 each. We omit p
when it is clear from context. For simplicity, to focus on the more complicated scenarios occurring
on downward paths, we will model failures only for links connecting the aggregation and core
layer.
Our network model works much like the one from Section 2. However, we model a single
destination, switch 1, and we elide the final hop to the appropriate host connected to this switch.
M(p, t) ≜ in ; do (p ; t) while (¬sw=1)
4 ECMP implementations are usually based on hashing, which approximates random forwarding provided there is sufficient entropy in the header fields used to select an outgoing port.
k    M̂(F100, t, fk) ≡ teleport    M̂(F103, t, fk) ≡ teleport    M̂(F103,5, t, fk) ≡ teleport
0              ✓                             ✓                              ✓
1              ✗                             ✓                              ✓
2              ✗                             ✓                              ✓
3              ✗                             ✗                              ✓
4              ✗                             ✗                              ✗
∞              ✗                             ✗                              ✗

Table 1. Evaluating k-resilience of F10.
k    compare(F100, F103)    compare(F103, F103,5)    compare(F103,5, teleport)
0            ≡                       ≡                         ≡
1            <                       ≡                         ≡
2            <                       ≡                         ≡
3            <                       <                         ≡
4            <                       <                         <
∞            <                       <                         <

Table 2. Comparing schemes under k failures.
The ingress predicate in is a disjunction of switch-and-port tests over all ingress locations. This first
model is embedded into a refined model M̂(p, t, f) that integrates the failure model and declares all necessary local variables that track the healthiness of individual ports:

  M̂(p, t, f) ≜ var up1←1 in
               ...
               var upd←1 in
               M((f ; p), t)

Here d denotes the maximum degree of all nodes in the FatTree and AB FatTree topologies from Figures 5 and 6, which we encode as programs fattree and abfattree, much like in Section 2.2.
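As a concrete illustration of the failure model, the following Python sketch samples a failure pattern for the aggregation-core links under one plausible reading of f_k^p: each link fails independently with probability p, conditioned on at most k simultaneous failures. The function and link names are ours and are not part of the ProbNetKAT implementation.

import random

def sample_failures(links, p, k=float("inf")):
    # Each link fails independently with probability p; rejection sampling
    # conditions on at most k simultaneous failures (our reading of f_k^p).
    while True:
        failed = {l for l in links if random.random() < p}
        if len(failed) <= k:
            return failed

# Hypothetical aggregation-core links, named (aggregation switch, core switch).
agg_core_links = [(a, c) for a in ["a1", "a2", "a3", "a4"] for c in ["c1", "c2"]]
print(sample_failures(agg_core_links, p=1/8, k=2))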
6.3 Checking invariants
We can gain confidence in the correctness of our implementation of F10 by verifying that it
maintains certain key invariants. As an example, recall our implementation of F103,5 : when we
perform 5-hop rerouting, we use an extra bit (default) to notify the next hop aggregation switch to
forward the packet downwards instead of performing default forwarding. The next hop follows
this instruction and also sets default back to 1. By design, the packet cannot be delivered to the
destination with default set to 0.
To verify this property, we check the following equivalence:
∀t, k : M̂(F103,5, t, fk) ≡ M̂(F103,5, t, fk) ; default=1
We executed the check using our implementation for k ∈ {0, 1, 2, 3, 4, ∞} and t ∈ {fattree, abfattree}.
As discussed below, we actually failed to implement this feature correctly on our first attempt due
to a subtle bug—we neglected to initialize the default flag to 1 at the ingress.
6.4 F10 routing with FatTree
We previously saw that the structure of FatTree doesn’t allow 3-hop rerouting on failures because
all subtrees are of the same type. This would mean that augmenting ECMP with 3-hop rerouting
should have no effect, i.e. 3-hop rerouting should never kick in and act as a no-op. To verify this,
we can check the following equivalence:
∀k : M̂(F100, fattree, fk) ≡ M̂(F103, fattree, fk)
We have used our implementation to check that this equivalence indeed holds for k ∈ {0, 1, 2, 3, 4, ∞}.
[Plot omitted: Pr[delivery] (0.80–1.00) vs. link failure probability (1/128 to 1/4); curves for AB FatTree with F10 no rerouting, 3-hop rerouting, and 3+5-hop rerouting, and FatTree with F10 3+5-hop rerouting.]
Fig. 8. Probability of delivery vs. link-failure probability (k = ∞).
6.5 Refinement
Recall that we implemented F10 in three stages. We started with a basic routing scheme (F100 )
based on ECMP that provides resilience on the upward path, but no rerouting capabilities on
the downward paths. We then augmented this scheme by adding 3-hop rerouting to obtain F103 ,
which can route around certain failures in the aggregation layer. Finally, we added 5-hop rerouting
to address failure cases that 3-hop rerouting cannot handle, obtaining F103,5 . Hence, we would
expect the probability of packet delivery to increase with each refinement of our routing scheme.
Additionally, we expect all schemes to deliver packets and drop packets with some probability
under the unbounded failure model. These observations are summarized by the following ordering:
drop < M̂(F100, t, f∞) < M̂(F103, t, f∞) < M̂(F103,5, t, f∞) < teleport
where t = abfattree and teleport ≜ sw←1. To our surprise, we were not able to verify this property
initially, as our implementation indicated that the ordering
M̂(F103, t, f∞) < M̂(F103,5, t, f∞)
was violated. We then added a capability to our implementation to obtain counterexamples, and
found that F103 performed better than F103,5 for packets π with π.default = 0. We were missing
the first line in our implementation of F103,5 (cf., Figure 7) that initializes the default bit to 1 at the
ingress, causing packets to be dropped! After fixing the bug, we were able to confirm the expected
ordering.
6.6 k-resilience
We saw that there exists a strict ordering in terms of resilience for F100 , F103 and F103,5 when an
unbounded number of failures can happen. Another interesting way of measuring resilience is to
count the minimum number of failures at which a scheme fails to guarantee 100% delivery. Using
ProbNetKAT, we can measure this resilience by setting k in fk to increasing values and checking
equivalence with teleportation. Table 1 shows the results based on our decision procedure for the
AB FatTree topology from Figure 6.
The naive scheme, F100 , which does not perform any rerouting, drops packets when a failure
occurs on the downward path. Thus, it is 0-resilient. In the example topology, 3-hop rerouting
[Plot omitted: Pr[hop count ≤ x] (0.6–1.0) vs. hop count (2 to 14); curves for AB FatTree with F10 no rerouting, 3-hop rerouting, and 3+5-hop rerouting, and FatTree with F10 3+5-hop rerouting.]
Fig. 9. Increased latency due to resilience (k = ∞, p = 1/4).
has two possible ways to reroute for the given failure. Even if only one of the type B subtrees is
reachable, F103 can still forward traffic. However, if both the type B subtrees are unreachable, then
F103 will not be able to reroute traffic. Thus, F103 is 2-resilient. Similarly, F103,5 can route as long as
any aggregation switch is reachable from the core switch. For F103,5 to fail, the core switch would
need to be disconnected from all four aggregation switches. Hence it is 3-resilient. In cases where
schemes are not equivalent to teleport, we can characterize the relative robustness by computing
the ordering, as shown in Table 2.
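A brute-force way to read off k-resilience numbers like those in Table 1 is to enumerate failure sets of increasing size and test whether delivery is still guaranteed. The sketch below is our own illustration against a user-supplied delivery predicate; the paper instead decides this by checking equivalence with teleportation using the decision procedure.

from itertools import combinations

def resilience(links, delivers):
    # Largest k such that delivery is guaranteed under every set of at
    # most k failed links (the paper's notion of k-resilience).
    k = 0
    while k < len(links):
        if all(delivers(set(f)) for f in combinations(links, k + 1)):
            k += 1
        else:
            break
    return k

# Toy example (not the paper's topology): a destination reachable over two
# parallel links; delivery succeeds as long as at least one link is up.
links = ["l1", "l2"]
print(resilience(links, lambda failed: failed != {"l1", "l2"}))  # -> 1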
6.7 Resilience under increasing failure rate
We can also do more quantitative analyses such as evaluating the effect of increasing the failure probability
of links on the probability of packet delivery. Figure 8 shows this analysis in a failure model in which
an unbounded number of failures can occur simultaneously. We find that F100 ’s delivery probability
dips significantly as the failure probability increases because F100 is not resilient to failures. In
contrast, both F103 and F103,5 continue to ensure high probability of delivery by rerouting around
failures.
6.8 Cost of resilience
By augmenting naive routing schemes with rerouting mechanisms, we are able to achieve a higher
degree of resilience. But this benefit comes at a cost. The detours taken to reroute traffic increase
the latency (hop count) for packets. ProbNetKAT enables quantifying this increase in latency by
augmenting our model with a counter that gets increased at every hop. Figure 9 shows the CDF
of latency as the fraction of traffic delivered within a given hop count. On AB FatTree, we find
that F100 delivers as much traffic as it can (≈80%) within a hop count ≤ 4 because the maximum
length of a shortest path from any edge switch to s1 is 4 and F100 does not use any longer paths.
F103 and F103,5 deliver the same amount of traffic with hop count ≤ 4. But, with 2 additional hops,
they are able to deliver significantly more traffic because they perform 3-hop rerouting to handle
certain failures. With 4 additional hops, F103,5 ’s throughput increases as 5-hop rerouting helps. We
find that F103 also delivers more traffic with 8 hops—these are the cases when F103 performs 3-hop
rerouting twice for a single packet as it encountered failure twice. Similarly, we see small increases
[Plot omitted: E[hop count | delivered] (3.6–4.8) vs. link failure probability (1/128 to 1/4); curves for AB FatTree with F10 no rerouting, 3-hop rerouting, and 3+5-hop rerouting, and FatTree with F10 3+5-hop rerouting.]
Fig. 10. Expected hop-count conditioned on delivery (k = ∞).
in throughput for higher hop counts. We find that F103,5 improves resilience for FatTree too, but
the impact on latency is significantly higher as FatTree does not support 3-hop rerouting.
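For intuition, the kind of latency CDF shown in Figure 9 can be assembled from per-packet outcomes (delivered or not, and at what hop count). The sketch below is a toy Monte Carlo stand-in with an invented sampler; in the paper the distribution is computed exactly from the Markov chain semantics rather than by sampling.

import random
from collections import Counter

def hop_count_cdf(samples):
    # samples: list of (delivered, hops). Returns {x: fraction of all
    # packets delivered within at most x hops}, as plotted in Fig. 9.
    n = len(samples)
    hops = Counter(h for ok, h in samples if ok)
    cdf, acc = {}, 0
    for x in range(0, max(hops, default=0) + 1):
        acc += hops.get(x, 0)
        cdf[x] = acc / n
    return cdf

# Toy stand-in: deliver with probability 0.9; a detour adds 2 hops w.p. 0.3.
def toy_sample():
    delivered = random.random() < 0.9
    hops = 4 + (2 if random.random() < 0.3 else 0)
    return delivered, hops

print(hop_count_cdf([toy_sample() for _ in range(10000)]))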
6.9 Expected latency
Figure 10 shows the expected hop-count of paths taken by packets conditioned on their delivery.
Both F103 and F103,5 deliver packets with high probability even at high failure probabilities, as we
saw in Figure 8. However, a higher probability of link-failure implies that it becomes more likely
for these schemes to invoke rerouting, which increases hop count. Hence, we see the increase in
expected hop-count as failure probability increases. F103,5 uses 5-hop rerouting to achieve more
resilience compared to F103 , which performs only 3-hop rerouting, and this leads to slightly higher
expected hop-count for F103,5 . We see that the increase is more significant for FatTree in contrast
to AB FatTree because FatTree only supports 5-hop rerouting.
As the failure probability increases, the probability of delivery for packets that are routed via
the core layer decreases significantly for F100 (recall Figure 8). Thus, the distribution of delivered
packets shifts towards those with direct 2-hop path via an aggregation switch (such as packets
from s2 to s1), and hence the expected hop-count decreases slightly.
6.10 Discussion
As this case study of resilient routing in datacenters shows, the stochastic matrix representation of
ProbNetKAT programs and accompanying decision procedure enable us to answer a wide variety of
questions about probabilistic networks completely automatically. These new capabilities represent
a significant advance over current network verification tools, which are based on deterministic
packet-forwarding models [9, 15, 17, 22].
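For readers wanting a sense of what the equivalence check boils down to once the hard part is done: after two history-free programs have been compiled to finite stochastic matrices over packet-set states (the representation this section refers to), deciding equivalence is an entrywise comparison. The sketch below assumes those matrices are already given; the names B_p and B_q are ours.

import numpy as np
from fractions import Fraction

def equivalent(B_p, B_q, exact=True):
    # Compare two programs' stochastic matrices over the finite space of
    # packet sets; exact rational entries give an exact decision.
    if exact:
        return bool((B_p == B_q).all())
    return np.allclose(B_p, B_q)

half = Fraction(1, 2)
B_p = np.array([[half, half], [0, 1]], dtype=object)
B_q = np.array([[half, half], [0, 1]], dtype=object)
print(equivalent(B_p, B_q))  # True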
7 DECIDING FULL PROBNETKAT: OBSTACLES AND CHALLENGES
As we have just seen, history-free ProbNetKAT can describe sophisticated network routing schemes
under various failure models, and program equivalence for the language is decidable. However, it
is less expressive than the original ProbNetKAT language, which includes an additional primitive
dup. Intuitively, this command duplicates a packet π ∈ Pk and outputs the word ππ ∈ H, where
H = Pk∗ is the set of non-empty, finite sequences of packets. An element of H is called a packet
history, representing a log of previous packet states. ProbNetKAT policies may only modify the
first (head) packet of each history; dup fixes the current head packet into the log by copying it. In
this way, ProbNetKAT policies can compute distributions over the paths used to forward packets,
instead of just over the final output packets.
However, with dup, the semantics of ProbNetKAT becomes significantly more complex. Policies
p now transform sets of packet histories a ∈ 2H to distributions JpK(a) ∈ D(2H ). Since 2H is
uncountable, these distributions are no longer guaranteed to be discrete, and formalizing the
semantics requires full-blown measure theory (see prior work for details [31]).
Deciding program equivalence also becomes more challenging. Without dup, policies operate on
sets of packets 2Pk ; crucially, this is a finite set and we can represent each set with a single state in
a finite Markov chain. With dup, policies operate on sets of packet histories 2H . Since this set is
not finite—in fact, it is not even countable—encoding each packet history as a state would give a
Markov chain with infinitely many states. Procedures for deciding equivalence are not known for
such systems.
While in principle there could be a more compact representation of general ProbNetKAT policies
as finite Markov chains or other models where equivalence is decidable (e.g., weighted or probabilistic automata [7] or quantitative variants of regular expressions [2]), we suspect that deciding
equivalence in the presence of dup is intractable. As evidence in support of this conjecture, we
show that ProbNetKAT policies can simulate the following kind of probabilistic automata. This
model appears to be new, and may be of independent interest.
Definition 7.1. Let A be a finite alphabet. A 2-generative probabilistic automaton is defined by a
tuple (S, s 0 , ρ, τ ) where S is a finite set of states; s 0 ∈ S is the initial state; ρ : S → (A ∪ {_})2 maps
each state to a pair of letters (u, v), where either u or v may be a special blank character _; and the
transition function τ : S → D(S) gives the probability of transitioning from one state to another.
The semantics of an automaton can be defined as a probability measure on the space A∞ × A∞ ,
where A∞ is the set of finite and (countably) infinite words over the alphabet A. Roughly, these
measures are fully determined by the probabilities of producing any two finite prefixes of words
(w, w ′) ∈ A∗ × A∗ .
Presenting the formal semantics would require more concepts from measure theory and take us
far afield, but the basic idea is simple to describe. An infinite trace of a 2-generative automaton over
states s 0 , s 1 , s 2 , . . . gives a sequence of pairs of (possibly blank) letters:
ρ(s 0 ), ρ(s 1 ), ρ(s 2 ) . . .
By concatenating these pairs together and dropping all blank characters, a trace induces two (finite
or infinite) words over the alphabet A. For example, the sequence,
(a 0 , _), (a 1 , _), (_, a 2 ), . . .
gives the words a 0a 1 . . . and a 2 . . . . Since the traces are generated by the probabilistic transition
function τ , each automaton gives rise to a probability measure over pairs of words.
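To make the model concrete, here is a small sampler for a 2-generative automaton, following Definition 7.1; the particular states, outputs, and probabilities below are ours, chosen only for illustration.

import random

states = ["s1", "s2"]
rho = {"s1": ("a", "_"), "s2": ("_", "b")}                          # output map
tau = {"s1": {"s1": 0.5, "s2": 0.5}, "s2": {"s1": 0.9, "s2": 0.1}}  # transitions

def sample_prefixes(steps, s="s1"):
    # Run `steps` transitions; concatenate the two output components and
    # drop blanks, yielding a pair of finite word prefixes.
    w1, w2 = [], []
    for _ in range(steps):
        u, v = rho[s]
        if u != "_":
            w1.append(u)
        if v != "_":
            w2.append(v)
        s = random.choices(list(tau[s]), weights=list(tau[s].values()))[0]
    return "".join(w1), "".join(w2)

print(sample_prefixes(20))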
While we have no formal proof of hardness, deciding equivalence between these automata
appears highly challenging. In the special case where only one word is generated (say, when the
second component produced is always blank), these automata are equivalent to standard automata
with ε-transitions (e.g., see [23]). In the standard setting, non-productive steps can be eliminated
and the automata can be modeled as a finite state Markov chain, where equivalence is decidable.
In our setting, however, steps producing blank letters in one component may produce non-blank
letters in the other. As a result, it is not entirely clear how to eliminate these steps and encode our
automata as a Markov chain.
Returning to ProbNetKAT, 2-generative automata can be encoded as policies with dup. We
sketch the idea here, deferring further details to Appendix B. Suppose we are given an automaton
(S, s 0 , ρ, τ ). We build a ProbNetKAT policy over packets with two fields, st and id. The first field st
ranges over the states S and the alphabet A, while the second field id is either 1 or 2; we suppose
the input set has exactly two packets labeled with id = 1 and id = 2. In a set of packet histories, the
two active packets have the same value for st ∈ S—this represents the current state in the automata.
Past packets in the history have st ∈ A, representing the words produced so far; the first and second
components of the output are tracked by the histories with id = 1 and id = 2. We can encode the
transition function τ as a probabilistic choice in ProbNetKAT, updating the current state st of all
packets, and recording non-blank letters produced by ρ in the two components by applying dup
on packets with the corresponding value of id.
Intuitively, a set of packet histories generated by the resulting ProbNetKAT term describes a pair
of words generated by the original automaton. With a bit more bookkeeping (see Appendix B), we
can show that two 2-generative automata are equivalent if and only if their encoded ProbNetKAT
policies are equivalent. Thus, deciding equivalence for ProbNetKAT with dup is harder than deciding
equivalence for 2-generative automata. Showing hardness for the full framework is a fascinating
open question. At the same time, deciding equivalence between 2-generative automata appears to
require substantially new ideas; these insights could shed light on how to decide equivalence for
the full ProbNetKAT language.
8 RELATED WORK
A key ingredient that underpins the results in this paper is the idea of representing the semantics
of iteration using absorbing Markov chains, and exploiting their properties to directly compute
limiting distributions on them.
Markov chains have been used by several authors to represent and to analyze probabilistic
programs. An early example of using Markov chains for modeling probabilistic programs is the
seminal paper by Sharir, Pnueli, and Hart [28]. They present a general method for proving properties
of probabilistic programs. In their work, a probabilistic program is modeled by a Markov chain and
an assertion on the output distribution is extended to an invariant assertion on all intermediate
distributions (providing a probabilistic generalization of Floyd’s inductive assertion method). Their
approach can assign semantics to infinite Markov chains for infinite processes, using stationary
distributions of absorbing Markov chains in a similar way to the one used in this paper. Note
however that the state space used in this and other work is not like ProbNetKAT’s current and
accumulator sets (2P k × 2P k ), but is instead is the Cartesian product of variable assignments and
program location. In this sense, the absorbing states occur for program termination, rather than
for accumulation as in ProbNetKAT. Although packet modification is clearly related to variable
assignment, accumulation does not clearly relate to program location.
Readers familiar with prior work on probabilistic automata might wonder if we could directly
apply known results on (un)decidability of probabilistic rational languages. This is not the case—
probabilistic automata accept distributions over words, while ProbNetKAT programs encode distributions over languages. Similarly, probabilistic programming languages, which have gained
popularity in the last decade motivated by applications in machine learning, focus largely on
Bayesian inference. They typically come equipped with a primitive for probabilistic conditioning
and often have a semantics based on sampling. Working with ProbNetKAT has a substantially
different style, in that the focus is on specification and verification rather than inference.
Di Pierro, Hankin, and Wiklicky have used probabilistic abstract interpretation (PAI) to statically analyze probabilistic λ-calculus [6]. They introduce a linear operator semantics (LOS) and
demonstrate a strictness analysis, which can be used in deterministic settings to replace lazy with
eager evaluation without loss. Their work was later extended to a language called pWhile, using a store plus program location state-space similar to [28]. The language pWhile is a basic imperative language comprising while-do and if-then-else constructs, but augmented with random choice between program blocks with a rational probability, and limited to a finite number of finitely-ranged
variables (in our case, packet fields). The authors explicitly limit integers to finite sets for analysis
purposes to maintain finiteness, arguing that real programs will have fixed memory limitations. In
contrast to our work, they do not deal with infinite limiting behavior beyond stepwise iteration,
and do not guarantee convergence. Probabilistic abstract interpretation is a new but growing field
of research [34].
Olejnik, Wiklicky, and Cheraghchi provided a probabilistic compiler pwc for a variation of
pWhile [25], implemented in OCaml, together with a testing framework. The pwc compiler has
optimizations involving, for instance, the Kronecker product to help control matrix size, and a Julia
backend. Their optimizations based on the Kronecker product might also be applied in, for instance,
the generation of SJpK from BJpK, but we have not pursued this direction as of yet.
There is plenty of prior work on finding explicit distributions of probabilistic programs. Gordon,
Henzinger, Nori, and Rajamani surveyed the state of the art with regard to probabilistic inference
[12]. They show how stationary distributions on Markov chains can be used for the semantics of
infinite probabilistic processes, and how they converge under certain conditions. Similar to our
approach, they use absorbing strongly-connected-components to represent termination.
Markov chains are used in many probabilistic model checkers, of which PRISM [20] is a prime
example. PRISM supports analysis of discrete-time Markov chains, continuous-time Markov chains,
and Markov decision processes. The models are checked against specifications written in temporal
logics like PCTL and CSL. PRISM is written in Java and C++ and provides three model checking
engines: a symbolic one with (multi-terminal) binary decision diagrams ((MT)BDDs), a sparse
matrix one, and a hybrid. The use of PRISM to analyse ProbNetKAT programs is an interesting
research avenue and we intend to explore it in the future.
9 CONCLUSION
This paper settles the decidability of program equivalence for history-free ProbNetKAT. The key
technical challenge is overcome by modeling the iteration operator as an absorbing Markov chain,
which makes it possible to compute a closed-form solution for its semantics. The resulting tool is
useful for reasoning about a host of other program properties unrelated to equivalence. Natural
directions for future work include investigating equivalence for full ProbNetKAT, developing an
optimized implementation, and exploring new applications to networks and beyond.
REFERENCES
[1] Mohammad Al-Fares, Alexander Loukissas, and Amin Vahdat. 2008. A Scalable, Commodity Data Center Network
Architecture. In ACM SIGCOMM Computer Communication Review, Vol. 38. ACM, 63–74.
[2] Rajeev Alur, Dana Fisman, and Mukund Raghothaman. 2016. Regular programming for quantitative properties of data
streams. In ESOP 2016. 15–40.
[3] Carolyn Jane Anderson, Nate Foster, Arjun Guha, Jean-Baptiste Jeannin, Dexter Kozen, Cole Schlesinger, and David
Walker. 2014. NetKAT: Semantic Foundations for Networks. In POPL. 113–126.
[4] Manav Bhatia, Mach Chen, Sami Boutros, Marc Binderberger, and Jeffrey Haas. 2014. Bidirectional Forwarding
Detection (BFD) on Link Aggregation Group (LAG) Interfaces. RFC 7130. (Feb. 2014). https://doi.org/10.17487/RFC7130
[5] Timothy A. Davis. 2004. Algorithm 832: UMFPACK V4.3—an Unsymmetric-pattern Multifrontal Method. ACM Trans.
Math. Softw. 30, 2 (June 2004), 196–199. https://doi.org/10.1145/992200.992206
[6] Alessandra Di Pierro, Chris Hankin, and Herbert Wiklicky. 2005. Probabilistic λ-calculus and quantitative program
analysis. Journal of Logic and Computation 15, 2 (2005), 159–179. https://doi.org/10.1093/logcom/exi008
[7] Manfred Droste, Werner Kuich, and Heiko Vogler. 2009. Handbook of Weighted Automata. Springer.
[8] Nate Foster, Dexter Kozen, Konstantinos Mamouras, Mark Reitblatt, and Alexandra Silva. 2016. Probabilistic NetKAT.
In ESOP. 282–309. https://doi.org/10.1007/978-3-662-49498-1_12
[9] Nate Foster, Dexter Kozen, Matthew Milano, Alexandra Silva, and Laure Thompson. 2015. A Coalgebraic Decision
Procedure for NetKAT. In POPL. ACM, 343–355.
[10] Phillipa Gill, Navendu Jain, and Nachiappan Nagappan. 2011. Understanding Network Failures in Data Centers:
Measurement, Analysis, and Implications. In ACM SIGCOMM. 350–361.
[11] Michele Giry. 1982. A categorical approach to probability theory. In Categorical aspects of topology and analysis.
Springer, 68–85. https://doi.org/10.1007/BFb0092872
[12] Andrew D Gordon, Thomas A Henzinger, Aditya V Nori, and Sriram K Rajamani. 2014. Probabilistic programming. In
Proceedings of the on Future of Software Engineering. ACM, 167–181. https://doi.org/10.1145/2593882.2593900
[13] Chuanxiong Guo, Guohan Lu, Dan Li, Haitao Wu, Xuan Zhang, Yunfeng Shi, Chen Tian, Yongguang Zhang, and
Songwu Lu. 2009. BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers. ACM
SIGCOMM Computer Communication Review 39, 4 (2009), 63–74.
[14] Chuanxiong Guo, Haitao Wu, Kun Tan, Lei Shi, Yongguang Zhang, and Songwu Lu. 2008. Dcell: A Scalable and
Fault-Tolerant Network Structure for Data Centers. In ACM SIGCOMM Computer Communication Review, Vol. 38. ACM,
75–86.
[15] Peyman Kazemian, George Varghese, and Nick McKeown. 2012. Header Space Analysis: Static Checking for Networks.
In USENIX NSDI 2012. 113–126. https://www.usenix.org/conference/nsdi12/technical-sessions/presentation/kazemian
[16] John G Kemeny, James Laurie Snell, et al. 1960. Finite markov chains. Vol. 356. van Nostrand Princeton, NJ.
[17] Ahmed Khurshid, Wenxuan Zhou, Matthew Caesar, and Brighten Godfrey. 2012. Veriflow: Verifying Network-Wide
Invariants in Real Time. In ACM SIGCOMM. 467–472.
[18] Dexter Kozen. 1981. Semantics of probabilistic programs. J. Comput. Syst. Sci. 22, 3 (1981), 328–350. https://doi.org/10.
1016/0022-0000(81)90036-2
[19] Dexter Kozen. 1997. Kleene Algebra with Tests. ACM TOPLAS 19, 3 (May 1997), 427–443. https://doi.org/10.1145/
256167.256195
[20] M. Kwiatkowska, G. Norman, and D. Parker. 2011. PRISM 4.0: Verification of Probabilistic Real-time Systems. In Proc.
23rd International Conference on Computer Aided Verification (CAV’11) (LNCS), G. Gopalakrishnan and S. Qadeer (Eds.),
Vol. 6806. Springer, 585–591. https://doi.org/10.1007/978-3-642-22110-1_47
[21] Vincent Liu, Daniel Halperin, Arvind Krishnamurthy, and Thomas E Anderson. 2013. F10: A Fault-Tolerant Engineered
Network. In USENIX NSDI. 399–412.
[22] Haohui Mai, Ahmed Khurshid, Rachit Agarwal, Matthew Caesar, P. Brighten Godfrey, and Samuel Talmadge King.
2011. Debugging the Data Plane with Anteater. In ACM SIGCOMM. 290–301.
[23] Mehryar Mohri. 2000. Generic ε -removal algorithm for weighted automata. In CIAA 2000. Springer, 230–242.
[24] Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya, and Amin Vahdat. 2009. Portland: A Scalable Fault-Tolerant Layer 2 Data Center Network
Fabric. In ACM SIGCOMM Computer Communication Review, Vol. 39. ACM, 39–50.
[25] Maciej Olejnik, Herbert Wiklicky, and Mahdi Cheraghchi. 2016. Probabilistic Programming and Discrete Time
Markov Chains. (2016). http://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/
MaciejOlejnik.pdf
[26] Arjun Roy, Hongyi Zeng, Jasmeet Bagga, George Porter, and Alex C. Snoeren. 2015. Inside the Social Network’s
(Datacenter) Network. In ACM SIGCOMM. 123–137.
[27] N. Saheb-Djahromi. 1980. CPOs of measures for nondeterminism. Theoretical Computer Science 12 (1980), 19–37.
https://doi.org/10.1016/0304-3975(80)90003-1
[28] Micha Sharir, Amir Pnueli, and Sergiu Hart. 1984. Verification of probabilistic programs. SIAM J. Comput. 13, 2 (1984),
292–314. https://doi.org/10.1137/0213021
[29] Ankit Singla, Chi-Yao Hong, Lucian Popa, and P Brighten Godfrey. 2012. Jellyfish: Networking Data Centers Randomly.
In USENIX NSDI. 225–238.
[30] Steffen Smolka, Spiros Eliopoulos, Nate Foster, and Arjun Guha. 2015. A Fast Compiler for NetKAT. In ICFP 2015.
https://doi.org/10.1145/2784731.2784761
[31] Steffen Smolka, Praveen Kumar, Nate Foster, Dexter Kozen, and Alexandra Silva. 2017. Cantor Meets Scott: Semantic
Foundations for Probabilistic Networks. In POPL 2017. https://doi.org/10.1145/3009837.3009843
[32] Robert Endre Tarjan. 1975. Efficiency of a Good But Not Linear Set Union Algorithm. J. ACM 22, 2 (1975), 215–225.
https://doi.org/10.1145/321879.321884
[33] L. Valiant. 1982. A Scheme for Fast Parallel Communication. SIAM J. Comput. 11, 2 (1982), 350–361.
[34] Di Wang, Jan Hoffmann, and Thomas Reps. 2018. PMAF: An Algebraic Framework for Static Analysis of Probabilistic
Programs. In POPL 2018. https://www.cs.cmu.edu/~janh/papers/WangHR17.pdf
A OMITTED PROOFS
Lemma A.1. Let A be a finite boolean combination of basic open sets, i.e. sets of the form Ba = {a} ↑
for a ∈ ℘ω (H), and let L−M denote the semantics from [31]. Then for all programs p and inputs a ∈ 2H ,
Lp∗M(a)(A) = lim_{n→∞} Lp(n)M(a)(A)
Proof. Using topological arguments, the claim follows directly from previous results: A is a
Cantor-clopen set by [31] (i.e., both A and A are Cantor-open), so its indicator function 1A is
Cantor-continuous. But µn ≜ Lp(n)M(a) converges weakly to µ ≜ Lp∗M(a) in the Cantor topology
(Theorem 4 in [8]), so
lim_{n→∞} Lp(n)M(a)(A) = lim_{n→∞} ∫ 1_A dµ_n = ∫ 1_A dµ = Lp∗M(a)(A)
(To see why A and A are open in the Cantor topology, note that they can be written in disjunctive
normal form over atoms B {h } .)
□
Proof of Proposition 3.1. We only need to show that for dup-free programs p and history-free
inputs a ∈ 2Pk , LpM(a) is a distribution on packets (where we identify packets and singleton histories).
We proceed by structural induction on p. All cases are straightforward except perhaps the case of
p ∗ . For this case, by the induction hypothesis, all Jp (n) K(a) are discrete probability distributions on
packet sets, therefore vanish outside 2Pk . By Lemma A.1, this is also true of the limit Jp ∗ K(a), as its
value on 2Pk must be 1, therefore it is also a discrete distribution on packet sets.
□
Proof of Lemma 3.3. This follows directly from Lemma A.1 and Proposition 3.1 by noticing that
any set A ⊆ 2Pk is a finite boolean combination of basic open sets.
□
Lemma A.2. The matrix X = I − Q in Equation (2) of §5.1 is invertible.
Proof. Let S be a finite set of states, |S| = n, and M an S × S substochastic matrix (M_st ≥ 0, M1 ≤ 1). A state s is defective if (M1)_s < 1. We say M is stochastic if M1 = 1, irreducible if (Σ_{i=0}^{n−1} M^i)_st > 0
i=0 M )st > 0
(that is, the support graph of M is strongly connected), and aperiodic if all entries of some power of
M are strictly positive.
We show that if M is substochastic such that every state can reach a defective state via a path in
the support graph, then the spectral radius of M is strictly less than 1. Intuitively, all weight in the
system eventually drains out at the defective states.
Let e_s, s ∈ S, be the standard basis vectors. As a distribution, e_s^T is the unit point mass on s. For A ⊆ S, let e_A = Σ_{s∈A} e_s. The L1-norm of a substochastic vector is its total weight as a distribution. Multiplying on the right by M never increases total weight, but will strictly decrease it if there is nonzero weight on a defective state. Since every state can reach a defective state, this must happen after n steps, thus ∥e_s^T M^n∥_1 < 1. Let c = max_s ∥e_s^T M^n∥_1 < 1. For any y = Σ_s a_s e_s,

∥y^T M^n∥_1 = ∥(Σ_s a_s e_s)^T M^n∥_1 ≤ Σ_s |a_s| · ∥e_s^T M^n∥_1 ≤ Σ_s |a_s| · c = c · ∥y^T∥_1.
Then M^n is contractive in the L1 norm, so |λ| < 1 for all eigenvalues λ. Thus I − M is invertible
because 1 is not an eigenvalue of M.
□
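To see Lemma A.2 numerically, the sketch below (our own illustration, using numpy) builds a small substochastic matrix in which every state can reach the defective state, confirms that its spectral radius is below 1, and inverts I − Q as the closed-form solution for iteration requires.

import numpy as np

# State 2 is defective (row sum 0.5 < 1) and is reachable from states 0 and 1.
Q = np.array([
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
    [0.1, 0.0, 0.4],
])

spectral_radius = max(abs(np.linalg.eigvals(Q)))
assert spectral_radius < 1                       # Lemma A.2

X = np.linalg.inv(np.eye(3) - Q)                 # (I - Q)^{-1} exists
# Sanity check against the truncated Neumann series sum_k Q^k.
approx = sum(np.linalg.matrix_power(Q, k) for k in range(200))
print(np.allclose(X, approx))                    # True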
B ENCODING 2-GENERATIVE AUTOMATA IN FULL PROBNETKAT
To keep notation light, we describe our encoding in the special case where the alphabet A = {x, y},
there are four states S = {s 1 , s 2 , s 3 , s 4 }, the initial state is s 1 , and the output function ρ is
ρ(s 1 ) = (x, _)
ρ(s 2 ) = (y, _)
ρ(s 3 ) = (_, x)
ρ(s 4 ) = (_, y).
Encoding general automata is not much more complicated. Let τ : S → D(S) be a given transition
function; we write pi, j for τ (si )(s j ). We will build a ProbNetKAT policy simulating this automaton.
Packets have two fields, st and id, where st ranges over S ∪A ∪ {•} and id ranges over {1, 2}. Define:
p ≜ st=s 1 ; loop∗ ; st←•
The initialization keeps packets that start in the initial state, while the final command marks
histories that have exited the loop by setting st to be special letter •.
The main program loop first branches on the current state st:
loop ≜ case
  st=s1 : state1
  st=s2 : state2
  st=s3 : state3
  st=s4 : state4
Then, the policy simulates the behavior from each state. For instance:
state1 ≜ ⊕ ( (if id=1 then st←x ; dup else skip) ; st←s1 @ p1,1,
             (if id=1 then st←y ; dup else skip) ; st←s2 @ p1,2,
             (if id=2 then st←x ; dup else skip) ; st←s3 @ p1,3,
             (if id=2 then st←y ; dup else skip) ; st←s4 @ p1,4 )
The policies state2, state3, state4 are defined similarly.
Now, suppose we are given two 2-generative automata W ,W ′ that differ only in their transition
functions. For simplicity, we will further assume that both systems have strictly positive probability
of generating a letter in either component in finitely many steps from any state. Suppose they
generate distributions µ, µ ′ respectively over pairs of infinite words Aω × Aω . Now, consider the
encoded ProbNetKAT policies p, p′. We argue that JpK = Jp′K if and only if µ = µ′.5
First, it can be shown that JpK = Jp′K if and only if JpK(e) = Jp′K(e), where e ≜ {ππ | π ∈ Pk}. Let ν = JpK(e) and ν′ = Jp′K(e). The key connection between the automata and the encoded policies is the following equality:

µ(Su,v) = ν(Tu,v)    (6)
for every pair of finite prefixes u, v ∈ A∗ . In the automata distribution on the left, Su,v ⊆ Aω × Aω
consists of all pairs of infinite strings where u is a prefix of the first component and v is a prefix of
the second component. In the ProbNetKAT distribution on the right, we first encode u and v as
packet histories. For i ∈ {1, 2} representing the component and w ∈ A∗ a finite word, define the
history
hi(w) ∈ H ≜ (st = •, id = i), (st = w[|w|], id = i), . . . , (st = w[1], id = i), (st = s1, id = i).
The letters of the word w are encoded in reverse order because, by convention, the head/newest packet is written towards the left-most end of a packet history, while the oldest packet is written towards the right-most end. For instance, the final letter w[|w|] is the most recent (i.e., the latest) letter produced by the policy. Then, Tu,v is the set of all history sets including h1(u) and h2(v):

Tu,v ≜ {a ∈ 2H | h1(u) ∈ a, h2(v) ∈ a}.

5 We will not present the semantics of ProbNetKAT programs with dup here; instead, the reader should consult earlier papers [8, 31] for the full development.
Now JpK = Jp ′K implies µ = µ ′, since Equation (6) gives
µ(Su,v ) = µ ′(Su,v ).
The reverse implication is a bit more delicate. Again by Equation (6), we have
ν (Tu,v ) = ν ′(Tu,v ).
We need to extend this equality to all cones, defined by packet histories h:
B h ≜ {a ∈ 2H | h ∈ a}.
This follows by expressing B h as boolean combinations of Tu,v , and observing that the encoded
policy produces only sets of encoded histories, i.e., where the most recent state st is set to • and
the initial state st is set to s 1 .
Motif-based Rule Discovery for Predicting
Real-valued Time Series∗†‡
Yuanduo He, Xu Chu, Juguang Peng, Jingyue Gao, Yasha Wang
arXiv:1709.04763v4 [cs.AI] 2 Dec 2017
Key Laboratory of High Confidence Software Technologies,
Ministry of Education, Beijing 100871, China
{ydhe, chu xu, pgj.pku12, gaojingyue1997, yasha.wang}@pku.edu.cn
Abstract
Time series prediction is of great significance in many applications and has attracted extensive attention from the data
mining community. Existing work suggests that for many
problems, the shape in the current time series may correlate with an upcoming shape in the same or another series. Therefore, it is a promising strategy to associate two recurring patterns as a rule’s antecedent and consequent: the occurrence
of the antecedent can foretell the occurrence of the consequent, and the learned shape of the consequent will give accurate predictions. Earlier work employs symbolization methods, but the symbolized representation maintains too little
information of the original series to mine valid rules. The
state-of-the-art work, though directly manipulating the series, fails to segment the series precisely for seeking antecedents/consequents, resulting in inaccurate rules in common scenarios. In this paper, we propose a novel motif-based rule discovery method, which utilizes motif discovery
to accurately extract frequently occurring consecutive subsequences, i.e. motifs, as antecedents/consequents. It then
investigates the underlying relationships between motifs by
matching motifs as rule candidates and ranking them based
on the similarities. Experimental results on real open datasets
show that the proposed approach outperforms the baseline
method by 23.9%. Furthermore, it extends the applicability
from single time series to multiple ones.
Introduction
The prediction of real-valued time series is a topic of great
significance and colossal enthusiasm in the data mining
community, and has been applied extensively in many research areas (Hamilton 1994; Esling and Agon 2012). Most
of the work in the literature aims at modeling the dependencies among variables, and forecasting the next few values
of a series based on the current values (Makridakis, Wheelwright, and Hyndman 2008). However, other work suggests
that for many problems it is the shape of the current pattern
rather than actual values that makes the prediction (Das et
al. 1998). For clarity, we call the latter one, forecasting by
∗
The work remains incomplete and will be further refined. Read
it on your own risk :).
†
We are grateful to Leye Wang, Junyi Ma, Zhu Jin, and Siqi
Yang for their invaluable help.
‡
We are also grateful for the reviewers in AAAI18.
shape, rule-based prediction, which is the subject of this paper.
Informally1 , a rule A ⇒τ B associates a pattern A as the
antecedent with a pattern B as the consequent in an upperbounding time interval τ . The rule-based prediction works
as follows: if a shape resembling A is observed in a series,
then a shape resembling B is supposed to appear in the same
or another series, within the time interval τ . Usually, a rule
refers to an “authentic rule”, an underlying relationship between two events implied by the two shapes.
In most work, firstly the subsequence clustering method
is employed to symbolize/discretize the series, and then the
symbolic series rule discovery method is applied to find
rules from real-valued series (Das et al. 1998). However,
such work has met limited success and ends up discovering spurious rules (e.g. “good” rules discovered from random walk data), because the symbol representation produced by common symbolization methods is independent of the raw
real-valued series (Keogh and Lin 2005). In conclusion, the
symbolized time series maintains little information about the
original series to mine valid rules.
The state-of-the-art work (Shokoohi-Yekta et al. 2015)
directly manipulates the series under the assumption that
a rule is contained in a subsequence. It first selects a
subsequence and then splits it into a rule’s antecedent
and consequent. However, usually there is an interval between a rule’s antecedent and consequent. The splitting
method will append the extra interval series to the antecedent/consequent, which fails to segment precisely for
antecedents/consequents and results in rules with a lower
prediction performance. Furthermore, it cannot be applied
to find rules from multiple series, i.e. a shape in one series
predicting a shape in another series.
Only when the observed object presents repeatability, can
predictions be valid/reasonable. We believe that a serviceable rule should appear repeatedly in a real-valued time
series and therefore its antecedent and consequent must
be recurring patterns of the series. Recent studies (Vahdatpour, Amini, and Sarrafzadeh 2009; Brown et al. 2013;
Mueen 2014) show that motifs, frequently occurring patterns in time series (Patel et al. 2002), contain key information about the series. Therefore, we could utilize motif discovery methods to find all distinct motifs which accurately represent antecedent and consequent candidates of rules.
However, three challenges are confronted: (1) there could be quite a few (overlapping) motifs in a single series (see Figure 1), let alone a dataset containing multiple series. How to screen for effective motifs as antecedents and consequents? (2) there could be many instances of two given effective motifs, and different combinations of them will lead to different instances of a rule (see Figure 2). How to identify the best combinations for the underlying rule? (3) How to rank different rules given the instances of each rule?
In this paper, we present a rule-based prediction method for real-valued time series from the perspective of motifs. For each pair of motifs as a candidate rule, a heuristic-matching-based scoring algorithm is developed to investigate the connection between them.
We summarize the contributions of this paper as follows:
• We propose a novel rule discovery approach from the perspective of motifs. It can find rules with higher prediction performance and can also be applied to multiple time series.
• We develop a novel heuristic matching algorithm in search of the best combinations of motif instances. To accommodate the heuristic matching result, we modify Yekta’s scoring method leveraging the Minimum Description Length (MDL) principle to evaluate each rule (Shokoohi-Yekta et al. 2015).
• We evaluate our work on real open datasets, and the experimental results show that our method outperforms state-of-the-art work by 23.9% on average2. Moreover, rules in multiple series can also be discovered.

Figure 1: The electrical penetration graphs (EPG) data monitoring the probing behavior of leafhoppers. There are three motifs in the series; motif 1 and motif 2 overlap in the green curves.

Figure 2: An underlying rule with respect to motifs mA and mB. TA and TB are two series. m̃A(i) and m̃B(j) are motif instances of mA and mB in TA and TB, respectively. m̃A(1) with m̃B(2) or m̃B(4) can form different rule instances, i.e. the two solid lines.

1 A formal definition is given in the Preliminaries section.
Related Work
Early work extracts association rules from frequent patterns
in the transactional database (Han, Pei, and Kamber 2011).
Das et al. are the first to study the problem of finding rules
from real-valued time series in a symbolization-based perspective (Das et al. 1998), followed by a series of work
(Sang Hyun, Wesley, and others 2001; Wu, Salzberg, and
Zhang 2004).
However, it has been widely accepted that little of the work dealing with real-valued series discovers valid rules for prediction, because the symbolized series produced by common symbolization methods, including subsequence clustering and Piecewise Linear Approximation (PLA), cannot effectively represent the original series (Struzik 2003; Keogh and Lin 2005; Shokoohi-Yekta et al. 2015). Eamonn Keogh et al. (2005)
demonstrate that clustering of time series subsequences
is meaningless, resulting from the independence between
the clustering centers and the input. Shokoohi-Yekta et al.
(2015) point out that two time series can differ only by
a small discrepancy, yet have completely different PLA representations. Therefore, the symbolized series is unrelated to the original real-valued one, and the search for valid rules is doomed to fail.
The state-of-the-art work (Y15) directly manipulates
the real-valued series (Shokoohi-Yekta et al. 2015). Their
method is founded on the assumption that a rule is contained in a subsequence, which splits a subsequence into
the rule’s antecedent and consequent. Usually, there is an
interval between a rule’s antecedent and consequent, and the
splitting method will append the extra series to the antecedent/consequent. The complicated diversity of the intervals could result in rules with bad prediction performance.
Besides, the splitting method cannot be applied to discover
rules from two series either, since a motif is a subsequence
in a single series.
The difference between Y15 and our work can be seen
from the perspective of the different usages of motifs. In
Y15, it is noted that “In principle we could use a brute force
search, testing all subsequences of T”, which means that it
can work without motifs; but our method cannot work without motifs, since we are trying to relate pairs of them. Even if
the Y15 method uses motifs to “provide a tractable search”, it is still different from ours. As it mentioned, “a good rule candidate must be a time series motif in T”, Y15 treats motifs as rule candidates, while our method takes motifs as candidates of antecedents and consequents, as we noted that “we could utilize motif discovery methods to find all distinct motifs which accurately represent antecedent and consequent candidates of rules”.

2 This experiment is performed on a single dataset, which is inadequate. We will refine it. Please see future work.
Preliminaries
In this paper, we consider the problem of rule discovery from
real-valued time series for rule-based prediction. Without
loss of generality, the rule discovery problem is exemplified using two time series TA and TB , since finding rules
from more than two time series can be treated pairwise. It can
also be applied to the situation of only one series by letting
TA = TB . We begin with the definition of a rule and its
instance.
Definition 1 (Rule). A rule r is a 4-tuple (mA , mB , τ, θ).
mA , mB are the subsequences of two real-valued time series TA , TB , respectively, and also the antecedent and consequent of the rule r. τ is a non-negative value3 indicating
the max length of time interval between the mA and mB . θ
is a non-negative value as the trigger threshold for firing the
rule.
Definition 2 (Instance). A rule r’s instance e is a 3-tuple
(m˜A , m˜B , τ̃ ). m˜A is the mA -like subsequence observed in
series TA subject to d(m˜A , mA ) < θ. m˜B is the mB -like
subsequence observed later than m˜A in series TB . τ̃ is the
time interval between m˜A and m˜B .
By the definitions, given a rule r = (mA , mB , τ, θ), if a
subsequence m˜A of TA is observed and d(m˜A , mA ) < θ,
the rule r is fired and a subsequence m̃B similar to mB
is supposed to be observed from TB within the subsequent τ time interval4 . Notice that there are no constraints for τ̃ and
d(m˜B , mB ) in Definition 2, since an instance with τ̃ > τ or
d(m̃B, mB) ≫ 0 can be viewed as a “bad” instance for r.
An example is shown in Figure 2.
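To make Definitions 1 and 2 concrete, here is a minimal sketch of a rule object and its firing test, with Euclidean distance for d as in the text; the class and function names are ours, not the paper's.

from dataclasses import dataclass
from math import dist
from typing import List

@dataclass
class Rule:             # r = (mA, mB, tau, theta), Definition 1
    mA: List[float]     # antecedent shape
    mB: List[float]     # consequent shape
    tau: float          # max time interval between antecedent and consequent
    theta: float        # trigger threshold

def fires(rule: Rule, window: List[float]) -> bool:
    # The rule fires when an observed window of |mA| values lies within
    # distance theta of the antecedent.
    return dist(window, rule.mA) < rule.theta

r = Rule(mA=[0.0, 1.0, 2.0], mB=[2.0, 1.0, 0.0], tau=10.0, theta=0.5)
print(fires(r, [0.1, 1.0, 2.1]))   # True: close to the antecedent shape
print(fires(r, [2.0, 2.0, 2.0]))   # False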
Intuitively, if a rule is good, it must have many supporting
instances in the time series, and then m˜A and m˜B will be
frequently occurring patterns. By the definition of the motif
(Patel et al. 2002), they are actually the motifs of TA and
TB , respectively. Inspired by that, we make the following
assumption.
Assumption 1. If r = (mA , mB , τ, θ) is a rule, then mA ∈
MA , mB ∈ MB , where MA and MB are motif sets of TA
and TB .
Based on the assumption, the rule discovery problem can
be formulated as follows. Given two time series TA , TB
with their motif sets MA and MB , find top-K rules r =
(ma , mb , τ, θ), where ma ∈ MA , and mb ∈ MB . Efficient algorithms for motif discovery have already been developed, and we could choose the widely-applied MK algorithm (Mueen et al. 2009)5 .
3 In real applications, τ is a predefined value according to the series, which avoids nonsense rules with an infinite max time interval.
4 In fact, m̃B is observed after τ̃ time.
5 For conciseness, the subscripts of all Ks are omitted throughout this paper. In fact, the Ks can be different.
Algorithm 1: Find top-K rules.
  Input: TA and TB are two time series.
  Output: Res is the K best rules.
  1  MA, MB ← motifs(TA), motifs(TB)
  2  MA, MB ← sortK(MA), sortK(MB)
  3  forall (mA, mB) ∈ MA × MB do
  4      score(mA, mB, TA, TB)
  5  Res ← topK score rules
Directly, we present the top-level algorithm 1. It first finds
the motif set for each series in line 1 by MK algorithm. Line
2 sorts and returns the top K motifs according to the domain
knowledge of the series. Then it traverses every pair of motifs, and scores the corresponding rule in line 3 to 4. It finally
returns the best K rules.
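A minimal runnable rendering of Algorithm 1 in Python, with motif discovery and scoring passed in as functions; the stand-in helpers and the toy data below are ours (the paper uses the MK algorithm and the scoring procedure of the next section).

from itertools import product

def find_top_k_rules(TA, TB, motifs, score, K=5):
    # Lines 1-2: discover motifs and keep the top K per series (assuming
    # `motifs` returns them already ranked, standing in for sortK).
    MA, MB = motifs(TA)[:K], motifs(TB)[:K]
    # Lines 3-4: score every (antecedent, consequent) candidate pair.
    scored = [((mA, mB), score(mA, mB, TA, TB)) for mA, mB in product(MA, MB)]
    scored.sort(key=lambda kv: kv[1], reverse=True)
    return scored[:K]                          # Line 5: best K rules

def toy_motifs(T):
    return [T[i:i + 3] for i in range(0, len(T) - 3, 3)]

def toy_score(mA, mB, TA, TB):
    return -abs(sum(mA) - sum(mB))

TA = [0, 1, 2, 3, 2, 1, 0, 1, 2]
TB = [1, 2, 3, 2, 1, 0, 1, 2, 3]
print(find_top_k_rules(TA, TB, toy_motifs, toy_score, K=2))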
Methodologies
Given a pair of motifs ma and mb, the score algorithm aims
at evaluating a rule candidate r based on the instances in
the training data. We will use the example in Figure 2 to
illustrate. The scoring approach consists of three steps:
1. Find out all mA , mB -like patterns in TA , TB . In this step,
we use the sliding window method to select similar patterns by setting a threshold. In Figure 2, three mA -like
patterns m˜A (i) (i = 1, 2, 3) and four mB -like patterns
m˜B (i) (i = 1, .., 4) are discovered;
2. Match mA , mB -like patterns into rule instances, and
search for the matching result that can support the rule
most. A brutal search is not only intractable but also lack
of robustness. Instead, we propose a heuristic matching
algorithm according to the belief that a rule is preferred
when it has (1) many instances and (2) a short max-length
time interval. The lines (both dotted and solid) in Figure
3 are all possible instances and the solid ones {e1,2 , e2,3 }
are the best instances, because this matching has the largest number of
instances (i.e. 2), and the average length of time intervals
is also the smallest.
3. Score each instance with respect to the rule r, and then
integrate to the final score. In this step, instead of Euclidean distance, we follow Yekta’s scoring method and
further consider the ratio of antecedents being matched,
i.e. 2/3. In the example, the two best instances {e1,2 , e2,3 }
are evaluated respectively based on MDL principle. The
final score for r is the sum of {e1,2 , e2,3 }’s results multiplied by 2/3.
Step 1. Motif-like Pattern Discovery
The MK algorithm (Mueen et al. 2009) returns pairs of subsequences which are very similar to each other as motifs. In
this step, given a motif m (a pair of subsequences) of a time
series T , we need to search for all m-like non-overlapped
subsequences in T .
Figure 3: The graph G for the example in Figure 2. M̃A = {m̃A(1), m̃A(2)}, M̃B = {m̃B(2), m̃B(3), m̃B(4)}, and E = {e1,2, e1,3, e1,4, e2,3, e2,4}. The best subset S is {e1,2, e2,3}, i.e., the solid edges. The crossed edges e2,3 and e1,4 cannot be chosen for S at the same time, according to the parallel constraint.

A direct approach based on the sliding window method is as follows: (1) search all subsequences of the same length with
m and calculate the distance between them and m; (2) set an
appropriate threshold θ0 to filter the subsequences with distance smaller than θ0 ; (3) sort the remaining subsequences
by distance and remove the overlapping ones with larger distances. After that, the motif-like patterns are chosen as the
non-overlapped subsequences with small distance.
The rule threshold θ is set as the threshold for selecting antecedent-like subsequences. The complexity of sliding window method is determined by the number of windows and the distance computation procedure. In this step,
the complexity is O(|m| · |T |).
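A compact sketch of this sliding-window search (our own illustration; z-normalization and efficiency tricks omitted): compute the distance from every window to the motif, keep those under the threshold, and greedily drop overlapping windows with larger distance.

from math import dist

def motif_like_patterns(T, m, theta):
    # Step 1: start indices of non-overlapping windows of T whose
    # Euclidean distance to motif m is below theta.
    w = len(m)
    candidates = [(dist(T[i:i + w], m), i) for i in range(len(T) - w + 1)]
    candidates = sorted(c for c in candidates if c[0] < theta)
    chosen = []
    for d, i in candidates:                    # keep closest, drop overlaps
        if all(abs(i - j) >= w for j in chosen):
            chosen.append(i)
    return sorted(chosen)

T = [0, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0]
print(motif_like_patterns(T, m=[1, 2, 1], theta=0.5))   # -> [2, 7]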
Step 2. Heuristic Matching
The patterns found in step 1 can be combined into pairs as
instances of a rule. This step aims at finding the best instance
set that support the rule, which should satisfy the following
conditions: (1) its cardinality is the largest among all possible sets; (2) the average length of interjacent intervals of
instances is the smallest. The two conditions come from the
belief about what a good rule is.
Modeling. To formulate the problem concretely, we introduce the following notations and construct a weighted bipartite graph.
M̃A = {m̃A(1), ..., m̃A(p)}, M̃B = {m̃B(1), ..., m̃B(q)} are the sets containing all subsequences similar to the rule’s antecedent mA and consequent mB, respectively.
E = {ei,j = (m˜A (i) , m˜B (j) )|1 ≤ i ≤ p, 1 ≤ j ≤ q, 0 <
t(m˜B (j) ) − t(m˜A (i) ) < τ }, where function t(·) returns the
occurrence time of the pattern. E is the set of all feasible
instances, since the antecedent must appear before the consequent and the interval between them cannot be too large.
It imposes a structure on the set E that given m˜A (i) , for ∀j
such that τ > t(m˜B (j) ) − t(m˜A (i) ) > 0, then ei,j ∈ E.
Besides, let wi,j = t(m˜B (j) ) − t(m˜A (i) ) measure the length
of interjacent interval of the instance ei,j .
M̃A, M̃B, and E make up a weighted bipartite graph G = (M̃A ∪ M̃B, E). Figure 3 shows the graph G of the example in Figure 2.
Optimization. Using the notation introduced above, we
restate the heuristic matching process as: Given a non-complete weighted graph G(M̃A ∪ M̃B, E), find the instance set S ⊂ E subject to (1) |S| is maximized, and (2) W(S), the total weight of S, is minimized.
One cannot simply apply algorithms solving assignment
problem due to a parallel constraint. Concretely speaking,
for any two instances, if the antecedent-like pattern in one
instance appears earlier than the other’s, then its corresponding consequent-like pattern must also come earlier
than the other’s. In the graph, this constraint requires no
crossed edges in S, as is illustrated in Figure 3.
Suppose the maximum |S| is somehow known to be s; we can then solve the following 0-1 integer programming problem:

  minimize_x   Σ_{i,j} w_{i,j} x_{i,j}                          (1a)
  subject to   Σ_{i,j} x_{i,j} = s                              (1b)
               Σ_i x_{i,j} ≤ 1,   Σ_j x_{i,j} ≤ 1               (1c)
               x_{i,j} + x_{k,l} ≤ 1,  ∀ i > k and l < j        (1d)
               x_{i,j} ∈ {0, 1}                                 (1e)
The optimization variables xi,j are 0-1 variables, constrained by (1e), each representing the selection of the corresponding instance ei,j. (1b) fixes |S| to its maximum value s. (1c) requires that at most one edge can be chosen
in the graph G with respect to the same vertex. (1d) refers to
the parallel constraint.
Now consider how to solve for s. It is not the classical
problem of maximum unweighted bipartite matching due
to the parallel constraint and therefore it cannot be easily
solved by max/min flow algorithms. We formulate it as another optimization problem.
  maximize_x   Σ_{i,j} x_{i,j}                                  (2a)
  subject to   Σ_i x_{i,j} ≤ 1,   Σ_j x_{i,j} ≤ 1               (2b)
               x_{i,j} + x_{k,l} ≤ 1,  ∀ i > k and l < j        (2c)
               x_{i,j} ∈ {0, 1}                                 (2d)
Both optimization problems are 0-1 integer programs, which are NP-hard in general. Existing solvers (e.g. the Matlab Optimization Toolbox) based on the cutting-plane method can handle these problems within a tolerable time.
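The paper solves these two integer programs with an off-the-shelf solver. As an illustrative alternative (our own sketch, not the authors' implementation), the same optimum for this non-crossing matching can be computed by a dynamic program over the time-ordered patterns, maximizing the instance count and breaking ties by minimum total interval weight.

def best_matching(E, p, q):
    # E: dict {(i, j): w_ij} of feasible instances, with antecedents 1..p
    # and consequents 1..q indexed in time order. Returns (count, weight,
    # pairs) with maximum count and, among those, minimum total weight;
    # the parallel (non-crossing) constraint holds by construction.
    f = [[(0, 0.0, ())] * (q + 1) for _ in range(p + 1)]
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            best = max(f[i - 1][j], f[i][j - 1])
            if (i, j) in E:
                c, nw, pairs = f[i - 1][j - 1]
                best = max(best, (c + 1, nw - E[(i, j)], pairs + ((i, j),)))
            f[i][j] = best
    c, nw, pairs = f[p][q]
    return c, -nw, list(pairs)

# Edges of the Figure 3 example; the interval lengths (weights) are made up.
E = {(1, 2): 3.0, (1, 3): 6.0, (1, 4): 9.0, (2, 3): 2.0, (2, 4): 5.0}
print(best_matching(E, p=2, q=4))   # -> (2, 5.0, [(1, 2), (2, 3)])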
Step 3. MDL-based Instance Scoring
In this step, given the best instance set S, we first evaluate each instance by the similarity between it and the rule r
made up by mA and mB , and then aggregate the results to
the score of rule r for further comparison.
We first introduce the MDL-based scoring method, which
was initially proposed in Shokoohi-Yekta et al. 2015.
Intuitively, the more similar the shape in the instance is
with respect to the rule’s consequent, the better the instance
can support the rule. The Euclidean distance is the most widely accepted measure of similarity. However, the length of the consequent varies across rules, and the Euclidean metric cannot fairly judge the differences between subsequences of different lengths.
Inspired by the Minimum Description Length principle, which states that any regularity in a given set of data can be used to compress the data, i.e., to describe it using fewer symbols than are needed to describe the data literally (Grünwald 2007), it is possible to take the rule's consequent as the regularity and measure how many bits can be saved by compressing the shape in the instances according to that regularity using Huffman coding. A concrete example can be found in Shokoohi-Yekta et al. (2015).
To use the MDL principle, the series must be digitized first; let dl(·) denote the coding length. The digitization loses little information from the raw data. The number of bits saved for an instance e by encoding $\tilde{m}_B$ with respect to r's consequent $m_B$ is as below:
$$\mathrm{bit\_saved}(e, r) = dl(e.\tilde{m}_B) - dl(e.\tilde{m}_B \mid r.m_B) \qquad (3)$$
The above is the original version of the MDL-based scoring method developed by Shokoohi-Yekta et al.
We further take into consideration the ratio of antecedent-like patterns that are actually matched. Intuitively, when the ratio is too small, indicating that the number of matched instances is much smaller than the number of times the antecedent is fired, the rule should not be considered a good rule. Therefore, the final score for a rule r is:
$$\mathrm{score}(r) = \frac{s}{|\widetilde{M}_A|} \sum_{e \in S} \mathrm{bit\_saved}(e, r) - dl(r.m_B) \qquad (4)$$
Experiment Evaluation
We evaluate our method on real open datasets. Top rules discovered by our method and the baseline method from the
same training data are compared and analyzed. In addition,
we also validate the applicability of our method on multiple
series.
Experiment Setup
The baseline method (Y15) is the state-of-the-art work by Shokoohi-Yekta et al. The experiments are conducted on two open metering datasets. One is the Almanac of Minutely Power dataset (AMPds), mainly containing electricity stream data at one
(AMPds), mainly containing electricity stream data at one
minute intervals per meter for 2 years of monitoring in a
single house (Makonin et al. 2013; 2016). The other is UK
Domestic Appliance-Level Electricity (UK-DALE) dataset,
which records both the whole-house mains power demand
every six seconds as well as power demand from individual
appliances every six seconds from five houses (Kelly and
Knottenbelt 2015).
Settings. Two groups of experiments are performed to (1) evaluate the prediction performance of our method and (2) validate its applicability on multiple series.
• On Single Series. We utilize the aggregate power series of the clothes washer and clothes dryer for 1 year from AMPds, which is also used by Y15 as the experimental dataset. The series of the first month is used to discover rules, while the rest is used for testing. We select the top 5 rules found by each method and evaluate their prediction performance on the test data;
• On Multiple Series. We attempt to discover rules from the separate power series of a total of 52 appliances from house 1 of the UK-DALE dataset, such as the boiler, laptop, etc. We run our method on each pair of series to search for valid rules.
Metric. To measure the prediction performance of rules, we adopt the same metric Q proposed by Shokoohi-Yekta et al.:
$$Q(r) = \frac{\sum_{i=1}^{N} d(m_B, u_i)}{\sum_{i=1}^{N} d(m_B, v_i)}, \qquad (5)$$
where $N$ is the total number of firings of the rule $r$ in the test data, and $d(m_B, x)$ is the Euclidean distance between the consequent $m_B$ and the shape beginning at position $x$. $u_i$ and $v_i$ are the $i$-th firing position and the $i$-th randomly chosen position, respectively. The denominator is used to normalize $Q$ to a value between 0 and 1. The final $Q$ is averaged over 1000 measurements.
A smaller Q indicates better prediction performance. A Q close to 1 suggests that the rule r is no better than random guessing, while a Q close to 0 indicates that the rule r captures the structure in the data and predicts accurately.
On Single Series
To compare Y15 with our method, we select top 5 rules discovered by each method from the training data, and then
evaluate them on the test data.
Result. The Q values of the top 5 ranked rules are listed in Table 1. The top rules discovered by our method are better than those discovered by Y15. Specifically, our method outperforms Y15 by 23.9% on average.
Comparison. To demonstrate why the rules discovered by our method outperform those by Y15, we take a close look at the 5-th top rules, rY (by Y15) and rO (by this paper), and scrutinize their prediction results on the test data in Figure 4.
As shown in Figures 4a and 4b, the 5-th top rules rY and rO are quite different from each other even though they describe the same thing6. rY comes from a splitting point at the 10% position of a 120-long subsequence, so its antecedent is 12 in length and its consequent is 108 in length, whereas rO takes two motifs as its antecedent and consequent, whose lengths
6
Actually, they both imply the fact that the clothes dryer is often
used after the clothes washer.
Rank   Y15     MBP
1      0.389   0.340
2      0.436   0.299
3      0.398   0.337
4      0.481   0.310
5      0.424   0.341
Mean   0.426   0.324

Table 1: The prediction performance Q of the top 5 rules on the test data.
Figure 4: The two 5-th top rules with the prediction result around the same position. (a) The 5-th top rule rY by Y15; (b) the 5-th top rule rO by our method; (c) a firing by rY; (d) a firing by rO. In (a) and (b), the red patterns are the antecedent of the rule, while the blue ones are the consequent. The rule's threshold θ is set to 5 and the maximum time interval τ is set to 300 in both methods. (c) and (d) depict the prediction results around the same position by both rules. The black curve is the real time series. The red curve shows the position where the rule is fired, while the blue curve is the best prediction during the maximum time interval.
are 50 and 30, respectively. In contrast, rO has a more reasonable antecedent and consequent. To illustrate, consider the case in Figures 4c and 4d.
The antecedents of both rules present a good match and trigger both rules around the same position. However, rO gives an accurate prediction, while rY predicts a shape with
a clear discrepancy to the real series. The interval before
rY ’s consequent is so long that the consequent misses the
best matching position. Intuitively, rY ’s consequent can be
viewed as rO ’s consequent appended by a piece of noise series, which results in the mismatch of the consequent.
The inaccurate prediction has its root in the splitting method of Y15, which inevitably adds extra noise to the "real" antecedent/consequent, because the splitting method cannot locate the boundaries of the antecedent/consequent. Our method, however, directly finds the key part of the series, i.e., the motif, as the rule's antecedent/consequent.
Additionally, any rule discovered by Y15 is a split of a motif in this experiment. Since the split parts are also frequent patterns in the series, they can be discovered as motifs, i.e., elements of MA and MB. Therefore, rules discovered by Y15 can also be found by our method.
Discussion. In the electricity datasets, the zero series7 is recognized as a motif by the MK algorithm because it is also a frequent pattern (though a meaningless one). To avoid such motifs, we sort the discovered motifs by the "roughness" of their shape and choose the top ones. Commonly, the sorting process, mentioned in line 2 of Algorithm 1, depends on the characteristics of the time series.
7 The values in the series are almost all 0s, indicating that no appliance is being used.
Figure 5: A rule rE discovered in the series of hair dryer and
straightener. The red curve is the antecedent, while the blue
curve is the consequent.
On Multiple Series
We attempt to discover rules in multiple appliance series from the UK-DALE dataset, which includes 52 kinds of appliances. The original data is sampled every six seconds, and we resample it per minute for efficiency.
Result. A serviceable rule rE is discovered from the power series of the hair dryer and straightener; its antecedent and consequent are shown in Figure 5.
The rule rE describes the relationship between the usage of the hair dryer and the straightener. An interesting fact is that the rule's antecedent and consequent are interchangeable, coinciding with the common sense that the two appliances, hair dryer and straightener, are often used at the same time. To illustrate rE concretely, we show an overview of 4 power series in Figure 6, including the hair dryer, straightener, bread maker and laptop. The series of the hair dryer and straightener are well matched, whereas the other combinations are ranked relatively lower.
Figure 6: The overviews of four appliances' power series.
Conclusion and Discussion
In this paper, we have introduced a novel rule-based prediction method for real-valued time series from the perspective of motifs. We preliminarily explore the possibility of relating two motifs as a rule candidate. The method first leverages motif discovery to segment the time series precisely, seeking recurring patterns as antecedents/consequents, and then investigates the underlying temporal relationships between motifs by combining motifs into rule candidates and ranking them based on their similarities.
However, as mentioned before, this work is still preliminary and will be refined. We further consider the following two problems:
First, the current experiments mainly use one kind of open dataset, i.e., household electricity usage. We will search for more open datasets to comprehensively evaluate the performance of our method.
Second, in this work we evaluate each rule from the perspective of prediction. However, prediction is only a single aspect of a rule. We will try to develop more metrics that can reveal the inner connections within rules.
Acknowledgements
This work is supported by the program JS71-16-005.
References
Brown, A. E.; Yemini, E. I.; Grundy, L. J.; Jucikas, T.; and
Schafer, W. R. 2013. A dictionary of behavioral motifs
reveals clusters of genes affecting caenorhabditis elegans locomotion. Proceedings of the National Academy of Sciences
110(2):791–796.
Das, G.; Lin, K.-I.; Mannila, H.; Renganathan, G.; and
Smyth, P. 1998. Rule discovery from time series. In KDD,
volume 98, 16–22.
Esling, P., and Agon, C. 2012. Time-series data mining.
ACM Computing Surveys (CSUR) 45(1):12.
Grünwald, P. D. 2007. The minimum description length
principle. MIT press.
Hamilton, J. D. 1994. Time series analysis, volume 2.
Princeton university press Princeton.
Han, J.; Pei, J.; and Kamber, M. 2011. Data mining: concepts and techniques. Elsevier.
Kelly, J., and Knottenbelt, W. 2015. The UK-DALE
dataset, domestic appliance-level electricity demand and
whole-house demand from five UK homes. 2(150007).
Keogh, E., and Lin, J. 2005. Clustering of time-series subsequences is meaningless: implications for previous and future
research. Knowledge and information systems 8(2):154–
177.
Makonin, S.; Popowich, F.; Bartram, L.; Gill, B.; and Bajic,
I. V. 2013. Ampds: A public dataset for load disaggregation
and eco-feedback research. In Electrical Power & Energy
Conference (EPEC), 2013 IEEE, 1–6. IEEE.
Makonin, S.; Ellert, B.; Bajic, I. V.; and Popowich, F. 2016.
Electricity, water, and natural gas consumption of a residential house in Canada from 2012 to 2014. Scientific Data
3(160037):1–12.
Makridakis, S.; Wheelwright, S. C.; and Hyndman, R. J.
2008. Forecasting methods and applications. John wiley
& sons.
Mueen, A.; Keogh, E.; Zhu, Q.; Cash, S.; and Westover, B.
2009. Exact discovery of time series motifs. In Proceedings
of the 2009 SIAM International Conference on Data Mining,
473–484. SIAM.
Mueen, A. 2014. Time series motif discovery: dimensions
and applications. Wiley Interdisciplinary Reviews: Data
Mining and Knowledge Discovery 4(2):152–159.
Patel, P.; Keogh, E.; Lin, J.; and Lonardi, S. 2002. Mining
motifs in massive time series databases. In Data Mining,
2002. ICDM 2003. Proceedings. 2002 IEEE International
Conference on, 370–377. IEEE.
Sang Hyun, P.; Wesley, W.; et al. 2001. Discovering and
matching elastic rules from sequence databases. Fundamenta Informaticae 47(1-2):75–90.
Shokoohi-Yekta, M.; Chen, Y.; Campana, B.; Hu, B.; Zakaria, J.; and Keogh, E. 2015. Discovery of meaningful rules
in time series. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining, 1085–1094. ACM.
Struzik, Z. R. 2003. Time series rule discovery: Tough, not
meaningless. Lecture notes in computer science 32–39.
Vahdatpour, A.; Amini, N.; and Sarrafzadeh, M. 2009.
Toward unsupervised activity discovery using multidimensional motif detection in time series. In IJCAI, volume 9, 1261–1266.
Wu, H.; Salzberg, B.; and Zhang, D. 2004. Online event-driven subsequence matching over financial data streams. In
Proceedings of the 2004 ACM SIGMOD international conference on Management of data, 23–34. ACM.
| 2 |
arXiv:1402.2031v1 [cs.CV] 10 Feb 2014
Deeply Coupled Auto-encoder Networks for
Cross-view Classification
Wen Wang, Zhen Cui, Hong Chang, Shiguang Shan, Xilin Chen
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
{wen.wang, zhen.cui, hong.chang, shiguang.shan, xilin.chen}@vipl.ict.ac.cn
November 2013
Abstract

The comparison of heterogeneous samples extensively exists in many applications, especially in the task of image classification. In this paper, we propose a simple but effective coupled neural network, called Deeply Coupled Autoencoder Networks (DCAN), which seeks to build two deep neural networks, coupled with each other in every corresponding layer. In DCAN, each deep structure is developed via stacking multiple discriminative coupled auto-encoders, a denoising auto-encoder trained with a maximum margin criterion consisting of intra-class compactness and inter-class penalty. This single-layer component makes our model simultaneously preserve the local consistency and enhance its discriminative capability. With an increasing number of layers, the coupled networks can gradually narrow the gap between the two views. Extensive experiments on cross-view image classification tasks demonstrate the superiority of our method over state-of-the-art methods.

1 Introduction

Real-world objects often have different views, which might be endowed with the same semantics. For example, face images can be captured in different poses, which reveal the identity of the same object; images of one face can also be in different modalities, such as pictures under different lighting conditions and poses, or even sketches from artists. In many computer vision applications, such as image retrieval, there is interest in comparing two types of heterogeneous images, which may come from different views or even different sensors. Since the spanned feature spaces are quite different, it is very difficult to classify these images across views directly. To decrease the discrepancy across views, most previous works endeavored to learn view-specific linear transforms to project cross-view samples into a common latent space, and then employed these newly generated features for classification.

Though there are many approaches to learning view-specific projections, they can be divided roughly according to whether supervised information is used. Unsupervised methods such as Canonical Correlation Analysis (CCA) [14] and Partial Least Squares (PLS) [26] have been employed for the task of cross-view recognition. Both of them attempt to use two linear mappings to project samples into a common space where the correlation is maximized, while PLS considers the variations rather than only the correlation in the target space. Besides, with the use of mutual information, a Coupled Information-Theoretic Encoding (CITE) method was developed to narrow the inter-view gap for the specific photo-sketch recognition task. And in [30], a semi-coupled dictionary is used to bridge two views. All the methods above consider reducing the discrepancy between two views; however, the label information is not explicitly taken into account. With label information available, many methods were further developed to learn a discriminant common space. For instance, Discriminative Canonical Correlation Analysis (DCCA) [16] is proposed as an extension of CCA.
In [22], with additional local smoothness constraints, two linear projections are simultaneously learnt for Common Discriminant Feature Extraction (CDFE). There are also other such methods, such as the large margin approach [8] and Coupled Spectral Regression (CSR) [20]. Recently, multi-view analysis [27, 15] has been further developed to jointly learn multiple view-specific transforms when multiple views (usually more than 2) are available.
Although the above methods have been extensively applied to the cross-view problem and have achieved encouraging performance, they all employ linear transforms to capture the shared features of samples from two views. However, these linear discriminant analysis methods usually depend on the assumption that the data of each class follows a Gaussian distribution, while real-world data usually has a much more complex distribution [33]. This indicates that linear transforms are insufficient to extract the common features of cross-view images, so it is natural to consider learning nonlinear features.
A recent topic of interest in nonlinear learning is the
research in deep learning. Deep learning attempts to learn
nonlinear representations hierarchically via deep structures, and has been applied successfully in many computer vision problems. Classical deep learning methods
often stack or compose multiple basic building blocks
to yield a deeper structure. See [5] for a recent review
of Deep Learning algorithms. Lots of such basic building blocks have been proposed, including sparse coding
[19], restricted Boltzmann machine (RBM) [12], autoencoder [13, 6], etc. Specifically, the (stacked) autoencoder has shown its effectiveness in image denoising
[32], domain adaptation [7], audio-visual speech classification [23], etc.
As is well known, the kernel method, such as Kernel Canonical Correlation Analysis (Kernel CCA) [1], is also a widely used approach to learning nonlinear representations. Compared with the kernel method, deep learning is much more flexible and time-saving, because the transform is learned rather than fixed and the time needed for training and inference is not bound by the size of the training set.
Inspired by the deep learning works above, we intend to
solve the cross-view classification task via deep networks.
It’s natural to build one single deep neural network with
samples from both views, but this kind of network can’t
handle complex data from totally different modalities and
may suffer from inadequate representation capacity. Another way is to learn two different deep neural networks
with samples of the different views. However, the two independent networks project samples from different views
into different spaces, which makes comparison infeasible.
Hence, building two neural networks coupled with each
other seems to be a better solution.
In this work, we propose a Deeply Coupled Auto-encoder Networks (DCAN) method that learns common representations for cross-view classification by building two deeply coupled neural networks, each for one view. We build the DCAN by stacking multiple discriminative coupled auto-encoders, i.e., denoising auto-encoders trained with a maximum margin criterion. The discriminative coupled auto-encoder has a similar input-corruption and reconstruction-error-minimization mechanism to the denoising auto-encoder proposed in [28], but is modified by adding a maximum margin criterion. This kind of criterion has been used in previous works, such as [21, 29, 35]. Note that the counterparts from the two views are added into the maximum margin criterion simultaneously since they come from the same class, which naturally couples the corresponding layers in the two deep networks. A schematic illustration can be seen in Fig.1.
The proposed DCAN is related to Multimodal Auto-encoders [23], Multimodal Restricted Boltzmann Machines, and Deep Canonical Correlation Analysis [3]. The first two methods tend to learn a single network with one or more layers connected to both views and to predict one view from the other, and Deep Canonical Correlation Analysis builds two deep networks, each for one view, where only representations of the highest layer are constrained to be correlated. Therefore, the key difference is that we learn two deep networks whose representations are coupled with each other in every layer, which is of great benefit because the DCAN not only learns two separate deep encodings but also makes better use of data from both views. What's more, these differences allow our model to handle the recognition task even when data is impure and insufficient.
The rest of this paper is organized as follows. Section 2 details the formulation and solution to the proposed Deeply Coupled Auto-encoder Networks. Experimental results in Section 3 demonstrate the efficacy of
the DCAN. In section 4 a conclusion is given.
2 Deeply Coupled Auto-encoder Networks

In this section, we first present the basic idea. The second part gives a detailed description of the discriminative coupled auto-encoder. Then, we describe how to stack multiple layers to build a deep network. Finally, we briefly describe the optimization of the model.

2.1 Basic Idea

As shown in Fig.1, the Deeply Coupled Auto-encoder Networks (DCAN) consist of two deep networks coupled with each other, each for one view. The network structures of the two deep networks are just like the left-most and right-most parts in Fig.1, where circles denote the units in each layer (pixels of an input image for the input layer and hidden representations in higher layers), and arrows denote the full connections between adjacent layers. The middle part of Fig.1 illustrates how the whole network projects samples from different views into a common space and gradually enhances the separability with increasing layers.

Figure 1: An illustration of our proposed DCAN. The left-most and right-most schematics show the structures of the two coupled networks respectively. The schematic in the middle illustrates how the whole network gradually enhances the separability with increasing layers, where pictures with solid-line borders denote samples from view 1, those with dotted-line borders denote samples from view 2, and different colors imply different subjects.

The two deep networks are both built by stacking multiple similar coupled single-layer blocks, because a single coupled layer might be insufficient, and the method of stacking multiple layers and training each layer greedily has been proved efficient in many previous works, such as those in [13, 6]. With the number of layers increased, the whole network can compactly represent a significantly larger set of transforms than shallow networks, and gradually narrow the gap while the discriminative capacity is enhanced.

We use a discriminative coupled auto-encoder trained with a maximum margin criterion as the single-layer component. Concretely, we incorporate additional noise in the training process while maximizing the margin criterion, which makes the learnt mapping more stable as well as discriminant. Note that the maximum margin criterion also works in coupling the two corresponding layers. Formally, the discriminative coupled auto-encoder can be written as follows:

$$\min_{f_x, f_y} \; L(X, f_x) + L(Y, f_y) \qquad (1)$$
$$\text{s.t.} \;\; G_1(H_x, H_y) - G_2(H_x, H_y) \le \varepsilon, \qquad (2)$$

where X, Y denote inputs from the two views, and $H_x$, $H_y$ denote hidden representations of the two views respectively. $f_x : X \to H_x$ and $f_y : Y \to H_y$ are the transforms we intend to learn; we denote the reconstructive error by $L(\cdot)$ and the maximum margin criterion by $G_1(\cdot) - G_2(\cdot)$, which are described in detail in the next subsection. $\varepsilon$ is the threshold of the maximum margin criterion.

2.2 Discriminative coupled auto-encoder

In the cross-view problem, there are two types of heterogeneous samples. Without loss of generality, we denote samples from one view as $X = [x_1, \cdots, x_n]$ and those from the other view as $Y = [y_1, \cdots, y_n]$, in which n is the sample size. Note that the corresponding labels are known, and $H_x$, $H_y$ denote the hidden representations of the two views we want to learn.

The DCAN attempts to learn two nonlinear transforms $f_x : X \to H_x$ and $f_y : Y \to H_y$ that project the samples from the two views into one discriminant common space, in which the local neighborhood relationship as well as the class separability should be well preserved for each view. The auto-encoder-like structure stands out in preserving the local consistency, and the denoising form enhances the robustness of the learnt representations. However, discrimination is not taken into consideration. Therefore, we modify the denoising auto-encoder by adding a maximum margin criterion consisting of intra-class compactness and an inter-class penalty. The best nonlinear transformation is a trade-off between local consistency preservation and separability enhancement.

Just like in the denoising auto-encoder, the reconstructive error $L(\cdot)$ in Eq.(1) is formulated as follows:

$$L(X, \Theta) = \sum_{x \in X} \mathbb{E}_{\tilde{x} \sim P(\tilde{x} \mid x)} \|\hat{x} - x\|_p \qquad (3)$$
$$L(Y, \Theta) = \sum_{y \in Y} \mathbb{E}_{\tilde{y} \sim P(\tilde{y} \mid y)} \|\hat{y} - y\|_p \qquad (4)$$

where $\mathbb{E}$ calculates the expectation over corrupted versions $\tilde{X}$, $\tilde{Y}$ of examples X, Y obtained from a corruption process $P(\tilde{x}|x)$, $P(\tilde{y}|y)$. $\Theta = \{W_x, W_y, b_x, b_y, c_x, c_y\}$ specifies the two nonlinear transforms $f_x$, $f_y$, where $W_x$, $W_y$ are the weight matrices, $b_x$, $b_y$, $c_x$, $c_y$ are the biases of the encoder and decoder respectively, and $\hat{X}$, $\hat{Y}$ are calculated through the decoder process:

$$\hat{X} = s(W_x^T H_x + c_x), \qquad \hat{Y} = s(W_y^T H_y + c_y) \qquad (5)$$

And the hidden representations $H_x$, $H_y$ are obtained from the encoder, which is a similar mapping to the decoder:

$$H_x = s(W_x \tilde{X} + b_x), \qquad H_y = s(W_y \tilde{Y} + b_y) \qquad (6)$$

where s is the nonlinear activation function, such as the point-wise hyperbolic tangent operation on linearly projected features, i.e.,

$$s(x) = \frac{e^{ax} - e^{-ax}}{e^{ax} + e^{-ax}} \qquad (7)$$

in which a is the gain parameter.

Moreover, for the maximum margin criterion consisting of intra-class compactness and inter-class penalty, the constraint term $G_1(\cdot) - G_2(\cdot)$ in Eq.(1) is used to realize coupling, since samples of the same class are treated similarly no matter which view they are from.

Assume S is the set of sample pairs from the same class, and D is the set of sample pairs from different classes. Note that the counterparts from the two views are naturally added into S and D, since it is the class rather than the view that is considered.

Then, we characterize the compactness as follows,

$$G_1(H) = \frac{1}{2N_1} \sum_{I_i, I_j \in S} \|h_i - h_j\|^2, \qquad (8)$$

where $h_i$ denotes the corresponding hidden representation of an input $I_i \in X \cup Y$ and is a sample from either view 1 or view 2, and $N_1$ is the size of S.

Meanwhile, the goal of the inter-class separability is to push the adjacent samples from different classes far away, which can be formulated as follows,

$$G_2(H) = \frac{1}{2N_2} \sum_{\substack{I_i, I_j \in D \\ I_j \in \mathrm{KNN}(I_i)}} \|h_i - h_j\|^2, \qquad (9)$$

where $I_j$ belongs to the k nearest neighbors of $I_i$ with different class labels, and $N_2$ is the number of all pairs satisfying the condition.

The roles of $G_1(H)$ and $G_2(H)$ are illustrated in the middle part of Fig.1. In the projected common space denoted by S, the compactness term $G_1(\cdot)$, shown by the red ellipse, works by pulling intra-class samples together, while the penalty term $G_2(\cdot)$, shown by the black ellipse, tends to push adjacent inter-class samples away.

Finally, by solving the optimization problem Eq.(1), we can learn a couple of nonlinear transforms $f_x$, $f_y$ to transform the original samples from both views into a common space.

2.3 Stacking coupled auto-encoder

Through the training process above, we model the map between the original sample space and a preliminary discriminant subspace with the gap eliminated, and build a hidden representation H which is a trade-off between approximate preservation of local consistency and the distinction of the projected data. But since real-world data is highly complicated, using a single coupled layer to model the vast and complex real scenes might be insufficient. So we choose to stack multiple such coupled network layers as described in subsection 2.2. With the number of layers increased, the whole network can compactly represent a significantly larger set of transforms than shallow networks, and gradually narrow the gap with the discriminative ability enhanced.

Training a deep network with coupled nonlinear transforms can be achieved by the canonical greedy layer-wise approach [12, 6]. To be more precise, after training a single-layer coupled network, one can compute a new feature H by the encoder in Eq.(6) and then feed it into the next layer network as the input feature. In practice, we find that stacking multiple such layers can gradually reduce the gap and improve the recognition performance (see Fig.1 and Section 3).
2.4 Optimization

We adopt the Lagrangian multiplier method to solve the objective function Eq.(1) with the constraints Eq.(2) as follows:

$$\min_{\Theta} \; \lambda\,\bigl(L(X, \Theta) + L(Y, \Theta)\bigr) + \bigl(G_1(H) - G_2(H)\bigr) + \gamma\left(\tfrac{1}{2}\|W_x\|_F^2 + \tfrac{1}{2}\|W_y\|_F^2\right) \qquad (10)$$

where the first term is the reconstruction error, the second term is the maximum margin criterion, and the last term is the shrinkage constraint, called the Tikhonov regularizer in [11], which is utilized to decrease the magnitude of the weights and further helps prevent over-fitting. λ is the balance parameter between local consistency and empirical separability, and γ is called the weight decay parameter and is usually set to a small value, e.g., 1.0e-4.

To optimize the objective function (10), we use back-propagation to calculate the gradient and then employ the limited-memory BFGS (L-BFGS) method [24, 17], which is often used to solve nonlinear optimization problems without any constraints. L-BFGS is particularly suitable for problems with a large number of variables under a moderate memory requirement. To utilize L-BFGS, we need to calculate the gradients of the objective function. Obviously, the objective function in (10) is differentiable with respect to the parameters Θ, and we use the back-propagation [18] method to derive the derivative of the overall cost function. In our setting, we find the objective function can achieve as fast convergence as described in [17].

3 Experiments

In this section, the proposed DCAN is evaluated on two datasets, Multi-PIE [9] and CUHK Face Sketch FERET (CUFSF) [34, 31].

3.1 Databases

The Multi-PIE dataset [9] is employed to evaluate face recognition across pose. Here a subset from the 337 subjects in 7 poses (−45°, −30°, −15°, 0°, 15°, 30°, 45°), 3 expressions (Neutral, Smile, Disgust), with no-flash illumination from 4 sessions is selected to validate our method. We randomly choose 4 images for each pose of each subject, then randomly partition the data into two parts: the training set with 231 subjects (i.e., 231 × 7 × 4 = 6468 images) and the testing set with the remaining subjects.

The CUHK Face Sketch FERET (CUFSF) dataset [34, 31] contains two types of face images: photo and sketch. In total 1,194 images (one image per subject) were collected with lighting variations from the FERET dataset [25]. For each subject, a sketch is drawn with shape exaggeration. According to the configuration of [15], we use the first 700 subjects as the training data and the remaining subjects as the testing data.

3.2 Settings

All images from Multi-PIE and CUFSF are cropped to 64×80 pixels without any preprocessing. We compare the proposed DCAN method with several baselines and state-of-the-art methods, including CCA [14], Kernel CCA [1], Deep CCA [3], FDA [4], CDFE [22], CSR [20], PLS [26] and MvDA [15]. The first seven methods are pairwise methods for cross-view classification. MvDA jointly learns all transforms when multiple views can be utilized, and has achieved state-of-the-art results in their reports [15].

Principal Component Analysis (PCA) [4] is used for dimension reduction. In our experiments, we set the default dimensionality to 100 with preservation of most energy, except for Deep CCA, PLS, CSR and CDFE, where the dimensionality is tuned in [50, 1000] for the best performance. For all these methods, we report the best performance by tuning the related parameters according to their papers. Firstly, for Kernel CCA, we experiment with the Gaussian kernel and the polynomial kernel and adjust the parameters to get the best performance. Then for Deep CCA [3], we strictly follow their algorithm and tune all possible parameters, but the performance is inferior to CCA. One possible reason is that Deep CCA only considers the correlations on training data (as reported in their paper), so that the learnt model overly fits the training data, which leads to poor generality on the testing set. Besides, the parameters α and β are respectively traversed in [0.2, 2] and [0.0001, 1] for CDFE, the parameters λ and η are searched in [0.001, 1] for CSR, and the reduced dimensionality is tuned for CCA, PLS, FDA and MvDA.
As for our proposed DCAN, the performance on the CUFSF database with varied parameters λ and k is shown in Fig.3. In the following experiments, we set λ = 0.2, γ = 1.0e−4, k = 10 and a = 1. With increasing layers, the number of hidden neurons is gradually reduced by 10, i.e., 90, 80, 70, 60 if there are four layers.

Method           Accuracy
CCA [14]         0.698
Kernel CCA [10]  0.840
Deep CCA [3]     0.599
FDA [4]          0.814
CDFE [22]        0.773
CSR [20]         0.580
PLS [26]         0.574
MvDA [15]        0.867
DCAN-1           0.830
DCAN-2           0.877
DCAN-3           0.884
DCAN-4           0.879

Table 1: Evaluation on the Multi-PIE database in terms of mean accuracy. DCAN-k means a stacked k-layer network.

Figure 2: After learning common features by the cross-view methods, we project the features into a 2-D space by using the two principal components in PCA. The depicted samples are randomly chosen from the Multi-PIE [9] dataset. The "◦" and "+" points come from the two views respectively. Points of different colors belong to different classes. DCAN-k is our proposed method with a stacked k-layer neural network.

3.3 Face Recognition across Pose

First, to explicitly illustrate the learnt mapping, we conduct an experiment on the Multi-PIE dataset by projecting the learnt common features into a 2-D space with Principal Component Analysis (PCA), as shown in Fig.2. The classical method CCA can only roughly align the data in the principal directions, and the state-of-the-art method MvDA [15] attempts to merge the two types of data but seems to fail. Thus, we argue that linear transforms are a little stiff for converting data from two views into an ideal common space. The three diagrams below show that DCAN can gradually separate samples from different classes with the increase of layers, which is just as described in the above analysis.

Next, we compare our method with several state-of-the-art methods for the cross-view face recognition task on the Multi-PIE data set. Since the images are acquired over seven poses on Multi-PIE, in total 7 × 6 = 42 comparison experiments need to be conducted. The detailed results are shown in Table 2, where two poses are used as the gallery and probe set for each other and the rank-1 recognition rate is reported. Further, the mean accuracy of all pairwise results for each method is also reported in Table 1.

From Table 1, we find that the supervised methods except CSR are significantly superior to CCA due to the use of the label information, and the nonlinear methods except Deep CCA are significantly superior to the linear methods due to the use of nonlinear transforms. Compared with FDA, the proposed DCAN with only a one-layer network performs better, with a 1.6% improvement. With increasing layers, the accuracy of DCAN reaches a climax via stacking three layer networks. The reason for the degradation in DCAN with four layers is mainly that further dimensions are cut out from the layer above (the hidden dimensionality is reduced at each layer).
−45◦ −30◦ −15◦
−45◦
−30◦
−15◦
0◦
15◦
30◦
45◦
1.000
0.816
0.588
0.473
0.473
0.515
0.511
0.816
1.000
0.858
0.611
0.664
0.553
0.553
0.588
0.858
1.000
0.894
0.807
0.602
0.447
0◦
15◦
30◦
0.473
0.611
0.894
1.000
0.909
0.604
0.484
0.473
0.664
0.807
0.909
1.000
0.874
0.602
0.515
0.553
0.602
0.604
0.874
1.000
0.768
45◦
−45◦ −30◦ −15◦
0.511 −45◦
0.553 −30◦
0.447 −15◦
0.484
0◦
0.602 15◦
0.768 30◦
1.000 45◦
(a) CCA, Ave = 0.698
−45◦ −30◦ −15◦
−45◦
−30◦
−15◦
0◦
15◦
30◦
45◦
1.000
0.854
0.598
0.425
0.473
0.522
0.523
0.854
1.000
0.844
0.578
0.676
0.576
0.566
0.598
0.844
1.000
0.806
0.807
0.602
0.424
0◦
0.425
0.578
0.806
1.000
0.911
0.599
0.444
30◦
0.522
0.576
0.602
0.599
0.866
1.000
0.756
45◦
−45◦
−30◦
−15◦
0◦
15◦
30◦
45◦
1.000
0.854
0.714
0.595
0.557
0.633
0.608
0.854
1.000
0.867
0.746
0.688
0.697
0.606
0.714
0.867
1.000
0.887
0.808
0.704
0.579
0.523
0.566 −30◦
0.424 −15◦
0.444
0◦
0.624 15◦
0.756 30◦
1.000 45◦
1.000
0.847
0.754
0.686
0.573
0.610
0.664
−45◦
−30◦
−15◦
0◦
15◦
30◦
45◦
1.000
0.856
0.807
0.757
0.688
0.708
0.719
0.872
1.000
0.874
0.854
0.777
0.735
0.715
0.819
0.881
1.000
0.896
0.854
0.788
0.697
30◦
45◦
0.756
0.858
0.911
1.000
0.938
0.759
0.759
0.706
0.808
0.880
0.938
1.000
0.922
0.845
0.726
0.801
0.861
0.759
0.922
1.000
0.912
0.737
0.757
0.765
0.759
0.845
0.912
1.000
0.847
1.000
0.911
0.847
0.807
0.766
0.635
0.754
0.911
1.000
0.925
0.896
0.821
0.602
0◦
15◦
30◦
45◦
0.686
0.847
0.925
1.000
0.964
0.872
0.684
0.573
0.807
0.896
0.964
1.000
0.929
0.768
0.610
0.766
0.821
0.872
0.929
1.000
0.878
0.664
0.635
0.602
0.684
0.768
0.878
1.000
(d) FDA, Ave = 0.814
0◦
15◦
30◦
0.595
0.746
0.887
1.000
0.916
0.819
0.651
0.557
0.688
0.808
0.916
1.000
0.912
0.754
0.633
0.697
0.704
0.819
0.912
1.000
0.850
45◦
−45◦ −30◦ −15◦
0.608 −45◦
0.606 −30◦
0.579 −15◦
0.651
0◦
0.754 15◦
0.850 30◦
1.000 45◦
1.000
0.914
0.854
0.763
0.710
0.770
0.759
(e) CDFE, Ave = 0.773
−45◦ −30◦ −15◦
0.810
0.892
1.000
0.911
0.880
0.861
0.765
−45◦ −30◦ −15◦
−45◦
(c) DeepCCA, Ave = 0.599
−45◦ −30◦ −15◦
0.878
1.000
0.892
0.858
0.808
0.801
0.757
15◦
(b) KernelCCA, Ave = 0.840
15◦
0.473
0.676
0.807
0.911
1.000
0.866
0.624
1.000
0.878
0.810
0.756
0.706
0.726
0.737
0◦
0.914
1.000
0.947
0.858
0.812
0.861
0.766
0.854
0.947
1.000
0.923
0.880
0.894
0.775
0◦
15◦
30◦
45◦
0.763
0.858
0.923
1.000
0.938
0.900
0.750
0.710
0.812
0.880
0.938
1.000
0.923
0.807
0.770
0.861
0.894
0.900
0.923
1.000
0.934
0.759
0.766
0.775
0.750
0.807
0.934
1.000
(f) MvDA, Ave = 0.867
0◦
15◦
30◦
0.730
0.825
0.869
1.000
0.916
0.834
0.752
0.655
0.754
0.865
0.938
1.000
0.918
0.832
0.708
0.737
0.781
0.858
0.900
1.000
0.909
45◦
−45◦ −30◦ −15◦
0.686 −45◦
0.650 −30◦
0.681 −15◦
0.790
0◦
0.823 15◦
0.916 30◦
1.000 45◦
(g) DCAN-1, Ave = 0.830
1.000
0.927
0.867
0.832
0.765
0.779
0.794
0.905
1.000
0.929
0.876
0.865
0.832
0.777
0.876
0.954
1.000
0.925
0.907
0.870
0.785
0◦
15◦
30◦
45◦
0.783
0.896
0.905
1.000
0.951
0.916
0.812
0.714
0.850
0.905
0.958
1.000
0.945
0.876
0.779
0.825
0.867
0.896
0.929
1.000
0.938
0.796
0.730
0.757
0.808
0.874
0.949
1.000
(h) DCAN-3, Ave = 0.884
Table 2: Results of CCA, FDA [4], CDFE [22], MvDA [15] and DCAN on MultiPIE dataset in terms of rank-1
recognition rate. DCAN-k means a stacked k-layer network. Due to space limitation, the results of other methods
cannot be reported here, but their mean accuracies are shown in Table 1.
Method           Photo-Sketch   Sketch-Photo
CCA [14]         0.387          0.475
Kernel CCA [10]  0.466          0.570
Deep CCA [3]     0.364          0.434
CDFE [22]        0.456          0.476
CSR [20]         0.502          0.590
PLS [26]         0.486          0.510
FDA [4]          0.468          0.534
MvDA [15]        0.534          0.555
DCAN-1           0.535          0.555
DCAN-2           0.603          0.613
DCAN-3           0.601          0.652

Table 3: Evaluation on the CUFSF database in terms of mean accuracy. DCAN-k means a stacked k-layer network.

Figure 3: The performance with varied parameter values for our proposed DCAN. The sketch and photo images in CUFSF [34, 31] are respectively used for the gallery and probe set. (a) Varied λ with fixed k = 10; (b) varied k with fixed λ = 0.2. (Both panels plot accuracy (%) against the parameter value.)

Obviously, compared with two-view based methods, the proposed DCAN with three layers improves the performance greatly (88.4% vs. 81.4%). Besides, MvDA also achieves a considerably good performance by using all samples from all poses. It is unfair to compare these two-view based methods (including DCAN) with MvDA, because the latter implicitly uses the information of five additional views beyond the two views currently compared. But our method still performs better than MvDA, 88.4% vs. 86.7%. As observed in Table 2, three-layer DCAN achieves a large improvement compared with CCA, FDA, and CDFE for all cross-view cases, and compared with MvDA for most cross-view cases. The results are shown in Table 2 and Table 1.

3.4 Photo-Sketch Recognition

Photo-Sketch recognition is conducted on the CUFSF dataset. The samples come from only two views, photo and sketch. The comparison results are provided in Table 3. As shown in this table, since only two views can be utilized in this case, MvDA degrades to a performance comparable with the previous two-view based methods. Our proposed DCAN with a three-layer network achieves even better results, with more than a 6% improvement, which further indicates that DCAN benefits from its nonlinear and multi-layer structure.

Discussion and analysis: The above experiments demonstrate that our method can work very well even on a small sample size. The reasons are threefold:

(1) The maximum margin criterion makes the learnt mapping more discriminative, which is a straightforward strategy in the supervised classification task.

(2) The auto-encoder approximately preserves the local neighborhood structures. For this, Alain et al. [2] theoretically prove that the representation learnt by an auto-encoder can recover local properties from the view of the manifold. To further validate this, we employ the first 700 photo images from the CUFSF database to perform nonlinear self-reconstruction with an auto-encoder. With the hidden representations, we find that local neighborhoods with 1, 2, 3, 4, 5 neighbors can be preserved with probability 99.43%, 99.00%, 98.57%, 98.00% and 97.42%, respectively. Thus, the use of the auto-encoder intrinsically reduces the complexity of the discriminant model, which further gives the learnt model better generality on the testing set.

(3) The deep structure generates a gradual model, which makes the learnt transform more robust. With only one layer, the model cannot represent the complex data very well. But as the layers go deeper, the coupled networks can learn much more flexible transforms and hence can be allowed to handle more complex data.
4 Conclusion

In this paper, we propose a deep learning method, the Deeply Coupled Auto-encoder Networks (DCAN), which can gradually generate a coupled discriminant common representation for cross-view object classification. In each layer we take both the local consistency and the discrimination of the projected data into consideration. By stacking multiple such coupled network layers, DCAN can gradually improve the learnt shared features in the common space. Moreover, experiments on cross-view classification tasks demonstrate the superiority of our method over other state-of-the-art methods.
References
[1] S. Akaho. A kernel method for canonical correlation analysis, 2006. 2, 6
[2] G. Alain and Y. Bengio. What regularized auto-encoders
learn from the data generating distribution. arXiv preprint
arXiv:1211.4246, 2012. 9
[3] G. Andrew, R. Arora, J. Bilmes, and K. Livescu. Deep
canonical correlation analysis. 2, 6, 7, 9
[4] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman.
Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997. 6, 7,
8, 9
[5] Y. Bengio, A. Courville, and P. Vincent. Representation
learning: A review and new perspectives. 2013. 2
[6] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle.
Greedy layer-wise training of deep networks, 2007. 2, 3, 5
[7] M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized
denoising autoencoders for domain adaptation, 2012. 2
[8] N. Chen, J. Zhu, and E. P. Xing. Predictive subspace learning for multi-view data: a large margin approach, 2010. 2
[9] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker.
The cmu multi-pose, illumination, and expression (multipie) face database, 2007. 6, 7
[10] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: An overview with application
to learning methods. Neural Computation, 16(12):2639–
2664, 2004. 7, 9
[11] T. Hastie, R. Tibshirani, and J. J. H. Friedman. The elements of statistical learning, 2001. 6
[12] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation,
18(7):1527–1554, 2006. 2, 5
[13] G. E. Hinton and R. R. Salakhutdinov. Reducing the
dimensionality of data with neural networks. Science,
313(5786):504–507, 2006. 2, 3
[14] H. Hotelling. Relations between two sets of variates.
Biometrika, 28(3/4):321–377, 1936. 1, 6, 7, 9
[15] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen. Multiview discriminant analysis. pages 808–821, 2012. 2, 6, 7,
8, 9
[16] T.-K. Kim, J. Kittler, and R. Cipolla. Discriminative learning and recognition of image set classes using canonical
correlations. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 29(6):1005–1018, 2007. 1
[17] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow,
and A. Y. Ng. On optimization methods for deep learning,
2011. 6
[18] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient
backprop. In Neural networks: Tricks of the trade, pages
9–50. Springer, 1998. 6
[19] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse
coding algorithms, 2007. 2
[20] Z. Lei and S. Z. Li. Coupled spectral regression for matching heterogeneous faces, 2009. 2, 6, 7, 9
[21] H. Li, T. Jiang, and K. Zhang. Efficient and robust feature extraction by maximum margin criterion. Neural Networks, IEEE Transactions on, 17(1):157–165, 2006. 2
[22] D. Lin and X. Tang. Inter-modality face recognition. pages
13–26, 2006. 1, 6, 7, 8, 9
[23] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y.
Ng. Multimodal deep learning, 2011. 2
[24] J. Nocedal and S. J. Wright. Numerical optimization,
2006. 6
[25] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss.
The feret database and evaluation procedure for facerecognition algorithms. Image and vision computing,
16(5):295–306, 1998. 6
[26] A. Sharma and D. W. Jacobs. Bypassing synthesis: Pls
for face recognition with pose, low-resolution and sketch,
2011. 1, 6, 7, 9
[27] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs. Generalized multiview analysis: A discriminative latent space,
2012. 2
[28] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol.
Extracting and composing robust features with denoising
autoencoders, 2008. 2
[29] F. Wang and C. Zhang. Feature extraction by maximizing the average neighborhood margin. In Computer Vision
and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE, 2007. 2
[30] S. Wang, L. Zhang, Y. Liang, and Q. Pan. Semicoupled dictionary learning with applications to image
super-resolution and photo-sketch synthesis, 2012. 1
[31] X. Wang and X. Tang. Face photo-sketch synthesis and
recognition. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 31(11):1955–1967, 2009. 6, 9
[32] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting
with deep neural networks, 2012. 2
[33] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin.
Graph embedding and extensions: a general framework for
dimensionality reduction. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 29(1):40–51, 2007. 2
[34] W. Zhang, X. Wang, and X. Tang. Coupled informationtheoretic encoding for face photo-sketch recognition,
2011. 6, 9
[35] B. Zhao, F. Wang, and C. Zhang. Maximum margin embedding. In Data Mining, 2008. ICDM’08. Eighth IEEE
International Conference on, pages 1127–1132. IEEE,
2008. 2
| 1 |
arXiv:1607.00062v1 [math.AG] 30 Jun 2016
LOCAL COHOMOLOGY AND BASE CHANGE
KAREN E. SMITH
Abstract. Let f : X → S be a morphism of Noetherian schemes, with S reduced. For any closed subscheme Z of X finite over S, let j denote the open immersion X \ Z ↪ X. Then for any coherent sheaf F on X \ Z and any index r ≥ 1, the sheaf f∗(R^r j∗F) is generically free on S and commutes with base change. We prove this by proving a related statement about local cohomology: Let R be a Noetherian algebra over a Noetherian domain A, and let I ⊂ R be an ideal such that R/I is finitely generated as an A-module. Let M be a finitely generated R-module. Then there exists a non-zero g ∈ A such that the local cohomology modules H_I^r(M) ⊗_A A_g are free over A_g and, for any ring map A → L factoring through A_g, we have H_I^r(M) ⊗_A L ≅ H^r_{I⊗_A L}(M ⊗_A L) for all r.
1. Introduction
In his work on maps between local Picard groups, Kollár was led to investigate the
behavior of certain cohomological functors under base change [Kol]. The following
theorem directly answers a question he had posed:
Theorem 1.1. Let f : X → S be a morphism of Noetherian schemes, with S reduced. Suppose that Z ⊂ X is a closed subscheme finite over S, and let j denote the open embedding of its complement U. Then for any coherent sheaf F on U, the sheaves f∗(R^r j∗F) are generically free and commute with base change for all r ≥ 1.
Our purpose in this note is to prove this general statement. Kollár himself had
proved a special case of this result in a more restricted setting [Kol, Thm 78].
We pause to say precisely what is meant by generically free and commutes with
base change. Suppose H is a functor which, for every morphism of schemes X → S
and every quasi-coherent sheaf F on X, produces a quasi-coherent sheaf H(F ) on S.
We say H(F ) is generically free if there exists a dense open set S 0 of S over which
Date: July 4, 2016.
Thanks to János Kollár for encouraging me to write this paper, for his insistence that I establish
the more general version of the theorem, and for sharing his insight as to why my arguments should
go through more generally, especially the idea to reduce to the affine case in Section 2. I’d also
like to thank Mel Hochster for listening to my arguments, Karl Schwede for reading a draft and
suggesting the reference [DGI], and Barghav Bhatt for his interest. Financial support partially from
NSF DMS-1501625.
the OS-module H(F) is free. If in addition, for every change of base p : T → S^0, the natural map
p∗ H(F ) → H(p∗X F )
of quasi-coherent sheaves on T is an isomorphism (where pX is the induced morphism
X ×S T → X), then we say that H(F ) is generically free and commutes with base
change. See [Kol, §72].
Remark 1.2. We do not claim the r = 0 case of Theorem 1.1; in fact, it is false.
For a counterexample, consider the ring homomorphism splitting Z ֒→ Z × Q ։ Z.
The corresponding morphism of Noetherian schemes
Z = Spec(Z) ֒→ X = Spec(Z × Q) → S = Spec Z
satisfies the hypothesis of Theorem 1.1. The open set U = X \ Z is the component
Spec Q of X. The coherent sheaf determined by the module Q on U is not generically
free over Z, since there is no open affine subset Spec Z[1/n] over which Q is a free
module. [In this case, the map j is affine, so the higher direct image sheaves Rp j∗ F
all vanish for p > 0.]
On the other hand, if f is a map of finite type, then the r = 0 case of Theorem 1.1
can be deduced from Grothendieck's Lemma on Generic Freeness; see Lemma 4.1.
For the commutative algebraists, we record the following version of the main result,
which is essentially just the statement in the affine case:
Corollary 1.3. Let A be a reduced Noetherian ring. Let R be a Noetherian A-algebra with an ideal I ⊂ R such that the induced map A → R/I is finite. Then for any Noetherian R-module M, the local cohomology modules H_I^i(M) are generically free and commute with base change over A for all i ≥ 0. Explicitly, this means that there exists an element g not in any minimal prime of A such that the modules H_I^i(M) ⊗_A A_g are free over A_g, and that for any algebra L over A_g, the natural map
H_I^i(M) ⊗_A L → H_I^i(M ⊗_A L)
is an isomorphism.
Some version of Theorem 1.1 may follow from known statements in the literature,
but looking through works of Grothendieck ([RD], [LC], [SGA2]) and [Conrad], I am
not able to find it; nor presumably could Kollár. After this paper was written, I did
find a related statement due to Hochster and Roberts [HR, Thm 3.4] in a special case,
not quite strong enough to directly answer Kollár’s original question; furthermore,
my proof is different and possibly of independent interest. In any case, there may be
value in the self-contained proof here, which uses a relative form of Matlis duality
proved here using only basic results about local cohomology well-known to most
commutative algebraists.
2. Restatement of the Problem
In this section, we show that Theorem 1.1 reduces to the following special statement:
Theorem 2.1. Let A be a Noetherian domain. Let R = A[[x1 , . . . , xn ]] be a power
series ring over A, and M a finitely generated R-module. Denote by I the ideal
(x1 , . . . , xn ) ⊂ R. Then the local cohomology modules
HIi (M)
are generically free over A and commute with base change for all i.
For basic definitions and properties of local cohomology modules, we refer to [LC].
For the remainder of this section, we show how Theorem 1.1 and Corollary 1.3
follow from Theorem 2.1.
First, Theorem 1.1 is local on the base. Because the scheme S is reduced, it is the
finite union of its irreducible components, each of which is reduced and irreducible,
so it suffices to prove the result on each of them. Thus we can immediately reduce
to the case where S = Spec A, for some Noetherian domain A.
We now reduce to the case where X is affine as well. The coherent sheaf F on
the open set U extends to a coherent sheaf on X, which we also denote by F . To
simplify notation, let us denote the sheaf Rr j∗ F by H, which we observe vanishes
outside the closed set Z. Each section is annihilated by a power of the ideal IZ of
Z, so that although the sheaf of abelian groups H on Z is not an OZ -module, it
does have the structure of a module over the sheaf of OS -algebras limt OI Xt , which we
←− Z
bZ
\
denote OX,Z ; put differently, H can be viewed as sheaf on the formal scheme X
0 i
[
[
over S. Observe that i∗ O
X,Z |X 0 = OX,Z , where X ֒→ X is the inclusion of any
open set X 0 ⊂ X containing the generic points of all the components of Z. This
means that H is generically free as an OS -module if and only if H|X 0 is, and there
is no loss of generality in replacing X by any such open set X0 . In particular, we
can choose such X 0 to be affine, thus reducing the proof of Theorem 1.1 to the case
where both X and S are affine.
We can now assume that X → S is the affine map of affine schemes corresponding
to a ring homomorphism A → T . In this case the closed set Z is defined by some
ideal I of T . Because Z is finite over S = Spec A, the composition A → T → T /I is a
finite integral extension of A. The coherent sheaf F on U extends to a coherent sheaf
F on X, which corresponds to a finitely generated T -module M. Since X = Spec T
is affine, we have natural identifications for r ≥ 1
Rr j∗ F = H r (X \ Z, F ) = HIr+1 (M)
of modules over T [LC, Cor 1.9]. Thus we have reduced Theorem 1.1 to showing that
if T is a Noetherian ring over a Noetherian domain A and I is any ideal such that
T /I is finitely generated as an A-module, then for any finitely generated T -module
M, the modules HIr+1(M) are generically free and commute with base change over
A for r ≥ 1. In fact, we will be able to show this for all indices r ≥ −1.
To get to the power series case, we first observe that for all i, every element of H_I^i(M) is annihilated by some power of I. This means that H_I^i(M) has the structure of a module over the I-adic completion T̂^I. There is no loss of generality in replacing T and M by their I-adic completions T̂^I and M̂^I: the module H_I^i(M) is canonically identified with H^i_{I T̂^I}(M̂^I). So without loss of generality, we assume that T is I-adically complete.
Now, Lemma 2.2 below guarantees that T is a module-finite algebra over a power
series ring A[[x1 , . . . , xn ]]. So the finitely generated T -module M is also a finitely
generated module over A[[x1 , . . . , xn ]], and the computation of local cohomology is
the same viewed over either ring. This means that to prove Theorem 1.1, it would
suffice to prove Theorem 2.1. It only remains to prove Lemma 2.2.
Lemma 2.2. Let A be a Noetherian ring. Let T be a Noetherian A-algebra containing an ideal I such that the composition of natural maps A → T ։ T /I is finite.
Then there is a natural ring homomorphism from a power series ring
$$A[[x_1, \ldots, x_n]] \;\to\; \widehat{T}^I := \varprojlim_t T/I^t$$
which is also finite.
Proof of Lemma. Fix generators $y_1, \ldots, y_n$ for the ideal I of T. Consider the A-algebra homomorphism
$$A[x_1, \ldots, x_n] \to T, \qquad x_i \mapsto y_i.$$
We will show that this map induces a ring homomorphism
A[[x1 , . . . , xn ]] → T̂ I
which is finite. First note that for each natural number t, there is a naturally induced
ring homomorphism
(1) $\qquad \dfrac{A[x_1, \ldots, x_n]}{(x_1, \ldots, x_n)^t} \;\to\; \dfrac{T}{I^t}$
sending the class xi to the class yi .
Claim: For every t, the map (1) is finite. Indeed, if t1 , . . . , td are elements of T
whose classes modulo I are A-module generators for T /I, then the classes of t1 , . . . , td
modulo I t are generators for T /I t as a module over A[x1 , . . . , xn ]/(x1 , . . . , xn )t .
The claim is straightforward to prove using induction on t and the exact sequence
0 → I t /I t+1 → T /I t+1 → T /I t → 0.
We leave these details to the reader.
Now to prove the lemma, we take the direct limit of the maps (1). Since at every
stage, the target is generated over the source by the classes of t1 , . . . , td , also in the
limit, T̂ I will be generated over A[[x1 , . . . , xn ]] by the images of t1 , . . . , td . So the
induced ring homomorphism $A[[x_1, \ldots, x_n]] \to \widehat{T}^I$ is finite.
Having reduced the proof of the main results discussed in the introduction to
Theorem 2.1, the rest of the paper focuses on the local cohomology statement in the
special case. Our proof of Theorem 2.1 uses an A-relative version of Matlis duality
to convert the problem to an analogous one for finitely generated modules over a
power series ring, where it will follow from the theorem on generic freeness. This
relative version of Matlis duality might be of interest to commutative algebraists in
other contexts, and holds in greater generality than what we develop here. To keep
the paper as straightforward and readable as possible, we have chosen to present it
only in the case we need to prove the main result. Some related duality is worked
out in [DGI].
3. A Relative Matlis Dual Functor
3.1. Matlis Duality. We first recall the classical Matlis duality in the complete
local (Noetherian) case.
Let (R, m) be a complete local ring, and let E be an injective hull of its residue
field R/m. The Matlis dual functor HomR (−, E) is an exact contravariant functor on
the category of R-modules. It takes each Noetherian R-module (i.e., one satisfying
the ascending chain condition) to an Artinian R-module (i.e., one satisfying the
descending chain condition) and vice-versa. Moreover, for any Artinian or Noetherian
R-module H, we have a natural isomorphism H → HomR (HomR (H, E), E). That is,
the Matlis dual functor defines an equivalence of categories between the category of
Noetherian and the category of Artinian R-modules. See [LC], [BH, Thm 3.2.13] or
[Hoch] for more on Matlis duality.
3.2. A relative version of Matlis duality. Let A be a domain. Let R be an A-algebra, with ideal I ⊂ R such that R/I is finitely generated as an A-module. Define
the relative Matlis dual functor to be the functor
$$\{R\text{-modules}\} \;\to\; \{R\text{-modules}\}, \qquad M \;\mapsto\; M^{\vee_A} := \varinjlim_t \operatorname{Hom}_A(M/I^tM,\, A).$$
We also denote $M^{\vee_A}$ by $\operatorname{Hom}^{cts}_A(M, A)$, since it is the submodule of $\operatorname{Hom}_A(M, A)$
consisting of maps continuous in the I-adic topology. That is, $\operatorname{Hom}^{cts}_A(M, A)$ is the
R-submodule of $\operatorname{Hom}_A(M, A)$ consisting of maps $\varphi : M \to A$ satisfying $\varphi(I^tM) = 0$
for some t.
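As a small orienting example (an addition of this write-up, not part of the original argument): take $R = A[[x]]$ and $I = (x)$. Then $R/I^t \cong A[x]/(x^t)$ is free over $A$ with basis $1, x, \ldots, x^{t-1}$, so
$$\operatorname{Hom}^{cts}_A(R, A) \;=\; \varinjlim_t \operatorname{Hom}_A\big(A[x]/(x^t),\, A\big) \;\cong\; \bigoplus_{j \ge 0} A \cdot \pi_j,$$
where $\pi_j$ is the functional reading off the coefficient of $x^j$. An $A$-linear map $\varphi : R \to A$ is continuous exactly when it kills $x^tR$ for some $t$, i.e. when it is a finite $A$-linear combination of the $\pi_j$.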
Proposition 3.1. Let R be a Noetherian algebra over a Noetherian domain A, with
ideal I ⊂ R such that R/I is finitely generated as an A-module.
(1) The functor $\operatorname{Hom}^{cts}_A(-, A)$ is naturally equivalent to the functor
$$M \mapsto \operatorname{Hom}_R(M, \operatorname{Hom}^{cts}_A(R, A)).$$
(2) The functor preserves exactness of sequences
0 → M1 → M2 → M3 → 0
of finitely generated R-modules, provided that the modules M3 /I n M3 are (locally) free A-modules for all n ≫ 0.
Remark 3.2. If A = R/I is a field, then the relative Matlis dual specializes to the
usual Matlis dual functor HomR (−, E), where E is the injective hull of the residue
field of R at the maximal ideal I (denoted henceforth $\mathfrak m$). Indeed, one easily checks
that $\operatorname{Hom}^{cts}_A(R, A)$ is an injective hull of $R/\mathfrak m$. To wit, the R-module homomorphism
$$R/\mathfrak m \;\to\; \operatorname{Hom}^{cts}_A(R, A), \qquad r \bmod \mathfrak m \;\mapsto\; \big(R \to A,\ \ s \mapsto rs \bmod \mathfrak m\big)$$
is a maximal essential extension of $R/\mathfrak m$.
Proof of Proposition. Statement (1) follows from the adjointness of tensor and Hom,
which is easily observed to restrict to the corresponding statement for modules of
continuous maps.
For (2), we need to show the sequence remains exact after applying the relative
Matlis dual functor. The functor HomA (−, A) preserves left exactness: that is,
(2) $\qquad 0 \to \operatorname{Hom}_A(M_3, A) \to \operatorname{Hom}_A(M_2, A) \to \operatorname{Hom}_A(M_1, A)$
is exact. We want to show that, restricting to the submodules of continuous maps,
we also have exactness at the right. That is, we need the exactness of
(3) $\qquad 0 \to \operatorname{Hom}^{cts}_A(M_3, A) \to \operatorname{Hom}^{cts}_A(M_2, A) \to \operatorname{Hom}^{cts}_A(M_1, A) \to 0.$
The exactness of (3) at all spots except the right is easy to verify using the description
of a continuous map as one annihilated by a power of I.
To check exactness of (3) at the right, we use the Artin-Rees Lemma [AM, 10.10].
Take $\varphi \in \operatorname{Hom}^{cts}_A(M_1, A)$. By definition of continuity, $\varphi$ is annihilated by $I^n$ for
sufficiently large n. By the Artin-Rees Lemma, there exists t such that for all $n \ge t$,
we have $I^{n+t}M_2 \cap M_1 \subset I^nM_1$. This means we have a surjection
$$M_1/(I^{n+t}M_2 \cap M_1) \twoheadrightarrow M_1/I^nM_1.$$
Therefore the composition
$$M_1/(I^{n+t}M_2 \cap M_1) \twoheadrightarrow M_1/I^nM_1 \to A$$
gives a lifting of $\varphi$ to an element $\varphi'$ in $\operatorname{Hom}_A(M_1/(I^{n+t}M_2 \cap M_1),\, A)$.
Now note that for n ≫ 0, we have exact sequences
0 → M1 /M1 ∩ I n+t M2 → M2 /I n+t M2 → M3 /I n+t M3 → 0,
which are split over A by our assumption that M3 /I n+t M3 is projective. Thus
(4) 0 → HomA (M3 /I n+t M3 , A) → HomA (M2 /I n+t M2 , A) → HomA (M1 /M1 ∩ I n+t M2 , A) → 0
is also split exact. This means we can pull $\varphi' \in \operatorname{Hom}_A(M_1/(I^{n+t}M_2 \cap M_1), A)$ back to
some element $\tilde\varphi$ in $\operatorname{Hom}_A(M_2/I^{n+t}M_2, A)$. So our original continuous map $\varphi : M_1 \to A$
is the restriction of some map $\tilde\varphi : M_2 \to A$ which satisfies $\tilde\varphi(I^{n+t}M_2) = 0$. This exactly
says the continuous map $\varphi$ on $M_1$ extends to a continuous map $\tilde\varphi$ of $M_2$. That is,
the sequence (3) is exact.
Remark 3.3. If M3 is a Noetherian module over a Noetherian algebra R over the
Noetherian domain A, then the assumption that M3 /I n M3 is locally free for all n
holds generically on A—that is, after inverting a single element of A. See Lemma
4.2.
4. Generic Freeness
We briefly review Grothendieck’s idea of generic freeness, and use it to prove that
the relative Matlis dual of a Noetherian R-module is generically free over A (under
suitable Noetherian hypothesis on R and A).
Let M be a module over a commutative domain A. We say that M is generically
free over A if there exists a non-zero g ∈ A, such that M ⊗A Ag is a free Ag -module,
where Ag denotes the localization of A at the element g. Likewise, a collection M of
A-modules is simultaneously generically free over A if there exists a non-zero g ∈ A,
such that $M \otimes_A A_g$ is free for all modules $M \in \mathcal M$. Note that any finite collection
of generically free modules is always simultaneously generically free, since we can
take g to be the product of the $g_i$ that work for each of the $M_i$.
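For orientation, a small non-example (added here, not in the original text): the $\mathbb Z$-module $\mathbb Q$ is not generically free. For any non-zero $g \in \mathbb Z$, the module $\mathbb Q$ is divisible by every prime $p$ not dividing $g$, whereas a non-zero free $\mathbb Z_g$-module is not divisible by such a $p$ (and the zero module is not $\mathbb Q$). This is the sort of behavior ruled out by the finiteness hypotheses in the lemmas below.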
Of course, finitely generated A-modules are always generically free. Grothendieck’s
famous Lemma on Generic Freeness ensures that many other modules are as well:
Lemma 4.1. [EGA, 6.9.2] Let A be a Noetherian domain. Let M be any finitely
generated module over a finitely generated A-algebra T . Then M is generically free
over A.
We need a version of Generic Freeness for certain infinite families of modules over
more general A-algebras:
Lemma 4.2. Let A be any domain. Let T be any Noetherian A-algebra, and I ⊂ T
any ideal such that T /I is finite over A. Then for any Noetherian T -module M, the
family of modules
{M/I n M | n ≥ 1}
is simultaneously generically free over A. That is, after inverting a single element of
A, the modules M/I n M for all n ≥ 1 become free over A.
Remark 4.3. We will make use of Lemma 4.2 in the case where T = A[[x1 , . . . , xn ]].
Proof. If M is finitely generated over T , then the associated graded module
grI M = M/IM ⊕ IM/I 2 M ⊕ I 2 M/I 3 M ⊕ . . .
is finitely generated over the associated graded ring grI T = T /I ⊕ I/I 2 ⊕ I 2 /I 3 ⊕ . . . ,
which is a homomorphic image of a polynomial ring over T /I. Hence grI T is a finitely
generated A-algebra. Applying the Lemma of generic freeness to the grI T -module
$\operatorname{gr}_I M$, we see that after inverting a single non-zero element g of A, the module
grI M becomes A-free. Since grI M is graded over grI T and A acts in degree zero,
clearly its graded pieces are also free after tensoring with Ag . We can thus replace A
by Ag for suitable g, and assume that the I n M/I n+1 M are Ag -free for all n ≥ 0.
Now consider the short exact sequences
(5)
0 → I n M/I n+1 M → M/I n+1 M → M/I n M → 0,
for each n ≥ 1. We already know that M/I 1 M and I n M/I n+1 M for all n ≥ 1 are
free (over Ag ), so by induction, we conclude that the sequences (5) are all split over
Ag for every n. In particular, the modules M/I n M are also free over Ag for all n ≥ 1.
The lemma is proved.
Proposition 4.4. Let A be a Noetherian domain. Let R be any Noetherian A-algebra with ideal I ⊂ R such that R/I is a finitely generated A-module. Then for
any Noetherian R-module M, the relative Matlis dual
$\operatorname{Hom}^{cts}_A(M, A)$
is a generically free A-module. Furthermore, if $g \in A$ is a non-zero element such
that $A_g \otimes_A \operatorname{Hom}^{cts}_A(M, A)$ is free over $A_g$, then for any base change $A \to L$ factoring
through $A_g$, the natural map
$$\operatorname{Hom}^{cts}_A(M, A) \otimes_A L \;\to\; \operatorname{Hom}^{cts}_L(M \otimes_A L,\, L)$$
is an isomorphism of R ⊗A L-modules, functorial in M.
Proof. We can invert one element of A so that each M/I t M is free over A; replace
A by this localization. We now claim that the A-module
$$\operatorname{Hom}^{cts}_A(M, A) \;=\; \varinjlim_t \operatorname{Hom}_A\!\Big(\frac{M}{I^tM},\, A\Big)$$
is free. Indeed, since each $M/I^tM$ is free and has finite rank over A, its A-dual
$\operatorname{Hom}_A(M/I^tM, A)$ is also free of finite rank. The direct limit is also A-free because the
maps in the limit system are all split over A and injective. Indeed, if some finite
A-linear combination of $f_i \in \operatorname{Hom}^{cts}_A(M, A)$ is zero, then that same combination is
zero considered as elements of the free submodule $\operatorname{Hom}_A(M/I^tM, A)$ of homomorphisms
in $\operatorname{Hom}^{cts}_A(M, A)$ killed by a large power of I. It follows that $\operatorname{Hom}^{cts}_A(M, A)$ is free
over A, as desired.
The result on base change follows as well, since tensor commutes with direct limits
and with dualizing a finitely generated free module.
Remark 4.5. We can interpret Proposition 4.4 as saying that generically on A,
the relative Matlis dual functor (applied to Noetherian R-modules) is exact and
commutes with base change.
5. Relative Local Duality and the Proof of the Main Theorem
The proof of Theorem 2.1, and therefore of our main result answering Kollár’s question, follows from a relative version of Local Duality:
Theorem 5.1. Let R be a power series ring A[[x1 , . . . , xn ]] over a Noetherian domain A, and let M be a finitely generated R-module. Then, after replacing A by its
localization at one element, there is a functorial isomorphism for all i
$$H^i_I(M) \;\cong\; [\operatorname{Ext}^{n-i}_R(M, R)]^{\vee_A}.$$
To prove Theorem 5.1, we need the following lemma.
Lemma 5.2. Let R be a power series ring A[[x1 , . . . , xn ]] over a ring A. There is a
natural R-module isomorphism $\operatorname{Hom}^{cts}_A(R, A) \cong H^n_I(R)$, where $I = (x_1, \ldots, x_n)$. In
particular, the relative Matlis dual functor can also be expressed as
$$M \mapsto \operatorname{Hom}_R(M, H^n_I(R)).$$
Proof. We recall that $H^i_I(R)$ is the i-th cohomology of the extended Čech complex
$K^\bullet$ on the elements $x_1, \ldots, x_n$. This is the complex
$$0 \to R \xrightarrow{\ \delta_1\ } R_{x_1} \oplus R_{x_2} \oplus \cdots \oplus R_{x_n} \to \bigoplus_{i<j} R_{x_ix_j} \to \cdots \xrightarrow{\ \delta_n\ } R_{x_1x_2\cdots x_n} \to 0,$$
where the maps are (sums of) suitably signed localization maps. In particular, $H^n_I(R)$
is the cokernel of $\delta_n$, which can be checked to be the free A-module on (the classes
of) the monomials $x_1^{a_1} \cdots x_n^{a_n}$ with all $a_i < 0$.¹

¹See page 226 of [Hart], although I have included one extra map $\delta_1 : R \to \bigoplus_i R_{x_i}$ sending
$f \mapsto (\frac{f}{1}, \ldots, \frac{f}{1})$ in order to make the complex exact on the left, and my ring is a power series ring
over A instead of a polynomial ring over a field. This is also discussed in [LC], page 22.
Now define an explicit R-module isomorphism $\Phi$ from $H^n_I(R)$ to $\operatorname{Hom}^{cts}_A(R, A)$ by
sending the (class of the) monomial $x^\alpha$ to the map $\varphi_\alpha \in \operatorname{Hom}^{cts}_A(R, A)$:
$$\varphi_\alpha : R \to A, \qquad x_1^{\beta_1} \cdots x_n^{\beta_n} \;\mapsto\; \begin{cases} x_1^{\beta_1+\alpha_1+1} \cdots x_n^{\beta_n+\alpha_n+1} \bmod I & \text{if } \alpha_i + \beta_i \ge -1 \text{ for all } i, \\ 0 & \text{otherwise.} \end{cases}$$
Since $\{x^\beta \mid \beta \in \mathbb N^n\}$ is an A-basis for R, the map $\varphi_\alpha$ is a well-defined A-module map
from R to A, and since it sends all but finitely many $x^\beta$ to zero, it is I-adically continuous. Thus the map $\Phi : H^n_I(R) \to \operatorname{Hom}^{cts}_A(R, A)$ is an A-module homomorphism; in
fact, $\Phi$ is an A-module isomorphism, since it defines a bijection between the A-basis
$\{x^\alpha \mid \alpha_i < 0\}$ for $H^n_I(R)$ and the A-basis $\{\pi_\beta \mid \beta_i \ge 0\}$ for $\operatorname{Hom}^{cts}_A(R, A)$ (meaning the dual basis
on the free basis $x^\beta$ for R over A), matching the indices $\alpha_i \leftrightarrow \beta_i = -\alpha_i - 1$. It is easy
to check that $\Phi$ is also R-linear, by thinking of it as “premultiplication by $x^{\alpha+1}$” on
the cokernel of $\delta_n$. Thus $\Phi$ identifies the R-modules $H^n_I(R)$ and $\operatorname{Hom}^{cts}_A(R, A)$.
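For orientation (an added example, not in the original text), consider the case $n = 1$, so $R = A[[x]]$ and $I = (x)$. The extended Čech complex is $0 \to R \to R_x \to 0$, so $H^1_I(R) = R_x/R$ is the free $A$-module on the classes of $x^{-1}, x^{-2}, x^{-3}, \ldots$. The map $\Phi$ sends the class of $x^{-a-1}$ (for $a \ge 0$) to the functional $\pi_a$ reading off the coefficient of $x^a$, in accordance with the index rule $\beta = -\alpha - 1$; multiplication by $x$ on $R_x/R$ corresponds to the shift $\pi_a \mapsto \pi_{a-1}$ (with $\pi_{-1} = 0$), which is exactly how $x$ acts on $\operatorname{Hom}^{cts}_A(R, A)$.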
Proof of Theorem 5.1. We proceed by proving that both modules are generically isomorphic to a third, namely $\operatorname{Tor}^R_{n-i}(M, H^n_I(R))$.
First, recall how to compute $H^i_I(M)$. Let $K^\bullet$ be the extended Čech complex on
the elements $x_1, \ldots, x_n$:
$$0 \to R \xrightarrow{\ \delta_1\ } R_{x_1} \oplus R_{x_2} \oplus \cdots \oplus R_{x_n} \to \bigoplus_{i<j} R_{x_ix_j} \to \cdots \xrightarrow{\ \delta_n\ } R_{x_1x_2\cdots x_n} \to 0.$$
This is a complex of flat R-modules, all free over A, exact at every spot except the
right end. Thus it is a flat R-module resolution of the local cohomology module
$H^n_I(R)$. The local cohomology module $H^i_I(M)$ is the cohomology of this complex
after tensoring over R with M, that is,
$$H^i_I(M) = \operatorname{Tor}^R_{n-i}(M, H^n_I(R)).$$
On the other hand, let us compute the relative Matlis dual of $\operatorname{Ext}^{n-i}_R(M, R)$. Let $P_\bullet$
be a free resolution of M over R. The module $\operatorname{Ext}^\bullet_R(M, R)$ is the cohomology of the
complex $\operatorname{Hom}_R(P_\bullet, R)$. We would like to say that the computation of the cohomology
of this complex commutes with the relative Matlis dual functor, but the best we can
say is that this is true generically on A. To see this, we will apply Lemma 4.2 to the
following finite set of R-modules:
• For $i = 0, \ldots, n$, the image $D_i$ of the i-th map of the complex $\operatorname{Hom}_R(P_\bullet, R)$;
• For $i = 0, \ldots, n$, the cohomology $\operatorname{Ext}^{n-i}_R(M, R)$ of the same complex.
Lemma 4.2 guarantees that the modules
$$D_i/I^tD_i \qquad \text{and} \qquad \operatorname{Ext}^{n-i}_R(M, R)/I^t\operatorname{Ext}^{n-i}_R(M, R)$$
are all simultaneously generically free over A for all t ≥ 1. This allows us to break up
the complex Ag ⊗A HomR (P• , R) into many short exact sequences, split over Ag , which
satisfy the hypothesis of Proposition 3.1(2) (using Ag in place of A and Ag ⊗A R in
place of R). It follows that the computation of cohomology of HomR (P• , R) commutes
with the relative Matlis dual functor (generically on A).
Thus, after replacing A by a localization at one element, $[\operatorname{Ext}^{n-i}_R(M, R)]^{\vee_A}$ is the
cohomology of the complex
$$\operatorname{Hom}_R(\operatorname{Hom}_R(P_\bullet, R),\, H^n_I(R)).$$
In general, for any finitely generated projective module P and any module H (over
any Noetherian ring R), the natural map
P ⊗ H → Hom(Hom(P, R), H)
sending a simple tensor x ⊗ h to the map which sends f ∈ Hom(P, R) to f (x)h, is
an isomorphism, functorially in P and H. Thus we have a natural isomorphism of
complexes
$$P_\bullet \otimes H^n_I(R) \;\cong\; \operatorname{Hom}_R(\operatorname{Hom}_R(P_\bullet, R),\, H^n_I(R)),$$
and so $[\operatorname{Ext}^{n-i}_R(M, R)]^{\vee_A}$ is identified with $\operatorname{Tor}^R_{n-i}(M, H^n_I(R))$, which as we saw is
identified with $H^i_I(M)$.
Since all identifications are functorial, we have proven the relative local duality
$H^i_I(M) \cong [\operatorname{Ext}^{n-i}_R(M, R)]^{\vee_A}$, generically on A.
We can finally finish the proof of Theorem 1.1, and hence the main result:
Proof of Theorem 2.1. Let R be a power series ring over a Noetherian domain A,
and let M be any Noetherian R-module. We need to show that the local cohomology
modules HIi (M) are generically free and commute with base change over A.
In light of Proposition 4.4, we can accomplish this by showing that $H^i_I(M)$ is the relative Matlis dual of a Noetherian R-module, generically on A. But this is guaranteed
by the relative local duality theorem, Theorem 5.1, which gives
$$H^i_I(M) \;\cong\; [\operatorname{Ext}^{n-i}_R(M, R)]^{\vee_A},$$
generically on A.
Remark 5.3. One could obviously develop the theory of relative Matlis duality,
especially Theorem 5.1, further; I wrote down only the simplest possible case and
the simplest possible statements needed to answer Kollár’s question as directly as
possible.
References
[AM] M. F. Atiyah and I. G. MacDonald, Introduction to Commutative Algebra, Addison Wesley
Series in Mathematics, Addison Wesley, London, (1969).
[BH] Winfried Bruns and Jürgen Herzog, Cohen-Macaulay Rings, Cambridge Studies in Advanced
Mathematics, vol. 39, Cambridge University Press, (1993).
[Conrad] Brian Conrad, Grothendieck Duality and Base Change, Lecture Notes in Mathematics
1750, Springer (2001).
[DGI] W.G. Dwyer, J.P.C. Greenlees and S. Iyengar, Duality in algebra and topology, Advances in
Mathematics, Volume 200, Issue 2, 1 March 2006, pages 357–402.
[Eisen] David Eisenbud, Commutative Algebra with a view towards Algebraic Geometry, Graduate
Texts in Mathematics 150, Springer (1995).
[EGA] Alexander Grothendieck and Jean Dieudonné, Éléments de Geométrie Algébrique Chapter
IV, part 2, Inst. Hautes Études Sci. Pub. Math. 24 (1965).
[SGA2] Alexander Grothendieck and Michele Raynaud, Cohomologie locale des faisceaux cohérents
et théorèmes de Lefschetz locaux et globaux (SGA 2), Documents Mathématiques (Paris) 4,
Paris: Société Mathématique de France, (2005) [1968], Laszlo, Yves, ed., arXiv:math/0511279,
ISBN 978-2-85629-169-6, MR 2171939
[LC] Robin Hartshorne, Local cohomology. A seminar given by A. Grothendieck, Harvard University, Fall, 1961, Lecture notes in mathematics 41, Berlin, New York: Springer-Verlag, (1967).
MR0224620
[RD] Robin Hartshorne, Residues and Duality: Lecture Notes of a Seminar on the Work of A.
Grothendieck, Given at Harvard 1963 /64, Springer Verlag Lecture Notes in Mathematics, vol
20 (1966).
[Hart] Robin Hartshorne, Algebraic Geometry Graduate Texts in Mathematics 52 Springer-Verlag,
(2006).
[Hoch] Mel Hochster, Lecture notes on Local Cohomology, unpublished, from his University of
Michigan website http://www.math.lsa.umich.edu/ hochster/615W11/loc.pdf
[HR] Mel Hochster and Joel Roberts, The Purity of Frobenius and Local Cohomology, Advances
in Mathematics 21 117–172 (1976).
[Kol] János Kollár, Maps between local Picard groups, arXiv:1407.5108v2 (preprint) 2014.
| 0 |
Symmetric Tensor Completion from Multilinear Entries and
Learning Product Mixtures over the Hypercube
arXiv:1506.03137v3 [cs.DS] 23 Nov 2015
Tselil Schramm∗
Ben Weitz†
November 25, 2015
Abstract
We give an algorithm for completing an order-m symmetric low-rank tensor from its multilinear entries in time roughly proportional to the number of tensor entries. We apply our
tensor completion algorithm to the problem of learning mixtures of product distributions over
the hypercube, obtaining new algorithmic results. If the centers of the product distribution are
linearly independent, then we recover distributions with as many as Ω(n) centers in polynomial
time and sample complexity. In the general case, we recover distributions with as many as
Ω̃(n) centers in quasi-polynomial time, answering an open problem of Feldman et al. (SIAM J.
Comp.) for the special case of distributions with incoherent bias vectors.
Our main algorithmic tool is the iterated application of a low-rank matrix completion algorithm for matrices with adversarially missing entries.
∗
UC Berkeley, [email protected]. Supported by an NSF Graduate Research Fellowship (NSF award no
1106400).
†
UC Berkeley, [email protected]. Supported by an NSF Graduate Research Fellowship (NSF award no
DGE 1106400).
1 Introduction
Suppose we are given sample access to a distribution over the hypercube {±1}n , where each sample
x is generated in the following manner: there are k product distributions D1 , . . . , Dk over {±1}n
(the k “centers” of the distribution), and x is drawn from Di with probability pi . This distribution
is called a product mixture over the hypercube.
Given such a distribution, our goal is to recover from samples the parameters of the individual
product distributions. That is, we would like to estimate the probability pi of drawing from each
product distribution, and furthermore we would like to estimate the parameters of the product
distribution itself. This problem has been studied extensively and approached with a variety of
strategies (see e.g. [FM99, CR08, FOS08]).
A canonical approach to problems of this type is to empirically estimate the moments of the
distribution, from which it may be possible to calculate the distribution parameters using linear-algebraic tools (see e.g. [AM05, MR06, FOS08, AGH+ 14], and many more). For product distributions over the hypercube, this technique runs into the problem that the square moments are always
1, and so they provide no information.
The seminal work of Feldman, O’Donnell and Servedio [FOS08] introduces an approach to
this problem which compensates for the missing higher-order moment information using matrix
completion. Via a restricted brute-force search, Feldman et al. check all possible square moments,
resulting in an algorithm that is triply-exponential in the number of distribution centers. Continuing
this line of work, by giving an alternative to the brute-force search, Jain and Oh [JO13] recently
obtained a polynomial-time algorithm for a restricted class of product mixtures. In this paper we
extend these ideas, giving a polynomial-time algorithm for a wider class of product mixtures, and a
quasi-polynomial time algorithm for an even broader class of product mixtures (including product
mixtures with centers which are not linearly independent).
Our main tool is a matrix-completion-based algorithm for completing tensors of order m from
their multilinear moments in time Õ(nm+1 ), which we believe may be of independent interest.
There has been ample work in the area of noisy tensor decomposition (and completion), see e.g.
[JO14, BKS15, TS15, BM15]. However, these works usually assume that the tensor is obscured
by random noise, while in our setting the “noise” is the absence of all non-multilinear entries. An
exception to this is the work of [BKS15], where to obtain a quasi-polynomial algorithm it suffices
to have the injective tensor norm of the noise be bounded via a Sum-of-Squares proof.1 To our
knowledge, our algorithm is the only nO(m) -time algorithm that solves the problem of completing
a symmetric tensor when only multilinear entries are known.
1.1 Our Results
Our main result is an algorithm for learning a large subclass of product mixtures with up to even
Ω(n) centers in polynomial (or quasi-polynomial) time. The subclass of distributions on which
our algorithm succeeds is described by characteristics of the subspace spanned by the bias vectors.
Specifically, the rank and incoherence of the span of the bias vectors cannot simultaneously be too
large. Intuitively, the incoherence of a subspace measures how close the subspace is to a coordinate
subspace of Rn . We give a formal definition of incoherence later, in Definition 2.3.
More formally, we prove the following theorem:
¹It may be possible that this condition is met for some symmetric tensors when only multilinear entries are known,
but we do not know an SOS proof of this fact.
Theorem 1.1. Let D be a mixture over k product distributions on {±1}n , with bias vectors
v1 , . . . , vk ∈ Rn and mixing weights w1 , . . . , wk > 0. Let span{vi } have dimension r and incoherence µ. Suppose we are given as input the moments of D.
1. If v1 , . . . , vk are linearly independent, then as long as 4 · µ · r < n, there is a poly(n, k)
algorithm that recovers the parameters of D.
2. Otherwise, if $|\langle v_i, v_j\rangle| < \|v_i\| \cdot \|v_j\| \cdot (1 - \eta)$ for every $i \neq j$ and $\eta > 0$, then as long as
$4 \cdot \mu \cdot r \cdot \log k / \log\frac{1}{1-\eta} < n$, there is an $n^{O(\log k / \log\frac{1}{1-\eta})}$ time algorithm that recovers the
parameters of D.
Remark 1.2. In the case that v1 , . . . , vk are not linearly independent, the runtime depends on the
separation between the vectors. We remark however that if we have some vi = vj for i 6= j, then
the distribution is equivalently representable with fewer centers by taking the center vi with mixing
weight wi + wj . If there is some vi = −vj , then our algorithm can be modified to work in that case
as well, again by considering vi and vj as one center–we detail this in Section 4.
In the main body of the paper we assume access to exact moments; in Appendix B we prove
Theorem B.2, a version of Theorem 1.1 which accounts for sampling error.
The foundation of our algorithm for learning product mixtures is an algorithm for completing
a low-rank incoherent tensor of arbitrary order given access only to its multilinear entries:
P
Theorem 1.3. Let T be a symmetric tensor of order m, so that T = i∈[k] wi · vi⊗m for some
vectors v1 , . . . , vk ∈ Rn and scalars w1 , . . . , wk 6= 0. Let span{vi } have incoherence µ and dimension
r. Given perfect access to all multilinear entries of T , if 4·µ ·r ·m/n < 1, then there is an algorithm
which returns the full tensor T in time Õ(nm+1 ).
1.2 Prior Work
We now discuss in more detail prior work on learning product mixtures over the hypercube, and
contextualize our work in terms of previous results.
The pioneering papers on this question gave algorithms for a very restricted setting: the works
of [FM99] and [C99, CGG01] introduced the problem and gave algorithms for learning a mixture
of exactly two product distributions over the hypercube.
The first general result is the work of Feldman, O’Donnell and Servedio, who give an algorithm
for learning a mixture over k product distributions in n dimensions in time $n^{O(k^3)}$ with sample
complexity $n^{O(k)}$. Their algorithm relies on brute-force search to enumerate all possible product
mixtures that are consistent with the observed second moments of the distribution. After this, they
use samples to select the hypothesis with the Maximum Likelihood. Their paper leaves as an open
question the more efficient learning of discrete mixtures of product distributions, with a smaller
exponential dependence (or even a quasipolynomial dependence) on the number of centers.2
More recently, Jain and Oh [JO13] extended this approach: rather than generate a large number
of hypotheses and pick one, they use a tensor power iteration method of [AGH+ 14] to find the right
decomposition of the second- and third-order moment tensors. To learn these moment tensors in
the first place, they use alternating minimization to complete the (block)-diagonal of the second
moments matrix, and they compute a least-squares estimation of the third-order moment tensor.
²We do not expect better than quasipolynomial dependence on the number of centers, as learning the parity
distribution on t bits is conjectured to require at least nΩ(t) time, and this distribution can be realized as a product
mixture over 2t−1 centers.
Learning Product Mixtures with k Centers over {±1}^n

  Reference                  | Runtime        | Samples        | Largest k       | Dep. Centers? | Incoherence?
  Feldman et al. [FOS08]     | n^{O(k^3)}     | n^{O(k)}       | n               | Allowed       | Not Required
  Jain & Oh [JO14]           | poly(n, k)     | poly(n, k)     | k ≤ O(n^{2/7})  | Not Allowed   | Required
  Our Results (lin. indep.)  | poly(n, k)     | poly(n, k)     | k ≤ O(n)        | Allowed       | Required
  Our Results (lin. dep.)    | n^{Õ(log k)}   | n^{Õ(log k)}   | k ≤ O(n)        | Allowed       | Required

Figure 1: Comparison of our work to previous results. We compare runtime, sample complexity, and
restrictions on the centers of the distribution: the maximum number of centers, whether linearly
dependent centers are allowed, and whether the centers are required to be incoherent. The two
subrows correspond to the cases of linearly independent and linearly dependent centers, for which
we guarantee different sample complexity and runtime.
Using these techniques, Jain and Oh were able to obtain a significant improvement for a restricted
class of product mixtures, obtaining a polynomial time algorithm for linearly independent mixtures
over at most k = O(n2/7 ) centers. In order to ensure the convergence of their matrix (and tensor)
completion subroutine, they introduce constraints on the span of the bias vectors of the distribution
(see Section 2.3 for a discussion of incoherence assumptions on product mixtures). Specifically,
letting r be the rank of the span, letting µ be the incoherence of the span, and letting n be the dimension
of the samples, they require that $\tilde\Omega(\mu^5 r^{7/2}) \le n$.³ Furthermore, in order to extract the bias vectors
from the moment information, they require that the bias vectors be linearly independent. When
these conditions are met by the product mixture, Jain and Oh learn the mixture in polynomial
time.
In this paper, we improve upon this result, and can handle as many as Ω(n) centers in some
parameter settings. Similarly to [JO13], we use as a subroutine an algorithm for completing low-rank matrices with adversarially missing entries. However, unlike [JO13], we use an algorithm with
more general guarantees, the algorithm of [HKZ11].4 These stronger guarantees allow us to devise
an algorithm for completing low-rank higher-order tensors from their multilinear entries, and this
algorithm allows us to obtain a polynomial time algorithm for a more general class of linearly
independent mixtures of product distributions than [JO13].
Furthermore, because of the more general nature of this matrix completion algorithm, we can
give a new algorithm for completing low-rank tensors of arbitrary order given access only to the
multilinear entries of the tensor. Leveraging our multilinear tensor completion algorithm, we can
reduce the case of linearly dependent bias vectors to the linearly independent case by going to higher-dimensional tensors. This allows us to give a quasipolynomial algorithm for the general case, in
which the centers may be linearly dependent. To our knowledge, Theorem 1.1 is the first quasipolynomial algorithm that learns product mixtures whose centers are not linearly independent.
Restrictions on Input Distributions. We detail our restrictions on the input distribution. In
the linearly independent case, if there are k bias vector and µ is the incoherence of their span,
and n is the dimension of the samples, then we learn a product mixture in time n3 so long as
³The conditions are actually more complicated, depending on the condition number of the second-moment matrix
of the distribution. For precise conditions, see [JO13].
⁴A previous version of this paper included an analysis of a matrix completion algorithm almost identical to that of
[HKZ11], and claimed to be the first adversarial matrix completion result of this generality. Thanks to the comments
of an anonymous reviewer, we were notified of our mistake.
4µr < n. Compare this to the restriction that Ω̃(r 7/2 µ5 ) < n, which is the restriction of Jain
and Oh–we are able to handle even a linear number of centers so long as the incoherence is not
too large, while Jain and Oh can handle at most O(n2/7 ) centers. If the k bias vectors are not
independent, but their span has rank r and if they have maximum pairwise inner product 1 − η
(when scaled to unit vectors), then we learn the product mixture in time $n^{O(\log k / \log\frac{1}{1-\eta})}$ so long
as $4\mu r \log k / \log\frac{1}{1-\eta} < n$ (we also require a quasipolynomial number of samples in this case).
While the quasipolynomial runtime for linearly dependent vectors may not seem particularly
glamorous, we stress that the runtime depends on the separation between the vectors. To illustrate
the additional power of our result, we note that a choice of random $v_1, \ldots, v_k$ in an r-dimensional
subspace meets this condition extremely well, as we have $\eta = 1 - \tilde O(1/\sqrt r)$ with high probability–
for, say, k = 2r, the algorithm of [JO13] would fail in this case, since v1 , . . . , vk are not linearly
independent, but our algorithm succeeds in time nO(1) .
This quasipolynomial time algorithm resolves an open problem of [FOS08], when restricted to
distributions whose bias vectors satisfy our condition on their rank and incoherence. We do not solve
the problem in full generality, for example our algorithm fails to work when the distribution can have
multiple decompositions into few centers. In such situations, the centers do not span an incoherent
subspace, and thus the completion algorithms we apply fail to work. In general, the completion
algorithms fail whenever the moment tensors admit many different low-rank decompositions (which
can happen even when the decomposition into centers is unique, for example parity on three bits).
In this case, the best algorithm we know of is the restricted brute force of Feldman, O’Donnell and
Servedio.
Sample Complexity. One note about sample complexity–in the linearly dependent case, we
require a quasipolynomial number of samples to learn our product mixture. That is, if there are k
product centers, we require nÕ(log k) samples, where the tilde hides a dependence on the separation
between the centers. In contrast, Feldman, O’Donnell, and Servedio require nO(k) samples. This
dependence on k in the sample complexity is not explicitly given in their paper, as for their algorithm
to be practical they consider only constant k.
Parameter Recovery Using Tensor Decomposition. The strategy of employing the spectral
decomposition of a tensor in order to learn the parameters of an algorithm is not new, and has
indeed been employed successfully in a number of settings. In addition to the papers already
mentioned which use this approach for learning product mixtures ([JO14] and in some sense [FOS08],
though the latter uses matrices rather than tensors), the works of [MR06, AHK12, HK13, AGHK14,
BCMV14], and many more also use this idea. In our paper, we extend this strategy to learn a more
general class of product distributions over the hypercube than could previously be tractably learned.
1.3 Organization
The remainder of our paper is organized as follows. In Section 2, we give definitions and background, then outline our approach to learning product mixtures over the hypercube, as well as
put forth a short discussion on what kinds of restrictions we place on the bias vectors of the distribution. In Section 3, we give an algorithm for completing symmetric tensors given access only
to their multilinear entries, using adversarial matrix completion as an algorithmic primitive. In
Section 4, we apply our tensor completion result to learn mixtures of product distributions over
the hypercube, assuming access to the precise second- and third-order moments of the distribution. Appendix A and Appendix B contain discussions of matrix completion and learning product
mixtures in the presence of sampling error, and Appendix C contains further details about the
algorithmic primitives used in learning product mixtures.
1.4 Notation
We use ei to denote the ith standard basis vector.
For a tensor T ∈ Rn×n×n , we use T (a, b, c) to denote the entry of the tensor indexed by
a, b, c ∈ [n], and we use T (i, ·, ·) to denote the ith slice of the tensor, or the subset of entries in
which the first coordinate is fixed to $i \in [n]$. For an order-m tensor $T \in \mathbb R^{n^m}$, we use T (X) to
represent the entry indexed by the string X ∈ [n]m , and we use T (Y, ·, ·) to denote the slice of T
indexed by the string $Y \in [n]^{m-2}$. For a vector $x \in \mathbb R^n$, we use the shorthand $x^{\otimes k}$ to denote the
k-tensor $x \otimes x \otimes \cdots \otimes x \in \mathbb R^{n \times \cdots \times n}$.
We use Ω ⊆ [m] × [n] for the set of observed entries of the hidden matrix M , and PΩ denotes
the projection onto those coordinates.
2 Preliminaries
In this section we present background necessary to prove our results, as well as provide a short
discussion on the meaning behind the restrictions we place on the distributions we can learn. We
start by defining our main problem.
2.1 Learning Product Mixtures over the Hypercube
A distribution D over {±1}n is called a product distribution if every bit in a sample x ∼ D is
independently chosen. Let D1 , . . . , Dk be a set of product distributions over {±1}n . Associate with
each Di a vector vi ∈ [−1, 1]n whose jth entry encodes the bias of the jth coordinate, that is
$$\Pr_{x \sim D_i}[x(j) = 1] = \frac{1 + v_i(j)}{2}.$$
Define the distribution D to be a convex combination of these product distributions, sampling
$x \sim D = \{x \sim D_i \text{ with probability } w_i\}$, where $w_i > 0$ and $\sum_{i \in [k]} w_i = 1$. The distributions
D1 , . . . , Dk are said to be the centers of D, the vectors v1 , . . . , vk are said to be the bias vectors,
and w1 , . . . , wk are said to be the mixing weights of the distribution.
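As a concrete illustration, the following Python sketch (ours, not from the paper; all names are placeholders) draws samples from such a product mixture.

    import numpy as np

    def sample_product_mixture(biases, weights, num_samples, seed=0):
        """biases: (k, n) array with entries in [-1, 1]; weights: length-k mixing weights.
        Returns num_samples draws from D, each in {-1, +1}^n."""
        rng = np.random.default_rng(seed)
        biases = np.asarray(biases, dtype=float)
        weights = np.asarray(weights, dtype=float)
        centers = rng.choice(len(weights), size=num_samples, p=weights)  # which D_i each sample uses
        probs_plus = (1.0 + biases[centers]) / 2.0                       # P[x(j) = +1] under that center
        return np.where(rng.random(probs_plus.shape) < probs_plus, 1, -1)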
Problem 2.1 (Learning a Product Mixture over the Hypercube). Given independent samples from
a distribution D which is a mixture over k centers with bias vectors v1 , . . . , vk ∈ [−1, 1]n and mixing
weights w1 , . . . , wk > 0, recover v1 , . . . , vk and w1 , . . . , wk .
This framework encodes many subproblems, including learning parities, a notorious problem in
learning theory; the best current algorithm requires time nΩ(k) , and the noisy version of this problem
is a standard cryptographic primitive [MOS04, Fel07, Reg09, Val15]. We do not expect to be able
to learn an arbitrary mixture over product distribution efficiently. We obtain a polynomial-time
algorithm when the bias vectors are linearly independent, and a quasi-polynomial time algorithm
in the general case, though we do require an incoherence assumption on the bias vectors (which
parities do not meet), see Definition 2.3.
In [FOS08], the authors give an nO(k)-time algorithm for the problem based on the following
idea. With great accuracy in polynomial time we may compute the pairwise moments of D,
$$M = \mathop{\mathbb E}_{x \sim D}[xx^T] = E_2 + \sum_{i \in [k]} w_i \cdot v_i v_i^T.$$
The matrix E2 is a diagonal matrix which corrects for the fact that Mjj = 1 always. If we were
able to learn $E_2$ and thus access $\sum_{i \in [k]} w_i v_i v_i^T$, the “augmented second moment matrix,” we may
hope to use spectral information to learn v1 , . . . , vk .
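To make the obstruction concrete, here is a short sketch (ours, not from the paper) of the moment-estimation step; the empirical diagonal is identically 1 and carries no information, while the off-diagonal (multilinear) entries approximate the augmented second moment matrix.

    import numpy as np

    def empirical_second_moments(samples):
        """samples: (N, n) array with entries in {-1, +1}.
        Returns the empirical matrix E[x x^T]; its diagonal is exactly 1."""
        samples = np.asarray(samples, dtype=float)
        return samples.T @ samples / samples.shape[0]

    # Usage sketch (with the sampler from the snippet above):
    # M = empirical_second_moments(sample_product_mixture(biases, weights, 100000))
    # The off-diagonal entries of M approximate sum_i w_i v_i v_i^T.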
The algorithm of [FOS08] performs a brute-force search to learn E2 , leading to a runtime
exponential in the rank. By making additional assumptions on the input D and computing higher-order moments as well, we avoid this brute-force search and give a polynomial-time algorithm for
product distributions with linearly independent centers: If the bias vectors are linearly independent,
a power iteration algorithm of [AGH+ 14] allows us to learn D given access to both the augmented
second- and third-order moments.5 Again, sampling the third-order moments only gives access
to $\mathbb E_{x \sim D}[x^{\otimes 3}] = E_3 + \sum_{i \in [k]} w_i \cdot v_i^{\otimes 3}$, where $E_3$ is a tensor which is nonzero only on entries of
multiplicity at least two. To learn E2 and E3 , Jain and Oh used alternating minimization and a
least-squares approximation. For our improvement, we develop a tensor completion algorithm based
on recursively applying the adversarial matrix completion algorithm of Hsu, Kakade and Zhang
[HKZ11]. In order to apply these completion algorithms, we require an incoherence assumption on
the bias vectors (which we define in the next section).

⁵There are actually several algorithms in this space; we use the tensor-power iteration of [AGH+ 14] specifically. There is a rich body of work on tensor decomposition methods, based on simultaneous diagonalization and similar techniques (see e.g. Jenrich’s algorithm [Har70] and [LCC07]).
In the general case, when the bias vectors are not linearly independent, we exploit the fact that
high-enough tensor powers of the bias vectors are independent, and we work with the Õ(log k)th
moments of D, applying our tensor completion to learn the full moment tensor, and then using
[AGH+ 14] to find the tensor powers of the bias vectors, from which we can easily recover the vectors
themselves. (the tilde hides a dependence on the separation between the bias vectors). Thus if the
distribution is assumed to come from bias vectors that are incoherent and separated, then we can
obtain a significant runtime improvement over [FOS08].
2.2 Matrix Completion and Incoherence
As discussed above, the matrix (and tensor) completion problem arises naturally in learning product
mixtures as a way to compute the augmented moment tensors.
Problem 2.2 (Matrix Completion). Given a set Ω ⊆ [m] × [n] of observed entries of a hidden
rank-r matrix M , the Matrix Completion Problem is to successfully recover the matrix M given
only PΩ (M ).
However, this problem is not always well-posed. For example, consider the input matrix $M = e_1e_1^T + e_ne_n^T$. M is rank-2, and has only 2 nonzero entries on the diagonal, and zeros elsewhere.
Even if we observe almost the entire matrix (and even if the observed indices are random), it is
likely that every entry we see will be zero, and so we cannot hope to recover M . Because of this,
it is standard to ask for the input matrix to be incoherent:
Definition 2.3. Let U ⊂ Rn be a subspace of dimension r. We say that U is incoherent with
parameter µ if $\max_{i \in [n]} \|\operatorname{proj}_U(e_i)\|^2 \le \mu \frac{r}{n}$. If M is a matrix with left and right singular spaces
U and V , we say that M is (µU , µV )-incoherent if U (resp. V ) is incoherent with parameter µU
(resp µV ). We say that v1 , . . . , vk are incoherent with parameter µ if their span is incoherent with
parameter µ.
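The incoherence parameter is easy to compute directly from Definition 2.3; the small sketch below (our addition) does so for the span of a given set of vectors. As a sanity check, a coordinate subspace is maximally coherent (µ = n/r), while the span of generic random vectors has µ close to 1 up to logarithmic factors.

    import numpy as np

    def incoherence(vectors, tol=1e-10):
        """vectors: (k, n) array whose rows span a subspace U of R^n.
        Returns mu such that max_i ||proj_U(e_i)||^2 = mu * r / n."""
        V = np.asarray(vectors, dtype=float).T                 # n x k
        U, s, _ = np.linalg.svd(V, full_matrices=False)
        r = int(np.sum(s > tol * s.max()))                     # dimension of the span
        Q = U[:, :r]                                           # orthonormal basis of the span
        leverage = np.sum(Q ** 2, axis=1)                      # ||proj_U(e_i)||^2 for each i
        return leverage.max() * Q.shape[0] / r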
Incoherence means that the singular vectors are well-spread over their coordinates. Intuitively,
this asks that every revealed entry actually gives information about the matrix. For a discussion on
what kinds of matrices are incoherent, see e.g. [CR09]. Once the underlying matrix is assumed to be
incoherent, there are a number of possible algorithms one can apply to try and learn the remaining
entries of M . Much of the prior work on matrix completion has been focused on achieving recovery
when the revealed entries are randomly distributed, and the goal is to minimize the number of
samples needed (see e.g. [CR09, Rec09, GAGG13, Har14]). For our application, the revealed
entries are not randomly distributed, but we have access to almost all of the entries (Ω(n2 ) entries
as opposed to the Ω(nr log n) entries needed in the random case). Thus we use a particular kind of
matrix completion theorem we call “adversarial matrix completion,” which can be achieved directly
from the work of Hsu, Kakade and Zhang [HKZ11]:
Theorem 2.4. Let M be an m×n rank-r matrix which is (µU , µV )-incoherent, and let Ω ⊂ [m]×[n]
be the set of hidden indices. If there are at most κ elements per column and ρ elements per row of
Ω, and if $2\big(\kappa\frac{\mu_U}{m} + \rho\frac{\mu_V}{n}\big)r < 1$, then there is an algorithm that recovers M.
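We do not reproduce the [HKZ11] procedure here. As a stand-in for prototyping, the sketch below (ours; it uses the cvxpy package and is not the algorithm of Theorem 2.4, whose adversarial guarantees are the ones we actually rely on) completes a matrix by the standard nuclear-norm relaxation, which exposes the same interface: revealed entries in, low-rank completion out.

    import numpy as np
    import cvxpy as cp

    def complete_matrix(observed, mask):
        """observed: (m, n) array (values outside `mask` are ignored).
        mask: boolean (m, n) array, True on revealed entries.
        Returns a minimum-nuclear-norm matrix agreeing with the revealed entries."""
        M = mask.astype(float)
        target = M * np.nan_to_num(np.asarray(observed, dtype=float))
        X = cp.Variable(observed.shape)
        problem = cp.Problem(cp.Minimize(cp.normNuc(X)), [cp.multiply(M, X) == target])
        problem.solve()
        return X.value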
For the application of learning product mixtures, note that the moment tensors are incoherent
exactly when the bias vectors are incoherent. In Section 3 we show how to apply Theorem 2.4
recursively to perform a special type of adversarial tensor completion, which we use to recover the
augmented moment tensors of D after sampling.
Further, we note that Theorem 2.4 is almost tight. That is, there exist matrix completion
instances with κ/n = 1 − o(1), µ = 1 and r = 3 for which finding any completion is NP-hard
[HMRW14, Pee96] (via a reduction from three-coloring), so the constant on the right-hand side
is necessarily at most six. We also note that the tradeoff between κ/n and µ in Theorem 2.4 is
necessary because for a matrix of fixed rank, one can add extra rows and columns of zeros in an
attempt to reduce κ/n, but this process increases µ by an identical factor. This suggests that
improving Theorem 1.1 by obtaining a better efficient adversarial matrix completion algorithm is
not likely.
2.3 Incoherence and Decomposition Uniqueness
In order to apply our completion techniques, we place the restriction of incoherence on the subspace
spanned by the bias vectors. At first glance this may seem like a strange condition which is
unnatural for probability distributions, but we try to motivate it here. When the bias vectors
are incoherent and separated enough, even high-order moment-completion problems have unique
solutions, and moreover that solution is equal to $\sum_{i \in [k]} w_i \cdot v_i^{\otimes m}$. In particular, this implies that the
distribution must have a unique decomposition into a minimal number of well-separated centers
(otherwise those different decompositions would produce different minimum-rank solutions to a
moment-completion problem for high-enough order moments). Thus incoherence can be thought of
as a special strengthening of the promise that the distribution has a unique minimal decomposition.
Note that there are distributions which have a unique minimal decomposition but are not incoherent,
such as a parity on any number of bits.
3 Symmetric Tensor Completion from Multilinear Entries
In this section we use adversarial matrix completion as a primitive to give a completion algorithm
for symmetric tensors when only a special kind of entry in the tensor is known. Specifically, we call
a string X ∈ [n]m multilinear if every element of X is distinct, and we will show how to complete
a symmetric tensor $T \in \mathbb R^{n^m}$ when only given access to its multilinear entries, i.e. T (X) is known
if X is multilinear. In the next section, we will apply our tensor completion algorithm to learn
mixtures of product distributions over the boolean hypercube.
Our approach is a simple recursion: we complete the tensor slice-by-slice, using the entries we
learn from completing one slice to provide us with enough known entries to complete the next. The
following definition will be useful in precisely describing our recursive strategy:
Definition 3.1. Define the histogram of a string X ∈ [n]m to be the multiset containing the number
of repetitions of each character making at least one appearance in X.
For example, the string (1, 1, 2, 3) and the string (4, 4, 5, 6) both have the histogram (2, 1, 1).
Note that the entries of the histogram of a string of length m always sum to m, and that the length
of the histogram is the number of distinct symbols in the string.
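A direct transcription of this definition in Python (our helper; the name is ours):

    from collections import Counter

    def histogram(X):
        """Histogram of a string X: the multiset of repetition counts, as a sorted tuple.
        E.g. histogram((1, 1, 2, 3)) == histogram((4, 4, 5, 6)) == (2, 1, 1)."""
        return tuple(sorted(Counter(X).values(), reverse=True))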
Having defined a histogram, we are now ready to describe our tensor completion algorithm.
Algorithm 3.2 (Symmetric Tensor Completion from Multilinear Moments). Input: The
multilinear entries of the tensor $T = \sum_{i \in [k]} w_i \cdot v_i^{\otimes m} + E$, for vectors $v_1, \ldots, v_k \in \mathbb R^n$ and
scalars $w_1, \ldots, w_k > 0$ and some error tensor E. Goal: Recover the symmetric tensor
$T^* = \sum_{i \in [k]} w_i \cdot v_i^{\otimes m}$.
1. Initialize the tensor T̂ with the known multilinear entries of T .
2. For each subset $Y \in [n]^{m-2}$ with no repetitions:
   • Let T̂ (Y, ·, ·) ∈ R^{n×n} be the tensor slice indexed by Y .
   • Remove the rows and columns of T̂ (Y, ·, ·) corresponding to indices present in Y .
     Complete the matrix using the algorithm of [HKZ11] from Theorem 2.4 and add
     the learned entries to T̂ .
3. For ℓ = m − 2, . . . , 1:
   (a) For each $X \in [n]^m$ with a histogram of length ℓ, if T̂ (X) is empty:
      • If there is an element $x_i$ appearing at least 3 times, let $Y = X \setminus \{x_i, x_i\}$.
      • Else there are elements $x_i, x_j$ each appearing twice, let $Y = X \setminus \{x_i, x_j\}$.
      • Let T̂ (Y, ·, ·) ∈ R^{n×n} be the tensor slice indexed by Y .
      • Complete the matrix T̂ (Y, ·, ·) using the algorithm from Theorem 2.4 and add
        the learned entries to T̂ .
4. Symmetrize T̂ by taking each entry to be the average over entries indexed by the
   same subset.
Output: T̂ .
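The following Python skeleton (ours) mirrors the control flow of Algorithm 3.2 in the illustrative case m = 3; `complete_matrix(M, mask)` stands for any matrix completion subroutine with the guarantees of Theorem 2.4 (for instance, a wrapper around the nuclear-norm sketch in Section 2.2), and error handling is simplified.

    import itertools
    import numpy as np

    def complete_symmetric_3_tensor(T_in, known, complete_matrix):
        """T_in: (n, n, n) array trusted only where `known` is True (the multilinear entries).
        Fills in the remaining entries slice-by-slice, as in Algorithm 3.2 with m = 3."""
        T, known = T_in.astype(float).copy(), known.copy()
        n = T.shape[0]

        def record(idx, val):                      # store a learned entry symmetrically
            for p in itertools.permutations(idx):
                T[p], known[p] = val, True

        # Step 2: in the slice T(y, ., .) with row/column y removed, only the diagonal
        # is missing; entry (j, j) of the completed slice is T(y, j, j).
        for y in range(n):
            keep = [j for j in range(n) if j != y]
            sub = complete_matrix(T[y][np.ix_(keep, keep)], known[y][np.ix_(keep, keep)])
            for a, j in enumerate(keep):
                record((y, j, j), sub[a, a])

        # Step 3 (l = 1): the slice T(i, ., .) is now missing only its (i, i) entry,
        # which holds T(i, i, i).
        for i in range(n):
            record((i, i, i), complete_matrix(T[i], known[i])[i, i])

        return T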
Observation 3.3. One might ask why we go through the effort of completing the tensor slice-by-slice, rather than simply flattening it to an $n^{m/2} \times n^{m/2}$ matrix and completing that. The reason
is that when $\operatorname{span}\{v_1, \ldots, v_k\}$ has incoherence µ and dimension r, $\operatorname{span}\{v_1^{\otimes m/2}, \ldots, v_k^{\otimes m/2}\}$ may have
incoherence as large as $\mu r^m/k$, which drastically reduces the range of parameters for which recovery
is possible (for example, if k = O(r) then we would need $r < n^{1/m}$). Working slice-by-slice keeps
the incoherence of the input matrices small, allowing us to complete even up to rank $r = \tilde\Omega(n)$.
Theorem 3.4. Let T be a symmetric tensor of order m, so that $T = \sum_{i \in [k]} w_i \cdot v_i^{\otimes m}$ for some
vectors $v_1, \ldots, v_k \in \mathbb R^n$ and scalars $w_1, \ldots, w_k \neq 0$. Let $\operatorname{span}\{v_i\}$ have incoherence µ and dimension
r. Given perfect access to all multilinear entries of T (i.e. E = 0), if 4 · µ · r · m/n < 1, then
Algorithm 3.2 returns the full tensor T in time Õ(nm+1 ).
In Appendix B, we give a version of Theorem 3.4 that accounts for error E in the input.
Proof. We prove that Algorithm 3.2 successfully completes all the entries of T by induction on the
length of the histograms of the entries. By assumption, we are given as input every entry with a
histogram of length m. For an entry X with a histogram of length m−1, exactly one of its elements
has multiplicity two, call it $x_i$, and consider the set $Y = X \setminus \{x_i, x_i\}$. When step 2 reaches Y, the
algorithm attempts to complete a matrix revealed from $T(Y, \cdot, \cdot) = P_Y\big(\sum_{i \in [k]} w_i \cdot v_i(Y) \cdot v_iv_i^T\big)$,
where $v_i(Y) = \prod_{j \in Y} v_i(j)$, and $P_Y$ is the projector to the matrix with the rows and columns
corresponding to indices appearing in Y removed. Exactly the diagonal of T (Y, ·, ·) is missing since
all other entries are multilinear moments, and the (i, i)th entry should be T (X). Because the rank
of this matrix is equal to dim(span(vi )) = r and 4µr/n ≤ 4µrm/n < 1, by Theorem 2.4, we can
successfully recover the diagonal, including T (X). Thus by the end of step 2, T̂ contains every
entry with a histogram of length ℓ ≥ m − 1.
For the inductive step, we prove that each time step 3 completes an iteration, T̂ contains every
entry with a histogram of length at least ℓ. Let X be an entry with a histogram of length ℓ. When
step 3 reaches X in the ℓth iteration, if T̂ does not already contain T (X), the algorithm attempts
to complete a matrix with entries revealed from $T(Y, \cdot, \cdot) = \sum_{i \in [k]} w_i \cdot v_i(Y) \cdot v_iv_i^T$, where Y is a
substring of X with a histogram of the same length. Since Y has a histogram of length ℓ, every
entry of T (Y, ·, ·) corresponds to an entry with a histogram of length at least ℓ+1, except for the ℓ×ℓ
principal submatrix whose rows and columns correspond to elements in Y . Thus by the inductive
hypothesis, T̂ (Y ) is only missing the aforementioned submatrix, and since 4µrℓ/n ≤ 4µrm/n < 1,
by Theorem 2.4, we can successfully recover this submatrix, including T (X). Once all of the entries
of T̂ are filled in, the algorithm terminates.
Finally, we note that the runtime is Õ(nm+1 ), because the algorithm from Theorem 2.4 runs in
time Õ(n3 ), and we perform at most nm−2 matrix completions because there are nm−2 strings of
length m − 2 over the alphabet [n], and we perform at most one matrix completion for each such
string.
4 Learning Product Mixtures over the Hypercube
In this section, we apply our symmetric tensor completion algorithm (Algorithm 3.2) to learning
mixtures of product distributions over the hypercube, proving Theorem 1.1. Throughout this
section we will assume exact access to moments of our input distribution, deferring finite-sample
error analysis to Appendix B. We begin by introducing convenient notation.
Let D be a mixture over k centers with bias vectors $v_1, \ldots, v_k \in [-1, 1]^n$ and mixing weights
$w_1, \ldots, w_k > 0$. Define $M^D_m \in \mathbb R^{n^m}$ to be the tensor of order-m moments of the distribution D, so
that $M^D_m = \mathbb E_{x \sim D}[x^{\otimes m}]$. Define $T^D_m \in \mathbb R^{n^m}$ to be the symmetric tensor given by the weighted bias
vectors of the distribution, so that $T^D_m = \sum_{i \in [k]} w_i \cdot v_i^{\otimes m}$.
Note that $T^D_m$ and $M^D_m$ are equal on their multilinear entries, and not necessarily equal elsewhere.
For example, when m is even, entries of $M^D_m$ indexed by a single repeating character (the “diagonal”)
are always equal to 1. Also observe that if one can sample from distribution D, then estimating
$M^D_m$ is easy.
Suppose that the bias vectors of D are linearly independent. Then by Theorem 4.1 (due to
[AGH+ 14], with similar statements appearing in [AHK12, HK13, AGHK14]), there is a spectral
algorithm which learns D given $T^D_2$ and $T^D_3$⁶ (we give an account of the algorithm in Appendix C).
Theorem 4.1 (Consequence of Theorem 4.3 and Lemma 5.1 in [AGH+ 14]). Let D be a mixture over
k centers with bias vectors $v_1, \ldots, v_k \in [-1, 1]^n$ and mixing weights $w_1, \ldots, w_k > 0$. Suppose we are
given access to $T^D_2 = \sum_{i \in [k]} w_i \cdot v_iv_i^T$ and $T^D_3 = \sum_{i \in [k]} w_i \cdot v_i^{\otimes 3}$. Then there is an algorithm which
recovers the bias vectors and mixing weights of D within ε in time $O\big(n^3 + k^4 \cdot \log\log\frac{1}{\varepsilon\sqrt{\min_i w_i}}\big)$.
Because $T^D_2$ and $T^D_3$ are equal to $M^D_2$ and $M^D_3$ on their multilinear entries, the tensor completion algorithm of the previous section allows us to find $T^D_2$ and $T^D_3$ from $M^D_2$ and $M^D_3$ (this is only
possible because $T^D_2$ and $T^D_3$ are low-rank, whereas $M^D_2$ and $M^D_3$ are high-rank). We then learn
D by applying Theorem 4.1.
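For intuition about what the whitening step does with the pair $(T^D_2, T^D_3)$, here is a compact sketch (ours; exact-arithmetic setting, linearly independent $v_i$, and no attempt to reproduce the robustness analysis of [AGH+ 14]).

    import numpy as np

    def whiten_and_decompose(T2, T3, k, n_iter=200, seed=0):
        """T2 = sum_i w_i v_i v_i^T (n x n), T3 = sum_i w_i v_i^{(x)3} (n x n x n).
        Returns approximate mixing weights and bias vectors, assuming exact inputs."""
        rng = np.random.default_rng(seed)
        vals, vecs = np.linalg.eigh(T2)
        top = np.argsort(vals)[::-1][:k]
        U, D = vecs[:, top], vals[top]
        W = U / np.sqrt(D)                                        # whitening: W^T T2 W = I_k
        Tw = np.einsum('abc,ai,bj,ck->ijk', T3, W, W, W)          # = sum_i w_i^{-1/2} u_i^{(x)3}
        weights, centers = [], []
        for _ in range(k):
            u = rng.standard_normal(k); u /= np.linalg.norm(u)
            for _ in range(n_iter):                               # tensor power iteration
                u = np.einsum('ijk,j,k->i', Tw, u, u); u /= np.linalg.norm(u)
            lam = np.einsum('ijk,i,j,k->', Tw, u, u, u)           # eigenvalue = w_i^{-1/2}
            weights.append(1.0 / lam ** 2)
            centers.append(lam * (U * np.sqrt(D)) @ u)            # un-whiten to recover v_i
            Tw = Tw - lam * np.einsum('i,j,k->ijk', u, u, u)      # deflate
        return np.array(weights), np.array(centers)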
A complication is that Theorem 4.1 only allows us to recover the parameters of D if the bias
vectors are linearly independent. However, if the vectors v1 , . . . , vk are not linearly independent,
we can reduce to the independent case by working instead with v1⊗m , . . . , vk⊗m for sufficiently large
m. The tensor power we require depends on the separation between the bias vectors:
Definition 4.2. We call a set of vectors v1 , . . . , vk η-separated if for every i, j ∈ [k] such that i 6= j,
|hvi , vj i| ≤ kvi k · kvj k · (1 − η).
Lemma 4.3. Suppose that v1 , . . . , vk ∈ Rn are vectors which are η-separated, for η > 0. Let
$m \ge \lceil \log_{\frac{1}{1-\eta}} k \rceil$. Then $v_1^{\otimes m}, \ldots, v_k^{\otimes m}$ are linearly independent.
Proof. For vectors $u, w \in \mathbb R^n$ and for an integer $t \ge 0$, we have that $\langle u^{\otimes t}, w^{\otimes t}\rangle = \langle u, w\rangle^t$. If
$v_1, \ldots, v_k$ are η-separated, then for all $i \neq j$,
$$\left|\left\langle \frac{v_i^{\otimes m}}{\|v_i\|^m},\, \frac{v_j^{\otimes m}}{\|v_j\|^m}\right\rangle\right| \;\le\; |(1-\eta)^m| \;\le\; \tfrac{1}{k}.$$
Now considering the Gram matrix of the vectors $(\frac{v_i}{\|v_i\|})^{\otimes m}$, we have a k × k matrix with diagonal
entries of value 1 and off-diagonal entries with maximum absolute value $\frac{1}{k}$. This matrix is strictly
diagonally dominant, and thus full rank, so the vectors must be linearly independent.
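This argument is easy to check numerically, since the Gram matrix of the tensor powers can be formed without ever materializing the tensors; the sketch below (ours) does exactly that.

    import numpy as np

    def tensor_power_gram_rank(vectors, m):
        """Rank of the Gram matrix of v_1^{(x)m}, ..., v_k^{(x)m} (rows of `vectors`),
        using <u^{(x)m}, w^{(x)m}> = <u, w>^m."""
        V = np.asarray(vectors, dtype=float)
        V = V / np.linalg.norm(V, axis=1, keepdims=True)
        return np.linalg.matrix_rank((V @ V.T) ** m)

    # Usage sketch: k = 2r random vectors in an r-dimensional subspace are linearly
    # dependent (rank r at m = 1), but a small tensor power already makes them independent.
    # rng = np.random.default_rng(0); r, k, n = 10, 20, 50
    # vecs = rng.standard_normal((k, r)) @ rng.standard_normal((r, n))
    # print(tensor_power_gram_rank(vecs, 1), tensor_power_gram_rank(vecs, 3))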
Remark 4.4. We re-iterate here that in the case where η = 0, we can reduce our problem to one
with fewer centers, and so our runtime is never infinite. Specifically, if vi = vj for some i 6= j,
then we can describe the same distribution by omitting vj and including vi with weight wi + wj .
If vi = −vj , in the even moments we will see the center vi with weight wi + wj , and in the odd
moments we will see vi with weight wi − wj . So we simply solve the problem by taking m′ = 2m
for the first odd m so that the v ⊗m are linearly independent, so that both the 2m′ - and 3m′ -order
moments are even to learn wi + wj and ±vi , and then given the decomposition into centers we can
extract wi and wj from the order-m moments by solving a linear system.
Thus, in the linearly dependent case, we may choose an appropriate power m, and instead apply
the tensor completion algorithm to $M^D_{2m}$ and $M^D_{3m}$ to recover $T^D_{2m}$ and $T^D_{3m}$. We will then apply
Theorem 4.1 to the vectors $v_1^{\otimes m}, \ldots, v_k^{\otimes m}$ in the same fashion.
Here we give the algorithm assuming perfect access to the moments of D and defer discussion
of the finite-sample case to Appendix B.
⁶We remark again that the result in [AGH+ 14] is quite general, and applies to a large class of probability distributions of this character. However, the work deals exclusively with distributions for which $M_2 = T_2$ and $M_3 = T_3$,
and assumes access to $T_2$ and $T_3$ through moment estimation.
Algorithm 4.5 (Learning Mixtures of Product Distributions). Input: Moments of the
distribution D. Goal: Recover v1 , . . . , vk and w1 , . . . , wk .
Let m be the smallest odd integer such that v1⊗m , . . . , vk⊗m are linearly independent. Let
M̂ = M2m + Ê2 and T̂ = M3m + Ê3 be approximations to the moment tensors of order
2m and 3m.
1. Set the non-multilinear entries of M̂ and T̂ to “missing,” and run Algorithm 3.2 on
   M̂ and T̂ to recover $M' = \sum_i w_i \cdot v_i^{\otimes 2m} + E_2'$ and $T' = \sum_i w_i \cdot v_i^{\otimes 3m} + E_3'$.
2. Flatten M′ to the $n^m \times n^m$ matrix $M = \sum_i w_i \cdot v_i^{\otimes m}(v_i^{\otimes m})^\top + E_2$ and similarly
   flatten T′ to the $n^m \times n^m \times n^m$ tensor $T = \sum_i w_i \cdot (v_i^{\otimes m})^{\otimes 3} + E_3$.
3. Run the “whitening” algorithm from Theorem 4.1 (see Appendix C) on (M, T ) to
recover w1 , . . . , wk and v1⊗m , . . . , vk⊗m .
4. Recover v1 , . . . , vk entry-by-entry, by taking the mth root of the corresponding entry
in v1⊗m , . . . , vk⊗m .
Output: w1 , . . . , wk and v1 , . . . , vk .
Now Theorem 1.1 is a direct result of the correctness of Algorithm 4.5:
Proof of Theorem 1.1. The proof follows immediately by combining Theorem 4.1 and Theorem 3.4,
and noting that the parameter m is bounded by $m \le 2 + \log_{\frac{1}{1-\eta}} k$.
Acknowledgements
We would like to thank Prasad Raghavendra, Satish Rao, and Ben Recht for helpful discussions,
and Moritz Hardt and Samuel B. Hopkins for helpful questions. We also thank several anonymous
reviewers for very helpful comments.
References
[AGH+ 14] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky, Tensor decompositions for learning latent variable models, Journal of Machine
Learning Research 15 (2014), no. 1, 2773–2832. 1, 2, 6, 9, 10, 15, 16, 17
[AGHK14] Animashree Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade, A tensor approach to learning mixed membership community models, Journal of Machine Learning
Research 15 (2014), no. 1, 2239–2312. 4, 9, 17
[AGJ14a]
Anima Anandkumar, Rong Ge, and Majid Janzamin, Analyzing tensor power
method dynamics: Applications to learning overcomplete latent variable models, CoRR
abs/1411.1488 (2014). 17
[AGJ14b]
Animashree Anandkumar, Rong Ge, and Majid Janzamin, Guaranteed non-orthogonal
tensor decomposition via alternating rank-1 updates, CoRR abs/1402.5180 (2014). 17
[AHK12]
Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade, A method of moments for
mixture models and hidden markov models, COLT 2012 - The 25th Annual Conference
on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, 2012, pp. 33.1–33.34. 4,
9, 17
[AM05]
Dimitris Achlioptas and Frank McSherry, On spectral learning of mixtures of distributions, Learning Theory, 18th Annual Conference on Learning Theory, COLT, 2005,
pp. 458–469. 1
[BCMV14] Aditya Bhaskara, Moses Charikar, Ankur Moitra, and Aravindan Vijayaraghavan,
Smoothed analysis of tensor decompositions, Symposium on Theory of Computing,
STOC, 2014. 4
[BKS15]
Boaz Barak, Jonathan A. Kelner, and David Steurer, Dictionary learning and tensor
decomposition via the sum-of-squares method, Proceedings of the Forty-Seventh Annual
ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June
14-17, 2015, 2015, pp. 143–151. 1
[BM15]
Boaz Barak and Ankur Moitra, Tensor prediction, rademacher complexity and random
3-xor, CoRR abs/1501.06521 (2015). 1
[C99]
Ph.D. thesis. 2
[CGG01]
Mary Cryan, Leslie Ann Goldberg, and Paul W. Goldberg, Evolutionary trees can be
learned in polynomial time in the two-state general markov model, SIAM J. Comput.
31 (2001), no. 2, 375–397. 2
[CP09]
Emmanuel J. Candès and Yaniv Plan, Matrix completion with noise, CoRR
abs/0903.3131 (2009). 14
[CR08]
Kamalika Chaudhuri and Satish Rao, Learning mixtures of product distributions using
correlations and independence, 21st Annual Conference on Learning Theory - COLT
2008, Helsinki, Finland, July 9-12, 2008, 2008, pp. 9–20. 1
[CR09]
Emmanuel J. Candès and Benjamin Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics 9 (2009), no. 6, 717–772. 7
[Fel07]
Vitaly Feldman, Attribute-efficient and non-adaptive learning of parities and DNF expressions, Journal of Machine Learning Research 8 (2007), 1431–1460. 5
[FM99]
Yoav Freund and Yishay Mansour, Estimating a mixture of two product distributions,
Proceedings of the Twelfth Annual Conference on Computational Learning Theory,
COLT Santa Cruz, CA, USA, July 7-9, 1999, pp. 53–62. 1, 2
[FOS08]
Jon Feldman, Ryan O’Donnell, and Rocco A. Servedio, Learning mixtures of product
distributions over discrete domains, SIAM J. Comput. 37 (2008), no. 5, 1536–1564. 1,
3, 4, 5, 6
[GAGG13] Suriya Gunasekar, Ayan Acharya, Neeraj Gaur, and Joydeep Ghosh, Noisy matrix completion using alternating minimization, Machine Learning and Knowledge Discovery in
Databases (Hendrik Blockeel, Kristian Kersting, Siegfried Nijssen, and Filip Železný,
eds.), Lecture Notes in Computer Science, vol. 8189, Springer Berlin Heidelberg, 2013,
pp. 194–209 (English). 7
[GHJY15]
Rong Ge, Furong Huang, Chi Jin, and Yang Yuan, Escaping from saddle points - online
stochastic gradient for tensor decomposition, CoRR abs/1503.02101 (2015). 17
[Har70]
Richard A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis. 6
[Har14]
Moritz Hardt, Understanding alternating minimization for matrix completion, 55th
IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, 2014, pp. 651–660. 7
[HK13]
Daniel Hsu and Sham M. Kakade, Learning mixtures of spherical gaussians: moment
methods and spectral decompositions, Innovations in Theoretical Computer Science,
ITCS ’13, Berkeley, CA, USA, January 9-12, 2013, 2013, pp. 11–20. 4, 9, 17, 18
[HKZ11]
Daniel Hsu, Sham M. Kakade, and Tong Zhang, Robust matrix decomposition with
sparse corruptions, IEEE Transactions on Information Theory 57 (2011), no. 11, 7221–
7234. 3, 6, 7, 8, 14
[HMRW14] Moritz Hardt, Raghu Meka, Prasad Raghavendra, and Benjamin Weitz, Computational
limits for matrix completion, CoRR abs/1402.2331 (2014). 7
[JO13]
Prateek Jain and Sewoong Oh, Learning mixtures of discrete product distributions using
spectral decompositions., CoRR abs/1311.2972 (2013). 1, 2, 3, 4
[JO14]
Prateek Jain and Sewoong Oh, Provable tensor factorization with missing data, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada,
2014, pp. 1431–1439. 1, 3, 4
[LCC07]
Lieven De Lathauwer, Joséphine Castaing, and Jean-François Cardoso, Fourth-order
cumulant-based blind identification of underdetermined mixtures, IEEE Transactions
on Signal Processing 55 (2007), no. 6-2, 2965–2973. 6
[MOS04]
Elchanan Mossel, Ryan O’Donnell, and Rocco A. Servedio, Learning functions of k
relevant variables, J. Comput. Syst. Sci. 69 (2004), no. 3, 421–434. 5
[MR06]
Elchanan Mossel and Sébastien Roch, Learning nonsingular phylogenies and hidden
markov models, The Annals of Applied Probability 16 (2006), no. 2, 583–614. 1, 4
[Pee96]
Renè Peeters, Orthogonal representations over finite fields and the chromatic number
of graphs, Combinatorica 16 (1996), no. 3, 417–431 (English). 7
[Rec09]
Benjamin Recht, A simpler approach to matrix completion, CoRR abs/0910.0651
(2009). 7
[Reg09]
Oded Regev, On lattices, learning with errors, random linear codes, and cryptography,
J. ACM 56 (2009), no. 6. 5
[TS15]
Gongguo Tang and Parikshit Shah, Guaranteed tensor decomposition: A moment approach. 1
[Val15]
Gregory Valiant, Finding correlations in subquadratic time, with applications to learning parities and the closest pair problem, J. ACM 62 (2015), no. 2, 13. 5
A Tensor Completion with Noise
Here we will present a version of Theorem 3.4 which accounts for noise in the input to the algorithm.
We will first require a matrix completion algorithm which is robust to noise. The work of
[HKZ11] provides us with such an algorithm; the following theorem is a consequence of their work.7
Theorem A.1. Let M be an m×n rank-r matrix which is (µ_U, µ_V)-incoherent, and let Ω ⊂ [m]×[n] be the set of hidden indices. Suppose there are at most κ elements per column and ρ elements per row of Ω, and that 2(κ µ_U/m + ρ µ_V/n) r < 1. Let α = 2(κ µ_U/m + ρ µ_V/n) r and β = (1/(1−α)) · √(κ ρ µ_U µ_V r³ / (mn)); in particular, α < 1 and β < 1. Then for every δ > 0, there is a semidefinite program that computes an output M̂ satisfying
‖M̂ − M‖_F ≤ 2δ + (2δ √(min(n, m)) / (1 − β)) · (1 + 1/(1 − α)).
We now give an analysis for the performance of our tensor completion algorithm, Algorithm 3.2,
in the presence of noise in the input moments. This will enable us to use the algorithm on empirically
estimated moments.
Theorem A.2. Let T* be a symmetric tensor of order m, so that T* = Σ_{i∈[k]} w_i · v_i^{⊗m} for some vectors v_1, ..., v_k ∈ R^n and scalars w_1, ..., w_k ≠ 0. Let span{v_i} have incoherence µ and dimension r. Suppose we are given access to T = T* + E, where E is a noise tensor with |E(Y)| ≤ ε for every Y ∈ [n]^m. Then if
4 · k · µ · m ≤ n,
Algorithm 3.2 recovers a symmetric tensor T̂ such that
‖T̂(X, ·, ·) − T*(X, ·, ·)‖_F ≤ 4 · ε · (5n^{3/2})^{m−1},
for any slice T(X, ·, ·) indexed by a string X ∈ [n]^{m−2}, in time Õ(n^{m+1}). In particular, the total Frobenius norm error ‖T̂ − T*‖_F is bounded by 4 · ε · (5n^{3/2})^{(3/2)m−2}.
Proof. We proceed by induction on the histogram length of the entries: we will prove that an entry with a histogram of length ℓ has error at most ε(5n^{3/2})^{m−ℓ}.
In the base case of ℓ = m, we have by assumption that every entry of E is bounded by ε.
Now, for the inductive step, consider an entry X with a histogram of length ℓ ≤ m − 1. In filling in the entry T(X), we only use information from entries with shorter histograms, which by the inductive hypothesis each have error at most α = ε(5n^{3/2})^{m−ℓ−1}. Summing over the squared errors of the individual entries, the squared Frobenius norm error of the known entries in the slice in which T(X) was completed is, pre-completion, at most n²α². Due to the assumptions on k, µ, m, n, by Theorem A.1, matrix completion amplifies an input Frobenius norm error of β to at most a Frobenius norm error of 5β · n^{1/2}. Thus, the Frobenius norm error of the slice in which T(X) was completed is, post-completion, at most 5n^{3/2}α, and therefore the error in the entry T(X) is at most ε · (5n^{3/2})^{m−ℓ}, as desired.
This concludes the induction. Finally, as our error bound is per entry, it is not increased by the symmetrization in step 4. Any slice has at most one entry with a histogram of length one, 2n − 2 entries with a histogram of length two, and n² − (2n − 1) entries with a histogram of length three. Thus the total error in a slice is at most 4 · ε · (5n^{3/2})^{m−1}, and there are n^{m−2} slices.
7 In a previous version of this paper, we derive Theorem A.1 as a consequence of Theorem 2.4 and the work of [CP09]; we refer the interested reader to http://arxiv.org/abs/1506.03137v2 for the details.
B Empirical Moment Estimation for Learning Product Mixtures
In Section 4, we detailed our algorithm for learning mixtures of product distributions while assuming access to exact moments of the distribution D. Here, we will give an analysis which accounts for
the errors introduced by empirical moment estimation. We note that we made no effort to optimize
the sample complexity, and that a tighter analysis of the error propagation may well be possible.
Algorithm B.1 (Learning product mixture over separated centers). Input: N independent samples x_1, ..., x_N from D, where D has bias vectors with separation η > 0. Goal: Recover the bias vectors and mixing weights of D.
Let m be the smallest odd integer for which v_1^{⊗m}, ..., v_k^{⊗m} become linearly independent.
1. Empirically estimate M^D_{2m} and M^D_{3m} by calculating M := (1/N) Σ_{i∈[N]} (x_i^{⊗m})(x_i^{⊗m})^⊤ and T := (1/N) Σ_{i∈[N]} (x_i^{⊗m})^{⊗3}.
2. Run Algorithm 4.5 on M and T.
Output: The approximate mixing weights ŵ_1, ..., ŵ_k, and the approximate vectors v̂_1, ..., v̂_k.
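Step 1 is a plug-in estimate. A minimal numpy sketch of that step alone is given below; it materializes the full flattened moments and is therefore only feasible for very small n and m, and the toy mixture at the end (names `centers`, `labels`, `probs` are illustrative) is not the sample-efficient procedure analyzed in the text.

```python
import numpy as np

def tensor_power(v, m):
    """Flattened v^{(x)m}, via repeated Kronecker products."""
    out = v
    for _ in range(m - 1):
        out = np.kron(out, v)
    return out

def empirical_moments(X, m):
    """Plug-in estimates of the flattened 2m- and 3m-order moments of the samples in X."""
    Xm = np.stack([tensor_power(x, m) for x in X])      # (N, n^m)
    N = Xm.shape[0]
    M = Xm.T @ Xm / N                                   # (1/N) sum_i x_i^{(x)m} (x_i^{(x)m})^T
    T = np.einsum('ia,ib,ic->abc', Xm, Xm, Xm) / N      # (1/N) sum_i (x_i^{(x)m})^{(x)3}
    return M, T

# Toy usage: a 2-center product mixture over {-1, +1}^n.
rng = np.random.default_rng(2)
n, m, N = 3, 1, 5000
centers = rng.uniform(-0.8, 0.8, size=(2, n))           # bias vectors of the two centers
labels = rng.integers(0, 2, size=N)
probs = (1 + centers[labels]) / 2                        # P(coordinate = +1) per sample
X = np.where(rng.random((N, n)) < probs, 1.0, -1.0)
M_hat, T_hat = empirical_moments(X, m)
print(M_hat.shape, T_hat.shape)
```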
Theorem B.2 (Theorem 1.1 with empirical moment estimation). Let D be a product mixture over k centers with bias vectors v_1, ..., v_k ∈ [−1, 1]^n and mixing weights w_1, ..., w_k > 0. Let m be the smallest odd integer for which v_1^{⊗m}, ..., v_k^{⊗m} are linearly independent (if v_1, ..., v_k are η-separated for η > 0, then m ≤ log_{1/(1−η)} k). Define M_{2m} = Σ_{i∈[k]} v_i^{⊗m}(v_i^{⊗m})^⊤. Suppose
4 · m · r · µ ≤ n,
where µ and r are the incoherence and dimension of the space span{v_i} respectively. Furthermore, let β ≤ min( O(1/(k√w_max)), 1/40 ) be suitably small, and let the parameter N in Algorithm B.1 satisfy N ≥ (2/ε²)(4 log n + log(1/δ)) for ε satisfying
ε ≤ min( β · σ_k(M_{2m}) / (6√w_max · (5n^{3/2})^{3m/2}) , σ_k(M)^{1/2} / (4 · (5n^{3/2})^{3m−2}) ).
Finally, pick any η ∈ (0, 1). Then with probability at least 1 − δ − η, Algorithm B.1 returns vectors v̂_1, ..., v̂_k and mixing weights ŵ_1, ..., ŵ_k such that
‖v̂_i − v_i‖ ≤ √n · ( 10 · β + 60 · β · ‖M_{2m}‖^{1/2} + β · σ_k(M_{2m}) / (6√w_max) )^{1/m},
and
|ŵ_i − w_i| ≤ 40β,
and runs in time n^{O(m)} · O(N · poly(k) log(1/η) · (log k + log log(w_max/ε))). In particular, a choice of N ≥ n^{Õ(m)} gives sub-constant error, where the tilde hides the dependence on w_min and σ_k(M_{2m}).
Before proving Theorem B.2, we will state the guarantees of the whitening algorithm of [AGH+14] on noisy inputs, which is used as a black box in Algorithm 4.5. We have somewhat modified the statement in [AGH+14] for convenience; for a brief account of their algorithm, as well as an account of our modifications to the results as stated in [AGH+14], we refer the reader to Appendix C.
Theorem B.3 (Corollary of Theorem 4.3 in [AGH+14]). Let v_1, ..., v_k ∈ [−1, 1]^n be vectors and let w_1, ..., w_k > 0 be weights. Define M = Σ_{i∈[k]} w_i · v_i v_i^⊤ and T = Σ_{i∈[k]} w_i · v_i^{⊗3}, and suppose we are given M̂ = M + E_M and T̂ = T + E_T, where E_M ∈ R^{n×n} and E_T ∈ R^{n×n×n} are symmetric error terms such that
2β := 6‖E_M‖_F √w_max / σ_k(M) + √k ‖E_T‖_F / σ_k(M)^{3/2} < O( 1 / (√w_max · k) ).
Then there is an algorithm that recovers vectors v̂_1, ..., v̂_k and weights ŵ_1, ..., ŵ_k such that for all i ∈ [k],
‖v_i − v̂_i‖ ≤ ‖E_M‖^{1/2} + 60‖M‖^{1/2} β + 10β, and |w_i − ŵ_i| ≤ 40β,
with probability 1 − η in time O(L · k³ · (log k + log log(1/(√w_max · ε)))), where L is poly(k) log(1/η).
Having stated the guarantees of the whitening algorithm, we are ready to prove Theorem B.2.
Proof of Theorem B.2. We account for the noise amplification in each step.
Step 1: In this step, we empirically estimate the multilinear moments of the distribution. We apply concentration inequalities to each entry individually. By a Hoeffding bound, each entry concentrates to within ε of its expectation with probability 1 − exp(−(1/2) N · ε²). Taking a union bound over the (n choose 2m) + (n choose 3m) moments we must estimate, we conclude that with probability at least 1 − exp(−(1/2) N · ε² + 4m log n), all moments concentrate to within ε of their expectation. Setting N = (2/ε²)(4m log n + log(1/δ)), we have that with probability 1 − δ, every entry concentrates to within ε of its expectation.
Now, we run Algorithm 4.5 on the estimated moments.
Step 1 of Algorithm 4.5: Applying Theorem A.2, we see that the error satisfies ‖E′_2‖_F ≤ 4 · ε · (5n^{3/2})^{3m−2} and ‖E′_3‖_F ≤ 4 · ε · (5n^{3/2})^{(9/2)m−2}.
Step 2 of Algorithm 4.5: No error is introduced in this step.
Step 3 of Algorithm 4.5: Here, we apply Theorem B.3 out of the box, where our vectors are now the v_i^{⊗m}. The desired result now follows immediately for the estimated mixing weights, and for the estimated tensored vectors we have ‖u_i − v_i^{⊗m}‖ ≤ 10 · β + 60 · β‖M‖^{1/2} + ‖E′_2‖, for β as defined in Theorem B.2. Note that ‖E′_2‖ ≤ ‖E′_2‖_F ≤ β · σ_k(M_{2m})/(6√w_max), so let γ = 10 · β + 60 · β‖M‖^{1/2} + β · σ_k(M_{2m})/(6√w_max).
Step 4 of Algorithm 4.5: Let u*_i be the restriction of u_i to the single-index entries, and let v*_i be the same restriction for v_i^{⊗m}. The bound on the error of the u_i applies to restrictions, so we have ‖v*_i − u*_i‖ ≤ γ, and hence the error in each entry is bounded by γ. By the concavity of the mth root, we thus have that ‖v_i − v̂_i‖ ≤ √n · γ^{1/m}.
To see that choosing N ≥ n^{Õ(m)} gives sub-constant error, calculations suffice; we only add that ‖M_{2m}‖ ≤ r n^m, where we have applied a bound on the Frobenius norm of M_{2m}. The tilde hides the dependence on w_min and σ_k(M_{2m}). This concludes the proof.
C Recovering Distributions from Second- and Third-Order Tensors
In this appendix, we give an account of the algorithm of [AGH+14] which, given access to estimates of M_V^D and T_V^D, can recover the parameters of D. We note that the technique is very similar to those of [AHK12, HK13, AGHK14], but we use the particular algorithm of [AGH+14]. In previous sections, we have given a statement that follows from their results; here we will detail the connection.
In [AGH+14], the authors show that for a family of distributions with parameters v_1, ..., v_k ∈ R^n and w_1, ..., w_k > 0, if the v_1, ..., v_k are linearly independent and one has approximate access to M_V := Σ_{i∈[k]} w_i v_i v_i^⊤ and T_V := Σ_{i∈[k]} w_i · v_i^{⊗3}, then the parameters can be recovered. For this, they use two algorithmic primitives: singular value decompositions and tensor power iteration.
Tensor power iteration is a generalization of the power iteration technique for finding matrix eigenvectors to the tensor setting (see e.g. [AGH+14]). The generalization is not complete, and the convergence criteria for the method are quite delicate and not completely understood, although there has been much progress in this area of late ([AGJ14b, AGJ14a, GHJY15]). However, it is well known that when the input tensor T ∈ R^{n×n×n} is decomposable into k < n symmetric orthogonal rank-1 tensors, i.e. T = Σ_{i∈[k]} v_i^{⊗3} where k < n and ⟨v_i, v_j⟩ = 0 for i ≠ j, then it is possible to recover v_1, ..., v_k using tensor power iteration.
The authors of [AGH+14] prove that this process is robust to some noising of T:
Theorem C.1 (Theorem 5.1 in [AGH+14]). Let T̃ = T + E ∈ R^{k×k×k} be a symmetric tensor, where T has the decomposition T = Σ_{i∈[k]} λ_i · u_i ⊗ u_i ⊗ u_i for orthonormal vectors u_1, ..., u_k and λ_1, ..., λ_k > 0, and E is a tensor such that ‖E‖_F ≤ β. Then there exist universal constants C_1, C_2, C_3 > 0 such that the following holds. Choose η ∈ (0, 1), and suppose
β ≤ C_1 · λ_min / k,
and also
√( ln(L/log₂(k/η)) / ln k ) · ( 1 − (ln(ln(L/log₂(k/η))) + C_3) / (4 ln(L/log₂(k/η))) − √( ln 8 / ln(L/log₂(k/η)) ) ) ≥ 1.02 ( 1 + √( ln 4 / ln k ) ).
Then there is a tensor power iteration based algorithm that recovers vectors û_1, ..., û_k and coefficients λ̂_1, ..., λ̂_k with probability at least 1 − η such that for all i ∈ [k],
‖û_i − u_i‖ ≤ 8β/λ_i, and |λ̂_i − λ_i| ≤ 5β,
in O(L · k³ · (log k + log log(λ_max/β))) time. The conditions are met when L = poly(k) log(1/η).
The idea is then to take the matrix M_V and apply a whitening map W = (M_V^†)^{1/2} to orthogonalize the vectors. Because v_1, ..., v_k are assumed to be linearly independent, and because W M W^⊤ = Σ_{i∈[k]} w_i (W v_i)(W v_i)^⊤ = Id_k, it follows that the √w_i · W v_i are orthogonal vectors. Now, applying the map W ∈ R^{k×n} to every slice of T in every direction, we obtain a new tensor T_W = Σ_{i∈[k]} w_i (W v_i)^{⊗3}, by computing each entry:
T(W, W, W)_{a,b,c} := T_W(a, b, c) = Σ_{1 ≤ a′, b′, c′ ≤ n} W^⊤(a′, a) · W^⊤(b′, b) · W^⊤(c′, c) · T(a′, b′, c′).
From here on out we will use T(A, A, A) to denote this operation on tensors. The tensor T_W thus has an orthogonal decomposition. Letting u_i = √w_i · W v_i, we have that T_W = Σ_{i∈[k]} (1/√w_i) · u_i^{⊗3}. Applying tensor power iteration allows the recovery of the u_i = √w_i · W v_i and the weights 1/√w_i, from which the v_i are recoverable.
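A minimal numpy sketch of the whitening step and of the contraction T(W, W, W) is shown below. It is an illustration only: it stores the whitening map as an n×k matrix W with W^⊤ M W = I_k (a column-convention variant of the k×n map described above), uses illustrative names, and omits the tensor power iteration itself.

```python
import numpy as np

def whiten(M, k, eps=1e-12):
    """W in R^{n x k} with W^T M W = I_k, built from the top-k eigenpairs of M."""
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:k]              # top-k eigenvalues
    lam, U = vals[idx], vecs[:, idx]
    return U / np.sqrt(np.maximum(lam, eps))      # columns scaled by lambda^{-1/2}

def contract(T, W):
    """Apply W to every mode of the symmetric tensor T: T(W, W, W) in R^{k x k x k}."""
    return np.einsum('abc,ai,bj,ck->ijk', T, W, W, W)

# Toy example: components become orthogonal after whitening.
rng = np.random.default_rng(3)
n, k = 6, 3
V = rng.standard_normal((k, n))
w = rng.uniform(0.5, 1.0, size=k)
M = np.einsum('i,ia,ib->ab', w, V, V)             # sum_i w_i v_i v_i^T
T = np.einsum('i,ia,ib,ic->abc', w, V, V, V)      # sum_i w_i v_i^{(x)3}

W = whiten(M, k)
TW = contract(T, W)                                # sum_i (1/sqrt(w_i)) u_i^{(x)3}, u_i orthonormal
print(np.round(W.T @ M @ W, 6))                    # approximately the identity
```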
Theorem B.3 is actually the consequence of Theorem C.1 and the following proposition, which controls the error propagation in the whitening step.
Proposition C.2 (Consequence of Lemma 12 of [HK13]). Let M_2 = Σ_{i∈[k]} λ_i · u_i u_i^⊤ be a rank-k PSD matrix, and let M̂ be a symmetric matrix whose top k eigenvalues are positive. Let T = Σ_{i∈[k]} λ_i · u_i^{⊗3}, and let T̂ = T + E_T where E_T is a symmetric tensor with ‖E_T‖_F ≤ γ.
Suppose ‖M_2 − M̂‖_F ≤ ε σ_k(M_2), where σ_k(M_2) is the kth eigenvalue of M_2. Let U be the square root of the pseudoinverse of M_2, and let Û be the square root of the pseudoinverse of the projection of M̂ to its top k eigenvectors. Then
‖T(U, U, U) − T̂(Û, Û, Û)‖ ≤ (6/√λ_min) · ε + γ · ‖Û‖² ‖Û‖_F.
Proof. We use the following fact, which is given as Lemma 12 in [HK13]:
‖T(U, U, U) − T̂(Û, Û, Û)‖ ≤ (6/√λ_min) · ε + ‖E_T(Û, Û, Û)‖₂.
The proof of this fact is straightforward, but requires a great deal of bookkeeping; we refer the reader to [HK13].
It remains to bound ‖E_T(Û, Û, Û)‖₂. Some straightforward calculations yield the desired bound:
‖E(Û, Û, Û)‖₂ ≤ Σ_i ‖(Û e_i) ⊗ Û^⊤ E_i Û‖ ≤ Σ_i ‖Û e_i‖₂ ‖Û^⊤ E_i Û‖
≤ ‖Û‖² · Σ_i ‖Û e_i‖₂ ‖E_i‖ ≤ ‖Û‖² · Σ_i ‖Û e_i‖₂ ‖E_i‖_F
≤ ‖Û‖² · √(Σ_i ‖Û e_i‖₂²) · √(Σ_i ‖E_i‖_F²) ≤ ‖Û‖² · ‖Û‖_F · ‖E‖_F,
where we have applied the triangle inequality, the behavior of the spectral norm under tensoring, the submultiplicativity of the norm, and Cauchy-Schwarz.
We now prove Theorem B.3.
Proof of Theorem B.3. Let Û be the square root of the projection of M̂ to its top k eigenvectors. Note that ‖Û‖ ≤ σ_k(M_2)^{−1/2} and ‖Û‖_F ≤ √k · σ_k(M_2)^{−1/2}, and thus by Proposition C.2, the error E in Theorem C.1 satisfies
2β := ‖E‖_F ≤ 6‖E_M‖_F / σ_k(M_2) + √k ‖E_T‖_F / (√λ_min · σ_k(M_2)^{3/2}).
Suppose 1/40 ≥ 2β ≥ ‖E‖_F. Applying Theorem C.1, we obtain vectors u_1, ..., u_k and scaling factors λ_1, ..., λ_k such that ‖u_i − √w_i · M^{−1/2} v_i‖ ≤ 16 · β · √w_i and |1/√w_i − λ_i| ≤ 5 · β. The w_i are now recovered by taking the inverse square of the λ_i, so we have that when 10β < (1/4)λ_i,
|ŵ_i − w_i| = |1/(λ_i ± 10β)² − w_i| ≤ |1/(λ_i ± 10β)² − 1/λ_i²| ≤ 10β · (2λ_i − 10β) / (λ_i²(λ_i − 10β)²) ≤ 40β,
where to obtain the second inequality we have taken a Taylor expansion, and in the final inequality we have used the fact that 10β < (1/4)λ_i.
We now recover v_i by taking v̂_i = λ_i · Û u_i, so we have
‖v̂_i − v_i‖ ≤ ‖λ_i · Û √w_i · M^{−1/2} v_i − v_i‖ + ‖λ_i · Û (u_i − √w_i M^{−1/2} v_i)‖
≤ (λ_i √w_i) ‖(Û · M^{−1/2} − I) v_i‖ + (1 − λ_i √w_i) ‖v_i‖ + ‖Û‖ · 16βλ_i √w_i
≤ (1 + 10β) ‖Û · M^{−1/2} − I‖ + 10β + ‖Û‖ · 16β(1 + 10β).
It now suffices to bound ‖Û M^{−1/2} − I‖, for which it in turn suffices to bound ‖M^{−1/2} Û Û M^{−1/2} − I‖, since the eigenvalues of AA^⊤ are the squares of the eigenvalues of A. Consider ‖(M^{−1/2} Π_k (M + E_M) Π_k) M^{−1/2} − I‖, where Π_k is the projector onto the top k eigenvectors of M. Because both matrices are PSD, this finally reduces to bounding ‖M − Π_k (M + E_M) Π_k‖. Since M is rank k, we have that ‖M − Π_k (M + E_M) Π_k‖ = σ_{k+1}(E_M) ≤ ‖E_M‖.
Thus, taking loose bounds, we have
‖v_i − v̂_i‖ ≤ ‖E_M‖^{1/2} + 60β · ‖M_2‖^{1/2} + 10β,
as desired.
Fast camera focus estimation for gaze-based focus control
Wolfgang Fuhl^a, Thiago Santini^a, Enkelejda Kasneci^a
^a Eberhard Karls University Tübingen, Perception Engineering, Germany, 72076 Tübingen, Sand 14, Tel.: +49 70712970492, [email protected], [email protected], [email protected]
arXiv:1711.03306v1 [cs.CV] 9 Nov 2017
Abstract
Many cameras implement auto-focus functionality; however, they typically require the user to manually identify the location to
be focused on. While such an approach works for temporally-sparse autofocusing functionality (e.g., photo shooting), it presents
extreme usability problems when the focus must be quickly switched between multiple areas (and depths) of interest – e.g., in a
gaze-based autofocus approach. This work introduces a novel, real-time auto-focus approach based on eye-tracking, which enables
the user to shift the camera focus plane swiftly based solely on the gaze information. Moreover, the proposed approach builds a
graph representation of the image to estimate depth plane surfaces and runs in real time (requiring ≈ 20ms on a single i5 core), thus
allowing for the depth map estimation to be performed dynamically. We evaluated our algorithm for gaze-based depth estimation
against state-of-the-art approaches based on eight new data sets with flat, skewed, and round surfaces, as well as publicly available
datasets.
Keywords:
Shape from focus, Depth from focus, Delaunay Triangulation, Gaze based focus control, eye controlled autofocus, focus
estimation
1. Introduction
Human vision is foveated, i.e., sharpest vision is possible only in the central 2° of the field of view. Therefore, in
order to perceive our environment, we perform eye movements,
which allow us to focus on and switch between regions of interest sequentially [23]. Modern digital microscopes and cameras
share a similar problem, in the sense that the auto focus is only
applied to the center of the field of view of the camera.
For various applications – such as microsurgery, microscopical material inspection, human-robot collaborative settings, security cameras – it would be a significant usability improvement
to allow users to adjust the focus in the image to their point of
interest without reorienting the camera or requiring manual focus adjustments. This would not only generate a benefit for
the user of the optical system, but also for non-users – for instance, patients would benefit from a faster surgery and a less
strained surgeon. For a security camera, the security staff could
watch over a complete hall with eye movement speed, quickly
scanning the environment for suspicious activity. Applied to
different monitors, the efficiency gain would be even greater.
In this work, we used a commercial eye tracker to capture
the subject’s gaze. This gaze is then mapped to a screen where
the camera images (two images are overlaid in red and cyan
for a 3D impression) are presented. The gaze position on the
image is mapped to the estimated depth map, from which the
focal length of the camera is automatically adjusted. This enables the user to quickly explore the scene without manually
adjusting the camera’s focal length. The depth map creation for
the complete scene takes ≈20ms (single core from i5), whereas
the preprocessing for each image takes ≈15ms. For depth map
creation, we record 20 images with different focal lengths, although this process can be completed with fewer images by
trading off accuracy. This leads to a system update time of
332ms based on our camera's frame rate (60Hz or 16.6ms per
frame), but buffering delays increase this time to ≈450ms. It is
worth noticing that this process can be sped-up through a faster
camera, multiple cameras, and GPU/multicore processing but is
limited by the reaction time of the optotune lens (2.5ms [17]),
which is used to change the focus.
In the following sections, we use depth as an index in relation
to the acquired image set. The real world depth can be calculated using the resulting focal length (acquired from the index)
and the camera parameters.
2. Related work
To compute a 3D representation of an area given a set of
images produced with different camera focal lengths, two steps
have to be applied. The first step is measuring or calculating
how focused each pixel in this set is. Afterwards, each pixel is
assigned an initial depth value (usually the maximum response)
on which the 3D reconstruction is performed.
In the following, we briefly describe the shape-from-focus measuring operators, grouping them similarly to Pertuz et al. [19], to which we refer the reader for a broader overview.
• Gradient-based measure operators are first or higher order
derivatives of the Gaussian and are commonly applied for
edge detection. The idea here is that unfocused or blurred
edges have a lower response than sharp or focused edges.
The best performing representatives according to [19] are
first-order derivatives [21, 8], second central moment on
first order derivatives [18], and the second central moment
on a Laplacian (or second order derivatives) [18].
• Statistics-based measurements are based on calculated
moments of random variables. These random variables are
usually small windows shifted over the image. The idea
behind statistics for focus measurements is that the moments (especially the second central moment) reach their
maximum at focused parts of the image. According to
[19], the best representatives are [32] using Chebyshev
moments, second central moment of the principal components obtained from the covariance matrix [28], second
central moment of second central moments in a window
[18], and second central moment from the difference between the image and a blurred counterpart [9, 24, 26, 12].
Figure 1: The system for image recording consisting of a digital camera
(XIMEA mq013mge2) and an optotune lens (el1030). On the left side, the
subject with eye tracker looking at the image visualization is shown. The same
subject with 3D goggles is shown on the right.
• Frequency-based measures transform the image to the frequency domain, which is usually used in image compression. These transformations are
Fourier, wavelet or curvelet transforms. Afterwards, the
coefficients of the base functions are summed [31, 30, 11]
or the statistical measures are applied on the coefficients
[19]. The idea behind frequency-based focus measure is
that the need for many base functions (or non zero coefficients) to describe an image is a measure of complexity or
structure in the image. This amount of structure or complexity is the measure of how focused the image is.
• Texture-based measures use recurrence of intensity values
[10, 24, 26], computed local binary patterns [13], or the
distance of orthogonally computed gradients in a window
[12]. The idea here is equivalent to the frequency-based
approaches, meaning that the amount of texture present
(complexity of the image) is the measure of how focused
the image is.
3. Setup description
As shown in Figure 1, the setup consists of a binocular Dikablis Professional eye tracker [6], a conventional computer visualizing the image from the camera, and an optotune lens. The
optotune lens has a focal tuning range of 50mm to 120mm [17],
which can be adjusted online over the lens driver (serial communication). The reaction time of the lens is 2.5ms [17]. We
used the XIMEA mq013mge2 digital camera shown in Figure 1
with a frame rate of 60 Hz and resolution 1280x1024 (we used
the binned image 640x512).
For estimating the subject's gaze, we used the software EyeRec [22] and the pupil center estimation algorithm ElSe [7]. The calibration was performed with a nine-point grid, fitting a second-order polynomial to the pupil center in the least-squares sense.
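Such a second-order polynomial mapping can be fitted with an ordinary least-squares solve. The sketch below is an illustration under that assumption (variable names such as `pupil_calib` are hypothetical and this is not the EyeRec implementation): it maps pupil-center coordinates to screen coordinates.

```python
import numpy as np

def poly2_features(p):
    """Second-order polynomial features of pupil centers p = (x, y), one per row."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_mapping(pupil_xy, screen_xy):
    """Least-squares coefficients mapping pupil centers to screen points (e.g. a 9-point grid)."""
    A = poly2_features(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # shape (6, 2): one column per screen axis
    return coeffs

def apply_gaze_mapping(coeffs, pupil_xy):
    return poly2_features(pupil_xy) @ coeffs

# Usage with calibration recordings (arrays of shape (num_points, 2)):
# coeffs = fit_gaze_mapping(pupil_calib, screen_calib)
# gaze_on_screen = apply_gaze_mapping(coeffs, pupil_live)
```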
Additionally, it has to be noted that our setup includes a set of fixed lenses between the optotune lens and the object, which could not be disclosed at the time of writing (due to a Non-Disclosure Agreement).
Regarding 3D reconstructions, common methods are:
• Gaussian and polynomial fit: These techniques fit a
Gaussian [16] or polynomial [25] to the set of focus measures. To accomplish this, samples are collected outgoing
from the maximum response of a pixel in the set of measurements (for each image, one measurement) in both directions. The maximum of the resulting Gaussian or polynomial is then used as depth estimate.
• Surface fitting: Here the samples for the fitting procedure are volumes of focus measures around a pixel. The surface is fitted to those samples, and the value aligned (in the direction of the set of measurements) to the pixel is used as its new value. The improvement over the Gaussian or polynomial fit is that the neighborhood of a pixel influences its depth estimation too. This approach, together with a neural network for final optimization, has been proposed by [4].
• Dynamic programming: In this approach, the volume is divided into subvolumes. For each sub-volume, an optimal focus measure based on the result of a least squares optimization technique is computed. These results are combined and used as depth estimation [1, 2, 27, 14].
4. Application
The Graphical User Interface (GUI) of the system can be seen in Figure 2; in this GUI, the subject's gaze is mapped to the viewing area through marker detection and a transformation from eye tracker coordinates to screen coordinates. The top two images are from two cameras with optotune lenses. Their depth maps can be seen on the right side, and the correspondence is indicated by an arrow. Based on a slight shift between both cameras, it is also possible to use the red-cyan technique to give the user a 3D impression (Figure 1, right side). The focal length of both cameras is automatically set to the depth at the user's gaze position.

Figure 2: The GUI of the system. In the top row, the images from the two cameras with optotune lenses are shown. The depth map for each is shown on the right; the correspondence is marked by an arrow. Markers on the left side are used to map the gaze coordinates from the head-mounted eye tracker to the subject's viewing area. For a 3D representation, we overlay the images from both cameras in red and cyan, as can be seen in the subject's viewing area.

Figure 3: The algorithmic workflow. The gray boxes are the input and output of the algorithm. White boxes with rounded corners are algorithmic steps.

Figure 4: Canny-edge-based focus estimation for one input image (4a). Panels: (a) Input, (b) Magnitude, (c) Edges, (d) Filtered magnitude. The output of the Canny edge filter is shown in 4b and 4c, and the filtered magnitude image in 4d.

5. Method

All steps of the algorithm are shown in Figure 3. The input to the algorithm is a set of grayscale images recorded with different focal lengths. The images have to be in the correct order; otherwise, the depth estimation will assign wrong depth values to focused parts of the volume. The main idea behind the algorithm is to estimate the depth map only based on parts of the image in which the focus is measurable, and to interpolate it to the surrounding pixels if possible. Regions where the focus is measurable are clear edges or texture in an image. Plain regions, for example, usually induce erroneous depth estimations, which have to be filtered out afterwards, typically using a median filter in classical shape-from-focus methods. Therefore, we use the Canny edge detector [5] as focus measure. The applied filter is the first derivative of a Gaussian. The resulting edges are used to filter the magnitude response of the filter, allowing only values with assigned edges to pass. For each filtered pixel magnitude, a maximum map through the set of responses is collected. In this map, most of the pixels have no value assigned. Additionally, it has to be noted that the same edge can be present in this map multiple times, because the changing focal length influences the field of view of the camera. This leads to tracing edges in the maximum map.

After computing and filtering the focus measures of the image set, they have to be separated into parts. Therefore, we need candidates representing a strong edge part and their corresponding edge trace to interpolate the depth estimation for the candidate pixel. The candidate selection is performed by selecting only local maxima using an eight-connected neighborhood in the maximum map. These local maxima are used to build a graph representing the affiliation between candidates. This graph is built using the Delaunay triangulation, connecting candidates without intersections.

The separation of this graph into a maximum response and edge trace responses is performed by separating nodes that are maximal in their neighborhood from those that are not. For the interpolation of the depth value of maximal nodes, non-maximal nodes are assigned based on their interconnection to the maxima and to an additional non-maximal node. Additionally, the set of responses is searched for values at the same location, since the influence of the field of view does not affect all values in the image, and, as a result, centered edges stay at the same location. The interpolation is performed by fitting a Gaussian (as in [16]) to all possible triple assignments and using the median of all results.
Figure 5: Maximum responses in the set of images. Panels: (a) magnitude of maximum values, (b) depth of maximum values, (c) local maxima, (d) graph representation. In 5a, the maximum magnitude collected for each image location is shown (black means no measurement collected), and the corresponding depth values are shown in 5b. 5c shows the local maxima (pixels enlarged for visualization) of the maximum magnitude map 5a, on which a Delaunay triangulation is applied, resulting in the graph representation 5d.

Figure 6: Maximum magnitude responses (6a: magnitude trace) and the assigned depth index (6b: depth trace) in the set of images. In comparison to Figure 5, where only 19 images were used in the input set, here the set consists of 190 images to show the traces more clearly.

5.2. Graph Representation

After the focus measure has been applied and filtered in each image, the maximum along z at each location is collected in a maximum map (Figure 5a, Equation 1).
M(x, y) = max_z V(x, y, z)   (1)
D(x, y) = z if M(x, y) = V(x, y, z), and D(x, y) = 0 if M(x, y) = 0   (2)
Equation 1 calculates the maximum map M (Figure 5a) where
V represents the volume of filtered focus measures (one for each
image). The coordinates x, y correspond to the image pixel location, and z is the image index in the input image. Equation 2
is the corresponding depth or z-index map D (Figure 5b) where
an image set position is assigned to each maximum value.
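A minimal numpy sketch of Equations 1 and 2 is given below (illustrative names; indices are 0-based here, unlike the 1-based image indices in the text).

```python
import numpy as np

def max_and_depth_maps(V):
    """Equations 1 and 2 for a stack V of filtered focus measures, shape (H, W, Z).

    Returns the maximum map M and the depth-index map D
    (D is 0 where no edge response was collected).
    """
    M = V.max(axis=2)
    D = np.where(M > 0, V.argmax(axis=2), 0)
    return M, D

# Usage: V[..., z] holds the Canny-masked magnitudes of image z of the focal stack.
# M, D = max_and_depth_maps(np.stack(filtered_magnitudes, axis=2))
```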
In Figure 5a and its corresponding depth map 5b, it can be seen that not every pixel has a depth estimation. Additionally, most of the collected edges have traces, meaning that the edge was collected in images recorded with different focal lengths. The trace occurs because changes in focal length induce a scaling of the field of view of the camera. In Figure 6, these traces and their occurrences are shown more clearly due to the increased size of the input image set (190 images). For Figure 5, we used 19 images in the input set. The bottom right part of Figure 6a shows that the traces are not as pronounced there as in the other parts. This is due to the lens center (in our setup, bottom right), from which the impact of the field-of-view scaling increases linearly with distance.
The next step of the algorithm is the computation of local
maxima (Figure 5c) and, based on those, setting up a graph by
applying the Delaunay triangulation (Figure 5d). The idea behind this step is to abstract the depth measurements, making it
possible to estimate the depth of plain surfaces (as long as their
borders are present) without the need of specifying a window
size. Additionally, this graph is used to assign a set of depth
estimations to one edge by identifying connected traces. These
values are important because the set of input images does not
have to contain the optimal focus distance of an edge. Therefore, the set of depth values belonging to one edge are used to
interpolate its depth value.
The graph spanned by the maxima nodes and the corresponding interpolated depth values are now the representation of the
depth map. For further error correction, an interdependent median filter is applied to each node and its direct neighbors in a
non-iterative way to ensure convergence. The last part of the algorithm is the conversion of this graph into a proper depth map.
It has to be noticed that this graph is a set of triangles spanned
between maximal nodes. Therefore, each pixel in the resulting
depth map can be interpolated using its barycentric coordinates
between the three assigned node values of the triangle it is assigned to. Pixels not belonging to a triangle have no assigned
depth value. All steps are described in the following subsections in more detail.
5.1. Focus Measurement
Figure 4 shows the first step of the algorithm for one input image (4a). We used the Canny edge filter [5] with the first derivative of a Gaussian as kernel, N′(x, y) = (1/(2πσ²)) · e^{−(x²+y²)/(2σ²)} · ∂/∂x ∂/∂y. The response (magnitude) of the convolution with this kernel is visualized in Figure 4b. For σ, the standard deviation of the Gaussian, we used √2. After adaptive threshold selection (95% of the pixels are not edges) and non-maximum suppression of the Canny edge filter, we use the resulting edges (Figure 4c) as a filter mask. In other words, only magnitude values assigned to a valid edge are allowed to pass. The stored magnitude responses for the input image are shown in Figure 4d. The idea behind this step is to restrict the amount of information gathered per image, consequently reducing the impact of noise on the algorithm. These two parameters (σ and the non-edge ratio) are the only variables of the proposed method.
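A minimal OpenCV sketch of such an edge-masked focus measure is shown below. It is an approximation, not the authors' implementation: the first derivative of a Gaussian is approximated by a Gaussian blur followed by Sobel derivatives, and the Canny thresholds are derived from a magnitude percentile as a stand-in for the adaptive selection described above; function and parameter names are illustrative.

```python
import cv2
import numpy as np

def focus_measure(gray, sigma=float(np.sqrt(2)), non_edge_ratio=0.95):
    """Edge-masked gradient magnitude for one grayscale (uint8) image of the focal stack."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)

    high = float(np.percentile(magnitude, 100 * non_edge_ratio))
    edges = cv2.Canny(gray, high / 2, high)          # binary edge mask
    return np.where(edges > 0, magnitude, 0.0)       # pass only magnitudes on valid edges

# Usage on a focal stack of grayscale images:
# stack = np.stack([focus_measure(img) for img in images], axis=2)
```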
Algorithm 1 Algorithm for candidate selection, where CAN(a) are all candidates for node a, CN(a) are the connected neighbors of node a, V the set of focus measure responses for each input frame, z the frame index, D(a) the depth index of node a, Gall all local maxima, Gmax all maximal nodes and Gnonmax all non-maximal nodes.
Require: Gall, Gmax, Gnonmax, V
function SelectCorrespondences(Gall, Gmax, Gnonmax)
  for a ∈ Gmax do
    for b ∈ CN(Gall, a), b ∈ Gnonmax do
      if D(a) ≠ D(b) then add(CAN(a), b) end if
      for c ∈ CN(Gall, b) and c ∈ Gnonmax do
        if D(a) ≠ D(c) then add(CAN(a), c) end if
      end for
    end for
    for z ∈ V(a), V(a, z) > 0 do
      if D(a) ≠ z then add(CAN(a), z) end if
    end for
  end for
  return CAN
end function
Figure 7: White dots represent node locations (pixels enlarged for visualization). Panels: (a) nodes Gmax, (b) nodes Gnonmax. In 7a, the nodes which have an equal or larger magnitude value compared to their connected neighbors in Gall are shown. 7b shows the remaining non-maximal nodes of Gall after removing those in Gmax.
The local maxima (Figure 5c) are computed based on an eight-connected neighborhood on the maximum magnitude map (Figure 5a). Based on those points, the Delaunay triangulation (Figure 5d) sets up a graph, where each triple of points creates a triangle if its circumcircle does not contain another point. This graph Gall (Figure 5) contains multiple maxima from the same edge on different depth planes. To separate those, we introduce two types of nodes: a maximal response set Gmax (Figure 7a) and a non-maximal response set Gnonmax (Figure 7b).
Gmax = {i ∈ Gall : V(j) ≤ V(i) for all j ∈ CN(Gall, i)}   (3)
Gnonmax = {i ∈ Gall : i ∉ Gmax}   (4)

Equation 3 is used to build the maximal response set Gmax (Figure 7a), where i is a node in Gall and CN(Gall, i) delivers all connected neighbors of i. Therefore, only nodes whose magnitude value is equal to or higher than that of all their connected neighbors belong to Gmax. Gnonmax (Figure 7b) consists of all nodes in Gall which are not in Gmax, as specified in Equation 4.
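A minimal sketch of this split, assuming the candidate points and magnitudes from the maximum map are already available, might use scipy's Delaunay triangulation to obtain the adjacency; names such as `candidate_xy` are illustrative and this is not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def split_max_nodes(points, values):
    """Equations 3 and 4: split Delaunay-graph nodes into G_max and G_nonmax.

    points: (N, 2) pixel coordinates of local maxima; values: their magnitudes.
    """
    tri = Delaunay(points)
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:                  # every triangle connects its 3 corners
        for a in simplex:
            neighbors[a].update(int(b) for b in simplex if b != a)

    g_max = {i for i, nbrs in enumerate(neighbors)
             if all(values[j] <= values[i] for j in nbrs)}
    g_nonmax = set(range(len(points))) - g_max
    return g_max, g_nonmax, neighbors

# Usage with the local maxima extracted from the maximum map M (Equation 1):
# g_max, g_nonmax, adjacency = split_max_nodes(candidate_xy, candidate_mag)
```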
5.4. Interpolation
For estimating the real depth of a node in Gmax, we used the three-point Gaussian fit technique proposed by Willert and Gharib [29] and first used for depth estimation by Nayar et al. [16]. The assumed Gaussian is M = M_peak · e^{−0.5((D−D̄)/σ)²}, where M is the focus measure response, D the depth, and σ the standard deviation of the Gaussian. This can be rewritten with the natural logarithm as ln(M) = ln(M_peak) − 0.5((D−D̄)/σ)². D̄ is the depth value where the Gaussian has the highest focus measure (the mean), and it is obtained using Equation 6.
5.3. Node Correspondence Collection
Algorithm 1 performs the candidate selection. Candidates
are possible node correspondences and marked by CAN(a),
where a is the index node for the assignment. For each node in
Gmax , connected nodes in Gnonmax with a different depth value
are collected. Since we want to collect all nodes that could possibly build a line over a trace and the maximum could be the last
or first measurement, we have to collect the connected nodes to
the node from Gnonmax too. In case the node from Gmax is close
to the lens center, where the scaling has low to none impact, we
have to search in the volume of responses (V ) as well.
After all candidates are collected, each pair has to be inspected to be a possible line trace or, in other words, a valid
pair of corresponding focus measures.
COR(a) = {(b, c) : b, c ∈ CAN(a), b ≠ c, D(a) ≠ D(b) ≠ D(c), ∠(ab, ac) = π}   (5)

Equation 5 describes the correspondence collection based on the collected candidates CAN(a) belonging to node a. The conditions require distinct depth indices, where D(a) is the depth index of node a, and ∠(ab, ac) is the angle between the two vectors ab and ac. In our implementation, we used −0.95 (−1 corresponds to π) because an image is a discrete grid and because of floating-point inaccuracy.

M⁺(a, b) = ln(M(a)) − ln(M(b))
M⁻(a, b, c) = M⁺(a, b) + M⁺(a, c)
D²⁻(a, b) = D(a)² − D(b)²
ΔD(a, c) = 2 · |D(a) − D(c)|
D̄(a, b, c) = (M⁺(a, c) · D²⁻(a, b)) / (ΔD(a, c) · M⁻(a, b, c))   (6)

In Equation 6, a, b, c are node triples obtained from COR(a), where M is the focus measure and D is the depth value (we use the same letters as in Equations 1 and 2 for simplification, and note that this is not valid for non-members of Gall, which are obtained through the response volume of Equation 1). Since COR(a) can have more than one pair of possible interpolations, we use the median over all possible interpolation values, D(a) = Median({D̄(a, b, c) : (b, c) ∈ COR(a)}).

Figure 8: The graph built on Gmax using Delaunay triangulation.

Figure 9: Panels: (a) barycentric coordinates, (b) interpolation. In 9a, the barycentric coordinates of a triangle spanned by nodes A, B and C are shown. The gray dot P in the middle of this triangle has coordinates a, b and c, which relate to its distances to A, B and C. 9b shows an exemplary interpolation in such a triangle, where the numbers next to each corner are the intensity values of the corner pixels.
5.5. Rebuild Graph
For using those interpolated nodes in Gmax as image representation, we have to rebuild the graph. Again we use the Delaunay triangulation with the result shown in Figure 8. Due
to possible errors from the interpolation or the focus measurement, we apply a median filter on the depth of the node and
its neighborhood (D(a) = Median({D(CN(a)), D(a)})). This
median interpolation is performed interdependently; in other
words, the values are stored directly into the depth map D,
therefore influencing the median filtering of its neighbors. We
used this way of median filtering because it delivered slightly
better results. A more time-consuming approach would be iteratively determining the median. However, such an iterative approach could lead to oscillation and therefore no convergence.
Figure 10: Panels: (a) depth map, (b) 3D model. In 10a (normalized), white is closer, dark gray is further away, and black means that the depth measure could not estimate a depth value. The 3D model in 10b is generated from the depth map in 10a using Matlab.

5.6. Depth Map Creation

For depth map creation, the graph in Figure 8 has to be transformed into a surface. This is done by assigning each pixel in a triangle the weighted value of the depth estimations from the corner nodes. The weights are determined using the distance to each node. The idea behind this is to have linear transitions between regions with different depth values. This makes it more comfortable for the subject to slide over the scene with their gaze, without an oscillatory effect of the focal length close to region borders. This can be achieved very fast using barycentric coordinates (Figure 9a) to linearly interpolate (Figure 9b) those three values, as is usually done for 3D models in computer graphics.

a = Δ_PBC / Δ_ABC,  b = Δ_PAC / Δ_ABC,  c = Δ_PAB / Δ_ABC   (7)

Equation 7 describes the transformation of a point P(a, b, c) from Cartesian to barycentric coordinates, where A, B and C are the corner nodes of a triangle (Figure 9a), Δ is the area of the spanned triangle, and a, b and c are the barycentric coordinates. An exemplary interpolated triangle can be seen in 9b.

D(P) = a · D(A) + b · D(B) + c · D(C)   (8)

For depth assignment to point P, Equation 8 is used, where D is the depth value. The resulting depth map after linear interpolation of all triangles can be seen in Figure 10a. In this depth map, white is closer to the camera and dark is further away. Comparing Figure 10a to Figure 8, it can be seen that areas for which no enclosing triangle exists are treated as not measurable (black regions in Figure 10a). If an estimate for those regions is desired, it is possible to assign those pixels the depth value of the closest pixel with depth information, or to interpolate the depth value using barycentric coordinates of the enclosing polygon. The 3D reconstruction based on the depth map from Figure 10a can be seen in Figure 10b.

6. Data sets

In Figure 11, all objects of the new data sets are shown. We scanned each object in 191 steps over the complete range (focal tuning range of 50mm to 120mm [17]) of the optotune lens. Therefore, each object set contains 191 images with a resolution of 640x512.

The objects plastic towers, lego bevel, lego steps, and glass are coated with a grid of black lines, which should simplify the
Figure 11: Shows all objects used to generate the datasets. Below each object image stands the title which will be used further in this document.
depth estimation. For the objects tape bevel and tape steps,
we used regular package tape to reduce the focus information for different focal lengths. The objects cup, raspberry pi, foam, tin, and CPU cooler are real objects, where tin and raspberry pi are scanned in an oblique position. All objects except the CPU cooler are used for the evaluation, whereas that object is used in the limitations section, because its laminate is interpolated to a flat surface (without modification of the algorithm). This is due to the suppression of the non-maximal responses along the laminate (which do not represent real edges).
For evaluation we also used zero motion from Suwajanakorn
et al. [27] and balcony, alley and shelf from Möller et al. [14].
7. Evaluation

For the evaluation, we used 20 images per object from the set of 191. Those images were equally spread, meaning that the change in focal length between consecutive images is constant. In addition to the objects in Section 6, we used the data sets provided by Suwajanakorn et al. [27] and Moeller et al. [14], which are recorded with a professional camera in the real world. For the evaluation, we did not change any parameter of our algorithm. We used the algorithm variational depth [14] with the parameters as specified by the authors on a GeForce GT 740 GPU. The shape-from-focus (SFF) measures we evaluate are modified gray level variance (GLVM), modified Laplacian [15] (MLAP), Laplacian in 3D window [3] (LAP3d), variance of wavelet coefficients [31] (WAVV) and ratio of wavelet coefficients [30] (WAVR), using the Matlab implementation from Pertuz et al. [19, 20], since these are the best performing SFF measures in [19]. For optimal parameter estimation, we tried all focus measure filter sizes from 9 to 90 in steps of 9, as for the median filter size. Additionally, the Gauss interpolation was used.

7.1. Algorithm evaluation

Since human perception of sharpness varies between subjects, it is very challenging to make exact depth measurements for all objects and to manually label those. Therefore, we decided to show the depth map for each algorithm (proposed, variational depth [14] and SFF) and the timings. In addition, we provide an evaluation against the best index in the image stack as seen by the authors.

Furthermore, in the supplementary material, we provide the parameter settings for SFF. Since SFF was implemented in MATLAB, a comparison with regard to the runtime is not absolutely fair, but we argue the results are close to what could be reached by a C implementation. For visualization and comparison purposes, we normalized the depth maps based on the input stack with a step size of 10 (between consecutive frames, i.e., the first frame has focus value 1 and the second would have 11). Invalid regions are colored red. The normalization of the algorithm variational depth [14] is always over the complete range, because the implementation returns no corresponding scale.

Figure 12 shows the results of our proposed method and the state of the art. The first four rows represent the results for objects whose surface is marked with a grid. The purpose here is to have a comparison based on more objects. For the plastic tower, the irregular transitions between the four regions are due to the triangle interpolation and are therefore an accepted result. The corresponding 3D map is shown in Figure 10b. In the third row (lego steps), it can be seen that our method is able to correctly detect not measurable regions. The closest result is WAVR, with 60 times our runtime.

For the tape bevel, we see a superior performance of GLVM, but our method is closest to its result in comparison to the others. For the tape steps object, our method outperforms related approaches in a fraction of the runtime. The most difficult object in our data set is the cup, which contains a big reflection and of which only a small part is textured. The other methods estimate the rim to be closer to the camera than the body of the cup, which is not correct; our method labels large parts as not measurable. The
Figure 12: The results on all new data sets in terms of the depth map and the runtime are shown. Red regions represent areas that are marked by the algorithm to be not measurable. Brighter means closer to the camera.
Figure 13: The results on the data sets alley [14], balcony [14], shelf [14] and zeromotion [27]. The first three data sets have a resolution of 1920x1080, whereas
the last one has a resolution of 774x518 pixel. In the top row, each algorithm is named as shown in figure 12. In addition to the depth maps, we show the processing
time of each algorithm. Red regions represent areas that were marked by the algorithm as not measurable. Brightness represents the distance to the camera (the
brighter, the further away).
valid region is estimated appropriately with a negligible error
(white spot).
The Raspberry PI is estimated correctly by all methods except for GLVM. For foam, all methods perform well. The last
object of the new data set is a tin. The two bright spots on the
left side of the result of our method are due to dust particles
on the lens which get sharp on a closer focal length. The best
performing algorithm for the tin is GLVM.
Figure 13 presents the results on the data sets provided by
Suwajanakorn et al. [27] and Moeller et al. [14]. In comparison
to the related algorithms, the results achieved by our approach
are not as smooth. For the data set balcony [14] our algorithm
did not get the centered leaf correctly but the depth approximation of the remaining area is comparable to that achieved by
the state of the art, while our result does not look as smooth.
For the shelf [14] and zeromotion [27] datasets, our algorithm
performs better because it detects the not measurable regions.
For the algorithm variational depth [14], it was not possible to provide a depth map for zeromotion from [27] because the algorithm crashes (we tried to crop the center out of the image to 640x512 and 640x480, but it still crashed).
The purpose of this evaluation is not to show an algorithm
which is capable of estimating correct depth maps for all scenarios but to show that our algorithm can produce comparable
results in a fraction of runtime without changing any parameter.
We provide all depth maps in the supplementary material and
as a download, together with a Matlab script to show them in
3D.
Method      R1      R2      R3      R4
GLVM        3.40    3.88    6.76    17.33
LAPM        6.06    3.80    11.48   21.92
LAP3        12.01   3.87    13.47   22.02
WAVV        19.10   4.33    19.87   30.87
WAVR        27.15   4.59    32.53   57.54
VARDEPTH    28.25   18.14   10.29   35.22
Proposed    12.96   4.07    6.96    5.61

Table 1: Results for the data set tin. The values represent the mean absolute errors over the marked regions. The regions are named R1-4 and visualized as red for R1, green for R2, blue for R3 and cyan for R4.

7.1.1. Best index evaluation

For evaluation against the best index in the image stack, we selected a region and an index in which the authors see this region as best focused. It has to be mentioned that we recorded 191 images with different focal lengths for each data set. For the depth map reconstruction, we used 19 equally spaced images. For evaluation we used the image stacks from "lego steps",
"plastic tower", "tape steps" and "tin" (see Figure 12). The same
parameters as for the images in Figure 12 and 13 are used.
Tables 1, 2, 3, and 4 show the mean absolute error ((1/n) Σ_i |v_i − v_gt|) for the data sets "tin", "lego steps", "tape steps", and "plastic tower", respectively. Above each table we placed an image of the specific data set with the evaluated regions marked by different colors. Regions without depth estimation (marked red in Figure 12) are excluded from the calculation of the mean absolute error. Our method shows similar performance to the state of the art with less computational time.
Method      R1      R2      R3      R4
GLVM        8.26    4.65    3.78    14.86
LAPM        12.85   5.28    3.75    11.68
LAP3        12.77   5.27    3.61    11.80
WAVV        12.98   5.33    3.57    11.77
WAVR        14.13   4.92    3.09    11.76
VARDEPTH    19.61   0.72    2.68    12.98
Proposed    3.67    4.14    5.28    12.90

Table 2: Results for the data set lego steps. The values represent the mean absolute errors over the marked regions. The regions are named R1-4 and visualized as red for R1, green for R2, blue for R3 and cyan for R4.

Method      R1      R2      R3       R4
GLVM        21.56   14.05   25.83    2.18
LAPM        45.86   62.75   39.90    63.64
LAP3        45.86   62.75   39.90    63.64
WAVV        34.84   59.93   24.33    43.74
WAVR        56.53   65.62   95.51    112.41
VARDEPTH    52.89   61.42   104.43   109.43
Proposed    12.39   19.14   6.27     15.93

Table 3: Results for the data set tape steps. The values represent the mean absolute errors over the marked regions. The regions are named R1-4 and visualized as red for R1, green for R2, blue for R3 and cyan for R4.
8. Limitations
The algorithm is capable of determining plain surfaces if
valid measures surrounding this surface are present. This advantage comes with the drawback that a non-existing surface is interpolated in case of invalid measures in a region. This effect is depicted in Figure 14d, where the laminate
of the CPU cooler is interpreted as a plain surface. The proposed algorithm could determine the lower path in the center of
the image but, as can be seen in Figure 14e, all Gmax nodes are
on the top of the laminate resulting in a wrong interpretation.
For a better reconstruction of the CPU cooler, we adapted the
proposed algorithm by simply not removing the Gnonmax nodes
for the second graph building step (meaning we used all nodes
in Gall ). As can be seen in Figure 14f, the resulting depth map
got the laminate correctly but still fails for the right-skewed
part, which is because there was no valid focus measurement.
This can be seen in the graph shown in Figure 14g.
The disadvantages of using all nodes are the slightly increased runtime (296ms instead of 283ms) due to additional
costs introduced by interpolation. Additionally, some nodes are
faulty because they either belong to an edge trace or are an invalid measurement. To avoid this, an iterative median filter should be applied, resulting in even higher runtimes. Another
approach for improvement would be filtering nodes which have
been selected for interpolation. However, this would not reduce
the number of faulty measurements.
For comparison, we used the modified gray level variance with filter size 18, median filter size 63 and Gaussian interpolation, which delivered the best result. The runtime was 7.06sec (Matlab implementation from Pertuz et al. [19, 20]) and the result is shown in Figure 14b. The black regions are added to the depth map because of zero reliability; otherwise, the focus measure yields a result everywhere. The algorithm variational depth [14] produced the result in Figure 14c with default parameters and a runtime of 6.91sec on a GeForce GT 740 GPU. For the proposed algorithm, we used 20 images as for the evaluation and did not change the Canny parameters.
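For readers who want to reproduce a comparable baseline, the sketch below shows a simple local gray-level-variance focus measure and an argmax-plus-median-filter depth-from-focus step in Python. This is only an approximation of the operators used above (the exact modified gray level variance follows Pertuz et al. [19]); the filter sizes and all names are illustrative, and SciPy is assumed to be available.

import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def local_variance_focus(img, size=18):
    # Local gray-level variance as a focus measure (a stand-in for GLVM).
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)

def depth_from_focus(stack, filter_size=18, median_size=63):
    # Per-pixel argmax of the focus measure over the stack, then median filtering.
    fm = np.stack([local_variance_focus(im, filter_size) for im in stack])
    idx = np.argmax(fm, axis=0).astype(np.float64)
    return median_filter(idx, size=median_size)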
9. Conclusion
In this paper, we showed the use of eye tracking for automatically adapting focus to the presented image. Due to the real-time capability, our method can be beneficial in various applications where autofocus facilitates interaction (e.g. surgery, security, human-robot collaboration, etc.). The algorithm proposed here shows performance similar to the state of the art but requires minimal computational resources and no parameter adjustment.
In our future work, we will further optimize our method and provide a GPU implementation to make it computationally feasible to use more than one focus measuring method (e.g. GLVM). For evaluation purposes, we also plan to extend the data sets with different materials and structures.
References
[1] Ahmad, M. B., Choi, T.-S., 2005. A heuristic approach for finding best focused shape. IEEE Transactions on Circuits and Systems for Video Technology 15 (4), 566–574.
[2] Ahmad, M. B., Choi, T.-S., 2006. Shape from focus using optimization
technique. In: 2006 IEEE International Conference on Acoustics Speech
and Signal Processing Proceedings. Vol. 2. IEEE, pp. II–II.
[3] An, Y., Kang, G., Kim, I.-J., Chung, H.-S., Park, J., 2008. Shape from
focus through laplacian using 3d window. In: 2008 Second International Conference on Future Generation Communication and Networking.
Vol. 2. IEEE, pp. 46–50.
[4] Asif, M., Choi, T.-S., et al., 2001. Shape from focus using multilayer
feedforward neural networks. IEEE Transactions on Image Processing
10 (11), 1670–1675.
[5] Canny, J., 1986. A computational approach to edge detection. IEEE
Transactions on pattern analysis and machine intelligence (6), 679–698.
[Figure 14, panels (a)-(g): (a) Input, (b) Adapted, (c) Variational, (d) Proposed Gmax, (e) Gmax graph, (f) Proposed Gall, (g) Gall graph; full caption below.]

Table 4: Results for the data set plastic tower. The values represent the mean absolute errors over the marked regions. The regions are named R1-4 and visualized as red for R1, green for R2, blue for R3 and cyan for R4.

Method      R1      R2      R3      R4
GLVM        6.26    2.08    2.37    8.06
LAPM        4.70    2.29    3.20   10.82
LAP3        4.74    2.20    3.22   10.73
WAVV        9.74    2.96    3.75   13.35
WAVR        7.57    3.79    3.95   13.31
VARDEPTH    8.35    2.06    4.24   12.79
Proposed    3.27    4.02    2.57    2.70
[6] Ergoneers, 2016. Dikablis glasses.
[7] Fuhl, W., Santini, T., Kuebler, T., Kasneci, E., 03 2016. Else: Ellipse
selection for robust pupil detection in real-world environments. In: ACM
Symposium on Eye Tracking Research and Applications, ETRA 2016.
[8] Geusebroek, J.-M., Cornelissen, F., Smeulders, A. W., Geerts, H., 2000.
Robust autofocusing in microscopy. Cytometry 39 (1), 1–9.
[9] Groen, F. C., Young, I. T., Ligthart, G., 1985. A comparison of different
focus functions for use in autofocus algorithms. Cytometry 6 (2), 81–91.
[10] Hilsenstein, V., 2005. Robust autofocusing for automated microscopy
imaging of fluorescently labelled bacteria. In: Digital Image Computing:
Techniques and Applications (DICTA’05). IEEE, pp. 15–15.
[11] Huang, J.-T., Shen, C.-H., Phoong, S.-M., Chen, H., 2005. Robust measure of image focus in the wavelet domain. In: 2005 International Symposium on Intelligent Signal Processing and Communication Systems.
IEEE, pp. 157–160.
[12] Huang, W., Jing, Z., 2007. Evaluation of focus measures in multi-focus
image fusion. Pattern recognition letters 28 (4), 493–500.
[13] Lorenzo, J., Castrillon, M., Méndez, J., Deniz, O., 2008. Exploring the
use of local binary patterns as focus measure. In: Computational Intelligence for Modelling Control and Automation, 2008 International Conference on. IEEE, pp. 855–860.
[14] Moeller, M., Benning, M., Schönlieb, C., Cremers, D., 2015. Variational
depth from focus reconstruction. IEEE Transactions on Image Processing
24 (12), 5369–5378.
[15] Nayar, S. K., Ikeuchi, K., Kanade, T., 1989. Surface reflection: physical
and geometrical perspectives. Tech. rep., DTIC Document.
[16] Nayar, S. K., Nakagawa, Y., 1994. Shape from focus. IEEE Transactions
on Pattern analysis and machine intelligence 16 (8), 824–831.
[17] Optotune, 2016. Datasheet: El-10-30-series.
[18] Pech-Pacheco, J. L., Cristóbal, G., Chamorro-Martinez, J., FernándezValdivia, J., 2000. Diatom autofocusing in brightfield microscopy: a comparative study. In: Pattern Recognition, 2000. Proceedings. 15th International Conference on. Vol. 3. IEEE, pp. 314–317.
[19] Pertuz, S., Puig, D., Garcia, M. A., 2013. Analysis of focus measure operators for shape-from-focus. Pattern Recognition 46 (5), 1415–1432.
[20] Pertuz, S., Puig, D., Garcia, M. A., 2013. Reliability measure for shapefrom-focus. Image and Vision Computing 31 (10), 725–734.
[21] Russell, M. J., Douglas, T. S., 2007. Evaluation of autofocus algorithms
for tuberculosis microscopy. In: 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE,
pp. 3489–3492.
[22] Santini, T., Fuhl, W., Geisler, D., Kasneci, E., 2017. Eyerectoo: Opensource software for real-time pervasive head-mounted eye tracking. In:
VISIGRAPP (6: VISAPP). pp. 96–101.
Figure 14: In 14a one image of the CPU cooler set can be seen. 14b is the result
of SFF where we set the optimal parameters (modified gray level variance, filter
size 18, median filter size 63, Gauss interpolation, runtime 7.06sec), where
black means zero reliability. In 14c the result of the variational depth [14] with
default parameters is shown (runtime 6.91sec on GPU). 14d is the output of the
proposed method and the corresponding Gmax graph in 14e (runtime 283ms).
Result of the proposed method with the non-maximal nodes not removed (14f) and the
corresponding graph 14g (runtime 296ms).
[23] Santini, T., Fuhl, W., Kübler, T., Kasneci, E., 2016. Bayesian identification of fixations, saccades, and smooth pursuits. In: Proceedings of the
Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. ACM, pp. 163–170.
[24] Santos, A., Ortiz de Solórzano, C., Vaquero, J. J., Pena, J., Malpica, N.,
Del Pozo, F., 1997. Evaluation of autofocus functions in molecular cytogenetic analysis. Journal of microscopy 188 (3), 264–272.
[25] Subbarao, M., Choi, T., 1995. Accurate recovery of three-dimensional
shape from image focus. IEEE Transactions on pattern analysis and machine intelligence 17 (3), 266–274.
[26] Sun, Y., Duthaler, S., Nelson, B. J., 2004. Autofocusing in computer microscopy: selecting the optimal focus algorithm. Microscopy research and
technique 65 (3), 139–149.
[27] Suwajanakorn, S., Hernandez, C., Seitz, S. M., 2015. Depth from focus with your mobile phone. In: Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition. pp. 3497–3506.
[28] Wee, C.-Y., Paramesran, R., 2007. Measure of image sharpness using
eigenvalues. Information Sciences 177 (12), 2533–2552.
[29] Willert, C. E., Gharib, M., 1991. Digital particle image velocimetry. Experiments in fluids 10 (4), 181–193.
[30] Xie, H., Rong, W., Sun, L., 2006. Wavelet-based focus measure and
3-d surface reconstruction method for microscopy images. In: 2006
IEEE/RSJ International Conference on Intelligent Robots and Systems.
IEEE, pp. 229–234.
[31] Yang, G., Nelson, B. J., 2003. Wavelet-based autofocusing and unsupervised segmentation of microscopic images. In: Intelligent Robots and
Systems, 2003.(IROS 2003). Proceedings. 2003 IEEE/RSJ International
Conference on. Vol. 3. IEEE, pp. 2143–2148.
[32] Yap, P. T., Raveendran, P., 2004. Image focus measure based on chebyshev moments. IEE Proceedings-Vision, Image and Signal Processing 151 (2), 128–136.
Dual theory of transmission line outages
arXiv:1606.07276v2 [nlin.AO] 22 Jan 2017
Henrik Ronellenfitsch, Debsankha Manik, Jonas Hörsch, Tom Brown, Dirk Witthaut
Abstract—A new graph dual formalism is presented for the
analysis of line outages in electricity networks. The dual formalism is based on a consideration of the flows around closed
cycles in the network. After some exposition of the theory, a new formula for the computation of Line Outage
Distribution Factors (LODFs) is derived, which is not only
computationally faster than existing methods, but also generalizes
easily for multiple line outages and arbitrary changes to line
series reactance. In addition, the dual formalism provides new
physical insight for how the effects of line outages propagate
through the network. For example, in a planar network a single
line outage can be shown to induce monotonically decreasing flow
changes, which are mathematically equivalent to an electrostatic
dipole field.
Index Terms—Line Outage Distribution Factor, DC power flow,
dual network, graph theory
I. INTRODUCTION
The robustness of the power system relies on its ability
to withstand disturbances, such as line and generator outages.
The grid is usually operated with ‘n−1 security’, which means
that it should withstand the failure of any single component,
such as a transmission circuit or a transformer. The analysis
of such contingencies has gained in importance with the
increasing use of generation from variable renewables, which
have led to larger power imbalances in the grid and more
situations in which transmission lines are loaded close to their
thermal limits [1]–[6].
A crucial tool for contingency analysis is the use of Line
Outage Distribution Factors (LODFs), which measure the
linear sensitivity of active power flows in the network to
outages of specific lines [7]. LODFs are not only used to
calculate power flows after an outage, but are also employed
in security-constrained linear optimal power flow (SCLOPF),
where power plant dispatch is optimized such that the network
is always n − 1 secure [7].
LODF matrices can be calculated from Power Transfer
Distribution Factors (PTDFs) [8], [9], which describe how
power flows change when power injection is shifted from one
node to another. In [10], a dual method for calculating PTDFs
was presented. The dual method is based on an analysis of the
flows around closed cycles (closed cycles are paths that are
non-intersecting and start and end at the same node [11]) in
the network graph; for a plane graph, a basis of these closed
H. Ronellenfitsch and D. Manik are at the Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen, Germany. H. Ronellenfitsch is additionally at the Department of Physics and Astronomy, University
of Pennsylvania, Philadelphia PA, USA.
J. Hörsch and T. Brown are at the Frankfurt Institute for Advanced Study,
60438 Frankfurt am Main, Germany.
D. Witthaut is at the Forschungszentrum Jülich, Institute for Energy and
Climate Research - Systems Analysis and Technology Evaluation (IEK-STE),
52428 Jülich, Germany and the Institute for Theoretical Physics, University
of Cologne, 50937 Köln, Germany.
cycles corresponds to the nodes of the dual graph [11]. Seen
as a planar polygon, each basis cycle corresponds to one facet
of the polygon. (This notion of dual of a plane graph is called
the weak dual). In this paper the dual formalism is applied
to derive a new direct formula for the LODF matrices, which
is not only computationally faster than existing methods, but
has several other advantages. It can be easily extended to take
account of multiple line outages and, unlike other methods,
it also works for the case where a line series reactance is
modified rather than failing completely. This latter property is
relevant given the increasing use of controllable series compensation devices for steering network power flows. Moreover,
the dual formalism is not just a calculational tool: it provides
new insight into the physics of how the effects of outages
propagate in the network, which leads to several useful results.
Depending on network topology, the dual method can lead
to a significant improvement in the speed of calculating
LODFs. Thus it can be useful for applications where LODFs
must be calculated repeatedly and in a time-critical fashion,
for instance in ‘hot-start DC models’ or ‘incremental DC
models’ [12]. The dual method we describe is particularly
suited to these types of problems because unlike in the
primal case, most of the involved matrices only depend on
the network topology and can be stored and reused for each
calculation run.
II. THE PRIMAL FORMULATION OF LINEARIZED NETWORK FLOWS
In this paper the linear ‘DC’ approximation of the power
flow in AC networks is used, whose usefulness is discussed
in [13], [14]. In this section the linear power flow formulation
for AC networks is reviewed and a compact matrix notation
is introduced.
In the linear approximation, the directed active power flow
Fℓ on a line ℓ from node m to node n can be expressed in
terms of the line series reactance xℓ and the voltage angles
θm , θn at the nodes
F_ℓ = (1/x_ℓ)(θ_m − θ_n) = b_ℓ (θ_m − θ_n),   (1)
where bℓ = 1/xℓ is the susceptance of the line. In the
following we do not distinguish between transmission lines
and transformers, which are treated the same.
The power flows of all lines are written in vector form, F =
(F1 , . . . , FL )t ∈ RL , and similarly for the nodal voltage angles
θ = (θ1 , . . . , θN )t ∈ RN , where the superscript t denotes the
transpose of a vector or matrix. Then equation (1) can be
written compactly in matrix form
F = B d K t θ,
(2)
where B d = diag(b1 , . . . , bL ) ∈ RL×L is the diagonal branch
susceptance matrix. The incidence matrix K ∈ RN ×L encodes
how the nodes of the directed network graph are connected by
the lines [15]. It has components
K_{n,ℓ} = +1 if line ℓ starts at node n, −1 if line ℓ ends at node n, 0 otherwise.   (3)
In homology theory K is the boundary operator from the vector space of lines ≅ R^L to the vector space of nodes ≅ R^N.
The incidence matrix also relates the nodal power injections
at each node P = (P1 , . . . , PN ) ∈ RN to the flows incident
at the node
P = KF .
(4)
This is Kirchhoff’s Current Law expressed in terms of the
active power: the net power flowing out of each node must
equal the power injected at that node.
Combining (2) and (4), we obtain an equation for the power
injections in terms of the voltage angles,
P = Bθ.
(5)
Here we have defined the nodal susceptance matrix B ≡
KB d K t ∈ RN ×N with the components
B_{m,n} = Σ_{ℓ∈Λ_m} b_ℓ   if m = n ;
B_{m,n} = −b_ℓ            if m is connected to n by ℓ ,   (6)
where Λm is the set of lines which are incident on m. The
matrix B is a weighted Laplace matrix [15] and equation (5)
is a discrete Poisson equation. Through equations (2) and (5),
there is now a linear relation between the line flows F and
the nodal power injections P .
For a connected network, B has a single zero eigenvalue
and therefore cannot be inverted directly. Instead, the MoorePenrose pseudo-inverse B ∗ can be used to solve (5) for θ and
obtain the line flows directly as a linear function of the nodal
power injections
F = BdK tB∗P .
(7)
This matrix combination is taken as the definition of the nodal
Power Transfer Distribution Factor (PTDF) PTDF ∈ RL×N
PTDF = B d K t B ∗ .
(8)
Next, the effect of a line outage is considered. Suppose the
flows before the outage are given by Fk and the line which
(ℓ)
fails is labeled ℓ. The line flows after the outage of ℓ, Fk
are linearly related to the original flows by the matrix of Line
Outage Distribution Factors (LODFs) [7], [16]
(ℓ)
Fk = Fk + LODFkℓ Fℓ ,
(9)
where on the right hand side there is no implied summation
over ℓ. It can be shown [8], [9] that the LODF matrix
elements can be expressed directly in terms of the PTDF
matrix elements as
[PTDF · K]kℓ
.
(10)
LODFkℓ =
1 − [PTDF · K]ℓℓ
For the special case of k = ℓ, one defines LODFkk = −1.
The matrix [PTDF · K]kℓ can be interpreted as the sensitivity
of the flow on k to the injection of one unit of power at the
from-node of ℓ and the withdrawal of one unit of power at the
to-node of ℓ.
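As a numerical illustration (not part of the original text), the chain from Eq. (2) to Eq. (10) can be reproduced in a few lines of Python/NumPy. The network below is a made-up four-node example; the last line checks Eq. (9) against a direct recomputation of the flows with the failed line removed.

import numpy as np

# Made-up 4-node, 5-line test network: (from_node, to_node, susceptance b_l).
lines = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 2, 1.5), (1, 3, 1.0)]
N, L = 4, len(lines)

K = np.zeros((N, L))                      # incidence matrix, Eq. (3)
b = np.zeros(L)
for l, (m, n, bl) in enumerate(lines):
    K[m, l], K[n, l], b[l] = 1.0, -1.0, bl
Bd = np.diag(b)                           # branch susceptance matrix
B = K @ Bd @ K.T                          # nodal susceptance matrix, Eq. (5)

PTDF = Bd @ K.T @ np.linalg.pinv(B)       # Eq. (8), Moore-Penrose pseudo-inverse
PK = PTDF @ K                             # end-point sensitivities
LODF = PK / (1.0 - np.diag(PK))           # Eq. (10), column-wise division
np.fill_diagonal(LODF, -1.0)              # convention LODF_kk = -1

# Check Eq. (9): outage of line 2 versus a direct recomputation without it.
P = np.array([1.0, -0.5, 0.2, -0.7])      # injections summing to zero
F = PTDF @ P
keep = [l for l in range(L) if l != 2]
Br = K[:, keep] @ np.diag(b[keep]) @ K[:, keep].T
F_out = np.diag(b[keep]) @ K[:, keep].T @ np.linalg.pinv(Br) @ P
print(np.allclose(F_out, F[keep] + LODF[keep, 2] * F[2]))   # True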
III. CYCLES AND THE DUAL GRAPH
The power grid defines a graph G = (V, E) with vertex
set V formed by the nodes or buses and edge set E formed
by all transmission lines and transformers. The orientation of
the edges is arbitrary but has to be fixed because calculations
involve directed quantities such as the real power flow. In the
following we reformulate the theory of transmission line outages in terms of cycle flows. A directed cycle is a combination
of directed edges of the graph which form a closed loop. All
such directed cycles can be decomposed into a set of L−N +1
fundamental cycles, with N being the number of nodes, L
being the number of edges and assuming that the graph is
connected [11]. An example is shown in Fig. 1, where two
fundamental cycles are indicated by blue arrows.
The fundamental cycles are encoded in the cycle-edge incidence matrix C ∈ R^{L×(L−N+1)} with components

C_{ℓ,c} = +1 if edge ℓ is an element of cycle c, −1 if the reversed edge ℓ is an element of cycle c, 0 otherwise.   (11)
It is a result of graph theory, which can also be checked by
explicit calculation, that the L − N + 1 cycles are a basis for
the kernel of the incidence matrix K [11],
KC = 0 .
(12)
Using the formalism of cycles, the Kirchoff Voltage Law
(KVL) can be expressed in a concise way. KVL states that
the sum of all angle differences along any closed cycle must
equal zero,
Σ_{(i,j)∈cycle c} (θ_i − θ_j) = 0 .   (13)
Since the cycles form a vector space it is sufficient to check
this condition for the L − N + 1 basis cycles. In matrix form
this reads
C tK tθ = 0 ,
(14)
which is satisfied automatically by virtue of equation (12).
Using equation (2), the KVL in terms of the flows reads
C t X d F = 0,
(15)
where X d is the branch reactance matrix, defined by X d =
diag(x1 , . . . , xL ) = diag(1/b1 , . . . , 1/bL ) ∈ RL×L .
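A fundamental cycle basis as in Eqs. (11) and (12) can be generated from any spanning tree. The following Python sketch (illustrative only; it uses the toy network of the previous sketch and plain NumPy, no graph library) builds C from the non-tree edges and verifies K C = 0.

import numpy as np

def cycle_edge_incidence(lines, N):
    # Fundamental cycles of a connected graph from a spanning tree (union-find).
    L = len(lines)
    parent = list(range(N))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree, chords = [], []
    for l, (m, n, _) in enumerate(lines):
        rm, rn = find(m), find(n)
        if rm != rn:
            parent[rm] = rn
            tree.append(l)
        else:
            chords.append(l)
    adj = {v: [] for v in range(N)}           # adjacency over tree edges only
    for l in tree:
        m, n, _ = lines[l]
        adj[m].append((n, l, +1.0))           # edge l traversed in its own direction
        adj[n].append((m, l, -1.0))
    def tree_path(src, dst):                  # DFS path src -> dst inside the tree
        stack, seen = [(src, [])], {src}
        while stack:
            v, path = stack.pop()
            if v == dst:
                return path
            for w, l, s in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, path + [(l, s)]))
        return []
    C = np.zeros((L, len(chords)))            # len(chords) = L - N + 1 if connected
    for c, l in enumerate(chords):
        m, n, _ = lines[l]
        C[l, c] = 1.0                         # chord traversed forward ...
        for e, s in tree_path(n, m):          # ... then back through the tree
            C[e, c] = s
    return C

lines = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 2, 1.5), (1, 3, 1.0)]
N = 4
K = np.zeros((N, len(lines)))
for l, (m, n, _) in enumerate(lines):
    K[m, l], K[n, l] = 1.0, -1.0
C = cycle_edge_incidence(lines, N)
print(C.shape[1], np.allclose(K @ C, 0.0))    # 2 True, i.e. Eq. (12) holds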
The results of Sections I through V apply for any graph. In
the final Section VI, a special focus is made on planar graphs,
i.e., graphs which can be drawn or ‘embedded’ in the plane
R2 without edge crossings. Once such an embedding is fixed,
the graph is called a plane graph. Power grids are not naturally
embedded in R2 , but while line crossings are possible, they are
sufficiently infrequent in large scale transmission grids (such
as the high-voltage European transmission grid).
The embedding (drawing) in the plane yields a very intuitive
approach to the cycle flow formulation. The edges separate
polygons, which are called the facets of the graph. We can
now define a cycle basis which consists exactly of these facets.
Then all edges are part of at most two basis cycles, which is
called MacLane’s criterion for planarity [11]. This construction
is formalized in the definition of the weak dual graph DG of
G. The weak dual graph DG is formed by putting dual nodes
in the middle of the facets of G as described above, and then
connecting the dual nodes with dual edges across those edges
where facets of G meet [11], [15]. DG has L − N + 1 nodes
and its incidence matrix is given by C t .
The simple topological properties of plane graphs are essential to derive some of the rigorous results obtained in
section VI. For more complex networks, graph embeddings
without line crossings can still be defined – but not on the
plane R2 . More complex geometric objects (surfaces with
a genus g > 0) are needed (see, e.g., [17] and references
therein).
IV. DUAL THEORY OF NETWORK FLOWS
In this section the linear power flow is defined in terms
of dual cycle variables following [10], rather than the nodal
voltage angles. To do this, we define the linear power flow
equations directly in terms of the network flows. The power
conservation equation (4)
KF = P ,
(16)
provides N equations, of which one is linearly dependent, for
the L components of F . The solution space is thus given by
an affine subspace of dimension L − N + 1.
In section III we discussed that the kernel of K is spanned
by the cycle flows. Thus, we can write every solution of
equation (16) as a particular solution of the inhomogeneous
equation plus a linear combination of cycle flows:
F = F (part) + Cf ,
f ∈ RL−N +1 .
(17)
The components fc of the vector f give the strength of the
cycle flows for all basis cycles c = 1, 2, · · · , L − N + 1.
A particular solution F (part) can be found by taking the
uniquely-determined flows on a spanning tree of the network
graph [10].
To obtain the correct physical flows we need a further
condition to fix the L − N + 1 degrees of freedom fc . This
condition is provided by the KVL in (15), which provides
exactly L − N + 1 linear constraints on f
C t X d Cf = −C t X d F (part) .
(18)
Together with equation (16), this condition uniquely determines the power flows in the grid.
Equation (18) is the dual equation of (5). If the cycle reactance matrix A ∈ R^{(L−N+1)×(L−N+1)} is defined by

A ≡ C^t X_d C ,   (19)

then A also has the form

A_{cc'} = Σ_{ℓ∈κ_c} x_ℓ              if c = c' ;
A_{cc'} = ± Σ_{ℓ∈κ_c ∩ κ_{c'}} x_ℓ   if c ≠ c' ,   (20)
where κc is the set of edges around cycle c and the sign
ambiguity depends on the orientation of the cycles. The
construction of A is very similar to the weighted Laplacian
in equation (6); for plane graphs where the cycles correspond
to the faces of the graph, this analogy can be made exact
(see Section VI). Unlike B, the matrix A is invertible, due to
the fact that the outer boundary cycle of the network is not
included in the cycle basis. This is analogous to removing the
row and column corresponding to a slack node from B, but it
is a natural feature of the theory, and not manually imposed.
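The dual computation of the flows, Eqs. (16) to (18), can likewise be checked numerically. The sketch below is self-contained and hard-codes a valid cycle matrix C for the toy network used in the earlier sketches; it compares the dual reconstruction (17) with the primal result (7). All data are illustrative only.

import numpy as np

lines = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 2, 1.5), (1, 3, 1.0)]
N, L = 4, len(lines)
K = np.zeros((N, L)); b = np.zeros(L)
for l, (m, n, bl) in enumerate(lines):
    K[m, l], K[n, l], b[l] = 1.0, -1.0, bl
C = np.array([[1, 0], [1, 1], [0, 1], [-1, 0], [0, -1]], float)   # two cycles, K @ C = 0
P = np.array([1.0, -0.5, 0.2, -0.7])                              # injections, sum = 0

Xd = np.diag(1.0 / b)                                  # reactances x_l = 1/b_l
A = C.T @ Xd @ C                                       # cycle reactance matrix, Eq. (19)
F_part, *_ = np.linalg.lstsq(K, P, rcond=None)         # a particular solution of Eq. (16)
f = np.linalg.solve(A, -C.T @ Xd @ F_part)             # Eq. (18)
F_dual = F_part + C @ f                                # Eq. (17)

F_primal = np.diag(b) @ K.T @ np.linalg.pinv(K @ np.diag(b) @ K.T) @ P   # Eq. (7)
print(np.allclose(F_dual, F_primal))                   # True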
V. DUAL COMPUTATION OF LINE OUTAGE DISTRIBUTION FACTORS
A. Single line outages
The dual theory of network flows derived in the previous
section can be used to derive an alternative formula for the
LODFs. For the sake of generality we consider an arbitrary change of the reactance of a transmission line ℓ,

x_ℓ → x_ℓ + ξ_ℓ .   (21)

The generalization to multiple line outages is presented in the following section. The change of the network structure is described in terms of the branch reactance matrix

X̂_d = X_d + ∆X_d = X_d + ξ_ℓ u_ℓ u_ℓ^t ,   (22)
where uℓ ∈ RL is a unit vector which is 1 at position ℓ and
zero otherwise. In this section we use the hat to distinguish
the line parameters and flows in the modified grid after the
outage from the original grid before the outage.
This perturbation of the network topology will induce a
change of the power flows
F̂ = F + ∆F .
(23)
We consider a change of the topology while the power
injections remain constant. The flow change ∆F thus does
not have any source such that it can be decomposed into cycle
flows
∆F = C∆f .
(24)
The uniqueness condition (15) for the perturbed network
reads
C t (X d + ∆X d )(F + ∆F ) = 0.
(25)
Using condition (15) for the original network and the cycle
flow decomposition equation (24) for the flow changes yields
C t X̂ d C∆f = −C t ∆X d F
⇒ ∆f = −(C t X̂ d C)−1 C t uℓ ξℓ utℓ F
(26)
such that the flow changes are given by
∆F = C∆f = −C (C^t X̂_d C)^{-1} C^t u_ℓ ξ_ℓ u_ℓ^t F .   (27)
This expression suggests that we need to calculate the inverse separately for every possible contingency case, which would require a huge computational effort. However, we can reduce it to the inverse of A = C^t X_d C describing the unperturbed grid using the Woodbury matrix identity [18],

(C^t X̂_d C)^{-1} = (A + C^t u_ℓ ξ_ℓ u_ℓ^t C)^{-1} = A^{-1} − A^{-1} C^t u_ℓ (ξ_ℓ^{-1} + u_ℓ^t C A^{-1} C^t u_ℓ)^{-1} u_ℓ^t C A^{-1} .

Thus we obtain

(C^t X̂_d C)^{-1} C^t u_ℓ = A^{-1} C^t u_ℓ (1 + ξ_ℓ u_ℓ^t C A^{-1} C^t u_ℓ)^{-1} .

We then obtain the induced cycle flows and flow change by inserting this expression into equation (27). We summarize our results in the following proposition.

Proposition 1. If the reactance of a single transmission line ℓ is changed by an amount ξ_ℓ, the real power flows change as

∆F = (−ξ_ℓ F_ℓ / (1 + ξ_ℓ u_ℓ^t M u_ℓ)) M u_ℓ   (28)

with the matrix M = C A^{-1} C^t. If the line ℓ fails, we have ξ_ℓ → ∞. The line outage distribution factor for a transmission line k is thus given by

LODF_{k,ℓ} = ∆F_k / F_ℓ = − (u_k^t M u_ℓ) / (u_ℓ^t M u_ℓ) .   (29)

Note that the formula for an arbitrary change in series reactance of a line is useful for the assessment of the impact of flexible AC transmission (FACTS) devices, in particular series compensation devices [19] or adjustable inductors that clamp onto overhead lines [20].

Finally, an example of failure-induced cycle flows and the corresponding flow changes is shown in Figure 1. In the example, the node-edge incidence matrix K ∈ R^{5×6} of the 5-bus test grid is constructed according to the sign convention of Eq. (3),

K = [node-edge incidence matrix of the 5-bus test grid] .   (30)

The grid contains 2 independent cycles, which are chosen as
cycle 1: line 2, line 6, line 3;
cycle 2: line 1, line 4, line 5, reverse line 2.
The cycle-edge incidence matrix thus reads

C^t = (  0  +1  +1   0   0  +1
        +1  −1   0  +1  +1   0 ) .   (31)

Thus, the flow changes can be written according to equation (24) with

∆f = (122.4 MW, 64.4 MW)^t   (32)

(cf. also [22], [23] for a discussion of cycle flows in power grids).

The dual approach to the LODFs can be computationally advantageous for sparse networks as discussed in section V-C. Furthermore, we will use it to prove some rigorous results on flow redistribution after transmission line failures in section VI.

Fig. 1. (a) DC power flow in a 5-bus test grid from [21]. (b) DC power flow in the same network after the outage of one transmission line. (c) The change of power flows can be decomposed into two cycle flows.

B. Multiple line outages
The dual approach can be generalized to the case of multiple
damaged or perturbed transmission lines in a straightforward
way. Consider the simultaneous perturbation of the M transmission lines ℓ1 , ℓ2 , . . . , ℓM according to
xℓ1 → xℓ1 + ξℓ1 , xℓ2 → xℓ2 + ξℓ2 , . . . , xℓM → xℓM + ξℓM .
The change of the branch reactance matrix is then given by
∆X d = U Ξ U t ,
(33)
where we have defined the matrices
Ξ = diag(ξ_{ℓ1}, ξ_{ℓ2}, . . . , ξ_{ℓM}) ∈ R^{M×M} ,
U = (u_{ℓ1}, u_{ℓ2}, . . . , u_{ℓM}) ∈ R^{L×M} .
The formula (27) for the flow changes then reads
∆F = −C (C^t X̂_d C)^{-1} C^t U Ξ U^t F .   (34)

To evaluate this expression we again make use of the Woodbury matrix identity [18], which yields

(C^t X̂_d C)^{-1} = A^{-1} − A^{-1} C^t U (Ξ^{-1} + U^t M U)^{-1} U^t C A^{-1} .

We then obtain the flow change by inserting this expression into equation (34) with the result

∆F = −C A^{-1} C^t U (𝟙 + Ξ U^t M U)^{-1} Ξ U^t F .   (35)
In case of multiple line outages of lines ℓ_1, . . . , ℓ_M we have to consider the limit

ξ_{ℓ1}, . . . , ξ_{ℓM} → ∞ .   (36)
TABLE I
COMPARISON OF CPU TIME FOR THE CALCULATION OF THE PTDFS IN SPARSE NUMERICS.

Test Grid (name)   source   nodes N   cycles L−N+1   ratio (L−N+1)/N   speedup t_conv/t_dual
case300            [21]      300        110           0.37              1.83
case1354pegase     [25]     1354        357           0.26              4.43
GBnetwork          [26]     2224        581           0.26              4.09
case2383wp         [21]     2383        504           0.21              4.20
case2736sp         [21]     2736        760           0.28              3.27
case2746wp         [21]     2746        760           0.28              3.35
case2869pegase     [25]     2869       1100           0.38              2.79
case3012wp         [21]     3012        555           0.18              3.93
case3120sp         [21]     3120        565           0.18              3.96
case9241pegase     [25]     9241       4967           0.54              1.31
In this limit equation (35) reduces to

∆F = −M U (U^t M U)^{-1} U^t F .   (37)

Specifically, for the case of two failing lines, we obtain

∆F_k = [(M_{k,ℓ1} M_{ℓ2,ℓ2} − M_{k,ℓ2} M_{ℓ2,ℓ1}) / (M_{ℓ1,ℓ1} M_{ℓ2,ℓ2} − M_{ℓ1,ℓ2} M_{ℓ2,ℓ1})] F_{ℓ1}
      + [(M_{k,ℓ2} M_{ℓ1,ℓ1} − M_{k,ℓ1} M_{ℓ1,ℓ2}) / (M_{ℓ1,ℓ1} M_{ℓ2,ℓ2} − M_{ℓ1,ℓ2} M_{ℓ2,ℓ1})] F_{ℓ2} .   (38)
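A quick numerical sanity check of the limit formula (37), again on the made-up toy network used in the earlier sketches and with two simultaneously failing lines, is given below; the last line compares the prediction with a full recomputation of the DC flows on the reduced network.

import numpy as np

lines = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 2, 1.5), (1, 3, 1.0)]
N, L = 4, len(lines)
K = np.zeros((N, L)); b = np.zeros(L)
for l, (m, n, bl) in enumerate(lines):
    K[m, l], K[n, l], b[l] = 1.0, -1.0, bl
C = np.array([[1, 0], [1, 1], [0, 1], [-1, 0], [0, -1]], float)
M = C @ np.linalg.solve(C.T @ np.diag(1.0 / b) @ C, C.T)       # M = C A^{-1} C^t

P = np.array([1.0, -0.5, 0.2, -0.7])
F = np.diag(b) @ K.T @ np.linalg.pinv(K @ np.diag(b) @ K.T) @ P    # pre-outage flows

out = [1, 4]                                                   # failing lines l_1, l_2
U = np.eye(L)[:, out]
dF = -M @ U @ np.linalg.solve(U.T @ M @ U, U.T @ F)            # Eq. (37)

keep = [l for l in range(L) if l not in out]                   # recompute without them
Kr, br = K[:, keep], b[keep]
Fr = np.diag(br) @ Kr.T @ np.linalg.pinv(Kr @ np.diag(br) @ Kr.T) @ P
print(np.allclose(Fr, (F + dF)[keep]))                         # True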
C. Computational aspects
The dual formula (29) for the LODFs can be computationally advantageous compared to the conventional approach. To calculate
the LODFs via equation (10) we have to invert the matrix
B ∈ RN ×N to obtain the PTDFs. Using the dual approach
the most demanding step is the inversion of the matrix
A = C t X d C ∈ R(L−N +1)×(L−N +1), which can be much
smaller than B if the network is sparse. However, more matrix
multiplications need to be carried out, which decreases the
potential speed-up. We test the computational performance of
the dual method by comparing it to the conventional approach,
which is implemented in many popular software packages such as, for instance, MATPOWER [21].
Conventionally, one starts with the calculation of the nodal
PTDF matrix defined in Eq. (8). In practice, one usually does
not compute the full inverse but solves the linear system of
equations PTDF · B = B_d K^t instead. Furthermore, one fixes
the voltage phase angle at a slack node s, such that one can
omit the sth row and column in the matrix B and the sth
column in matrix B f = B d K T while solving the linear
system. The result is multiplied by the matrix K from the
right to obtain the PTDFs between the endpoints of all lines.
One then divides each column ℓ by the value 1 − PTDFℓℓ
to obtain the LODFs via formula (10). An implementation of
these steps in MATLAB is listed in the supplement [24].
The dual approach yields the direct formula (29) for the
LODFs. To efficiently evaluate this formula we first compute
the matrix M = CA−1 C t . Again we do not compute the full
matrix inverse but solve a linear system of equations instead.
The full LODF matrix is then obtained by dividing every
column ℓ by the factor Mℓℓ .
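The procedure just described can be written in a few lines. The sketch below (Python/NumPy rather than the MATLAB supplement, and using the toy network of the earlier sketches instead of the test grids of Table I) computes the LODF matrix by the dual formula (29) and compares it with the conventional route of Eqs. (8) and (10).

import numpy as np

lines = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 2, 1.5), (1, 3, 1.0)]
N, L = 4, len(lines)
K = np.zeros((N, L)); b = np.zeros(L)
for l, (m, n, bl) in enumerate(lines):
    K[m, l], K[n, l], b[l] = 1.0, -1.0, bl
C = np.array([[1, 0], [1, 1], [0, 1], [-1, 0], [0, -1]], float)

# Dual route: M = C A^{-1} C^t, then LODF_{k,l} = -M_{k,l} / M_{l,l}  (Eq. 29).
A = C.T @ np.diag(1.0 / b) @ C
M = C @ np.linalg.solve(A, C.T)
lodf_dual = -M / np.diag(M)
np.fill_diagonal(lodf_dual, -1.0)

# Conventional route, Eqs. (8) and (10), for comparison.
PTDF = np.diag(b) @ K.T @ np.linalg.pinv(K @ np.diag(b) @ K.T)
PK = PTDF @ K
lodf_conv = PK / (1.0 - np.diag(PK))
np.fill_diagonal(lodf_conv, -1.0)
print(np.allclose(lodf_dual, lodf_conv))    # True

For large sparse grids the solve against A (of size L−N+1) replaces the solve against B (of size N−1), which is the source of the speed-up reported in Table I.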
We evaluate the runtime for various test grids from [21],
[25], [26] using a MATLAB program listed in the supplement [24]. All input matrices are sparse, such that the computation is faster when using sparse numerical methods (using the command sparse in MATLAB and converting back to full at the appropriate time). Then MATLAB employs the high-performance supernodal sparse Cholesky decomposition solver CHOLMOD 1.7.0 to solve the linear system of equations.
We observe a significant speed-up of the dual method by a
factor between 1.31 and 4.43 depending on how meshed the
grid is (see Table I).
VI. TOPOLOGY OF CYCLE FLOWS
In this section the propagation of the effects of line outages is analyzed using the theory of discrete calculus and
differential operators on the dual network graph. There is a
wide body of physics and mathematics literature on discrete
field theory (see, e.g., [27]). We turn back to the cycle flows
themselves and derive some rigorous results. These results
help to understand the effects of a transmission line outage
and cascading failures in power grids, in particular whether
the effects are predominatly local or affect also remote areas
of the grid (cf. [28]–[31]).
We start with a general discussion of the mathematical structure of the problem and show that line outages affect only parts
of the grid which are sufficiently connected. Further results are
obtained for planar graphs (graphs that can be embedded in
the plane without line crossings, which approximately holds,
e.g., for the European high voltage transmission grid). We
characterize the direction of the induced cycle flows and show
that the effect of failures decreases monotonically with the
distance from the outage. Finally we proceed to discuss nonlocal effects in non-planar networks.
A. General results
The starting point of our analysis of the topology of cycle
flows is a re-formulation of Proposition 1.
Lemma 1. The outage of a single transmission line ℓ induces
cycle flows which are determined by the linear system of
equations
A∆f = q
(39)
with q = Fℓ (utℓ CA−1 C t uℓ )−1 C t uℓ and A = C t X d C.
Note that ∆f, q ∈ R^{L−N+1} and A ∈ R^{(L−N+1)×(L−N+1)}. It will now be shown that in some cases
this equation can be interpreted as a discrete Poisson equation
for ∆f with Laplacian operator A and inhomogeneity q.
This formulation is convenient to derive some rigorous results
on flow rerouting after a transmission line failure.
We first note from the explicit construction of A in equation
(20) that two cycles in the dual network are only coupled via
their common edges. The coupling is given by the sum of
the reactances of the common edges. Generally, the reactance
of a line is proportional to its length. The coupling of two
cycles is then directly proportional to the total length of their
common boundary, provided that the lines are all of the same
Fig. 2.
Flow changes and cycle flows after a transmission line failure
in a small test grid. (a) Real power flows in the initial intact network in
MW. (b) The failure of a transmission line (dashed) must be compensated by
cycle flows indicated by cyclic arrows. The thickness of the lines indicates
the approximate strength of the cycle flows. (c) The resulting flow changes
after the failure of the marked transmission line. (d) The direction of the flow
changes can be predicted using Propositions 3 and 4 for all edges and the
magnitude decreases with the cycle distance dc . The power flow in (a,c) has
been calculated using the standard software M ATPOWER for the 30-bus test
case [21].
type. Since the inhomogeneity q is proportional to C t uℓ , it is
non-zero only for the cycles which are adjacent to the failing
edge ℓ:
q_c ≠ 0 only if ℓ is an element of cycle c.   (40)
The matrix A typically has a block structure such that a
failure in one block cannot affect the remaining blocks. The
dual approach to flow rerouting gives a very intuitive picture
of this decoupling. To see this, consider the example shown in
Figure 2. The cycle at the top of the network is connected to
the rest of the network via one node. However, it is decoupled
in the dual representation because it shares no common edge
with any other cycle. Thus, a failure in the rest of the grid will
not affect the power flows in this cycle—the mutual LODFs
vanish. This result is summarized in the following proposition,
and a formal proof is given in the supplement [24].
Proposition 2. The line outage distribution factor LODFk,ℓ
between two edges k = (i, j) and ℓ = (s, r) vanishes if there
is only one independent path between the vertex sets {r, s}
and {i, j}.
B. Planar networks
Some important simplifications can be made in the specific
case of a plane network. We can then define the cycle basis
in terms of the interior faces of the graph, which allows for an intuitive geometric picture of induced cycle flows as in Figures
2 and 3. For the remainder of this section we thus restrict
ourselves to such plane graphs and fix the cycle basis by the
interior faces and fix the orientation of all basis cycles to be
counter-clockwise. Thus equation (39) is formulated on the
weak dual of the original graph.
Fig. 3. Schematic representation of the flow changes after the damage of a
single edge (dashed).
According to Mac Lane’s planarity criterion [11], every
edge in a plane graph belongs to at most two cycles such that q
has at most two non-zero elements: one non-zero element qc1
if ℓ is at the boundary and two non-zero elements qc1 = −qc2
if the line ℓ is in the interior of the network. Furthermore,
the matrix A is a Laplacian matrix in the interior of the
network [15]. That is, for all cycles c which are not at the
boundary we have
Σ_{d≠c} A_{dc} = −A_{cc} .   (41)
Up to boundary effects, equation (39) is thus equivalent to
a discretized Poisson equation on a complex graph with a
dipole source (monopole source if the perturbation occurs on
the boundary).
For plane networks we now prove some rigorous results
on the orientation of cycle flows (clockwise vs. counterclockwise) and on their decay with the distance from the
failing edge. In graph theory, the (geodesic) distance of two
vertices is defined as as the number of edges in a shortest path
connecting them [11]. Similarly, the distance of two edges is
defined as the number of vertices on a shortest path between
the edges.
Proposition 3. Consider the cycle flows ∆f induced by the
failure of a single line ℓ in a plane linear flow network
described by equation (39). The weak dual graph can be
decomposed into at most two connected subgraphs (‘domains’)
D+ and D− , with ∆fc ≥ 0 ∀c ∈ D+ and ∆fc ≤ 0 ∀c ∈ D− .
The domain boundary, if it exists, includes the perturbed line
ℓ, i.e. the two cycles adjacent to ℓ belong to different domains.
A proof is given in the supplement [24]. The crucial aspect
of this proposition is that the two domains D+ and D− must
be connected. The implications of this statement are illustrated
in Figure 3 in panel (2), showing the induced cycle flows
when the dashed edge is damaged. The induced cycle flows are
oriented clockwise above the domain boundary and counterclockwise below the domain boundary. If the perturbed edge
lies on the boundary of a finite plane network, then there is
only one domain and all cycle flows are oriented in the same
way.
With this result we can obtain a purely geometric view of
how the flow of all edges in the network change after the
outage. For this, we need some additional information about
the magnitude of the cycle flows in addition to the orientation.
We consider the upper and lower bound for the cycle flows ∆f_c at a given distance to the cycle c1 with q_{c1} > 0 and the cycles c2 with q_{c2} < 0, respectively:

u_d = max_{c: dist(c,c1)=d} ∆f_c ,
ℓ_d = min_{c: dist(c,c2)=d} ∆f_c .   (42)

Here, dist denotes the graph-theoretic distance between two cycles or faces, i.e. the length of the shortest path between the two faces in the dual graph. We then find the following result.

Proposition 4. The maximum (minimum) value of the cycle flows decreases (increases) monotonically with the distance d to the reference cycles c1 and c2, respectively:

u_d ≤ u_{d−1} ,  1 ≤ d ≤ d_max ,
ℓ_d ≥ ℓ_{d−1} ,  1 ≤ d ≤ d_max .   (43)

A proof is given in the supplement [24]. Strict monotonicity can be proven when some additional technical assumptions are satisfied, which are expected to hold in most cases. For two-dimensional lattices with regular topology and constant weights the cycle flows are proportional to the inverse distance (see supplement [24] for details). However, irregularity of the network topology and line parameters can lead to a stronger, even exponential, localization [28]–[30]. Hence, the response of the grid is strong only in the ‘vicinity’ of the damaged transmission line, but may be non-zero everywhere in the connected component.

However, it has to be noted that the distance is defined for the dual graph, not the original graph, and that the rigorous results hold only for plane graphs. The situation is much more involved in non-planar graphs, as a line can link regions which would be far apart otherwise. Examples for the failure-induced cycle flows and their decay with distance are shown in Figure 2.

C. General, non-planar networks

Here, we consider fully general, non-planar networks. Unlike in the previous section, we show that it is impossible to derive a simple monotonic decay of the effect of line failures. Instead, by decomposing the LODFs into a geometric and a topological part, we show that complex, non-local interactions result. We start with

Proposition 5. Every connected graph G can be embedded into a Riemannian surface of genus g ∈ N_0 without line crossings. The cycle basis can be chosen such that it consists of the boundaries of the L − N + 1 − 2g geometric facets of the embedding, encoded in the cycle adjacency matrix C̃ ∈ R^{L×(L−N+1−2g)}, and 2g topological non-contractible cycles, encoded in the cycle adjacency matrix Ĉ ∈ R^{L×2g}, which satisfy Ĉ^t X_d C̃ = 0 and C̃^t X_d Ĉ = 0.

A proof is given in the supplement [24]. The main result of this proposition is that the cycle basis of any graph can be decomposed into two parts. The geometric cycles behave just as the facets in a planar graph. But for non-planar graphs there is a second type of cycles – the topological ones. For the simplest non-planar examples one can find an embedding without line-crossings on the surface of a torus, which has the genus g = 1. Two topological cycles have to be added to the cycle basis, which wind around the torus in the two distinct directions. These cycles are intrinsically non-local. The following corollary now shows that the effects of a line outage can also be decomposed.

Corollary 1. Consider a general graph with embedding and cycle basis as in Proposition 5. Then the flow changes after the outage of a line ℓ are given by

∆F = C̃ ∆f̃ + Ĉ ∆f̂ ,   (44)

where the cycle flows are given by

(C̃^t X_d C̃) ∆f̃ = (F_ℓ / M_{ℓ,ℓ}) C̃^t u_ℓ ,   (45)
(Ĉ^t X_d Ĉ) ∆f̂ = (F_ℓ / M_{ℓ,ℓ}) Ĉ^t u_ℓ .   (46)
Proof: According to proposition 5 the cycle incidence
matrix is decomposed as C = (C̃, Ĉ). Similarly, we can
decompose the strength of the cycle flows after the line outage
as

∆f = (∆f̃ ; ∆f̂) ,   (47)
such that the flow changes are given by ∆F = C̃∆f˜+ Ĉ∆fˆ.
Then Eq. (39) reads

(C̃, Ĉ)^t X_d (C̃, Ĉ) (∆f̃ ; ∆f̂) = (F_ℓ / M_{ℓ,ℓ}) (C̃, Ĉ)^t u_ℓ .   (48)
Using that Ĉ^t X_d C̃ = 0 and C̃^t X_d Ĉ = 0, the corollary
follows.
Remarkably, the corollary shows that the cycle flows around
geometric and topological cycles can be decoupled. The matrix
Ã = C̃^t X_d C̃ has a Laplacian structure as in Eq. (41) because
at each edge of the graph at most two facets meet. Thus,
Eq. (45) is a discrete Poisson equation as for plane graphs
and Propositions 3 and 4 also hold for the flows ∆f̃
around the geometric cycles. However, Eq. (46) has no such
interpretation and it is, in general, dense on both sides. Thus,
the topological cycles represented by Eq. (46) are responsible
for complicated, non-local effects of damage spreading in
general power grids.
VII. CONCLUSIONS
Line Outage Distribution Factors are important for assessing
the reliability of a power system, in particular with the recent
rise of renewables. In this paper, we described a new dual
formalism for calculating LODFs, that is based on using power
flows through the closed cycles of the network instead of using
nodal voltage angles.
The dual theory yields a compact formula for the LODFs
that only depends on real power flows in the network. In particular, the formula lends itself to a straightforward generalization
for the case of multiple line outages. Effectively, using cycle
flows instead of voltage angles changes the dimensionality
of the matrices appearing in the formulae from N × N to
(L − N + 1) × (L − N + 1). In cases where the network is
very sparse (i.e., it contains few cycles but many nodes), this
can lead to a significant speedup in LODF computation time,
a critical improvement for quick assessment of real network
contingencies. In addition, the formalism generalises easily
to multiple outages and arbitrary changes in series reactance,
which is important for the assessment of the impact of FACTS
devices. Often, some of the quantities involved in power flow
problems are not known exactly, i.e., they are random (see,
e.g., [32]). Thus, extending our work to include effects of
randomness will be an important next step.
The dual theory not only yields improvements for numerical
computations, it also provides a novel viewpoint of the underlying physics of power grids, in particular if they are (almost)
planar. Within the dual framework for planar networks, it
is easy to show that single line contingencies induce flow
changes in the power grid which decay monotonically in the
same way as an electrostatic dipole field.
ACKNOWLEDGMENTS
We gratefully acknowledge support from the Helmholtz
Association (joint initiative ‘Energy System 2050 – a contribution of the research field energy’ and grant no. VH-NG1025 to D.W.) and the German Federal Ministry of Education
and Research (BMBF grant nos. 03SF0472B, 03SF0472C
and 03SF0472E). The work of H. R. was supported in part
by the IMPRS Physics of Biological and Complex Systems,
Göttingen.
REFERENCES
[1] S. M. Amin and B. F. Wollenberg, “Toward a smart grid: power delivery
for the 21st century,” IEEE Power and Energy Magazine, vol. 3, no. 5,
p. 34, 2005.
[2] D. Heide, L. von Bremen, M. Greiner, C. Hoffmann, M. Speckmann,
and S. Bofinger, “Seasonal optimal mix of wind and solar power in a
future, highly renewable europe,” Renewable Energy, vol. 35, p. 2483,
2010.
[3] M. Rohden, A. Sorge, M. Timme, and D. Witthaut, “Self-organized
synchronization in decentralized power grids,” Phys. Rev. Lett., vol. 109,
p. 064101, 2012.
[4] T. Pesch, H.-J. Allelein, and J.-F. Hake, “Impacts of the transformation
of the german energy system on the transmission grid,” Eur. Phys. J.
Special Topics, vol. 223, p. 2561, 2014.
[5] Brown, T., Schierhorn, P., Tröster, E., and Ackermann, T., “Optimising
the European transmission system for 77% renewable electricity by
2030,” IET Renewable Power Generation, vol. 10, no. 1, pp. 3–9, 2016.
[6] D. Witthaut, M. Rohden, X. Zhang, S. Hallerberg, and M. Timme,
“Critical links and nonlocal rerouting in complex supply networks,”
Phys. Rev. Lett., vol. 116, p. 138701, 2016.
[7] A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation,
Operation and Control. New York: John Wiley & Sons, 2014.
[8] T. Güler, G. Gross, and M. Liu, “Generalized line outage distribution
factors,” IEEE Transactions on Power Systems, vol. 22, no. 2, pp. 879–
881, 2007.
[9] J. Guo, Y. Fu, Z. Li, and M. Shahidehpour, “Direct calculation of
line outage distribution factors,” IEEE Transactions on Power Systems,
vol. 24, no. 3, pp. 1633–1634, 2009.
[10] H. Ronellenfitsch, M. Timme, and D. Witthaut, “A dual method for
computing power transfer distribution factors,” IEEE Transactions on
Power Systems, vol. PP, no. 99, pp. 1–1, 2016.
[11] R. Diestel, Graph Theory. New York: Springer, 2010.
[12] B. Stott, J. Jardim, and O. Alsac, “Dc power flow revisited,” IEEE Trans.
Power Syst., vol. 24, no. 3, p. 1290, 2009.
[13] K. Purchala, L. Meeus, D. V. Dommelen, and R. Belmans, “Usefulness
of dc power flow for active power flow analysis,” in IEEE Power
Engineering Society General Meeting, June 2005, pp. 454–459 Vol. 1.
[14] D. Van Hertem, J. Verboomen, K. Purchala, R. Belmans, and W. Kling,
“Usefulness of dc power flow for active power flow analysis with flow
controlling devices,” in The 8th IEEE International Conference on AC
and DC Power Transmission, 2006, pp. 58–62.
[15] M. E. J. Newman, Networks – An Introduction.
Oxford: Oxford
University Press, 2010.
[16] J. J. Grainger and W. D. Stevenson Jr., Power System Analysis. New
York: McGraw-Hill, 1994.
[17] C. D. Modes, M. O. Magnasco, and E. Katifori, “Extracting
hidden hierarchies in 3d distribution networks,” Phys. Rev.
X, vol. 6, p. 031009, Jul 2016. [Online]. Available:
http://link.aps.org/doi/10.1103/PhysRevX.6.031009
[18] M. A. Woodbury, “Inverting modified matrices,” Memorandum report,
vol. 42, p. 106, 1950.
[19] X.-P. Zhang, C. Rehtanz, and B. Pal, Flexible AC Transmission Systems:
Modelling and Control. Berlin: Springer, 2006.
[20] “Smart Wires,” http://www.smartwires.com/, accessed: 2016-11-30.
[21] R. D. Zimmerman, C. E. Murillo-Sanchez, and R. J. Thomas, “Matpower: Steady-state operations, planning and analysis tools for power
systems research and education,” IEEE Trans. Power Syst., vol. 26, p. 12,
2011.
[22] T. Coletta, R. Delabays, I. Adagideli, and P. Jacquod, “Vortex flows in
high voltage ac power grids,” preprint arxiv:1605.07925, 2016.
[23] D. Manik, M. Timme, and D. Witthaut, “Cycle flows and multistabilty
in oscillatory networks: an overview,” preprint arxiv:1611.09825, 2016.
[24] Supplementary Information, available online at IEEE.
[25] S. Fliscounakis, P. Panciatici, F. Capitanescu, and L. Wehenkel, “Contingency ranking with respect to overloads in very large power systems
taking into account uncertainty, preventive and corrective actions,” IEEE
Trans. Power Syst., vol. 28, p. 4909, 2013.
[26] W. A. Bukhsh and K. McKinnon, “Network data of real transmission networks,” http://www.maths.ed.ac.uk/optenergy/NetworkData/, 2013, [Online; accessed 03-August-2015].
[27] L. J. Grady and J. R. Polimeni, Discrete Calculus - Applied Analysis on
Graphs for Computational Science. Springer, 2010.
[28] D. Labavić, R. Suciu, H. Meyer-Ortmanns, and S. Kettemann,
“Long-range response to transmission line disturbances in DC
electricity grids,” The European Physical Journal Special Topics,
vol. 223, no. 12, pp. 2517–2525, oct 2014. [Online]. Available:
http://link.springer.com/10.1140/epjst/e2014-02273-0
[29] S. Kettemann, “Delocalization of phase perturbations and the stability
of ac electricity grids,” preprint arXiv:1504.05525, 2015.
[30] D. Jung and S. Kettemann, “Long-range Response in AC Electricity
Grids,” arXiv preprint, no. 3, pp. 1–8, 2015. [Online]. Available:
http://arxiv.org/abs/1512.05391
[31] P. Hines, I. Dobson, and P. Rezaei, “Cascading power outages propagate
locally in an influence graph that is not the actual grid topology,” IEEE
Transactions on Power Systems, vol. PP, no. 99, pp. 1–1, 2016.
[32] T. Mühlpfordt, T. Faulwasser, and V. Hagenmeyer, “Solving stochastic ac
power flow via polynomial chaos expansion,” in 2016 IEEE Conference
on Control Applications (CCA), Sept 2016, pp. 70–76.
The Arbitrarily Varying Broadcast Channel with
Degraded Message Sets with Causal Side
Information at the Encoder
Uzi Pereg and Yossef Steinberg
arXiv:1709.04770v2 [cs.IT] 15 Sep 2017
Abstract
In this work, we study the arbitrarily varying broadcast channel (AVBC), when state information is available at the transmitter
in a causal manner. We establish inner and outer bounds on both the random code capacity region and the deterministic code
capacity region with degraded message sets. The capacity region is then determined for a class of channels satisfying a condition
on the mutual informations between the strategy variables and the channel outputs. As an example, we consider the arbitrarily
varying binary symmetric broadcast channel with correlated noises. We show cases where the condition holds, hence the capacity
region is determined, and other cases where there is a gap between the bounds.
Index Terms
Arbitrarily varying channel, broadcast channel, degraded message sets, causal state information, Shannon strategies, side
information, minimax theorem, deterministic code, random code, symmetrizability.
The arbitrarily varying channel (AVC) was first introduced by Blackwell et al. [5] to describe a communication channel
with unknown statistics, that may change over time. It is often described as communication in the presence of an adversary,
or a jammer, attempting to disrupt communication.
The arbitrarily varying broadcast channel (AVBC) without side information (SI) was first considered by Jahn [13], who
derived an inner bound on the random code capacity region, namely the capacity region achieved by encoder and decoders
with a random experiment, shared between the three parties. As indicated by Jahn, the arbitrarily varying broadcast channel
inherits some of the properties of its single user counterpart. In particular, the random code capacity region is not necessarily
achievable using deterministic codes [5]. Furthermore, Jahn showed that the deterministic code capacity region either coincides
with the random code capacity region or else, it has an empty interior [13]. This phenomenon is an analogue of Ahlswede’s
dichotomy property [2]. Then, in order to apply Jahn’s inner bound, one has to verify whether the capacity region has nonempty interior or not. As observed in [12], this can be resolved using the results of Ericson [10] and Csiszár and Narayan [8].
Specifically, a necessary and sufficient condition for the capacity region to have a non-empty interior is that both user marginal
channels are non-symmetrizable.
Various models of interest involve SI available at the encoder. In [19], the arbitrarily varying degraded broadcast channel
with non-causal SI is addressed, using Ahlswede’s Robustification and Elimination Techniques [1]. The single user AVC with
causal SI is addressed in the book by Csiszár and Körner [7], while their approach is independent of Ahlswede’s work. A
straightforward application of Ahlswede’s Robustification Technique (RT) would violate the causality requirement.
In this work, we study the AVBC with causal SI available at the encoder. We extend Ahlswede’s Robustification and
Elimination Techniques [2, 1], originally used in the setting of non-causal SI. In particular, we derive a modified version of
Ahlswede’s RT, suited to the setting of causal SI. In a recent paper by the authors [15], a similar proof technique is applied
to the arbitrarily varying degraded broadcast channel with causal SI. Here, we generalize those results, and consider a general
broadcast channel with degraded message sets with causal SI.
We establish inner and outer bounds on the random code and deterministic code capacity regions. Furthermore, we give
conditions on the AVBC under which the bounds coincide, and the capacity region is determined. As an example, we consider
the arbitrarily varying binary symmetric broadcast channel with correlated noises. We show that in some cases, the conditions
hold and the capacity region is determined, whereas in other cases there is a gap between the bounds.
I. DEFINITIONS AND PREVIOUS RESULTS
A. Notation
We use the following notation conventions throughout. Calligraphic letters X , S, Y, ... are used for finite sets. Lowercase
letters x, s, y, . . . stand for constants and values of random variables, and uppercase letters X, S, Y, . . . stand for random
variables. The distribution of a random variable X is specified by a probability mass function (pmf) PX (x) = p(x) over a
finite set X . The set of all pmfs over X is denoted by P(X ). We use xj = (x1 , x2 , . . . , xj ) to denote a sequence of letters
from X . A random sequence X n and its distribution PX n (xn ) = p(xn ) are defined accordingly. For a pair of integers i and
j, 1 ≤ i ≤ j, we define the discrete interval [i : j] = {i, i + 1, . . . , j}.
This research was supported by the Israel Science Foundation (grant No. 1285/16).
B. Channel Description
A state-dependent discrete memoryless broadcast channel (X × S, WY1 ,Y2 |X,S , Y1 , Y2 ) consists of a finite input alphabet
X , two finite output alphabets Y1 and Y2 , a finite state alphabet S, and a collection of conditional
pmfs W_{Y_1,Y_2|X,S}. The channel is memoryless without feedback, and therefore W_{Y_1^n,Y_2^n|X^n,S^n}(y_1^n, y_2^n | x^n, s^n) = ∏_{i=1}^{n} W_{Y_1,Y_2|X,S}(y_{1,i}, y_{2,i} | x_i, s_i).
The marginals WY1 |X,S and WY2 |X,S correspond to user 1 and user 2, respectively. Throughout, unless mentioned otherwise,
it is assumed that the users have degraded message sets. That is, the encoder sends a private message which is intended for
user 1, and a public message which is intended for both users. For state-dependent broadcast channels with causal SI, the
channel input at time $i\in[1:n]$ may depend on the sequence of past and present states $s^i$.
The arbitrarily varying broadcast channel (AVBC) is a discrete memoryless broadcast channel WY1 ,Y2 |X,S with a state
sequence of unknown distribution, not necessarily independent nor stationary. That is, S n ∼ q(sn ) with an unknown joint
pmf q(sn ) over S n . In particular, q(sn ) can give mass 1 to some state sequence sn . We denote the AVBC with causal SI by
B = {WY1 ,Y2 |X,S }.
To analyze the AVBC with degraded message sets with causal SI, we consider the compound broadcast channel. Different
models of compound broadcast channels have been considered in the literature, as e.g. in [18] and [3]. Here, we define
the compound broadcast channel as a discrete memoryless broadcast channel with a discrete memoryless state, where the
state distribution $q(s)$ is not known exactly, but rather belongs to a family of distributions $\mathcal{Q}$, with $\mathcal{Q}\subseteq\mathcal{P}(\mathcal{S})$. That is, $S^n \sim \prod_{i=1}^n q(s_i)$, with an unknown pmf $q\in\mathcal{Q}$ over $\mathcal{S}$. We denote the compound broadcast channel with causal SI by $\mathcal{B}^{\mathcal{Q}}$.
The random parameter broadcast channel is a special case of a compound broadcast channel where the set Q consists of
a single distribution, i.e. when the state sequence is memoryless and distributed according to a given state distribution q(s).
Hence, we denote the random parameter broadcast channel with causal SI by B q .
In Figure 1, we set the basic notation for the broadcast channel families that we consider. The columns correspond to the
channel families presented above, namely the random parameter broadcast channel, the compound broadcast channel and the
AVBC. The rows indicate the role of SI, namely the case of no SI and causal SI. In the first row, and throughout, we use the
subscript ‘0’ to indicate the case where SI is not available.
C. Coding with Degraded Message Sets
We introduce some preliminary definitions, starting with the definitions of a deterministic code and a random code for the
AVBC B with degraded message sets with causal SI. Note that in general, the term ‘code’, unless mentioned otherwise, refers
to a deterministic code.
Definition 1 (A code, an achievable rate pair and capacity region). A (2nR0 , 2nR1 , n) code for the AVBC B with degraded
message sets with causal SI consists of the following; two message sets [1 : 2nR0 ] and [1 : 2nR1 ], where it is assumed
throughout that 2nR0 and 2nR1 are integers, a sequence of n encoding functions fi : [1 : 2nR0 ] × [1 : 2nR1 ] × S i → X ,
i ∈ [1 : n], and two decoding functions, g1 : Y1n → [1 : 2nR0 ] × [1 : 2nR1 ] and g2 : Y2n → [1 : 2nR0 ].
At time $i\in[1:n]$, given a pair of messages $(m_0,m_1)\in[1:2^{nR_0}]\times[1:2^{nR_1}]$ and a sequence $s^i$, the encoder transmits $x_i = f_i(m_0,m_1,s^i)$. The codeword is then given by
$$x^n = f^n(m_0,m_1,s^n) \triangleq \big(f_1(m_0,m_1,s_1),\, f_2(m_0,m_1,s^2),\, \ldots,\, f_n(m_0,m_1,s^n)\big)\,. \tag{1}$$
Decoder 1 receives the channel output $y_1^n$ and finds an estimate for the message pair $(\hat{m}_0,\hat{m}_1) = g_1(y_1^n)$. Decoder 2 only estimates the common message, with $\tilde{m}_0 = g_2(y_2^n)$. We denote the code by $\mathcal{C} = (f^n(\cdot,\cdot,\cdot), g_1(\cdot), g_2(\cdot))$.
Define the conditional probability of error of $\mathcal{C}$ given a state sequence $s^n\in\mathcal{S}^n$ by
$$P^{(n)}_{e|s^n}(\mathcal{C}) = \frac{1}{2^{n(R_0+R_1)}} \sum_{m_0=1}^{2^{nR_0}} \sum_{m_1=1}^{2^{nR_1}} \sum_{(y_1^n,y_2^n)\in\mathcal{D}(m_0,m_1)^c} W_{Y_1^n,Y_2^n|X^n,S^n}\big(y_1^n,y_2^n\,\big|\,f^n(m_0,m_1,s^n),s^n\big)\,, \tag{2}$$
where
$$\mathcal{D}(m_0,m_1) \triangleq \big\{ (y_1^n,y_2^n)\in\mathcal{Y}_1^n\times\mathcal{Y}_2^n \,:\, g_1(y_1^n) = (m_0,m_1)\,,\ g_2(y_2^n) = m_0 \big\}\,. \tag{3}$$

Fig. 1. Notation of broadcast channel families. The columns correspond to the channel family, and the rows indicate the role of SI at the encoder.

  SI \ Channel     Random Parameter    Compound    AVBC
  without SI       --                  B_0^Q       B_0
  causal SI        B^q                 B^Q         B
Now, define the average probability of error of $\mathcal{C}$ for some distribution $q(s^n)\in\mathcal{P}(\mathcal{S}^n)$,
$$P_e^{(n)}(q,\mathcal{C}) = \sum_{s^n\in\mathcal{S}^n} q(s^n)\cdot P^{(n)}_{e|s^n}(\mathcal{C})\,. \tag{4}$$
We say that $\mathcal{C}$ is a $(2^{nR_0},2^{nR_1},n,\varepsilon)$ code for the AVBC $\mathcal{B}$ if it further satisfies
$$P_e^{(n)}(q,\mathcal{C}) \le \varepsilon\,, \quad \text{for all } q(s^n)\in\mathcal{P}(\mathcal{S}^n)\,. \tag{5}$$
We say that a rate pair (R0 , R1 ) is achievable if for every ε > 0 and sufficiently large n, there exists a (2nR0 , 2nR1 , n, ε)
code. The operational capacity region is defined as the closure of the set of achievable rate pairs and it is denoted by C(B).
We use the term ‘capacity region’ to refer to this operational meaning, and in some places we call it the deterministic code capacity region in order to emphasize that achievability is measured with respect to deterministic codes.
We proceed now to define the parallel quantities when using stochastic-encoder stochastic-decoders triplets with common
randomness. The codes formed by these triplets are referred to as random codes.
Definition 2 (Random code). A (2nR0 , 2nR1 , n) random code for the AVBC B consists of a collection of (2nR0 , 2nR1 , n) codes
{Cγ = (fγn , g1,γ , g2,γ )}γ∈Γ , along with a probability distribution µ(γ) over the code collection Γ. We denote such a code by
C Γ = (µ, Γ, {Cγ }γ∈Γ ).
Analogously to the deterministic case, a $(2^{nR_0},2^{nR_1},n,\varepsilon)$ random code has the additional requirement
$$P_e^{(n)}(q,\mathcal{C}^\Gamma) = \sum_{\gamma\in\Gamma}\mu(\gamma)\,P_e^{(n)}(q,\mathcal{C}_\gamma) \le \varepsilon\,, \quad \text{for all } q(s^n)\in\mathcal{P}(\mathcal{S}^n)\,. \tag{6}$$
The capacity region achieved by random codes is denoted by C⋆(B), and it is referred to as the random code capacity region.
Next, we write the definition of superposition coding [4] using Shannon strategies [16]. See also [17], and the discussion
after Theorem 4 therein. Here, we refer to such codes as Shannon strategy codes.
Definition 3 (Shannon strategy codes). A $(2^{nR_0},2^{nR_1},n)$ Shannon strategy code for the AVBC $\mathcal{B}$ with degraded message sets with causal SI is a $(2^{nR_0},2^{nR_1},n)$ code with an encoder that is composed of two strategy sequences,
$$u_0^n : [1:2^{nR_0}] \to \mathcal{U}_0^n\,, \tag{7}$$
$$u_1^n : [1:2^{nR_0}]\times[1:2^{nR_1}] \to \mathcal{U}_1^n\,, \tag{8}$$
and an encoding function $\xi(u_0,u_1,s)$, where $\xi : \mathcal{U}_0\times\mathcal{U}_1\times\mathcal{S}\to\mathcal{X}$, as well as a pair of decoding functions $g_1 : \mathcal{Y}_1^n\to[1:2^{nR_0}]\times[1:2^{nR_1}]$ and $g_2 : \mathcal{Y}_2^n\to[1:2^{nR_0}]$. The codeword is then given by
$$x^n = \xi^n\big(u_0^n(m_0), u_1^n(m_0,m_1), s^n\big) \triangleq \big(\xi(u_{0,i}(m_0), u_{1,i}(m_0,m_1), s_i)\big)_{i=1}^n\,. \tag{9}$$
We denote the code by $\mathcal{C} = (u_0^n, u_1^n, \xi, g_1, g_2)$.
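To make the causal structure of a Shannon strategy code concrete, the following Python sketch encodes a message pair with randomly drawn strategy sequences. The blocklength, message-set sizes, and the XOR strategy map below are hypothetical choices for illustration only, not part of the construction above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy parameters: binary strategy alphabets, states and channel input.
n = 8                                          # blocklength
M0, M1 = 4, 4                                  # message-set sizes, standing in for 2^{nR0}, 2^{nR1}
u0 = rng.integers(0, 2, size=(M0, n))          # strategy sequences u_0^n(m0)
u1 = rng.integers(0, 2, size=(M0, M1, n))      # strategy sequences u_1^n(m0, m1)

def xi(u0_sym, u1_sym, s):
    """An illustrative strategy map xi : U0 x U1 x S -> X (XOR of the three bits)."""
    return (u0_sym + u1_sym + s) % 2

def encode(m0, m1, state_stream):
    """Causal encoding (9): x_i uses only u_{0,i}(m0), u_{1,i}(m0,m1) and the current state s_i."""
    x = []
    for i, s_i in enumerate(state_stream):     # the state is revealed one symbol at a time
        x.append(xi(u0[m0, i], u1[m0, m1, i], s_i))
    return x

s_seq = rng.integers(0, 2, size=n)             # state sequence chosen by the jammer
print(encode(m0=1, m1=2, state_stream=s_seq))
```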
D. In the Absence of Side Information – Inner Bound
In this subsection, we briefly review known results for the case where the state is not known to the encoder or the decoder,
i.e. SI is not available.
Consider a given AVBC with degraded message sets without SI, which we denote by $\mathcal{B}_0$. Let
$$\mathcal{R}^\star_{0,\mathrm{in}} \triangleq \bigcup_{p(x,u)} \;\bigcap_{q(s)} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le I_q(U;Y_2)\,, \\ R_1 \le I_q(X;Y_1|U)\,, \\ R_0+R_1 \le I_q(X;Y_1) \end{array} \right\}. \tag{10}$$
In [13, Theorem 2], Jahn introduced an inner bound for the arbitrarily varying general broadcast channel. In our case, with
degraded message sets, Jahn’s inner bound reduces to the following.
Theorem 1 (Jahn’s Inner Bound [13]). Let B0 be an AVBC with degraded message sets without SI. Then, R⋆0,in is an achievable
rate region using random codes over B0 , i.e.
C⋆(B0 ) ⊇ R⋆0,in .
(11)
Now we move to the deterministic code capacity region.
Theorem 2 (Ahlswede’s Dichotomy [13]). The capacity region of an AVBC B0 with degraded message sets without SI either
coincides with the random code capacity region or else, its interior is empty. That is, C(B0 ) = C⋆(B0 ) or else, int C(B0 ) = ∅.
By Theorem 1 and Theorem 2, we have that $\mathcal{R}^\star_{0,\mathrm{in}}$ is an achievable rate region if the interior of the capacity region is non-empty. That is, $\mathcal{C}(\mathcal{B}_0) \supseteq \mathcal{R}^\star_{0,\mathrm{in}}$ if $\operatorname{int}\mathcal{C}(\mathcal{B}_0) \neq \emptyset$.
Theorem 3 (see [10, 8, 12]). For an AVBC B0 without SI, the interior of the capacity region is non-empty, i.e. int C(B0 ) 6= ∅,
if and only if the marginals WY1 |X,S and WY2 |X,S are not symmetrizable.
II. MAIN RESULTS
We present our results on the compound broadcast channel and the AVBC with degraded message sets with causal SI.
A. The Compound Broadcast Channel with Causal SI
We now consider the case where the encoder has access to the state sequence in a causal manner, i.e. the encoder has S i .
1) Inner Bound: First, we provide an achievable rate region for the compound broadcast channel with degraded message
sets with causal SI. Consider a given compound broadcast channel $\mathcal{B}^{\mathcal{Q}}$ with causal SI. Let
$$\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}}) \triangleq \bigcup_{p(u_0,u_1),\,\xi(u_0,u_1,s)} \;\bigcap_{q(s)\in\mathcal{Q}} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le I_q(U_0;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_0)\,, \\ R_0+R_1 \le I_q(U_0,U_1;Y_1) \end{array} \right\}, \tag{12}$$
subject to $X = \xi(U_0,U_1,S)$, where $U_0$ and $U_1$ are auxiliary random variables, independent of $S$, and the union is over the pmf $p(u_0,u_1)$ and the set of all functions $\xi : \mathcal{U}_0\times\mathcal{U}_1\times\mathcal{S}\to\mathcal{X}$. This can also be expressed as
$$\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}}) = \bigcup_{p(u_0,u_1),\,\xi(u_0,u_1,s)} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le \inf_{q\in\mathcal{Q}} I_q(U_0;Y_2)\,, \\ R_1 \le \inf_{q\in\mathcal{Q}} I_q(U_1;Y_1|U_0)\,, \\ R_0+R_1 \le \inf_{q\in\mathcal{Q}} I_q(U_0,U_1;Y_1) \end{array} \right\}. \tag{13}$$
Lemma 4. Let B Q be a compound broadcast channel with degraded message sets with causal SI available at the encoder. Then,
Rin (B Q ) is an achievable rate region for B Q , i.e.
C(B Q ) ⊇ Rin (B Q ) .
(14)
Specifically, if (R0 , R1 ) ∈ Rin (B Q ), then for some a > 0 and sufficiently large n, there exists a (2nR0 , 2nR1 , n, e−an ) Shannon
strategy code over the compound broadcast channel B Q with degraded message sets with causal SI.
The proof of Lemma 4 is given in Appendix A.
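For intuition on how the constraints in (13) can be evaluated, the following Python sketch computes the three mutual information terms for a fixed strategy distribution and strategy map, and takes the infimum over a finite grid approximating Q. Everything below (binary alphabets, outputs taken conditionally independent given (X, S), the uniform p(u0, u1), the XOR map, and the grid for Q) is an illustrative assumption; note that the rate expressions themselves depend only on the marginal channels.

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) from a joint pmf given as a 2-D array."""
    px = pxy.sum(1, keepdims=True); py = pxy.sum(0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

def cond_mutual_info(pxyz):
    """I(X;Y|Z) from a joint pmf p(x,y,z)."""
    total = 0.0
    for z in range(pxyz.shape[2]):
        pz = pxyz[:, :, z].sum()
        if pz > 0:
            total += pz * mutual_info(pxyz[:, :, z] / pz)
    return total

# Illustrative binary channels: W1[y1,x,s] = W_{Y1|X,S}, W2[y2,x,s] = W_{Y2|X,S} (both BSCs).
W1 = np.zeros((2, 2, 2)); W2 = np.zeros((2, 2, 2))
theta, eps = [0.1, 0.4], [0.2, 0.45]
for s in range(2):
    for x in range(2):
        for y in range(2):
            W1[y, x, s] = theta[s] if y != x else 1 - theta[s]
            W2[y, x, s] = eps[s] if y != x else 1 - eps[s]

p_u = np.full((2, 2), 0.25)                   # illustrative p(u0, u1)
xi = lambda u0, u1, s: u0 ^ u1 ^ s            # illustrative strategy map

def rate_constraints(q):
    """Return (I_q(U0;Y2), I_q(U1;Y1|U0), I_q(U0,U1;Y1)) for a state pmf q."""
    joint = np.zeros((2, 2, 2, 2))            # p(u0, u1, y1, y2)
    for u0 in range(2):
        for u1 in range(2):
            for s in range(2):
                x = xi(u0, u1, s)
                for y1 in range(2):
                    for y2 in range(2):
                        joint[u0, u1, y1, y2] += q[s] * p_u[u0, u1] * W1[y1, x, s] * W2[y2, x, s]
    p_u0_y2 = joint.sum(axis=(1, 2))                    # p(u0, y2)
    p_u1_y1_u0 = joint.sum(axis=3).transpose(1, 2, 0)   # p(u1, y1, u0)
    p_u0u1_y1 = joint.sum(axis=3).reshape(4, 2)         # p((u0,u1), y1)
    return mutual_info(p_u0_y2), cond_mutual_info(p_u1_y1_u0), mutual_info(p_u0u1_y1)

# Infimum over a finite grid approximating Q (here Q = P(S)).
grid = [np.array([1 - t, t]) for t in np.linspace(0, 1, 21)]
bounds = np.min([rate_constraints(q) for q in grid], axis=0)
print("R0 <= %.4f,  R1 <= %.4f,  R0+R1 <= %.4f" % tuple(bounds))
```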
2) The Capacity Region: We determine the capacity region of the compound broadcast channel B Q with degraded message
sets with causal SI available at the encoder. In addition, we give a condition, for which the inner bound in Lemma 4 coincides
with the capacity region. Let
$$\mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}}) \triangleq \bigcap_{q(s)\in\mathcal{Q}} \;\bigcup_{p(u_0,u_1),\,\xi(u_0,u_1,s)} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le I_q(U_0;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_0)\,, \\ R_0+R_1 \le I_q(U_0,U_1;Y_1) \end{array} \right\}. \tag{15}$$
Now, our condition is defined in terms of the following.
Definition 4. We say that a function $\xi : \mathcal{U}_0\times\mathcal{U}_1\times\mathcal{S}\to\mathcal{X}$ and a set $\mathcal{D}\subseteq\mathcal{P}(\mathcal{U}_0\times\mathcal{U}_1)$ achieve both $\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}})$ and $\mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}})$ if
$$\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}}) = \bigcup_{p(u_0,u_1)\in\mathcal{D}} \;\bigcap_{q(s)\in\mathcal{Q}} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le I_q(U_0;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_0)\,, \\ R_0+R_1 \le I_q(U_0,U_1;Y_1) \end{array} \right\}, \tag{16a}$$
and
$$\mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}}) = \bigcap_{q(s)\in\mathcal{Q}} \;\bigcup_{p(u_0,u_1)\in\mathcal{D}} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le I_q(U_0;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_0)\,, \\ R_0+R_1 \le I_q(U_0,U_1;Y_1) \end{array} \right\}, \tag{16b}$$
subject to $X = \xi(U_0,U_1,S)$. That is, the unions in (12) and (15) can be restricted to the particular function $\xi(u_0,u_1,s)$ and set of strategy distributions $\mathcal{D}$.
Observe that by Definition 4, given a function ξ(u0 , u1 , s), if a set D achieves both Rin (B Q ) and Rout (B Q ), then every
set D′ with D ⊆ D′ ⊆ P(U0 × U1 ) achieves those regions, and in particular, D′ = P(U0 × U1 ). Nevertheless, the condition
defined below requires a certain property that may hold for D, but not for D′ .
Definition 5. Given a convex set $\mathcal{Q}$ of state distributions, define Condition $T^{\mathcal{Q}}$ by the following: for some $\xi(u_0,u_1,s)$ and $\mathcal{D}$ that achieve both $\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}})$ and $\mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}})$, there exists $q^*\in\mathcal{Q}$ which minimizes the mutual informations $I_q(U_0;Y_2)$, $I_q(U_1;Y_1|U_0)$, and $I_q(U_0,U_1;Y_1)$, for all $p(u_0,u_1)\in\mathcal{D}$, i.e.
$$T^{\mathcal{Q}}: \ \text{For some } q^*\in\mathcal{Q}, \quad q^* = \arg\min_{q\in\mathcal{Q}} I_q(U_0;Y_2) = \arg\min_{q\in\mathcal{Q}} I_q(U_1;Y_1|U_0) = \arg\min_{q\in\mathcal{Q}} I_q(U_0,U_1;Y_1)\,, \quad \forall\, p(u_0,u_1)\in\mathcal{D}\,. \tag{17}$$
Intuitively, when Condition T Q holds, there exists a single jamming strategy q ∗ (s) which is worst for both users simultaneously. That is, there is no tradeoff for the jammer. As the optimal jamming strategy is unique, this eliminates ambiguity for
the users as well.
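A rough way to probe Condition $T^{\mathcal{Q}}$ numerically is to discretize $\mathcal{Q}$ and check whether a single grid point minimizes every term in (17) for every strategy distribution in a candidate set $\mathcal{D}$. The sketch below only illustrates the bookkeeping of such a check: the two surrogate functions stand in for the mutual informations and are purely hypothetical.

```python
import numpy as np

def common_minimizer(info_terms, q_grid, dists, tol=1e-9):
    """Grid check of Condition T^Q: is there one q* in the grid that simultaneously
    minimizes every callable in `info_terms` (each maps (p, q) -> value) for every p?"""
    candidates = set(range(len(q_grid)))
    for p in dists:
        for term in info_terms:
            vals = np.array([term(p, q) for q in q_grid])
            argmins = set(np.flatnonzero(vals <= vals.min() + tol))
            candidates &= argmins              # keep only common minimizers
            if not candidates:
                return None                    # no single worst-case q* on this grid
    return q_grid[min(candidates)]

# Toy surrogates (not an actual broadcast channel): both are minimized at q = (0.5, 0.5)
# for every p, so the condition holds and the common minimizer is returned.
q_grid = [np.array([1 - t, t]) for t in np.linspace(0, 1, 11)]
dists = [np.array([1 - b, b]) for b in (0.2, 0.5, 0.8)]
f1 = lambda p, q: float(p @ np.array([1, 2]) * (1 + (q[1] - 0.5) ** 2))
f2 = lambda p, q: float((q[1] - 0.5) ** 2 + p[0])
print(common_minimizer([f1, f2], q_grid, dists))
```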
Theorem 5. Let B Q be a compound broadcast channel with causal SI available at the encoder. Then,
1) the capacity region of $\mathcal{B}^{\mathcal{Q}}$ satisfies
$$\mathcal{C}(\mathcal{B}^{\mathcal{Q}}) = \mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}})\,, \quad \text{if } \operatorname{int}\mathcal{C}(\mathcal{B}^{\mathcal{Q}})\neq\emptyset\,, \tag{18}$$
and it is identical to the corresponding random code capacity region, i.e. $\mathcal{C}^\star(\mathcal{B}^{\mathcal{Q}}) = \mathcal{C}(\mathcal{B}^{\mathcal{Q}})$ if $\operatorname{int}\mathcal{C}(\mathcal{B}^{\mathcal{Q}})\neq\emptyset$.
2) Suppose that Q ⊆ P(S) is a convex set of state distributions. If Condition T Q holds, the capacity region of B Q is given
by
C(B Q ) = Rin (B Q ) = Rout (B Q ) ,
(19)
and it is identical to the corresponding random code capacity region, i.e. C⋆(B Q ) = C(B Q ).
The proof of Theorem 5 is given in Appendix B. Regarding part 1, we note that when $\operatorname{int}\mathcal{C}(\mathcal{B}^{\mathcal{Q}}) = \emptyset$, the inner bound $\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}})$ has an empty interior as well (see (13)). Thus, $\operatorname{int}\mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}})\neq\emptyset$ is also a sufficient condition for $\mathcal{C}(\mathcal{B}^{\mathcal{Q}}) = \mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}})$.
3) The Random Parameter Broadcast Channel with Causal SI: Consider the random parameter broadcast channel with
causal SI. Recall that this is simply a special case of a compound broadcast channel, where the set of state distributions
consists of a single member, i.e. Q = {q(s)}. Then, let
$$\mathsf{C}(\mathcal{B}^q) \triangleq \bigcup_{p(u_0,u_1),\,\xi(u_0,u_1,s)} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le I_q(U_0;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_0)\,, \\ R_0+R_1 \le I_q(U_0,U_1;Y_1) \end{array} \right\}, \tag{20}$$
with
$$|\mathcal{U}_0| \le |\mathcal{X}||\mathcal{S}| + 2\,, \qquad |\mathcal{U}_1| \le |\mathcal{X}||\mathcal{S}|\big(|\mathcal{X}||\mathcal{S}|+2\big)\,. \tag{21}$$
Theorem 6. The capacity region of the random parameter broadcast channel $\mathcal{B}^q$ with degraded message sets with causal SI is given by
$$\mathcal{C}(\mathcal{B}^q) = \mathsf{C}(\mathcal{B}^q)\,. \tag{22}$$
Theorem 6 is proved in Appendix C.
B. The AVBC with Causal SI
We give inner and outer bounds, on the random code capacity region and the deterministic code capacity region, for the
AVBC B with degraded message sets with causal SI. We also provide conditions, for which the inner bound coincides with
the outer bound.
1) Random Code Inner and Outer Bounds: Define
$$\mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}) \triangleq \mathcal{R}_{\mathrm{in}}(\mathcal{B}^{\mathcal{Q}})\Big|_{\mathcal{Q}=\mathcal{P}(\mathcal{S})}\,, \qquad \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}) \triangleq \mathcal{R}_{\mathrm{out}}(\mathcal{B}^{\mathcal{Q}})\Big|_{\mathcal{Q}=\mathcal{P}(\mathcal{S})}\,, \tag{23}$$
and
$$T = T^{\mathcal{Q}}\Big|_{\mathcal{Q}=\mathcal{P}(\mathcal{S})}\,. \tag{24}$$
Theorem 7. Let B be an AVBC with degraded message sets with causal SI available at the encoder. Then,
1) the random code capacity region of B is bounded by
R⋆in (B) ⊆ C⋆(B) ⊆ R⋆out (B) .
(25)
2) If Condition T holds, the random code capacity region of B is given by
C⋆(B) = R⋆in (B) = R⋆out (B) .
(26)
The proof of Theorem 7 is given in Appendix D.
Before we proceed to the deterministic code capacity region, we need one further result. The following lemma is a restatement
of a result from [2], stating that a polynomial size of the code collection {Cγ } is sufficient. This result is a key observation
in Ahlswede’s Elimination Technique (ET), presented in [2], and it is significant for the deterministic code analysis.
Lemma 8. Consider a given (2nR0 , 2nR1 , n, εn ) random code C Γ = (µ, Γ, {Cγ }γ∈Γ ) for the AVBC B, where limn→∞ εn = 0.
Then, for every 0 < α < 1 and sufficiently large n, there exists a (2nR0 , 2nR1 , n, α) random code (µ∗ , Γ∗ , {Cγ }γ∈Γ∗ ) with
the following properties:
1) The size of the code collection is bounded by |Γ∗ | ≤ n2 .
2) The code collection is a subset of the original code collection, i.e. Γ∗ ⊆ Γ.
3) The distribution $\mu^*$ is uniform, i.e. $\mu^*(\gamma) = \frac{1}{|\Gamma^*|}$, for $\gamma\in\Gamma^*$.
The proof of Lemma 8 follows the same lines as in [2, Section 4] (see also [13, 19]). For completeness, we give the proof
in Appendix E.
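The following Monte Carlo sketch illustrates, on synthetic numbers, the effect behind Lemma 8 (it is not the proof): given per-code conditional error probabilities whose µ-average is at most ε_n for every state sequence, subsampling k = n² codes i.i.d. from µ and re-weighting them uniformly keeps the worst-case average error of the same order. The error matrix and all sizes below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic setting: num_codes codes, num_states "state sequences", and a matrix
# err[gamma, s] of conditional error probabilities with mu-average <= eps_n for every s.
num_codes, num_states, eps_n, n = 10_000, 200, 1e-3, 50
mu = rng.dirichlet(np.ones(num_codes))                 # code-selection distribution mu
err = rng.random((num_codes, num_states)) * 2 * eps_n  # raw values in [0, 2*eps_n]
err *= eps_n / (mu @ err).max()                        # enforce max_s sum_g mu(g) err(g, s) <= eps_n

# Elimination step: keep only k = n^2 codes, sampled i.i.d. from mu, with uniform weights.
k = n ** 2
subset = rng.choice(num_codes, size=k, p=mu)
worst_after = err[subset].mean(axis=0).max()           # max over states of the uniform average
print(f"before: {(mu @ err).max():.2e},  after subsampling {k} codes: {worst_after:.2e}")
```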
2) Deterministic Code Inner and Outer Bounds: The next theorem characterizes the deterministic code capacity region,
which demonstrates a dichotomy property.
Theorem 9. The capacity region of an AVBC B with degraded message sets with causal SI either
coincides with the random
code capacity region or else, it has an empty interior. That is, C(B) = C⋆(B) or else, int C(B) = ∅.
The proof of Theorem 9 is given in Appendix F. Let $U = (U_0,U_1)$, hence $\mathcal{U} = \mathcal{U}_0\times\mathcal{U}_1$. For every pair of functions $\xi : \mathcal{U}\times\mathcal{S}\to\mathcal{X}$ and $\xi' : \mathcal{U}_0\times\mathcal{S}\to\mathcal{X}$, define the DMCs $V^{\xi}_{Y_1|U,S}$ and $V^{\xi'}_{Y_2|U_0,S}$ specified by
$$V^{\xi}_{Y_1|U,S}(y_1|u,s) = W_{Y_1|X,S}(y_1|\xi(u,s),s)\,, \tag{27a}$$
$$V^{\xi'}_{Y_2|U_0,S}(y_2|u_0,s) = W_{Y_2|X,S}(y_2|\xi'(u_0,s),s)\,, \tag{27b}$$
respectively.
Corollary 10. The capacity region of $\mathcal{B}$ is bounded by
$$\mathcal{C}(\mathcal{B}) \supseteq \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B})\,, \quad \text{if } \operatorname{int}\mathcal{C}(\mathcal{B})\neq\emptyset\,, \tag{28}$$
$$\mathcal{C}(\mathcal{B}) \subseteq \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B})\,. \tag{29}$$
Furthermore, if $V^{\xi}_{Y_1|U,S}$ and $V^{\xi'}_{Y_2|U_0,S}$ are non-symmetrizable for some $\xi : \mathcal{U}\times\mathcal{S}\to\mathcal{X}$ and $\xi' : \mathcal{U}_0\times\mathcal{S}\to\mathcal{X}$, and Condition $T$ holds, then $\mathcal{C}(\mathcal{B}) = \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}) = \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B})$.
The proof of Corollary 10 is given in Appendix G.
III. DEGRADED BROADCAST CHANNEL WITH CAUSAL SI
In this section, we consider the special case of an arbitrarily varying degraded broadcast channel (AVDBC) with causal SI,
when user 1 and user 2 have private messages.
A. Definitions
We consider a degraded broadcast channel (DBC), which is a special case of the general broadcast channel described in the previous sections. Following the definitions in [17], a state-dependent broadcast channel $W_{Y_1,Y_2|X,S}$ is said to be physically degraded if it can be expressed as
$$W_{Y_1,Y_2|X,S}(y_1,y_2|x,s) = W_{Y_1|X,S}(y_1|x,s)\cdot p(y_2|y_1)\,, \tag{30}$$
i.e. $(X,S)\,\text{--}\,Y_1\,\text{--}\,Y_2$ form a Markov chain. User 1 is then referred to as the stronger user, whereas user 2 is referred to as the weaker user. More generally, a broadcast channel is said to be stochastically degraded if $W_{Y_2|X,S}(y_2|x,s) = \sum_{y_1\in\mathcal{Y}_1} W_{Y_1|X,S}(y_1|x,s)\cdot\tilde{p}(y_2|y_1)$ for some conditional distribution $\tilde{p}(y_2|y_1)$. We note that the definition of degradedness here is stricter than the definition in [13, Remark IIB5]. Our results apply to both the physically degraded and the stochastically degraded broadcast channels. Thus, for our purposes, there is no need to distinguish between the two, and we simply say that the broadcast channel is degraded. We use the notation $\mathcal{B}_D$ for an AVDBC with causal SI.
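The stochastic degradedness used here is a linear feasibility condition, so for small alphabets it can be checked directly: one asks for a single conditional pmf $\tilde{p}(y_2|y_1)$ that reproduces $W_{Y_2|X,S}$ from $W_{Y_1|X,S}$ for every $(x,s)$. The Python sketch below sets this up as a linear program; it assumes SciPy is available, and the test channel (the BSBC of Example 1 with hypothetical parameter values) is only an illustration.

```python
import numpy as np
from scipy.optimize import linprog

def is_stochastically_degraded(W1, W2):
    """Feasibility check for the (strict) degradedness of Section III-A: does a single
    conditional pmf p(y2|y1) satisfy
    W_{Y2|X,S}(y2|x,s) = sum_{y1} W_{Y1|X,S}(y1|x,s) p(y2|y1) for all (x, s, y2)?
    W1[y1, x, s] and W2[y2, x, s] are the two marginal channels."""
    nY1, nX, nS = W1.shape
    nY2 = W2.shape[0]
    nvar = nY1 * nY2                               # unknowns p(y2|y1), flattened as (y1, y2)
    A_eq, b_eq = [], []
    for x in range(nX):                            # matching constraints, one per (x, s, y2)
        for s in range(nS):
            for y2 in range(nY2):
                row = np.zeros(nvar)
                for y1 in range(nY1):
                    row[y1 * nY2 + y2] = W1[y1, x, s]
                A_eq.append(row); b_eq.append(W2[y2, x, s])
    for y1 in range(nY1):                          # each row p(.|y1) must sum to one
        row = np.zeros(nvar); row[y1 * nY2:(y1 + 1) * nY2] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    res = linprog(np.zeros(nvar), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * nvar, method="highs")
    return res.success

# Illustrative check on the BSBC of Example 1 (Y1 = X + Z_s, Y2 = Y1 + K), which is degraded.
theta, alpha = [0.1, 0.3], 0.2
W1 = np.zeros((2, 2, 2)); W2 = np.zeros((2, 2, 2))
for s in range(2):
    flip2 = theta[s] * (1 - alpha) + (1 - theta[s]) * alpha   # P(Y2 != X | S = s)
    for x in range(2):
        for y in range(2):
            W1[y, x, s] = theta[s] if y != x else 1 - theta[s]
            W2[y, x, s] = flip2 if y != x else 1 - flip2
print(is_stochastically_degraded(W1, W2))   # expected: True
```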
We consider the case where the users have private messages. A deterministic code and a random code for the AVDBC BD
with causal SI are then defined as follows.
Definition 6 (A private-message code, an achievable rate pair and capacity region). A (2nR1 , 2nR2 , n) private-message code
for the AVDBC BD with causal SI consists of the following; two message sets [1 : 2nR1 ] and [1 : 2nR2 ], where it is assumed
throughout that 2nR1 and 2nR2 are integers, a set of n encoding functions fi : [1 : 2nR1 ] × [1 : 2nR2 ] × S i → X , i ∈ [1 : n],
and two decoding functions, g1 : Y1n → [1 : 2nR1 ] and g2 : Y2n → [1 : 2nR2 ].
At time i ∈ [1 : n], given a pair of messages m1 ∈ [1 : 2nR1 ] and m2 ∈ [1 : 2nR2 ] and a sequence si , the encoder transmits
xi = fi (m1 , m2 , si ). The codeword is then given by
xn = f n (m1 , m2 , sn ) , f1 (m1 , m2 , s1 ), f2 (m1 , m2 , s2 ), . . . , fn (m1 , m2 , sn ) .
(31)
Decoder $k$ receives the channel output $y_k^n$, for $k = 1,2$, and finds an estimate for the $k$th message, $\hat{m}_k = g_k(y_k^n)$. Denote the code by $\mathcal{C} = (f^n(\cdot,\cdot,\cdot), g_1(\cdot), g_2(\cdot))$.
Define the conditional probability of error of $\mathcal{C}$ given a state sequence $s^n\in\mathcal{S}^n$ by
$$P^{(n)}_{e|s^n}(\mathcal{C}) = \frac{1}{2^{n(R_1+R_2)}} \sum_{m_1=1}^{2^{nR_1}} \sum_{m_2=1}^{2^{nR_2}} \sum_{(y_1^n,y_2^n)\in\mathcal{D}(m_1,m_2)^c} W_{Y_1^n,Y_2^n|X^n,S^n}\big(y_1^n,y_2^n\,\big|\,f^n(m_1,m_2,s^n),s^n\big)\,, \tag{32}$$
where
$$\mathcal{D}(m_1,m_2) \triangleq \big\{ (y_1^n,y_2^n)\in\mathcal{Y}_1^n\times\mathcal{Y}_2^n \,:\, g_1(y_1^n) = m_1\,,\ g_2(y_2^n) = m_2 \big\}\,. \tag{33}$$
We say that $\mathcal{C}$ is a $(2^{nR_1},2^{nR_2},n,\varepsilon)$ code for the AVDBC $\mathcal{B}_D$ if it further satisfies
$$P_e^{(n)}(q,\mathcal{C}) = \sum_{s^n\in\mathcal{S}^n} q(s^n)\cdot P^{(n)}_{e|s^n}(\mathcal{C}) \le \varepsilon\,, \quad \text{for all } q(s^n)\in\mathcal{P}(\mathcal{S}^n)\,. \tag{34}$$
An achievable private-message rate pair (R1 , R2 ) and the capacity region C(BD ) are defined as usual.
We proceed now to define the parallel quantities when using stochastic-encoder stochastic-decoders triplets with common
randomness.
Definition 7 (Random code). A (2nR1 , 2nR2 , n) private-message random code for the AVDBC BD consists of a collection of
(2nR1 , 2nR2 , n) codes {Cγ = (fγn , g1,γ , g2,γ )}γ∈Γ , along with a probability distribution µ(γ) over the code collection Γ.
Analogously to the deterministic case, a $(2^{nR_1},2^{nR_2},n,\varepsilon)$ random code has the additional requirement
$$P_e^{(n)}(q,\mathcal{C}^\Gamma) = \sum_{\gamma\in\Gamma}\mu(\gamma)\,P_e^{(n)}(q,\mathcal{C}_\gamma) \le \varepsilon\,, \quad \text{for all } q(s^n)\in\mathcal{P}(\mathcal{S}^n)\,. \tag{35}$$
The private-message capacity region achieved by random codes is denoted by C⋆(BD ), and it is referred to as the random code
capacity region.
By standard arguments, a private-message rate pair (R1 , R2 ) is achievable for the AVDBC BD if and only if (R0 , R1 ) is
achievable with degraded message sets, with R0 = R2 . This immediately implies the following results.
B. Results
The results in this section are a straightforward consequence of the results in Section II.
1) Random Code Inner and Outer Bounds: Define
$$\mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D) \triangleq \bigcup_{p(u_1,u_2),\,\xi(u_1,u_2,s)} \;\bigcap_{q(s)} \left\{ (R_1,R_2) \,:\, \begin{array}{l} R_2 \le I_q(U_2;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_2) \end{array} \right\}, \tag{36}$$
and
$$\mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D) \triangleq \bigcap_{q(s)} \;\bigcup_{p(u_1,u_2),\,\xi(u_1,u_2,s)} \left\{ (R_1,R_2) \,:\, \begin{array}{l} R_2 \le I_q(U_2;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_2) \end{array} \right\}. \tag{37}$$
Now, we define a condition in terms of the following.
Definition 8. We say that a function $\xi : \mathcal{U}_1\times\mathcal{U}_2\times\mathcal{S}\to\mathcal{X}$ and a set $\mathcal{D}^\star\subseteq\mathcal{P}(\mathcal{U}_1\times\mathcal{U}_2)$ achieve both $\mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D)$ and $\mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D)$ if
$$\mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D) = \bigcup_{p(u_1,u_2)\in\mathcal{D}^\star} \;\bigcap_{q(s)} \left\{ (R_1,R_2) \,:\, \begin{array}{l} R_2 \le I_q(U_2;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_2) \end{array} \right\}, \tag{38a}$$
and
$$\mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D) = \bigcap_{q(s)} \;\bigcup_{p(u_1,u_2)\in\mathcal{D}^\star} \left\{ (R_1,R_2) \,:\, \begin{array}{l} R_2 \le I_q(U_2;Y_2)\,, \\ R_1 \le I_q(U_1;Y_1|U_2) \end{array} \right\}, \tag{38b}$$
subject to $X = \xi(U_1,U_2,S)$. That is, the unions in (36) and (37) can be restricted to the particular function $\xi(u_1,u_2,s)$ and set of strategy distributions $\mathcal{D}^\star$.
Definition 9. Define Condition $T_D$ by the following: for some $\xi(u_1,u_2,s)$ and $\mathcal{D}^\star$ that achieve both $\mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D)$ and $\mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D)$, there exists $q^*\in\mathcal{P}(\mathcal{S})$ which minimizes both $I_q(U_2;Y_2)$ and $I_q(U_1;Y_1|U_2)$, for all $p(u_1,u_2)\in\mathcal{D}^\star$, i.e.
$$T_D: \ \text{For some } q^*\in\mathcal{P}(\mathcal{S}), \quad q^* = \arg\min_{q(s)} I_q(U_2;Y_2) = \arg\min_{q(s)} I_q(U_1;Y_1|U_2)\,, \quad \forall\, p(u_1,u_2)\in\mathcal{D}^\star\,.$$
Theorem 11. Let BD be an AVDBC with causal SI available at the encoder. Then,
1) the random code capacity region of BD is bounded by
R⋆in (BD ) ⊆ C⋆(BD ) ⊆ R⋆out (BD ) .
(39)
2) If Condition TD holds, the random code capacity region of BD is given by
C⋆(BD ) = R⋆in (BD ) = R⋆out (BD ) .
(40)
Theorem 11 is a straightforward consequence of Theorem 7.
2) Deterministic Code Inner and Outer Bounds: The next theorem characterizes the deterministic code capacity region,
which demonstrates a dichotomy property.
Theorem 12. The capacity region of an AVDBC BD with causal SI either coincides
with the random code capacity region or
else, it has an empty interior. That is, C(BD ) = C⋆(BD ) or else, int C(BD ) = ∅.
Theorem 12 is a straightforward consequence of Theorem 9. Now, Theorem 11 and Theorem 12 yield the following corollary.
For every function $\xi' : \mathcal{U}_2\times\mathcal{S}\to\mathcal{X}$, define a DMC $V^{\xi'}_{Y_2|U_2,S}$ specified by
$$V^{\xi'}_{Y_2|U_2,S}(y_2|u_2,s) = W_{Y_2|X,S}(y_2|\xi'(u_2,s),s)\,. \tag{41}$$
Corollary 13. The capacity region of $\mathcal{B}_D$ is bounded by
$$\mathcal{C}(\mathcal{B}_D) \supseteq \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D)\,, \quad \text{if } \operatorname{int}\mathcal{C}(\mathcal{B}_D)\neq\emptyset\,, \tag{42}$$
$$\mathcal{C}(\mathcal{B}_D) \subseteq \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D)\,. \tag{43}$$
Furthermore, if $V^{\xi'}_{Y_2|U_2,S}$ is non-symmetrizable for some $\xi' : \mathcal{U}_2\times\mathcal{S}\to\mathcal{X}$, and Condition $T_D$ holds, then $\mathcal{C}(\mathcal{B}_D) = \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D) = \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D)$.
Fig. 2. The private-message capacity region of the AVDBC in Example 1, the arbitrarily varying binary symmetric broadcast channel. The area under the thick blue line is the capacity region of the AVDBC $\mathcal{B}_D$ with causal SI, with $\theta_0 = 0.005$, $\theta_1 = 0.9$, and $\alpha = 0.2$. The black square at the origin stands for the capacity region of the AVDBC $\mathcal{B}_{D,0}$ without SI, $\mathcal{C}(\mathcal{B}_{D,0}) = \{(0,0)\}$. The curves depict $\mathcal{C}(\mathcal{B}_D^q)$ for $q = 0, 0.25, 0.5, 0.75, 1$, where the capacity region of $\mathcal{B}_D$ is given by $\mathcal{C}(\mathcal{B}_D) = \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D) = \mathcal{C}(\mathcal{B}_D^q)$ for $q = 1$ (see (37)).
IV. EXAMPLES
To illustrate the results above, we give the following examples. In the first example, we consider an AVDBC and determine
the private-message capacity region. Then, in the second example, we consider a non-degraded AVBC and determine the
capacity region with degraded message sets.
Example 1. Consider an arbitrarily varying binary symmetric broadcast channel (BSBC),
$$Y_1 = X + Z_S \mod 2\,, \qquad Y_2 = Y_1 + K \mod 2\,,$$
where $X, Y_1, Y_2, S, Z_S, K$ are binary, with values in $\{0,1\}$. The additive noises are distributed according to
$$Z_s \sim \mathrm{Bernoulli}(\theta_s)\,, \ \text{for } s\in\{0,1\}\,, \qquad K \sim \mathrm{Bernoulli}(\alpha)\,,$$
with $\theta_0 \le 1-\theta_1 \le \tfrac{1}{2}$ and $\alpha < \tfrac{1}{2}$, where $K$ is independent of $(S,Z_S)$. It is readily seen that the channel is physically degraded. Then, consider the case where user 1 and user 2 have private messages.
We have the following results. Define the binary entropy function $h(x) = -x\log x - (1-x)\log(1-x)$, for $x\in[0,1]$, with logarithm to base 2. The private-message capacity region of the arbitrarily varying BSBC $\mathcal{B}_{D,0}$ without SI is given by
$$\mathcal{C}(\mathcal{B}_{D,0}) = \{(0,0)\}\,. \tag{44}$$
The private-message capacity region of the arbitrarily varying BSBC $\mathcal{B}_D$ with causal SI is given by
$$\mathcal{C}(\mathcal{B}_D) = \bigcup_{0\le\beta\le 1} \left\{ (R_1,R_2) \,:\, \begin{array}{l} R_2 \le 1 - h(\alpha*\beta*\theta_1)\,, \\ R_1 \le h(\beta*\theta_1) - h(\theta_1) \end{array} \right\}. \tag{45}$$
It will be seen in the achievability proof that the parameter β is related to the distribution of U1 , and thus the RHS of (45)
can be thought of as a union over Shannon strategies. The analysis is given in Appendix H.
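The boundary of the region in (45) is easy to evaluate numerically by sweeping β. The Python sketch below does so with the binary entropy function h and the binary convolution a ∗ b = a(1−b) + (1−a)b, using the parameter values of Figure 2 (θ1 = 0.9, α = 0.2); it merely tabulates the rate bounds and is not part of the analysis in Appendix H.

```python
import numpy as np

def h(x):
    """Binary entropy (base 2), with h(0) = h(1) = 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def conv(a, b):
    """Binary convolution a * b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

theta1, alpha = 0.9, 0.2          # Figure 2 parameters (theta0 does not appear in (45))
for beta in np.linspace(0, 1, 6):
    R2 = 1 - h(conv(alpha, conv(beta, theta1)))   # R2 <= 1 - h(alpha * beta * theta1)
    R1 = h(conv(beta, theta1)) - h(theta1)        # R1 <= h(beta * theta1) - h(theta1)
    print(f"beta={beta:.1f}:  R1 <= {R1:.4f},  R2 <= {R2:.4f}")
```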
It is shown in Appendix H that Condition $T_D$ holds and $\mathcal{C}(\mathcal{B}_D) = \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}_D) = \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}_D)$. Figure 2 provides a graphical interpretation. Consider a DBC $W_{Y_1,Y_2|X,S}$ with random parameters with causal SI, governed by an i.i.d. state sequence distributed according to $S\sim\mathrm{Bernoulli}(q)$, for a given $0\le q\le 1$, and let $\mathcal{C}(\mathcal{B}_D^q)$ denote the corresponding capacity region. Then, the analysis shows that Condition $T_D$ implies that there exists $0\le q^*\le 1$ such that $\mathcal{C}(\mathcal{B}_D) = \mathcal{C}(\mathcal{B}_D^{q^*})$, where $\mathcal{C}(\mathcal{B}_D^{q^*}) \subseteq \mathcal{C}(\mathcal{B}_D^q)$ for every $0\le q\le 1$. Indeed, looking at Figure 2, it appears that the regions $\mathcal{C}(\mathcal{B}_D^q)$, for $0\le q\le 1$, form a well ordered set, hence $\mathcal{C}(\mathcal{B}_D) = \mathcal{C}(\mathcal{B}_D^{q^*})$ with $q^* = 1$.
Next, we consider an example of an AVBC which is not degraded in the sense defined above.
Example 2. Consider a state-dependent binary symmetric broadcast channel (BSBC) with correlated noises,
$$Y_1 = X + Z_S \mod 2\,, \qquad Y_2 = X + N_S \mod 2\,,$$
where $X, Y_1, Y_2, S, Z_S, N_S$ are binary, with values in $\{0,1\}$. The additive noises are distributed according to
$$Z_s \sim \mathrm{Bernoulli}(\theta_s)\,, \quad N_s \sim \mathrm{Bernoulli}(\varepsilon_s)\,, \quad \text{for } s\in\{0,1\}\,,$$
where $S, Z_0, Z_1, N_0, N_1$ are independent random variables, with $\theta_0 \le \varepsilon_0 \le \tfrac{1}{2}$ and $\tfrac{1}{2} \le \varepsilon_1 \le \theta_1$.
Intuitively, this suggests that $Y_2$ is a weaker channel. Nevertheless, observe that this channel is not degraded in the sense defined in Section III-A (see (30)). For a given state $S = s$, the broadcast channel $W_{Y_1,Y_2|X,S}(\cdot,\cdot|\cdot,s)$ is stochastically degraded. In particular, one can define the following random variables,
$$A_s \sim \mathrm{Bernoulli}(\pi_s)\,, \quad \text{where } \pi_s \triangleq \frac{\varepsilon_s-\theta_s}{1-2\theta_s}\,, \tag{46}$$
$$\tilde{Y}_2 = Y_1 + A_S \mod 2\,. \tag{47}$$
Then, $\tilde{Y}_2$ is distributed according to $\Pr\big(\tilde{Y}_2 = y_2\,\big|\,X = x, S = s\big) = W_{Y_2|X,S}(y_2|x,s)$, and $X\,\text{--}\,(Y_1,S)\,\text{--}\,\tilde{Y}_2$ form a Markov chain. However, since $X$ and $A_S$ depend on the state, it is not necessarily true that $(X,S)\,\text{--}\,Y_1\,\text{--}\,\tilde{Y}_2$ form a Markov chain, and the BSBC with correlated noises could be non-degraded.
We have the following results.
Random Parameter BSBC with Correlated Noises
First, we consider the random parameter BSBC B q , with a memoryless state S ∼ Bernoulli(q), for a given 0 ≤ q ≤ 1.
Define the binary entropy function h(x) = −x log x − (1 − x) log(1 − x), for x ∈ [0, 1], with logarithm to base 2. We show
that the capacity region of the random parameter BSBC B q with degraded message sets with causal SI is given by
$$\mathcal{C}(\mathcal{B}^q) = \mathsf{C}(\mathcal{B}^q) = \bigcup_{0\le\beta\le 1} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le 1 - h\big(\beta*\delta_q^{(2)}\big)\,, \\ R_1 \le h\big(\beta*\delta_q^{(1)}\big) - h\big(\delta_q^{(1)}\big) \end{array} \right\}, \tag{48}$$
where
$$\delta_q^{(1)} = (1-q)\theta_0 + q(1-\theta_1)\,, \qquad \delta_q^{(2)} = (1-q)\varepsilon_0 + q(1-\varepsilon_1)\,. \tag{49}$$
Fig. 3. The capacity region of the AVBC in Example 2, the arbitrarily varying binary symmetric broadcast channel with correlated noises, with parameters that correspond to case 1. The area under the thick blue line is the capacity region of the AVBC $\mathcal{B}$ with causal SI, with $\theta_0 = 0.12$, $\theta_1 = 0.85$, $\varepsilon_0 = 0.18$ and $\varepsilon_1 = 0.78$. The curves depict $\mathcal{C}(\mathcal{B}^q)$ for $q = 0, 1/3, 2/3, 1$, where the capacity region of $\mathcal{B}$ is given by $\mathcal{C}(\mathcal{B}) = \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}) = \mathcal{C}(\mathcal{B}^q)$ for $q = 1$ (see (48)).
The proof is given in Appendix I-A. It can be seen in the achievability proof that the parameter β is related to the distribution
of U1 , and thus the RHS of (48) can be thought of as a union over Shannon strategies.
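The region in (48)–(49) can likewise be tabulated by sweeping β for each q. The following Python sketch uses the case 1 parameters of Figure 3 (θ0 = 0.12, θ1 = 0.85, ε0 = 0.18, ε1 = 0.78); it is only a numerical illustration of the formulas.

```python
import numpy as np

def h(x):
    """Binary entropy (base 2)."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

conv = lambda a, b: a * (1 - b) + (1 - a) * b   # binary convolution a * b

# Parameters of Figure 3 (case 1).
theta0, theta1, eps0, eps1 = 0.12, 0.85, 0.18, 0.78

def region_boundary(q, betas=np.linspace(0, 1, 201)):
    """Corner points (R0, R1) of the region in (48), for a memoryless state S ~ Bernoulli(q)."""
    d1 = (1 - q) * theta0 + q * (1 - theta1)    # delta_q^(1) in (49)
    d2 = (1 - q) * eps0 + q * (1 - eps1)        # delta_q^(2) in (49)
    R0 = 1 - h(conv(betas, d2))
    R1 = h(conv(betas, d1)) - h(d1)
    return R0, R1

for q in (0.0, 1/3, 2/3, 1.0):
    R0, R1 = region_boundary(q)
    print(f"q={q:.2f}: max R0 = {R0.max():.4f}, max R1 = {R1.max():.4f}")
```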
Arbitrarily Varying BSBC with Correlated Noises
We move to the arbitrarily varying BSBC with correlated noises. As shown in Appendix I-B, the capacity region of the
arbitrarily varying BSBC B0 with degraded message sets without SI is given by C(B0 ) = {(0, 0)}. For the setting where causal
SI is available at the encoder, we consider two cases.
Case 1: Suppose that $\theta_0 \le 1-\theta_1 \le \varepsilon_0 \le 1-\varepsilon_1 \le \tfrac{1}{2}$. That is, $S = 1$ is a noisier channel state than $S = 0$, for both users. The capacity region of the arbitrarily varying BSBC $\mathcal{B}$ with degraded message sets with causal SI is given by
$$\mathcal{C}(\mathcal{B}) = \mathcal{C}(\mathcal{B}^{q})\Big|_{q=1} = \bigcup_{0\le\beta\le 1} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le 1 - h(\beta*\varepsilon_1)\,, \\ R_1 \le h(\beta*\theta_1) - h(\theta_1) \end{array} \right\}. \tag{50}$$
It is shown in Appendix I-B that Condition $T$ holds and $\mathcal{C}(\mathcal{B}) = \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}) = \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B})$. Figure 3 provides a graphical interpretation. The analysis shows that Condition $T$ implies that there exists $0\le q^*\le 1$ such that $\mathcal{C}(\mathcal{B}) = \mathcal{C}(\mathcal{B}^{q^*})$, where $\mathcal{C}(\mathcal{B}^{q^*}) \subseteq \mathcal{C}(\mathcal{B}^q)$ for every $0\le q\le 1$. Indeed, looking at Figure 3, it appears that the regions $\mathcal{C}(\mathcal{B}^q)$, for $0\le q\le 1$, form a well ordered set, hence $\mathcal{C}(\mathcal{B}) = \mathcal{C}(\mathcal{B}^{q^*})$ with $q^* = 1$.
Case 2: Suppose that $\theta_0 \le 1-\theta_1 \le 1-\varepsilon_1 \le \varepsilon_0 \le \tfrac{1}{2}$. That is, $S = 1$ is a noisier channel state for user 1, whereas $S = 0$ is noisier for user 2. The capacity region of the arbitrarily varying BSBC $\mathcal{B}$ with degraded message sets with causal SI is bounded by
$$\mathcal{C}(\mathcal{B}) \supseteq \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B}) = \bigcup_{0\le\beta\le 1} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le 1 - h(\beta*\varepsilon_0)\,, \\ R_1 \le h(\beta*\theta_1) - h(\theta_1) \end{array} \right\}, \tag{51}$$
and
$$\mathcal{C}(\mathcal{B}) \subseteq \mathcal{R}^\star_{\mathrm{out}}(\mathcal{B}) \subseteq \mathcal{C}(\mathcal{B}^{q=0}) \cap \mathcal{C}(\mathcal{B}^{q=1}) = \bigcup_{\substack{0\le\beta_0\le 1,\\ 0\le\beta_1\le 1}} \left\{ (R_0,R_1) \,:\, \begin{array}{l} R_0 \le 1 - h(\beta_0*\varepsilon_0)\,, \\ R_0 \le 1 - h(\beta_1*\varepsilon_1)\,, \\ R_1 \le h(\beta_0*\theta_0) - h(\theta_0)\,, \\ R_1 \le h(\beta_1*\theta_1) - h(\theta_1) \end{array} \right\}. \tag{52}$$
Fig. 4. The inner and outer bounds on the capacity region of the AVBC in Example 2, the arbitrarily varying binary symmetric broadcast channel with correlated noises, with parameters that correspond to case 2, namely, $\theta_0 = 0.12$, $\theta_1 = 0.85$, $\varepsilon_0 = 0.22$ and $\varepsilon_1 = 0.88$.
(a) The dashed and dotted lines depict the boundaries of $\mathcal{C}(\mathcal{B}^{q=0})$ and $\mathcal{C}(\mathcal{B}^{q=1})$, respectively. The colored lines depict $\mathcal{C}(\mathcal{B}^q)$ for a range of values of $0 < q < 1$.
(b) The area under the thick blue line is the inner bound $\mathcal{R}^\star_{\mathrm{in}}(\mathcal{B})$, and the area under the thin line is the outer bound $\mathcal{R}^\star_{\mathrm{out}}(\mathcal{B})$.
The analysis is given in Appendix I. Figure 4 provides a graphical interpretation. The dashed and dotted lines in Figure 4(a)
depict the boundaries of C(B q=0 ) and C(B q=1 ), respectively. The colored lines depict C(B q ) for a range of values of 0 <
q < 1. It appears that R⋆out (B) = ∩0≤q≤1 C(B q ) reduces to the intersection of the regions C(B q=0 ) and C(B q=1 ). Figure 4(b)
demonstrates the gap between the bounds in case 2.
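To see the gap numerically, one can sample the corner points of the inner bound (51), which couples both constraints through a single β, and of the outer bound (52), which allows independent β0 and β1, and compare the largest R1 compatible with a given R0. The Python sketch below does this on a grid with the case 2 parameters of Figure 4; the grid resolution and the reported cross-sections are arbitrary choices.

```python
import numpy as np

def h(x):
    """Binary entropy (base 2)."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

conv = lambda a, b: a * (1 - b) + (1 - a) * b   # binary convolution a * b

# Parameters of Figure 4 (case 2).
theta0, theta1, eps0, eps1 = 0.12, 0.85, 0.22, 0.88
betas = np.linspace(0, 1, 201)

# Inner bound (51): a single beta controls both rate constraints.
inner = [(1 - h(conv(b, eps0)), h(conv(b, theta1)) - h(theta1)) for b in betas]

# Outer bound (52): independent beta0, beta1; each rate is the minimum of two constraints.
outer = [(min(1 - h(conv(b0, eps0)), 1 - h(conv(b1, eps1))),
          min(h(conv(b0, theta0)) - h(theta0), h(conv(b1, theta1)) - h(theta1)))
         for b0 in betas for b1 in betas]

def best_R1(points, r0):
    """Largest R1 among the sampled corner points whose R0 coordinate is at least r0."""
    vals = [R1 for (R0, R1) in points if R0 >= r0]
    return max(vals) if vals else 0.0

for r0 in (0.0, 0.05, 0.10, 0.15):
    print(f"R0 >= {r0:.2f}:  inner R1 <= {best_R1(inner, r0):.4f},"
          f"  outer R1 <= {best_R1(outer, r0):.4f}")
```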
APPENDIX A
PROOF OF LEMMA 4
We show that every rate pair (R0 , R1 ) ∈ Rin (B Q ) can be achieved using deterministic codes over the compound broadcast
channel B Q with causal SI. We construct a code based on superposition coding with Shannon strategies, and decode using
joint typicality with respect to a channel state type, which is “close” to some q ∈ Q.
We use the following notation. Basic method of types concepts are defined as in [7, Chapter 2], including the definition of a type $\hat{P}_{x^n}$ of a sequence $x^n$; a joint type $\hat{P}_{x^n,y^n}$ and a conditional type $\hat{P}_{x^n|y^n}$ of a pair of sequences $(x^n,y^n)$; and a $\delta$-typical set $A^{\delta}(P_{X,Y})$ with respect to a distribution $P_{X,Y}(x,y)$. Define a set $\hat{\mathcal{Q}}_n$ of state types,
$$\hat{\mathcal{Q}}_n = \big\{ \hat{P}_{s^n} \,:\, s^n\in A^{\delta_1}(q)\,,\ \text{for some } q\in\mathcal{Q} \big\}\,, \tag{53}$$
where
$$\delta_1 \triangleq \frac{\delta}{2\cdot|\mathcal{S}|}\,, \tag{54}$$
and $\delta > 0$ is arbitrarily small. That is, $\hat{\mathcal{Q}}_n$ is the set of types that are $\delta_1$-close to some state distribution $q(s)$ in $\mathcal{Q}$. Note that for any fixed $\delta$ (or $\delta_1$), for a sufficiently large $n$, the set $\hat{\mathcal{Q}}_n$ covers the set $\mathcal{Q}$, and it is in fact a $\delta_1$-blowup of $\mathcal{Q}$. Now, a code for the compound broadcast channel with causal SI is constructed as follows.
Codebook Generation: Fix the distribution $P_{U_0,U_1}$ and the function $\xi(u_0,u_1,s)$. Generate $2^{nR_0}$ independent sequences at random,
$$u_0^n(m_0) \sim \prod_{i=1}^n P_{U_0}(u_{0,i})\,, \quad \text{for } m_0\in[1:2^{nR_0}]\,. \tag{55}$$
For every $m_0\in[1:2^{nR_0}]$, generate $2^{nR_1}$ sequences at random,
$$u_1^n(m_0,m_1) \sim \prod_{i=1}^n P_{U_1|U_0}\big(u_{1,i}\,\big|\,u_{0,i}(m_0)\big)\,, \quad \text{for } m_1\in[1:2^{nR_1}]\,, \tag{56}$$
conditionally independent given $u_0^n(m_0)$.
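A small simulation of the codebook generation in (55)–(56) may help fix ideas: cloud centers are drawn i.i.d. from $P_{U_0}$, and each satellite sequence is drawn conditionally i.i.d. given its cloud center. The alphabet sizes, message counts, and distributions in the Python sketch below are hypothetical toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small example: binary strategy alphabets U0 = U1 = {0, 1}.
n = 10
M0, M1 = 4, 4                           # stand-ins for 2^{nR0}, 2^{nR1}, kept tiny
P_U0 = np.array([0.5, 0.5])             # P_{U0}
P_U1_given_U0 = np.array([[0.8, 0.2],   # P_{U1|U0}(. | u0 = 0)
                          [0.3, 0.7]])  # P_{U1|U0}(. | u0 = 1)

# (55): cloud centers u_0^n(m0), i.i.d. ~ P_{U0}.
u0 = rng.choice(2, size=(M0, n), p=P_U0)

# (56): satellites u_1^n(m0, m1), drawn conditionally i.i.d. given u_0^n(m0).
u1 = np.empty((M0, M1, n), dtype=int)
for m0 in range(M0):
    for m1 in range(M1):
        for i in range(n):
            u1[m0, m1, i] = rng.choice(2, p=P_U1_given_U0[u0[m0, i]])

print(u0[0], u1[0, 0], sep="\n")
```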
Encoding: To send a pair of messages $(m_0,m_1)\in[1:2^{nR_0}]\times[1:2^{nR_1}]$, transmit at time $i\in[1:n]$,
$$x_i = \xi\big(u_{0,i}(m_0), u_{1,i}(m_0,m_1), s_i\big)\,. \tag{57}$$
Decoding: Let
$$P^{q}_{U_0,U_1,Y_1,Y_2}(u_0,u_1,y_1,y_2) = \sum_{s\in\mathcal{S}} q(s)\,P_{U_0,U_1}(u_0,u_1)\,W_{Y_1,Y_2|X,S}\big(y_1,y_2\,\big|\,\xi(u_0,u_1,s),s\big)\,. \tag{58}$$
Observing $y_2^n$, decoder 2 finds a unique $\tilde{m}_0\in[1:2^{nR_0}]$ such that
$$\big(u_0^n(\tilde{m}_0), y_2^n\big) \in A^{\delta}\big(P_{U_0}P^{q}_{Y_2|U_0}\big)\,, \quad \text{for some } q\in\hat{\mathcal{Q}}_n\,. \tag{59}$$
If there is none, or more than one such $\tilde{m}_0\in[1:2^{nR_0}]$, then decoder 2 declares an error.
Observing $y_1^n$, decoder 1 finds a unique pair of messages $(\hat{m}_0,\hat{m}_1)\in[1:2^{nR_0}]\times[1:2^{nR_1}]$ such that
$$\big(u_0^n(\hat{m}_0), u_1^n(\hat{m}_0,\hat{m}_1), y_1^n\big) \in A^{\delta}\big(P_{U_0,U_1}P^{q}_{Y_1|U_0,U_1}\big)\,, \quad \text{for some } q\in\hat{\mathcal{Q}}_n\,. \tag{60}$$
If there is none, or more than one such pair $(\hat{m}_0,\hat{m}_1)$, then decoder 1 declares an error. We note that using the set of types $\hat{\mathcal{Q}}_n$ instead of the original set of state distributions $\mathcal{Q}$ simplifies the analysis, since $\mathcal{Q}$ is not necessarily finite or countable.
Analysis of Probability of Error: Assume without loss of generality that the users sent the message pair (M0 , M1 ) = (1, 1).
Let $q(s)\in\mathcal{Q}$ denote the actual state distribution chosen by the jammer. By the union of events bound,
$$P_e^{(n)}(q,\mathcal{C}) \le \Pr\big(\tilde{M}_0 \neq 1\big) + \Pr\big((\hat{M}_0,\hat{M}_1)\neq(1,1)\big)\,, \tag{61}$$
where the conditioning on $(M_0,M_1) = (1,1)$ is omitted for convenience of notation. The error event for decoder 2 is the union of the following events,
$$\mathcal{E}_{2,1} = \big\{ (U_0^n(1), Y_2^n)\notin A^{\delta}\big(P_{U_0}P^{q'}_{Y_2|U_0}\big) \ \text{for all } q'\in\hat{\mathcal{Q}}_n \big\}\,, \tag{62}$$
$$\mathcal{E}_{2,2} = \big\{ (U_0^n(m_0), Y_2^n)\in A^{\delta}\big(P_{U_0}P^{q'}_{Y_2|U_0}\big) \ \text{for some } m_0\neq 1,\ q'\in\hat{\mathcal{Q}}_n \big\}\,. \tag{63}$$
Then, by the union of events bound,
$$\Pr\big(\tilde{M}_0 \neq 1\big) \le \Pr(\mathcal{E}_{2,1}) + \Pr(\mathcal{E}_{2,2})\,. \tag{64}$$
Considering the first term, we claim that the event $\mathcal{E}_{2,1}$ implies that $(U_0^n(1),Y_2^n)\notin A^{\delta/2}\big(P_{U_0}P^{q''}_{Y_2|U_0}\big)$ for all $q''\in\mathcal{Q}$. Assume to the contrary that $\mathcal{E}_{2,1}$ holds, but there exists $q''\in\mathcal{Q}$ such that $(U_0^n(1),Y_2^n)\in A^{\delta/2}\big(P_{U_0}P^{q''}_{Y_2|U_0}\big)$. Then, for a sufficiently large $n$, there exists a type $q'(s)$ such that $|q'(s)-q''(s)|\le\delta_1$ for all $s\in\mathcal{S}$. It can then be inferred that $q'\in\hat{\mathcal{Q}}_n$ (see (53)), and
$$\big|P^{q'}_{Y_2|U_0}(y_2|u_0) - P^{q''}_{Y_2|U_0}(y_2|u_0)\big| \le |\mathcal{S}|\cdot\delta_1 = \frac{\delta}{2}\,, \tag{65}$$
for all $u_0\in\mathcal{U}_0$ and $y_2\in\mathcal{Y}_2$ (see (54) and (58)). Hence, $(U_0^n(1),Y_2^n)\in A^{\delta}\big(P_{U_0}P^{q'}_{Y_2|U_0}\big)$, which contradicts the first assumption. Thus,
$$\Pr(\mathcal{E}_{2,1}) \le \Pr\Big( (U_0^n(1),Y_2^n)\notin A^{\delta/2}\big(P_{U_0}P^{q''}_{Y_2|U_0}\big) \ \text{for all } q''\in\mathcal{Q} \Big) \le \Pr\Big( (U_0^n(1),Y_2^n)\notin A^{\delta/2}\big(P_{U_0}P^{q}_{Y_2|U_0}\big) \Big)\,. \tag{66}$$
The last expression tends to zero exponentially as $n\to\infty$ by the law of large numbers and Chernoff's bound.
Moving to the second term in the RHS of (64), we use the classic method of types considerations to bound Pr (E2,2 ). By
the union of events bound and the fact that the number of type classes in S n is bounded by (n + 1)|S| , we have that
Pr (E2,2 )
′
≤(n + 1)|S| · sup Pr (U0n (m0 ), Y2n ) ∈ Aδ (PU0 PYq2 |U0 ) for some m0 6= 1 .
(67)
q′ ∈Q̂n
For every m0 6= 1,
X
′
′
Pr (U0n (m0 ), Y2n ) ∈ Aδ (PU0 PYq2 |U0 ) =
PU0n (un0 ) · Pr (un0 , Y2n ) ∈ Aδ (PU0 PYq2 |U0 )
n
un
0 ∈U0
=
X
X
PU0n (un0 ) ·
n
un
0 ∈U0
PYq n (y2n ) ,
q
n
δ
y2n : (un
0 ,y2 )∈A (PU0 PY
′
2 |U0
(68)
2
)
′
where the last equality holds since U0n (m0 ) is independent of Y2n for every m0 6= 1. Let (un0 , y2n ) ∈ Aδ (PU0 PYq2 |U0 ). Then,
′
y2n ∈ Aδ2 (PYq2 ) with δ2 , |U0 | · δ. By Lemmas 2.6 and 2.7 in [7],
PYq n (y2n ) = 2
2
−n H(P̂yn )+D(P̂yn ||PYq )
2
2
2
≤2
−nH(P̂yn )
2
≤ 2−n(Hq′ (Y2 )−ε1 (δ)) ,
(69)
where ε1 (δ) → 0 as δ → 0. Therefore, by (67)−(69),
Pr (E2,2 )
≤(n + 1)|S|
· sup 2nR0 ·
q′ ∈Q̂n
|S|
≤(n + 1)
X
n
un
0 ∈U0
′
PU0n (un0 ) · |{y2n : (un0 , y2n ) ∈ Aδ (PU0 PYq2 |U0 )}| · 2−n(Hq′ (Y2 )−ε1 (δ))
· sup 2−n[Iq′ (U0 ;Y2 )−R0 −ε2 (δ)] ,
(70)
q′ ∈Q
with ε2 (δ) → 0 as δ → 0, where the last inequality is due to [7, Lemma 2.13]. The RHS of (70) tends to zero exponentially
as n → ∞, provided that R0 < inf q′ ∈Q Iq′ (U0 ; Y2 ) − ε2 (δ).
Now, consider the error event of decoder 1. For every (m0 , m1 ) ∈ [1 : 2nR0 ] × [1 : 2nR1 ], define the event
′
E1,1 (m0 , m1 ) = {(U0n (m0 ), U1n (m0 , m1 ), Y1n ) ∈ Aδ (PU0 ,U1 PYq1 |U0 ,U1 ) , for some q ′ ∈ Q̂n } .
Then, the error event is bounded by
n
o
[
(M̂0 , M̂1 ) 6= (1, 1) ⊆ E1,1 (1, 1)c ∪
E1,1 (m1 , 1) ∪
m1 ∈[1:2nR1 ] ,
m0 6=1
Thus, by the union of events bound,
Pr (M̂0 , M̂1 ) 6= (1, 1)
X
≤ Pr (E1,1 (1, 1)c ) +
X
m1 6=1
Pr (E1,1 (m0 , m1 )) +
≤2−θn + 2
q′ ∈Q
E1,1 (m0 , m1 ) .
(72)
Pr (E1,1 (m1 , 1))
m1 6=1
nR1
m1 ∈[1:2
],
m0 6=1
−n inf Iq′ (U0 ,U1 ;Y1 )−R0 −R1 −ε3 (δ)
[
(71)
+
X
Pr (E1,1 (m1 , 1)) ,
(73)
m1 6=1
where the last inequality follows from the law of large numbers and type class considerations used before, with ε3 (δ) →
0 as δ → 0. The middle term in the RHS of (73) exponentially tends to zero as n → ∞ provided that R0 + R1 <
inf Iq′ (U0 , U1 ; Y1 ) − ε3 (δ). It remains for us to bound the last sum. Using similar type class considerations, we have that for
q′ ∈Q
every q ′ ∈ Q̂n and m1 6= 1,
′
Pr (U0n (1), U1n (m1 , 1), Y1n ) ∈ Aδ (PU0 ,U1 PYq1 |U0 ,U1 )
X
PU0n (un0 ) · PU1n |U0n (un1 |un0 ) · PYq n |U n (y1n |un0 )
=
q
n n
δ
(un
0 ,u1 ,y1 )∈A (PU0 ,U1 PY
1
′
1 |U0 ,U1
0
)
≤2n(Hq′ (U0 ,U1 ,Y1 )+ε4 (δ)) · 2−n(H(U0 )−ε4 (δ)) · 2−n(H(U1 |U0 )−ε4 (δ)) · 2−n(Hq′ (Y1 |U0 )−ε4 (δ))
=2−n(Iq′ (U1 ;Y1 |U0 )−4ε4 (δ)) ,
(74)
where ε4 (δ) → 0 as δ → 0. Therefore, the sum term in the RHS of (73) is bounded by
X
Pr (E1,1 (m1 , 1))
m1 6=1
=
X
m1 6=1
′
Pr (U0n (1), U1n (m1 , 1), Y1n ) ∈ Aδ (PU0 ,U1 PYq1 |U0 ,U1 ) , for some q ′ ∈ Q̂n }
≤(n + 1)|S| · 2
−n inf Iq′ (U1 ;Y1 |U0 )−R1 −ε5 (δ)
q′ ∈Q
,
(75)
where the last line follows from (74), and ε5 (δ) → 0 as δ → 0. The last expression tends to zero exponentially as n → ∞
and δ → 0 provided that R1 < inf q′ ∈Q Iq′ (U1 ; Y1 |U0 ) − ε5 (δ).
The probability of error, averaged over the class of the codebooks, exponentially decays to zero as n → ∞. Therefore, there
must exist a (2nR0 , 2nR1 , n, e−an ) deterministic code, for a sufficiently large n.
APPENDIX B
PROOF OF THEOREM 5
PART 1
At the first part of the theorem it is assumed that the interior of the capacity region is non-empty, i.e. int C(B Q ) 6= ∅.
Achievability proof. We show that every rate pair (R0 , R1 ) ∈ Rout (B Q ) can be achieved using a code based on Shannon
strategies with the addition of a codeword suffix. At time i = n + 1, having completed the transmission of the messages, the
type of the state sequence sn is known to the encoder. Following the assumption that the interior of the capacity region is
non-empty, the type of sn can be reliably communicated to both receivers as a suffix, while the blocklength is increased by
ν > 0 additional channel uses, where ν is small compared to n. The receivers first estimate the type of sn , and then use joint
typicality with respect to the estimated type.The details are provided below.
Following the assumption that int C(B Q ) 6= ∅, we have that for every ε1 > 0 and sufficiently large blocklength ν, there
e
e
e0 > 0 and R
e1 > 0.
g1 , e
g2 ) for the transmission of a type P̂sn at positive rates R
exists a (2ν R0 , 2ν R1 , ν, ε1 ) code Ce = (feν , e
Since the total number of types is polynomial in n (see [7]), the type P̂sn can be transmitted at a negligible rate, with a
blocklength that grows a lot slower than n, i.e.
ν = o(n) .
(76)
We now construct a code C over the compound broadcast channel with causal SI, such that the blocklength is n + o(n), and
the rate Rn′ approaches R as n → ∞.
nR0
independent sequences
Codebook Generation: Fix the distribution PU0 ,U1 and
Qn the function ξ(u0 , u1 , s). GeneratenR20
n
nR0
], generate 2nR1 sequences
], at random, each according to i=1 PU0 (u0,i ). For every m0 ∈ [1 : 2
u0 (m0 ), m0 ∈ [1 : 2
at random,
un1 (m0 , m1 ) ∼
n
Y
PU1 |U0 (u1,i |u0,i (m0 )) , for m1 ∈ [1 : 2nR1 ] ,
(77)
i=1
conditionally independent given un0 (m0 ). Reveal the codebook of the message pair (m0 , m1 ) and the codebook of the type
P̂sn to the encoder and the decoders.
Encoding: To send a message pair (m0 , m1 ) ∈ [1 : 2nR0 ] × [1 : 2nR1 ], transmit at time i ∈ [1 : n],
xi = ξ (u0,i (m0 ), u1,i (m0 , m1 ), si ) .
(78)
At time i ∈ [n + 1 : n + ν], knowing the sequence of previous states sn , transmit
xi = fei (P̂sn , sn+1 , . . . , sn+i ) ,
(79)
where P̂sn is the type of the sequence (s1 , . . . , sn ). That is, the encoded type P̂sn is transmitted as a suffix of the codeword.
We note that the type of the
P̂sn , and it is irrelevant for that matter, since the
sequence (sn+1 , . . . , sn+i ) is notν Rnecessarily
e1
e0
νR
Q
g2 ) for the transmission of
assumption that int C(B ) 6= ∅ implies that there exists a (2 , 2 , ν, ε1 ) code Ce = (feν , ge1 , e
e0 > 0 and R
e1 > 0.
P̂sn , with R
Decoding: Let
X
PUq 0 ,U1 ,Y1 ,Y2 (u0 , u1 , y1 , y2 ) =
(80)
q(s)PU0 ,U1 (u0 , u1 )WY1 ,Y2 |X,S (y1 , y2 |ξ(u0 , u1 , s), s) .
s∈S
y2n+ν .
Decoder 2 receives the output sequence
As a pre-decoding step, the receiver decodes the last ν output symbols, and
finds an estimate of the type of the state sequence, qb2 = e
g2 (y2,n+1 , . . . , y2,n+ν ). Then, given the output sequence y2n , decoder
nR0
] such that
2 finds a unique m
e 0 ∈ [1 : 2
(un0 (m
e 0 ), y2n ) ∈ Aδ (PU0 PYqb22|U0 ) .
(81)
(un0 (m̂0 ), un1 (m̂0 , m̂1 ), y1n ) ∈ Aδ (PU0 ,U1 PYqb11|U0 ,U1 ) .
(82)
If there is none, or more than one such m
e 0 ∈ [1 : 2nR0 ], then decoder 2 declares an error.
n+ν
Similarly, decoder 1 receives y1
and begins with decoding the type of the state sequence, qb1 = e
g1 (y1,n+1 , . . . , y1,n+ν ).
Then, decoder 1 finds a unique pair of messages (m̂0 , m̂1 ) ∈ [1 : 2nR0 ] × [1 : 2nR1 ] such that
If there is none, or more than one such pair (m̂0 , m̂1 ) ∈ [1 : 2nR0 ] × [1 : 2nR1 ], then decoder 1 declares an error.
Analysis of Probability of Error: By symmetry, we may assume without loss of generality that
Qn the users sent (M0 , M1 ) =
(1, 1). Let q(s) ∈ Q denote the actual state distribution chosen by the jammer, and let q(sn ) = i=1 q(si ). Then, by the union
of events bound, the probability of error is bounded by
f0 6= 1 + Pr (M̂0 , M̂1 ) 6= (1, 1) ,
Pe(n) (q, C ) ≤ Pr M
(83)
where the conditioning on (M0 , M1 ) = (1, 1) is omitted for convenience of notation.
Define the events
E1,0 = {b
q1 6= P̂S n }
′
E1,1 (m0 , m1 , q ) =
(84)
{(U0n (m0 ), U1n (m0 , m1 ), Y1n )
δ
∈A
′
(PU0 ,U1 PYq1 |U0 ,U1 )}
(85)
and
E2,0 = {b
q2 6= P̂S n }
′
E2,1 (m0 , q ) =
(86)
{(U0n (m0 ), Y2n )
δ
∈A
′
(PU0 PYq2 |U0 )} ,
(87)
for every m0 ∈ [1 : 2nR0 ], m1 ∈ [1 : 2nR1 ], and q ′ ∈ P(S). The error event of decoder 2 is bounded by
o
n
[
f2 6= 1 ⊆ E2,0 ∪ E2,1 (1, qb2 )c ∪
M
E2,1 (m0 , qb2 )
m0 6=1
= E2,0
[
c
c
∪ E2,0
∩ E2,1 (1, qb2 )c ∪
E2,0
∩ E2,1 (m0 , qb2 ) .
m0 6=1
By the union of events bound,
f2 6= 1
Pr M
[
c
c
≤ Pr (E2,0 ) + Pr E2,0
∩ E2,1 (1, qb2 )c + Pr
E2,0
∩ E2,1 (m0 , qb2 ) .
(88)
Pr (E1,0 ∪ E2,0 ) ≤ ε1 .
(89)
m0 6=1
e
e
Since the code Ce for the transmission of the type is a (2ν R0 , 2ν R1 , ν, ε1 ) code, where ε1 > 0 is arbitrarily small, we have
that the probability of erroneous decoding of the type is bounded by
Thus, the first term in the RHS of (88) is bounded by ε1 . Then, we manipulate the last two terms as follows.
X
c
f2 6= 1 ≤
q(sn ) Pr E2,0
∩ E2,1 (1, qb2 )c | S n = sn
Pr M
sn ∈Aδ2 (q)
X
+
sn ∈A
/ δ2 (q)
X
+
sn ∈Aδ2 (q)
+
X
sn ∈A
/ δ2 (q)
where
c
q(sn ) Pr E2,0
∩ E2,1 (1, qb2 )c | S n = sn
q(sn ) Pr
[
m0 6=1
q(sn ) Pr
[
m0 6=1
δ2 ,
c
E2,0
∩ E2,1 (m0 , qb2 ) | S n = sn
c
E2,0
∩ E2,1 (m0 , qb2 ) | S n = sn + ε1 ,
1
·δ.
2|S|
(90)
(91)
Next we show that the first and the third sums in (90) tend to zero as n → ∞.
Consider a given sn ∈ Aδ2 (q). For notational convenience, denote
q ′′ = P̂sn .
(92)
Then, by the definition of the δ-typical set, we have that |q ′′ (s) − q(s)| ≤ δ2 for all s ∈ S. It follows that
′′
|PU0 (u0 )PYq2 |U0 (y|u0 ) − PU0 (u0 )PYq2 |U0 (y2 |u0 )|
X
δ
PU1 |U0 (u1 |u0 )WY2 |X,S (y2 |ξ(u0 , u1 , s), s) ≤ δ2 · |S| = ,
≤δ2 ·
2
s,u
(93)
1
for all u0 ∈ U0 and y2 ∈ Y2 , where the last equality follows from (91).
Consider the first sum in the RHS of (90). Given a state sequence sn ∈ Aδ2 (q), we have that
c
c
∩ E2,1 (1, P̂sn )c | S n = sn
Pr E2,0
∩ E2,1 (1, qb2 )c | S n = sn = Pr E2,0
c
= Pr E2,0
∩ E2,1 (1, q ′′ )c | S n = sn
c
= Pr E2,0
| E2,1 (1, q ′′ )c , S n = sn · Pr (E2,1 (1, q ′′ )c ) | S n = sn ) ,
where the first equality follows from (86), and the second equality follows from (92). Then,
c
Pr E2,0
∩ E2,1 (1, qb2 )c | S n = sn ≤ Pr (E2,1 (1, q ′′ )c | S n = sn )
′′
= Pr (U0n (1), Y2n ) ∈
/ Aδ (PU0 PYq2 |U0 ) S n = sn .
(94)
(95)
Now, suppose that (U0n (1), Y2n ) ∈ Aδ/2 (PU0 PYq2 |U0 ), where q is the actual state distribution. By (93), in this case we have that
′′
(U0n (1), Y2n ) ∈ Aδ (PU0 PYq2 |U0 ). Hence, (95) implies that
δ
c
/ A /2 (PU0 PYq2 |U0 ) S n = sn .
Pr E2,0
∩ E2,1 (1, qb2 )c | S n = sn ≤ Pr (U0n (1), Y2n ) ∈
(96)
The first sum in the RHS of (90) is then bounded as follows.
X
c
q(sn ) Pr E2,0
∩ E2,1 (1, qb2 )c | S n = sn
sn ∈Aδ2 (q)
≤
X
sn ∈Aδ2 (q)
≤
X
sn ∈S n
δ
q(sn ) Pr (U0n (1), Y2n ) ∈
/ A /2 (PU0 PYq2 |U0 ) S n = sn
δ
q(sn ) Pr (U0n (1), Y2n ) ∈
/ A /2 (PU0 PYq2 |U0 ) S n = sn
δ
/ A /2 (PU0 PYq2 |U0 ) ≤ ε2 ,
= Pr (U0n (1), Y2n ) ∈
for a sufficiently large n, where the last inequality follows from the law of large numbers.
(97)
′′
We bound the third sum in the RHS of (90) using similar arguments. If (U0n (m0 ), Y2n ) ∈ Aδ (PU0 PYq2 |U0 ), then (U0n (m0 ), Y2n ) ∈
3δ
A /2 (PU0 PYq2 |U0 ), due to (93). Thus, for every sn ∈ Aδ2 (q),
[
X
c
Pr
E2,0
∩ E2,1 (m0 , qb2 ) | S n = sn ≤
Pr (E2,1 (m0 , q ′′ ) | S n = sn )
m0 6=1
m0 6=1
=
X
m0 6=1
≤
X
m0 6=1
′′
Pr (U0n (m0 ), Y2n ) ∈ Aδ (PU0 PYq2 |U0 ) S n = sn
3δ
Pr (U0n (m0 ), Y2n ) ∈ A /2 (PU0 PYq2 |U0 ) S n = sn .
(98)
This, in turn, implies that the third sum in the RHS of (90) is bounded by
X
[
c
q(sn ) Pr
E2,0
∩ E2,1 (m0 , qb ) | S n = sn
sn ∈Aδ2 (q)
≤
X
X
sn ∈S n m0 6=1
=
X
m0 6=1
m0 6=1
3δ
q(sn ) · Pr (U0n (m0 ), Y2n ) ∈ A /2 (PU0 PYq2 |U0 ) S n = sn
3δ
Pr (U0n (m0 ), Y2n ) ∈ A /2 (PU0 PYq2 |U0 )
≤2−n[Iq (U0 ;Y2 )−R0 −ε2 (δ)] ,
(99)
with ε2 (δ) → 0 as δ → 0. The last inequality follows from standard type class considerations. The RHS of (99) tends to zero
as n → ∞, provided that R0 < Iq (U0 ; Y2 ) − ε2 (δ). Then, it follows from the law of large numbers that the second and fourth
sums
RHS of (90) tend to zero as n → ∞. Thus, by (97) and (99), we have that the probability of error of decoder 2,
in the
f
Pr M2 6= 1 , tends to zero as n → ∞.
Now, consider the error event of decoder 1,
n
o
[
[
E1,1 (m0 , m1 , qb1 ) ∪
E1,1 (1, m1 , qb1 ) .
(100)
(M̂0 , M̂1 ) 6= (1, 1) ⊆ E1,0 ∪ E1,1 (1, 1, qb1 )c ∪
m1 6=1
m0 6=1 ,
m1 ∈[1:2nR1 ]
Thus, by the union of events bound,
c
Pr (M̂0 , M̂1 ) 6= (1, 1) ≤ Pr (E1,0 ) + Pr E1,0
∩ E1,1 (1, 1, qb1 )c + Pr
+ Pr
[
m1 6=1
[
m0 6=1 ,
m1 ∈[1:2nR1 ]
c
E1,0
∩ E1,1 (m0 , m1 , qb1 )
c
E1,0
∩ E1,1 (m1 , 1, qb1 ) .
(101)
By (89), the first term is bounded by ε1 , and as done above, we write
X
c
Pr (M̂0 , M̂1 ) 6= (1, 1) ≤
q(sn ) Pr E1,0
∩ E1,1 (1, 1, P̂sn )c | S n = sn
sn ∈Aδ2 (q)
+
X
sn ∈Aδ2 (q)
+
X
sn ∈Aδ2 (q)
q(sn ) Pr
q(sn ) Pr
[
m1 ∈[1:2nR1 ]
m0 6=1
[
δ2
m1 6=1
c
E1,0
∩ E1,1 (m0 , m1 , P̂sn ) | S n = sn
c
E1,0
∩ E1,1 (m1 , 1, P̂sn ) | S n = sn
+ 3 · Pr S ∈
/ A (q) + ε1 ,
n
(102)
/ Aδ2 (q) tends to zero as n → ∞. As for
where δ2 is given by (91). By the law of large numbers, the probability Pr S n ∈
the sums, we use similar arguments to those used above.
We have that for a given sn ∈ Aδ2 (q),
′′
|PU0 ,U1 (u0 , u1 )PYq1 |U0 ,U1 (y1 |u0 , u1 ) − PU0 ,U1 (u0 , u1 )PYq1 |U0 ,U1 (y1 |u0 , u1 )|
X
δ
≤ δ2 ·
WY1 |X,S (y1 |ξ(u0 , u1 , s) ≤ |S| · δ2 = ,
2
(103)
s∈S
with q ′′ = P̂sn , where the last equality follows from (91).
The first sum in the RHS of (102) is bounded by
X
c
q(sn ) Pr E1,0
∩ E1,1 (1, 1, P̂sn )c | S n = sn
sn ∈Aδ2 (q)
≤
X
sn ∈S n
δ
q(sn ) Pr (U0n (1), U1n (1, 1), Y1n ) ∈
/ A /2 (PU0 ,U1 PYq1 |U0 ,U1 ) | S n = sn
δ
/ A /2 (PU0 ,U1 PYq1 |U0 ,U1 ) ≤ ε2 .
= Pr (U0n (1), U1n (1, 1), Y1n ) ∈
The last inequality follows from the law of large numbers, for a sufficiently large n.
The second sum in the RHS of (102) is bounded by
X
sn ∈Aδ2 (q)
q(sn ) Pr
[
m1 ∈[1:2nR1 ]
m0 6=1
−n(Iq (U0 ,U1 ;Y1 )−R0 −R1 −ε3 (δ)
c
,
E1,0
∩ E1,1 (m0 , m1 , P̂sn ) | S n = sn
≤2
(104)
(105)
with ε3 (δ) → 0 as n → ∞ and δ → 0. This is obtained following the same analysis as for decoder 2. Then, the second sum
tends to zero provided that R0 + R1 < Iq (U0 , U1 ; Y1 ) − ε3 (δ).
The third sum in the RHS of (102) is bounded by
[
X
X
X
c
q(sn ) Pr
E1,0
∩ E1,1 (m1 , 1, P̂sn ) | S n = sn ≤
q(sn ) Pr E1,1 (m1 , 1, P̂sn ) | S n = sn .
sn ∈Aδ2 (q)
m1 6=1
sn ∈Aδ2 (q) m1 6=1
(106)
For every sn ∈ Aδ2 (q), it follows from (103) that the event E1,1 (m1 , 1, P̂sn ) implies that (U0n (1), U1n (m1 , 1), Y1n ) ∈
A3δ/2 (PUq 0 ,U1 ,Y1 ). Thus, the sum is bounded by
X
[
c
q(sn ) Pr
E1,0
∩ E1,1 (m1 , 1, P̂sn ) | S n = sn ≤ 2−n(Iq (U1 ;Y1 |U0 )−R1 −δ3 ) ,
sn ∈Aδ2 (q)
(107)
m1 6=1
where δ3 → 0 as δ → 0. Then, the RHS of (107) tends to zero as n → ∞ provided that R1 < Iq (U1 ; Y1 |U0 ) − δ3 .
We conclude that the RHS of both (90) and (102) tend to zero as n → ∞. Thus, the overall probability of error, averaged
over the class of the codebooks, decays to zero as n → ∞. Therefore, there must exist a (2nR0 , 2nR1 , n, ε) deterministic code,
for a sufficiently large n.
Converse proof. First, we claim that it can be assumed that U0 U1 X form a Markov chain. Define the following region,
R0 ≤ Iq (U0 ; Y2 ) ,
(R0 , R1 ) :
\
[
R1 ≤ Iq (U1 ; Y1 |U0 )
RM,out (B Q ) =
,
(108)
R
+
R
≤
I
(U
,
U
;
Y
)
e 1 ,s)
q(s)∈Q p(u0 ,u1 ), ξ(u
0
1
q
0
1
1
e 1 , S). Clearly, RM,out (B Q ) ⊆ Rout (B Q ), since RM,out (B Q ) is obtained by restriction of the function
subject to X = ξ(U
ξ(u0 , u1 , s) in the union on the RHS of (15). Moreover, we have that RM,out (B Q ) ⊇ Rout (B Q ), since, given some U0 , U1
e1 = (U0 , U1 ), and then X is a deterministic function of (U
e1 , S).
and ξ(u0 , u1 , s), we can define a new strategy variable U
Q
Q
As Rout (B ) = RM,out (B ), it can now be assumed that U0 U1 X (Y1 , Y2 ) form a Markov chain, hence Iq (U0 , U1 ; Y1 ) =
Iq (U1 ; Y1 ). Then, by similar arguments to those used in [14] (see also [7, Chapter 16]), we have that
R0 ≤ Iq (U0 ; Y2 ) ,
(R0 , R1 ) :
\
[
R0 + R1 ≤ Iq (U1 ; Y1 |U0 ) + Iq (U0 ; Y2 )
Rout (B Q ) =
.
(109)
R
+
R
≤
I
(U
;
Y
)
e
q(s)∈Q p(u0 ,u1 ), ξ(u1 ,s)
0
1
q
1
1
We show that for every sequence of (2nR0 , 2nR1 , n, θn ) codes, with limn→∞ θn = 0, we have that (R0 , R1 ) belongs to the
set above.
Define the following random variables,
n
U0,i , (M0 , Y1i−1 , Y2,i+1
) , U1,i , (M0 , M1 , S i−1 ) .
(110)
It follows that Xi is a deterministic function of (U1,i , Si ), and since the state sequence is memoryless, we have that Si is
independent of (U0,i , U1,i ). Next, by Fano's inequality,
nR0 ≤ Iq (M0 ; Y2n ) + nεn ,
n(R0 + R1 ) ≤ Iq (M0 , M1 ; Y1n ) + nεn ,
(111)
(112)
n(R0 + R1 ) ≤ Iq (M1 ; Y1n |M0 ) + Iq (M0 ; Y2n ) + nεn ,
(113)
where εn → 0 as n → ∞. Applying the chain rule, we have that (111) is bounded by
Iq (M0 ; Y2n ) =
n
X
n
Iq (M0 ; Y2,i |Y2,i+1
)≤
Iq (M0 , M1 ; Y1n ) =
n
X
Iq (U0,i ; Y2,i ) ,
Iq (M0 , M1 ; Y1,i |Y1i−1 ) ≤
n
X
Iq (U0,i , U1,i ; Y1,i ) =
n
X
Iq (U1,i ; Y1,i ) ,
(115)
i=1
i=1
i=1
(114)
i=1
i=1
and (112) is bounded by
n
X
where the last equality holds since U0,i U1,i Y1,i form a Markov chain. As for (113), we have that
Iq (M1 ; Y1n |M0 ) + Iq (M0 , Y2n ) =
n
X
Iq (M1 ; Y1,i |M0 , Y1i−1 ) +
≤
n
Iq (M0 ; Y2,i |Y2,i+1
)
i=1
i=1
n
X
n
X
n
Iq (M1 , Y2,i+1
; Y1,i |M0 , Y1i−1 ) +
=
n
Iq (M1 ; Y1,i |M0 , Y1i−1 , Y2,i+1
)+
n
X
n
Iq (Y2,i+1
; Y1,i |M0 , Y1i−1 )
i=1
i=1
+
n
Iq (M0 , Y2,i+1
; Y2,i )
i=1
i=1
n
X
n
X
n
X
n
Iq (M0 , Y1i−1 , Y2,i+1
; Y2,i ) −
n
X
n
Iq (Y1i−1 ; Y2,i |M0 , Y2,i+1
).
(116)
i=1
i=1
Then, the second and fourth sums cancel out, by the Csiszár sum identity [9, Section 2.3]. Hence,
Iq (M1 ; Y1n |M0 ) + Iq (M0 ; Y2n ) ≤
n
X
n
Iq (M1 ; Y1,i |M0 , Y1i−1 , Y2,i+1
)+
≤
n
Iq (M0 , Y1i−1 , Y2,i+1
; Y2,i )
i=1
i=1
n
X
n
X
Iq (U1,i ; Y1,i |U0,i ) +
i=1
Thus, by (111)–(113) and (115)–(117), we have that
n
X
Iq (U0,i ; Y2,i ) .
(117)
i=1
n
R0 ≤
1X
Iq (U0,i ; Y2,i ) + εn ,
n i=1
(118)
n
R0 + R1 ≤
1X
Iq (U1,i ; Y1,i ) + εn ,
n i=1
n
R0 + R1 ≤
(119)
n
X
1X
Iq (U0,i ; Y2,i ) + εn .
Iq (U1,i ; Y1,i |U0,i ) +
n i=1
i=1
(120)
Introducing a time-sharing random variable K, uniformly distributed over [1 : n] and independent of (S n , U0n , U1n ), we have
that
R0 ≤Iq (U0,K ; Y2,K |K) + εn ,
R0 + R1 ≤Iq (U1,K ; Y1,K |K) + εn ,
R0 + R1 ≤Iq (U1,K ; Y1,K |U0,K , K) + Iq (U0,K ; Y2,K |K) + εn .
(121)
(122)
(123)
Define U0 , (U0,K , K) and U1 , (U1,K , K). Hence, PY1,K ,Y2,K |U0 ,U1 = PY1 ,Y2 |U0 ,U1 . Then, by (109) and (121)–(123), it
follows that (R0 , R1 ) ∈ Rout (B Q ).
PART 2
We show that when the set of state distributions Q is convex, and Condition T Q holds, the capacity region of the compound
broadcast channel B Q with causal SI is given by C(B Q ) = C⋆(B Q ) = Rin (B Q ) = Rout (B Q ) (and this holds regardless of
whether the interior of the capacity region is empty or not).
Due to part 1, we have that
C⋆(B Q ) ⊆ Rout (B Q ) .
(124)
C(B Q ) ⊇ Rin (B Q ) .
(125)
By Lemma 4,
Thus,
Rin (B Q ) ⊆ C(B Q ) ⊆ C⋆(B Q ) ⊆ Rout (B Q ) .
Q
Q
(126)
Q
To conclude the proof, we show that Condition T implies that Rin (B ) ⊇ Rout (B ), hence the inner and outer bounds
coincide. By Definition 4, if a function ξ(u0 , u1 , s) and a set D achieve Rin (B Q ) and Rout (B Q ), then
R0 ≤ minq∈Q Iq (U0 ; Y2 ) ,
(R0 , R1 ) :
[
R1 ≤ minq∈Q Iq (U1 ; Y1 |U0 ) ,
,
(127a)
Rin (B Q ) =
R0 + R1 ≤ minq∈Q Iq (U0 , U1 ; Y1 )
p(u0 ,u1 )∈D
and
Rout (B Q ) =
\
[
q(s)∈Q p(u0 ,u1 )∈D
(R0 , R1 ) :
R0
R1
R0 + R1
Hence, when Condition T Q holds, we have by Definition 5 that for some
R0
(R0 , R1 ) :
[
R1
Rin (B Q ) =
R0 + R1
p(u0 ,u1 )∈D
⊇Rout (B Q ) ,
≤ Iq (U0 ; Y2 ) ,
≤ Iq (U1 ; Y1 |U0 ) ,
.
≤ Iq (U0 , U1 ; Y1 )
(127b)
ξ(u0 , u1 , s), D ⊆ P(U0 × U1 ), and q ∗ ∈ Q,
≤ Iq∗ (U0 ; Y2 ) ,
≤ Iq∗ (U1 ; Y1 |U0 ) ,
≤ Iq∗ (U0 , U1 ; Y1 )
(128)
where the last line follows from (127b).
APPENDIX C
PROOF OF THEOREM 6
At first, ignore the cardinality bounds in (21). Then, it immediately follows from Theorem 5 that C(B q ) = C(B q ), by taking
the set Q that consists of a single state distribution q(s).
To prove the bounds on the alphabet sizes of the strategy variables U0 and U1 , we apply the standard Carathéodory techniques
(see e.g. [7, Lemma 15.4]). Let
L0 , (|X | − 1)|S| + 3 ≤ |X ||S| + 2 ,
(129)
where the inequality holds since |S| ≥ 1. Without loss of generality, assume that X = [1 : |X |] and S = [1 : |S|]. Then, define
the following L0 functionals,
X
ϕij (PU1 ,X|S ) =
PU1 ,X|S (u1 , i|j) = PX|S (i|j) , i = 1, . . . |X | − 1, j = 1, . . . , |S| ,
(130)
u1 ∈U1
ψ1 (PU1 ,X|S ) = −
X
s,u1 ,x,y1
ψ2 (PU1 ,X|S ) = −
X
s,u1 ,x,y2
ψ3 (PU1 ,X|S ) = −
X
u1 ,x,s
−
X
u1 ,x,s
q(s)PU1 ,X|S (u1 , x|s)WY1 |X,S (y1 |x, s) log
q(s)PU1 ,X|S (u1 , x|s)WY2 |X,S (y2 |x, s) log
q(s)PU1 ,X|S (u1 , x|s) log
X
x′ ,s′
"
X
s′ ,u′1 ,x′
X
s′ ,u′1 ,x′
q(s′ )PU1 ,X|S (u′1 , x′ |s′ )WY1 |X,S (y1 |x′ , s′ ) ,
(131)
q(s′ )PU1 ,X|S (u′1 , x′ |s′ )WY2 |X,S (y2 |x′ , s′ ) ,
q(s′ )PU1 ,X|S (u1 , x′ |s′ )
q(s)PU1 ,X|S (u1 , x|s)WY1 |X,S (y1 |x, s) log P
P
x′ ,s′
′′ ′′
u′′
1 ,x ,s
(132)
q(s′ )PU1 ,X|S (u1 , x′ |s′ )WY1 |X,S (y1 |x′ , s′ )
q(s′′ )PU1 ,X|S (u′′1 , x′′ |s′′ )WY1 |X,S (y1 |x′′ , s′′ )
#
.
(133)
Then, observe that
X
p(u0 )ϕi,j (PU1 ,X|S,U0 (·, ·|·, u0 )) = PX|S (i|j) ,
(134)
p(u0 )ψ1 (PU1 ,X|S,U0 (·, ·|·, u0 )) = H(Y1 |U0 ) ,
(135)
p(u0 )ψ1 (PU1 ,X|S,U0 (·, ·|·, u0 )) = H(Y2 |U0 ) ,
(136)
p(u0 )ψ1 (PU1 ,X|S,U0 (·, ·|·, u0 )) = I(U1 ; Y1 |U0 ) .
(137)
u0 ∈U0
X
u0 ∈U0
X
u0 ∈U0
X
u0 ∈U0
By [7, Lemma 15.4], the alphabet size of U0 can then be restricted to |U0 | ≤ L0 , while preserving PX,S,Y1 ,Y2 ; I(U0 ; Y2 ) =
H(Y2 ) − H(Y2 |U0 ); I(U0 ; Y1 |U0 ); and I(U0 , U1 ; Y1 ) = I(U0 ; Y1 |U0 ) + H(Y1 ) − H(Y1 |U0 ).
Fixing the alphabet of U0 , we now apply similar arguments to the cardinality of U1 . Then, less than |X ||S|L0 − 1 functionals
are required for the joint distribution PU0 ,X|S , and an additional functional to preserve H(Y1 |U1 , U0 ). Hence, by [7, Lemma
15.4], the alphabet size of U0 can then be restricted to |U1 | ≤ |X ||S|L0 ≤ |X ||S|(|X ||S| + 2) (see (129)).
APPENDIX D
PROOF OF THEOREM 7
A. Part 1
First, we explain the general idea. We devise a causal version of Ahlswede’s Robustification Technique (RT) [1, 19]. Namely,
we use codes for the compound broadcast channel to construct a random code for the AVBC using randomized permutations.
However, in our case, the causal nature of the problem imposes a difficulty, and the application of the RT is not straightforward.
In [1, 19], the state information is noncausal and a random code is defined via permutations of the codeword symbols. This
cannot be done here, because the SI is provided to the encoder in a causal manner. We resolve this difficulty using Shannon
strategy codes for the compound broadcast channel to construct a random code for the AVBC, applying permutations to the
strategy sequence (un1 , un0 ), which is an integral part of the Shannon strategy code, and is independent of the channel state.
The details are given below.
1) Inner Bound: We show that the region defined in (23) can be achieved by random codes over the AVBC $\mathcal{B}$ with causal SI, i.e. $\mathcal{C}(\mathcal{B}) \supseteq \mathcal{R}^\star_{\mathrm{in}}(\mathcal{B})$. We start with Ahlswede's RT [1], stated below. Let $h : \mathcal{S}^n\to[0,1]$ be a given function. If, for some fixed $\alpha_n\in(0,1)$, and for all $q(s^n) = \prod_{i=1}^n q(s_i)$, with $q\in\mathcal{P}(\mathcal{S})$,
$$\sum_{s^n\in\mathcal{S}^n} q(s^n)\,h(s^n) \le \alpha_n\,, \tag{138}$$
then,
$$\frac{1}{n!}\sum_{\pi\in\Pi_n} h(\pi s^n) \le \beta_n\,, \quad \text{for all } s^n\in\mathcal{S}^n\,, \tag{139}$$
where $\Pi_n$ is the set of all $n$-tuple permutations $\pi : \mathcal{S}^n\to\mathcal{S}^n$, and $\beta_n = (n+1)^{|\mathcal{S}|}\cdot\alpha_n$.
where Πn is the set of all n-tuple permutations π : S n → S n , and βn = (n + 1)|S| · αn .
According to Lemma 4, for every (R0, R1) ∈ R⋆in(B), there exists a (2^{nR0}, 2^{nR1}, n, e^{−2θn}) Shannon strategy code for the compound broadcast channel B^{P(S)} with causal SI, for some θ > 0 and sufficiently large n. Given such a Shannon strategy code C = (u0^n(m0), u1^n(m0, m1), ξ(u0, u1, s), g1(y1^n), g2(y2^n)), we have that (138) is satisfied with h(s^n) = P^{(n)}_{e|s^n}(C) and αn = e^{−2θn}. As a result, Ahlswede's RT tells us that
  (1/n!) ∑_{π∈Πn} P^{(n)}_{e|πs^n}(C) ≤ (n + 1)^{|S|} e^{−2θn} ≤ e^{−θn} , for all s^n ∈ S^n ,      (140)
for a sufficiently large n, such that (n + 1)^{|S|} ≤ e^{θn}.
On the other hand, for every π ∈ Πn,
  P^{(n)}_{e|πs^n}(C)
  (a) = (1/2^{n(R0+R1)}) ∑_{m0,m1} ∑_{(πy1^n, πy2^n) ∉ D(m0,m1)} W_{Y1^n,Y2^n|X^n,S^n}(πy1^n, πy2^n | ξ^n(u0^n(m0), u1^n(m0, m1), πs^n), πs^n)
  (b) = (1/2^{n(R0+R1)}) ∑_{m0,m1} ∑_{(πy1^n, πy2^n) ∉ D(m0,m1)} W_{Y1^n,Y2^n|X^n,S^n}(y1^n, y2^n | π^{−1} ξ^n(u0^n(m0), u1^n(m0, m1), πs^n), s^n)
  (c) = (1/2^{n(R0+R1)}) ∑_{m0,m1} ∑_{(πy1^n, πy2^n) ∉ D(m0,m1)} W_{Y1^n,Y2^n|X^n,S^n}(y1^n, y2^n | ξ^n(π^{−1} u0^n(m0), π^{−1} u1^n(m0, m1), s^n), s^n) ,      (141)
where (a) is obtained by plugging πs^n and x^n = ξ^n(·, ·, ·) in (2) and then changing the order of summation over (y1^n, y2^n); (b) holds because the broadcast channel is memoryless; and (c) follows from the fact that for a Shannon strategy code, xi = ξ(u_{0,i}, u_{1,i}, si), i ∈ [1 : n], by Definition 3. The last expression suggests the use of permutations applied to the encoding strategy sequence and the channel output sequences.
Then, consider the (2^{nR0}, 2^{nR1}, n) random code C_Π, specified by
  fπ^n(m0, m1, s^n) = ξ^n(π^{−1} u1^n(m0, m1), π^{−1} u0^n(m0), s^n) ,      (142a)
and
  g_{1,π}(y1^n) = g1(πy1^n) ,  g_{2,π}(y2^n) = g2(πy2^n) ,      (142b)
for π ∈ Πn, with a uniform distribution µ(π) = 1/|Πn| = 1/n!. Such permutations can be implemented without knowing s^n, hence this coding scheme does not violate the causality requirement.
From (141), we see that
  P^{(n)}_{e|s^n}(C_Π) = ∑_{π∈Πn} µ(π) P^{(n)}_{e|πs^n}(C) ,      (143)
for all s^n ∈ S^n, and therefore, together with (140), we have that the probability of error of the random code C_Π is bounded by
  P_e^{(n)}(q, C_Π) ≤ e^{−θn} ,      (144)
for every q(s^n) ∈ P(S^n). That is, C_Π is a (2^{nR0}, 2^{nR1}, n, e^{−θn}) random code for the AVBC B with causal SI at the encoder. This completes the proof of the inner bound.
2) Outer Bound: We show that the random code capacity region of the AVBC B with causal SI is bounded by C⋆(B) ⊆ R⋆out(B) (see (23)). The random code capacity region of the AVBC is included within the random code capacity region of the compound broadcast channel, namely
  C⋆(B) ⊆ C⋆(B^{P(S)}) .      (145)
By Theorem 5 we have that C⋆(B^Q) ⊆ Rout(B^Q). Thus, with Q = P(S),
  C⋆(B^{P(S)}) ⊆ R⋆out(B) .      (146)
It follows from (145) and (146) that C⋆(B) ⊆ R⋆out(B). Since the random code capacity region always includes the deterministic code capacity region, we have that C(B) ⊆ R⋆out(B) as well.
B. Part 2
The second equality, R⋆in(B) = R⋆out(B), follows from part 2 of Theorem 5, taking Q = P(S). By part 1, R⋆in(B) ⊆ C⋆(B) ⊆ R⋆out(B), hence the proof follows.
APPENDIX E
PROOF OF LEMMA 8
The proof follows the lines of [2, Section 4]. Let k > 0 be an integer, chosen later, and define the random variables
  L1, L2, . . . , Lk  i.i.d. ∼ µ(ℓ) .      (147)
Fix s^n, and define the random variables
  Ωj(s^n) = P^{(n)}_{e|s^n}(C_{Lj}) ,  j ∈ [1 : k] ,      (148)
which is the conditional probability of error of the code C_{Lj} given the state sequence s^n.
Since C^Γ is a (2^{nR1}, 2^{nR2}, n, εn) code, we have that ∑_γ µ(γ) ∑_{s^n} q(s^n) P^{(n)}_{e|s^n}(Cγ) ≤ εn, for all q(s^n). In particular, for every given s^n, we have that
  E Ωj(s^n) = ∑_{γ∈Γ} µ(γ) · P^{(n)}_{e|s^n}(Cγ) ≤ εn ,      (149)
for all j ∈ [1 : k].
Now take n to be large enough so that εn < α. Keeping s^n fixed, we have that the random variables Ωj(s^n) are i.i.d., due to (147). Next the technique known as Bernstein's trick [2] is applied:
  Pr( ∑_{j=1}^{k} Ωj(s^n) ≥ kα )
  (a) ≤ E exp( β ( ∑_{j=1}^{k} Ωj(s^n) − kα ) )      (150)
      = e^{−βkα} · E ∏_{j=1}^{k} e^{β Ωj(s^n)}      (151)
  (b) = e^{−βkα} · ∏_{j=1}^{k} E{ e^{β Ωj(s^n)} }      (152)
  (c) ≤ e^{−βkα} · ∏_{j=1}^{k} E{ 1 + e^{β} · Ωj(s^n) }      (153)
  (d) ≤ e^{−βkα} · ( 1 + e^{β} εn )^{k} ,      (154)
where (a) is an application of Chernoff's inequality; (b) follows from the fact that the Ωj(s^n) are independent; (c) holds since e^{βx} ≤ 1 + e^{β} x, for β > 0 and 0 ≤ x ≤ 1; (d) follows from (149). We take n to be large enough for 1 + e^{β} εn ≤ e^{α} to hold. Thus, choosing β = 2, we have that
  Pr( (1/k) ∑_{j=1}^{k} Ωj(s^n) ≥ α ) ≤ e^{−αk} ,      (155)
for all s^n ∈ S^n. Now, by the union of events bound, we have that
  Pr( max_{s^n} (1/k) ∑_{j=1}^{k} Ωj(s^n) ≥ α ) = Pr( ∃ s^n : (1/k) ∑_{j=1}^{k} Ωj(s^n) ≥ α )      (156)
      ≤ ∑_{s^n∈S^n} Pr( (1/k) ∑_{j=1}^{k} Ωj(s^n) ≥ α )      (157)
      ≤ |S|^n · e^{−αk} .      (158)
Since |S|^n grows only exponentially in n, choosing k = n^2 results in a super-exponential decay.
Consider the code C^{Γ∗} = (µ∗, Γ∗ = [1 : k], {C_{Lj}}_{j=1}^{k}) formed by a random collection of codes, with µ∗(j) = 1/k. It follows that the conditional probability of error given s^n, which is given by
  P^{(n)}_{e|s^n}(C^{Γ∗}) = (1/k) ∑_{j=1}^{k} P^{(n)}_{e|s^n}(C_{Lj}) ,      (159)
exceeds α with a super-exponentially small probability ∼ e^{−αn^2}, for all s^n ∈ S^n. Thus, there exists a random code C^{Γ∗} = (µ∗, Γ∗, {C_{γj}}_{j=1}^{k}) for the AVBC B, such that
  P_e^{(n)}(q, C^{Γ∗}) = ∑_{s^n∈S^n} q(s^n) P^{(n)}_{e|s^n}(C^{Γ∗}) ≤ α , for all q(s^n) ∈ P(S^n) .      (160)
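As a quick numerical sanity check (our own illustration, not part of the proof), the following Python sketch evaluates the union bound |S|^n · e^{−αk} of (158) with the choice k = n²; the values of α and |S| below are arbitrary examples.

```python
import math

def union_bound(n, alpha=0.1, S_size=2):
    # |S|^n grows exponentially in n, while exp(-alpha * n**2) decays
    # super-exponentially, so the product vanishes for large n.
    k = n ** 2
    return S_size ** n * math.exp(-alpha * k)

for n in [10, 20, 40, 80]:
    print(n, union_bound(n))
```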
APPENDIX F
PROOF OF THEOREM 9
Achievability proof. To show achievability, we follow the lines of [2], with the required adjustments. We use the random code constructed in the proof of Theorem 7 to construct a deterministic code.
Let (R0, R1) ∈ C⋆(B), and consider the case where int C(B) ≠ ∅. Namely,
  C(W1) > 0 , and C(W2) > 0 ,      (161)
where W1 = {W_{Y1|X,S}} and W2 = {W_{Y2|X,S}} denote the marginal AVCs with causal SI of user 1 and user 2, respectively.
By Lemma 8, for every ε1 > 0 and sufficiently large n, there exists a (2^{nR0}, 2^{nR1}, n, ε1) random code C^Γ = (µ(γ) = 1/k, Γ = [1 : k], {Cγ}_{γ∈Γ}), where Cγ = (fγ^n, g_{1,γ}, g_{2,γ}), for γ ∈ Γ, and k = |Γ| ≤ n^2. Following (161), we have that for every ε2 > 0 and sufficiently large ν, the code index γ ∈ [1 : k] can be sent over B using a (2^{ν R̃0}, 2^{ν R̃1}, ν, ε2) deterministic code C_i = (f̃^ν, g̃1, g̃2), where R̃0 > 0, R̃1 > 0. Since k is at most polynomial in n, the encoder can reliably convey γ to the receivers with a negligible blocklength, i.e. ν = o(n).
Now, consider a code formed by the concatenation of C_i as a prefix to a corresponding code in the code collection {Cγ}_{γ∈Γ}. That is, the encoder sends both the index γ and the message pair (m0, m1) to the receivers, such that the index γ is transmitted first by f̃^ν(γ, s^ν), and then the message pair (m0, m1) is transmitted by the codeword x^n = fγ^n(m0, m1, s_{ν+1}, . . . , s_{ν+n}). Subsequently, decoding is performed in two stages as well: decoder 1 first estimates the index, with γ̂1 = g̃1(y_{1,1}, . . . , y_{1,ν}), and the message pair (m0, m1) is then estimated by (m̂0, m̂1) = g_{1,γ̂1}(y_{1,ν+1}, . . . , y_{1,ν+n}). Similarly, decoder 2 estimates the index with γ̂2 = g̃2(y_{2,1}, . . . , y_{2,ν}), and the message m0 is then estimated by m̂0 = g_{2,γ̂2}(y_{2,ν+1}, . . . , y_{2,ν+n}).
By the union of events bound, the probability of error is then bounded by ε = ε1 + ε2, for every joint distribution in P^{ν+n}(S^{ν+n}). That is, the concatenated code is a (2^{(ν+n) R̃0,n}, 2^{(ν+n) R̃1,n}, ν + n, ε) code over the AVBC B with causal SI, where ν = o(n). Hence, the blocklength is n + o(n), and the rates R̃0,n = (n/(ν+n)) · R0 and R̃1,n = (n/(ν+n)) · R1 approach R0 and R1, respectively, as n → ∞.
Converse proof. In general, the deterministic code capacity region is included within the random code capacity region. Namely,
C(B) ⊆ C⋆(B).
APPENDIX G
PROOF OF COROLLARY 10
First, consider the inner and outer bounds in (28) and (29). The bounds are obtained as a direct consequence of part 1 of Theorem 7 and Theorem 9. Note that the outer bound (29) holds regardless of any condition, since the deterministic code capacity region is always included within the random code capacity region, i.e. C(B) ⊆ C⋆(B) ⊆ R⋆out(B).
Now, suppose that the marginals V^ξ_{Y1|U,S} and V^{ξ′}_{Y2|U0,S} are non-symmetrizable for some ξ : U × S → X and ξ′ : U0 × S → X, and Condition T holds. Then, based on [8], [7], both marginal (single-user) AVCs have positive capacity, i.e. C(W1) > 0 and C(W2) > 0. Namely, int C(B) ≠ ∅. Hence, by Theorem 9, the deterministic code capacity region coincides with the random code capacity region, i.e. C(B) = C⋆(B). Then, the proof follows from part 2 of Theorem 7.
APPENDIX H
ANALYSIS OF EXAMPLE 1
We begin with the case of an arbitrarily varying BSBC BD,0 without SI. We claim that the single-user marginal AVC W1,0 without SI, corresponding to the stronger user, has zero capacity. Denote q ≜ q(1) = 1 − q(0). Then, observe that the additive noise is distributed according to ZS ∼ Bernoulli(ηq), with ηq ≜ (1 − q) · θ0 + q · θ1, for 0 ≤ q ≤ 1. Based on [5], C(W1,0) ≤ C⋆(W1,0) = min_{0≤q≤1} [1 − h(ηq)]. Since θ0 < 1/2 ≤ θ1, there exists 0 ≤ q ≤ 1 such that ηq = 1/2, thus C(W1,0) = 0. The capacity region of the AVDBC BD,0 without SI is then given by C(BD,0) = {(0, 0)}.
Now, consider the arbitrarily varying BSBC BD with causal SI. By Theorem 11, the random code capacity region is bounded by R⋆in(BD) ⊆ C⋆(BD) ⊆ R⋆out(BD). We show that the bounds coincide, and are thus tight. Let BD^q denote the random parameter DBC W_{Y1,Y2|X,S} with causal SI, governed by an i.i.d. state sequence, distributed according to S ∼ Bernoulli(q). By [17], the corresponding capacity region is given by
  C(BD^q) = ∪_{0≤β≤1} { (R1, R2) : R1 ≤ h(β ∗ δq) − h(δq) ,  R2 ≤ 1 − h(α ∗ β ∗ δq) } ,      (162a)
where
  δq ≜ (1 − q) · θ0 + q · (1 − θ1) ,      (162b)
for 0 ≤ q ≤ 1. For every given 0 ≤ q′ ≤ 1, we have that R⋆out(BD) = ∩_{0≤q≤1} C(BD^q) ⊆ C(BD^{q′}). Thus, taking q′ = 1, we have that
  R⋆out(BD) ⊆ ∪_{0≤β≤1/2} { (R1, R2) : R1 ≤ h(β ∗ θ1) − h(θ1) ,  R2 ≤ 1 − h(α ∗ β ∗ θ1) } ,      (163)
where we have used the identity h(α ∗ (1 − δ)) = h(α ∗ δ).
Now, to show that the region above is achievable, we examine the inner bound,
  R⋆in(BD) = ∪_{p(u1,u2), ξ(u1,u2,s)} { (R1, R2) : R1 ≤ min_{0≤q≤1} Iq(U1 ; Y1 | U2) ,  R2 ≤ min_{0≤q≤1} Iq(U2 ; Y2) } .      (164)
Consider the following choice of p(u1, u2) and ξ(u1, u2, s). Let U1 and U2 be independent random variables,
  U1 ∼ Bernoulli(β) ,  and  U2 ∼ Bernoulli(1/2) ,      (165)
for 0 ≤ β ≤ 1/2, and let
  ξ(u1, u2, s) = u1 + u2 + s  mod 2 .      (166)
Then,
  Hq(Y1 | U1, U2) = Hq(S + ZS) = h(δq) ,
  Hq(Y1 | U2) = Hq(U1 + S + ZS) = h(β ∗ δq) ,
  Hq(Y2 | U2) = Hq(U1 + S + ZS + V) = h(α ∗ β ∗ δq) ,
  Hq(Y2) = 1 ,      (167)
where addition is modulo 2, and δq is given by (162b). Thus,
  Iq(U2 ; Y2) = 1 − h(α ∗ β ∗ δq) ,
  Iq(U1 ; Y1 | U2) = h(β ∗ δq) − h(δq) ,      (168)
hence
  R⋆in(BD) ⊇ ∪_{0≤β≤1/2} { (R1, R2) : R1 ≤ min_{0≤q≤1} [h(β ∗ δq) − h(δq)] ,  R2 ≤ min_{0≤q≤1} [1 − h(α ∗ β ∗ δq)] } .      (169)
Note that θ0 ≤ δq ≤ 1 − θ1 ≤ 1/2. For 0 ≤ δ ≤ 1/2, the functions g1(δ) = 1 − h(α ∗ β ∗ δ) and g2(δ) = h(β ∗ δ) − h(δ) are monotonically decreasing functions of δ, hence the minima in (169) are both achieved with q = 1. It follows that
  C⋆(BD) = R⋆in(BD) = R⋆out(BD) = ∪_{0≤β≤1} { (R1, R2) : R1 ≤ h(β ∗ θ1) − h(θ1) ,  R2 ≤ 1 − h(α ∗ β ∗ θ1) } .      (170)
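To make the region (170) concrete, the following short Python sketch (our own illustration, not code from the paper) evaluates its boundary for a few values of β; the crossover parameters θ1 and α below are arbitrary example values with θ1 ≥ 1/2.

```python
import math

def h(p):
    # binary entropy in bits, with the convention h(0) = h(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    # binary convolution a * b = a(1 - b) + (1 - a)b
    return a * (1 - b) + (1 - a) * b

theta1, alpha = 0.7, 0.1
for beta in [i / 10 for i in range(6)]:              # 0 <= beta <= 1/2
    R1 = h(conv(beta, theta1)) - h(theta1)           # bound on R1 in (170)
    R2 = 1 - h(conv(alpha, conv(beta, theta1)))      # bound on R2 in (170)
    print(f"beta = {beta:.1f}   R1 <= {R1:.3f}   R2 <= {R2:.3f}")
```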
It can also be verified that Condition TD holds (see Definition 9), in agreement with part 2 of Theorem 11. First, we specify a function ξ(u1, u2, s) and a distribution set D⋆ that achieve R⋆in(BD) and R⋆out(BD) (see Definition 38). Let ξ(u1, u2, s) be as in (166), and let D⋆ be the set of distributions p(u1, u2) such that U1 and U2 are independent random variables, distributed according to (165). By the derivation above, the requirement (38a) is satisfied. Now, by the derivation in [17, Section IV], we have that
  C(BD^q) = ∪_{p(u1,u2)∈D⋆} { (R1, R2) : R1 ≤ Iq(U1 ; Y1 | U2) ,  R2 ≤ Iq(U2 ; Y2) } .      (171)
Then, the requirement (38b) is satisfied as well, hence ξ(u1, u2, s) and D⋆ achieve R⋆in(BD) and R⋆out(BD). It follows that Condition TD holds, as q∗ = 1 satisfies the desired property with ξ(u1, u2, s) and D⋆ as described above.
We move to the deterministic code capacity region of the arbitrarily varying BSBC BD with causal SI. If θ1 = 1/2, the capacity region is given by C(BD) = C⋆(BD) = {(0, 0)}, by (170). Otherwise, θ0 < 1/2 < θ1, and we now show that the condition in Corollary 10 is met. Suppose that V^{ξ′}_{Y2|U2,S} is symmetrizable for all ξ′ : U2 × S → X. That is, for every ξ′(u2, s), there exists λ_{u2} = J(1 | u2) such that
  (1 − λ_{ub}) W_{Y2|X,S}(y2 | ξ′(ua, 0), 0) + λ_{ub} W_{Y2|X,S}(y2 | ξ′(ua, 1), 1) = (1 − λ_{ua}) W_{Y2|X,S}(y2 | ξ′(ub, 0), 0) + λ_{ua} W_{Y2|X,S}(y2 | ξ′(ub, 1), 1) ,      (172)
for all ua, ub ∈ U2, y2 ∈ {0, 1}. If this is the case, then for ξ′(u2, s) = u2 + s mod 2, taking ua = 0, ub = 1, y2 = 1, we have that
  (1 − λ1) · (α ∗ θ0) + λ1 · (1 − α ∗ θ1) = (1 − λ0) · (1 − α ∗ θ0) + λ0 · (α ∗ θ1) .      (173)
This is a contradiction. Since f(θ) = α ∗ θ is a monotonically increasing function of θ, and since 1 − f(θ) = f(1 − θ), the value of the LHS of (173) is in [0, 1/2), while the value of the RHS of (173) is in (1/2, 1]. Thus, there exists ξ′ : U2 × S → X such that V^{ξ′}_{Y2|U2,S} is non-symmetrizable for θ0 < 1/2 < θ1. As Condition TD holds, we have that C(BD) = R⋆in(BD) = R⋆out(BD), due to Corollary 13. Hence, by (170), we have that the capacity region of the arbitrarily varying BSBC BD with causal SI is given by (45).
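A small numerical check (our own illustration, not a proof) of the contradiction argument around (173): for θ0 < 1/2 < θ1, the left-hand side stays below 1/2 and the right-hand side above 1/2 for every choice of λ0, λ1 ∈ [0, 1]; the parameter values below are arbitrary examples.

```python
def conv(a, b):
    # binary convolution a * b = a(1 - b) + (1 - a)b
    return a * (1 - b) + (1 - a) * b

alpha, theta0, theta1 = 0.1, 0.2, 0.8
grid = [i / 20 for i in range(21)]
lhs_max = max((1 - l1) * conv(alpha, theta0) + l1 * (1 - conv(alpha, theta1)) for l1 in grid)
rhs_min = min((1 - l0) * (1 - conv(alpha, theta0)) + l0 * conv(alpha, theta1) for l0 in grid)
print(lhs_max, rhs_min)   # lhs_max < 0.5 < rhs_min, so (173) cannot hold
```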
APPENDIX I
ANALYSIS OF EXAMPLE 2
A. Random Parameter BSBC with Correlated Noises
Consider the random parameter BSBC B^q with causal SI. By Theorem 6, the capacity region of B^q with degraded message sets with causal SI is given by the region in (20). Then, to show achievability, consider the following choice of p(u0, u1) and ξ(u0, u1, s). Let U0 and U1 be independent random variables,
  U0 ∼ Bernoulli(1/2) ,  and  U1 ∼ Bernoulli(β) ,      (174)
for 0 ≤ β ≤ 1/2, and let
  ξ(u0, u1, s) = u0 + u1 + s  mod 2 .      (175)
Then,
  Hq(Y1 | U0, U1) = Hq(S + ZS) = h(δq^(1)) ,
  Hq(Y1 | U0) = Hq(U1 + S + ZS) = h(β ∗ δq^(1)) ,
  Hq(Y2 | U0) = Hq(U1 + S + NS) = h(β ∗ δq^(2)) ,
  Hq(Y2) = 1 ,      (176)
where addition is modulo 2, and δq^(1), δq^(2) are given by (49). Thus,
  Iq(U0 ; Y2) = 1 − h(β ∗ δq^(2)) ,
  Iq(U1 ; Y1 | U0) = h(β ∗ δq^(1)) − h(δq^(1)) .      (177)
The last inequality on the sum rate in (20) is redundant, as shown below. Since θ0 ≤ ε0 ≤ 1/2 and 1/2 ≤ θ1 ≤ ε1, we have that δq^(1) ≤ δq^(2) ≤ 1/2. Hence,
  Iq(U0 ; Y2) = 1 − h(β ∗ δq^(2)) ≤ 1 − h(β ∗ δq^(1)) = Iq(U0 ; Y1) ,      (178)
which implies that Iq(U0 ; Y2) + Iq(U1 ; Y1 | U0) ≤ Iq(U0, U1 ; Y1). This completes the proof of the direct part.
As for the converse, we need to show that if
  R1 > h(β ∗ δq^(1)) − h(δq^(1)) ,      (179)
for some 0 ≤ β ≤ 1/2, then it must follow that R0 ≤ 1 − h(β ∗ δq^(2)). Indeed, by (20) and (179),
  Hq(Y1 | U0) > h(β ∗ δq^(1)) − h(δq^(1)) + Hq(Y1 | U0, U1)
      ≥ h(β ∗ δq^(1)) − h(δq^(1)) + min_{u0,u1} Hq(ξ(u0, u1, S) + ZS)
      = h(β ∗ δq^(1)) − h(δq^(1)) + min( Hq(ZS), Hq(S + ZS) )
      = h(β ∗ δq^(1)) − h(δq^(1)) + min( h((1 − q)θ0 + qθ1), h(δq^(1)) )
      = h(β ∗ δq^(1)) − h(δq^(1)) + h(δq^(1))
      = h(β ∗ δq^(1)) .      (180)
Then, since δq^(1) ≤ δq^(2) ≤ 1/2, there exists a random variable L ∼ Bernoulli(λq), with
  δq^(2) = δq^(1) ∗ λq ,      (181)
for some 0 ≤ λq ≤ 1/2, such that Ỹ2 = Y1 + L mod 2 is distributed according to Pr( Ỹ2 = y2 | U0 = u0, U1 = u1 ) = ∑_{s∈S} q(s) W_{Y2|X,S}(y2 | ξ(u0, u1, s), s). Thus,
  Hq(Y2 | U0) = Hq(Ỹ2 | U0)  (a)≥  h( [h^{−1}(Hq(Y1 | U0))] ∗ λq )  (b)≥  h(β ∗ δq^(1) ∗ λq)  (c)=  h(β ∗ δq^(2)) ,      (182)
where (a) is due to Mrs. Gerber's Lemma [20], and (b)-(c) follow from (180) and (181), respectively.
B. Arbitrarily Varying BSBC with Correlated Noises
1) Without SI: We begin with the case of an arbitrarily varying BSBC B0 without SI. We claim that the single-user marginal AVCs W1,0 and W2,0 without SI, corresponding to user 1 and user 2, respectively, have zero capacity. Denote q ≜ q(1) = 1 − q(0). Then, observe that the additive noises are distributed according to ZS ∼ Bernoulli(ηq^(1)) and NS ∼ Bernoulli(ηq^(2)), with ηq^(1) ≜ (1 − q) · θ0 + q · θ1 and ηq^(2) ≜ (1 − q) · ε0 + q · ε1, for 0 ≤ q ≤ 1. Based on [5], C(W1,0) ≤ C⋆(W1,0) = min_{0≤q≤1} [1 − h(ηq^(1))]. Since θ0 < 1/2 ≤ θ1, there exists 0 ≤ q1 ≤ 1 such that ηq1^(1) = 1/2, thus C(W1,0) = 0. Similarly, ε0 < 1/2 ≤ ε1 implies that ηq2^(2) = 1/2, for some 0 ≤ q2 ≤ 1, thus C(W2,0) = 0 as well. The capacity region of the AVBC B0 without SI is then given by C(B0) = {(0, 0)}.
2) Causal SI – Case 1: Consider the arbitrarily varying BSBC B with causal SI, with θ0 ≤ 1 − θ1 ≤ ε0 ≤ 1 − ε1 ≤ 1/2. By Theorem 7, the random code capacity region is bounded by R⋆in(B) ⊆ C⋆(B) ⊆ R⋆out(B). We show that the bounds coincide, and are thus tight. By (15), (20) and (23), we have that R⋆out(B) = ∩_{0≤q≤1} C(B^q) ⊆ C(B^{q′}), for every given 0 ≤ q′ ≤ 1. Thus, taking q′ = 1, we have by (48) that
  R⋆out(B) ⊆ ∪_{0≤β≤1/2} { (R0, R1) : R0 ≤ 1 − h(β ∗ ε1) ,  R1 ≤ h(β ∗ θ1) − h(θ1) } ,      (183)
where we have used the identity h(α ∗ (1 − δ)) = h(α ∗ δ).
Now, to show that the region above is achievable, we examine the inner bound,
  R⋆in(B) = ∪_{p(u0,u1), ξ(u0,u1,s)} { (R0, R1) : R0 ≤ min_{0≤q≤1} Iq(U0 ; Y2) ,  R1 ≤ min_{0≤q≤1} Iq(U1 ; Y1 | U0) ,  R0 + R1 ≤ min_{0≤q≤1} Iq(U0, U1 ; Y1) } .      (184)
Consider the following choice of p(u0, u1) and ξ(u0, u1, s). Let U0 and U1 be independent random variables,
  U0 ∼ Bernoulli(1/2) ,  and  U1 ∼ Bernoulli(β) ,      (185)
for 0 ≤ β ≤ 1/2, and let
  ξ(u0, u1, s) = u0 + u1 + s  mod 2 .      (186)
Then, as in Subsection I-A above, this yields the following inner bound,
  R⋆in(B) ⊇ ∪_{0≤β≤1/2} { (R0, R1) : R0 ≤ min_{0≤q≤1} [1 − h(β ∗ δq^(2))] ,  R1 ≤ min_{0≤q≤1} [h(β ∗ δq^(1)) − h(δq^(1))] } .      (187)
Note that θ0 ≤ δq^(1) ≤ 1 − θ1 ≤ 1/2 and ε0 ≤ δq^(2) ≤ 1 − ε1 ≤ 1/2. For 0 ≤ δ ≤ 1/2, the functions g1(δ) = 1 − h(β ∗ δ) and g2(δ) = h(β ∗ δ) − h(δ) are monotonically decreasing functions of δ, hence the minima in (187) are both achieved with q = 1. It follows that
  C⋆(B) = R⋆in(B) = R⋆out(B) = ∪_{0≤β≤1} { (R0, R1) : R0 ≤ 1 − h(β ∗ ε1) ,  R1 ≤ h(β ∗ θ1) − h(θ1) } .      (188)
It can also be verified that Condition T holds (see Definition 5 and (24)), in agreement with part 2 of Theorem 7. First, we specify a function ξ(u0, u1, s) and a distribution set D⋆ that achieve R⋆in(B) and R⋆out(B) (see Definition 4). Let ξ(u0, u1, s) be as in (186), and let D⋆ be the set of distributions p(u0, u1) such that U0 and U1 are independent random variables, distributed according to (185). By the derivation above, the first requirement in Definition 5 is satisfied with Q = P(S), and by our derivation in Subsection I-A,
  C(B^q) = ∪_{p(u0,u1)∈D⋆} { (R0, R1) : R0 ≤ Iq(U0 ; Y2) ,  R1 ≤ Iq(U1 ; Y1 | U0) ,  R0 + R1 ≤ Iq(U0, U1 ; Y1) } .      (189)
Then, the second requirement is satisfied as well, hence ξ(u0, u1, s) and D⋆ achieve R⋆in(B) and R⋆out(B). It follows that Condition T holds, as q∗ = 1 satisfies the desired property with ξ(u0, u1, s) and D⋆ as described above.
We move to the deterministic code capacity region of the arbitrarily varying BSBC B with causal SI. Consider the following cases. First, if θ1 = 1/2, then ε1 = 1/2 as well, and the capacity region is given by C(B) = C⋆(B) = {(0, 0)}, by (188). Otherwise, θ0 < 1/2 < θ1. Then, for the case where ε1 = 1/2, we show that the random code capacity region, C⋆(B) = {(R0, R1) : R0 = 0, R1 ≤ C⋆(W1) = 1 − h(θ1)}, can be achieved by deterministic codes as well. Based on [8], [7], it suffices to show that there exists a function ξ : U × S → X such that V^ξ_{Y1|U,S} is non-symmetrizable.
Indeed, assume to the contrary that θ0 < 1/2 < θ1, yet V^ξ_{Y1|U,S} is symmetrizable for all ξ : U × S → X. That is, for every ξ(u, s), there exists σu = J(1 | u) such that
  (1 − σ_{ub}) W_{Y1|X,S}(y1 | ξ(ua, 0), 0) + σ_{ub} W_{Y1|X,S}(y1 | ξ(ua, 1), 1) = (1 − σ_{ua}) W_{Y1|X,S}(y1 | ξ(ub, 0), 0) + σ_{ua} W_{Y1|X,S}(y1 | ξ(ub, 1), 1) ,      (190)
for all ua, ub ∈ U, y1 ∈ {0, 1}. If this is the case, then for ξ(u, s) = u + s mod 2, taking ua = 0, ub = 1, y1 = 1, we have that
  (1 − σ1) θ0 + σ1 (1 − θ1) = (1 − σ0)(1 − θ0) + σ0 θ1 .      (191)
This is a contradiction, since the value of the LHS of (191) is in [0, 1/2), while the value of the RHS of (191) is in (1/2, 1]. Hence, V^ξ_{Y1|U,S} is non-symmetrizable, and C(B) = C⋆(B).
The last case to consider is when θ0 ≤ ε0 < 1/2 < ε1 ≤ θ1. We now claim that the condition in Corollary 10 is met. Indeed, the contradiction in (191) implies that V^ξ_{Y1|U0,U1,S} is non-symmetrizable with ξ(u0, u1, s) = u0 + u1 + s mod 2, given that θ0 < 1/2 < θ1. Similarly, V^{ξ′}_{Y2|U0,S} is non-symmetrizable with ξ′(u0, s) = u0 + s mod 2, given that ε0 < 1/2 < ε1. As Condition T holds, we have that C(B) = R⋆in(B) = R⋆out(B), due to Corollary 10. Hence, by (188), we have that the capacity region of the arbitrarily varying BSBC with correlated noises B with causal SI is given by (50).
3) Causal SI – Case 2: Consider the arbitrarily varying BSBC B with causal SI, with θ0 ≤ 1 − θ1 ≤ 1 − ε1 ≤ ε0 ≤ 1/2. By Theorem 7, the random code capacity region is bounded by R⋆in(B) ⊆ C⋆(B) ⊆ R⋆out(B). Next, we show that the deterministic code capacity region is bounded by (51) and (52).
Inner Bound. Denote
  Ain ≜ ∪_{0≤β≤1} { (R0, R1) : R0 ≤ 1 − h(β ∗ ε0) ,  R1 ≤ h(β ∗ θ1) − h(θ1) } .      (192)
We show that R⋆in(B) ⊆ Ain and R⋆in(B) ⊇ Ain, hence R⋆in(B) = Ain. As in the proof for case 1 above, consider the set of strategy distributions D⋆ and the function ξ(u0, u1, s) as specified by (185) and (186). Then, this results in the following inner bound,
  R⋆in(B) ⊇ ∪_{p∈D⋆} { (R0, R1) : R0 ≤ min_{0≤q≤1} Iq(U0 ; Y2) ,  R1 ≤ min_{0≤q≤1} Iq(U1 ; Y1 | U0) ,  R0 + R1 ≤ min_{0≤q≤1} Iq(U0, U1 ; Y1) }
      = ∪_{0≤β≤1/2} { (R0, R1) : R0 ≤ min_{0≤q≤1} [1 − h(β ∗ δq^(2))] ,  R1 ≤ min_{0≤q≤1} [h(β ∗ δq^(1)) − h(δq^(1))] }
      = Ain ,      (193)
where the last equality holds since in case 2, we assume that θ0 ≤ 1 − θ1 ≤ 1/2 and 1 − ε1 ≤ ε0 ≤ 1/2.
Now, we upper bound R⋆in(B) by
  R⋆in(B) ⊆ ∪_{p(u0,x)} { (R0, R1) : R0 ≤ min_{s∈S} Iq(U0 ; Y2 | S = s) ,  R1 ≤ min_{s∈S} Iq(X ; Y1 | U0, S = s) ,  R0 + R1 ≤ min_{s∈S} Iq(X ; Y1 | S = s) } .      (194)
We have replaced U1 with X since (U0, U1) → X → Y1 form a Markov chain. Now, since X → (Y1, S) → Y2 form a Markov chain, the third inequality in (194) is not necessary. Furthermore, W_{Y1|X,S}(y1 | x, 1) is degraded with respect to W_{Y1|X,S}(y1 | x, 0), whereas W_{Y2|X,S}(y2 | x, 0) is degraded with respect to W_{Y2|X,S}(y2 | x, 1). Thus,
  R⋆in(B) ⊆ ∪_{p(u0,x)} { (R0, R1) : R0 ≤ Iq(U0 ; Y2 | S = 0) ,  R1 ≤ Iq(X ; Y1 | U0, S = 1) } .      (195)
Observe that the RHS of (195) is the capacity region of a BSBC without a state, specified by Y1 = X + Z1 mod 2, Y2 = X + N0 mod 2 [4, 11]. This upper bound can thus be expressed as in the RHS of (192) (see e.g. [6, Example 15.6.5]). Hence, R⋆in(B) = Ain, which proves the equality in (51).
To show that the inner bound is achievable by deterministic codes, i.e. C(B) ⊇ R⋆in(B), we consider the following cases. First, if θ1 = 1/2, then ε0 = 1/2 as well, and R⋆in(B) = {(0, 0)}, by (192). Otherwise, θ0 < 1/2 < θ1. In particular, for ε0 = 1/2, we have that R⋆in(B) = {(R0, R1) : R0 = 0, R1 ≤ 1 − h(θ1)}. Then, as shown in the proof of case 1, there exists a function ξ : U × S → X such that V^ξ_{Y1|U,S} is non-symmetrizable. Thus, based on [8], [7], the deterministic code capacity of the user 1 marginal AVC is given by C(W1) = 1 − h(θ1), which implies that R⋆in(B) is achievable for ε0 = 1/2.
It remains to consider the case where θ0 ≤ ε0 < 1/2 < ε1 ≤ θ1. By Corollary 10, in order to show that C(B) ⊇ R⋆in(B), it suffices to prove that the capacity region has a non-empty interior. Following the same steps as in the proof of case 1 above, we have that the channels V^ξ_{Y1|U,S} and V^{ξ′}_{Y2|U0,S} are non-symmetrizable for ξ(u, s) = u + s mod 2 and ξ′(u0, s) = u0 + s mod 2 (see (27)). Thus, based on [8], [7], the deterministic code capacities of the marginal AVCs of user 1 and user 2 are positive, which implies that int C(B) ≠ ∅, hence C(B) ⊇ R⋆in(B).
Outer Bound. Since the deterministic code capacity region is included within the random code capacity region, it follows that C(B) ⊆ R⋆out(B). Based on (15), (20) and (23), we have that R⋆out(B) = ∩_{0≤q≤1} C(B^q). Thus,
  R⋆out(B) ⊆ C(B^{q=0}) ∩ C(B^{q=1})
      = [ ∪_{0≤β≤1/2} { (R0, R1) : R0 ≤ 1 − h(β ∗ ε0) ,  R1 ≤ h(β ∗ θ0) − h(θ0) } ] ∩ [ ∪_{0≤β≤1/2} { (R0, R1) : R0 ≤ 1 − h(β ∗ ε1) ,  R1 ≤ h(β ∗ θ1) − h(θ1) } ]
      = ∪_{0≤β0≤1, 0≤β1≤1} { (R0, R1) : R0 ≤ 1 − h(β0 ∗ ε0) ,  R0 ≤ 1 − h(β1 ∗ ε1) ,  R1 ≤ h(β0 ∗ θ0) − h(θ0) ,  R1 ≤ h(β1 ∗ θ1) − h(θ1) } .      (196)
REFERENCES
[1] R. Ahlswede. “Arbitrarily varying channels with states sequence known to the sender”. IEEE Trans. Inform. Theory
32.5 (Sept. 1986), pp. 621–629.
[2] R. Ahlswede. “Elimination of correlation in random codes for arbitrarily varying channels”. Z. Wahrscheinlichkeitstheorie
Verw. Gebiete 44.2 (June 1978), pp. 159–175.
[3] M. Benammar, P. Piantanida, and S. Shamai. “On the compound broadcast channel: multiple description coding and
interference decoding”. arXiv:1410.5187 (Oct. 2014).
[4] P. Bergmans. “Random coding theorem for broadcast channels with degraded components”. IEEE Trans. Inform. Theory
19.2 (Mar. 1973), pp. 197–207.
[5] D. Blackwell, L. Breiman, and A. J. Thomasian. “The capacities of certain channel classes under random coding”. Ann.
Math. Statist. 31.3 (Sept. 1960), pp. 558–567.
[6] T. M. Cover and J. A. Thomas. Elements of Information Theory. 2nd ed. Wiley, 2006.
[7] I. Csiszár and J. Körner. Information Theory: Coding Theorems for Discrete Memoryless Systems. 2nd ed. Cambridge
University Press, 2011.
[8] I. Csiszár and P. Narayan. “The capacity of the arbitrarily varying channel revisited: positivity, constraints”. IEEE Trans.
Inform. Theory 34.2 (Mar. 1988), pp. 181–193.
[9] A. El Gamal and Y.H. Kim. Network Information Theory. Cambridge University Press, 2011.
[10] T. Ericson. “Exponential error bounds for random codes in the arbitrarily varying channel”. IEEE Trans. Inform. Theory
31.1 (Jan. 1985), pp. 42–48.
[11] R.G. Gallager. “Capacity and coding for degraded broadcast channels”. Probl. Inf. Transm. 10.3 (May 1974), pp. 3–14.
[12] E. Hof and S. I. Bross. “On the deterministic-code capacity of the two-user discrete memoryless Arbitrarily Varying
General Broadcast channel with degraded message sets”. IEEE Trans. Inform. Theory 52.11 (Nov. 2006), pp. 5023–5044.
[13] J. H. Jahn. “Coding of arbitrarily varying multiuser channels”. IEEE Trans. Inform. Theory 27.2 (Mar. 1981), pp. 212–226.
[14] J. Körner and K. Marton. “General broadcast channels with degraded message sets”. IEEE Trans. Inform. Theory 23.1
(Jan. 1977), pp. 60–64.
[15] U. Pereg and Y. Steinberg. “The arbitrarily varying degraded broadcast channel with causal Side information at the
encoder”. Proc. IEEE Int’l Symp. Inform. Theory (ISIT’2017). Aachen, Germany.
[16] C. E. Shannon. “Channels with side Information at the transmitter”. IBM J. Res. Dev. 2.4 (Oct. 1958), pp. 289–293.
[17] Y. Steinberg. “Coding for the degraded broadcast channel with random parameters, with causal and noncausal side
information”. IEEE Trans. Inform. Theory 51.8 (July 2005), pp. 2867–2877.
[18] H. Weingarten et al. “The capacity region of the degraded multiple-input multiple-output compound broadcast channel”.
IEEE Trans. Inform. Theory 55.11 (Oct. 2009), pp. 5011–5023.
[19] A. Winshtok and Y. Steinberg. “The arbitrarily varying degraded broadcast channel with states known at the encoder”.
Proc. IEEE Int’l Symp. Inform. Theory (ISIT’2006). Seattle, Washington, July 2006, pp. 2156–2160.
[20] A. D. Wyner and J. Ziv. “A theorem on the entropy of certain binary sequences and applications: Part I”. IEEE Trans.
Inform. Theory 19.6 (Nov. 1973), pp. 769–772.
The International Journal of Soft Computing and Software Engineering [JSCSE], Vol. 3, No. 3, Special Issue:
The Proceeding of International Conference on Soft Computing and Software Engineering 2013 [SCSE’13],
San Francisco State University, CA, U.S.A., March 2013
Doi: 10.7321/jscse.v3.n3.91
e-ISSN: 2251-7545
Modeling the Behavior of Reinforced Concrete Walls
under Fire, Considering the Impact of the Span on
Firewalls
* Nadia Otmani-Benmehidi, Civil engineering, Laboratory LMGE, University Badji Mokhtar, Annaba, Algeria ([email protected])
Meriem Arar, Civil engineering, Laboratory LMGE, University Badji Mokhtar, Annaba, Algeria ([email protected])
Imene Chine, Civil engineering, Laboratory LMGE, University Badji Mokhtar, Annaba, Algeria ([email protected])
Abstract — Numerical modeling using computers is known to
present several advantages compared to experimental testing. The high cost and the amount of time required to prepare and to perform a test were among the main problems on the table when the first tools for modeling structures in fire were developed. The discipline of structures-in-fire modeling is still currently the subject of important research efforts around the world, and those research efforts have led to the development of many software tools. In this paper, our task is oriented to the study of the fire behavior, and of the impact of the span, of reinforced concrete walls with different sections belonging to a residential building braced by a system composed of frames and shear walls. Regarding the design and the mechanical loading (compression forces and moments) exerted on the walls in question, we rely on the results of a study carried out at ambient (cold) conditions. We use for this purpose the software Safir, which follows the Eurocode rules. It was found that loading, heating and sizing play a major role in the failure state of the walls. Our results justify well the use of reinforced concrete walls acting as firewalls. Their role is to limit the spread of fire from one structure to another nearby structure, since we obtain fire resistances reaching more than 10 hours depending on the loading considered.
Keywords-fire; resistance; flame; behavior; walls
I. INTRODUCTION
A structure must be designed and calculated so that it remains fit for use as planned. It must resist, with adequate degrees of reliability, the actions occurring during execution as well as during service. Finally, the structure must have adequate durability with regard to maintenance costs. To meet the requirements outlined above, we must choose the materials appropriately and define an appropriate design and dimensioning. For this purpose, it is imperative to provide rules specific to each country. Various researches have been performed by experts in the field of fire to find out the behavior of structures, for example the separating and the bearing elements (concrete columns, steel columns, ...) of a building during a fire; this work has led to the development of fire rules. Regarding the fire behavior of bearing walls, among the authors working in this field we mention Nadjai A [1], who performed a numerical study validated by an experimental investigation on masonry walls. Also, Cheer-Germ Go and Jun-Ren Tang [12] presented an experimental investigation.
Our work presents a contribution to the study of the behavior of reinforced concrete walls, cast in place, exposed to fire, belonging to a residential building. These walls were studied under the rules of wind and earthquake by engineers in preparation for their final study project. The building is composed of a ground floor + 9 floors, located in the Prefecture of Annaba (Algeria) [2].
In a fire situation the building temperature rises as a function of the material combustibility and the available oxygen. The fire causes a degradation of the material characteristics and a deformation of the structural elements, cracks appear and, finally, the structure is in ruin. In order to prevent these phenomena and to minimize the spread of the disaster by controlling it as quickly as possible, we can use firewalls in buildings.
In this paper we study four concrete walls: two walls with a section of 20×470 cm² reinforced with bars of Ø10 and two other walls with a section of 20×350 cm² (reinforced with Ø12). We consider a strip of 20 cm to reduce the work. The thermal loading is defined by the standard fire ISO 834 [3]. Three walls are exposed to fire on one side; the fourth wall is exposed on two of its sides. The mechanical loading (i.e. compressive load and moment) exerted on the walls in question was taken from a study carried out at ambient (cold) conditions.
The thermal analysis gives the temperatures at every moment and at every point of the walls. These temperatures were then used in the mechanical analysis. For both the thermal and the mechanical analysis we used the software Safir [4]. This software was developed by Franssen J. M. [4] in Belgium at the University of Liege, and performs the thermal and mechanical
Rules Relating to Concrete Firewalls
Concrete cast in place can be used to make firewalls. The implementation of these structures must comply with the rules and calculation codes that concern them (DTU fire concrete) [6].
A. Mechanical behavior relating to concrete: the Eurocode 2 model
The division of the macroscopically measurable strains in heated concrete into individual strain components is done in the EC2 according to Eq. (1) [7][14]:
  ε_tot = ε_th + ε + ε_tr + ε_cr ,      (1)
where ε_th is the free thermal strain, ε is the instantaneous stress-related strain, ε_tr is the transient creep strain and ε_cr is the basic creep strain.
The mechanical strain is the sum of the instantaneous stress-related strain and the transient creep strain:
  ε_tot = ε_th + ε_m + ε_cr ,      (2)
where ε_m is the mechanical strain.
In implicit models, the stress is directly related to the mechanical strain, without calculation of the transient creep strain. In the EC2 model, the relationship at a given temperature T between the stress and the mechanical strain is given for the ascending branch by Eq. (3). For more details we invite the reader to see [7].
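The expression of Eq. (3) is not reproduced in the extracted text above. For reference, a minimal sketch of the ascending-branch stress-strain relation of EN 1992-1-2, as we read it (the parameters f_c_T and eps_c1_T are the temperature-dependent compressive strength and strain at peak stress tabulated in the Eurocode), is given below.

```python
def ec2_ascending_stress(eps, f_c_T, eps_c1_T):
    """Ascending branch of the EC2 concrete stress-strain law at temperature T.

    eps      : mechanical strain, with 0 <= eps <= eps_c1_T
    f_c_T    : compressive strength at temperature T (tabulated in EN 1992-1-2)
    eps_c1_T : strain at peak stress at temperature T (tabulated in EN 1992-1-2)
    """
    x = eps / eps_c1_T
    return 3.0 * x * f_c_T / (2.0 + x ** 3)

# At eps = eps_c1_T the relation returns f_c_T, i.e. the peak stress.
print(ec2_ascending_stress(0.0025, f_c_T=30.0, eps_c1_T=0.0025))
```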
B. Firewalls with elements in cellular concrete
We take as an example a firewall [5] composed of concrete columns of 45×45 cm and panels of 600×60×15 cm (placed in front of or between the columns); it presents a firewall degree equal to 4 hours. We must also note that the PV CSTB n° 87-25851 dated 11/07/95 specifies that: "an experimental wall made of reinforced cellular concrete elements of 15 cm thickness, with a nominal density of 450 kg/m3, mounted on flexible joints, has a firewall degree of 4 hours". Depending on the thickness, the limit height of the wall is:
  wall thickness 15 cm corresponds to height H = 17 m;
  wall thickness 20 cm corresponds to height H = 22 m;
  wall thickness 25 cm corresponds to height H = 28 m.
As a first approximation, the degree of a firewall composed of solid panels of pre-cast concrete can be deduced from simplified rules coming from the norm P 92-701 [6], expressed in Table I. These rules concern walls with a mechanical slenderness of at most 50 and are valid for a wall exposed to fire on one or two sides.

TABLE I. DEGREE OF FIREWALLS
Degree CF | Bearing wall depth (cm) | Separating wall depth (cm)
1/2 h     | 10   | 6
1 h       | 11   | 7
1 h 30    | 12   | 9
2 h       | 15   | 11
3 h       | 20   | 15
4 h       | 25   | 17.5

C. Fire walls according to Eurocode 2
In section 5.4.3 of Eurocode 2 [6], it is recommended that the minimum thickness for normal weight concrete should not be less than:
  200 mm for an unreinforced wall;
  140 mm for a reinforced load-bearing wall;
  120 mm for a reinforced non-load-bearing wall.

D. Fire walls according to our numerical study
(Figure: (a) length of wall superior to 3.5 m; (b) the wall span must be reduced with a forecast column.)
According to the results of the numerical study, we recommend adding a column when the length of the firewall exceeds 3.5 m, in order to reduce the wall span:
  1.5 ≤ L1 ≤ 3.5 ,
  Kr = L1/L  →  Kr = 3.5/L ,      (4)
  L1 = Kr · L ,      (5)
where Kr is the reduction factor, L1 is the reduced length of the wall [m] and L is the initial length of the wall [m]. This recommendation should be added to the Eurocode.
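As a minimal sketch of this recommendation (our own illustration; the assumption of equally spaced intermediate columns is ours), the reduced span L1 and the number of forecast columns can be computed as follows.

```python
import math

def reduced_span(L, L1_max=3.5):
    """Reduced span L1 (m) and number of forecast columns for a firewall of length L (m)."""
    if L <= L1_max:
        return L, 0                      # no intermediate column needed
    n_columns = math.ceil(L / L1_max) - 1
    L1 = L / (n_columns + 1)             # equal spans between columns (assumption)
    return L1, n_columns

for L in [3.0, 4.7, 10.0]:
    L1, n = reduced_span(L)
    print(f"L = {L} m -> L1 = {L1:.2f} m with {n} added column(s), Kr = {L1 / L:.2f}")
```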
II. MODELING OF WALLS
To begin the numerical study, it is necessary to model the walls considered. Table II defines the geometrical characteristics and the loads. The reinforcement of each type of wall was calculated according to [2]. The walls "Mu 20" and "MuF 20" do not have the same thermal load: the first is subjected to the normalized fire ISO 834 [3] on one side, while for the second the fire is applied on two sides. They contain reinforcement bars of Ø10 spaced at 20 cm ("Fig. 1"). They have the same section of 20×470 cm². Concerning the mechanical loading, each wall is submitted to its ultimate moment (M-ult) and its ultimate compressive load (N-ult).

Figure 1. Discretization of walls

The two other walls, "Mu (12) 20" and "MuCH (12) 20", have a section of 20×350 cm². They are reinforced with steel bars of diameter Ø12 spaced at 20 cm. Mu (12) 20 is submitted to its ultimate loading (moment and load). The wall "MuCH (12) 20" has the same mechanical loading (ultimate moment and ultimate compressive load) as Mu 20. For the thermal loading, "Mu (12) 20" and "MuCH (12) 20" are exposed to the ISO 834 fire [3] on one side. The floor height (H) is equal to 2.86 m for the four walls.
TABLE II. GEOMETRICAL CHARACTERISTICS AND LOADING OF CONSIDERED WALLS
Walls        | H (m) | L (cm) | e (cm) | Ø (mm) | Thermal loading      | Mechanical loading
Mu 20        | 2.86  | 470    | 20     | 10     | ISO834 on one side   | N-ult, M-ult of Mu 20
MuF 20       | 2.86  | 470    | 20     | 10     | ISO834 on two sides  | N-ult, M-ult of MuF 20
Mu (12)20    | 2.86  | 350    | 20     | 12     | ISO834 on one side   | N-ult, M-ult of Mu (12)20
MuCH (12)20  | 2.86  | 350    | 20     | 12     | ISO834 on one side   | N-ult, M-ult of Mu 20
III. THERMAL ANALYSIS
A. Basic equation
In the software Safir, the heat flux exchanged between a boundary and the hot gas in a fire compartment can be modeled according to the recommendation of Eurocode 1, with a linear convective term and a radiation term, see Eq. (6):
  q = h (Tg − Ts) + σ ε* (Tg^4 − Ts^4) ,      (6)
where
  h : coefficient of convection, W/m²·K;
  Tg : temperature of the gas, given in the data as a function of time, K;
  Ts : temperature on the boundary of the structure, K;
  σ : Stefan-Boltzmann coefficient, 5.67×10⁻⁸;
  ε* : relative emissivity of the material.

B. Temperatures in the wall "MuF 20"
In this numerical study, the thermal analysis is a prerequisite for any result, so we start by determining the temperatures at each point of the walls using the code SAFIR. This code is based on the norms [7] and [8]. We cite two cases of walls exposed to fire (MuF 20 and Mu (12) 20).
In the case of the wall "MuF 20", which is exposed to fire (in red) on two faces ("Fig. 2"), at the failed time t = 8940 s, i.e. 149 min (about 2.48 h), we get a temperature between 900.78 and 1048.50 °C
at the surfaces in contact with the fire. Away from both faces
exposed to fire, temperature decreases to 457.60 ° C.
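For illustration (our own sketch, not extracted from SAFIR), the ISO 834 standard gas temperature curve used as thermal loading and the boundary heat flux of Eq. (6) can be evaluated as follows; the convection coefficient and the relative emissivity are arbitrary example values.

```python
import math

def iso834_gas_temperature(t_min, T0=20.0):
    # ISO 834 standard fire curve: T(t) = T0 + 345 log10(8 t + 1), t in minutes, T in deg C
    return T0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def boundary_flux(Tg_C, Ts_C, h=25.0, eps_rel=0.7, sigma=5.67e-8):
    # Convective + radiative heat flux of Eq. (6); temperatures converted to kelvin
    Tg, Ts = Tg_C + 273.15, Ts_C + 273.15
    return h * (Tg - Ts) + sigma * eps_rel * (Tg ** 4 - Ts ** 4)

for t in [30, 60, 149]:
    Tg = iso834_gas_temperature(t)
    print(f"t = {t:4d} min   Tg = {Tg:7.1f} C   q = {boundary_flux(Tg, 20.0):9.0f} W/m2")
```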
Figure 2. Temperatures of wall "MuF 20" at failed time

C. Temperatures in the wall "Mu (12)20"
The results obtained from the numerical study concerning the variations in temperature at the ruin time in the concrete section are shown in "Fig. 3". For the failed time (25680 s, i.e. about 7.13 h), the observed temperature of the face exposed to fire (face 1) varies between 970.08 and 1222.20 °C. On the side which is not exposed to fire (face 2), the temperature is 213.70 °C at the failed time. Of course, after a long period, the temperature rises considerably.

Figure 3. Temperatures of wall "Mu (12)20" at failed time

IV. MECHANICAL ANALYSIS
Figure 4. Appearance of Mu (12) 20 at the failed time

The mechanical data file depends on the thermal analysis for the use of the element temperatures as a function of time. The file in question contains the dimensions of the wall (height and width); the number of nodes is equal to 21. The number of beam elements according to the discretization is taken equal to 10, and each element contains 3 nodes. The mechanical loading is represented by a normal force and a moment for each wall; the calculation is performed for a strip of 20 cm. Figure 4 shows the appearance of the wall Mu (12) 20 at the failed time (t = 25664 s).
We note from Table III that Mu (12) 20 has a better fire behavior compared to the other walls, because of its good rigidity. MuF 20 is identical to Mu 20; however, MuF 20 is exposed to fire on two sides, which explains the good fire behavior of Mu 20 compared to the behavior of MuF 20.

TABLE III. FIRE RESISTANCE OF CONSIDERED WALLS
Wall         | Coating (cm) | Height (m) | Ø (mm) | Span (m) | Depth (m) | M-ult (t.m) | N-ult (t) | Rf (min)
Mu 20        | 2.4          | 2.86       | Ø 10   | 4.7      | 0.2       | 2.26        | 41.2      | 179.78
MuCH (12)20  | 3            | 2.86       | Ø 12   | 3.5      | 0.2       | 2.26        | 41.2      | 278.93
Mu (12)20    | 3            | 2.86       | Ø 12   | 3.5      | 0.2       | 0.14        | 26.32     | 427.75
MuF 20       | 2.4          | 2.86       | Ø 10   | 4.7      | 0.2       | 2.26        | 41.2      | 148.76
A. Displacement and strain of Mu (12)20
In Figure 5, the curve represents the horizontal displacement of the wall "Mu (12) 20". There is a positive evolution (dilatation) during the exposure to fire. The maximum displacement of node 11 (middle of the wall) at the collapse is 10 cm, after a period of t = 25500 s (about 7 h), which represents 50% of the wall thickness. This displacement is the largest one (buckling phenomenon). Given that Mu (12) 20, with a section of 20×350 cm², is exposed to fire on one side and mechanically loaded with an ultimate load of 15040 N and an ultimate moment of 80 N·m according to [2], [9], we can say that this wall has a good fire resistance.
Figure 6. Vertical displacement of Mu (12)20 at the upper end
In the case of node 21 ("Fig. 6"), located at the upper extremity, the node presents the maximum vertical displacement in view of the boundary conditions. The vertical displacement is positive and equal to 1.4 cm (there is an expansion due to the thermal loading). This displacement is followed by the collapse of the wall at 25500 s (about 7 h).
Figure 5. Horizontal displacement of Mu (12)20 at half height

B. Displacement and strain of Mu 20
The mechanical analysis shows that Mu 20 deforms with increasing temperature and with time. The curve of node 11 ("Fig. 7") shows a positive evolution throughout the time of exposure to fire.

Figure 7. Horizontal displacements of nodes 3 and 11

The maximum displacement at the collapse is 9 cm, given that Mu 20 (20×470 cm²) is exposed to fire on one side and mechanically loaded with a force of 17532 N and a moment of 961.7 N·m. Node 3, however, has a small displacement, equal to 3 cm.
C. Displacement and strain of MuF 20
Figure 9. Vertical displacement of wall MuF20 at the upper end
The wall "MuF 20", at its upper end (node 21), underwent a dilatation (vertical displacement) of 2.5 cm after an estimated ruin time of 8900 s (148 min) ("Fig. 9"). We note that this displacement is important compared to the vertical displacements of the previous walls: because MuF 20 is exposed to fire on two sides, its dilatation is considerable. In addition, the loading has a considerable effect on the walls in case of fire. The fire acts indirectly on the structures (reinforced concrete walls): it degrades the mechanical properties of the materials (concrete, steel), so that they become incapable of supporting the loads.
Figure 8. Horizontal displacement of the nodes 3 and 11
"Fig. 8" shows the horizontal displacements of nodes 3 and 11. MuF 20 (20×470 cm²) is exposed to fire on both sides; the curve of node 11 shows the greater displacement, reaching 4.5 cm at an estimated time of 8925 s (149 min), whereas node 3 has a small displacement, equal to 1.5 cm.
The curves obtained in "Fig. 11" show the fire resistances of two walls exposed to fire on one side and submitted identically to different rates of mechanical loading. These two walls do not have the same dimensions, but are subjected to the same mechanical loading and to the same thermal loading (ISO834), as was mentioned previously. Their sections are, respectively, 20×470 cm² for Mu 20 and 20×350 cm² for MuCH (12) 20. We note that the fire resistances of these two types of walls are considerably higher than those of the preceding walls (Figure 10). We observe that MuCH (12) 20 behaves better than Mu 20. The section of MuCH (12) 20 is smaller than that of Mu 20, thus its stress (σ = N/S) is greater than the stress of Mu 20. On the other hand, the reinforcement section (Ø12) of MuCH (12) 20 is greater than the reinforcement section (Ø10) of Mu 20.
[Figure: curves N(Mu 20) and N(MuF 20); x-axis: Rf (min); y-axis: Load (N/20cm)]
Figure 10. Fire resistance of walls "Mu 20" and "MuF 20" depending on the load
V. COMPARISON OF CONSIDERED WALLS
In "Fig. 10" the curves show the fire resistance of two walls exposed to fire, the first on two sides (grey curve) and the second on one side (black curve), considering four rates of loading (100%, 70%, 50% and 30%). We find that the resistance of Mu 20, which was exposed on one side, is larger than the resistance of MuF 20, which was exposed to fire on both sides. We also find that the mechanical loading has a considerable effect on the fire resistance.
The analysis of these results shows that increasing the wall span causes a reduction in the fire resistance. In addition, this analysis justifies well the use of reinforced concrete walls to limit the spread of fire from one structure to another nearby structure, since the resistances obtained are considerable (up to 10 hours). Finally, we deduce that MuCH (12) 20 has a better fire behavior (fire stopping) because it has a good fire resistance, which reaches 10 hours.
[Figure: curves N(Mu 20) and N(MuCH(12)20); x-axis: Rf (min); y-axis: Load (N/20cm)]
Figure 11. Fire resistance of walls "Mu 20" and "MuCH (12)20" depending on the load
VI. CONCLUSIONS
- Mu 20 has a better fire behavior than MuF 20 (exposed to fire on two sides) despite their similarities ("Fig. 12").
- The walls "Mu (12) 20" and "MuCH (12) 20" are similar, but the mechanical loading of the first is smaller than that of the second, so that the resistance of Mu (12) 20 is about twice the resistance of MuCH (12) 20.
- We note that the sizing has a significant effect on the fire resistance: Mu 20, with a section of 20×470 cm², is less resistant than Mu (12) 20, which has a section of 20×350 cm². It is recommended to forecast a column when the length of the firewall exceeds 3.5 m, in order to reduce the wall span.
- The fire resistances of the walls considered are close to the fire resistances given by the norms [Table I] [5] [6]. On the other hand, the displacements of the walls are in accordance with the shape of the curves found by Nadjai et al. (2006) [1].
- We note that the mechanical loading has a considerable effect on the walls in case of fire. Experimental results of numerous researches on structures (for example, at the University of Liege in Belgium) have previously demonstrated that the fire acts indirectly on the structures (in our case the reinforced concrete walls): the fire degrades the mechanical properties of the materials (concrete, steel), so that they become unable to bear the mechanical load.
- In order to know the impact of the dimensioning, more precisely of the wall span, in case of fire, we considered two walls (Mu 20 and MuCH (12) 20) not having the same dimensions, both exposed to the same mechanical loading and to the same thermal loading. We observed that the fire resistance of the wall with the small span is considerably higher than the fire resistance of the wall with the large span. We conclude that a significant span, of more than 3 m, is unfavorable for the firewall, since it leads to a reduction in fire resistance.
- The walls studied have appreciable fire resistances, which justifies well the use of reinforced concrete walls (firewalls) to limit the spread of fire from one structure to another nearby structure. In particular, "Mu (12) 20" has an appropriate size, allowing it to play the role of a firewall, because it has a better fire resistance and a good rigidity.
- Furthermore, it would be interesting to carry out an experimental study on the walls considered to complete this work.
ACKNOWLEDGMENT
The work presented was made possible with the assistance of Professor Jean-Marc Franssen and Mr. Thomas Gernay, whom we thank gratefully.
REFERENCES
[1] Nadjai A., O'Gara M., Ali F. and Jurgen R., Compartment Masonry Walls in Fire Situations, Belfast, 2006.
[2] Hacène Chaouch A., Etude d'un bâtiment à usage d'habitation, RDC + 9 étages, 2010.
[3] ENV 1991 1-2, Eurocode 1, Actions sur les structures – Partie 1-2 : Actions générales - Actions sur les structures exposées au feu, 2002.
[4] Franssen J-M., SAFIR, A Thermal/Structural Program Modeling Structures under Fire, Engineering Journal, A.I.S.C., Vol 42, No. 3, 2005, 143-158.
[5]
8.5 Les murs coupe-feu en béton, ( DTU feu béton, Eurocode 2, DAN
partie 1-2, 1997)iut-tice.ujf-grenoble.fr/ticeespaces/GC/materiaux/mtx3/.../8.5.pdf
[6] DTU feu béton, Eurocode 2, DAN partie 1-2, DTU23.1, la norme P 92-701
), fascicule 62 du CCTG, 1997.
[7] Eurocode 2. Calcul des structures en béton et Document d'Application
Nationale Partie 1-2 : Règles générales, Calcul du comportement au feu,
2005
[8] NBN EN 1993-1-2, EUROCODE 3 : Calcul des structures en acier –
Partie 1-2 : Règles générales – Calcul du comportement au feu, 2004.
[9] Autodesk Robot Structural Analysis, 2009
[10] Otmani N, modélisation multi physique du comportement au feu des
colonnes en acier et en béton armé, thèse de doctorat, Université d’
Annaba, Algérie, 2010.
[11] Dotreppe J.C, Brüls A, Cycle de formation, Résistance au feu des
constructions, application des Eurocodes dans le cadre de la formation, Fire
Safety Engineering, 2000.
[12] Cheer-Germ Go, Jun-Ren Tang, Jen-Hao Chi , Cheng-Tung Chen, Yue-Lin
Huang, “Fire-resistance property of reinforced lightweight aggregate
concrete wall” Construction and Building Materials 30 (2012) 725–733
[13] G.A. Khoury, B.N. Gringer, and P.J.E. Sullivan, “Transient thermal strain
of concrete: Literature review, conditions within specimen and behaviour
of individual constituents,” Magazine of Concrete Research, vol. 37, 1985,
pp. 131–144.
[14] Gernay T, A multiaxial constitutive model for concrete in the fire situation
including transient creep and cooling down phases, thesis, University of
Liege, Belgium, 2011/2012.
[15] Dimia M .S, Guenfoud M, Gernay T, Franssen J-M, Collapse of concrete
columns during and after the cooling phase of a fire, Journal of Fire
Protection Engineering, (2011) 21(4) 245–263
TABLE IV. NOMENCLATURE
Mu 20: reinforced concrete wall with a thickness of 20 cm and a span of 470 cm (reinforcement Ø10), exposed to fire on one side.
MuF 20: reinforced concrete wall with a thickness of 20 cm and a span of 470 cm (reinforcement Ø10), exposed to fire on two sides.
Mu (12) 20: reinforced concrete wall with a thickness of 20 cm and a span of 350 cm (reinforcement Ø12), exposed to fire on one side.
MuCH (12) 20: reinforced concrete wall with a thickness of 20 cm and a span of 350 cm (reinforcement Ø12), exposed to fire on one side; in this case, we use the ultimate mechanical loading of wall Mu 20.
Firewall: reinforced concrete wall intended to limit the spread of fire from a structure to another nearby.
N-ult: ultimate compressive load.
M-ult: ultimate moment.
σ: stress.
Rf: fire resistance.
H: height.
h: hour.
L: span of wall.
e: thickness of wall.
N: compressive load.
S: area of section.
Ø: diameter of used steel.
Stochastic simulators based optimization by Gaussian process
metamodels - Application to maintenance investments planning
issues
Short title: Metamodel-based optimization of stochastic simulators
arXiv:1512.07060v2 [stat.ME] 3 May 2016
Thomas BROWNE, Bertrand IOOSS, Loı̈c LE GRATIET, Jérôme
LONCHAMPT, Emmanuel REMY
EDF Lab Chatou, Industrial Risk Management Department
Corresponding author: Bertrand Iooss
EDF R&D, 6 Quai Watier, F-78401 Chatou, France
Phone: +33130877969
Email: [email protected]
Abstract
This paper deals with the optimization of industrial asset management strategies,
whose profitability is characterized by the Net Present Value (NPV) indicator which
is assessed by a Monte Carlo simulator. The developed method consists in building
a metamodel of this stochastic simulator, allowing to get, for a given model input,
the NPV probability distribution without running the simulator. The present work is
concentrated on the emulation of the quantile function of the stochastic simulator by
interpolating well chosen basis functions and metamodeling their coefficients (using the
Gaussian process metamodel). This quantile function metamodel is then used to treat
a problem of strategy maintenance optimization (four systems installed on different
plants), in order to optimize an NPV quantile. Using the Gaussian process framework,
an adaptive design method (called QFEI) is defined by extending in our case the well
known EGO algorithm. This allows to obtain an “optimal” solution using a small
number of simulator runs.
Keywords: Adaptive design, Asset management, Computer experiments, Gaussian
process, Stochastic simulator.
1 Introduction
The French company Electricité de France (EDF) looks for assessing and optimizing
its strategic investments decisions for its electricity power plants by using probabilistic
and optimization methods of “cost of maintenance strategies”. In order to quantify the
technical and economic impact of a candidate maintenance strategy, some economic
indicators are evaluated by Monte Carlo simulations using the VME software developed by EDF R&D (Lonchamp and Fessart [15]). The major output result of the
Monte Carlo simulation process from VME is the probability distribution of the Net
Present Value (NPV) associated with the maintenance strategy. From this distribution, some indicators, such as the mean NPV, the standard deviation of NPV or the
regret investment probability (P(NPV < 0)), can easily be derived.
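As a minimal sketch (not VME itself, and with placeholder NPV draws), these indicators are simple empirical statistics of the Monte Carlo sample of NPV values:

```python
import numpy as np

rng = np.random.default_rng(0)
npv_samples = rng.normal(loc=1.0e6, scale=2.0e6, size=10_000)  # placeholder NPV draws

mean_npv = npv_samples.mean()                    # mean NPV
std_npv = npv_samples.std()                      # standard deviation of NPV
regret_probability = (npv_samples < 0.0).mean()  # P(NPV < 0)
print(mean_npv, std_npv, regret_probability)
```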
Once these indicators have been obtained, one is interested in optimizing the strategy, for instance by determining the optimal investments dates leading to the highest
mean NPV and the lowest regret investment probability. Due to the discrete nature
of the events to be optimized, the optimization method is actually based on genetic
algorithms. However, during the optimization process, one of the main issues is the
computational cost of the stochastic simulator to optimize (here VME), which leads to
methods requiring a minimal number of simulator runs (Dellino and Meloni [7]). Optimization algorithms require often several thousands of simulator runs and, in some
cases, are no more applicable.
The solution investigated in this paper is a first attempt to break this computational
cost by the way of using a metamodel instead of the simulator within the mathematical optimization algorithm. From a first set of simulator runs (called the learning
sample and coming from a specific design of experiments), a metamodel consists in
approximating the simulator outputs by a mathematical model (Fang et al. [8]). This
metamodel can then be used to predict the simulator outputs for other input configurations, which can then serve, for instance, for optimization purposes (Forrester et al. [9]),
safety assessment (de Rocquigny et al. [6]) or sensitivity analysis (Iooss and Lemaı̂tre
[10]).
Many metamodeling techniques are available in the computer experiments literature
(Fang et al. [8]). Formally, the function G representing the computer model is defined
as
\[ G : E \to \mathbb{R}, \quad x \mapsto G(x) \tag{1} \]
where E ⊂ ℝ^d (d ∈ ℕ \ {0}) is the space of the input variables of the computer
model. Metamodeling consists in the construction of a statistical estimator Ĝ from an
initial sample of G values corresponding to a learning set χ with χ ⊂ E and #χ < ∞
(with # the cardinal operator). In this paper, we will use the Gaussian process (also
called Kriging) metamodel (Sacks et al. [23]) which has been shown relevant in many
applicative studies (for example Marrel et al. [18]). Moreover, this metamodel is the
base ingredient of the Efficient Global Optimization (EGO) algorithm (Jones et al. [11]),
which allows performing powerful optimization of CPU-time expensive deterministic
computer codes (Eq. (1)), as shown in Forrester et al. [9].
However, all these conventional methods are not suitable in our applicative context
because of the stochastic nature of the VME simulator: the output of interest is not a
single scalar variable but a full probability density function (or a cumulative distribution function, or a quantile function). The computer code G is therefore stochastic:
\[ G : E \times \Omega \to \mathbb{R}, \quad (x, \omega) \mapsto G(x, \omega) \tag{2} \]
where Ω denotes the probability space. In the framework of robust design, robust
optimization and sensitivity analysis, previous works with stochastic simulators consist
mainly in approximating the mean and variance of the stochastic output (Bursztyn and
Steinberg [5], Kleijnen [12], Ankenman et al. [1], Marrel et al. [16], Dellino and Meloni
[7]). Other specific characteristics of the distribution of the random variable G(x, ·)
(as a quantile) can also be modeled to obtain, in some cases, more efficient algorithms
(Picheny et al. [21]). In the following, G(x), for any x ∈ E, denotes the output random
variable G(x, ·).
In this paper, as first proposed by Reich et al. [22] (who used a simple Gaussian
mixture model), we are interested in a metamodel which predicts the full distribution
of the random variable G(x), ∀x ∈ E. We first focus on the output probability density
functions of G (i.e. the densities of G(x) for any input x ∈ E) as they are a natural
way to model a random distribution. Therefore we define the following deterministic
function f :
\[ f : E \to \mathcal{F}, \quad x \mapsto f_x \ \text{(density of the r.v. } G(x)) \tag{3} \]
with F the family of probability density functions:
\[ \mathcal{F} = \Big\{ g \in L^1(\mathbb{R}),\ \text{positive, measurable},\ \int_{\mathbb{R}} g = 1 \Big\}. \tag{4} \]
A first idea is to estimate the function f over the input set E. For a point x ∈
E, building fx with a kernel method requires a large number NMC of realizations of
G(x). A recent work (Moutoussamy et al. [19]) has proposed kernel-regression based
algorithms to build an estimator fb of the output densities, under the constraint that
each call to f is CPU-time expensive. Their result stands as the starting point for the
work presented in this paper.
The next section presents the optimization industrial issues and our application
case. In the third section, we re-define the functions of interest as the output quantile
functions of G as they come with less constraints. We propose to use the Gaussian
process metamodel and we develop an algorithm to emulate the quantile function
instead of the probability density function. In the fourth section, this framework
is used to develop a new quantile optimization algorithm, called Quantile Function
Expected Improvement criterion and inspired from the EGO algorithm. The normality
assumptions set for the metamodel imply that the function to maximize, a given level
of quantile, is also the realization of a Gaussian process. In the following applicative
section, this adaptive design method allows to obtain an “optimal” solution using a
small number of VME simulator runs. Finally, a conclusion synthesizes the main results
of this paper.
2 Maintenance investments planning issues and
the VME tool
2.1
Engineering asset management
Asset management processes, focused on realizing values from physical industrial assets,
have been developed for years. However, for the last one or two decades, these methods
have been going from qualitative or semi-qualitative ones to quantitative management
methods that are developed in the field of Engineering Assets Management (EAM).
As a matter of fact, with budget issues becoming more and more constrained, the
idea is no longer to justify investments for the assets (maintenance, replacement,
spare parts purchases . . . ) but to optimize the entire portfolio of investments made
by a company or an organization. The value of an asset may be captured in its Life
Cycle Cost (LCC) that monetizes all the events that may impact an asset throughout
its useful life in all its dimensions (reliability, maintenance, supply chain . . . ). The
investments that are made for these assets (for example preventive replacements or
spare part purchases) are then valorized by the variation of the LCC created by these
changes in the asset management strategy. This variation is made of positive impacts
(usually avoided losses due to avoided failures or stock shortages) and negative ones
(investment costs). If the cash-flows used to calculate the LCC are discounted to take
into account the time value of money, the value of an investment is then equivalent to
a Net Present Value (NPV) as described in Figure 1.
Figure 1: Net Present Value of an investment in Engineering Asset Management.
EAM is then about evaluating and optimizing the value of the asset management
strategies in order to support investments decision making.
Input                                Name   min   max
System replacement date for plant 1  x1     41    50
System replacement date for plant 2  x2     41    50
System replacement date for plant 3  x3     41    50
System replacement date for plant 4  x4     41    50
System recovering date               x5     11    20
Table 1: Minimal and maximal values (in years) of the five inputs used in the VME model.
2.2
VME case study
As some of the cash-flows generated by the asset depend on stochastic events (failures
dates depending on the probabilistic reliability), the NPV of an investment is also a
stochastic variable that needs to be assessed. One of the tools developed by EDF
to do so is a tool called VME that uses Monte Carlo simulation to evaluate various
risk indicators needed to support decision making. The algorithm of VME consists
in replicating the event model shown in Figure 2 with randomized failure dates for
both a reference strategy and the strategy to be evaluated in order to get a good
approximation of the NPV probabilistic distribution. Usually, the NPV takes some
values in dollars or euros; for confidentiality reasons, no real cost is given in this paper
and a fictive monetary unit is used.
One replication of the Monte-Carlo simulation consists in creating random dates for
failures (according to reliability input data) and propagating them to the occurrence
of the various events in the model (maintenance task, spare part purchase or further
failure). The delay between these events may be deterministic (delay between the
purchase of a spare and its delivery) or probabilistic (time to failure after maintenance),
but both of them are defined by input parameters. Each event of the model generates
cash flows (depending on input data such as the cost of a component, the value of
one day of forced outage, . . . ) that pile up to constitute the Life Cycle Cost of the
assets under one asset management strategy. The result of one replication is then the
NPV that is the difference of the two correlated life-cycle costs of the current strategy
and the assessed new strategy. This evaluation is replicated in order to obtain a good
estimation of the probabilistic distribution of the NPV.
The test-case used in this paper is made of four systems installed on different plants,
all these systems being identical (same reliability law) and using the same spare parts
stock. The goal is to find the best replacement dates (in years) of these systems, as one
wants to prevent any failure event while replacements cannot be carried out too
often. When to purchase a new spare part also needs to be taken into account to secure
the availability of the plant fleet (see Lonchampt and Fessart [14] for more details).
This is given by the date (in year) of recovering a new system. Therefore the whole
strategy relies on these five dates which are taken as inputs for VME. These dates can
take different values (only discrete and integer values), which are described in Table 1,
and which have to be optimized.
These five dates are the deterministic events displayed in Figure 2 in green. The
random events, that make the NPV random, are the dates of failure of the plants. They
are illustrated in Figure 2 in red.

Figure 2: Event model used in VME.

For a given set of five dates, VME computes a possible
NPV based on a realization of the date of failure, randomly generated, regarding the
different steps of the event model. This input space of dates is denoted E and regroups
the possible years for replacements and the supply:
\[ E = \Big( \bigotimes_{i=1}^{4} \{41, 42, \ldots, 50\} \Big) \times \{11, 12, \ldots, 20\}. \tag{5} \]
E is therefore a discrete set (#E = 10^5). We have
\[ G : E \times \Omega \to \mathbb{R}, \quad (x, \omega) = (x_1, x_2, x_3, x_4, x_5, \omega) \mapsto \mathrm{NPV}(x, \omega), \tag{6} \]
\[ f : E \to \mathcal{F}, \quad x = (x_1, x_2, x_3, x_4, x_5) \mapsto f_x \ \text{(density of } \mathrm{NPV}(x)). \]
Figure 3 provides examples of the output density of VME. The 10 input values
inside E have been randomly chosen. It shows that there is a small variability between
the curves.
The optimization process consists in finding the point of E which gives the “maximal
NPV” value. As NPV(x) is a random variable, we have to summarize its distribution
by a deterministic operator H, for example:
\[ H(g) = \mathbb{E}(Z), \quad Z \sim g, \quad \forall g \in \mathcal{F} \tag{7} \]
Figure 3: 10 output probability densities of the VME code (randomly sampled). In abscissa,
the values of NPV are given using a fictive monetary unit.
or
\[ H(g) = q_Z(p), \quad Z \sim g, \quad \forall g \in \mathcal{F} \tag{8} \]
with q_Z(p) the p-quantile (0 < p < 1) of g. Our VME-optimization problem then turns
into the determination of
\[ x^* := \arg\max_{x \in E} H(f_x). \tag{9} \]
However, several difficulties occur:
• VME is a CPU-time costly simulator and the size of the set E is large. Computing (fx )x∈E , needing NMC × #E simulator calls (where NMC is worth several
thousands), is therefore impossible. Our solution is to restrict the VME calls to
a learning set χ ⊂ E (with #χ = 200 in our application case), randomly chosen
inside E. We will then have (fx )x∈χ .
• Our metamodel, that we will denote $(\hat{\hat{f}}_x)_{x \in E}$, cannot be computed on E due
to its large size. A new set E is therefore defined, with #E = 2000. In this
work, we limit our study to this restricted space (also denoted E for simplicity)
instead of the full space. Other work will be performed in the future to extend
our methodology to the study on the full space.
3 Gaussian process regression metamodel of a
stochastic simulator
3.1
Basics on the Gaussian process regression model
The Gaussian process regression model is introduced here in the framework of deterministic scalar computer codes, following closely the explanations of Marrel et al. [18].
The aim of a metamodel is to build a mathematical approximation of a computer code
denoted by G(x) ∈ R, x = (x1 , . . . , xd ) ∈ E ⊂ Rd from n of its evaluations. These
evaluations are performed from an experimental design set denoted in this work by
χ = (x(1) , . . . , x(n) ) where x(i) ∈ E, i = 1, . . . , n. The simulations on χ are denoted by
yn = (y (1) , . . . , y (n) ) with y (i) = G(x(i) ), i = 1, . . . , n.
Gaussian process regression models (Sacks et al. [23]), also called Kriging models
(Stein [24]), suppose that the simulator response G(x) is a realization of a Gaussian
random process Y of the following form:
Y (x) = h(x) + Z(x),
(10)
where h(x) is a deterministic function and Z(x) is a centered Gaussian process.
h(x) gives the mean approximation of the computer code. In this work, h(x) is
considered to be a one-degree polynomial model:
h(x) = β0 +
d
X
βj xj ,
j=1
where β = [β0 , β1 , . . . , βd ]t is a regression parameter vector. The stochastic part Z(x)
allows to model the non-linearities that h(x) does not take into account. Furthermore,
the random process Z(x) is considered stationary with a covariance of the following
form:
Cov(Z(x), Z(u)) = σ 2 Kθ (x − u),
where σ 2 is the variance of Z(x), Kθ is a correlation function and θ ∈ Rd is a vector of
hyperparameters. This structure allows to provide interpolation and spatial correlation
properties. Several parametric families of correlation functions can be chosen (Stein
[24]).
Now let us suppose that we want to predict the output of G(x) at a new point
x^∆ = (x^∆_1, . . . , x^∆_d) ∈ E. The predictive distribution for G(x^∆) is obtained by
conditioning Y(x^∆) by the known values yn of Y(x) on χ (we use the notation Yn =
Y(χ) = (Y(x^(1)), . . . , Y(x^(n)))). Classical results on Gaussian processes imply that this
distribution is Gaussian:
\[ [Y(x^\Delta) \mid Y_n = y_n] = \mathcal{N}\big( \mathbb{E}[Y(x^\Delta) \mid Y_n = y_n], \ \mathrm{Var}[Y(x^\Delta) \mid Y_n] \big). \tag{11} \]
Finally, the metamodel is given by the mean of the predictive distribution (also called
kriging mean):
\[ \mathbb{E}[Y(x^\Delta) \mid Y_n = y_n] = h(x^\Delta) + k(x^\Delta)^t \Sigma_n^{-1} (y_n - h(\chi)) \tag{12} \]
with
\[ k(x^\Delta) = [\mathrm{Cov}(Y(x^{(1)}), Y(x^\Delta)), \ldots, \mathrm{Cov}(Y(x^{(n)}), Y(x^\Delta))]^t = \sigma^2 [K_\theta(x^{(1)}, x^\Delta), \ldots, K_\theta(x^{(n)}, x^\Delta)]^t \]
and
\[ \Sigma_n = \sigma^2 \big( K_\theta(x^{(i)} - x^{(j)}) \big)_{i,j = 1, \ldots, n}. \]
Furthermore, the accuracy of the metamodel can be evaluated by the predictive mean
squared error given by
\[ MSE(x^\Delta) = \mathrm{Var}[Y(x^\Delta) \mid Y_n] = \sigma^2 - k(x^\Delta)^t \Sigma_n^{-1} k(x^\Delta). \tag{13} \]
The conditional mean (12) is used as a predictor and its mean squared error (MSE)
(13) – also called kriging variance – is a local indicator of the prediction accuracy. More
generally, Gaussian process regression model defines a Gaussian predictive distribution
for the output variables at any arbitrary set of new points. This property can be used
for uncertainty and sensitivity analysis (see for instance Le Gratiet et al. [13]).
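To make Equations (12) and (13) concrete, the following sketch computes the kriging mean and variance for a batch of new points, assuming a constant trend h(x) = β0 and a squared-exponential correlation, with hyperparameters already estimated. It is an illustrative simplification (the paper uses a one-degree polynomial trend), not the implementation used for the VME study; all names are ours.

```python
import numpy as np

def corr(X1, X2, theta):
    """Squared-exponential correlation K_theta(x - u) with per-dimension lengthscales theta."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) / theta) ** 2
    return np.exp(-0.5 * d2.sum(axis=-1))

def kriging_predict(X, y, Xnew, theta, sigma2, beta0):
    """Kriging mean (Eq. 12) and variance (Eq. 13) with a constant trend h(x) = beta0.
    Hyperparameters (theta, sigma2, beta0) are assumed to be already estimated."""
    Sigma_n = sigma2 * corr(X, X, theta) + 1e-10 * np.eye(len(X))  # small nugget for stability
    k_new = sigma2 * corr(X, Xnew, theta)                          # n x m cross-covariances
    alpha = np.linalg.solve(Sigma_n, y - beta0)
    mean = beta0 + k_new.T @ alpha                                 # conditional (kriging) mean
    v = np.linalg.solve(Sigma_n, k_new)
    mse = sigma2 - np.einsum('ij,ij->j', k_new, v)                 # conditional variance (MSE)
    return mean, mse

# toy usage with arbitrary data
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 3)); y = np.sin(X).sum(axis=1)
m, s2 = kriging_predict(X, y, rng.uniform(size=(5, 3)),
                        theta=np.full(3, 0.5), sigma2=1.0, beta0=float(y.mean()))
```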
Regression and correlation parameters β = (β0 , β1 , . . . , βd ), σ 2 and θ are generally
estimated with a maximum likelihood procedure (Fang et al. [8]). The maximum
likelihood estimates of β and σ 2 are given by the following closed form expressions:
\[ \hat{\beta} = \big( h(\chi)^t \Sigma_n^{-1} h(\chi) \big)^{-1} h(\chi)^t \Sigma_n^{-1} y_n, \qquad
\hat{\sigma}^2 = \frac{\big( y_n - h(\chi)\hat{\beta} \big)^t \Sigma_n^{-1} \big( y_n - h(\chi)\hat{\beta} \big)}{n - (d + 1)}. \]
However, such a closed-form expression does not exist for the estimate of θ and it has to be evaluated
by solving the following problem (see for instance Marrel et al. [18]):
\[ \hat{\theta} = \operatorname{argmin}_{\theta} \big[ \log(\det(\Sigma_n)) + n \log(\hat{\sigma}^2) \big]. \]
In practice, we have to rely on the estimators ĥ, β̂ and θ̂. This introduces
some extra variability in the computation of the kriging mean and variance and, for
instance, degrades the Gaussianity of the predictive distribution [Y(x^∆)|Yn = yn]. In [25] and [26],
the authors propose a more accurate computation of the kriging variance based on a correction
term.
3.2 Emulation of the simulator quantile function
3.2.1 General principles
In our VME-optimization problem, we are especially interested in several quantiles
(for example of order 1%, 5%, 50%, 95%, 99%) rather than in statistical moments. In
Moutoussamy et al. [19] and Browne [4], quantile prediction with a density-based emulator has shown some deficiencies. Mainly, both the positivity and integral constraints
make it impossible to apply the following approach to output densities. Therefore,
instead of studying Eq. (6), we turn our modeling problem to
\[ G : E \times \Omega \to \mathbb{R}, \quad (x, \omega) = (x_1, x_2, x_3, x_4, x_5, \omega) \mapsto \mathrm{NPV}(x, \omega), \tag{14} \]
\[ Q : E \to \mathcal{Q}, \quad x \mapsto Q_x \ \text{(quantile function of } \mathrm{NPV}(x)) \]
where Q is the space of increasing functions defined on ]0, 1[, with values in [a, b] (which
is the support of the NPV output). For x ∈ E, a quantile function is defined by
\[ \forall p \in \,]0, 1[, \quad Q_x(p) = t \in [a, b] \ \text{such that} \ \int_a^t f_x(\varepsilon)\, d\varepsilon = p. \tag{15} \]
For the same points as in Figure 3, Figure 4 shows the 10 quantile function outputs Q
which present a rather low variability.
Figure 4: The 10 quantile functions Q of the VME code associated with the 10 VME pdf
given in Figure 3.
We consider the learning set χ (n = #χ) and N_MC × n G-simulator calls in order to
obtain (Q̃_x^{N_MC})_{x∈χ}, the empirical quantile functions of (NPV(x))_{x∈χ}. In this work, we
will use N_MC = 10^4, which is sufficiently large to obtain a precise estimator of Q_x with
Q̃_x^{N_MC}. Therefore, we neglect this Monte Carlo error. In the following, we simplify the
notations by replacing Q̃_x^{N_MC} by Q_x.
The approach we adopt is similar to the one used in metamodeling a functional
output of a deterministic simulator (Bayarri et al., [2], Marrel et. al. [17]). The first
step consists in finding a low-dimensional functional basis in order to reduce the output
dimension by projection, while the second step consists in emulating the coefficients of
the basis functions. However, in our case, due to the nature of the functional outputs
(quantile functions), some particularities will arise.
3.2.2 Projection of Qx by the Modified Magic Points (MMP) algorithm
Adapted from the Magic Points algorithm (Maday et al. [20]) for probability density
functions, the MMP algorithm has been proposed in Moutoussamy et al. [19]. It is a
greedy algorithm that builds an interpolator (as a linear combination of basis functions)
for a set of functions. Its principle is to iteratively pick a basis function from the learning
sample output set and to project the set of functions onto the picked functions. Its
advantage over a more classical projection method (such as Fourier basis systems) is
that it sequentially builds an optimal basis for this precise projection.
The first step of the algorithm consists in selecting, in the learning sample output
set, the function which is the most correlated with the other ones. This function
constitutes the first element of the functional basis. Then, at each step j ∈ {2, . . . , k}
of the building procedure, the element of the learning sample output set that maximizes
the L2 distance between itself and the interpolator — using the previous j − 1 basis
functions — is added to the functional basis. The total number k of functions is chosen
with respect to a convergence criterion. Mathematical details about this criterion are
not provided in the present paper.
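To fix ideas, the sketch below implements a simplified greedy basis selection in the spirit of the principle described above: pick a first function, then iteratively add the learning-sample function that is worst approximated by the current basis. It is only an illustration under our own assumptions (least-squares projection, no convergence criterion); the exact MMP construction is the one of Moutoussamy et al. [19].

```python
import numpy as np

def greedy_basis(Q, k):
    """Q: array (n, m) of n discretized quantile functions on a common grid of m probabilities.
    Returns the indices of the k selected basis functions and the projection coefficients.
    Simplified greedy selection in the spirit of MMP (illustrative only)."""
    C = np.corrcoef(Q)                                  # correlations between the n functions
    idx = [int(np.argmax(C.mean(axis=1)))]              # first element: most correlated on average
    for _ in range(1, k):
        R = Q[idx].T                                    # m x j current basis
        coef, *_ = np.linalg.lstsq(R, Q.T, rcond=None)  # project all functions on the basis
        resid = np.linalg.norm(Q.T - R @ coef, axis=0)  # L2 residual of each function
        resid[idx] = -np.inf                            # do not re-select basis members
        idx.append(int(np.argmax(resid)))               # add the worst-approximated function
    R = Q[idx].T
    coef, *_ = np.linalg.lstsq(R, Q.T, rcond=None)      # final k x n coefficients psi_j(x)
    return idx, coef
```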
In this paper, we apply the MMP algorithm on quantile functions. Therefore, any
quantile function (Qx )x∈χ is expressed as follows:
\[ \hat{Q}_x = \sum_{j=1}^{k} \psi_j(x) R_j, \quad \forall x \in \chi, \]
where R = (R1 , ..., Rk ) are the quantile basis functions determined by MMP and
ψ = (ψ1 , . . . , ψk ) are the coefficients obtained by the projection of Qx on R. In order
to ensure the monotonic increase of Q̂x , we can restrict the coefficients to the following
constrained space:
\[ \mathcal{C} = \big\{ \psi \in \mathbb{R}^k,\ \psi_1, \ldots, \psi_k \geq 0 \big\}. \]
However, this restriction is sufficient but not necessary. That is why this constraint is not considered in Section 4. Indeed, dropping it preserves useful properties
of Gaussian process metamodels (such as the fact that any linear combination of Gaussian process
metamodels is Gaussian) for the optimization procedure. In practice, the monotonicity
is verified afterwards.
3.2.3
Gaussian process metamodeling of the basis coefficients
The estimation of the coefficients ψ(x) = (ψ1 (x), . . . , ψk (x)) (x ∈ E) is performed
with k independent Gaussian process metamodels. According to (11), each ψj (x),
j = 1, . . . , k, can be modeled by a Gaussian process of the following form:
\[ \psi_j(x) \sim \mathcal{N}\big( \hat{\psi}_j(x), MSE_j(x) \big), \quad \forall j \in \{1, \ldots, k\}, \ \forall x \in E \tag{16} \]
where ψ̂j (x) is the kriging mean (12) and M SEj (x) the kriging variance (13) obtained
from the n observations ψj (χ).
Finally, the following metamodel can be used for Q̂x :
\[ \hat{\hat{Q}}_x = \sum_{j=1}^{k} \hat{\psi}_j(x) R_j, \quad \forall x \in E. \tag{17} \]
However, we have to ensure that ψ̂j ∈ C (j = 1 . . . k). The logarithmic transformation can then be used:
\[ T_1 : \mathcal{C} \to \mathbb{R}^k, \quad \psi \mapsto (\log(\psi_1 + 1), \ldots, \log(\psi_k + 1)) \]
and its inverse transformation:
\[ T_2 : \mathbb{R}^k \to \mathcal{C}, \quad \phi \mapsto (\exp(\phi_1) - 1, \ldots, \exp(\phi_k) - 1). \]
Then supposing that φ(x) := T1 (ψ(x)) , ∀x ∈ E, is a Gaussian process realization with
k independent margins, it can be estimated by:
φ̂(x) = E[φ(x) | φ(χ) = T1 (ψ(χ))] , ∀x ∈ E,
with ψ(χ) the learning sample output. The following biased estimates of ψ(x) can then
be considered:
ψ̂(x) = T2 (φ̂(x)) , ∀x ∈ E,
and (17) can be applied as our metamodel predictor of the quantile function. However,
the positivity constraint is not necessary and in our application the monotonicity is
respected without considering it. Therefore, these transformations have not been used in
our work.
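As an illustration of Section 3.2.3, the sketch below fits one independent Gaussian process per coefficient ψj (Eq. (16)) and plugs the kriging means into Eq. (17). It assumes the scikit-learn library as an implementation choice and uses its default constant trend rather than the one-degree polynomial of Eq. (10); all names and settings are illustrative only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def fit_coefficient_gps(X_chi, coef_chi):
    """Fit k independent GPs on the basis coefficients (Eq. 16).
    X_chi: (n, d) learning inputs; coef_chi: (k, n) projection coefficients psi_j(x), x in chi."""
    gps = []
    for j in range(coef_chi.shape[0]):
        kernel = ConstantKernel() * RBF(length_scale=np.ones(X_chi.shape[1]))
        gps.append(GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_chi, coef_chi[j]))
    return gps

def predict_quantile(gps, R, x):
    """Eq. (17): the emulated quantile function is the sum of the kriging means times the basis.
    R: (m, k) basis functions on the probability grid; returns the predicted quantile function (m,)."""
    x = np.atleast_2d(x)
    psi_hat = np.array([gp.predict(x)[0] for gp in gps])   # k kriging means at x
    return R @ psi_hat
```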
3.3 Numerical tests on a toy function
3.3.1 Presentation
In order to test the efficiency of the estimator $\hat{\hat{Q}}$, we first substitute the industrial
stochastic simulator VME by the following toy function G:
\[ G(x) = \sin(x_1 + U_1) + \cos(x_2 + U_2) + x_3 \times U_3, \tag{18} \]
with x = (x1, x2, x3) ∈ {0.1; 0.2; ...; 1}^3, U1 ∼ N(0, 1), U2 ∼ Exp(1) and U3 ∼
U([−0.5, 0.5]), which are all independent. G(x) is a real random variable and we have
#E = 10^3. The goal is to estimate the output quantile functions:
\[ Q : E \to \mathcal{Q}, \quad x \mapsto Q_x \ \text{(quantile function of } G(x)). \]
As G is a simple simulator and the input set E has a low dimension (d = 3), we can
afford to compute N_MC = 10^4 runs of G(x) for each input x ∈ E. Hence we can easily
deduce all the output quantile functions (Qx )x∈E .
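For concreteness, a minimal sketch of the toy simulator of Eq. (18) and of the empirical output quantile functions is given below; only a subset of E is evaluated here to keep the run short, whereas the study treats all 10^3 points with N_MC = 10^4.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(x, n_mc=10_000):
    """Toy stochastic simulator of Eq. (18): returns n_mc independent replications of G(x)."""
    u1 = rng.normal(size=n_mc)             # U1 ~ N(0, 1)
    u2 = rng.exponential(size=n_mc)        # U2 ~ Exp(1)
    u3 = rng.uniform(-0.5, 0.5, n_mc)      # U3 ~ U([-0.5, 0.5])
    return np.sin(x[0] + u1) + np.cos(x[1] + u2) + x[2] * u3

p_grid = np.linspace(0.01, 0.99, 99)                                        # common probability grid
levels = np.linspace(0.1, 1.0, 10)
E = np.array([(a, b, c) for a in levels for b in levels for c in levels])   # #E = 10^3
Q = np.array([np.quantile(G(x), p_grid) for x in E[:50]])                   # empirical quantile functions (subset)
```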
We display in Figure 5 the output quantile functions (Qx ) for 10 different x ∈ E.
3.3.2
Applications to the MMP algorithm
At present, we proceed with the first step for our estimator $\hat{\hat{Q}}$ by using the MMP algorithm. We randomly pick the learning set χ ⊂ E such that #χ = 150.
As a result of the MMP algorithm, we get a basis of functions, (R1 , ..., Rk ), extracted
from the output quantile functions, (Qx )x∈E , as well as the MMP-projections of the
output quantile functions on χ:
\[ \forall x \in \chi, \quad \hat{Q}_x = \sum_{j=1}^{k} \psi_j(x) R_j. \]
Figure 5: 10 output quantile functions of the simulator G (randomly sampled).
In the example, we set k = 4.
Since the metamodel of the stochastic simulator is based on the estimation of the
MMP-projection Q̂, it is necessary to verify its relevance. This is why we compute the
following MMP error rate:
\[ \mathrm{err}_1 = \frac{1}{\#\chi} \sum_{x \in \chi} \frac{\| Q_x - \hat{Q}_x \|_{L^2}}{\| Q_x \|_{L^2}} = 0.09\%. \]
This result shows that the MMP projection is very close to the real output quantile
functions: if $\hat{\hat{Q}}$ is a good estimator of Q̂, then it is a good estimator of Q.
It is important to recall that the basis (R1 , ..., Rk ) depends on the choice of χ, and
so does Q̂. It probably approximates Q better on χ than on the whole input set E.
Therefore it is natural to wonder whether the approximation Q̂ is still relevant on E.
Since we could compute all the output quantile functions (Qx )x∈E , we could go further
and compute
\[ \mathrm{err}_2 = \frac{1}{\#E} \sum_{x \in E} \frac{\| Q_x - \hat{Q}_x \|_{L^2}}{\| Q_x \|_{L^2}} = 0.13\%. \]
From this result, we infer that even though the approximation is a little less correct on
E, it is still efficient.
3.3.3
Applications to the Gaussian process regression
We now have the MMP projections (Q̂_x)_{x∈χ}, as well as the coefficients (ψ1(x), ..., ψk(x))_{x∈χ}.
The Gaussian process regression in this model consists in assuming that the k coordinates (ψ1 (x), ..., ψk (x))x∈E are the realizations of k independent Gaussian processes
whose values on χ are already known. From Eq. (17), we have the expression of the
metamodel:
\[ \forall x \in E, \quad \hat{\hat{Q}}_x = \sum_{j=1}^{k} \hat{\psi}_j(x) R_j. \]
We verify the efficiency of the metamodel by computing the following error:
\[ \mathrm{err}_3 = \frac{1}{\#E} \sum_{x \in E} \frac{\| Q_x - \hat{\hat{Q}}_x \|_{L^2}}{\| Q_x \|_{L^2}} = 1.42\%. \]
We conclude from this result that the metamodel provides a good approximation for
the output quantile functions of the simulator G as runs were performed only on χ.
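The relative L² errors err1, err2 and err3 can be computed along the following lines, where the L² norms are approximated by discrete root-mean-squares over the probability grid; the array names are hypothetical.

```python
import numpy as np

def rel_l2_error(Q_true, Q_approx):
    """Mean relative L2 error between two families of quantile functions discretized on a
    common probability grid (rows = inputs x, columns = probabilities). Discrete approximation."""
    num = np.sqrt(np.mean((Q_true - Q_approx) ** 2, axis=1))
    den = np.sqrt(np.mean(Q_true ** 2, axis=1))
    return np.mean(num / den)

# e.g. err3 would be rel_l2_error(Q_on_E, Q_metamodel_on_E) for arrays of shape (#E, m)
```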
We display in Figure 6 the output quantile function Qx for a point x ∈ E, with its
successive approximations Q̂x and $\hat{\hat{Q}}_x$.
Figure 6: 2-step estimation for the quantile function of G(x). For a point x ∈ E, Qx is in black points, Q̂x in red line and $\hat{\hat{Q}}_x$ in red dotted line.
Figure 6 confirms that, for this point x ∈ E randomly picked, the MMP projection
is really close to the initial quantile function. As for $\hat{\hat{Q}}_x$, the error is larger but the
whole distribution of G(x) is well estimated.
4 Application to an optimization problem
4.1 Direct optimization on the metamodel
A first solution would be to apply our quantile function metamodel with a quantile-based objective function:
\[ H : \mathcal{Q} \to \mathbb{R}, \quad q \mapsto q(p) \tag{19} \]
with p ∈ ]0, 1[. We look for:
\[ x^* := \arg\max_{x \in E} Q_x(p) \tag{20} \]
but have only access to:
\[ \tilde{x}^* := \arg\max_{x \in E} \hat{\hat{Q}}_x(p). \tag{21} \]
We also study the relative error of $H(\hat{\hat{Q}})$ on E by computing:
\[ \mathrm{err} = \frac{1}{\max_{x \in E}(Q_x(p)) - \min_{x \in E}(Q_x(p))} \times \Big( \sum_{x \in E} | Q_x(p) - \hat{\hat{Q}}_x(p) | \Big). \tag{22} \]
As an example for p = 0.5 (median estimation), with the toy-function G introduced
in (18), we find
\[ \max_{x \in E}(Q_x(p)) = 0.82, \quad \max_{x \in E}(\hat{\hat{Q}}_x(p)) = 0.42, \quad \mathrm{err} = 5.4\%. \]
As the estimated maximum value (0.42) has a large error with respect to the value of
the exact solution (0.82), we strongly suspect that the quantile function metamodel
cannot be directly applied to solve the optimization problem.
In the next section, we will also see that this strategy does not work either on the
VME application. We then propose an adaptive algorithm which consists in sequentially adding simulation points in order to capture interesting quantile functions to be
added in our functional basis.
4.2
QFEI: an adaptive optimization algorithm
After the choice of χ, E and the families (Qx)_{x∈χ}, (Q̂x)_{x∈χ} and $(\hat{\hat{Q}}_x)_{x∈E}$, our new
algorithm will propose to perform new interesting (for our specific problem) calls to
the VME simulator on E (outside of χ). With the Gaussian process metamodel,
which provides a predictor and its uncertainty bounds, this is a classical approach
used for example in black-box optimization problem (Jones et al. [11]) and rare event
estimation (Bect et al. [3]). The goal is to provide some algorithms which mix global
space exploration and local optimization.
Our algorithm is based on the so-called EGO (Efficient Global Optimization) algorithm (Jones et al. [11]) which uses the Expected Improvement (EI) criterion to
maximize a deterministic simulator. Our case is different as we want to maximize:
\[ H : E \to \mathbb{R}, \quad x \mapsto Q_x(p) \ \text{(p-quantile of } \mathrm{NPV}(x)), \tag{23} \]
i.e. the p-quantile function, for p ∈ ]0, 1[, of the stochastic simulator. We will then propose a new algorithm called the QFEI (for Quantile Function Expected Improvement)
algorithm.
As previously, we use a restricted set E with #E = 5000 (E is a random sample in
the full set), the initial learning set χ ⊂ E with #χ = 200 (initial design of experiment),
(Qx)_{x∈χ}, (Q̂x)_{x∈χ} and $(\hat{\hat{Q}}_x)_{x∈E}$. We denote the current learning set by D (the initial
learning set increased with additional points coming from QFEI). The Gaussianity on
the components of ψ is needed for the EGO algorithm, that is why we do not perform
the logarithmic transformation presented in Section 3.2.3. In our case, it has not
implied negative consequences.
We apply the Gaussian process metamodeling on the k independent components
ψ1, ..., ψk (see Equation (16)). As $\hat{Q}_x(p) = \sum_{j=1}^{k} \psi_j(x) R_j(p)$, we have
\[ \hat{Q}_x(p) \sim \mathcal{N}\Big( \hat{\hat{Q}}_x(p), \ \sum_{j=1}^{k} R_j(p)^2 \, MSE_j(x) \Big), \quad \forall x \in E. \tag{24} \]
Then Q̂x(p) is a realization of the underlying Gaussian process $U_x = \sum_{j=1}^{k} \psi_j(x) R_j(p)$, with
\[ U_D := (U_x)_{x \in D}, \qquad \hat{U}_x := \mathbb{E}[U_x \mid U_D], \quad \forall x \in E, \qquad \sigma^2_{U|D}(x) := \mathrm{Var}[U_x \mid U_D], \quad \forall x \in E. \tag{25} \]
The conditional mean and variance of U_x are directly obtained from the k Gaussian
process metamodels of the ψ coefficients (16).
At present, we propose to use the following improvement random function:
\[ I : E \to \mathbb{R}, \quad x \mapsto (U_x - \max(U_D))_+, \tag{26} \]
where $(t)_+ = t \cdot \mathbf{1}_{t>0}$ is the positive part function. In our adaptive design,
finding a new point consists in solving:
\[ x_{\mathrm{new}} := \arg\max_{x \in E} \mathbb{E}[I(x)]. \tag{27} \]
Added points are those which have more chance to improve the current optimum. The
expectation of the improvement function writes (the simple proof is given in Browne
[4]):
\[ \mathbb{E}[I(x)] = \sigma_{U|D}(x)\, \big( u(x)\, \phi(u(x)) + \varphi(u(x)) \big), \quad \forall x \in E, \qquad \text{with} \quad u(x) = \frac{\hat{U}_x - \max(U_D)}{\sigma_{U|D}(x)} \tag{28} \]
where ϕ and φ correspond respectively to the density and distribution functions of the
standard Gaussian law.
In practice, several iterations of this algorithm are performed, allowing to complete
the experimental design D. At each iteration, a new projection functional basis is
computed and the k Gaussian process metamodels are re-estimated. The stopping
criterion of the QFEI algorithm can be a maximal number of iterations or a stabilization
criterion on the obtained solutions. No guarantee on the convergence of the algorithm can
be given. In conclusion, this algorithm provides the following estimation of the optimal
point x*:
\[ \hat{x}^* := \arg\max_{x \in D}(U_D). \tag{29} \]
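A minimal sketch of the criterion of Eq. (28) is given below. It assumes that the conditional means Ûx and standard deviations σ_{U|D}(x) have already been computed from the k Gaussian process metamodels (Ûx = Σ_j ψ̂j(x) Rj(p) and σ²_{U|D}(x) = Σ_j Rj(p)² MSEj(x)); the array names are ours.

```python
import numpy as np
from scipy.stats import norm

def qfei_expected_improvement(U_hat, sigma_U, current_max):
    """Expected improvement of Eq. (28) for the p-quantile Gaussian process U_x.
    U_hat: conditional means; sigma_U: conditional standard deviations (same shape)."""
    sigma_U = np.maximum(sigma_U, 1e-12)                 # avoid division by zero
    u = (U_hat - current_max) / sigma_U
    return sigma_U * (u * norm.cdf(u) + norm.pdf(u))

# the next point to simulate is the maximizer of the criterion over the candidate set E (Eq. 27):
# x_new = E[np.argmax(qfei_expected_improvement(U_hat, sigma_U, U_D.max()))]
```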
4.3
Validation of the QFEI algorithm on the toy function
We get back to the toy-function G introduced in (18). The goal is to determine x∗ ∈ E
such that:
\[ x^* = \arg\max_{x \in E} Q_x(p) \]
with p = 0.4. In other words, we try to find the input x ∈ E whose output distribution
has the highest 40%-quantile. In this very case, we have:
x∗ = (1, 0, 1, 0.5)
with Qx∗ (p) = 0.884.
We have also computed
\[ \frac{1}{\#E} \sum_{x \in E} Q_x(p) = -0.277, \qquad \mathrm{Var}\big( (Q_x(p))_{x \in E} \big) = 0.071. \]
Let us first remember that we set an efficient metamodel $\hat{\hat{Q}}$ for Q in Section 3.3.3.
Indeed, we had err3 = 1.42%.
As before, we test the natural way to get an estimation for x∗ , by determining
\[ \tilde{x}^* = \arg\max_{x \in E} \hat{\hat{Q}}_x(p). \]
Unfortunately, still in our previous example, we get
x̃∗ = (0.9, 0, 2, 0.8)
which is far from being satisfying. Besides, when we compute the real output distribution for x̃∗ , we have
Qx̃∗ (p) = 0.739.
Therefore only relying on $\hat{\hat{Q}}$ to estimate x∗ would lead to an important error. This
is due to the high smoothness of the function x −→ Qx (p): even a small error in the
estimator Q· (p) completely upsets the order of the family (Qx (p))x∈E .
At present, we use the QFEI algorithm (Eq. (29)) in order to estimate x∗ by x̂∗ ,
with 20 iterations. At the end of the experiments, the design D is the learning set χ
to which we have consecutively added 20 points of E.
With QFEI, we finally get
x̂∗ = (1, 0, 1, 0.2)
with Qx̂∗ (p) = 0.878.
It is important to mention that x̂∗ has the second highest output 40%-quantile.
Overall we tested the whole procedure 30 times with, in each case, a new learning
set χ. In the 30 trials, we made sure that the maximizer x∗ was not already in χ.
In 22 cases, we obtain x̂∗ = x∗ , which is the best result that we could expect. For
the remaining 8 times, we obtain x̂∗ = (1, 0, 1, 0.2). We can conclude that the QFEI
algorithm is really efficient for this optimization problem.
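Putting Sections 4.2 and 4.3 together, the overall adaptive procedure can be summarized by the following skeleton. It is illustrative only: run_quantile and fit_gps are hypothetical callables standing respectively for the N_MC simulator runs (returning the empirical p-quantile) and for the basis projection plus the k Gaussian process fits of Section 3, and the current optimum is taken here on the observed quantiles rather than on the process values U_D.

```python
import numpy as np
from scipy.stats import norm

def qfei_loop(E, chi, n_iter, run_quantile, fit_gps):
    """Skeleton of the adaptive QFEI loop (illustrative simplification).
    E: (N, d) candidate set; chi: (n, d) initial design;
    run_quantile(x): hypothetical, empirical p-quantile of the simulator output at x;
    fit_gps(D, q_D, E): hypothetical, returns conditional means and std deviations of U_x on E."""
    D = [tuple(x) for x in chi]
    q_D = {x: run_quantile(x) for x in D}              # observed p-quantiles on the current design
    for _ in range(n_iter):
        U_hat, sigma_U = fit_gps(D, q_D, E)            # basis and GPs re-estimated at each iteration
        u = (U_hat - max(q_D.values())) / np.maximum(sigma_U, 1e-12)
        ei = sigma_U * (u * norm.cdf(u) + norm.pdf(u)) # expected improvement, Eq. (28)
        x_new = tuple(E[int(np.argmax(ei))])           # Eq. (27): next point to simulate
        q_D[x_new] = run_quantile(x_new)               # new call(s) to the simulator
        if x_new not in D:
            D.append(x_new)
    return max(D, key=q_D.get)                         # estimated optimal point (cf. Eq. (29))
```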
5
Application to the VME case study
We return to our VME application (cf Section 2). For the projection of Qx by the
Modified Magic Points (MMP) algorithm, the choice k = 5 has shown sufficient approximation capabilities. For one example of quantile function output, a small relative
L2 error (0.2%) between the observed quantile function and the projected quantile
function is obtained. Figure 7 also confirms the relevance of the MMP method.

Figure 7: For one point x ∈ χ, Q̂x (red line) and Qx (black points).

We build the Gaussian process metamodel on the set E (with the choice k = 5).
For one example of quantile function output, a small relative L2 error (2.8%) between
the observed quantile function and the emulated quantile function is obtained. Figure
8 also confirms the relevance of the metamodeling method.
We build the Gaussian process metamodel on the set E (with the choice k = 5).
For one example of quantile function output, a small relative L2 -error (2.8%) between
the observed quantile function and the emulated quantile function is obtained. Figure
8 confirms also the relevance of the metamodeling method.
Figure 8: For one point x ∈ χ, $\hat{\hat{Q}}_x$ (red line) and Qx (black points).
As for the toy function, our optimization exercise is to determine x∗ ∈ E such that
\[ x^* = \arg\max_{x \in E} Q_x(p) \]
with p = 0.5. We first try to directly apply an optimization algorithm on the previously
obtained metamodel. As an example, for p = 0.5 (median estimation), we find:
\[ \max_{x \in E}(Q_x(p)) = 0.82, \quad \max_{x \in E}(\hat{\hat{Q}}_x(p)) = 0.42, \quad \mathrm{err} = 5.4\%. \tag{30} \]
If we define $y = \arg\max_{x \in E} [\hat{\hat{Q}}_x(p)]$ the best point from the metamodel, we obtain
Qy (p) = 0.29 while maxx∈χ Qx (p) = 0.35. The exploration of E by our metamodel
does not bring any information. We have observed the same result by repeating the
experiments 100 times (changing the initial design each time). It means that the
punctual errors on the quantile function metamodel are too large for this optimization
algorithm. In fact, the basis functions R1 , ..., R5 that the MMP algorithm has chosen
on χ are not able to represent the extreme parts of the quantile functions of E. As a
conclusion of this test, the quantile function metamodel cannot be directly applied to
solve the optimization problem.
We now apply the QFEI algorithm. In our application case, we have performed all
the simulations in order to know (Qx )x∈E , therefore the solution x∗ . Our first objective
is to test our proposed algorithm for p = 0.4 which has the following solution:
\[ \begin{cases} x^* = (41, 47, 48, 45, 18) \\ Q_{x^*}(p) = -1.72. \end{cases} \tag{31} \]
We have also computed
\[ \frac{1}{\#E} \sum_{x \in E} Q_x(p) = -3.15, \qquad \mathrm{Var}\big( (Q_x(p))_{x \in E} \big) = 0.59. \tag{32} \]
We start with D := χ and we obtain
\[ \max_{x \in \chi}(Q_x) = -1.95. \tag{33} \]
After 50 iterations of the QFEI algorithm, we obtain:
\[ \begin{cases} \hat{x}^* = (41, 47, 45, 46, 19) \\ Q_{\hat{x}^*}(p) = -1.74. \end{cases} \tag{34} \]
We observe that x̂∗ and Qx̂∗(p) are close to x∗ and Qx∗(p), respectively. This is a first confirmation
of the relevance of our method. With respect to the initial design solution, QFEI
has allowed us to obtain a strong improvement of the proposed solution. Fifty repetitions of
this experiment (changing the initial design) have also proved the robustness of QFEI.
The obtained solution is always one of the five best points on E.
The QFEI algorithm seems promising, but many tests remain to be performed and will be
pursued in future works: changing p (in particular testing extremal cases), increasing
the size of E, increasing the dimension d of the inputs, . . .
6
Conclusion
In this paper, we have proposed to build a metamodel of a stochastic simulator using
the following key points:
1. Emulation of the quantile function which proves better efficiency for our problem
than the emulation of the probability density function;
2. Decomposition of the quantile function in a sum of the quantile functions coming
from the learning sample outputs;
3. Selection of the most representative quantile functions of this decomposition using
an adaptive choice algorithm (called the MMP algorithm) in order to have a small
number of terms in the decomposition;
4. Emulation of each coefficient of this decomposition by a Gaussian process metamodel, by taking into account constraints ensuring that a quantile function is
built.
The metamodel is then used to treat a simple maintenance strategy optimization
problem using a stochastic simulator (VME), in order to optimize an output (NPV)
quantile. Using the Gaussian process metamodel framework and extending the EI
criterion to quantile function, the adaptive QFEI algorithm has been proposed. In
our example, it allows to obtain an “optimal” solution using a small number of VME
simulator runs.
This work is just a first attempt and needs to be continued in several directions:
• Consideration of a variable NMC whose decrease could help to fight against the
computational cost of the stochastic simulator,
• Improvement of the initial learning sample choice by replacing the random sample
by a space filling design (Fang et al. [8]),
• Algorithmic improvements to counter the cost of the metamodel evaluations and
to increase the size of the study set E,
• Multi-objective optimization (several quantiles to be optimized) in order to take
advantage of our powerful quantile function emulator,
• Including the estimation error induced in practice by ĥ, β̂ and θ̂ to define a more
precise version of the QFEI algorithm,
• Application to more complex real cases,
• Consideration of a robust optimization problem where environmental input variables of the simulator are not to be optimized but simply create an additional
uncertainty on the output.
References
[1] B. Ankenman, B.L. Nelson, and J. Staum. Stochastic kriging for simulation metamodeling. Operations Research, 58:371–382, 2010.
[2] M.J. Bayarri, J.O. Berger, J. Cafeo, G. Garcia-Donato, F. Liu, J. Palomo, R.J.
Parthasarathy, R. Paulo, J. Sacks, and D. Walsh. Computer model validation
with functional output. The Annals of Statistics, 35:1874–1906, 2007.
[3] J. Bect, D. Ginsbourger, L. Li, V. Picheny, and E. Vazquez. Sequential design of
computer experiments for the estimation of a probability of failure. Statistics and
Computing, 22:773–793, 2012.
[4] T. Browne. Développement d’émulateurs de codes stochastiques pour la résolution
de problèmes d’optimisation robuste par algorithmes génétiques. Rapport de stage
de Master M2 de l’Université Paris Descartes, EDF R&D, Chatou, France, 2014.
[5] D. Bursztyn and D.M. Steinberg. Screening experiments for dispersion effects.
In A. Dean and S. Lewis, editors, Screening - Methods for experimentation in
industry, drug discovery and genetics. Springer, 2006.
[6] E. de Rocquigny, N. Devictor, and S. Tarantola, editors. Uncertainty in industrial
practice. Wiley, 2008.
[7] G. Dellino and C. Meloni, editors. Uncertainty management in simulation-optimization of complex systems. Springer, 2015.
[8] K-T. Fang, R. Li, and A. Sudjianto. Design and modeling for computer experiments. Chapman & Hall/CRC, 2006.
[9] A. Forrester, A. Sobester, and A. Keane, editors. Engineering design via surrogate
modelling: a practical guide. Wiley, 2008.
[10] B. Iooss and P. Lemaı̂tre. A review on global sensitivity analysis methods.
In C. Meloni and G. Dellino, editors, Uncertainty management in Simulation-Optimization of Complex Systems: Algorithms and Applications. Springer, 2015.
[11] D.R. Jones, M. Schonlau, and W.J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13:455–492, 1998.
[12] J.P.C. Kleijnen. Design and analysis of simulation experiments, Second Edition.
Springer, 2015.
[13] L. Le Gratiet, C. Cannamela, and B. Iooss. A Bayesian approach for global
sensitivity analysis of (multifidelity) computer codes. SIAM/ASA Journal of Uncertainty Quantification, 2:336–363, 2014.
[14] J. Lonchampt and K. Fessart. An optimization approach for life cycle management
applied to large power transformers. EPRI Report, 1023033, 2012.
[15] J. Lonchampt and K. Fessart. Investments portfolio optimal planning, a tool for
an integrated life management. ANS embedded meeting on Risk Management for
Complex Socio-technical Systems (RM4CSS), Washington DC, USA, 2013.
[16] A. Marrel, B. Iooss, S. Da Veiga, and M. Ribatet. Global sensitivity analysis
of stochastic computer models with joint metamodels. Statistics and Computing,
22:833–847, 2012.
[17] A. Marrel, B. Iooss, M. Jullien, B. Laurent, and E. Volkova. Global sensitivity
analysis for models with spatially dependent outputs. Environmetrics, 22:383–397,
2011.
[18] A. Marrel, B. Iooss, F. Van Dorpe, and E. Volkova. An efficient methodology
for modeling complex computer codes with Gaussian processes. Computational
Statistics and Data Analysis, 52:4731–4744, 2008.
[19] V. Moutoussamy, S. Nanty, and B. Pauwels. Emulators for stochastic simulation
codes. ESAIM: PROCEEDINGS AND SURVEYS, 48:116–155, 2015.
[20] Y. Maday, N.C. Nguyen, A.T. Patera, and G.S.H. Pau. A general, multipurpose
interpolation procedure: the magic points. In Proceedings of the 2nd International
Conference on Scientific Computing and Partial Differential Equations, 2007.
[21] V. Picheny, D. Ginsbourger, Y. Richet, and G. Caplin. Quantile-based optimization of noisy computer experiments with tunable precision. Technometrics,
55:2–13, 2013.
[22] B.J. Reich, E. Kalendra, C.B. Storlie, H.D. Bondell, and M. Fuentes. Variable
selection for high dimensional Bayesian density estimation: Application to human
exposure simulation. Journal of the Royal Statistical Society: Series C (Applied
Statistics), 61:47–66, 2012.
[23] J. Sacks, W.J. Welch, T.J. Mitchell, and H.P. Wynn. Design and analysis of
computer experiments. Statistical Science, 4:409–435, 1989.
[24] M.L. Stein. Interpolation of spatial data. Springer, 1999.
[25] E. Zhu and M. L. Stein. Spatial sampling design for prediction with estimated
parameters. Journal of Agricultural, Biological, and Environmental Statistics,
11:24–44, 2006.
[26] D. L. Zimmerman. Optimal network design for spatial prediction, covariance parameter estimation, and empirical prediction. Environmetrics, 17:635–652, 2006.
| 7 |
Updated version of the paper published at the ICDL-Epirob 2017 conference (Lisbon, Portugal), with the same title and authors.
Embodied Artificial Intelligence through
Distributed Adaptive Control: An Integrated
Framework
Clément Moulin-Frier
SPECS Lab
Universitat Pompeu Fabra
Barcelona, Spain
Email:
[email protected]
Martì Sanchez-Fibla
SPECS Lab
Universitat Pompeu Fabra
Barcelona, Spain
Email: [email protected]
Jordi-Ysard Puigbò
SPECS Lab
Universitat Pompeu Fabra
Barcelona, Spain
Email:
[email protected]
Xerxes D. Arsiwalla
SPECS Lab
Universitat Pompeu Fabra
Barcelona, Spain
Email:
[email protected]
Paul FMJ Verschure
SPECS Lab
Universitat Pompeu Fabra
ICREA
Institute for Bioengineering of Catalonia (IBEC)
Barcelona Institute of Science and Technology (BIST)
Barcelona, Spain
Email: [email protected]
Abstract—In this paper, we argue that the future of Artificial
Intelligence research resides in two keywords: integration and
embodiment. We support this claim by analyzing the recent
advances in the field. Regarding integration, we note that the
most impactful recent contributions have been made possible
through the integration of recent Machine Learning methods
(based in particular on Deep Learning and Recurrent Neural
Networks) with more traditional ones (e.g. Monte-Carlo tree
search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional
benchmark tasks (e.g. visual classification or board games) are
becoming obsolete as state-of-the-art learning algorithms
approach or even surpass human performance in most of them,
which has recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building on this
analysis, we first propose an embodied cognitive architecture
integrating heterogeneous subfields of Artificial Intelligence into
a unified framework. We demonstrate the utility of our approach
by showing how major contributions of the field can be expressed
within the proposed framework. We then claim that
benchmarking environments need to reproduce ecologically-valid
conditions for bootstrapping the acquisition of increasingly
complex cognitive skills through the concept of a cognitive arms
race between embodied agents.
Index Terms—Cognitive Architectures, Embodied Artificial
Intelligence, Evolutionary Arms Race, Unified Theories of
Cognition.
I. INTRODUCTION
In recent years, research in Artificial Intelligence has been
primarily dominated by impressive advances in Machine
Learning, with a strong emphasis on the so-called Deep
Learning framework. It has allowed considerable achievements
such as human-level performance in visual classification [1]
and description [2], in Atari video games [3] and even in the
highly complex game of Go [4]. The Deep Learning approach
is characterized by assuming very minimal priors on the task to
be solved, compensating this lack of prior knowledge by
feeding the learning algorithm with an extremely high amount
of training data, while hiding the intermediary representations.
However, it is worth noting that the most important
contributions of Deep Learning for Artificial Intelligence often
owe their success in part to their integration with other types
of learning algorithms. For example, the AlphaGo program
which defeated the world champions in the famously complex
game of Go [4], is based on the integration of Deep
Reinforcement Learning with a Monte-Carlo tree search
algorithm. Without the tree search addition, AlphaGo still
outperforms previous machine performances but is unable to
beat high-level human players. Another example can be found
in the original Deep Q-Learning algorithm (DQN, Mnih et al.,
2015), achieving very poor performance in some Atari games
where the reward is considerably sparse and delayed (e.g.
Montezuma Revenge). Solving such tasks has required the
integration of DQN with intrinsically motivated learning
algorithms for novelty detection [5], or goal babbling [6].
A drastically different approach has also received considerable
attention, arguing that deep learning systems are not able to
solve key aspects of human cognition [7]. The approach states
that human cognition relies on building causal models of the
world through combinatorial processes to rapidly acquire
knowledge and generalize it to new tasks and situations. This
has led to important contributions through model-based
Bayesian learning algorithms, which surpass deep learning
approaches in visual classification tasks while displaying
powerful generalization abilities in one-shot training [8]. This
solution, however, comes at a cost: the underlying algorithm
requires a priori knowledge about the primitives to learn from
and about how to compose them to build increasingly abstract
categories. An assumption of such models is that learning
should be grounded in intuitive theories of physics and
psychology, supporting and enriching acquired knowledge [7],
as supported by infant behavioral data [9].
The above examples emphasize two important challenges in
modern Artificial Intelligence. Firstly, there is a need for a
unified integrative framework providing a principled
methodology for organizing the interactions of various
subfields (e.g. planning and decision making, abstraction,
classification, reinforcement learning, sensorimotor control or
exploration). Secondly, Artificial Intelligence is arriving at a
level of maturation where more realistic benchmarking
environments are required, for two reasons: validating the full
potential of the state-of-the-art artificial cognitive systems, as
well as understanding the role of environmental complexity in
the shaping of cognitive complexity.
Considering the pre-existence of intuitive physics and
psychology engines as an inductive bias for Machine Learning
is far from being a trivial assumption. It immediately raises the
question: where does such knowledge come from and how is it
shaped through evolutionary, developmental and cultural
processes? All the aforementioned approaches are lacking this
fundamental component shaping intelligence in the biological
world, namely embodiment. Playing Atari video games,
complex board games or classifying visual images at a human
level are considerable milestones of Artificial Intelligence
research. Yet, in contrast, biological cognitive systems are
intrinsically shaped by their physical nature. They are
embodied within a dynamical environment and strongly
coupled with other physical and cognitive systems through
complex feedback loops operating at different scales: physical,
sensorimotor, cognitive, social, cultural and evolutionary.
Nevertheless, many recent Artificial Intelligence benchmarks
have focused on solving video games or board games,
adopting a third-person view and relying on a discrete set of
actions with no or poor environmental dynamics. A few
interesting software tools have however recently been released
to provide more realistic benchmarking environments. This for
example, is the case of Project Malmo [10] which provides an
API to control characters in the MineCraft video game, an
open-ended environment with complex physical and
environmental dynamics; or Deepmind Lab [11], allowing the
creation of rich 3D environments with similar features.
Another example is OpenAI Gym [12], providing access to a
variety of simulation environments for the benchmarking of
learning algorithms, especially reinforcement learning based.
Such complex environments are becoming necessary to
validate the full potential of modern Artificial Intelligence
research, in an era where human performance is being
achieved on an increasing number of traditional benchmarks.
There is also a renewed interest for multi-agent benchmarks in
light of the recent advances in the field, solving social tasks such as the prisoner dilemma [13] and studying the emergence of cooperation and competition among agents [14].
In this paper, we first propose an embodied cognitive
architecture structuring the main subfields of Artificial
Intelligence research into an integrated framework. We
demonstrate the utility of our approach by showing how major
contributions of the field can be expressed within the proposed
framework, providing a powerful tool for their conceptual
description and comparison. Then we argue that the
complexity of a cognitive agent strongly depends on the
complexity of the environment it lives in. We propose the
concept of a cognitive arms race, where an ecology of
embodied cognitive agents interact in a dynamic environment
reproducing ecologically-valid conditions and driving them to
acquire increasingly complex cognitive abilities in a positive
feedback loop.
II. AN INTEGRATED COGNITIVE ARCHITECTURE FOR EMBODIED
ARTIFICIAL INTELLIGENCE
Considering an integrative and embodied approach to
Artificial Intelligence requires dealing with heterogeneous
aspects of cognition, where low-level interaction with the
environment interacts bidirectionally with high-level reasoning
abilities. This reflects a historical challenge in formalizing how
cognitive functions arise in an individual agent from the
interaction of interconnected information processing modules
structured in a cognitive architecture [15], [16]. On one hand,
top-down approaches mostly rely on methods from Symbolic
Artificial Intelligence (from the General Problem Solver [17]
to Soar [18] or ACT-R [19] and their follow-up), where a
complex representation of a task is recursively decomposed
into simpler elements. On the other hand, bottom-up
approaches instead emphasize lower-level sensory-motor
control loops as a starting point of behavioral complexity,
which can be further extended by combining multiple control
loops together, as implemented in behavior-based robotics
[20] (sometimes referred as intelligence without
representation [21]). These two approaches thus reflect
different aspects of cognition: high-level symbolic reasoning
for the former and low-level embodied behaviors for the latter.
However, both aspects are of equal importance when it comes
to defining a unified theory of cognition. It is therefore a major
challenge of cognitive science to unify both approaches into a
single theory, where (a) reactive control allows an initial level
of complexity in the interaction between an embodied agent
and its environment and (b) this interaction provides the basis
for learning higher-level representations and for sequencing
them in a causal way for top-down goal-oriented control.
For this aim, we adopt the principles of the Distributed
Adaptive Control (DAC) theory of the mind and brain [22],
[23]. Besides its biological grounding, DAC is an adequate
modeling framework for integrating heterogeneous concepts of
Artificial Intelligence and Machine Learning into a coherent
cognitive architecture, for two reasons: (a) it integrates the
principles of both the aforementioned bottom-up and top-down
approaches into a coherent information processing circuit; (b)
it is agnostic to the actual implementation of each of its
functional modules. Over the last fifteen years, DAC has been
applied to a variety of complex and embodied benchmark
tasks, for example foraging [22], [24] or social humanoid
robot control [16], [25].
A. The DAC-EAI cognitive architecture: Distributed
Adaptive Control for Embodied Artificial Intelligence
DAC posits that cognition is based on the interaction of
interconnected control loops operating at different levels of
abstraction (Figure 1). The functional modules constituting
the architecture are usually described in biological or
psychological terms (see e.g. [26]). Here we propose instead to
describe them in purely computational terms, with the aim of
facilitating the description of existing Artificial Intelligence
systems within this unified framework. We call this
instantiation of the architecture DAC-EAI: Distributed
Adaptive Control for Embodied Artificial Intelligence.
Figure 1: The DAC-EAI architecture allows a coherent
organization of heterogeneous subfields of Artificial Intelligence.
DAC-EAI stands for Distributed Adaptive Control for Embodied
Artificial Intelligence. It is composed of three layers operating in
parallel and at different levels of abstraction. See text for details,
where each module name is referred to in italics.
The first level, called the Somatic layer, corresponds to the
embodiment of the agent within its environment. It includes
the sensors and actuators, as well internal variables to be
regulated (e.g. energy or safety levels). The self-regulation of
these internal variables occurs in the Reactive layer and
extends the aforementioned behavior-based approaches (e.g.
the Subsumption architecture [20]) with drive reduction
mechanisms through predefined sensorimotor control loops
(i.e. reflexes). In Figure 1, this corresponds to the mapping
from the Sensing to the Motor Control module through Self
Regulation. The Reactive layer offers several advantages when
analyzed from the embodied artificial intelligence perspective
of this paper. First, reward is traditionally considered in
Machine Learning as a scalar value associated with external
states of the environment. DAC proposes instead that it should
derive from the internal dynamics of multiple internal
variables modulated by the body-environment real-time
interaction, providing an embodied notion of reward in
cognitive agents. Second, the Reactive layer generates a first
level of behavioral complexity through the interaction of
predefined sensorimotor control loops for self-regulation. This
provides a notion of embodied inductive bias bootstrapping
and structuring learning processes in the upper levels of the
architecture. This is a departure from the model-based
approaches mentioned in the introduction [7], where inductive
biases are instead considered as intuitive core knowledge in
the form of a pre-existent physics and psychology engine.
Behavior generated in the Reactive layer bootstraps learning
processes for acquiring a state space of the agent-environment
interaction in the Adaptive layer. The Representation Learning
module receives input from Sensing to form increasingly
abstract representations. For example, unsupervised learning
methods such as deep autoencoders [27] could be a possible
implementation of this module. The resulting abstract states of
the world are mapped to their associated values through the
Value Prediction module, informed by the internal states of the
agent from Self Regulation. This allows the inference of action
policies maximizing value through Action Selection, a typical
reinforcement learning problem [28]. We note that Deep Q-Learning [3] provides an integrated solution to the three
processes involved in the Adaptive layer, based on Deep
Convolutional Networks for Representation Learning, Q-value
estimation for Value Prediction and an ε-greedy policy for
Action Selection. However, within our proposed framework,
the self-regulation of multiple internal variables in the
Reactive layer requires the agent to switch between different
action policies (differentiating e.g. between situations of low
energy vs. low safety). A possible way to achieve this using
the Deep Q-Learning framework is to extend it to multi-task
learning (see e.g. [29]). Since it is likely that similar abstract
features are relevant to various tasks, a promising solution is to
share the representation learning part of the network (the
convolutional layers in [3]) across tasks, while multiplying the
fully-connected layers in a task-specific way.
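To make this sharing scheme concrete, here is a minimal sketch in Python (assuming the PyTorch library; the module layout, layer sizes, and the number of stacked input frames are illustrative choices of ours, not taken from [3] or [29]): a convolutional encoder shared across tasks, with one task-specific fully-connected head per internal variable.

```python
import torch
import torch.nn as nn

class MultiTaskDQN(nn.Module):
    """Shared convolutional encoder with one Q-value head per task (e.g. one per internal drive)."""
    def __init__(self, n_actions: int, n_tasks: int):
        super().__init__()
        # Representation learning: convolutional layers shared across all tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Task-specific fully-connected heads (illustrative sizes).
        self.heads = nn.ModuleList([
            nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.Linear(256, n_actions))
            for _ in range(n_tasks)
        ])

    def forward(self, obs: torch.Tensor, task_id: int) -> torch.Tensor:
        features = self.encoder(obs)            # shared abstract representation
        return self.heads[task_id](features)    # Q-values for the currently active drive
```

In such a setup, the Reactive layer would select task_id according to which internal variable (e.g. energy or safety) currently needs regulating, and standard Q-learning updates would then be applied to the selected head only.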
The state space acquired in the Adaptive layer then supports
the acquisition of higher-level cognitive abilities such as goal
selection, memory and planning in the Contextual layer. The
abstract representations acquired in Representation Learning
are linked together through Relational Learning. The
availability of abstract representations in possibly multiple
modalities provides the substrate for causal and compositional
linking. Several state-of-the-art methods are of interest for
learning such relations, such as Bayesian program learning [8]
or Long Short Term Memory neural network (LSTM, [30]).
Based on these higher-level representations, Goal Selection
forms the basis of goal-oriented behavior by selecting valuable
states to be reached, where value is provided by the Value
Prediction module. Intrinsically-motivated methods
maximizing learning progress can be applied here for an
efficient exploration of the environment [31]. The selected
goals are reached through Planning, where any adaptive
method of this field can be applied [32]. The resulting action
plans, learned from action-state-value tuples generated by the
Adaptive layer, propagate down the architecture to modulate
behavior. Finally, an addressable memory system registers the
activity of the Contextual layer, allowing the persistence of the
agent experience over the long term for lifelong learning
abilities [33]. In psychological terms, this memory system is
analogous to an autobiographical memory.
These high-level cognitive processes, in turn, modulate
behavior at lower levels via top-down pathways shaped by
behavioral feedback. The control flow is therefore distributed
within the architecture, both from bottom-up and top-down
interactions between layers, as well as from lateral information
processing into the subsequent layers.
B. Expressing existing Machine Learning systems within
the DAC-EAI framework
We now demonstrate the generality of the proposed DAC-EAI architecture by describing how well-known Artificial Intelligence systems can be conceptually described as subparts of the DAC-EAI architecture (Figure 2).
We start with behavior-based robotics [20], which implements a set of reactive controllers through low-level coupling between sensors and effectors. Within the proposed framework, these are described as the lower part of the architecture, spanning the Somatic and Reactive layers (Figure 2B). However, those approaches do not consider the self-regulation of internal variables but rather of exteroceptive variables, such as light intensity for example.
In contrast, top-down robotic planning algorithms [34]
correspond to the right column (Action) of the DAC-EAI
architecture: spanning from Planning to Action Selection and
Motor Control, where the current state of the system is
typically provided by pre-processed sensory-related
information along the Reactive or Adaptive layers (Figure
2C). More recent Deep Reinforcement Learning methods,
such as the original Deep Q-Learning algorithm (DQN, [3])
typically span the entire Adaptive layer. They use deep convolutional networks to learn abstract representations from pixel-level sensing of video game frames, Q-learning to predict the cumulative value of the resulting states, and competition among discrete actions as an action selection
process (Figure 2D). Still, there is no real motor control in
this system, given that most available benchmarks operate on a
limited set of discrete (up-down-left-right) or continuous
(forward speed, rotation speed) actions. Not shown in Figure
2, classical reinforcement learning [28] relies on the same
architecture as Figure 2D, but does not address the
representation learning problem, since the state space is
usually pre-defined in these studies (often considering a grid
world).
Several extensions based on the DQN algorithm exist. For
example, intrinsically-motivated deep reinforcement learning
[6] extends it with a goal selection mechanism (Figure 2E).
This extension allows solving tasks with delayed and sparse
reward (e.g. Montezuma's Revenge) by encouraging exploratory
behaviors. AlphaGo also relies on a Deep Reinforcement
Learning method (hence spanning the Adaptive layer as in the
last examples), coupled with a Monte-Carlo tree search
algorithm which can be conceived as a planning process (see
also [35]), as represented in Figure 2F.
Another recent work, adopting an approach drastically different from end-to-end deep learning, addresses the
problem of learning highly abstract concepts from the
perspective of the human ability to perform one-shot learning.
The resulting model, called Bayesian Program Learning [8],
relies on a priori knowledge about the primitives to learn from
and about how to compose them to build increasingly abstract
categories. In this sense, it is described within the DAC-EAI
framework as addressing the pattern recognition problem from
the perspective of relational learning, where primitives are
causally linked for composing increasingly abstract categories
(Figure 2G).
Finally, the Differentiable Neural Computer [36], the
successor of the Neural Turing Machine [37], couples a neural
controller (e.g. based on an LSTM) with a content-addressable
memory. The whole system is fully differentiable and is
consequently optimizable through gradient descent. It can
solve problems requiring some levels of sequential reasoning
such as path planning in a subway network or performing
inferences in a family tree. In DAC-EAI terms, we describe it
as an implementation of the higher part of the architecture,
where causal relations are learned from experience and
selectively stored in an addressable memory, which can further
be accessed for reasoning or planning operations (Figure 2H).
An interesting challenge with such an integrative approach is
therefore to express a wide range of Artificial Intelligence systems within a
unified framework, facilitating their description and
comparison in conceptual terms.
Figure 2: The DAC-EAI architecture allows a conceptual description of many Artificial Intelligence systems within a unified
framework. A) The complete DAC-EAI architecture (see Figure 1 for a larger version). The other subfigures (B to H) show conceptual
descriptions of different Artificial Intelligence systems within the DAC-EAI framework. B): Behavior-based Robotics [20]. C) Top-down
robotic planning [34]. D) Deep Q-Learning [3]. E) Intrinsically-Motivated Deep Reinforcement Learning [6]. F) AlphaGo [4]. G) Bayesian
Program Learning [8]. H) Differentiable Neural Computer [36].
III. THE COGNITIVE ARMS RACE: REPRODUCING
ECOLOGICALLY-VALID CONDITIONS FOR DEVELOPING
COGNITIVE COMPLEXITY
A general-purpose cognitive architecture for Artificial
Intelligence, as the one proposed in the previous section,
tackles the challenge of general-purpose intelligence with the
aim of addressing any kind of task. Traditional benchmarks,
mostly based on datasets or on idealized reinforcement
learning tasks, are progressively becoming obsolete in this
respect. There are two reasons for this. The first one is that
state-of-the-art learning algorithms are now achieving human
performance in an increasing number of these traditional
benchmarks (e.g. visual classification, video or board games).
The second reason is that the development of complex
cognitive systems is likely to depend on the complexity of the
environment they evolve in.1 For these two reasons, Machine Learning benchmarks have recently evolved toward first-person 3D game platforms embedding realistic physics [10], [11] and likely to become the new standards in the field.
1 See also https://deepmind.com/blog/open-sourcing-deepmind-lab/: “It is possible that a large fraction of animal and human intelligence is a direct consequence of the richness of our environment, and unlikely to arise without it”.
It is therefore fundamental to figure out what properties of the
environment act as driving forces for the development of
complex cognitive abilities in embodied agents. We propose in
this paper the concept of a cognitive arms race as a
fundamental driving force catalyzing the development of
cognitive complexity. The aim is to reproduce ecologically-valid conditions among embodied agents, forcing them to
continuously improve their cognitive abilities in a dynamic
multi-agent environment. In natural science, the concept of an
evolutionary arms race has been defined as follows: “an
adaptation in one lineage (e.g. predators) may change the
selection pressure on another lineage (e.g. prey), giving rise
to a counter-adaptation” [38]. This process produces the
conditions of a positive feedback loop where one lineage
pushes the other to better adapt and vice versa. We propose
that such a positive feedback loop is a key driving force for
achieving an important step towards the development of
machine general intelligence.
A first step for achieving this objective is the computational
modeling of two populations of embodied cognitive agents,
preys and predators, each agent being driven by the cognitive
architecture proposed in the previous section. Basic survival
behaviors are implemented as sensorimotor control loops
operating in the Reactive layer, where predators hunt preys,
while preys escape predators and are attracted to other food
sources. Since these agents adapt to environmental constraints
through learning processes occurring in the upper levels of the
architecture, they will reciprocally adapt to each other. A
cognitive adaptation (in terms of learning) of members of one
population will perturb the equilibrium attained by the others
for self-regulating their own internal variables, forcing them to
re-adapt as a consequence. This will provide an adequate setup for studying the conditions for entering into a cognitive arms race
between populations, where both reciprocally improve their
cognitive abilities against each other.
A number of previous works have tackled the challenge of
solving social dilemmas in multi-agent simulations (see e.g.
[13] for a recent attempt using Deep Reinforcement Learning).
Within these works, the modeling of wolf-pack hunting
behavior ([13], [39], [40]) is of particular interest as a starting
point for bootstrapping a cognitive arms race. Such behaviors
are based both on competition between the prey and the wolf
group, as well as cooperation between wolves to maximize
hunting efficiency. This provides a complex structure of co-dependencies among the considered agents where adaptations
of one’s behavior will have consequences on the equilibrium
of the entire system. Such complex systems have usually been
studied in the context of Evolutionary Robotics [41] where co-adaptation is driven by a simulated Darwinian selection
process. However complex co-adaptation can also be studied
through coupled learning among agents endowed with the
cognitive architecture presented in the previous section.
It is interesting to note that there exist precursors of this
concept of an arms race in the recent literature under a quite
different angle. An interesting example is a Generative
Adversarial Network [42], where a pattern generator and a
pattern discriminator compete and adapt against each other.
Another example is the AlphaGo program [4] which was partly
trained by playing games against itself, consequently
improving its performance in an iterative way. Both these
systems owe their success in part to their ability to enter into a
positive feedback loop of performance improvement.
IV. CONCLUSION
Building upon recent advances in Artificial Intelligence and
Machine Learning, we have proposed in this paper a cognitive
architecture, called DAC-EAI, allowing the conceptual
description of many Artificial Intelligence systems within a
unified framework. Then we have proposed the concept of a
cognitive arms race between embodied agent populations as a
potentially powerful driving force for the development of
cognitive complexity.
We believe that these two research directions, summarized by
the keywords integration and embodiment, are key challenges
for leveraging the recent advances in the field toward the
achievement of General Artificial Intelligence. This ambitious
objective requires a cognitive architecture autonomously and
continuously optimizing its own behavior through embodied
interaction with the world. This is, however, not a sufficient
condition for an agent to continuously learn increasingly
complex skills. Indeed, in an environment of limited
complexity with sufficient resources, the agent will rapidly
converge towards an efficient strategy and there will be no
need to further extend the repertoire of skills. However, if the
environment contains other agents competing for the same,
limited resources, the efficiency of one’s strategy will depend
on the strategies adopted by the others. The constraints
imposed by such a multi-agent environment with limited
resources are likely to be a crucial factor in bootstrapping a
positive-feedback loop of continuous improvement through
competition among the agents, as described in the previous
section.
The main lesson of our integrative effort at the cognitive level,
as summarized in Figure 2, is that powerful algorithms and
control systems already exist which, taken together, span all the relevant aspects of cognition required to solve the problem of General Artificial Intelligence (though this does not mean those aspects are sufficient to solve the problem). We see however that there is
still a considerable amount of work to be done to integrate all
the existing subparts into a coherent and complete cognitive
system. This effort is central to the research program of our
group and we have already demonstrated our ability to
implement a complete version of the architecture (see [16],
[24] for our most recent contributions).
As we already noted in previous publications [15], [26], [43]–
[45], there is, however, a missing ingredient in these systems
preventing them from being considered at the same level as
animal intelligence: they are not facing the constraint of the
massively multi-agent world in which biological systems
evolve. We propose here that a key constraint imposed by a
multi-agent world is the emergence of positive feedback loops
between competing agent populations, forcing them to
continuously adapt against each other.
Our approach is facing several important challenges. The first
one is to leverage the recent advances in robotics and machine
learning toward the achievement of general artificial
intelligence, based on the principled methodology provided by
the DAC framework. The second one is to provide a unified
theory of cognition [46] able to bridge the gap between
computational and biological science. The third one is to
understand the emergence of general intelligence within its
ecological substrate, i.e. the dynamical aspect of coupled
physical and cognitive systems.
ACKNOWLEDGMENT
Work supported by ERC's CDAC project: "Role of Consciousness in Adaptive Behavior" (ERC-2013-ADG 341196) and the EU project Socialising Sensori-Motor Contingencies (socSMC-641321, H2020-FETPROACT-2014), as well as the INSOCO Plan Nacional Project (DPI2016-80116-P).
V. REFERENCES
[1] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015.
[2] A. Karpathy and L. Fei-Fei, "Deep Visual-Semantic Alignments for Generating Image Descriptions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3128–3137.
[3] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015.
[4] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016.
[5] M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos, "Unifying count-based exploration and intrinsic motivation," in Advances in Neural Information Processing Systems, 2016, pp. 1471–1479.
[6] T. D. Kulkarni, K. R. Narasimhan, A. Saeedi, and J. B. Tenenbaum, "Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation," arXiv preprint arXiv:1604.06057, 2016.
[7] B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman, "Building Machines That Learn and Think Like People," Behav. Brain Sci., Nov. 2017.
[8] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, "Human-level concept learning through probabilistic program induction," Science, vol. 350, no. 6266, pp. 1332–1338, Dec. 2015.
[9] A. E. Stahl and L. Feigenson, "Observing the unexpected enhances infants' learning and exploration," Science, vol. 348, no. 6230, pp. 91–94, 2015.
[10] M. Johnson, K. Hofmann, T. Hutton, and D. Bignell, "The Malmo Platform for Artificial Intelligence Experimentation," Int. Jt. Conf. Artif. Intell., pp. 4246–4247, 2016.
[11] C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés, A. Sadik, et al., "DeepMind Lab," arXiv preprint arXiv:1612.03801, 2016.
[12] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, "OpenAI Gym," arXiv preprint arXiv:1606.01540, 2016.
[13] J. Z. Leibo, V. Zambaldi, M. Lanctot, J. Marecki, and T. Graepel, "Multi-agent Reinforcement Learning in Sequential Social Dilemmas," in Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, 2017, pp. 464–473.
[14] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, and R. Vicente, "Multiagent Cooperation and Competition with Deep Reinforcement Learning," Nov. 2015.
[15] C. Moulin-Frier, X. D. Arsiwalla, J.-Y. Puigbò, M. Sánchez-Fibla, A. Duff, and P. F. M. J. Verschure, "Top-Down and Bottom-Up Interactions between Low-Level Reactive Control and Symbolic Rule Learning in Embodied Agents," in Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches, 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), 2016.
[16] C. Moulin-Frier, T. Fischer, M. Petit, G. Pointeau, J.-Y. Puigbo, U. Pattacini, S. C. Low, D. Camilleri, P. Nguyen, M. Hoffmann, H. J. Chang, M. Zambelli, A.-L. Mealier, A. Damianou, G. Metta, T. Prescott, Y. Demiris, P.-F. Dominey, and P. Verschure, "DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self," Submitted to IEEE Trans. Cogn. Dev. Syst., 2017.
[17] A. Newell, J. C. Shaw, and H. A. Simon, "Report on a general problem-solving program," IFIP Congr., pp. 256–264, 1959.
[18] J. E. Laird, A. Newell, and P. S. Rosenbloom, "SOAR: An architecture for general intelligence," Artificial Intelligence, vol. 33, no. 1, pp. 1–64, 1987.
[19] J. R. Anderson, The Architecture of Cognition. Harvard University Press, 1983.
[20] R. Brooks, "A robust layered control system for a mobile robot," IEEE J. Robot. Autom., vol. 2, no. 1, pp. 14–23, 1986.
[21] R. A. Brooks, "Intelligence without representation," Artif. Intell., vol. 47, no. 1–3, pp. 139–159, 1991.
[22] P. F. M. J. Verschure, T. Voegtlin, and R. J. Douglas, "Environmentally mediated synergy between perception and behaviour in mobile robots," Nature, vol. 425, no. 6958, pp. 620–624, 2003.
[23] P. F. M. J. Verschure, C. M. A. Pennartz, and G. Pezzulo, "The why, what, where, when and how of goal-directed choice: neuronal and computational principles," Philos. Trans. R. Soc. B Biol. Sci., vol. 369, no. 1655, p. 20130483, 2014.
[24] G. Maffei, D. Santos-Pata, E. Marcos, M. Sánchez-Fibla, and P. F. M. J. Verschure, "An embodied biologically constrained model of foraging: From classical and operant conditioning to adaptive real-world behavior in DAC-X," Neural Networks, 2015.
[25] V. Vouloutsi, M. Blancas, R. Zucca, P. Omedas, D. Reidsma, D. Davison, et al., "Towards a Synthetic Tutor Assistant: The EASEL Project and its Architecture," in International Conference on Living Machines, 2016, pp. 353–364.
[26] P. F. M. J. Verschure, "Synthetic consciousness: the distributed adaptive control perspective," Philos. Trans. R. Soc. Lond. B Biol. Sci., vol. 371, no. 1701, pp. 263–275, 2016.
[27] G. E. Hinton and R. R. Salakhutdinov, "Reducing the Dimensionality of Data with Neural Networks," Science, vol. 313, no. 5786, 2006.
[28] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. The MIT Press, 1998.
[29] J. Oh, S. Singh, H. Lee, and P. Kohli, "Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning," Jun. 2017.
[30] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
[31] A. Baranes and P.-Y. Oudeyer, "Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots," Rob. Auton. Syst., vol. 61, no. 1, pp. 49–73, 2013.
[32] S. M. LaValle, Planning Algorithms. 2006.
[33] Z. Chen and B. Liu, Lifelong Machine Learning. Morgan & Claypool Publishers, 2016.
[34] J.-C. Latombe, Robot Motion Planning, vol. 124. Springer Science & Business Media, 2012.
[35] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang, "Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning," in Advances in Neural Information Processing Systems, 2014, pp. 3338–3346.
[36] A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwińska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, et al., "Hybrid computing using a neural network with dynamic external memory," Nature, 2016.
[37] A. Graves, G. Wayne, and I. Danihelka, "Neural Turing Machines," arXiv, vol. abs/1410.5, 2014.
[38] R. Dawkins and J. R. Krebs, "Arms races between and within species," Proc. R. Soc. Lond. B Biol. Sci., vol. 205, no. 1161, pp. 489–511, 1979.
[39] A. Weitzenfeld, A. Vallesa, and H. Flores, "A Biologically-Inspired Wolf Pack Multiple Robot Hunting Model," in 2006 IEEE 3rd Latin American Robotics Symposium, 2006, pp. 120–127.
[40] C. Muro, R. Escobedo, L. Spector, and R. P. Coppinger, "Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations," Behav. Processes, vol. 88, no. 3, pp. 192–197, Nov. 2011.
[41] D. Floreano and L. Keller, "Evolution of adaptive behaviour in robots by means of Darwinian selection," PLoS Biol., vol. 8, no. 1, pp. 1–8, 2010.
[42] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[43] X. D. Arsiwalla, I. Herreros, C. Moulin-Frier, M. Sanchez, and P. F. M. J. Verschure, "Is Consciousness a Control Process?," in International Conference of the Catalan Association for Artificial Intelligence, 2016, pp. 233–238.
[44] C. Moulin-Frier and P. F. M. J. Verschure, "Two possible driving forces supporting the evolution of animal communication. Comment on 'Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain' by Michael A. Arbib," Phys. Life Rev., 2016.
[45] X. D. Arsiwalla, C. Moulin-Frier, I. Herreros, M. Sanchez-Fibla, and P. Verschure, "The Morphospace of Consciousness," May 2017.
[46] A. Newell, Unified Theories of Cognition. Harvard University Press, 1990.
| 2 |
Techniques for Improving the Finite Length
Performance of Sparse Superposition Codes
arXiv:1705.02091v4 [cs.IT] 19 Nov 2017
Adam Greig, Student Member, IEEE, and Ramji Venkataramanan, Senior Member, IEEE
Abstract—Sparse superposition codes are a recent class of
codes introduced by Barron and Joseph for efficient communication over the AWGN channel. With an appropriate power allocation, these codes have been shown to be asymptotically capacity-achieving with computationally feasible decoding. However, a
direct implementation of the capacity-achieving construction does
not give good finite length error performance. In this paper, we
consider sparse superposition codes with approximate message
passing (AMP) decoding, and describe a variety of techniques
to improve their finite length performance. These include an
iterative algorithm for SPARC power allocation, guidelines for
choosing codebook parameters, and estimating a critical decoding
parameter online instead of pre-computation. We also show
how partial outer codes can be used in conjunction with AMP
decoding to obtain a steep waterfall in the error performance
curves. We compare the error performance of AMP-decoded
sparse superposition codes with coded modulation using LDPC
codes from the WiMAX standard.
Index Terms—Sparse regression codes, Approximate Message
Passing, Low-complexity decoding, Finite length performance,
Coded modulation
I. INTRODUCTION
We consider communication over the memoryless additive white Gaussian noise (AWGN) channel given by
y = x + w,
where the channel output y is the sum of the channel input x and independent zero-mean Gaussian noise w of variance σ^2. There is an average power constraint P on the input, so a length-n codeword (x_1, . . . , x_n) has to satisfy \frac{1}{n}\sum_{i=1}^{n} x_i^2 ≤ P. The goal is to build computationally efficient codes that have low probability of decoding error at rates close to the AWGN channel capacity C = \frac{1}{2}\log(1 + snr). Here snr denotes the signal-to-noise ratio P/σ^2.
Though it is well known that Shannon-style i.i.d. Gaussian
codebooks can achieve very low probability of error at rates
approaching the AWGN capacity [1], this approach has been
largely avoided in practice due to the high decoding complexity of unstructured Gaussian codes. Current state of the art approaches for the AWGN channel such as coded modulation [2],
[3] typically involve separate coding and modulation steps. In
this approach, a binary error-correcting code such as an LDPC
or turbo code is first used to generate a binary codeword from
the information bits; the code bits are then modulated with
a standard scheme such as quadrature amplitude modulation.
A. Greig and R. Venkataramanan are with Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK (e-mails: [email protected],
[email protected]).
This work was supported in part by EPSRC Grant EP/N013999/1, and by
an EPSRC Doctoral Training Award.
Though these schemes have good empirical performance, they
have not been proven to be capacity-achieving for the AWGN
channel.
Sparse Superposition Codes or Sparse Regression Codes
(SPARCs) were recently proposed by Barron and Joseph [4],
[5] for efficient communication over the AWGN channel. In
[5], they introduced an efficient decoding algorithm called
“adaptive successive decoding” and showed that it achieved
near-exponential decay of error probability (with growing
block length), for any fixed rate R < C. Subsequently, an
adaptive soft-decision successive decoder was proposed in
[6], [7], and Approximate Message Passing (AMP) decoders
were proposed in [8]–[11]. The adaptive soft-decision decoder
in [7] as well as the AMP decoder in [11] were proven
to be asymptotically capacity-achieving, and have superior
finite length performance compared to the original adaptive
successive decoder of [5].
The above results mainly focused on characterizing the error
performance of SPARCs in the limit of large block length. In
this work, we describe a number of code design techniques
for improved finite length error performance. Throughout
the paper, we focus on AMP decoding due to its ease of
implementation. However, many of the code design ideas
can also be applied to the adaptive soft-decision successive
decoder in [6], [7]. A hardware implementation of the AMP
decoder was recently reported in [12], [13]. We expect that the
techniques proposed in this paper can be used to reduce the
complexity and optimize the decoding performance in such
implementations.
In the remainder of this section, we briefly review the
SPARC construction and the AMP decoder from [11], and
then list the main contributions of this paper. A word about
notation before we proceed. Throughout the paper, we use log
to denote logarithms with base 2, and ln to denote natural
logarithms. For a positive integer N , we use [N ] to denote
the set {1, . . . , N }. The transpose of a matrix A is denoted
by A∗ , and the indicator function of an event E by 1{E}.
A. The sparse superposition code
A SPARC is defined in terms of a design matrix A of
dimension n × M L. Here n is the block length, and M, L
are integers which are specified below in terms of n and the
rate R. As shown in Fig. 1, the design matrix A has L sections
with M columns each. In the original construction of [4],
[5] and in the theoretical analysis in [6], [7], [11], [14], the
entries of A are assumed to be i.i.d. Gaussian ∼ N (0, 1/n).
For our empirical results, we use a random Hadamard-based
construction for A that leads to significantly lower encoding
and decoding complexity [9]–[11].
Fig. 1. A is the n × LM design matrix, β is an ML × 1 sparse vector with one non-zero in each of the L sections. The length-n codeword is Aβ. The message determines the locations of the non-zeros in β, while P_1, . . . , P_L are fixed a priori. [The figure shows A partitioned into Sections 1, 2, . . . , L of M columns each, and β with one non-zero entry √(nP_1), √(nP_2), . . . , √(nP_L) per section.]

Codewords are constructed as sparse linear combinations of the columns of A. In particular, a codeword is of the form Aβ, where β = (β_1, . . . , β_{ML})^* is a length-ML column vector with the property that there is exactly one non-zero β_j for the section 1 ≤ j ≤ M, one non-zero β_j for the section M + 1 ≤ j ≤ 2M, and so forth. The non-zero value of β in each section ℓ is set to √(nP_ℓ), where P_1, . . . , P_L are pre-specified positive constants that satisfy \sum_{ℓ=1}^{L} P_ℓ = P, the average symbol power allowed.

Both A and the power allocation {P_1, . . . , P_L} are known to both the encoder and decoder in advance. The choice of power allocation plays a crucial role in determining the error performance of the decoder. Without loss of generality, we will assume that the power allocation is non-increasing across sections. Two examples of power allocation are:
• Flat power allocation, where P_ℓ = P/L for all ℓ. This choice was used in [4] to analyze the error performance with optimal (least-squares) decoding.
• Exponentially decaying power allocation, where P_ℓ ∝ 2^{−2Cℓ/L}. This choice was used for the asymptotically capacity-achieving decoders proposed in [5], [7], [11].
At finite block lengths both these power allocations could be far from optimal and lead to poor decoding performance. One of the main contributions of this paper is an algorithm to determine a good power allocation for the finite-length AMP decoder based only on R, P, σ^2.

Rate: As each of the L sections contains M columns, the total number of codewords is M^L. With the block length being n, the rate of the code is given by

R = \frac{\log(M^L)}{n} = \frac{L \log M}{n}.    (1)

In other words, a SPARC codeword corresponding to L log M input bits is transmitted in n channel uses.

Encoding: The input bitstream is split into chunks of log M bits. A chunk of log M input bits can be used to index the location of the non-zero entry in one section of β. Hence L successive chunks determine the message vector β, with the ℓth chunk of log M input bits determining the non-zero location in section ℓ, for 1 ≤ ℓ ≤ L.

Approximate Message Passing (AMP) decoder: The AMP decoder produces iteratively refined estimates of the message vector, denoted by β^1, β^2, . . . , β^T, where T is the (pre-specified) number of iterations. Starting with β^0 = 0, for t = 0, 1, . . . , T − 1 the AMP decoder generates

z^t = y − Aβ^t + \frac{z^{t−1}}{τ_{t−1}^2} \left( P − \frac{\|β^t\|^2}{n} \right),    (2)

β_i^{t+1} = η_i^t(β^t + A^* z^t),    (3)

where

η_i^t(s) = \sqrt{nP_ℓ} \, \frac{\exp\left( s_i \sqrt{nP_ℓ} / τ_t^2 \right)}{\sum_{j ∈ sec(i)} \exp\left( s_j \sqrt{nP_ℓ} / τ_t^2 \right)},   1 ≤ i ≤ ML.    (4)

Here the notation j ∈ sec(i) refers to all indices j in the same section as i. (Note that there are M indices in each section.) At the end of each step t, β_i^t / \sqrt{nP_ℓ} may be interpreted as the updated posterior probability of the ith entry being the non-zero one in its section.
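To make the construction concrete, the following short sketch (our own illustration in Python/NumPy, not the implementation used for the experiments in this paper) builds the message vector β from the input bits and forms the codeword Aβ. It uses an i.i.d. Gaussian design matrix and a flat power allocation purely for simplicity; the empirical results in the paper use a Hadamard-based A and the power allocations discussed in Section II.

```python
import numpy as np

def sparc_encode(bits, L, M, P, R, rng):
    """Map L*log2(M) input bits to the sparse vector beta and the codeword A @ beta."""
    logM = int(np.log2(M))
    n = int(round(L * logM / R))              # block length from R = L*log2(M)/n, cf. (1)
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))   # i.i.d. N(0, 1/n) entries
    P_l = np.full(L, P / L)                   # flat power allocation, for illustration only
    beta = np.zeros(L * M)
    for l in range(L):
        chunk = bits[l * logM:(l + 1) * logM]
        idx = int("".join(str(b) for b in chunk), 2)   # log2(M) bits pick the non-zero column
        beta[l * M + idx] = np.sqrt(n * P_l[l])        # non-zero value sqrt(n * P_l)
    return A @ beta, A, beta, P_l, n

rng = np.random.default_rng(0)
L, M, P, R = 32, 16, 1.0, 1.0
bits = rng.integers(0, 2, size=L * int(np.log2(M)))
x, A, beta, P_l, n = sparc_encode(bits, L, M, P, R, rng)
```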
The constants τ_t^2 are specified by the following scalar recursion called "state evolution" (SE):

τ_0^2 = σ^2 + P,   τ_t^2 = σ^2 + P(1 − x(τ_{t−1})),   t ≥ 1,    (5)

where

x(τ) := \sum_{ℓ=1}^{L} \frac{P_ℓ}{P} \, \mathbb{E}\left[ \frac{ e^{\frac{\sqrt{nP_ℓ}}{τ}\left( U_1^ℓ + \frac{\sqrt{nP_ℓ}}{τ} \right)} }{ e^{\frac{\sqrt{nP_ℓ}}{τ}\left( U_1^ℓ + \frac{\sqrt{nP_ℓ}}{τ} \right)} + \sum_{j=2}^{M} e^{\frac{\sqrt{nP_ℓ}}{τ} U_j^ℓ} } \right].    (6)

In (6), {U_j^ℓ} are i.i.d. N(0, 1) random variables for j ∈ [M], ℓ ∈ [L]. The significance of the SE parameters τ_t^2 is discussed in Section II. In Section IV, we use an online approach to accurately compute the τ_t^2 values rather than pre-computing them via (6).
At the end of T iterations, the decoded message vector β̂ is produced by setting the maximum value in section ℓ of β^T to \sqrt{nP_ℓ} and the remaining entries to zero, for 1 ≤ ℓ ≤ L.
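The update rules above translate directly into a few lines of code. The sketch below (ours) implements (2)–(4) in NumPy; instead of pre-computing the τ_t^2 values via state evolution, it uses an online residual-based estimate of the form ||z^t||^2/n, in the spirit of the online approach discussed in Section IV.

```python
import numpy as np

def amp_decode(y, A, P_l, n, L, M, T=32):
    """AMP decoder implementing (2)-(4); tau_t^2 is estimated online as ||z^t||^2 / n."""
    P = P_l.sum()
    c = np.repeat(np.sqrt(n * P_l), M)        # sqrt(n * P_l), repeated over the M columns of each section
    beta = np.zeros(L * M)
    z = np.zeros_like(y)
    tau2 = None
    for t in range(T):
        if t == 0:
            z = y - A @ beta                  # beta^0 = 0, so the Onsager correction is absent
        else:
            z = y - A @ beta + (z / tau2) * (P - beta @ beta / n)   # residual step (2)
        tau2 = z @ z / n                      # online estimate of the effective noise variance
        s = beta + A.T @ z                    # effective observation
        u = (s * c / tau2).reshape(L, M)      # exponents of the section-wise softmax in (4)
        u -= u.max(axis=1, keepdims=True)     # numerical stabilisation
        w = np.exp(u)
        w /= w.sum(axis=1, keepdims=True)     # posterior probabilities within each section
        beta = w.reshape(-1) * c              # update (3)
    beta_hat = np.zeros_like(beta)            # hard decision: keep the largest entry per section
    winners = beta.reshape(L, M).argmax(axis=1)
    beta_hat[np.arange(L) * M + winners] = np.sqrt(n * P_l)
    return beta_hat
```

With the encoder sketch above, calling amp_decode on a noisy codeword x plus Gaussian noise recovers β for suitably chosen rate and noise level.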
Error rate of the AMP decoder: We measure the section error rate E_sec as

E_sec = \frac{1}{L} \sum_{ℓ=1}^{L} 1\{ \hat{β}_ℓ ≠ β_ℓ \}.    (7)

Assuming a uniform mapping between the input bitstream and the non-zero locations in each section, each section error will cause approximately half of the bits it represents to be incorrect, leading to a bit error rate E_ber ≈ \frac{1}{2} E_sec.
Another figure of merit is the codeword error rate E_cw, which estimates the probability P(β̂ ≠ β). If the SPARC is
used to transmit a large number of messages (each via a length
n codeword), Ecw measures the fraction of codewords that are
decoded with one or more section errors. The codeword error
rate is insensitive to where and how many section errors occur
within a codeword when it is decoded incorrectly.
At finite code lengths, the choice of a good power allocation
crucially depends on whether we want to minimize Esec or
Ecw . As we will see in the next section, a power allocation
that yields reliably low section error rates may result in a
high codeword error rate, and vice versa. In this paper, we
will mostly focus on obtaining the best possible section error
rate, since in practical applications a high-rate outer code
could readily correct a small fraction of section errors to
give excellent codeword error rates as well. Further, the bit
error rate (which is approximately half the section error rate)
is useful to compare with other channel coding approaches,
where it is a common figure of merit.
B. Organization of the paper and main contributions
In the rest of the paper, we describe several techniques to
improve the finite length error performance and reduce the
complexity of AMP decoding. The sections are organized as
follows.
• In Section II, we introduce an iterative power allocation
algorithm that gives improved error performance with
fewer tuning parameters than other power allocation
schemes.
• In Section III, we analyze the effects of the code parameters L, M and the power allocation on error performance
and its concentration around the value predicted by state
evolution.
• In Section IV, we describe how an online estimate of
the key SE parameter τt2 improves error performance
and allows a new early-stopping criterion. Furthermore,
the online estimate enables us to accurately estimate
the actual section error rate at the end of the decoding
process.
• In Section V, we derive simple expressions to estimate
Esec and Ecw given the rate and power allocation.
• In Section VI we compare the error performance of
AMP-decoded SPARCs to LDPC-based coded modulation schemes used in the WiMAX standard.
• In Section VII, we describe how partial outer codes can
be used in conjunction with AMP decoding. We propose
a three-stage decoder consisting of AMP decoding, followed by outer code decoding, and finally, AMP decoding
once again. We show that by covering only a fraction of
sections of the message β with an outer code, the three-stage decoder can correct errors even in the sections not
covered by the outer code. This results in bit-error curves
with a steep waterfall behavior.
The main technical contributions of the paper are the
iterative power allocation algorithm (Section II) and the three-stage decoder with an outer code (Section VII). The other
sections describe how various choices of code parameters
influence the finite length error performance, depending on
whether the objective is to minimize the section error rate
or the codeword error rate. We remark that the focus in
this paper is on improving the finite length performance
using the standard SPARC construction with power allocation.
Optimizing the finite length performance of spatially-coupled
SPARCs considered in [9], [10] is an interesting research
direction, but one that is beyond the scope of this paper.
II. POWER ALLOCATION
Before introducing the power allocation scheme, we briefly
give some intuition about the AMP update rules (2)–(4), and
the SE recursion in (5)–(6). The update step (3) to generate
each estimate of β is underpinned by the following key property: after step t, the "effective observation" β^t + A^* z^t is approximately distributed as β + τ_t Z, where Z is a standard normal random vector independent of β. Thus τ_t^2 is the effective noise variance at the end of step t. Assuming that the above distributional property holds, β^{t+1} is just the Bayes-optimal estimate of β based on the effective observation. The entry β_i^{t+1} is proportional to the posterior probability of the ith entry being the non-zero entry in its section.
We see from (5) that the effective noise variance τt2 is the
sum of two terms. The first is the channel noise variance
σ 2 . The other term P (1 − x(τt−1 )) can be interpreted as the
interference due to the undecoded sections in β t . Equivalently,
x(τt−1 ) is the expected power-weighted fraction of sections
which are correctly decodable at the end of step t.
The starting point for our power allocation design is the
following result from [11], which gives analytic upper and
lower bounds for x(τ ) of (5).
Lemma 1. [14, Lemma 1(b)] Let ν_ℓ := \frac{LP_ℓ}{Rτ^2 \ln 2}. For sufficiently large M, and for any δ ∈ (0, 1),

x(τ) ≤ \sum_{ℓ=1}^{L} \frac{P_ℓ}{P} \left[ 1\{ν_ℓ > 2 − δ\} + M^{−κ_1 δ^2} 1\{ν_ℓ ≤ 2 − δ\} \right],    (8)

x(τ) ≥ \left( 1 − \frac{M^{−κ_2 δ^2}}{\sqrt{δ \ln M}} \right) \sum_{ℓ=1}^{L} \frac{P_ℓ}{P} 1\{ν_ℓ > 2 + δ\},    (9)

where κ_1, κ_2 are universal positive constants.
As the constants κ_1, κ_2 in (8)–(9) are not precisely specified, for designing power allocation schemes, we use the following approximation for x(τ):

x(τ) ≈ \sum_{ℓ=1}^{L} \frac{P_ℓ}{P} 1\{LP_ℓ > 2Rτ^2 \ln 2\}.    (10)
This approximate version, which is increasingly accurate as
L, M grow large, is useful for gaining intuition about suitable
power allocations. Indeed, if the effective noise variance after
step t is τt2 , then (10) says that any section ` whose normalized
power LP` is larger than the threshold 2Rτt2 ln 2 is likely to be
decodable correctly in step (t+1), i.e., in β t+1 , the probability
mass within the section will be concentrated on the correct
non-zero entry. For a given power allocation, we can iteratively
estimate the SE parameters (τt2 , x(τt2 )) for each t using the
lower bound in (10). This provides a way to quickly check
whether or not a given power allocation will lead to reliable
decoding in the large system limit. For reliable decoding at
a given rate R, the effective noise variance given by τt2 =
σ 2 + P (1 − x(τt−1 )) should decrease with t until it reaches a
value close to σ 2 in a finite number of iterations. Equivalently,
x(τt ) in (6) should increase to a value very close to 1.
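This check is easy to automate. The helper below (our own sketch) iterates the recursion (5) with x(τ) replaced by the approximation (10) and reports whether the effective noise variance is predicted to shrink to σ^2, i.e. whether the allocation is predicted to decode reliably in the large system limit.

```python
import numpy as np

def se_predicts_decoding(P_l, sigma2, R, L, max_iter=200):
    """Iterate tau_t^2 = sigma^2 + P*(1 - x(tau_{t-1})), with x(tau) approximated by (10)."""
    P = P_l.sum()
    tau2 = sigma2 + P                                   # tau_0^2
    for _ in range(max_iter):
        decodable = L * P_l > 2 * R * tau2 * np.log(2)  # sections above the threshold in (10)
        x = P_l[decodable].sum() / P                    # power-weighted decodable fraction
        new_tau2 = sigma2 + P * (1 - x)
        if np.isclose(new_tau2, tau2):
            break
        tau2 = new_tau2
    return np.isclose(tau2, sigma2), tau2
```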
For a rate R < C, there are infinitely many power allocations
for which (10) predicts successful decoding in the large system
limit. However, as illustrated below, their finite length error
performance may differ significantly. Thus the key question
Fig. 2. The dashed lines show the minimum required power in each section
for successful decoding when R = C (above), and R = 0.7C (below),
where C = 2 bits. The solid line shows the exponentially-decaying
power allocation in (11).
addressed in this section is: how do we choose a power
allocation that gives the lowest section error rate?
The exponentially-decaying power allocation, given by

P_ℓ = \frac{P(2^{2C/L} − 1)}{1 − 2^{−2C}} \, 2^{−2Cℓ/L},   ℓ ∈ [L],    (11)

was proven in [11] to be capacity-achieving in the large system limit, i.e., it was shown that the section error rate E_sec of the AMP decoder converges almost surely to 0 as n → ∞, for any R < C. However, it does not perform well at practical block lengths, which motivated the search for alternatives. We now evaluate it in the context of (10) to better explain the development of a new power allocation scheme.
Given a power allocation, using (10) one can compute
the minimum required power for any section ` ∈ [L] to
decode, assuming that the sections with higher power have
decoded correctly. The dashed lines in Figure 2 show the
minimum power required for each section to decode (assuming
the exponential allocation of (11) for the previous sections),
for R = C and R = 0.7C. The figure shows that the
power allocation in (11) matches (up to order 1/L terms) with
the minimum required power when R = C. However, for
R = 0.7C, we see that the exponentially-decaying allocation
allocates significantly more power to the earlier sections than
the minimum required, compared to later sections. This leads
to relatively high section error rates, as shown in Figure 6.
Figure 2 shows that the total power allocated by the minimal
power allocation at R = 0.7C is significantly less than the
available power P . Therefore, the key question is: how do
we balance the allocation of available power between the
various sections to minimize the section error rate? Allocating
excessive power to the earlier sections ensures they decode
reliably early on, but then there will not be sufficient power
left to ensure reliable decoding in the final sections. This
is the reason for the poor finite length performance of the
exponentially-decaying allocation. Conversely, if the power
is spread too evenly then no section particularly stands out
against the noise, so it is hard for the decoding to get started,
and early errors can cause cascading failures as subsequent
sections are also decoded in error.
Fig. 3. The modified power allocation with a = f = 0.7 results in slightly more than the minimum power required for the first 70% of sections; the remaining available power is allocated equally among the last 30% of sections. The original allocation with P_ℓ ∝ 2^{−2Cℓ/L} is also shown for comparison.

This trade-off motivated the following modified exponential power allocation proposed in [11]:

P_ℓ = κ · 2^{−2aCℓ/L} for 1 ≤ ℓ ≤ fL,   and   P_ℓ = κ · 2^{−2aCf} for fL + 1 ≤ ℓ ≤ L,    (12)

where the normalizing constant κ is chosen to ensure that \sum_{ℓ=1}^{L} P_ℓ = P. In (12), the parameter a controls the steepness
of the exponential allocation, while the parameter f flattens
the allocation after the first fraction f of the sections. Smaller
choices of a lead to less power allocated to the initial sections, making a larger amount available to the later sections.
Similarly, smaller values of f lead to more power allocated to
the final sections. See Figure 3 for an illustration.
While this allocation improves the section error rate by a
few orders of magnitude (see [11, Fig. 4]), it requires costly
numerical optimization of a and f . A good starting point is
to use a = f = R/C, but further optimization is generally
necessary. This motivates the need for a fast power allocation
algorithm with fewer tuning parameters.
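For reference, the modified exponential allocation (12) is straightforward to generate; the sketch below (ours) returns the normalized powers for given a and f.

```python
import numpy as np

def modified_exponential_allocation(L, P, C, a, f):
    """Power allocation (12): exponential decay up to section f*L, flat afterwards."""
    l = np.arange(1, L + 1)
    P_l = np.where(l <= f * L, 2.0 ** (-2 * a * C * l / L), 2.0 ** (-2 * a * C * f))
    return P * P_l / P_l.sum()     # kappa is absorbed by normalizing the powers to sum to P
```

Setting a = 1 and f = 1 recovers the shape of the exponentially-decaying allocation (11) after normalization.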
A. Iterative power allocation
We now describe a simple parameter-free iterative algorithm
to design a power allocation. The L sections of the SPARC
are divided into B blocks of L/B sections each. Each section
within a block is allocated the same power. For example, with
L = 512 and B = 32, there are 32 blocks with 16 sections
per block. The algorithm sequentially allocates power to each
of the B blocks as follows. Allocate the minimum power to
the first block of sections so that they can be decoded in the
first iteration when τ_0^2 = σ^2 + P. Using (10), we set the power in each section of the first block to

P_ℓ = \frac{2Rτ_0^2 \ln 2}{L},   1 ≤ ℓ ≤ \frac{L}{B}.

Using (10) and (5), we then estimate τ_1^2 = σ^2 + (P − (L/B)P_1). Using this value, allocate the minimum required power for the second block of sections to decode, i.e., P_ℓ = 2 \ln 2 \, Rτ_1^2 / L for L/B + 1 ≤ ℓ ≤ 2L/B. If we sequentially allocate power in this manner to each of the B blocks, then the total power allocated by this scheme will be strictly less than P whenever R < C. We therefore modify the scheme as follows.
Fig. 4. Example illustrating the iterative power allocation algorithm
with B = 5. In each step, the height of the light gray region
represents the allocation that distributes the remaining power equally
over all the remaining sections. The dashed red line indicates the
minimum power required for decoding the current block of sections.
The dark gray bars represent the power that has been allocated at the
beginning of the current step.
Algorithm 1 Iterative power allocation routine
Require: L, B, σ^2, P, R such that B divides L.
Initialise k ← L/B
for b = 0 to B − 1 do
  P_remain ← P − \sum_{ℓ=1}^{bk} P_ℓ
  τ^2 ← σ^2 + P_remain
  P_block ← 2 ln(2) R τ^2 / L
  if P_remain/(L − bk) > P_block then
    P_{bk+1}, . . . , P_L ← P_remain/(L − bk)
    break
  else
    P_{bk+1}, . . . , P_{(b+1)k} ← P_block
  end if
end for
return P_1, . . . , P_L
For 1 ≤ b ≤ B, to allocate power to the bth block of sections, assuming that the first (b − 1) blocks have been allocated, we compare the two options and choose the one that
allocates higher power to the block: i) allocating the minimum
required power (computed as above) for the bth block of
sections to decode; ii) allocating the remaining available power
equally to sections in blocks b, . . . , B, and terminating the
algorithm. This gives a flattening in the final blocks similar
to the allocation in (12), but without requiring a specific
parameter that determines where the flattening begins. The
iterative power allocation routine is described in Algorithm 1.
Figure 4 shows a toy example building up the power allocation
for B = 5, where flattening is seen to occur in step 4. Figure 5
shows a more realistic example with L = 512 and R = 0.7C.
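A direct NumPy transcription of Algorithm 1 (a sketch; the function and variable names are ours):

```python
import numpy as np

def iterative_power_allocation(L, B, sigma2, P, R):
    """Algorithm 1: allocate power block by block; flatten once equal division allocates more."""
    assert L % B == 0
    k = L // B
    P_l = np.zeros(L)
    for b in range(B):
        P_remain = P - P_l[:b * k].sum()
        tau2 = sigma2 + P_remain
        P_block = 2 * np.log(2) * R * tau2 / L          # minimum per-section power, via (10)
        if P_remain / (L - b * k) > P_block:
            P_l[b * k:] = P_remain / (L - b * k)        # distribute the rest equally and stop
            break
        P_l[b * k:(b + 1) * k] = P_block
    return P_l
```

Calling it with B = L makes every block a single section, the choice discussed under "Choosing B" below.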
Choosing B: By construction, the iterative power allocation
scheme specifies the number of iterations of the AMP decoder
in the large system limit. This is given by the number of blocks
with distinct powers; in particular the number of iterations
(in the large system limit) is of the order of B. For finite
code lengths, we find that it is better to use a termination
criterion for the decoder based on the estimates generated by
the algorithm. This criterion is described in Sec. IV.

Fig. 5. Iterative allocation, with L = 512 and B = 16 blocks. Flattening occurs at the 11th block.

Fig. 6. AMP section error rate E_sec vs R at snr = 7, 15, 31, corresponding to C = 1.5, 2, 2.5 bits (shown with dashed vertical lines). At each snr, the section error rate is reported for rates R/C = 0.70, 0.75, 0.80, 0.85, 0.90. The SPARC parameters are M = 512, L = 1024. The top black curve shows the E_sec with P_ℓ ∝ 2^{−2Cℓ/L}. The lower green curve shows E_sec for the iterative power allocation, with B = L and R_PA numerically optimized. (See Sec. III-B for a discussion of R_PA.)

This data-driven termination criterion allows us to choose the number of blocks B to be as large as L. We found that choosing B = L,
together with the termination criterion in Sec. IV, consistently
gives a small improvement in error performance (compared to
other choices of B), with no additional time or memory cost.
Additionally, with B = L, it is possible to quickly determine a pair (a, f ) for the modified exponential allocation in
(12) which gives a nearly identical allocation to the iterative
algorithm. This is done by first setting f to obtain the same
flattening point found in the iterative allocation, and then
searching for an a which matches the first allocation coefficient P1 between the iterative and the modified exponential
allocations. Consequently, any simulation results obtained for
the iterative power allocation could also be obtained using
a suitable (a, f ) with the modified exponential allocation,
without having to first perform a costly numerical optimization
over (a, f ).
Figure 6 compares the error performance of the exponential
and iterative power allocation schemes discussed above for
different values of R at snr = 7, 15, 31. The iterative power
allocation yields significantly improved Esec for rates away
from capacity when compared to the original exponential allocation, and additionally outperforms the modified exponential
allocation results reported in [11].
For the experiments in Figure 6, the value for R used in
constructing the iterative allocation (denoted by RP A ) was
optimized numerically. Constructing an iterative allocation
with R = RP A yields good results, but due to finite length
concentration effects, the RP A yielding the smallest average
error rate may be slightly different from the communication
rate R. The effect of RP A on the concentration of error rates
is discussed in Section III-B. We emphasize that this optimization over RP A is simpler than numerically optimizing the pair
(a, f ) for the modified exponential allocation. Furthermore,
guidelines for choosing RP A as a function of R are given in
Section III-B.
Fig. 7. AMP error performance with increasing M, for L = 1024, R = 1.5, and Eb/N0 = 5.7 dB (2 dB from Shannon limit). See Section V for details of Ē_sec.
III. ERROR CONCENTRATION TRADE-OFFS
In this section, we discuss how the choice of SPARC design
parameters can influence the trade-off between the ‘typical’
value of section error rate and concentration of actual error
rates around the typical values. The typical section error rate
refers to that predicted by state evolution (SE). Indeed, running
the SE equations (5)–(6) until convergence gives the following
prediction for the section error rate:
E_sec^{SE} := 1 − \frac{1}{L} \sum_{ℓ=1}^{L} \mathbb{E}\left[ \frac{ e^{\frac{\sqrt{nP_ℓ}}{τ_T}\left( U_1^ℓ + \frac{\sqrt{nP_ℓ}}{τ_T} \right)} }{ e^{\frac{\sqrt{nP_ℓ}}{τ_T}\left( U_1^ℓ + \frac{\sqrt{nP_ℓ}}{τ_T} \right)} + \sum_{j=2}^{M} e^{\frac{\sqrt{nP_ℓ}}{τ_T} U_j^ℓ} } \right],    (13)

where τ_T^2 denotes the value in the final iteration. The concentration refers to how close the SE prediction E_sec^{SE} is to the observed section error rate.
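The expectation in (13) is easily estimated by Monte Carlo, drawing the i.i.d. standard normals U_j^ℓ. A short sketch (ours):

```python
import numpy as np

def se_section_error_rate(P_l, tau_T, n, M, num_samples=1000, rng=None):
    """Monte Carlo estimate of the SE prediction (13) for the section error rate."""
    rng = np.random.default_rng() if rng is None else rng
    L = len(P_l)
    nu = np.sqrt(n * P_l) / tau_T                      # sqrt(n*P_l)/tau_T for each section
    total = 0.0
    for _ in range(num_samples):
        U = rng.standard_normal((L, M))
        # Per-section error probability, computed in a numerically stable form:
        # 1 - 1 / (1 + sum_{j>=2} exp(nu * (U_j - U_1 - nu))).
        expo = nu[:, None] * (U[:, 1:] - U[:, :1] - nu[:, None])
        ratio = 1.0 / (1.0 + np.exp(expo).sum(axis=1))
        total += np.mean(1.0 - ratio)
    return total / num_samples
```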
As we describe below, the choice of SPARC parameters
(L, M ) and the power allocation both determine a trade-off
between obtaining a low value for E_sec^{SE}, and concentration of the actual section error rate around E_sec^{SE}. This trade-off is of
particular interest when applying an outer code to the SPARC,
as considered in Section VII, which may be able to reliably
handle only a small number of section errors.
A. Effect of L and M on concentration
Recall from (1) that the code length n at a given rate R
is determined by the choice of L and M according to the
relationship nR = L log M . In general, L and M may be
chosen freely to meet a desired rate and code length.
To understand the effect of increasing M , consider Figure
7 which shows the error performance of a SPARC with R =
1.5, L = 1024, as we increase the value of M . From (1), the
code length n increases logarithmically with M . We observe
that the section error rate (averaged over 200 trials) decreases
with M up to M = 29 , and then starts increasing. This is in
sharp contrast to the SE prediction (13) (plotted using a dashed
line in Figure 7) which keeps decreasing as M is increased.
This divergence between the actual section error rate and
the SE prediction for large M is due to large fluctuations in
the number of section errors across trials. Recent work on the
error exponent of SPARCs with AMP decoding shows that the
concentration of error rates near the SE prediction is strongly
dependent on both L and M. For R < C, [14, Theorem 1] shows that for any ε > 0, the section error rate E_sec satisfies

P\left( E_sec > E_sec^{SE} + ε \right) ≤ K_T \exp\left( \frac{−κ_T L}{(\log M)^{2T−1}} \left( \frac{ε \ln(1+snr)}{4(1+snr)} − f(M) \right)^2 \right),    (14)

where T is the number of iterations until state evolution convergence, κ_T, K_T are constants depending on T, and f(M) = \frac{M^{−κ_2 δ^2}}{\sqrt{δ \ln M}} is a quantity that tends to zero with growing
M . For any power allocation, T increases as R approaches
C. For example, T ∝ 1/ log(C/R) for the exponential power
allocation. We observe that the deviation probability bound on
the RHS of (14) depends on the ratio L/(log M )2T −1 .
In our experiments, T is generally on the order of a few
tens. Therefore, keeping L constant, the probability of large
SE
increases with M . This
deviations from the SE prediction Esec
leads to the situation shown in Figure 7, which shows that the
SE
continues to decrease with M , but beyond
SE prediction Esec
a certain value of M , the observed average section error rate
becomes progressively worse due to loss of concentration. This
is caused by a small number of trials with a very large number
of section errors, even as the majority of trials experience
lower and lower error rates as M is increased. This effect can
be clearly seen in Figure 8, which compares the histogram of
section error rates over 200 trials for M = 64 and M = 4096.
The distribution of errors is clearly different, but both cases
have the same average section error rate due to the poorer
concentration for M = 4096.
To summarize, given R, snr, and L, there is an optimal
M that minimizes the empirical section error rate. Beyond
this value of M , the benefit from any further increase is
outweighed by the loss of concentration. For a given R, values
of M close to L are a good starting point for optimizing
the empirical section error rate, but obtaining closed-form
estimates of the optimal M for a given L is still an open
question.
For fixed L, R, the optimal value of M increases with snr.
This effect can be seen in the results of Figure 12, where there
is an inversion in the order of best-performing M values as
Eb /N0 increases. This is because as snr increases, the number
of iterations T for SE to converge decreases. A smaller T
Fig. 8. Histogram of AMP section errors over 200 trials for M = 64 (top) and M = 4096 (bottom), with L = 1024, R = 1.5, Eb/N0 = 5.7 dB. The left panels highlight the distribution of errors around low section error counts, while the right panels show the distribution around high-error-count events. As shown in Figure 7, both cases have an average section error rate of around 10^{−2}.
Fig. 9. Histogram of AMP section errors over 1000 trials for RPA =
0.98R (top) and RPA = 1.06R (bottom). The SPARC parameters
are L = 1024, M = 512, R = 1.6, snr = 15. The left panels
highlight distribution of trials with low section error counts (up to
8); the right panels indicate the distribution of infrequent but high-error-count trials. At lower RPA, many more trials have no section
errors, but those that do often have hundreds. At higher RPA , at most
7 section errors were seen, but many fewer trials had zero section
errors.
mitigates the effect of larger M in the large deviations bound
of (14). In other words, a larger snr leads to better error rate
concentration around the SE prediction, so larger values of M
are permissible before the performance starts degrading.
B. Effect of power allocation on concentration
The non-asymptotic bounds on x(τ) in Lemma 1 indicate that at finite lengths, the minimum power required for a section ℓ to decode in an iteration may be slightly different from that indicated by the approximation in (10). Recall that
the iterative power allocation algorithm in Section II-A was
designed based on (10). We can compensate for the difference
between the approximation and the actual value of x(τ ) by
running the iterative power allocation in Algorithm 1 using a
modified rate RPA which may be slightly different from the
communication rate R. The choice of RPA directly affects the
error concentration. We now discuss the mechanism for this
effect and give guidelines for choosing RPA as a function of
R.
If we run the power allocation algorithm with RPA > R,
from (10) we see that additional power is allocated to the initial
blocks, at the cost of less power for the final blocks (where the
allocation is flat). Consequently, it is less likely that one of the
initial sections will decode in error, but more likely that some
number of the later sections will instead. Figure 9 (bottom)
shows the effect of choosing a large RPA = 1.06R: out of
a total of 1000 trials, there were no trials with more than 7
sections decoded in error (out of L = 1024 sections in total);
however, relatively few trials (29%) have zero section errors.
Conversely, choosing RPA < R allocates less power to the
initial blocks, and increases the power in the final sections
which have a flat allocation. This increases the likelihood of
the initial section being decoded in error; in a trial when this
happens, there will be a large number of section errors. However, if the initial sections are decoded correctly, the additional
power in the final sections increases the probability of the trial
being completely error-free. Thus choosing RPA < R makes
completely error-free trials more likely, but also increases the
likelihood of having trials with a large number of sections in
error. In Figure 9 (top), the smaller RPA = 0.98R gives zero
or one section errors in the majority (81%) of cases, but the
remaining trials typically have a large number of sections in
error.
To summarize, the larger the RPA, the better the concentration of section error rates of individual trials around the overall average. However, increasing RPA beyond a point just
increases the average section error rate because of too little
power being allocated to the final sections.
For different values of the communication rate R, we
empirically determined an RPA that gives the lowest average
section error rate, by starting at RPA = R and searching the
neighborhood in steps of 0.02R. Exceptionally, at low rates
(for R ≤ 1), the optimal RPA is found to be 0, leading to a completely flat power allocation with P_ℓ = P_L for all ℓ. We note from (10) that for 1 ≥ R > P/(2τ_0² ln 2), the large system limit theory does not predict that we can decode any of the L sections — this is because no section is above the threshold in the first iteration of decoding. However, in practice, we
observe that some sections will decode initially (due to the
correct column being aligned favorably with the noise vector),
and this reduces the threshold enough to allow subsequent
decoding to continue in most cases. For R ≤ 1, when RPA
closer to R is used, the lower power in later sections hinders
the finite length decoding performance.
We found that the value of RPA/R that minimizes the average section error rate increases with R. In particular, the optimal RPA/R was 0 for R ≤ 1; the optimal RPA/R for R = 1.5 was close to 1, and for R = 2, the optimal RPA/R was between 1.05 and 1.1. Though this provides a useful design guideline, a deeper theoretical analysis of the role of RPA in optimizing the finite length performance is an open question.
Finally, a word of caution when empirically optimizing RPA
to minimize the average section error rate. Due to the loss of
concentration as RPA is decreased below R, care must be
taken to run sufficient trials to ensure that a rare unseen trial
with many section errors will not catastrophically impact the
overall average section error rate. For example, in one scenario
with L = 1024, M = 512, snr = 15, R = 1.4, RPA = 1.316,
we observed 192 trials with errors out of 407756 trials, but
only 4 of these trials had more than one error, with between
400 to 600 section errors in those 4 cases. The average section
error rate was 5.6 × 10−6 . With fewer trials, it is possible
that no trials with a large number of section errors would
be observed, leading to an estimated error rate an order of
magnitude better, at around 4.6 × 10−7 .
IV. ONLINE COMPUTATION OF τ_t² AND EARLY TERMINATION
Recall that the update step (4) of the AMP decoder requires the SE coefficients τ_t², for t ∈ [T]. In the standard implementation [11], these coefficients are computed in advance using the SE equations (5)–(6). The total number of iterations T is also determined in advance by computing the number of iterations required for the SE to converge to its fixed point (to within a specified tolerance). This technique produces effective results, but advance computation is slow as each of the L expectations in (6) needs to be computed numerically via Monte-Carlo simulation, for each t. A faster approach is to compute the τ_t² coefficients using the asymptotic expression for x(τ) given in (10). This gives error performance nearly identical to the earlier approach with significant time savings, but still requires advance computation. Both these methods are referred to as “offline” as the τ_t² values are computed a priori.
A simple way to estimate τ_t² online during the decoding process is as follows. In each step t, after producing z^t as in (2), we estimate

\hat{\tau}_t^2 = \frac{\|z^t\|^2}{n} = \frac{1}{n} \sum_{i=1}^{n} (z_i^t)^2.   (15)
The justification for this estimate comes from the analysis of the AMP decoder in [11], [14], which shows that for large n, τ̂_t² is close to τ_t² in (5) with high probability. In particular, [14] provides a concentration inequality for τ̂_t² similar to (14). We note that a similar online estimate has been used previously in various AMP and GAMP algorithms [8]–[10], [15]. Here, we show that in addition to being fast, the online estimator permits an interpretation as a measure of SPARC decoding progress and provides a flexible termination criterion for the decoder. Furthermore, the error performance with the online estimator was observed to be the same or slightly better than the offline methods.
Recall from the discussion at the beginning of Section II that in each step, we have

s^t := β^t + A^* z^t ≈ β + τ_t Z,   (16)

where Z is a standard normal random vector independent of β. Starting from τ_0² = σ² + P, a judicious choice of power allocation ensures that the SE parameter τ_t² decreases with t, until it converges at τ_T² = σ² in a finite number of iterations T.
Fig. 10. Comparison between offline and online trajectories of the effective noise variance, at L = 1024, M = 512, P = 15, σ² = 1, R = 1.6. The dashed line represents the pre-computed SE trajectory of τ_t². The plot shows 15 successful runs, and one uncommon run with many section errors. The true value of Var[s^t − β] during decoding tracks τ̂_t² too precisely to distinguish on this plot.

However, at finite lengths there are deviations from this trajectory of τ_t² predicted by SE, i.e., the variance of the effective noise vector (s^t − β) may deviate from τ_t². The online estimator τ̂_t² is found to track Var(s^t − β) = ‖s^t − β‖²/n very accurately, even when this variance deviates significantly from τ_t². This effect can be seen in Figure 10, where 16 independent decoder runs are plotted and compared with the SE trajectory for τ_t² (dashed line). For the 15 successful runs, the empirical variance Var(s^t − β) approaches σ² = 1 along different trajectories depending on how the decoding is progressing. In the unsuccessful run, Var(s^t − β) converges to a value much larger than σ².
In all the runs, τ̂_t² is indistinguishable from Var(s^t − β). This indicates that we can use the final value τ̂_T² to accurately estimate the power of the undecoded sections — and thus the number of sections decoded correctly — at runtime. Indeed, (τ̂_T² − σ²) is an accurate estimate of the total power in the incorrectly decoded sections. This, combined with the fact that the power allocation is non-increasing, allows the decoder to estimate the number of incorrectly decoded sections.
Furthermore, we can use the change in τ̂_t² between iterations to terminate the decoder early. If the value τ̂_t² has not changed between successive iterations, or the change is within some small threshold, then the decoder has stalled and no further iterations are worthwhile. Empirically we find that a stopping criterion with a small threshold (e.g., stop when |τ̂_t² − τ̂_{t−1}²| < P_L) leads to no additional errors compared to running the decoder for the full iteration count, while giving a significant speedup in most trials. Allowing a larger threshold for the stopping criterion gives even better running time improvements. This early termination criterion based on τ̂_t² gives us flexibility in choosing the number of blocks B in the iterative power allocation algorithm of Section II-A. This is because the number of AMP iterations is no longer tied to B, hence B can be chosen as large as desired.
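The two ingredients just described — the online variance estimate (15) and the stall check — are straightforward to compute; the following minimal Python sketch shows one way to write them (an illustration, not the authors' implementation). Inside the decoder loop one would call tau2_online on the residual z^t produced by step (2) and stop as soon as should_stop returns True; P_L here denotes the threshold used in the stopping rule.

```python
import numpy as np

def tau2_online(z: np.ndarray) -> float:
    """Online estimate (15): tau_hat_t^2 = ||z^t||^2 / n."""
    return float(np.dot(z, z)) / z.size

def should_stop(tau2_curr: float, tau2_prev: float, P_L: float) -> bool:
    """Early-termination check: stop once the change in tau_hat^2 falls below P_L."""
    return abs(tau2_curr - tau2_prev) < P_L
```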
To summarize, the online estimator τ̂_t² provides an estimate of the noise variance in each AMP iteration that accurately reflects how the decoding is progressing in that trial. It thereby enables the decoder to effectively adapt to deviations from the τ_t² values predicted by SE. This explains the improved performance compared to the offline methods of computing τ_t². More importantly, it provides an early termination criterion for the AMP decoder as well as a way to track decoding progress and predict the number of section errors at runtime.
V. PREDICTING E_sec, E_ber AND E_cw
For a given power allocation {P_ℓ} and reasonably large SPARC parameters (n, M, L), it is desirable to have a quick way to estimate the section error rate and codeword error rate, without resorting to simulations. Without loss of generality, we assume that the power allocation is asymptotically good, i.e., the large system limit SE parameters (computed using (10)) predict reliable decoding, with the SE converging to x_T = 1 and τ_T² = σ² in the large system limit. The goal is to estimate the finite length section error rate E_sec.
One way to estimate E_sec is via the state evolution prediction (13), using τ_T = σ. However, computing (13) requires computing L expectations, each involving a function of M independent standard normal random variables. The following result provides estimates of E_sec and E_cw that are as accurate as the SE-based estimates, but much simpler to compute.
Proposition 1. Let the power allocation {P_ℓ} be such that the state evolution iteration using the asymptotic approximation (10) converges to τ_T² = σ². Then, under the idealized assumption that β^T + A^*z^T = β + τ_T Z (where Z is a standard normal random vector independent of β), we have the following. The probability of a section (chosen uniformly at random) being incorrectly decoded is

\bar{E}_{sec} = 1 - \frac{1}{L} \sum_{\ell=1}^{L} E_U\!\left[ \Phi\!\left( \frac{\sqrt{nP_\ell}}{\sigma} + U \right)^{M-1} \right].   (17)

The probability of the codeword being incorrectly decoded is

\bar{E}_{cw} = 1 - \prod_{\ell=1}^{L} E_U\!\left[ \Phi\!\left( \frac{\sqrt{nP_\ell}}{\sigma} + U \right)^{M-1} \right].   (18)

In both expressions above, U is a standard normal random variable, and Φ(·) is the standard normal cumulative distribution function.

Proof: As τ_T² = σ², the effective observation in the final iteration has the representation β + σZ. The denoising function η^T generates a final estimate based on this effective observation, and the index of the largest entry in each section is chosen to form the decoded message vector β̂. Consider the decoding of section ℓ of β. Without loss of generality, we can assume that the first entry of the section is the non-zero one. Using the notation β_{ℓ,j} to denote the jth entry of the section β_ℓ, we therefore have β_{ℓ,1} = √(nP_ℓ), and β_{ℓ,j} = 0 for 2 ≤ j ≤ M. As the effective observation for section ℓ has the representation (β^T + A^*z^T)_ℓ = β_ℓ + σZ_ℓ, the section will be incorrectly decoded if and only if the following event occurs:

{√(nP_ℓ) + σZ_{ℓ,1} ≤ σZ_{ℓ,2}} ∪ ⋯ ∪ {√(nP_ℓ) + σZ_{ℓ,1} ≤ σZ_{ℓ,M}}.

Therefore, the probability that the ℓth section is decoded in error can be computed as

P_{err,\ell} = 1 - P\left( \sqrt{nP_\ell} + \sigma Z_{\ell,1} > \sigma Z_{\ell,j},\ 2 \le j \le M \right)
            = 1 - \int_{\mathbb{R}} \prod_{j=2}^{M} P\left( Z_{\ell,j} < \frac{\sqrt{nP_\ell}}{\sigma} + u \,\middle|\, Z_{\ell,1} = u \right) \phi(u)\, du
            = 1 - E_U\!\left[ \Phi\!\left( \frac{\sqrt{nP_\ell}}{\sigma} + U \right)^{M-1} \right],   (19)

where φ and Φ denote the density and the cumulative distribution function of the standard normal distribution, respectively. In the second line of (19), we condition on Z_{ℓ,1} and then use the fact that Z_{ℓ,1}, ..., Z_{ℓ,M} are i.i.d. ∼ N(0, 1).

The probability of a section chosen uniformly at random being incorrectly decoded is (1/L) Σ_{ℓ=1}^{L} P_{err,ℓ}. The probability of codeword error is one minus the probability that no section is in error, which is given by 1 − Π_{ℓ=1}^{L} (1 − P_{err,ℓ}). Substituting for P_{err,ℓ} from (19) yields the expressions in (17) and (18).

Fig. 11. Comparison of codeword error rate between simulation results and P_err-based analysis, for E_cw with varying M. L = 1024, R = 1.5, Eb/N0 = 5.7 dB. Results are well matched even when concentration is poor.

The section error rate and codeword error rate can be estimated using the idealized expressions in (17) and (18). This still requires computing L expectations, but each expectation is now a function of a single Gaussian random variable, rather than the M independent ones in the SE estimate. Thus we reduce the complexity by a factor of M over the SE approach; evaluations of Ē_sec and Ē_cw typically complete within a second.
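The expectations in (17) and (18) involve only the single scalar U, so they can be evaluated quickly by Monte Carlo; the sketch below is one such implementation (an illustration, not the authors' code), assuming a length-L array P_ell of section powers together with n, sigma and M as inputs.

```python
import numpy as np
from scipy.stats import norm

def predict_error_rates(P_ell, n, sigma, M, num_samples=100_000, seed=0):
    """Monte-Carlo evaluation of the idealized estimates (17) and (18)."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal(num_samples)
    # For each section l: E_U[ Phi( sqrt(n P_l)/sigma + U )^(M-1) ]
    p_correct = np.array([
        np.mean(norm.cdf(np.sqrt(n * P_l) / sigma + U) ** (M - 1))
        for P_l in P_ell
    ])
    E_sec_bar = 1.0 - p_correct.mean()     # (17)
    E_cw_bar = 1.0 - np.prod(p_correct)    # (18)
    return E_sec_bar, E_cw_bar
```

Reusing the same U samples across all sections keeps the cost at roughly L·num_samples CDF evaluations, consistent with the "within a second" running time noted above.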
Figure 7 shows Ē_sec alongside the SE estimate E_sec^SE for L = 1024 and various values of M. We see that both these estimates match the simulation results closely up to a certain value of M. Beyond this point, the simulation results diverge from theoretical estimates due to lack of concentration in section error rates across trials, as described in Sec. III-A.
Figure 11 compares the idealized codeword error probability
in (18) with that obtained from simulations. Here, there is a
good match between the estimate and the simulation results as
the concentration of section error rates across trials plays no
role — any trial with one or more section errors corresponds
to one codeword error.
Figure 12 shows that for L = 1024, the best value of M among those considered increases from M = 2^9 at lower snr values to M = 2^13 at higher snr values. This is due to the effect discussed in Section III-A, where larger snr values can support larger values of M before performance starts degrading due to loss of concentration.
At both R = 1 and R = 1.5, the SPARCs outperform
the LDPC coded modulation at Eb /N0 values close to the
Shannon limit, but the error rate does not drop off as quickly
at higher values of Eb /N0 . One way to enhance SPARC performance at higher snr is by treating it as a high-dimensional
modulation scheme and adding an outer code. This is the focus
of the next section.
Fig. 12. Comparison with LDPC coded modulation at R = 1
Fig. 13. Comparison with LDPC coded modulation at R = 1.5
VI. COMPARISON WITH CODED MODULATION
In this section, we compare the performance of AMP-decoded SPARCs against coded modulation with LDPC codes. Specifically, we compare with two instances of coded modulation with LDPC codes from the WiMAX standard IEEE 802.16e: 1) a 16-QAM constellation with a rate-1/2 LDPC code for an overall rate R = 1 bit/channel use/real dimension, and 2) a 64-QAM constellation with a rate-1/2 LDPC code for an overall rate R = 1.5 bits/channel use/real dimension. (The spectral efficiency is 2R bits/s/Hz.) The coded modulation results, shown in dashed lines in Figures 12 and 13, are obtained using the CML toolkit [16] with LDPC code lengths n = 576 and n = 2304.
Each figure compares the bit error rates (BER) of the coded modulation schemes with various SPARCs of the same rate, including a SPARC with a matching code length of n = 2304. Using P = E_b R and σ² = N_0/2, the signal-to-noise ratio of the SPARC can be expressed as P/σ² = 2R E_b/N_0. The SPARCs are implemented using Hadamard-based design matrices, power allocation designed using the iterative algorithm in Sec. II-A with B = L, and online τ̂_t² parameters with the early termination criterion (Sec. IV). An IPython notebook detailing the SPARC implementation is available at [17].
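The snr conversion used here is simple but easy to get wrong by a factor of two; as a worked check (an illustration, not taken from the referenced notebook), the following Python helper maps Eb/N0 in dB to the SPARC signal-to-noise ratio P/σ² = 2R·Eb/N0.

```python
def sparc_snr(ebn0_db: float, R: float) -> float:
    """P/sigma^2 = 2 * R * (Eb/N0), with Eb/N0 supplied in dB."""
    ebn0_linear = 10.0 ** (ebn0_db / 10.0)
    return 2.0 * R * ebn0_linear

# Example: R = 1.5 at Eb/N0 = 5.7 dB gives P/sigma^2 of roughly 11.1
print(sparc_snr(5.7, 1.5))
```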
VII. AMP WITH PARTIAL OUTER CODES
Figures 12 and 13 show that for block lengths of the order of
a few thousands, AMP-decoded SPARCs do not exhibit a steep
waterfall in section error rate. Even at high Eb /N0 values, it
is still common to observe a small number of section errors.
If these could be corrected, we could hope to obtain a sharp
waterfall behavior similar to the LDPC codes.
In the simulations of the AMP decoder described above,
when M and RPA are chosen such that the average error rates
are well-concentrated around the state evolution prediction,
the number of section errors observed is similar across trials.
Furthermore, we observe that the majority of sections decoded
incorrectly are those in the flat region of the power allocation,
i.e., those with the lowest allocated power. This suggests we
could use a high-rate outer code to protect just these sections,
sacrificing some rate, but less than if we naïvely protected
all sections. We call the sections covered by the outer code
protected sections, and conversely the earlier sections which
are not covered by the outer code are unprotected. In [4], it
was shown that a Reed-Solomon outer code (that covered all
the sections) could be used to obtain a bound on the probability
of codeword error from a bound on the probability of excess
section error rate.
Encoding with an outer code (e.g., LDPC or Reed-Solomon
code) is straightforward: just replace the message bits corresponding to the protected sections with coded bits generated
using the usual encoder for the chosen outer code. To decode,
we would like to obtain bit-wise posterior probabilities for
each codeword bit of the outer code, and use them as inputs to
a soft-information decoder, such as a sum-product or min-sum
decoder for LDPC codes. The output of the AMP decoding
algorithm permits this: it yields β T , which contains weighted
section-wise posterior probabilities; we can directly transform
these into bit-wise posterior probabilities. See Algorithm 2 for
details.
Moreover, in addition to correcting AMP decoding errors
in the protected sections, successfully decoding the outer
code also provides a way to correct remaining errors in the
unprotected sections of the SPARC codeword. Indeed, after
decoding the outer code we can subtract the contribution of
the protected sections from the channel output sequence y,
and re-run the AMP decoder on just the unprotected sections.
The key point is that subtracting the contribution of the later
Fig. 14. Division of the L sections of β^T for an outer LDPC code: the L_user sections carrying user bits are split into L_unprotected and L_protected sections, followed by L_parity sections; L_LDPC = L_protected + L_parity.
(protected) sections eliminates the interference due to these
sections; then running the AMP decoder on the unprotected
sections is akin to operating at a much lower rate.
Thus the decoding procedure has three stages: i) first round
of AMP decoding, ii) decoding the outer code using soft
outputs from the AMP, and iii) subtracting the contribution of
the sections protected by the outer code, and running the AMP
decoder again for the unprotected sections. We find that the
final stage, i.e., running the AMP decoder again after the outer
code recovers errors in the protected sections of the SPARC,
provides a significant advantage over a standard application
of an outer code, i.e., decoding the final codeword after the
second stage.
We describe this combination of SPARCs with outer codes
below, using an LDPC outer code. The resulting error rate
curves exhibit sharp waterfalls in final error rates, even when
the LDPC code only covers a minority of the SPARC sections.
We use a binary LDPC outer code with rate R_LDPC, block length n_LDPC and code dimension k_LDPC, so that k_LDPC/n_LDPC = R_LDPC. For clarity of exposition we assume that both n_LDPC and k_LDPC are multiples of log M (and consequently that M is a power of two). As each section of the SPARC corresponds to log M bits, if log M is an integer, then n_LDPC and k_LDPC bits represent an integer number of SPARC sections, denoted by

L_LDPC = n_LDPC / log M   and   L_protected = k_LDPC / log M,

respectively. The assumption that k_LDPC and n_LDPC are multiples of log M is not necessary in practice; the general case is discussed at the end of the next subsection.
We partition the L sections of the SPARC codeword as shown in Fig. 14. There are L_user sections corresponding to the user (information) bits; these sections are divided into unprotected and protected sections, with only the latter being covered by the outer LDPC code. The parity bits of the LDPC codeword index the last L_parity sections of the SPARC. For convenience, the protected sections and the parity sections together are referred to as the LDPC sections.
For a numerical example, consider the case where L = 1024, M = 256. There are log M = 8 bits per SPARC section. For a (5120, 4096) LDPC code (R_LDPC = 4/5) we obtain the following relationships between the number of the sections of each kind:

L_parity = (n_LDPC − k_LDPC) / log M = (5120 − 4096)/8 = 128,
L_user = L − L_parity = 1024 − 128 = 896,
L_protected = k_LDPC / log M = 4096/8 = 512,
L_LDPC = L_protected + L_parity = 512 + 128 = 640,
L_unprotected = L_user − L_protected = L − L_LDPC = 384.

Algorithm 2 Weighted position posteriors β_ℓ to bit posteriors p_0, ..., p_{log M − 1} for section ℓ ∈ [L]
Require: β_ℓ = [β_{ℓ,1}, ..., β_{ℓ,M}], for M a power of 2
  Initialise bit posteriors p_0, ..., p_{log M − 1} ← 0
  Initialise normalization constant c ← Σ_{i=1}^{M} β_{ℓ,i}
  for log i = 0, 1, ..., log M − 1 do
    b ← log M − log i − 1
    k ← i
    while k < M do
      for j = k + 1, k + 2, ..., k + i do
        p_b ← p_b + β_{ℓ,j} / c
      end for
      k ← k + 2i
    end while
  end for
  return p_0, ..., p_{log M − 1}
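As a companion to the pseudocode, here is a small Python sketch of Algorithm 2 (an illustration, not the authors' code). It assumes beta_l is a length-M array of non-negative weights for a single section and that bit 0 of a section's column index is its most significant bit, matching the loop ordering above.

```python
import numpy as np

def bit_posteriors(beta_l):
    """Algorithm 2: convert M = 2^b section posteriors into b bit posteriors.

    p[b] is the (weighted) posterior probability that bit b of the section's
    column index equals 1, with bit 0 taken as the most significant bit.
    """
    beta_l = np.asarray(beta_l, dtype=float)
    M = beta_l.size
    logM = int(np.log2(M))
    c = beta_l.sum()                          # normalization constant
    idx = np.arange(M)                        # 0-indexed column numbers
    p = np.zeros(logM)
    for b in range(logM):
        bit_is_one = (idx >> (logM - 1 - b)) & 1
        p[b] = beta_l[bit_is_one == 1].sum() / c
    return p
```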
There are L_user log M = 7168 user bits, of which the final k_LDPC = 4096 are encoded to a systematic n_LDPC = 5120-bit LDPC codeword. The resulting L log M = 8192 bits (including both the user bits and the LDPC parity bits) are encoded to a SPARC codeword using the SPARC encoder and power allocation described in previous sections.
We continue to use R to denote the overall user rate, and n to denote the SPARC code length so that nR = L_user log M. The underlying SPARC rate (including the overhead due to the outer code) is denoted by R_SPARC. We note that nR_SPARC = L log M, hence R_SPARC > R. For example, with R = 1 and L, M and the outer code parameters as chosen above, n = L_user(log M)/R = 7168, so R_SPARC = 1.143.
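As a quick numeric sanity check of these rate relations (an illustration using the example values above, not additional results), the following few lines reproduce n = 7168 and R_SPARC ≈ 1.143.

```python
import math

L, M, L_user, R = 1024, 256, 896, 1.0     # example values from the text
logM = int(math.log2(M))
n = L_user * logM / R                     # n R = L_user log M  ->  7168
R_sparc = L * logM / n                    # n R_SPARC = L log M ->  ~1.143
print(n, round(R_sparc, 3))
```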
A. Decoding SPARCs with LDPC outer codes
At the receiver, we decode as follows:
1) Run the AMP decoder to obtain β^T. Recall that entry j within section ℓ of β^T is proportional to the posterior probability of the column j being the transmitted one for section ℓ. Thus the AMP decoder gives section-wise posterior probabilities for each section ℓ ∈ [L].
2) Convert the section-wise posterior probabilities to bit-wise posterior probabilities using Algorithm 2, for each of the L_LDPC sections. This requires O(L_LDPC M log M) time complexity, of the same order as one iteration of AMP.
3) Run the LDPC decoder using the bit-wise posterior
probabilities obtained in Step 2 as inputs.
4) If the LDPC decoder fails to produce a valid LDPC
codeword, terminate decoding here, using β T to produce
β̂ by selecting the maximum value in each section (as
per usual AMP decoding).
5) If the LDPC decoder succeeds in finding a valid codeword, we use it to re-run AMP decoding on the unprotected sections. For this, first convert the LDPC codeword bits to a partial β̂_LDPC as follows, using a method similar to the original SPARC encoding:
a) Set the first L_unprotected·M entries of β̂_LDPC to zero.
b) The remaining L_LDPC sections (with M entries per section) of β̂_LDPC will have exactly one non-zero entry per section, with the LDPC codeword determining the location of the non-zero in each section. Indeed, noting that n_LDPC = L_LDPC log M, we consider the LDPC codeword as a concatenation of L_LDPC blocks of log M bits each, so that each block of bits indexes the location of the non-zero entry in one section of β̂_LDPC. The value of the non-zero in section ℓ is set to √(nP_ℓ), as per the power allocation.
Now subtract the codeword corresponding to β̂_LDPC from the original channel output y, to obtain y′ = y − Aβ̂_LDPC.
6) Run the AMP decoder again, with input y′, and operating only over the first L_unprotected sections. As this operation is effectively at a much lower rate than the first decoder (since the interference contribution from all the protected sections is removed), it is more likely that the unprotected bits are decoded correctly than in the first AMP decoder. We note that instead of generating y′, one could run the AMP decoder directly on y, but enforcing that in each AMP iteration, each of the L_LDPC sections has all its non-zero mass on the entry determined by β̂_LDPC, i.e., consistent with Step 5.b).
7) Finish decoding, using the output of the final AMP decoder to find the first L_unprotected·M elements of β̂, and using β̂_LDPC for the remaining L_LDPC·M elements.
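Step 5 is essentially a partial re-encoding; the sketch below shows one possible Python implementation (illustrative, with an assumed MSB-first mapping from bits to column index matching the convention used for Algorithm 2 above). It takes the decoded LDPC codeword bits, the section powers P_ell and the design matrix A, and produces the modified channel output y′ used by the second AMP run.

```python
import numpy as np

def build_beta_ldpc(ldpc_bits, L, M, L_unprotected, P_ell, n):
    """Embed the decoded LDPC codeword into a partial SPARC vector (Step 5)."""
    logM = int(np.log2(M))
    beta_ldpc = np.zeros(L * M)
    blocks = np.asarray(ldpc_bits, dtype=int).reshape(-1, logM)   # L_LDPC blocks
    for i, block in enumerate(blocks):
        sec = L_unprotected + i                      # protected + parity sections
        col = int("".join(map(str, block)), 2)       # assumed MSB-first mapping
        beta_ldpc[sec * M + col] = np.sqrt(n * P_ell[sec])
    return beta_ldpc

# y_prime = y - A @ build_beta_ldpc(ldpc_bits, L, M, L_unprotected, P_ell, n)
```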
In the case where n_LDPC and k_LDPC are not multiples of log M, the values L_LDPC = n_LDPC/log M and L_protected = k_LDPC/log M will not be integers. Therefore one section at the boundary of L_unprotected and L_protected will consist of some unprotected bits and some protected bits. Encoding is not affected in this situation, as the LDPC encoding happens prior to SPARC codeword encoding. When decoding, conversion to bit-wise posterior probabilities is performed for all sections containing LDPC bits (including the intermediate section at the boundary) and only the n_LDPC bit posteriors corresponding to the LDPC codeword are given to the LDPC decoder. When forming β̂_LDPC, the simplest option is to treat the intermediate section as though it were unprotected and set it to zero. It is also possible to compute column posterior probabilities which correspond to the fixed LDPC bits and probabilities arising from y, though doing so is not covered in this paper.
B. Simulation results
The combined AMP and outer LDPC setup described above
was simulated using the (5120, 4096) LDPC code (R_LDPC = 4/5) specified in [18] with a min-sum decoder. Bit error rates
were measured only over the user bits, ignoring any bit errors
in the LDPC parity bits.
Figure 15 plots results at overall rate R = 4/5, where the underlying LDPC code (modulated with BPSK) can be compared to the SPARC with LDPC outer code, and to a plain SPARC with rate 4/5. In this case RPA = 0, giving a flat power allocation. Figure 16 plots results at overall rate R = 1.5, where we can compare to the QAM-64 WiMAX LDPC code, and to the plain SPARC with rate 1.5 of Figure 13.

Fig. 15. Comparison to plain AMP and to BPSK-modulated LDPC at overall rate R = 0.8. The SPARCs are both L = 768, M = 512. The underlying SPARC rate when the outer code is included is R_SPARC = 0.94. The BPSK-modulated LDPC is the same CCSDS LDPC code [18] used for the outer code. For this configuration, L_user = 654.2, L_parity = 113.8, L_unprotected = 199.1, L_protected = 455.1, and L_LDPC = 568.9.

Fig. 16. Comparison to plain AMP and to the QAM-64 WiMAX LDPC of Section VI at overall rate R = 1.5. The SPARCs are both L = 1024, M = 512. The underlying SPARC rate including the outer code is R_SPARC = 1.69. For this configuration, L_user = 910.2, L_parity = 113.8, L_unprotected = 455.1, L_protected = 455.1, and L_LDPC = 455.1.
The plots show that protecting a fraction of sections with an outer code does provide a steep waterfall above a threshold value of Eb/N0. Below this threshold, the combined SPARC + outer code has worse performance than the plain rate-R SPARC without the outer code. This can be explained as follows. The combined code has a higher SPARC rate R_SPARC > R, which leads to a larger section error rate for the first AMP decoder, and consequently, to worse bit-wise posteriors at the input of the LDPC decoder. For Eb/N0 below the threshold, the noise level at the input of the LDPC decoder is beyond the error-correcting capability of the LDPC code, so the LDPC code effectively does not correct any section errors. Therefore the overall performance is worse than the performance without the outer code.
Above the threshold, we observe that the second AMP decoder (after subtracting the contribution of the LDPC-protected sections) is successful at decoding the unprotected sections that were initially decoded incorrectly. This is especially apparent in the R = 4/5 case (Figure 15), where the section errors are uniformly distributed over all sections due to the flat power allocation; errors are just as likely in the unprotected sections as in the protected sections.
C. Outer code design choices
In addition to the various SPARC parameters discussed in previous sections, performance with an outer code is sensitive to what fraction of sections are protected by the outer code. When more sections are protected by the outer code, the overhead of using the outer code is also higher, driving R_SPARC higher for the same overall user rate R. This leads to worse performance in the initial AMP decoder, which has to operate at the higher rate R_SPARC. As discussed above, if R_SPARC is increased too much, the bit-wise posteriors input to the LDPC decoder are degraded beyond its ability to successfully decode, giving poor overall performance.
Since the number of sections covered by the outer code depends on both log M and n_LDPC, various trade-offs are possible. For example, given n_LDPC, choosing a larger value of log M corresponds to fewer sections being covered by the outer code. This results in smaller rate overhead, but increasing log M may also affect concentration of the error rates around the SE predictions, as discussed in Section III-A. We conclude with two remarks about the choice of parameters for the SPARC and the outer code.
1) When using an outer code, it is highly beneficial to have good concentration of the section error rates for the initial AMP decoder. This is because a small number of errors in a single trial can usually be fully corrected by the outer code, while occasional trials with a very large number of errors cannot.
2) Due to the second AMP decoder operation, it is not necessary for all sections with low power to be protected by the outer code. For example, in Figure 15, all sections have equal power, and around 30% are not protected by the outer code. Consequently, these sections are often not decoded correctly by the first decoder. Only once the protected sections are removed is the second decoder able to correctly decode these unprotected sections. In general the aim should be to cover all or most of the sections in the flat region of the power allocation, but experimentation is necessary to determine the best tradeoff. An interesting direction for future work would be to develop an EXIT chart analysis to jointly optimize the design of the SPARC and the outer LDPC code.

ACKNOWLEDGEMENT
The authors thank the Editor and the anonymous referees for several helpful comments which improved the paper.

REFERENCES
[1] R. G. Gallager, Information theory and reliable communication.
Springer, 1968.
[2] A. Guillén i Fàbregas, A. Martinez, and G. Caire, Bit-interleaved coded
modulation. Now Publishers Inc, 2008.
[3] G. Böcherer, F. Steiner, and P. Schulte, “Bandwidth efficient and
rate-matched low-density parity-check coded modulation,” IEEE Trans.
Commun., vol. 63, no. 12, pp. 4651–4665, 2015.
[4] A. Barron and A. Joseph, “Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity,” IEEE Trans. Inf.
Theory, vol. 58, no. 5, pp. 2541–2557, Feb 2012.
[5] A. Joseph and A. R. Barron, “Fast sparse superposition codes have near
exponential error probability for R < C,” IEEE Trans. Inf. Theory,
vol. 60, no. 2, pp. 919–942, Feb. 2014.
[6] A. R. Barron and S. Cho, “High-rate sparse superposition codes with
iteratively optimal estimates,” in Proc. IEEE Int. Symp. Inf. Theory,
2012.
[7] S. Cho and A. Barron, “Approximate iterative Bayes optimal estimates for high-rate sparse superposition codes,” in Sixth Workshop on
Information-Theoretic Methods in Science and Engineering, 2013.
[8] J. Barbier and F. Krzakala, “Replica analysis and approximate message
passing decoder for superposition codes,” in Proc. IEEE Int. Symp. Inf.
Theory, 2014, pp. 1494–1498.
[9] J. Barbier, C. Schülke, and F. Krzakala, “Approximate message-passing
with spatially coupled structured operators, with applications to compressed sensing and sparse superposition codes,” Journal of Statistical
Mechanics: Theory and Experiment, no. 5, 2015.
[10] J. Barbier and F. Krzakala, “Approximate message-passing decoder and
capacity-achieving sparse superposition codes,” IEEE Trans. Inf. Theory,
vol. 63, no. 8, pp. 4894 – 4927, 2017.
[11] C. Rush, A. Greig, and R. Venkataramanan, “Capacity-achieving sparse
superposition codes via approximate message passing decoding,” IEEE
Trans. Inf. Theory, vol. 63, no. 3, pp. 1476–1500, March 2017.
[12] C. Condo and W. J. Gross, “Sparse superposition codes: A practical
approach,” in Proc. IEEE Workshop on Signal Processing Systems
(SiPS), 2015.
[13] ——, “Implementation of sparse superposition codes,” IEEE Trans.
Signal Process., vol. 65, no. 9, pp. 2421–2427, 2017.
[14] C. Rush and R. Venkataramanan, “The error exponent of sparse regression codes with AMP decoding,” in Proc. IEEE Int. Symp. Inf. Theory,
2017.
[15] S. Rangan, “Generalized approximate message passing for estimation
with random linear mixing,” in Proc. IEEE Int. Symp. Inf. Theory, 2011.
[16] (2008) The coded modulation library. [Online]. Available: http://www.
iterativesolutions.com/Matlab.htm
[17] Python script for SPARC with AMP decoding. [Online]. Available: https:
//github.com/sigproc/sparc-amp/blob/master/sparc amp.ipynb
[18] 131.0-B-2 TM Synchronization and Channel Coding, CCSDS, August
2011. [Online]. Available: https://public.ccsds.org/Pubs/131x0b2ec1.pdf
IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 4, No 2, July 2011
ISSN (Online): 1694-0814
www.IJCSI.org
Performance Analysis Cluster and GPU Computing
Environment on Molecular Dynamic Simulation of BRV-1 and
REM2 with GROMACS
Heru Suhartanto 1, Arry Yanuar 2 and Ari Wibisono 3
1 Faculty of Computer Science, Universitas Indonesia, Depok, 16424, Indonesia
2 Department of Pharmacy, Faculty of Mathematics and Natural Science, Universitas Indonesia, Depok, 16424, Indonesia
3 Faculty of Computer Science, Universitas Indonesia, Depok, 16424, Indonesia
Abstract
One of the applications that needs high performance computing resources is molecular dynamics. There is some software available that performs molecular dynamics; one of these is the well known GROMACS. Our previous experiments simulating the molecular dynamics of Indonesian-grown herbal compounds showed sufficient speed up on a 32-node cluster computing environment. In order to obtain a reliable simulation, one usually needs to run the experiment on the scale of hundreds of nodes, but this is expensive to develop and maintain. Since the invention of Graphical Processing Units that are also useful for general programming, many applications have been developed to run on them. This paper reports our experiments that evaluate the performance of GROMACS running on two different environments: cluster computing resources and GPU-based PCs. We run the experiments on the BRV-1 and REM2 compounds. Four different GPUs are installed on the same type of quad-core PCs; they are GeForce GTS 250, GTX 465, GTX 470 and Quadro 4000. We build a cluster of 16 nodes based on these four quad-core PCs. The preliminary experiment shows that the runs on the GTX 470 are the best among the other types of GPUs as well as the cluster computing resource. A speed up of around 11 and 12 is gained, while the cost of a computer with a GPU is only about 25 percent of that of the cluster we built.
Keywords: Dynamic, GROMACS, GPU, Cluster Computing, Performance Analysis.
1. Introduction
A virus is one of the causes of illness. It is the smallest natural organism. Because of its simplicity and its small size, biologists chose viruses as the first effort to simulate life forms with computers, and then chose one of the smallest, the satellite tobacco mosaic virus, for further investigation. Researchers simulated viruses in a drop of salt water using the software NAMD (Nanoscale Molecular Dynamics) built by the University of Illinois at Urbana-Champaign [13].
Molecular Dynamics (MD) shows molecule structures, movement and the function of molecules. MD performs the computation of atom movement in a molecular system using molecular mechanics. The dynamics of a protein molecule is affected by the protein structure and is an important element of the special function and also the general function of the protein. The understanding of the relation between the dynamical three-dimensional structures of a protein is very important to know how a protein works. However, real experiments on protein dynamics are very difficult to perform. Thus, people develop molecular dynamics simulation as a virtual experimental method which is able to analyze the relation between structure and protein dynamics. The simulation explores the conformational energy of a protein molecule itself. Up to today, the development of MD simulation is still in progress. MD simulation in general is used to gain information on the movement and the changes of structure conformation of a protein as well as other biological macromolecules. Through this simulation, thermodynamic and kinetic information of a protein can be explored [2].
GROMACS is a computer program which is able to run MD simulation and energy minimization. It simulates the Newtonian equations of motion for systems with hundreds to millions of molecules. GROMACS can be applied to biochemical molecules such as proteins, which are dynamic and have complicated bonding. In addition, research on non-biological molecules, such as polymers, can be done with the program [7]. The property of a protein being dynamic and changing in time is one of the reasons why MD is necessary [6]. With this simulation, the protein molecular movement and the interactions that occur at the molecular level at a certain time can be estimated [8].
Our previous experiment using GROMACS on a cluster computing environment produced an analysis of MD simulation results for three proteins of the RGK subfamily: the Rad, Gem and Rem GTPases. MD simulation of the RGK subfamily proteins will help us to analyze the differences in activity and the regulation mechanism among the proteins that are members of this subfamily [9]. As we mention in the following, we still face some limitations of computing resources.
The parallel computation technique for MD using GROMACS can visualize atomic details of the formation of a small DPPC (dipalmitoyl phosphatidyl choline) vesicle on time scales up to 90 ns [2].
#CPU      1        4        8        16       32
(days)    0        0        0        0        0
          41.7     10.4     5.2      2.6      1.3
          206.3    52.1     26.0     13.0     6.5
          1250     512.5    156.3    78.1     39.1
          3750     937.5    468.8    234.4    117.2

Figure 1. Simulation in 90 ns which shows the formation of the DPPC vesicle (dipalmitoyl phosphatidyl choline) [2]
Figure 1 shows that the simulation requires a huge amount of computing resources, on the scale of hundreds of processors, in order to complete the processes within an acceptably short time. However, for some people, building and maintaining a cluster of hundreds of computers acting as hundreds of processors requires a lot of other resources such as electricity supply and space to accommodate the machines. The invention of General Programming on Graphical Processing Units (GPGPU), which provides hundreds of processors, encourages many scientists to investigate the possibility of running their experiments on GPU-based computers.
GPU computing with CUDA (Compute Unified Driver Architecture) brings parallel data computation to wider societies. At the end of 2007, there were more than 40 million CUDA-based GPUs available to application developers. In terms of price, a GPU graphics card is relatively cheap. In the year 2007, one unit of a card capable of performing 500 GigaFlops cost less than 200 USD. Currently, one unit of a GeForce GTX 470 GPU, which consists of 448 processors, costs less than 300 USD. Systems based on GPUs run relatively fast. The bandwidth and the computation reach about 10 times that of a regular CPU. The micro-benchmark performance reaches about 472 GFLOPS for mathematical instructions (for the 8800 Ultra type GPU); the raw bandwidth reaches 86 GB per second (for the Tesla C870 type GPU). Some applications can run even faster, such as N-body computation that reaches 240 GFLOPS. This is about 12 billion interactions per second. This case study was conducted on molecular dynamics and seismic data processing [2].
General Purpose Programming on GPUs (GPGPU) is the process of programming general, non-graphical applications on a GPU. Initially, the technique was relatively difficult: the problems to be solved had to be expressed in terms of graphics, data had to be mapped into texture maps, and the algorithms adjusted for image synthesis. Even though this technique is promising, it is delicate, particularly for non-graphics developers. There are also some constraints such as the overheads of the graphics API, the memory limitation, and the need for data passes to fulfill the bandwidth.
Libraries for GPGPU have been developed and provided in ready-to-use programming languages. Many libraries have been developed for CUDA, such as CUBLAS (Basic Linear Algebra Subprograms in CUDA) and CUFFT (Fast Fourier Transform in CUDA) [3,4]. With the availability of GPGPU, CUDA and the libraries, one of the difficulties in developing applications to run on GPUs is solved. With the increasing performance of GPUs, it is expected that GPU computing supports the development of new applications and new techniques in many areas of research and industry.
Many applications have been developed to run on GPU-based computers. The capability of a machine to provide highly detailed images in a very short time is needed in breast cancer scanning processes. Techniscan, a company that develops automated ultrasound imaging systems, switched their implementation from CPU-based to CUDA and NVIDIA Tesla GPUs. The CUDA-based system is capable of processing the Techniscan algorithm two times faster. Once the appliance obtains an operational permit from the government, patients will know their cancer detection results within one visit [5].
The limited cluster resources that one has, and the current development of applications which run nicely on GPUs, motivate us to study the performance of molecular dynamics simulation run on two different computing environments. This paper reports our experiments that evaluate the performance of GROMACS running on cluster computing resources and a GPU-based PC.
2. The experiment and results
We choose four different types of GPU, namely GeForce GTS 250, GeForce GTX 465, GeForce GTX 470, and Quadro 4000 (for simplicity, the phrase GeForce will not be mentioned after this). Their specifications are in the following table.
Table 1. The specification of PC installed with GPU

Description       GTX 470          GTX 465          Quadro 4000    GTS 250
CUDA Cores        448              352              256            128
Memory            1280 MB GDDR5    1024 MB GDDR5    2 GB GDDR5     1 GB DDR3
Mem. Clock        1674 MHz         1603 MHz         —              1100
Mem. Interface    320 Bit          256 Bit          256 Bit        256 Bit
Mem. Bandwidth    133.9 GB/sec     102.6 GB/sec     89.6 GB/sec    70.4 GB/sec
We install the GTS 250 into a PC based on an Intel® Pentium(R) 4 CPU at 3.20 GHz, with 4 GB RAM and an 80 GB SATA HDD. The other GPUs, GTX 465, GTX 470 and Quadro 4000, are installed into PCs with the specification given in the following Table 2. We also built a 16-core cluster computing environment from four PCs with this specification.
Table 2. The specification of PC installed with GPU
INTEL Core i5 760 Processor for Desktop Quad Core, 2.8GHz,
8MB Cache, Socket LGA1156
Memory DDR3 2x 2GB, DDR3, PC-10600 Kingston
Main board ASUS P7H55M-LX
Thermaltake LitePower 700Watt
DVD±RW Internal DVD-RW, SATA, Black, 2MB, 22x DVD+R
Write SAMSUNG/LITEON
Thermaltake VH8000BWS Armor+ MX
WESTERN DIGITAL Caviar Black HDD Desktop 3.5 inch SATA
500GB, 7200RPM, SATA II, 64MB Cache, 3.5", include SATA
cables and mounting screws
We installed the CUDA SDK 2.3 and some applications on our GPU PCs, such as GROMACS 4.0.5 [13], which runs on the GPU but does not fully use the GPU's processors for parallel computation. GROMACS is a program that is designed to run serially or in parallel. The parallel features in GROMACS are provided to speed up the simulation processes. GROMACS has a parallel algorithm based on domain decomposition. The domain decomposition algorithm is useful for solving boundary value problems by decomposing the main problem into some smaller boundary value problems. In GROMACS, the domain decomposition algorithm divides the simulation work over different processors so that the processes can be done in parallel. Each of the processors will collect and coordinate the motion of the simulated particles.
In the parallel simulation processes, the processors communicate with each other. Workload imbalance is caused by different particle distributions on each of the processors as well as the different particle interactions among the processors. In order to balance the work, GROMACS uses a dynamic load balancing algorithm where the volume of each domain decomposition cell can be adjusted independently. The mdrun script of GROMACS will automatically start dynamic load balancing when the measured load imbalance in the computation is 5 percent or more. Load imbalance is recorded in the output log created by GROMACS.
As there is no performance test record yet for GROMACS 4.5 [13] running on the Quadro 4000, when a user runs an MD simulation a warning message appears saying that the graphics card used has not been tested with the GROMACS 4.5 software. Further investigation is necessary in order to see how GROMACS can run much better on the Quadro 4000.
In the experiment, we use two different compounds. The first is Breda Virus 1, abbreviated as BRV-1 or 1brv (PDB id 1BRV) [14]. Its scientific name is Bovine respiratory syncytial virus [Biochemistry, 1996, 35 (47), pp 14684–14688]. The second one is REM2 (PDB id: 3CBQ). The molecular function of REM2 induces angiogenesis (essential for tumor growth, invasion and metastasis), so REM2 is a potential target for overcoming the condition of angiogenesis. REM2 is known to regulate p53, related to the immortal property of somatic cells. REM2 also causes stem cells to become immortal; REM2 allegedly plays a role in the self-renewal mechanism of hESC (human Embryonic Stem Cells), which protects them from apoptosis (programmed cell death) [12].
In the following Table 3, we provide the time required for the simulation of the 1BRV compound for various simulation lengths (time steps).
Table 3. The time of simulation processes of 1BRV

Resources      200ps      400ps        600ps        800ps        1000ps
Cluster 16     39m:22s    1h:19m:13s   1h:59m:32s   2h:39m:37s   3h:19m:08s
GTS 250        23m:26s    46m:57s      1h:10m:23s   1h:33m:58s   1h:57m:18s
Quadro 4000    6m:45s     13m:35s      20m:23s      27m:05s      33m:29s
GTX 465        8m:48s     8m:32s       12m:46s      16m:57s      21m:21s
GTX 470        3m:30s     7m:00s       10m:29s      14m:07s      17m:14s
It is obvious that the experiments with the GTX 470 outperform the simulations in the other environments. For example, for 1000 ps with the GTX 470, it requires 17 minutes and 14 seconds to finish the simulation, while with the GTS 250 it requires 1 hour 27 minutes and 32 seconds, which is almost seven times that of the GTX 470. The performance degrades from GTX 470, to GTX 465, Quadro 4000, and GTS 250. This is perhaps because each of these GPUs has a different number of GPU cores: 448, 352, 256 and 128 cores respectively. Interestingly, all the experiments on GPUs outperform those on Cluster 16.
We next simulate REM2, and the time spent in the simulation is provided in the following Table 4. It is also obvious that a similar pattern occurs in this simulation: the GTX 470 performs best, and all GPUs outperform Cluster 16.

Table 4. The time of simulation processes of REM2

Resources      200ps      400ps        600ps        800ps        1000ps
Cluster 16     45m:21s    1h:27m:27s   2h:13m:05s   2h:57m:01s   3h:43m:15s
GTS 250        27m:56s    56m:11s      1h:24m:26s   1h:52m:23s   2h:23m:32s
Quadro 4000    6m:45s     13m:35s      20m:23s      27m:05s      33m:29s
GTX 465        4m:13s     8m:32s       12m:46s      16m:57s      21m:21s
GTX 470        3m:30s     7m:00s       10m:29s      14m:07s      17m:14s

We finally provide the speed up of the GTX 470 compared with Cluster 16; the following Table 5 summarizes the result from each simulation of 1BRV (SU-1BRV) and REM2 (SU-REM2).
Table 5. Speed up of GTX 470 relative to Cluster 16

SU\time steps   200ps      400ps      600ps      800ps      1000ps
SU-REM2         12.95714   12.49286   12.69475   12.53955   12.95455
SU-1BRV         11.24762   11.31667   11.40223   11.30697   11.55513
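As a quick cross-check of Table 5 (an illustration, not part of the original analysis), the speed up can be recomputed directly from the wall-clock times in Tables 3 and 4; the example below reproduces the 200 ps SU-1BRV entry.

```python
def to_seconds(t: str) -> int:
    """Convert a duration like '1h:19m:13s' or '39m:22s' into seconds."""
    total, digits = 0, ""
    for ch in t:
        if ch.isdigit():
            digits += ch
        elif ch in "hms":
            total += int(digits) * {"h": 3600, "m": 60, "s": 1}[ch]
            digits = ""
    return total

# 200 ps 1BRV run: Cluster 16 vs GTX 470 (values from Table 3)
speedup = to_seconds("39m:22s") / to_seconds("3m:30s")
print(round(speedup, 5))   # ~11.24762, matching the SU-1BRV column in Table 5
```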
It is obvious that the GTX 470 gains a speed up of about 11 relative to Cluster 16 on the 1BRV simulation, and about 12 on the REM2 simulation. In terms of hardware price, the PC with the GTX 470 is about 25 percent that of Cluster 16 with the same specification.

3. Conclusion

Molecular dynamics simulation of BRV-1 and REM2 was conducted on two different computing environments: cluster computing of 16 nodes and GPU computing with various types of NVIDIA GPUs — GTS 250, GTX 465, GTX 470 and Quadro 4000. Both the cluster and the GPU computers have the same hardware specification, except that of the GTS 250. The experiments show the efficacy of the GTX 470 compared to the others. It gains a speed up of around 11 and 12, while the cost of the GTX 470 computer is only about 25 percent that of the Cluster 16 processors. Even though our preliminary findings are interesting, further investigation is needed as the development of molecular dynamics codes on GPUs is still in progress.

Acknowledgments

This research is supported by a Universitas Indonesia Multi Discipline Research Grant in the year 2010. The final stage of the writing process of the paper was conducted when the first author visited the School of ITEE at the University of Queensland.

References
[1] Luebke, David, The Democratization of Parallel computing:
High Performance Computing with
CUDA, t he
International Conference for High Performance Computing,
Networking,
Storage
and
Analysis,
2007,
http://sc07.supercomputing.org/
[2] de Vries, A.H., A. E. Mark, and S. J. Marrink Molecular
Dynamics Simulation of the Spontaneous Formation of a
Small DPPC Vesicle in Water in Atomistic Detail, J. Am.
Chem. Soc. 2004, 126, 4488-448
[3] Buck, Ian, Cuda Programming, the International Conference
for High Performance Computing, Networking, Storage and
Analysis, 2007, http://sc07.supercomputing.org/
[4] Fatica, Massimiliano, CUDA Libraries, the International
Conference for High Performance Computing, Networking,
Storage
and
Analysis,
2007,
http://sc07.supercomputing.org/
[5] CUDA Medicine, Medical Applications, http://www.nvidia.co.uk/object/cuda_medical_uk.html [accessed 13 Feb 2010]
[6]Karplus, M. & J. Kuriyan. Molecular Dynamics and Protein
Function. PNAS, 2005. 102 (19): 6679-6685
[7] Spoel DVD, Erick L, Berk H, Gerit G, Alan EM & Herman JCB, GROMACS: Fast, Flexible and Free, J. Comput Chem, 2005, 26(16): 1701-1707
[8] Adcock SA and JA McCammon. Molecular Dynamics: Survey of Methods for Simulating the Activity of Proteins. Chem Rev 2006. 105(5):1589-1615
[9]Correll,RN., Pang C, Niedowicz, DM, Finlin, BS and. Andres,
DA., The RGK family of GTP-binding Proteins: Regulators
of Voltage-dependent Calcium Channels and Cytoskeleton
Remodeling
[10]Kutzner, C, D. Van Der Spoel, M Fechner, E Lindahl, U W.
Schmitt, B L. De Groot, H Grubmüller, Speeding up
parallel GROMACS on high-latency networks J. C omp.
Chem. 2007. 28(12): 2075-2084.
[11]J.C. Phillips, R. Braun,W.Wang, J. Gumbart, E. Tajkhorshid,
E. Villa, C. Chipot, R.D. Skeel, L. Kale, K. Schulten,
Scalable molecular dynamics with NAMD, J. Comp. Chem.
26 (2005) 1781–1802.
[12] Ruben Bierings, Miguel Beato, Michael J. Edel, An Endothelial Cell Genetic Screen Identifies the GTPase Rem2 as a Suppressor of P19 Expression that Promotes Endothelial Cell Proliferation and Angiogenesis, the Journal of Biological Chemistry, vol 283, no. 7, pp. 4408–4416
[13]GROMACS, http://www.gromacs.org
[14][http://www.uniprot.org/taxonomy/360393].
Heru Suhartanto is a Professor in the Faculty of Computer Science, Universitas Indonesia (Fasilkom UI). He has been with Fasilkom UI since 1986. Previously he held positions such as Postdoctoral Fellow at the Advanced Computational Modelling Centre, the University of Queensland, Australia, in 1998–2000, and two periods as Vice Dean for General Affairs at Fasilkom UI since 2000. He graduated from undergraduate study at the Department of Mathematics, UI in 1986. He holds a Master of Science from the Department of Computer Science, The University of Toronto, Canada, since 1990. He also holds a PhD in Parallel Computing from the Department of Mathematics, The University of Queensland, since 1998. His main research interests are Numerical, Parallel, Cloud and Grid computing. He is also a reviewer for several refereed international journals such as the Journal of Computational and Applied Mathematics, the International Journal of Computer Mathematics, and the Journal of Universal Computer Science. Furthermore, he has supervised several Master and PhD students; he has won several research grants; holds several software copyrights; and has published a number of books in Indonesian and international papers in proceedings and journals. He is also a member of IEEE and ACM.
Arry Yanuar is an Assistant Professor in the Department of Pharmacy, Universitas Indonesia, since 1990. He graduated from the undergraduate program of the Department of Pharmacy, Universitas Indonesia in 1990. He also holds an Apothecary (Apoteker) profession certificate from 1991. In 1997, he finished his Master program at the Department of Pharmacy, Universitas Gadjah Mada. He received his PhD in 2006 from the Nara Institute of Science and Technology (NAIST), Japan, in the Structural Biology/Protein Crystallography laboratory. In 1999-2003 he worked as a pharmacy expert in ISO certification for pharmacy industries at Lloyd's Register Quality Assurance. In 2002, he visited the National Institutes of Health (NIH), Bethesda, USA. He has won several research grants and published papers in international journals and conferences.
Ari Wibisono is a Research Assistant at the Faculty of Computer Science, Universitas Indonesia (Fasilkom UI). He graduated from the undergraduate program of Fasilkom UI and is currently taking the Master program at Fasilkom UI.
arXiv:1705.07533v1 [cs.IT] 22 May 2017

A Note on the Information-Theoretic-(in)Security of Fading Generated Secret Keys

Robert Malaney∗
Abstract—In this work we explore the security of secret keys
generated via the electromagnetic reciprocity of the wireless
fading channel. Identifying a new sophisticated colluding attack,
we explore the information-theoretic-security for such keys in the
presence of an all-powerful adversary constrained only by the
laws of quantum mechanics. Specifically, we calculate the reduction in the conditional mutual information between transmitter
and receiver that can occur when an adversary with unlimited
computational and communication resources places directionalantenna interceptors at chosen locations. Such locations, in
principal, can be arbitrarily far from the intended receiver yet
still influence the secret key rate.
The fast generation in real-world communication systems
of an information-theoretic-secure key remains an ongoing
endeavor. Currently, almost all key generation systems being
considered for commercial deployment scenarios are those
based on the quantum mechanical properties of the information
carriers. However, although great progress has been made over
the years with respect to quantum key distribution (QKD)
systems, significant practical challenges remain before their
deployment becomes ubiquitous (see [1] for a recent review).
The technical reasons for such a circumstance are many-fold,
but they do give rise to a search for other sources of shared
randomness that can be exploited for secret key generation.
Transceivers connected via a wireless channel offer up one
possibility - via purely classical means.1
Indeed, for many years it has been known that the random
fading inherent in the general wireless environment (coupled
with electromagnetic reciprocity) is a potential source of secret
key generation (for recent reviews see [4–7]). Conditioned on
the reasonable (and trivial to verify) assumption that Eve (the
adversary) is not in the immediate vicinity (a few cm at GHz
frequencies) of Bob (the legitimate receiver), it is often stated
that fading can lead to information-theoretic-secure keys. Here
we clarify that this is not the case when M_e → ∞, M_e being the number of receiving devices held by Eve.
An all-powerful adversary constrained only by nature herself is used in almost all security analyses of QKD systems
[1] - and it is the protection from such an unearthly adversary
that lends quantum-based key systems their acclaimed security
status. Such acclaimed security is information-theoretic-secure
(conditioned on the classical channel being authenticated) and
remains in place irrespective of the computational resources
or energy afforded to the adversary, even as M_e → ∞.
∗ Robert Malaney is with the School of Electrical Engineering and Telecommunications at the University of New South Wales, Sydney, NSW 2052,
Australia. email: [email protected]
1 Such a classical route is particularly important since QKD directly over
wireless channels in the GHz range is constrained to short distances [2, 3].
Here, we explore an attack by such an all-powerful Eve on
wireless fading generated key systems. More specifically, we
quantify how this Eve can place directional-antenna receivers
(e.g. apertures, linear-arrays, phased-arrays, etc.) at multiple
locations in real-world scattering environments, and in principle drive to arbitrarily low levels the conditional mutual information
between Bob and Alice (the transmitter). Practical scenarios
invoking limited forms of the attack are discussed, showing
(at the very least) how current fading-generated key systems
are partially susceptible to the attack.
We will take the term 'information-theoretic-secure' to specifically mean the following. Conditioned on some well-defined assumption (or restriction) on the system model, but independent of the capabilities of an adversary (other than being constrained by natural law), the key information accessible to an adversary can be driven to arbitrarily small levels for increasing use of some system resource. Specifically, consider some series of observations of the random variables X = (X_1, X_2, ..., X_n), Y = (Y_1, Y_2, ..., Y_n), and Z = (Z_1, Z_2, ..., Z_n) of a shared random resource by Alice, Bob and Eve, respectively. We assume a scheme with unlimited message exchanges between Alice and Bob (available to Eve) whereby for some sufficiently large n, keys computed by Alice and Bob (K_A and K_B, respectively) are made to satisfy the following requirements for some ε > 0: (i) Pr(K_A ≠ K_B) ≤ ε, (ii) n^{-1} I(K_A; Z) ≤ ε, (iii) n^{-1} H(K_A) ≥ r_K − ε, and (iv) n^{-1} log|C| ≤ n^{-1} H(K_A) + ε, where H(·) is the entropy, I(·;·) is the mutual information between two random variables, I(·;·|·) is the mutual information between two random variables conditioned on another random variable, |C| is the cardinality of the key's alphabet (C), and r_K is an achievable secret key rate of the scheme. In general, the secret key rate is not known, but an upper limit can be given by [8, 9] r_K ≤ min(I(X; Y), I(X; Y|Z)), where the mutual information between X and Y conditioned on Z can be written
I(X; Y|Z) = H(X, Z) + H(Y, Z) − H(Z) − H(X, Y, Z), where H(·, ·, ...) is the joint entropy of a sequence of random variables.² If we introduce the Kullback-Leibler information divergence between two probability mass functions p(w) and q(w) with sample space W, viz., D(p‖q) = Σ_{w∈W} p(w) log[p(w)/q(w)] = E[−log(q(w)/p(w))], then the conditional mutual information can be estimated via a discrete formulation, viz., I(X; Y|Z) = D( p(x, y, z) ‖ p(x, z)p(y, z)/p(z) ), p(x) being the probability X = x, and p(·, ·, ...) the corresponding joint probability.
² Here, I(X; Y|Z) ≤ I(X; Y) for all our calculations.
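For readers who wish to reproduce this plug-in estimate numerically, the following is a minimal sketch (ours, not the paper's) that evaluates I(X;Y|Z) from discrete samples via the joint-entropy identity above; all names are illustrative.

    import numpy as np
    from collections import Counter

    def entropy(*columns):
        """Plug-in joint entropy (bits) of one or more discrete sample columns."""
        counts = np.array(list(Counter(zip(*columns)).values()), dtype=float)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    def conditional_mutual_information(x, y, z):
        """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z), estimated from samples."""
        return entropy(x, z) + entropy(y, z) - entropy(z) - entropy(x, y, z)

With binary key symbols this estimator is the quantity a Monte Carlo evaluation of the key-rate upper bound would track.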
channel coherence time (and therefore the key rate). The movement within the scattering environment ultimately manifests
itself at the receiver through variation in received amplitudes
and delays. However, to enable clarity of exposition, we will
make some simplifications to our scattering model - noting
that the attack we describe can in fact be applied to any
scattering environment scenario. The simplifications are that
we assume equal amplitudes for all received signals, and
adopt random uniform distributions for all AoA and all phases
as measured by the receiver. This is in fact the celebrated
2D isotropic scattering model of Clarke [11]. Moving to
the baseband signal henceforth for convenience, we note in Clarke's model the signal of a transmitted symbol can be written as g(t) = (1/√N) Σ_{n=1}^{N} exp(j(w_d t cos α_n + φ_n)), where w_d is now the maximum Doppler frequency in rad/s, and φ_n is the phase of each path. Assuming both α_n and φ_n are independent and uniformly distributed in [−π, π), then in the limit of large N the amplitude of the signal g(t) is distributed as a Rayleigh distribution.
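As an illustration (not part of the paper), a direct sum-of-sinusoids realisation of this finite-N Clarke model can be written as follows; N and fd are free parameters.

    import numpy as np

    def clarke_gain(t, N=6, fd=10.0, seed=None):
        """g(t) = (1/sqrt(N)) * sum_n exp(j*(w_d t cos(alpha_n) + phi_n)),
        with alpha_n, phi_n ~ U[-pi, pi): finite-N 2-D isotropic fading gain."""
        rng = np.random.default_rng(seed)
        wd = 2.0 * np.pi * fd                              # max Doppler, rad/s
        alpha = rng.uniform(-np.pi, np.pi, N)              # angles of arrival
        phi = rng.uniform(-np.pi, np.pi, N)                # per-path phases
        t = np.atleast_1d(np.asarray(t, dtype=float))[:, None]
        return np.exp(1j * (wd * t * np.cos(alpha) + phi)).sum(axis=1) / np.sqrt(N)

Histogramming |clarke_gain(t, N)| over many independent realisations reproduces the near-Rayleigh densities discussed around Fig. 1.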
Of particular interest to us here will be the statistics of g(t)
at low values of N , since in such circumstances the potential
for an adversary to intercept all signals is larger. The higher
order statistics of the distribution within Clarke’s model at
finite N have been explored in [12], showing that the following
autocorrelation functions are in place: Υ_gg(τ) = J_0(ω_d τ), and Υ_{|g|²|g|²}(τ) = 1 + J_0²(ω_d τ) − J_0²(ω_d τ)/N, where J_0(·) is the zero-order Bessel function of the first kind. For large N
these functions approach those of the Rayleigh distribution.
Importantly, the Υ|g|2 |g|2 function is well approximated by an
exponentiated sinc function at values of N ≥ 6, meaning that
(as per the usual assumption for any fading generated key),
Eve must be several wavelengths away from Bob for the secret
key rate to be non-zero.3
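A small numerical check of these finite-N autocorrelations (again only a sketch; SciPy's Bessel routine is assumed available):

    import numpy as np
    from scipy.special import j0

    def clarke_autocorrelations(tau, fd, N):
        """Return (field, squared-envelope) autocorrelations of the finite-N model:
        J0(w_d tau) and 1 + J0(w_d tau)**2 - J0(w_d tau)**2 / N."""
        r = j0(2.0 * np.pi * fd * np.asarray(tau, dtype=float))
        return r, 1.0 + r**2 - r**2 / N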
Many refinements on Clarke’s model exist with perhaps the
most widely used being that of [12], in which the main differentiator is a constraint placed on the AoA, viz., α_n = (2πn + θ_n)/N, where θ_n is independently (relative to φ_n) and uniformly
distributed in [−π, π). This latter simulator is wide sense
stationary, is more efficient, and has improved second-order
statistics. In consideration of all statistical measures, it is noted
that for this refined model any differences between N ≳ 8 and
the N = ∞ model (pure Rayleigh distribution) are largely
inconsequential [12].4
In real-world channels, therefore, we have to be aware that
even in cases where the channel appears to be consistent
with a Rayleigh channel, the number of propagation paths
contributing to the received signal can be relatively small. This
can be seen more clearly from Fig. (1) where the probability
density functions formed from six and five propagation paths
In the Rayleigh channel these quadratures are independent Gaussian processes. Writing |A| = √(r_I²(t) + r_Q²(t)) and ϑ = arctan(r_Q(t)/r_I(t)), we then have r(t) = |A| cos(2πf_c t + ϑ), where |A| is Rayleigh distributed and ϑ is uniformly distributed. In such a channel, |A| and/or ϑ can be used for secret key construction.
Ultimately the secret key is dependent on movement in the
scattering environment, and it is this movement that sets the
3 A relaxation of this requirement may be obtained in specific correlated
channel scenarios applicable to a distance of order 10 wavelengths (∼ meters
at GHz frequencies) away from the receiver [13]. The attack we describe here
is unrelated to the special case of correlated channels. It is a general attack.
Eve's receivers can in principle be positioned anywhere (e.g. kms away from
the intended receiver) yet still mount a successful attack.
4 In the calculations to follow, we find that the key rates computed are, in
effect, independent of whether this refined model or Clarke’s original model
is the adopted Rayleigh simulator.
Fig. 1. Probability density functions (pdf) for different path settings. The
inset graph shows the pdf at Bob for an infinite number of paths (dashed),
6 paths (dot-dashed), and a colluding Eve who has perfectly intercepted (see
text) 5 paths (solid). The sketch on left illustrates the nature of the colluding
attack where the solid (black) lines indicate some of the potentially many
rays towards Bob that Eve intercepts, and the (red) dashed line indicates one
of the potentially many ‘interference’ paths to a directional antenna held by
Eve.
We adopt the narrow-band flat-fading channel, and take
the far-field approximation with wave propagation confined
to a plane geometry. We assume the electric field vector
is orthogonal to the plane and that isotropic gain antennas
are held by Alice and Bob. If we consider, at the carrier
frequency f_c (wavelength λ_c), a bandpass transmitted signal s(t) = Re{ s̃(t) e^{j2πf_c t} }, where s̃(t) is the complex envelope, the received bandpass signal can then be written (e.g. [10]) as
r(t) = Re{ Σ_{n=1}^{N} C_n e^{j2π(f_c + f_n^D)(t − τ_n)} s̃(t − τ_n) }.
Here N is the number of propagation paths reaching the receiver, C_n and τ_n are the amplitude and time delay, respectively, and f_n^D is the Doppler frequency (n indicates the nth path). This latter quantity can be expressed as f_n^D = (v/λ_c) cos α_n, v being the velocity of the receiver and α_n being the angle of arrival (AoA) of the nth path at the receiver, relative to the velocity vector. Similar to the transmit signal, a complex envelope for the received signal can be written, r̃(t) = Σ_{n=1}^{N} C_n e^{−jϕ_n(t)} s̃(t − τ_n), where ϕ_n(t) = 2π[(f_c + f_n^D)τ_n − f_n^D t]. Therefore, we have r(t) = Re{ r̃(t) e^{j2πf_c t} }. In the case of a transmitted single tone this can be written as r(t) = r_I(t) cos 2πf_c t − r_Q(t) sin 2πf_c t, where r_I(t) = Σ_{n=1}^{N} C_n(t) cos ϕ_n(t) and r_Q(t) = Σ_{n=1}^{N} C_n(t) sin ϕ_n(t).
are shown in comparison to the infinite path limit. The five
path model corresponds to a case where Eve is missing one
of the propagation paths used to construct Bob’s signal. For
the cases shown the Kullback-Leibler divergence between the
Rayleigh distribution and the lower-path models is very small.
Let us assume the communications obtained by Bob consist
of the combined signals from N last-scattering events. We
are interested in determining the effect, on some secret key
generation scheme, caused by Eve’s interception of all (or
some fraction of) the N last-scattered paths received by Bob.
We assume Eve has M_e ≫ N directional-antenna receivers,
and has placed them at multiple locations with the aim
of continuously intercepting all of the last-scattered signals
towards Bob with high probability.5 We assume that these
locations are much greater than λc from Bob.
Beyond our assumption of 2D geometry, and that the
amplitude of each last-scattered ray entering any receiver is
equal, we also assume that the number of paths reaching
each of Eve’s antennas is equal to N .6 Extension of our
analysis to cover these issues is cumbersome, but straightforward. To make our mathematical notation less unwieldy,
we will artificially set Me = N in our equations, with the
understanding that we are then simply ignoring all of Eve’s
devices which (at any given time) are not intercepting any
scattered rays towards Bob. For added focus, we will assume
Eve uses circular apertures of diameter d as her directional
receivers - the physics and properties of which can be found
elsewhere, e.g. [14]. Eve configures her nth aperture at each
location so as to maximize signal gain for the signal directed
by the last scatterer in the direction of Bob (i.e. the nth of N
rays reaching Bob is centered in the main lobe of Eve’s nth
aperture). In such circumstances the signal collected by Eve’s
nth receiver can be approximated as
g_c^n(t) = (1/√N) { exp(j(w_d^e t + φ_n^e)) + Σ_{k=2}^{N} [ 2λ_c J_1((πd/λ_c) sin β_k^e) / (πd sin β_k^e) ] exp(j(w_d^e t cos α_k^e + φ_k^e)) },
where the superscript e applied to any previously used variable
means it is now applied to Eve (but same meaning), where βke
represents the angle between the kth propagation path (side
lobe ‘interference’ path) arriving at Eve’s detector and ray n
(i.e. βne = 0), and where J1 (·) is the Bessel function of the
first kind of order one. Note that the maximum Doppler shift
wde on Eve’s detector is included so as to cover the general
case. However, for focus we will assume all of Eve’s detectors
are stationary, and in the following always set wde = 0.
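The directional weighting applied to each side-lobe ('interference') path is the circular-aperture pattern 2J_1(x)/x with x = πd sin(β)/λ_c; a hedged helper (ours, not the paper's) for evaluating it:

    import numpy as np
    from scipy.special import j1

    def aperture_gain(beta, d, lam_c):
        """Normalized field gain of a uniformly illuminated circular aperture of
        diameter d at wavelength lam_c, for off-axis angle beta (radians)."""
        x = np.pi * d * np.sin(np.atleast_1d(np.asarray(beta, dtype=float))) / lam_c
        gain = np.ones_like(x)                 # on-axis limit 2*J1(x)/x -> 1
        nz = np.abs(x) > 1e-12
        gain[nz] = 2.0 * j1(x[nz]) / x[nz]
        return gain

Each intercepted interference path in g_c^n(t) is simply scaled by aperture_gain(beta_k, d, lam_c) before the phase-aligned sum.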
To reduce the mathematical complexity further we have not
5 Such a possibility can be enhanced in some scenarios by additional actions
on Eve’s part. For example, a scenario in which Eve has conducted an a
priori ray-tracing measurement (or analysis) campaign between a given pair
of transmit and receive locations thereby obtaining probabilistic information
on likely last scattering points (for that given pair of locations). Of course in
the limit M_e → ∞ her probability of intercepting all paths approaches one
in any case.
6 As an aside, we find a doubling of this number of paths at each of Eve’s
detectors has a negligible impact on the results. Also note, as the number of paths reaching Eve approaches infinity, the size of her aperture must be made to approach infinity for the attack to remain viable. Neither limit is ever in place, of course.
included in our analysis an obliquity factor (1 + cos βke ) /2,
which makes our calculations conservative (i.e. results in
higher key rates).
Upon receipt of the signals gcn (t) Eve will adjust the
signals for the known distance offset between each detector
and Bob, and the known motion of Bob. This entails a
phase adjustment at each detector which manifests itself as an 'adjusting' phase φ_a^n. The combined adjusted signal obtained by Eve after such signal processing can then be written as g(t) = Σ_{n=1}^{N} g_c^n(t) exp(jφ_a^n).
Assuming Eve’s different apertures intercept all paths that
are received by Bob, the above relations lead us to conclude
that, in principle, by increasing her aperture size Eve can determine Bob’s received signal to arbitrary accuracy. In practice
this accuracy will be limited by any receiver noise on Eve’s
antennas, and error due to imprecise location information
on Bob. However, with regard to these accuracy limitations
(which we include in our Monte Carlo simulations below),
we note the following two points that favor Eve. (i) Given her
all-powerful status, Eve can set her noise to be at the quantum limit (quantum noise). (ii) Beyond any other means available
to her, an unlimited Eve can determine the location of Bob
at time t to any required accuracy through signal acquisition.
More specifically in regard to (ii), the minimum position error via signal processing varies as 1/√M_e - a result that holds
even if some of Eve’s devices are affected by shadowing in
which the path-loss exponents are unknown [15].
To make further progress we must introduce an actual
scheme for generating a secret key. Although there are many
such schemes (e.g. [4–7]) we will adopt here a generic formulation that covers the conceptual framework of the widely used
signal threshold schemes. The basic concept of such schemes
is to quantize a received signal metric, say amplitude, into
a 1 or 0 value. For some parameter T > 0, and for some
median value m of the expected amplitude distribution, the
decision value can then be set dependent on whether the
amplitude is below m − T or above m + T . Such schemes
offer many pragmatic advantages and compensate to a large extent for errors introduced through a lack of exact reciprocity between transceiver configurations. Assuming a given level
of Gaussian noise at Bob and Eve’s receivers, an appropriate
value of T can be chosen. Further, so as to maximize the
entropy of the final key, we introduce an ‘entropy’ factor s.
For a given T and probability density function R′(r) for the received amplitude r, the value of s can be determined through
∫_0^{m−T} R′(r) dr = ∫_{m+T+s}^{∞} R′(r) dr.
Note, in general R′ is the distribution for the amplitudes in the presence of non-zero Gaussian receiver noise. When r is measured by Alice and/or Bob to be between the two 'allowed' regions, as defined by the integrals of this relation, it is agreed by both parties that the measurement be dropped.
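A minimal sketch (ours, not the paper's) of this threshold-plus-guard-band quantiser; m, T and s are assumed to have been chosen as described above:

    import numpy as np

    def threshold_quantize(r, m, T, s=0.0):
        """Map amplitude samples to key bits: 0 if r < m - T, 1 if r > m + T + s,
        and -1 (sample dropped by both parties) inside the guard band."""
        r = np.asarray(r, dtype=float)
        bits = np.full(r.shape, -1, dtype=int)
        bits[r < m - T] = 0
        bits[r > m + T + s] = 1
        return bits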
Clearly, in practice larger values of T will minimize mismatches in the key at the cost of a reduced key generation rate. Ultimately, in any real scheme a period of reconciliation and privacy amplification will be pursued in order to obtain the final key. However, here we will simply investigate the upper limit of the key rate through a numerical evaluation of the conditional mutual information as defined earlier. We assume Eve's strategy on detection is to decide on the binary number in the 'disallowed' region by setting s = T = 0. We also assume all issues on the decision strategy of the scheme and all communications between Alice and Bob (e.g. which measurements to drop) are available to Eve.
[Figure 2: three panels of I(X;Y|Z) versus aperture diameter (m); legends: Actual Paths=6 & Adversary Paths=4, 5, 6 (top); Actual Paths=20 & Adversary Paths=15, 19, 20 (middle and bottom).]
Fig. 2. Change in the conditional mutual information between Alice and Bob as function of the diameter of Eve's directional antenna (a circular aperture) for different path conditions. Six paths (top figure) and 20 paths (middle figure) are used to construct the approximate Rayleigh distribution. One calculation (bottom figure) on the 20 path scenario assumes zero receiver noise at Eve and zero location error on Bob. Results shown are for 1 million Monte Carlo runs.
Fig. (2) (top) displays a calculation of the conditional
mutual information as a function of aperture diameter (all
of Eve’s circular apertures are assumed to be the same size)
in which a receiver noise contribution (on all receivers) is
set so that the signal-to-noise ratio (SNR) is equal to 17 dB. The maximum Doppler shift of Bob is set to 10 Hz, λ_c is set to 0.1 m, and a Gaussian error on the pointing of Eve's
apertures (due to location error on Bob) is set to a standard
deviation of 0.002 radians. The threshold is set at three
times the receiver noise. We can see that if all signals are
intercepted the key rate can be driven to almost zero over
the range of aperture diameters probed. For fewer signals
intercepted we see that useful key rates are still possible,
albeit at significantly diminished values relative to a no-attack
scenario. For comparison, Fig. (2) (middle) displays similar
calculations but for 20 propagation paths forming the Rayleigh
distribution, and Fig. (2) (bottom) shows the same calculation
when Eve’s detectors are operating with zero receiver noise,
and location errors on Bob are assumed to be zero.
The specific key scheme discussed here is limited in
scope relative to the large number of possible key generation
schemes available. More sophisticated schemes, such as those
based on multi-antenna transceiver configurations, the use
of optimal coding techniques, and the use of channel state
information, are possible. However, straightforward extensions
of the attack described here would still apply to all of these
more sophisticated schemes - only the quantitative details
on how the key rate is diminished under the attack will be
different.
Indeed, we note the attack described here can be generalized
further so as to always drive the secret key rate to zero, even if
we relax the assumption that it is only the last-scattering rays
that are intercepted. An all-powerful Eve, with M_e → ∞, can intercept all propagation paths (of any energy) at all points in space, and in principle possess knowledge of all characteristics
of all scatterers. With the unlimited computational resources
afforded to her the classical Maxwell equations can then be
solved exactly, thereby providing information on any of Bob’s
received signals at an accuracy limited only by quantum mechanical effects. Of course, such an attack whilst theoretically
possible, is not tenable. The calculations described here can
be considered a limited form of such an attack, tenable in a
real-world scattering environment.
In conclusion, we have described a new attack on classical
schemes used to generate secret keys via the shared randomness inherent in wireless fading channels. Although the attack
we have described will be difficult to implement in a manner
that drives the secret key rate to zero, our work does illustrate
how such a rate can at least be partially reduced. As such,
all schemes for secret key generation via the fading channel
must invoke a new restriction - a limitation on the combined
information received by a colluding Eve.
[1] E. Diamanti, H.-K. Lo, B. Qi, and Z. Yuan, “Practical challenges in
quantum key distribution,” npj Quantum Information, 2, 16025, 2016.
[2] C. Weedbrook, S. Pirandola, S. Lloyd, and T. C. Ralph, “Quantum
cryptography approaching the classical limit,” Phys. Rev. Lett. 105,
110501, 2010.
[3] N. Hosseinidehaj and R. Malaney, “Quantum entanglement distribution
in next-generation wireless communication systems,” IEEE VTC Spring,
Sydney, Australia (arXiv:1608.05188), 2017.
[4] L. Lai, Y. Liang, H. V. Poor, and W. Du, “Key generation from wireless
channels,” Physical Layer Security in Wireless Comms., (CRC), 2014.
[5] P. Narayan and H. Tyagi, “Multiterminal secrecy by public discussion,”
Found. and Trends in Comms. and Info. Theory, 13, pp. 129-275, 2016.
[6] J. Zhang, T. Q. Duong, A. Marshall, and R. Woods, "Key generation from wireless channels: A review," IEEE Access, 10.1109/Access.2016.2521718, 2016.
[7] H. V. Poor and R. F. Schaefer, "Wireless physical layer security," PNAS,
vol. 114, no. 1, pp. 19-26, 2017.
[8] U. Maurer, “Secret key agreement by public discussion from common
information,” IEEE Trans. Inf. Theory, vol. 39, no. 3, pp. 733-742, 1993.
[9] R. Ahlswede and I. Csiszar,“Common randomness in information theory
and cryptography-Part I: Secret sharing” IEEE Trans. Inf. Theory, vol.
39, no. 4, pp. 1121-1132, 1993.
[10] G. L. Stüber, “Principles of Mobile Communication,” (Kluwer) 2002.
[11] R. H. Clarke, “A statistical theory of mobile-radio reception” Bell Syst.
Tech. J., pp. 957-1000, Jul.-Aug. 1968.
[12] C. Xiao,Y. R. Zheng, and N. C. Beaulieu, “Novel sum-of-sinusoids
simulation models for Rayleigh and Rician fading channels” IEEE
Transactions on Wireless Communications, vol. 5, no. 12, 2006.
[13] X. He, H. Dai, W. Shen, and P. Ning,“Is link signature dependable for
wireless security,” in Proc. IEEE Infocom, Turin, Italy, pp. 200–204, 2013.
[14] S. J. Orfanidis, “Electromagnetic Waves and Antennas,” 2016.
[15] R. Malaney, “Nuisance parameters and location accuracy in log-normal
fading models,” IEEE Trans. Wireless Communications, vol 6, issue 3,
pp. 937–947, 2007.
arXiv:1802.05541v1 [cs.CE] 30 Jan 2018
Novel weak form quadrature elements for
non-classical higher order beam and plate
theories
Md Ishaquddin∗, S.Gopalakrishnan†
Department of Aerospace Engineering, Indian Institute of Science Bengaluru
560012, India
Abstract
Based on Lagrange and Hermite interpolation, two novel versions of weak form quadrature element are proposed for a non-classical Euler-Bernoulli beam theory. By extending these concepts, two new plate elements are formulated using Lagrange-Lagrange and mixed Lagrange-Hermite interpolations for a non-classical Kirchhoff plate theory. The non-classical theories are governed by a sixth order partial differential equation and have deflection, slope and curvature as degrees of freedom. A novel and generalized way is proposed herein to implement these degrees of freedom in a simple and efficient manner. A new procedure to compute the modified weighting coefficient matrices for beam and plate elements is presented. The proposed elements have displacement as the only degree of freedom in the element domain, and displacement, slope and curvature at the boundaries. The Gauss-Lobatto-Legendre quadrature points are considered as element
nodes and also used for numerical integration of the element matrices.
The framework for computing the stiffness matrices at the integration points is analogous to the conventional finite element method.
Numerical examples on free vibration analysis of gradient beams and
plates are presented to demonstrate the efficiency and accuracy of the
proposed elements.
Keywords: Quadrature element, gradient elasticity theory, weighting coefficients, non-classical dofs, frequencies, mixed interpolation
∗
†
Corresponding author: E-mail address: [email protected]
E-mail address: [email protected]; Phone: +91-80-22932048
1.0 INTRODUCTION
In recent decades the research in the field of computational solid and fluid
mechanics focused on developing cost effective and highly accurate numerical schemes. Subsequently, many numerical schemes were proposed and
applied to various engineering problems. The early research emphasized the development of finite element and finite difference methods [1–3]; these methodologies had limitations related to the computational cost. Alternatively, the differential quadrature method (DQM) was proposed by Bellman [4], which employed a smaller number of grid points. Later, many enriched versions
of differential quadrature method were developed, for example, differential
quadrature method [5–10], harmonic differential quadrature method [11, 12],
strong form differential quadrature element method (DQEM) [13–19], and
weak form quadrature element method [20–23]. The main theme in these
improved DQ versions was to develop versatile models to account for complex loading, discontinuous geometries and generalized boundary conditions.
Lately, much research inclination is seen towards the strong and weak form DQ methods due to their versatility [13–23]. The strong form differential quadrature method, which is built on the governing equations, requires explicit expressions for the interpolation functions and their derivatives, and yields unsymmetric matrices. In contrast, the weak form quadrature method is formulated using variational principles, and the weighting coefficients are computed explicitly at integration points using the DQ rule, leading to symmetric matrices. The aforementioned literature focused on developing DQ schemes for classical beam and plate theories, which are governed by fourth order partial differential equations. The DQ solution for the sixth and eighth order differential equations using the GDQR technique is due to Wu et al. [24, 25]. In their
research, they employed strong form of governing equation in conjunction
with Hermite interpolation function to compute the weighting coefficients
and demonstrated the capability for structural and fluid mechanics problems.
Recently, Wang et al. [26] proposed a strong form differential quadrature element based on Hermite interpolation to solve a sixth order partial differential
equation associated with a non-local Euler-Bernoulli beam. The capability
of the element was demonstrated through free vibration analysis. In this
article the main focus is to propose a weak form quadrature beam and plate
element for non-classical higher order theories, which are characterized by
sixth order partial differential equations. To the authors' knowledge, no such work has been reported in the literature to date.
The non-classical higher order theories unlike classical continuum theories
are governed by sixth order partial differential equations [27–32]. These non-classical continuum theories are modified versions of the classical continuum theories, incorporating higher order gradient terms in the constitutive relations. The higher order terms consist of stress and strain gradients accompanied by an intrinsic length which accounts for micro and nano scale effects [27–32]. These scale dependent non-classical theories are efficient in capturing the micro and nano scale behaviours of structural systems [29–31]. One such class of non-classical gradient elasticity theory is the simplified theory by Mindlin et al. [29], with one gradient elastic modulus and two classical Lamé constants, for structural applications [32–34]. This simplified theory was applied
earlier to study the static, dynamic and buckling behaviour of gradient elastic
beams [35–37] and plates [38–40] by developing analytical solutions. Pegios
et al. [41] developed a finite element model for static and stability analysis
of gradient beams. The numerical solution of 2-D and 3-D gradient elastic
structural problems using finite element and boundary element methods can
be found in [42].
In this paper, we propose for the first time, two novel versions of weak
form quadrature beam elements to solve a sixth order partial differential
equation encountered in higher order non-classical elasticity theories. The
two versions of quadrature beam element are based on Lagrange and C²-continuous Hermite interpolations, respectively. Further, we extend this concept
and develop two new types of quadrature plate elements for gradient elastic plate theories. The first element employs Lagrange interpolation in x
and y direction and second element is based on Lagrange-Hermite mixed
interpolation with Lagrange interpolation in x and Hermite in y direction.
These elements are formulated with the aid of variational principles, the differential quadrature rule and the Gauss-Lobatto-Legendre (GLL) quadrature rule.
Here, the GLL points are used as element nodes and also to perform numerical integration to evaluate the stiffness and consistent mass matrices. The
proposed elements have displacement, slope and curvature as the degrees of
freedom at the element boundaries and only displacement in the domain.
A new way to incorporate the non-classical boundary conditions associated
with the gradient elastic beam and plate theory is proposed and implemented.
The novelty in the proposed scheme is the way the classical and non-classical
boundary conditions are represented accurately and with ease. It should be
noted that the higher order degrees of freedom at the boundaries are built
into the formulation only to enforce the boundary conditions.
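As a concrete illustration of these ingredients (our own sketch, not the authors' code), the GLL nodes/weights and the conventional DQ weighting matrices for the first three derivatives can be generated as follows; all names are illustrative.

    import numpy as np
    from numpy.polynomial import legendre as leg

    def gll_nodes_weights(N):
        """Gauss-Lobatto-Legendre nodes on [-1, 1] and quadrature weights:
        endpoints plus the roots of P'_{N-1}; w_i = 2 / (N (N-1) P_{N-1}(x_i)^2)."""
        c = np.zeros(N); c[-1] = 1.0           # Legendre coefficients of P_{N-1}
        x = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(c))), [1.0]))
        w = 2.0 / (N * (N - 1) * leg.legval(x, c) ** 2)
        return x, w

    def dq_matrices(xi):
        """Conventional DQ weighting matrices: A (first derivative) from the
        Lagrange product formula, then B = A @ A and C = B @ A."""
        xi = np.asarray(xi, dtype=float); N = len(xi)
        P = np.array([np.prod(np.delete(xi[i] - xi, i)) for i in range(N)])
        A = np.zeros((N, N))
        for i in range(N):
            for j in range(N):
                if i != j:
                    A[i, j] = P[i] / ((xi[i] - xi[j]) * P[j])
        np.fill_diagonal(A, -A.sum(axis=1))    # rows of a derivative matrix sum to zero
        return A, A @ A, (A @ A) @ A

Calling x, H = gll_nodes_weights(9) and A, B, C = dq_matrices(x) gives the quantities from which the modified matrices of Section 2 are then built.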
The paper is organized as follows, first the theoretical basis of gradient
elasticity theory required to formulate the quadrature elements is presented.
Next, the quadrature elements based on Lagrange and Hermite interpolation functions for an Euler-Bernoulli gradient beam are formulated. Later,
the formulation for the quadrature plate elements are given. Finally, numerical results on free vibration analysis of gradient beams and plates are
3
presented to demonstrate the capability of the proposed elements followed
by conclusions.
1
Strain gradient elasticity theory
In this study, we consider Mindlin's [29] simplified strain gradient micro-elasticity theory with two classical and one non-classical material constants. The two classical material constants are the Lamé constants and the non-classical one is related to the intrinsic bulk length g. The theoretical basis of the gradient elastic theory required to formulate the quadrature beam and plate elements is
presented in this section.
1.1
Gradient elastic beam theory
The stress-strain relation for a 1-D gradient elastic theory is given as [35, 43]
τ = 2με + λ(tr ε)I,   ς = g²[2μ∇ε + λ∇(tr ε)I]   (1)
where λ, μ are the Lamé constants, ∇ = ∂/∂x + ∂/∂y is the Laplacian operator and I is the unit tensor. τ, ς denote the Cauchy and higher order stresses respectively; ε and (tr ε) are the classical strain and its trace, which are expressed in terms of the displacement vector w as
ε = (1/2)(∇w + w∇),   tr ε = ∇w   (2)
From the above equations the constitutive relations for an Euler-Bernoulli
gradient beam can be defined as
τ_x = Eε_x,   ς_x = g²ε′_x,   ε_x = −z ∂²w(x, t)/∂x²   (3)
For the above state of stress and strain the strain energy in terms of displacements for a beam defined over a domain −L/2 ≤ x ≤ L/2 can be written
as [43]
U = (1/2) ∫_{−L/2}^{L/2} EI[ (w″)² + g²(w‴)² ] dx   (4)
The kinetic energy is given as
K = (1/2) ∫_{t0}^{t1} ∫_{−L/2}^{L/2} ρA ẇ² dx dt   (5)
where E, A, I and ρ are the Young’s modulus, area, moment of inertia, and
density, respectively. w(x, t) is transverse displacement and over dot indicates differentiation with respect to time.
Using Hamilton's principle [45]:
δ ∫_{t0}^{t1} (U − K) dt = 0   (6)
we get the following weak form expressions for the elastic stiffness matrix 'K' and consistent mass matrix 'm' as
K = ∫_{−L/2}^{L/2} EI[ w″ δw″ + g² w‴ δw‴ ] dx   (7)
m = ∫_{−L/2}^{L/2} ρA ẇ δẇ dx   (8)
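In the quadrature elements developed in Section 2, these weak forms are discretised with GLL quadrature and the modified DQ weighting matrices; the sketch below (our own illustration, mirroring the element matrices given later in Eqs. (37)-(38)) shows that assembly, assuming the modified second/third-derivative matrices B̄, C̄ and GLL weights H are already available.

    import numpy as np

    def beam_element_matrices(H, Bbar, Cbar, E, I, A, rho, L, g):
        """Discrete counterparts of Eqs. (7)-(8): stiffness from the curvature and
        curvature-gradient terms, diagonal consistent mass on the displacement dofs."""
        H = np.asarray(H, dtype=float)
        K = (8.0 * E * I / L**3) * (Bbar.T * H) @ Bbar \
            + g**2 * (32.0 * E * I / L**5) * (Cbar.T * H) @ Cbar
        M = np.diag(rho * A * L / 2.0 * H)
        return K, M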
The governing equation of motion for a gradient elastic Euler-Bernoulli beam
is obtained as
EI(w^{iv} − g² w^{vi}) + m ẅ = 0   (9)
The above sixth order equation of motion yields three independent variables, related to the deflection w, slope w′ and curvature w″, and six boundary conditions in total, as given below.
Classical boundary conditions:
V = EI[w‴ − g² w^{v}] = 0 or w = 0, at x = (−L/2, L/2)
M = EI[w″ − g² w^{iv}] = 0 or w′ = 0, at x = (−L/2, L/2)   (10)
Non-classical boundary conditions:
M̄ = [g² EI w‴] = 0 or w″ = 0, at x = (−L/2, L/2)   (11)
where V , M and M̄ are shear force, bending moment and higher order
moment, respectively.
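In the discrete setting these boundary conditions introduce extra slope and curvature unknowns at the element ends, which the paper later condenses out (K̄ = k_dd − k_db k_bb⁻¹ k_bd) before solving the free-vibration eigenproblem. A minimal sketch with illustrative names, assuming the assembled K and M and index lists of boundary (b_idx) and retained (d_idx) dofs:

    import numpy as np
    from scipy.linalg import eigh

    def condensed_frequencies(K, M, b_idx, d_idx):
        """Condense boundary dofs out of K and solve
        (k_dd - k_db k_bb^{-1} k_bd) w = omega^2 M_dd w on the retained dofs."""
        kbb = K[np.ix_(b_idx, b_idx)]
        kbd = K[np.ix_(b_idx, d_idx)]
        kdb = K[np.ix_(d_idx, b_idx)]
        kdd = K[np.ix_(d_idx, d_idx)]
        Mdd = M[np.ix_(d_idx, d_idx)]
        Kbar = kdd - kdb @ np.linalg.solve(kbb, kbd)
        omega2, modes = eigh(Kbar, Mdd)
        return np.sqrt(np.abs(omega2)), modes

The returned omega values, nondimensionalised as in Section 4, are what the frequency tables later in the paper report.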
1.2
Gradient elastic plate theory
The strain-displacement relations for a Kirchhoff’s plate theory are defined
as [46]
ε_xx = −z w̄_xx,   ε_yy = −z w̄_yy,   γ_xy = −2z w̄_xy   (12)
where w̄(x, y, t) is transverse displacement of the plate. The stress-strain
relations for a gradient elastic Kirchhoff plate are given by [31, 43]:
Classical:
τ_xx = [E/(1 − ν²)] (ε_xx + νε_yy),   τ_yy = [E/(1 − ν²)] (ε_yy + νε_xx),   τ_xy = [E/(1 + ν)] ε_xy   (13)
Non-classical:
ς_xx = g² [E/(1 − ν²)] ∇²(ε_xx + νε_yy),   ς_yy = g² [E/(1 − ν²)] ∇²(ε_yy + νε_xx),   ς_xy = g² [E/(1 + ν)] ∇²ε_xy   (14)
where τ_xx, τ_yy, τ_xy are the classical Cauchy stresses and ς_xx, ς_yy, ς_xy denote the higher order stresses related to gradient elasticity. The strain energy for a gradient elastic Kirchhoff plate is given by [31, 40]
U_p = U_cl + U_sg   (15)
where U_cl and U_sg are the classical and gradient elastic strain energies, given by
U_cl = (D/2) ∫∫_A [ w̄²_xx + w̄²_yy + 2w̄²_xy + 2ν(w̄_xx w̄_yy − w̄²_xy) ] dx dy   (16)
U_sg = (g²D/2) ∫∫_A [ w̄²_xxx + w̄²_yyy + 3(w̄²_xyy + w̄²_xxy) + 2ν(w̄_xyy w̄_xxx + w̄_xxy w̄_yyy − w̄²_xyy − w̄²_xxy) ] dx dy   (17)
where D = Eh³/[12(1 − ν²)]. The kinetic energy is given by
K = (1/2) ∫_A ρ h (w̄̇)² dx dy   (18)
Using Hamilton's principle:
δ ∫_{t0}^{t1} (U − K) dt = 0   (19)
we obtain the following expressions for the elastic stiffness and mass matrices of a gradient elastic plate.
Elastic stiffness matrix:
[K] = [K]_cl + [K]_sg   (20)
where [K]_cl, [K]_sg are the classical and non-classical elastic stiffness matrices defined as
[K]_cl = D ∫_A [ w̄_xx δw̄_xx + w̄_yy δw̄_yy + 2w̄_xy δw̄_xy + ν(δw̄_xx w̄_yy + w̄_xx δw̄_yy − 2w̄_xy δw̄_xy) ] dx dy   (21)
[K]_sg = g²D ∫_A [ w̄_xxx δw̄_xxx + w̄_yyy δw̄_yyy + 3(w̄_xyy δw̄_xyy + w̄_xxy δw̄_xxy) + ν(w̄_xyy δw̄_xxx + w̄_xxx δw̄_xyy + w̄_xxy δw̄_yyy + w̄_yyy δw̄_xxy − 2w̄_xyy δw̄_xyy − 2w̄_xxy δw̄_yxx) ] dx dy   (22)
Consistent mass matrix:
[M] = ∫_A ρ h w̄̇ δw̄̇ dx dy   (23)
The equation of motion for a gradient elastic Kirchhoff plate considering the inertial effect is obtained as:
D∇⁴w̄ − g²D∇⁶w̄ + ρh ∂²w̄/∂t² = 0   (24)
where
∇⁴w̄ = ∂⁴w̄/∂x⁴ + ∂⁴w̄/∂y⁴ + 2 ∂⁴w̄/∂x²∂y²,
∇⁶w̄ = ∂⁶w̄/∂x⁶ + ∂⁶w̄/∂y⁶ + 3 ∂⁶w̄/∂x⁴∂y² + 3 ∂⁶w̄/∂x²∂y⁴;
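As a quick check on Eq. (24) (a worked example of ours, not taken from the paper), substituting the double-sine mode of a simply supported plate gives ω² = D(λ⁴ + g²λ⁶)/(ρh), with λ² = (mπ/l_x)² + (nπ/l_y)²:

    import numpy as np

    def ssss_plate_frequency(m, n, lx, ly, E, h, nu, rho, g):
        """Natural frequency of mode (m, n) of a simply supported gradient plate,
        obtained by substituting a double-sine mode shape into Eq. (24)."""
        D = E * h**3 / (12.0 * (1.0 - nu**2))
        lam2 = (m * np.pi / lx) ** 2 + (n * np.pi / ly) ** 2
        return np.sqrt(D * (lam2**2 + g**2 * lam2**3) / (rho * h))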
the associated boundary conditions for the plate with origin at (0, 0) and
domain defined over (−lx /2 ≤ x ≤ lx /2), (−ly /2 ≤ y ≤ ly /2), are listed
below.
Classical boundary conditions:
V_x = −D[ ∂³w̄/∂x³ + (2 − ν) ∂³w̄/∂x∂y² ] + g²D[ ∂⁵w̄/∂x⁵ + (3 − ν) ∂⁵w̄/∂x∂y⁴ + 3 ∂⁵w̄/∂y²∂x³ ] = 0 or w̄ = 0, at x = (−l_x/2, l_x/2)
V_y = −D[ ∂³w̄/∂y³ + (2 − ν) ∂³w̄/∂y∂x² ] + g²D[ ∂⁵w̄/∂y⁵ + (3 − ν) ∂⁵w̄/∂y∂x⁴ + 3 ∂⁵w̄/∂x²∂y³ ] = 0 or w̄ = 0, at y = (−l_y/2, l_y/2)   (25)
M_x = −D[ ∂²w̄/∂x² + ν ∂²w̄/∂y² ] + g²D[ ∂⁴w̄/∂x⁴ + ν ∂⁴w̄/∂y⁴ + (3 − ν) ∂⁴w̄/∂x²∂y² ] = 0 or w̄_x = 0, at x = (−l_x/2, l_x/2)
M_y = −D[ ∂²w̄/∂y² + ν ∂²w̄/∂x² ] + g²D[ ∂⁴w̄/∂y⁴ + ν ∂⁴w̄/∂x⁴ + (3 − ν) ∂⁴w̄/∂x²∂y² ] = 0 or w̄_y = 0, at y = (−l_y/2, l_y/2)   (26)
Non-classical boundary conditions:
M̄_x = −g²D[ ∂³w̄/∂x³ + ν ∂³w̄/∂x∂y² ] = 0 or w̄_xx = 0, at x = (−l_x/2, l_x/2)
M̄_y = −g²D[ ∂³w̄/∂y³ + ν ∂³w̄/∂y∂x² ] = 0 or w̄_yy = 0, at y = (−l_y/2, l_y/2)   (27)
where l_x and l_y are the length and width of the plate. V_x, V_y are the shear forces, M_x, M_y are the bending moments and M̄_x, M̄_y are the higher order moments. The different boundary conditions employed in the present study for a gradient elastic Kirchhoff plate are
Simply supported on all edges SSSS :
w̄ = Mx = w̄xx = 0 at x = (−lx /2, lx /2)
w̄ = My = w̄yy = 0 at y = (−ly /2, ly /2)
Free on all edges FFFF :
Vx = Mx = M̄x = 0 at x = (−lx /2, lx /2)
Vy = My = M̄y = 0 at y = (−ly /2, ly /2)
Simply supported and free on adjacent edges SSFF :
w̄ = My = w̄yy = 0 at y = −ly /2
w̄ = Mx = w̄xx = 0 at x = lx /2
Vx = Mx = M̄x = 0 at x = −lx /2
Vy = My = M̄y = 0 at y = ly /2
for the SSFF plate at (−lx /2, −ly /2) and (lx /2, ly /2), w̄ = 0 condition is
enforced. The above boundary conditions are described by a notation, for
example, consider a SSFF plate, the first and second letter correspond to y =
−ly /2 and x = lx /2 edges, similarly, the third and fourth letter correspond
to the edges y = ly /2 and x = −lx /2, respectively. Further, the letter S, C
and F correspond to simply supported, clamped and free edges of the plate.
2
Quadrature element for a gradient elastic
Euler-Bernoulli beam
Two novel quadrature elements for a gradient Euler-Bernoulli beam are presented in this section. First, the quadrature element based on Lagrangian
interpolation is formulated. Later, the quadrature element based on C 2 continuous Hermite interpolation is developed. The procedure to modify the
DQ rule to implement the classical and non-classical boundary conditions
are explained. A typical N-node quadrature element for an Euler-Bernoulli
gradient beam is shown in the Figure 1.
[Figure 1 sketch: an N = 5 node element of length L with ξ = 2x/L; the interior nodes carry w₂, w₃, w₄ only, while the end nodes carry w₁, w′₁, w″₁ and w₅, w′₅, w″₅.]
Figure 1: A typical quadrature element for a gradient Euler-Bernoulli beam.
It can be noticed from Figure 1 that each interior node has only the displacement w as degree of freedom, while each boundary node has 3 degrees of freedom w, w′, w″. The new displacement vector now includes the slope and curvature as additional degrees of freedom at the element boundaries, given by: w_b = {w_1, ..., w_N, w′_1, w′_N, w″_1, w″_N}. The procedure to incorporate these extra boundary degrees of freedom into the formulation will be presented next for the Lagrange and C²-continuous Hermite interpolation based quadrature elements.
2.1
Lagrange interpolation based quadrature beam element
The displacement for a N-node quadrature beam is assumed as [10]:
w(x, t) = Σ_{j=1}^{N} L_j(x) w_j^b = Σ_{j=1}^{N} L̄_j(ξ) w_j^b   (28)
j=1
Lj (x) and L̄j (ξ) are Lagrangian interpolation functions in x and ξ co-ordinates
respectively, and ξ = 2x/L with ξ ∈ [−1, 1]. The Lagrange interpolation
10
functions are defined as [7, 10]
N
Y
β(ξ)
(ξ − ξk )
Lj (ξ) =
=
β(ξj )
(ξj − ξk )
k=1
(29)
(k6=j)
where
β(ξ) = (ξ − ξ1 )(ξ − ξ2 ) · · · (ξ − ξj−1 )(ξ − ξj+1 ) · · · (ξ − ξN )
β(ξj ) = (ξj − ξ1 )(ξj − ξ2 ) · · · (ξj − ξj−1 )(ξj − ξj+1 ) · · · )(ξj − ξN )
The first order derivative of the above interpolation function can be written
as,
N
N
Y
Y
(ξi − ξk )/
= (ξj − ξk ) (i 6= j)
k=1
k=1
(k6=j)
(k6=i,j)
0
(30)
Aij = Lj (ξi )
N
X
1
(ξi −ξk )
k=1
(k6=i)
The conventional higher order weighting coefficients are computed as
Bij =
N
X
k=1
Aik Akj ,
Cij =
N
X
Bik Akj (i, j = 1, 2, ..., N )
(31)
k=1
Here, Bij and Cij are weighting coefficients for second and third order derivatives, respectively.
The sixth order partial differential equation given in Equation (9) ren0
00
ders slope w and curvature w as extra degrees of freedom at the element
boundaries. To account for these extra boundary degrees of freedom in the
formulation, the derivatives of conventional weighting function Aij , Bij , and
Cij are modified as follows:
First order derivative matrix :
Aij (i, j = 1, 2, · · · , N )
Āij =
0 (i, j = 1, 2, · · · , N, j = N + 1, · · · , N + 4)
11
(32)
Second order derivative matrix :
Bij (j = 1, 2, · · · , N )
B̄ij =
0 (j = N + 1, · · · , N + 4)(i = 2, 3, · · · , N − 1)
B̄ij =
N
−1
X
(33)
Aik Akj (j = 1, 2, · · · , N, i = 1, N )
k=2
B̄i(N +1) = Ai1 ;
B̄i(N +2) = AiN (i = 1, N )
Third order derivative matrix :
Cij (j = 1, 2, · · · , N )
C̄ij =
0 (j = N + 1, · · · , N + 4, i = 2, 3, · · · , N − 1)
C̄ij =
N
−1
X
(34)
(35)
Bik Akj (j = 1, 2, · · · , N, i = 1, N )
k=2
C̄i(N +3) = Ai1 ;
C̄i(N +4) = AiN (i = 1, N )
(36)
Using the above Equations (32)-(36), the element matrices can be expressed in terms of weighting coefficients as
Elastic stif f ness matrix :
Kij =
N
N
X
8EI X
2 32EI
H
B̄
B̄
+
g
Hk C̄ki C̄kj
k ki kj
L3 k=1
L5 k=1
(i, j = 1, 2, ..., N, N + 1, · · · , N + 4)
(37)
ρAL
Hk δij (i, j = 1, 2, ..., N )
2
(38)
Consistent mass matrix :
Mij =
Here ξ and H are the coordinate and weights of GLL quadrature. δij is
the Dirac-delta function.
12
2.2
Hermite interpolation based quadrature beam element
For the case of quadrature element based on C 2 continuous Hermite interpolation the displacement for a N-node gradient beam element is assumed
as
w(ξ, t) =
N
X
0
0
00
00
φj (ξ)wj + ψ1 (ξ)w1 + ψN (ξ)wN + ϕ1 (ξ)w1 + ϕN (ξ)wN =
j=1
N
+4
X
Γj (ξ)wjb
j=1
(39)
φ, ψ and ϕ are Hermite interpolation functions defined as [24, 26]
ϕj (ξ) =
1
Lj (x)(x − xj )2 (x − xN −j+1 )2 (j = 1, N )
2(ξj − ξN −j+1 )2
1
Lj (ξ)(ξ − ξj )(ξ − ξN −j+1 )2
(ξj − ξN −j+1 )2
4
1
− 2Lj (ξj ) +
ϕj (ξ) (j = 1, N )
ξj − ξN −j+1
(40)
ψj (ξ) =
(41)
1
2
2
1
φj (ξ) =
Lj (ξ)(ξ − ξN −j+1 ) − Lj (ξj ) +
ψj (ξ)
(ξj − ξN −j+1 )2
ξj − ξN −j+1
4L1j (ξj )
2
2
+
− Lj (ξj ) +
ϕj (ξ) (j = 1, N )
ξj − ξN −j+1 (ξj − ξN −j+1 )2
(42)
φj (ξ) =
1
Lj (ξ)(ξ − ξ1 )2 (ξ − ξN )2 (j = 2, 3, ..., N − 1)
2
2
(ξj − ξ1 ) (ξj − ξN )
(43)
The kth order derivative of w(ξ) with respect to ξ is obtained from Equation (39) as
k
w (ξ) =
N
X
φkj (ξ)wj
+
0
ψ1k (ξ)w1
+
0
k
ψN
(ξ)wN
j=1
+
00
ϕk1 (ξ)w1
+
00
ϕkN (ξ)wN
=
N
+4
X
Γkj (ξ)wjb
j=1
(44)
Using the above Equation (40)-(44), the element matrices can be expressed in terms of weighting coefficients as
13
Elastic stif f ness matrix :
N
N
X
8EI X
(2) (2)
(3) (3)
2 32EI
Kij = 3
Hk Γki Γkj + g
Hk Γki Γkj
5
L k=1
L k=1
(i, j = 1, 2, ..., N, N + 1, · · · , N + 4)
(45)
here ξ and H are the coordinate and weights of GLL quadrature. The consistent mass matrix remains the same as given by Equation (38).
Combining the stiffness and mass matrix, the system of equations after
applying the boundary conditions can be expressed as
kbb
kbd
∆
I
0
b
fb
=
(46)
kdb
0
kdd
∆d
ω 2 Mdd
∆d
where the vector ∆b contains the boundary related non-zero slope and curvature dofs. Similarly, the vector ∆d includes all the non-zero displacement
dofs of the beam. In the present analysis the boundary force vector is assumed to be zero, fb = 0. Now, expressing the ∆b dofs in terms of ∆d , the
system of equations reduces to
in o
h
h
in o
−1
(47)
kdd − kdb kbb
kbd ∆d = ω 2 Mdd wd
h
i
−1
Here, K̄ = kdd − kdb kbb
kbd is the modified stiffness matrix associated
with ∆d dofs. The above system of equations leads to an Eigenvalue problem
and its solutions renders frequencies and corresponding mode shapes.
3
Quadrature element for gradient elastic Kirchhoff plate
In this section, we formulate two novel quadrature elements for non-classical
gradient Kirchhoff plate. First, the quadrature element based on Lagrange
interpolation in ξ and η direction is presented. Next, the quadrature element
based on Lagrange-Hermite mixed interpolation, with Lagrangian interpolation is ξ direction and Hermite interpolation assumed in η direction is
formulated. The GLL points in ξ and η directions are used as element nodes.
Similar to the beam elements discussed in the section 2, the plate element
also has displacement w̄ as the only degrees of freedom in the domain and
at the edges it has 3 degrees of freedom w̄, w̄x or w̄y , w̄xx or w̄yy depending
14
upon the edge. At the corners the element has five degrees of freedom, w̄,
w̄x , w̄y , w̄xx and w̄yy . The new displacement vector now includes the slope
and curvature as additional degrees of freedom at the element boundaries
j
j
, · · · }, where
, · · · , w̄yy
given by: wp = {w̄i , · · · , w̄N ×N , w̄xj , · · · , w̄yj , · · · , w̄xx
(i = 1, 2, · · · , N × N ; j = 1, 2, · · · , 4N ). A quadrature element for a gradient Kirchhoff plate with Nx × Ny grid is shown in the Figure 2.
[Figure 2 sketch: a 5 x 5 GLL grid of nodes w_11 ... w_55 with ξ = 2x/l_x and η = 2y/l_y; the edge nodes additionally carry slope and curvature degrees of freedom (dw/dx, d²w/dx² on the edges x = ±l_x/2 and dw/dy, d²w/dy² on the edges y = ±l_y/2).]
Figure 2: A typical quadrature element for a gradient elastic Kirchhoff plate
with N = Nx = Ny = 5.
Here, N = Nx = Ny = 5 are the number of grid points in ξ and η
directions, respectively. It can be seen from the Figure 2, the plate element
has three degrees of freedom on each edge, five degrees of freedom at the
0
corners and only displacement in the domain. The slope w̄ and curvature
00
w̄ dofs related to each edge of the plate are highlighted by the boxes. The
transformation used for the plate is ξ = 2x/lx and η = 2y/ly with −1 ≤
15
(ξ, η) ≤ 1.
3.1
Lagrange interpolation based quadrature element
for gradient elastic plates
The displacement for a Nx × Ny node quadrature plate element is assumed
as
N X
N
N X
N
X
X
p
p
w̄(x, y, t) =
Li (x)Lj (y)wij (t) =
L̄i (ξ)L̄j (η)wij
(t)
(48)
i=1 j=1
i=1 j=1
p
w̄ij
(t)
where
is the nodal displacement vector for the plate and L̄i (ξ), L̄j (η)
are the Lagrange interpolation functions in ξ and η directions, respectively.
The slope and curvature degrees of freedom at the element boundaries are
accounted while computing the weighting coefficients of higher order derivatives as discussed in section 2.1. Substituting the above Equation (48) in
Equation (20) we get the stiffness matrix for a gradient elastic quadrature
plate element as
N
N
T
ab X X
Hi Hj F (ξi , ηj ) cl [D]cl [F (ξi , ηj )]cl
[K]cl =
4 i=1 j=1
[K]sg = g
2 ab
4
N X
N
X
Hi Hj [F (ξi , ηj )]Tsg [D]sg [F (ξi , ηj )]sg
(49)
(50)
i=1 j=1
where (ξi ηj ) and (Hi , Hj ) are abscissas and weights of GLL quadrature
rule. [F (ξi , ηj )]cl and [F (ξi , ηj )]sg are the classical and non-classical strain
matrices at location (ξi , ηj ) for gradient elastic plate. [D]cl and [D]sg are the
constitutive matrices corresponding to classical and gradient elastic plate.
The classical and non-classical strain matrices are defined as
N +4
X
4
ξ
p
B̄ik
w̄kj
2
a
k=1
N +4
X
4
η p
p
(i, j = 1, 2, .., N )
(51)
F (ξi , ηj ) cl {w̄ } =
B̄ik w̄ik
2
b
k=1
N +4 N +4
X
X
8
ξ
η
p
Āil Ājk w̄lk
ab
l=1
k=1
16
N +4
X
2 8
ξ
p
g 3
C̄ik
w̄kj
a
k=1
N +4
X
η p
2 8
g 3
C̄ik w̄ik
b
k=1
p
F (ξi , ηj ) sg {w̄ } =
N +4 N +4
X
X
2 8
p
ξ
η
g
w̄
B̄
Ā
lk
il
jk
a2 b
l=1 k=1
N
+4
N
+4
2 8 XX ξ η p
g
Āil B̄jk w̄lk
ab2
l=1
(i, j = 1, 2, .., N )
k=1
The classical and non-classical constitutive matrices are given as
1
µ
0
1
0
D]cl = µ
0
0
2(1 − µ)
D]sg
1
0
=
0
0
(52)
0
µ
1
0
µ
(3 − 2µ)
µ
0
0
(53)
µ
0
(3 − 2µ)
(54)
The diagonal mass matrix is given by
Mkk =
3.2
ρhab
Hi Hj
4
(i, j = 1, 2, ..., N ) (k = (i − 1) × N + j)
(55)
Mixed interpolation based quadrature element for
gradient elastic plates
The quadrature element presented here is based on mixed Lagrange-Hermite
interpolation, with Lagrangian interpolation is assumed in ξ direction and
17
Hermite interpolation in η direction. The displacement for a Nx × Ny node
mixed interpolation quadrature plate element is assumed as
w̄(x, y, t) =
N N
+4
X
X
p
Li (x)Γj (y)wij
(t)
i=1 j=1
=
N N
+4
X
X
p
L̄i (ξ)Γ̄j (η)wij
(t)
(56)
i=1 j=1
p
where w̄ij
(t) is the nodal displacement vector of the plate and L̄i (ξ) and Γ̄j (η)
are the Lagrange and Hermite interpolation functions in ξ and η directions,
respectively. The formulations based on mixed interpolation methods have
advantage in excluding the mixed derivative dofs at the free corners of the
plate [10]. The modified weighting coefficient matrices derived in section 2.1,
using Lagrange interpolations and those given in section 2.2, for Hermite
interpolations are used in forming the element matrices. Substituting the
above Equation (56) in Equation (20), we get the stiffness matrix for gradient
elastic quadrature plate element based on mixed interpolation as
N
[K]cl =
[K]sg = g
N
T
ab X X
Hi Hj G(ξi , ηj ) cl [D]cl [G(ξi , ηj )]cl
4 i=1 j=1
2 ab
4
N X
N
X
Hi Hj [G(ξi , ηj )]Tsg [D]sg [G(ξi , ηj )]sg
(57)
(58)
i=1 j=1
where (ξi ηj ) and (Hi , Hj ) are abscissas and weights of GLL quadrature rule.
[D]cl and [D]sg are the classical and gradient elastic constitutive matrices for
the plate defined in the section 3.1. The classical [G(ξi , ηj )]cl and non-classical
[G(ξi , ηj )]sg strain matrices at the location (ξi , ηj ) are defined as,.
N +4
X
4
(ξ) p
B̄ik w̄kj
2
a
k=1
N +4
X
4
2(η) p
p
(i, j = 1, 2, .., N ) (59)
G(ξi , ηj ) cl {w̄ } =
Γ̄
w̄
jk
ik
2
b
k=1
N +4 N +4
X
X
8
(ξ)
1(η)
p
Āil Γ̄jk w̄lk
ab
l=1
k=1
18
N +4
X
2 8
(ξ) p
g 3
C̄ik w̄kj
a
k=1
N +4
X
3(η) p
2 8
g 3
Γ̄jk w̄ik
b
k=1
p
F (ξi , ηj ) sg {w̄ } =
N +4 N +4
X
X
2 8
2(ξ)
(η)
p
g
Γ̄
Ā
w̄
il
lk
jk
a2 b
l=1 k=1
N
+4
N
+4
2 8 X X 1(ξ) (η) p
g
Γ̄il B̄jk w̄lk
ab2
l=1
(i, j = 1, 2, .., N ) (60)
k=1
The diagonal mass matrix remains the same as Equation (55). Here,
Ā, B̄ and C̄ are the first, second and third order derivatives of Lagrange
interpolation functions along the ξ direction. Similarly, Γ̄1 , Γ̄2 and Γ̄3 are
the first, second and third order derivatives of Hermite interpolation functions
in the η direction .
4
Numerical Results and Discussion
The efficiency of the proposed quadrature beam and plate elements is demonstrated through free vibration analysis. Initially, the convergence study is performed for an Euler-Bernoulli gradient beam, followed by frequency comparisons for different boundary conditions and g values. A similar study is
conducted for a Kirchhoff plate and the numerical results are tabulated and
compared with available literature. Four different values of length scale parameters, g = 0.00001, 0.05, 0.1, and 0.5 are considered in this study. Single element is used with GLL quadrature points as nodes to generate all
the results reported herein. For results comparison the proposed gradient
quadrature beam element based on Lagrange interpolation is designated as
SgQE-L and the element based on Hermite interpolation as SgQE-H. Similarly, the plate element based on Lagrange interpolation in ξ and η directions
as SgQE-LL and the element based on mixed interpolation as SgQE-LH. In
this study, the rotary inertia related to slope and curvature degrees of freedom is neglected.
19
4.1
Quadrature beam element for gradient elasticity
theory
The numerical data used for the analysis of beams is as follows: length L = 1, Young's modulus E = 3 × 10⁶, Poisson's ratio ν = 0.3 and density ρ = 1. All the frequencies reported for beams are nondimensional as ω̄ = ωL²√(ρA/EI), where A and I are the area and moment of inertia of the beam and ω is the natural frequency. The analytical solutions for gradient
beam and ω is the natural frequency. The analytical solutions for gradient
elastic Euler-Bernoulli beam with different boundary conditions are obtained
by following the approach given in [44] and the associated frequency equations are presented in Appendix-I. The classical and non-classical boundary
conditions used in the free vibration analysis for different end support are:
Simply supported :
00
classical : w = M = 0 , non-classical : w = 0
at x = (− L2 , L2 )
Clamped :
0
00
classical : w = w = 0 , non-classical : w = 0
at x = (− L2 , L2 )
Free-free :
classical : Q = M = 0 , non-classical : M̄ = 0
at x = (− L2 , L2 )
Cantilever :
0
classical : w = w = 0
00
non-classical : w = 0
at x = − L2 , Q = M = 0 at x =
at x = − L2 , M̄ = 0 at x = L2
Propped cantilever :
0
classical : w = w = 0 at x = − L2 , w = M = 0 at x =
00
00
non-classical : w = 0 at x = − L2 , w = 0 at x = L2
L
2
L
2
The size of the displacement vector ∆d defined in Equation (46) remains
as N − 2 for all the boundary conditions of the beam except for free-free and
cantilever beam which are N and N −1, respectively. However, the size of the
∆b vector depends upon the number of non-zero slope and curvature dofs at
the element boundaries. The non-classical boundary conditions employed for
00
simply supported gradient beam are w = 0 at x = (− L2 , L2 ), the equations
related to curvature degrees of freedom are eliminated and the size of ∆b is
2. For the gradient cantilever beam the non-classical boundary conditions
00
used are w = 0 at x = − L2 and M̄ = 0 at x = L2 . The equation related
to curvature degrees of freedom at x = − L2 is eliminated and the equation
related to higher order moment at x = L2 is retained and the size of ∆b = 2.
20
In the case of clamped beam the non-classical boundary conditions read
00
w = 0 at x = (− L2 , L2 ) and the ∆b is zero. Similarly, the size for the propped
00
cantilever beam will be 3 as w = 0 at x = (− L2 , L2 ). Finally, for a free-free
beam the size of ∆b vector is 4 due to M̄ = 0 at x = (− L2 , L2 ).
4.1.1
Frequency convergence for gradient elastic quadrature beam
elements
In this section, the convergence behaviour of frequencies obtained using proposed SgQE-L and SgQE-H elements for simply supported and free-free
Euler-Bernoulli beam are compared. Figure 3, shows the comparison of first
three frequencies for a simply supported gradient beam and their convergence
trends for g/L = 0.1. The convergence is seen faster for both SgQE-L and
SgQE-H elements for all the three frequencies with solution converging to
analytical values with 10 nodes. Similar trend is noticed in the the Figure 4,
for free-free beam. It is to be noted that, the proposed SgQE-L and SgQE-H
elements are efficient in capturing the rigid body modes associated with the
generalized degrees of freedom. The frequencies reported for free-free beam
are related to elastic modes and the rigid mode frequencies are not reported
here, which are zeros. Hence, single SgQE-L or SgQE-H element with fewer
number of nodes can produce accurate solutions even for higher frequencies.
[Plot: nondimensional frequencies of Modes I-III versus number of nodes N (4 to 22) for SgQE-L, SgQE-H and the analytical solution.]
Figure 3: Convergence behaviour of frequencies for a simply supported gradient beam (g/L = 0.1).
[Plot: nondimensional frequencies of Modes I-III versus number of nodes N (4 to 22) for SgQE-L, SgQE-H and the analytical solution.]
Figure 4: Convergence behaviour of frequencies for a free-free gradient beam
(g/L = 0.1).
4.1.2
Free vibration analysis of gradient beams using SgQE-L and
SgQE-H elements
To demonstrate the applicability of the SgQE-L and SgQE-H elements for
different boundary conditions the frequencies are compared with the analytical solutions in Tables 1-5. The comparison is made for first six frequencies
obtained for different values of g/L = 0.00001, 0.05, 0.1, 0.5.
22
Freq.  Method       g/L = 0.00001   g/L = 0.05   g/L = 0.1   g/L = 0.5
ω̄1     SgQE-L         9.869           9.870        9.874       9.984
       SgQE-H         9.869           9.871        9.874       9.991
       Analytical     9.870           9.871        9.874       9.991
ω̄2     SgQE-L        39.478          39.498       39.554      41.302
       SgQE-H        39.478          39.498       39.556      41.381
       Analytical    39.478          39.498       39.556      41.381
ω̄3     SgQE-L        88.826          88.923       89.207      97.725
       SgQE-H        88.826          88.925       89.220      98.195
       Analytical    88.826          88.925       89.220      98.195
ω̄4     SgQE-L       157.914         158.221      159.125     185.378
       SgQE-H       157.915         158.225      159.156     186.497
       Analytical   157.914         158.226      159.156     186.497
ω̄5     SgQE-L       246.740         247.480      249.655     310.491
       SgQE-H       247.305         247.475      249.760     313.741
       Analytical   246.740         247.500      249.765     313.743
ω̄6     SgQE-L       355.344         357.039      361.805     486.229
       SgQE-H       355.306         356.766      361.564     488.302
       Analytical   355.306         356.880      361.563     488.240
Table 1: Comparison of first six frequencies for a simply supported gradient beam
Freq.  Method       g/L = 0.00001   g/L = 0.05   g/L = 0.1   g/L = 0.5
ω̄1     SgQE-L        22.373          22.376       22.387      22.691
       SgQE-H        22.373          22.377       22.387      22.692
       Analytical    22.373          22.377       22.387      22.692
ω̄2     SgQE-L        61.673          61.708       61.814      64.841
       SgQE-H        61.673          61.708       61.814      64.856
       Analytical    61.673          61.708       61.814      64.856
ω̄3     SgQE-L       120.903         121.052      121.496     133.627
       SgQE-H       120.904         121.052      121.497     133.710
       Analytical   120.903         121.052      121.497     133.710
ω̄4     SgQE-L       199.859         202.864      201.553     234.596
       SgQE-H       199.876         200.287      201.556     234.875
       Analytical   199.859         200.286      201.557     234.875
ω̄5     SgQE-L       298.550         299.528      302.422     374.535
       SgQE-H       298.556         299.365      302.403     375.234
       Analytical   298.555         299.537      302.443     375.250
ω̄6     SgQE-L       417.217         419.418      425.469     562.869
       SgQE-H       416.991         418.438      424.747     562.758
       Analytical   416.991         418.942      424.697     562.536
Table 2: Comparison of first six frequencies for a free-free gradient beam
Table 1 shows the comparison of the first six frequencies for a simply supported gradient beam. For g/L = 0.00001, all the frequencies
match well with the exact frequencies of the classical beam. Good agreement
with the analytical solutions is noticed for all the frequencies obtained using
the SgQE-L and SgQE-H elements for higher values of g/L. In Table 2, the frequencies corresponding to the elastic modes are tabulated and compared for a
free-free beam. Similarly, in Tables 3-5, the comparison is made for clamped,
cantilever and propped cantilever gradient beams, respectively. The frequencies obtained using the SgQE-L and SgQE-H elements are in close agreement
with the analytical solutions for different g/L values. Hence, based on the
above findings it can be stated that the SgQE-L and SgQE-H elements can
be applied for free vibration analysis of gradient Euler-Bernoulli beams for
any choice of boundary conditions and g/L values.
Freq.  Method       g/L=0.00001   g/L=0.05   g/L=0.1    g/L=0.5
ω̄1    SgQE-L        22.324        22.801     23.141     27.747
       SgQE-H        22.590        22.845     23.310     27.976
       Analytical    22.373        22.831     23.310     27.976
ω̄2    SgQE-L        61.540        62.720     63.984     79.450
       SgQE-H        62.276        63.003     64.365     79.970
       Analytical    61.673        62.961     64.365     79.970
ω̄3    SgQE-L       120.392       122.916    125.542    162.889
       SgQE-H       122.094       123.594    126.512    164.927
       Analytical   120.903       123.511    126.512    164.927
ω̄4    SgQE-L       199.427       203.581    208.627    286.576
       SgQE-H       201.843       204.502    209.887    289.661
       Analytical   199.859       204.356    209.887    289.661
ω̄5    SgQE-L       297.282       304.138    312.503    455.285
       SgQE-H       301.541       305.843    314.956    462.238
       Analytical   298.555       305.625    314.956    462.238
ω̄6    SgQE-L       421.194       427.786    442.299    681.749
       SgQE-H       421.092       427.787    442.230    691.292
       Analytical   416.991       427.461    442.230    691.292

Table 3: Comparison of first six frequencies for a clamped gradient beam
Freq.  Method       g/L=0.00001   g/L=0.05   g/L=0.1    g/L=0.5
ω̄1    SgQE-L         3.532         3.545      3.584      3.857
       SgQE-H         3.534         3.552      3.587      3.890
       Analytical     3.532         3.552      3.587      3.890
ω̄2    SgQE-L        21.957        22.188     22.404     24.592
       SgQE-H        22.141        22.267     22.497     24.782
       Analytical    22.141        22.267     22.496     24.782
ω̄3    SgQE-L        61.473        62.150     62.822     71.207
       SgQE-H        61.997        62.375     63.094     71.863
       Analytical    61.997        62.375     63.094     71.863
ω̄4    SgQE-L       120.465       121.867    123.424    146.652
       SgQE-H       121.495       122.313    123.966    148.181
       Analytical   121.495       122.313    123.966    148.181
ω̄5    SgQE-L       199.141       201.636    204.752    257.272
       SgQE-H       200.848       202.377    205.658    260.336
       Analytical   200.847       202.377    205.658    260.336
ω̄6    SgQE-L       297.489       301.551    307.229    410.222
       SgQE-H       300.043       302.667    308.605    415.802
       Analytical   300.043       302.667    308.605    415.802

Table 4: Comparison of first six frequencies for a cantilever gradient beam
Freq.  Method       g/L=0.00001   g/L=0.05   g/L=0.1    g/L=0.5
ω̄1    SgQE-L        15.413        15.520     15.720     17.351
       SgQE-H        15.492        15.581     15.740     17.324
       Analytical    15.492        15.581     15.740     17.324
ω̄2    SgQE-L        49.869        50.313     51.026     57.767
       SgQE-H        50.207        50.512     51.089     58.197
       Analytical    50.207        50.512     51.089     58.197
ω̄3    SgQE-L       104.044       105.043    106.557    127.127
       SgQE-H       104.758       105.457    106.865    128.005
       Analytical   104.758       105.457    106.865    128.005
ω̄4    SgQE-L       177.922       179.778    182.822    231.247
       SgQE-H       179.149       180.500    183.389    233.357
       Analytical   179.149       180.500    183.389    233.357
ω̄5    SgQE-L       271.502       274.654    280.210    378.692
       SgQE-H       273.383       275.749    281.089    382.058
       Analytical   273.383       275.749    281.089    382.058
ω̄6    SgQE-L       384.785       389.746    399.154    575.841
       SgQE-H       387.463       391.341    400.509    582.607
       Analytical   387.463       391.341    400.509    582.607

Table 5: Comparison of first six frequencies for a propped cantilever gradient beam
4.2 Quadrature plate element for gradient elasticity theory
Three different boundary conditions of the plate are considered: simply supported on all
edges (SSSS), free on all edges (FFFF) and a combination of simply supported
and free (SSFF). The convergence behaviour of the SgQE-LL and
SgQE-LH plate elements is verified first; later, numerical comparisons are
made for all three plate conditions for various g/lx values. All the frequencies reported herein for the plate are non-dimensional, as ω̄ = ω lx² √(ρh/D).
The numerical data used for the analysis of plates is: length lx = 1, width
ly = 1, thickness h = 0.01, Young's modulus E = 3 × 10⁶, Poisson's ratio
ν = 0.3 and density ρ = 1. The number of nodes in either direction is
assumed to be equal, N = Nx = Ny. The choice of the essential and natural
boundary conditions for the above three plate problems is given in Section 1.2.
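For reference, the non-dimensionalisation and the plate data listed above can be evaluated as in the following short sketch (illustrative only; the dimensional frequency passed in at the end is a placeholder value, not a computed result).

import math

lx, h, E, nu, rho = 1.0, 0.01, 3e6, 0.3, 1.0
D = E * h**3 / (12.0 * (1.0 - nu**2))      # plate flexural rigidity

def nondimensional(omega):
    # omega in rad/s -> omega_bar = omega * lx^2 * sqrt(rho*h/D)
    return omega * lx**2 * math.sqrt(rho * h / D)

print(nondimensional(100.0))               # placeholder dimensional frequency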
The size of the displacement vector ∆d defined in Equation (46) remains
(N − 2) × (N − 2) for all the boundary conditions of the gradient plate
except for the free-free and cantilever plates, which are N × N and (N × N) − N,
respectively. However, the size of the ∆b vector depends upon the number of
non-zero slope and curvature dofs along the element boundaries. The non-classical boundary conditions employed for the SSSS gradient plate are w̄xx = 0
at x = (−lx/2, lx/2) and w̄yy = 0 at y = (−ly/2, ly/2); the equations related to
the curvature degrees of freedom are eliminated and the size of ∆b will be 4N − 8,
as w̄x = w̄y = 0 at the corners of the plate. For a FFFF plate the non-classical boundary conditions employed are M̄x = 0 at x = (−lx/2, lx/2) and
M̄y = 0 at y = (−ly/2, ly/2), and the size of ∆b is 8N. Finally, for the SSFF plate
∆b = 6N − 4.
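The counting rules above can be collected in a small helper; this is only a bookkeeping sketch based on the sizes stated in this paragraph.

def delta_b_size(support: str, N: int) -> int:
    # number of retained non-classical boundary unknowns for an N x N grid
    sizes = {
        "SSSS": 4 * N - 8,   # corner slopes vanish
        "FFFF": 8 * N,
        "SSFF": 6 * N - 4,
    }
    return sizes[support]

print([delta_b_size(s, 11) for s in ("SSSS", "FFFF", "SSFF")])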
4.2.1 Frequency convergence of gradient elastic quadrature plate elements
In Figure 5, the convergence of the first three frequencies for a SSSS plate obtained
using SgQE-LL and SgQE-LH elements for g/lx = 0.05 is plotted and compared with the analytical solutions [38]. Both SgQE-LL and SgQE-LH elements
show excellent convergence behaviour for all three frequencies. Figures
6 and 7 illustrate the frequency convergence for FFFF and SSFF plates,
respectively, for g/lx = 0.05. Only the SgQE-LL and SgQE-LH element frequencies are shown, as gradient solutions are not available in the literature
for comparison. It is observed that the SgQE-LL and SgQE-LH elements exhibit
identical convergence characteristics.
[Plot of Figure 5: nondimensional frequency vs. number of nodes N (4–22); Modes I–III; curves for SgQE-LL, SgQE-LH and the analytical solution.]
Figure 5: Convergence behaviour of frequencies for a SSSS gradient plate (g/lx = 0.05).
[Plot of Figure 6: nondimensional frequency vs. number of nodes N (4–22); Modes I–III; curves for SgQE-LL and SgQE-LH.]
Figure 6: Convergence behaviour of frequencies for a FFFF gradient plate (g/lx = 0.05).
4.2.2 Free vibration analysis of gradient plate using SgQE-LL and SgQE-LH elements
The first six frequencies for SSSS, FFFF and SSFF plates obtained using
SgQE-LL and SgQE-LH elements are compared and tabulated. The comparison is made for different length scale parameters: g/lx = 0.00001, 0.05, 0.1, 0.5.
[Plot of Figure 7: nondimensional frequency vs. number of nodes N (4–22); Modes I–III; curves for SgQE-LL and SgQE-LH.]
Figure 7: Convergence behaviour of frequencies for a SSFF gradient plate (g/lx = 0.05).
All the tabulated results are generated using Nx = Ny = 11 nodes. Table 6
shows the comparison of the first six frequencies for the SSSS gradient plate.
Good agreement with the analytical solutions [38] is noticed for all the
frequencies obtained using SgQE-LL and SgQE-LH elements for different
g/lx.
Tables 7 and 8 contain the frequency comparison for FFFF and SSFF
plates for various g/lx values. As exact solutions for the gradient elastic plate
are not available in the literature for FFFF and SSFF support conditions,
the frequencies obtained using SgQE-LL and SgQE-LH are compared against each other. Both
elements show identical performance for all g/lx values. The frequencies obtained for the lower value g/lx = 0.00001 match well with the classical plate
frequencies for all support conditions.
In the above findings, the SgQE-LL and SgQE-LH elements demonstrate excellent agreement with the analytical results for all frequencies and g/lx values
for a SSSS plate. For FFFF and SSFF plates, the SgQE-LL and SgQE-LH elements produce similar frequencies for the g/lx values considered. Hence, a single
SgQE-LL or SgQE-LH element with few nodes can be used efficiently to
study the free vibration behaviour of gradient plates with different support
conditions and g/lx values.
Freq.  Method                 g/lx=0.00001   g/lx=0.05   g/lx=0.1    g/lx=0.5
ω̄1    SgQE-LL                  19.739        20.212      21.567      47.693
       SgQE-LH                  19.739        20.286      21.791      49.940
       Analyt. [38] (m=1,n=1)   19.739        20.220      21.600      48.087
ω̄2    SgQE-LL                  49.348        52.249      60.101     178.418
       SgQE-LH                  49.348        52.365      60.429     180.895
       Analyt. [38] (m=1,n=2)   49.348        52.303      60.307     180.218
ω̄3    SgQE-LL                  78.957        86.311     105.316     357.227
       SgQE-LH                  78.957        86.720     106.321     363.156
       Analyt. [38] (m=2,n=2)   78.957        86.399     105.624     359.572
ω̄4    SgQE-LL                  98.696       109.863     137.940     491.447
       SgQE-LH                  98.696       109.950     138.193     493.131
       Analyt. [38] (m=1,n=3)   98.696       110.201     139.121     500.088
ω̄5    SgQE-LL                 128.305       147.119     192.759     730.346
       SgQE-LH                 128.305       147.639     193.950     736.599
       Analyt. [38] (m=2,n=3)  128.305       147.454     193.865     737.906
ω̄6    SgQE-LL                 167.783       199.133     272.173    1084.136
       SgQE-LH                 167.783       199.262     272.486    1085.930
       Analyt. [38] (m=1,n=4)  167.783       199.897     274.562    1099.535

Table 6: Comparison of first six frequencies for a gradient SSSS plate
Freq.  Method                    g/lx=0.00001   g/lx=0.05   g/lx=0.1   g/lx=0.5
ω̄1    SgQE-LL                     13.468        13.546      13.681     14.118
       SgQE-LH                     13.468        13.551      13.713     15.628
       Classical [10] (g/lx=0)     13.468        ——          ——         ——
ω̄2    SgQE-LL                     19.596        19.820      20.313     22.113
       SgQE-LH                     19.596        19.820      20.315     22.129
       Classical [10] (g/lx=0)     19.596        ——          ——         ——
ω̄3    SgQE-LL                     24.270        24.699      25.681     29.745
       SgQE-LH                     24.270        24.700      25.686     29.785
       Classical [10] (g/lx=0)     24.270        ——          ——         ——
ω̄4    SgQE-LL                     34.8001       35.780      37.929     73.986
       SgQE-LH                     34.8001       35.722      38.015     76.161
       Classical [10] (g/lx=0)     34.8001       ——          ——         ——
ω̄5    SgQE-LL                     61.093        64.314      71.238    145.033
       SgQE-LH                     61.093        64.317      71.244    145.065
       Classical [10] (g/lx=0)     61.093        ——          ——         ——
ω̄6    SgQE-LL                     63.686        67.059      75.114    193.940
       SgQE-LH                     63.686        67.123      75.509    200.707
       Classical [10] (g/lx=0)     63.686        ——          ——         ——

Table 7: Comparison of first six frequencies for a gradient FFFF plate
Freq.  Method                       g/lx=0.00001   g/lx=0.05   g/lx=0.1   g/lx=0.5
ω̄1    SgQE-LL                         3.367         3.373       3.386      3.491
       SgQE-LH                         3.367         3.382       3.413      3.950
       Classical [47,48] (g/lx=0)      3.367         ——          ——         ——
ω̄2    SgQE-LL                        17.316        17.598      18.370     32.579
       SgQE-LH                        17.316        17.634      18.474     33.927
       Classical [47,48] (g/lx=0)     17.316        ——          ——         ——
ω̄3    SgQE-LL                        19.292        19.645      20.585     35.825
       SgQE-LH                        19.292        19.664      20.649     36.852
       Classical [47,48] (g/lx=0)     19.292        ——          ——         ——
ω̄4    SgQE-LL                        38.211        39.671      39.162    105.800
       SgQE-LH                        38.211        39.775      43.851    109.959
       Classical [47,48] (g/lx=0)     38.211        ——          ——         ——
ω̄5    SgQE-LL                        51.035        53.714      60.400    153.000
       SgQE-LH                        51.035        53.739      60.493    153.980
       Classical [47,48] (g/lx=0)     51.035        ——          ——         ——
ω̄6    SgQE-LL                        53.487        56.431      63.699    158.557
       SgQE-LH                        53.487        56.537      64.000    161.072
       Classical [47,48] (g/lx=0)     53.487        ——          ——         ——

Table 8: Comparison of first six frequencies for a gradient SSFF plate
5 Conclusion
Two novel versions of weak form quadrature elements for the gradient elastic
beam theory were proposed. This methodology was extended to construct
two new and different quadrature plate elements based on Lagrange-Lagrange
and mixed Lagrange-Hermite interpolations. A new way to account for the
non-classical boundary conditions associated with the gradient elastic beam
and plate theories was introduced and implemented. The capability of the
proposed four elements was demonstrated through free vibration analysis.
Based on the findings it was concluded that accurate solutions can be obtained, even for higher frequencies and for any choice of the length scale parameter,
using a single beam or plate element with a small number of nodes. The new
results reported for the gradient plate for different boundary conditions can be
a reference for research in this field.
References
[1] Zienkiewicz.O.C, Taylor. R.L., The finite element method,Vol.1. Basic
formulations and linear problems, London: McGraw-Hill, 1989,648p.
[2] Zienkiewicz.O.C, Taylor. R.L., The finite element method, Vol.2. Solid
and fluid mechanics: dynamics and non-linearity, London: McGraw-Hill,
1991, 807p.
[3] K.J. Bathe., Finite element procedures in engineering analysis. PrenticeHall, 1982.
[4] Bellman RE, Casti J., Differential quadrature and long-term integration.
Journal of Mathematical Analysis and Applications 1971; 34:235–238.
[5] Bert, C. W., and Malik, M., 1996,Differential Quadrature Method in
Compu-tational Mechanics: A Review,. ASME Appl. Mech. Rev., 49(1),
pp. 1–28.
[6] Bert, C. W., Malik, M., 1996,The differential quadrature method for irregular domains and application to plate vibration.. International Journal
of Mechanical Sciences 1996; 38:589–606.
[7] C. Shu, Differential Quadrature and Its Application in Engineering,.
Springer-Verlag, London, 2000.
[8] H. Du, M.K. Lim, N.R. Lin, Application of generalized differential quadrature method to structural problems,. Int. J. Num. Meth.Engrg. 37 (1994)
1881–1896.
[9] H. Du, M.K. Lim, N.R. Lin, Application of generalized differential quadrature to vibration analysis,. J. Sound Vib. 181 (1995) 279–293.
[10] Xinwei Wang, Differential Quadrature and Differential Quadrature
Based Element Methods Theory and Applications,.Elsevier, USA, 2015
[11] O. Civalek, O.M. Ulker., Harmonic differential quadrature (HDQ) for
axisymmetric bending analysis of thin isotropic circular plates,. Struct.
Eng. Mech.17 (1) (2004) 1–14.
[12] O. Civalek, Application of differential quadrature (DQ) and harmonic
differential quadrature (HDQ) for buckling analysis of thin isotropic plates
and elastic columns. Eng. Struct. 26 (2) (2004) 171–186.
[13] X. Wang, H.Z. Gu, Static analysis of frame structures by the differential quadrature element method,. Int. J. Numer. Methods Eng. 40 (1997)
759–772.
[14] X. Wang, Y. Wang, Free vibration analysis of multiple-stepped beams
by the differential quadrature element method, Appl. Math. Comput. 219
(11) (2013) 5802–5810.
[15] Wang Y, Wang X, Zhou Y., Static and free vibration analyses of rectangular plates by the new version of differential quadrature element
method, International Journal for Numerical Methods in Engineering
2004; 59:1207–1226.
[16] Y. Xing, B. Liu, High-accuracy differential quadrature finite element
method and its application to free vibrations of thin plate with curvilinear
domain, Int. J. Numer. Methods Eng. 80 (2009) 1718–1742.
[17] Malik M., Differential quadrature element method in computational mechanics: new developments and applications. Ph.D. Dissertation, University of Oklahoma, 1994.
[18] Karami G, Malekzadeh P., A new differential quadrature methodology for beam analysis and the associated differential quadrature element
method. Computer Methods in Applied Mechanics and Engineering 2002;
191:3509–3526.
[19] Karami G, Malekzadeh P., Application of a new differential quadrature
methodology for free vibration analysis of plates. Int. J. Numer. Methods
Eng. 2003; 56:847–868.
[20] A.G. Striz, W.L. Chen, C.W. Bert, Static analysis of structures by
the quadrature element method (QEM),. Int. J. Solids Struct. 31 (1994)
2807–2818.
[21] W.L. Chen, A.G. Striz, C.W. Bert, High-accuracy plane stress and plate
elements in the quadrature element method, Int. J. Solids Struct. 37 (2000)
627–647.
[22] H.Z. Zhong, Z.G. Yue, Analysis of thin plates by the weak form quadrature element method, Sci. China Phys. Mech. 55 (5) (2012) 861–871.
[23] Chunhua Jin, Xinwei Wang, Luyao Ge, Novel weak form quadrature
element method with expanded Chebyshev nodes, Applied Mathematics
Letters 34 (2014) 51–59.
[24] T.Y. Wu, G.R. Liu, Application of the generalized differential quadrature
rule to sixth-order differential equations, Comm. Numer. Methods Eng.
16 (2000) 777–784.
[25] G.R. Liu a , T.Y. Wu b, Differential quadrature solutions of eighth-order
boundary-value differential equations, Journal of Computational and Applied Mathematics 145 (2002) 223–235.
[26] Xinwei Wang, Novel differential quadrature element method for vibration
analysis of hybrid nonlocal EulerBernoulli beams, Applied Mathematics
Letters 77 (2018) 94–100.
[27] Mindlin, R.D., 1965.1964. Micro-structure in linear elasticity. Arch. Rat.
Mech. Anal. 16, 52–78.
[28] Mindlin, R., Eshel, N., 1968. On first strain-gradient theories in linear
elasticity. Int. J. Solids Struct. 4, 109–124.
[29] Mindlin, R.D., 1965.1964. Micro-structure in linear elasticity. Arch. Rat.
Mech. Anal. 16, 52–78.
[30] Fleck, N.A., Hutchinson, J.W., A phenomenological theory for strain
gradient effects in plasticity. 1993. J. Mech. Phys. Solids 41 (12),
1825e1857.
[31] Koiter, W.T., 1964.Couple-stresses in the theory of elasticity, I & II.
Proc. K. Ned.Akad. Wet. (B) 67, 17–44.
[32] Harm Askes, Elias C. Aifantis, Gradient elasticity in statics and dynamics: An overview of formulations,length scale identification procedures,
finite element implementations and new results Int. J. Solids Struct. 48
(2011) 1962–1990
[33] Aifantis, E.C., Update on a class of gradient theories. 2003.Mech. Mater.
35,259e280.
[34] Altan, B.S., Aifantis, E.C., On some aspects in the special theory of
gradient elasticity. 1997. J. Mech. Behav. Mater. 8 (3), 231e282.
[35] Papargyri-Beskou, S., Tsepoura, K.G., Polyzos, D., Beskos, D.E., Bending and stability analysis of gradient elastic beams. 2003. Int. J. Solids
Struct. 40, 385e400.
[36] S. Papargyri - Beskou, D. Polyzos, D. E. Beskos, Dynamic analysis
of gradient elastic flexural beams. Structural Engineering and Mechanics,
Vol. 15, No. 6 (2003) 705–716.
[37] A.K. Lazopoulos, Dynamic response of thin strain gradient elastic
beams, International Journal of Mechanical Sciences 58 (2012) 27–33.
[38] Papargyri-Beskou, S., Beskos, D., Static, stability and dynamic analysis
of gradient elastic flexural Kirchhoff plates. 2008. Arch. Appl. Mech. 78,
625e635.
[39] Lazopoulos, K.A., On the gradient strain elasticity theory of plates.
2004. Eur. J.Mech. A/Solids 23, 843e852.
[40] Papargyri-Beskou, S., Giannakopoulos, A.E., Beskos, D.E., Variational
analysis of gradient elastic flexural plates under static loading. 2010. International Journal of Solids and Structures 47, 2755e2766.
[41] I. P. Pegios S. Papargyri-Beskou D. E. Beskos, Finite element static
and stability analysis of gradient elastic beam structures, Acta Mech 226,
745–768 (2015).
[42] Tsinopoulos, S.V., Polyzos, D., Beskos, D.E, Static and dynamic BEM
analysis of strain gradient elastic solids and structures, Comput. Model.
Eng. Sci. (CMES) 86, 113–144 (2012).
[43] Vardoulakis, I., Sulem, J., Bifurcation Analysis in Geomechanics. 1995.
Blackie/Chapman and Hall, London.
[44] Kitahara, M. (1985), Boundary Integral Equation Methods in Eigenvalue
Problems of Elastodynamics and Thin Plates, Elsevier, Amsterdam.
[45] J.N. Reddy, Energy Principles and Variational Methods in Applied Mechanics, Second Edition, John Wiley, NY, 2002.
[46] S.P. Timoshenko, D.H. Young, Vibration Problem in Engineering, Van
Nostrand Co., Inc., Princeton, N.J., 1956.
[47] B.Singh, S. Chakraverty, Flexural vibration of skew plates using boundary characteristics orthogonal polynomials in two variables, J. Sound Vib.
173 (2) (1994) 157–178.
[48] A.W. Leissa, The free vibration of rectangular plates, J. Sound Vib. 31
(3) (1973) 257–293.
APPENDIX
5.1 Analytical solutions for free vibration analysis of gradient elastic Euler-Bernoulli beam
To obtain the natural frequencies of the gradient elastic Euler-Bernoulli beam,
which is governed by Equation (9), we assume a solution of the form
w(x, t) = w̄(x) e^(iωt)
Substituting the above solution into the governing equation (9), we get
w̄^(iv) − g² w̄^(vi) − (ω²/β²) w̄ = 0
Here β² = EI/m, and the above equation has a solution of the type
w̄(x) = Σ_{i=1}^{6} c_i e^(k_i x)
where the c_i are the constants of integration, determined through the
boundary conditions, and the k_i are the roots of the characteristic equation
k⁴ − g² k⁶ − ω²/β² = 0
After applying the boundary conditions listed in Section 1.1 we get
[F(ω)]{C} = {0}
For a non-trivial solution, the following condition should be satisfied:
det[F(ω)] = 0
This frequency equation renders all the natural frequencies for a
gradient elastic Euler-Bernoulli beam. The following are the frequency equations for the different boundary conditions.
(a) Simply supported beam:
[F(ω)] is the 6 × 6 matrix whose j-th column (j = 1, …, 6) is
[ 1,  e^(k_j L),  k_j²,  k_j² e^(k_j L),  k_j⁴,  k_j⁴ e^(k_j L) ]ᵀ
(b) Cantilever beam:
[F(ω)] is the 6 × 6 matrix whose j-th column is
[ 1,  k_j,  k_j²,  t_j,  p_j,  k_j³ e^(k_j L) ]ᵀ
(c) Clamped beam:
[F(ω)] is the 6 × 6 matrix whose j-th column is
[ 1,  k_j,  k_j²,  e^(k_j L),  k_j e^(k_j L),  k_j² e^(k_j L) ]ᵀ
(d) Propped cantilever beam:
[F(ω)] is the 6 × 6 matrix whose j-th column is
[ 1,  k_j,  k_j²,  e^(k_j L),  k_j² e^(k_j L),  p_j ]ᵀ
(e) Free-free beam:
[F(ω)] is the 6 × 6 matrix whose j-th column is
[ q_j,  r_j,  k_j³,  t_j,  p_j,  k_j³ e^(k_j L) ]ᵀ
where, for j = 1, …, 6,
t_j = (k_j³ − g² k_j⁵) e^(k_j L)
p_j = (k_j² − g² k_j⁴) e^(k_j L)
q_j = k_j³ − g² k_j⁵
r_j = k_j² − g² k_j⁴
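As a numerical illustration of the procedure described in this appendix (not part of the original text), the sketch below scans ω for the simply supported case: for each trial ω it computes the six roots of the characteristic equation, assembles F(ω) from the columns in (a), and looks for sign changes of det F(ω). The values of L, g and β are placeholders.

import numpy as np

L, g, beta = 1.0, 0.1, 1.0   # illustrative values, not the paper's data

def detF_simply_supported(omega):
    # roots of k^4 - g^2 k^6 - omega^2/beta^2 = 0 (degree-6 polynomial in k)
    coeffs = [-g**2, 0.0, 1.0, 0.0, 0.0, 0.0, -(omega / beta) ** 2]
    k = np.roots(coeffs)
    F = np.array([np.ones(6), np.exp(k * L),
                  k**2, k**2 * np.exp(k * L),
                  k**4, k**4 * np.exp(k * L)])
    return np.linalg.det(F)

# scan omega and report sign changes of det F(omega)
# (heuristic: the determinant is real up to round-off because the roots
#  occur in conjugate pairs)
omegas = np.linspace(1.0, 200.0, 2000)
d = np.array([detF_simply_supported(w) for w in omegas])
roots = omegas[:-1][np.sign(d.real[:-1]) != np.sign(d.real[1:])]
print("approximate natural frequencies:", roots[:3])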
Unmatched Perturbation Accommodation for an
Aerospace Launch Vehicle Autopilot Using Dynamic Sliding Manifolds
M. R. Saniee
Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India
Abstract- Sliding mode control of a launch vehicle during
its atmospheric flight phase is studied in the presence of
unmatched disturbances. The linear time-varying dynamics of
the aerospace vehicle are put into a systematic form,
and then a dynamic sliding manifold, as an advanced method,
is used in order to overcome the limited capability of
conventional sliding manifolds in minimizing the undesired
effects of unmatched perturbations on the control system.
Finally, simulation results are evaluated and the
performance of the two approaches is compared in terms of
stability and robustness of the autopilot.
Keywords: Launch vehicle control, Variable structures,
Unmatched perturbation, Dynamic sliding mode.
I. INTRODUCTION
To reach the aim of a mission, an aerospace launch
vehicle (ALV) must move along the specified trajectory
and maintain the required attitude. These tasks are fulfilled by the
vehicle flight control system, or autopilot. It applies control
actions such as forces and torques to the ALV during its
powered flight, providing the best fulfillment of the specified
requirements on the vehicle terminal state vector [1].
Since the dynamic equations of an ALV can only be
mathematically modeled using estimated and time-varying
coefficients, the most critical problem arises due to the
variable characteristics of such vehicles [2]. Thus the attitude
control system confronts dynamics with uncertain
parameters in addition to nonlinearities and disturbances. In
order to achieve acceptable performance, robust
controllers are proposed to follow the nominal trajectory
[3].
One of the nonlinear robust control techniques is
variable structure control (VSC). It utilizes beneficial
characteristics of various control configurations and
delivers robust performance and new features that none of
those structures possess on their own. The central feature of
VSC is the so-called sliding mode control (SMC) on the
switching manifold, on which the system remains insensitive to
plant parameter variations and external disturbances [4].
Sliding mode control was first introduced in the framework of
VSC and soon became the principal operational mode for
this class of control systems. Due to practical advantages
such as order reduction and low sensitivity to disturbances and
plant parameter variations, SMC is known as a very
efficient technique for controlling complicated dynamic plants
operating under variable conditions, which are common in
many modern technological processes. The main
shortcoming of SMC, namely the chattering phenomenon, arises
due to switching on both sides of the sliding manifold. This
dilemma can be treated by a continuous approximation of the
discontinuous control or by a continuous SMC design [6].
SMC also cannot accommodate unmatched
disturbances, unlike its powerful application to matched
disturbance rejection. To overcome this problem,
encountered in practice in flight dynamics and in the timescale
separation of central loops in multi-loop systems, dynamic
sliding mode (DSM) control has received considerable attention
[7]. DSM exploits the benefits of dynamic compensators to
construct a switching manifold that provides the system
with robustness against unmatched
perturbations [8]. This technique can be applied to a variety
of complex control systems and even to automated passive
acoustic monitoring devices used for studying marine
mammal vocalizations and their behaviors [9-12]. In this
analysis, the enhanced properties of a controller designed based
on DSM for longitudinal-channel output tracking of a time-varying
ALV are demonstrated in comparison to those of
CSM control.
Sections 2 and 3 present CSM and DSM control theory,
respectively. The ALV dynamics are presented in Sec. 4. The CSM
and DSM autopilots are designed and simulation results
demonstrated in Sec. 5. Section 6 is devoted to the conclusion.
II. CSM CONTROL THEORY
Consider the following dynamic model:
ẋ = Ax + Bu + F(x, t)    (1)
where x(t) ∈ Rⁿ and u(t) are the state vector and the control vector, respectively; A and B are constant matrices; F(x, t) is a bounded disturbance. It is assumed that {A, B} is a controllable pair and rank(B) = m. The conventional sliding surface S(x, t) can be defined as:
S(x, t) = (d/dt + λ)^(n−1) e    (2)
where λ is a positive real constant and e is the tracking error:
e = x − xd    (3)
where xd is the nominal trajectory and e(0) = 0. In this study, it is assumed that n = 2 and the sliding surface can be determined in terms of the error as follows:
S = ė + λe    (4)
The tracking error will asymptotically reach zero with a control law of the below form:
u = { u⁺  if S(x, t) > 0 ;  u⁻  if S(x, t) < 0 }    (5)
So, the system dynamics moves from any initial state toward the sliding hyperplane in a certain amount of time and remains on it thereafter [4]. In other words, the existence of the conventional sliding mode on the designed sliding manifold is provided.
This two-stage design becomes simpler for systems in so-called regular form. Since rank{B} = m, matrix B in Eq. (1) can be partitioned as:
B = [B₁ ; B₂]    (6)
where B₁ ∈ R^((n−m)×m) and B₂ ∈ R^(m×m) with det(B₂) ≠ 0. The nonsingular coordinate transformation
x̄ = Tx = [x₁ ; x₂],  T ∈ R^(n×n)    (7)
converts the system Equation (1) to regular form:
ẋ₁ = A₁₁x₁ + A₁₂x₂ + F₁
ẋ₂ = A₂₁x₁ + A₂₂x₂ + u + F₂    (8)
where x₁ ∈ R^(n−m), x₂ ∈ R^m, the A_ij are constant matrices for i, j = 1, 2, F₁ is the unmatched disturbance and F₂ is the matched disturbance.
The Lyapunov direct method is used to obtain the control law. A candidate function is selected as:
V = (1/2) sᵀ s    (9)
with V(0) = 0 and V(s) > 0. The condition guaranteeing an ideal sliding motion is the η-reachability condition given by:
(1/2) d(s²)/dt ≤ −η |s|    (10)
where η is a small positive constant. Therefore, a control input can be chosen as:
u = −(λB)⁻¹ ( ė + λAx − ẋd + ρ sign(s) ) = u_eq + u_disc    (11)
where ρ is a positive real constant, u_eq is the continuous control which is called the "equivalent control", and u_disc is the discontinuous control that switches around the sliding surface, so that the system state synchronously moves on this manifold and toward the origin.

III. DSM CONTROL APPROACH
The main characteristic of the dynamic sliding mode is that it is a compensator. It means that DSM control designs the control law of each step based on the previous step data, and so the system may need some additional dynamics to improve the system and sliding mode stability besides the desired system response.
The dynamic sliding manifold can be modeled as a linear function in terms of some states and the tracking error as follows [7]:
Φ(x₂, e) = x₂ + W(s) e    (12)
where x₂ ∈ R^m, s = d/dt, W(s) = P(s)/Q(s), and P(s), Q(s) are polynomials in s. The operator W(s) has to be specified in order to provide the desired plant behavior as well as to reject the effects of the unmatched disturbance F₁.
To ensure the occurrence of the sliding mode on the sliding manifold (12), the discontinuous control should be designed. By using the Lyapunov function (9) and the reachability condition (10), the control law can be given as:
u = −A₂₁x₁ − A₂₂x₂ − W(s)ė − ρ sign(Φ) = u_eq + u_disc    (13)
The existence of the sliding mode on the dynamic sliding surface can be proven as follows: if Φ(x₂, e) ≠ 0, the derivative of (9) can be identified as:
V̇ = Φᵀ Φ̇ = Φᵀ ( ẋ₂ + W(s) ė )    (14)
Substituting (8) and (13) into (14) yields the following expression:
V̇ = −ρ Φᵀ sign(Φ) < 0,  ρ > 0    (15)
Consequently, the surface Φ = 0 is attractive and DSM control provides asymptotic stability to the states of the tracking error dynamics.
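To make the regular-form construction (6)–(8) concrete, the following sketch builds a nonsingular transformation T for an illustrative (A, B) pair using an orthogonal complement of B; the matrices and the QR-based choice of T are assumptions for demonstration, not taken from the paper.

import numpy as np

# Illustrative system: n = 3 states, m = 1 input (not the ALV data).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -1.0]])
B = np.array([[0.0], [1.0], [2.0]])
n, m = B.shape

# Build T from an orthogonal complement of B: the first n-m rows of T
# annihilate B, so T @ B = [[0], [B2]] with det(B2) != 0.
Q, _ = np.linalg.qr(B, mode="complete")      # columns of Q span R^n
T = np.vstack([Q[:, m:].T, Q[:, :m].T])      # complement first, range(B) last

TB = T @ B
print("T @ B =\n", np.round(TB, 6))          # top (n-m) x m block is ~0
A_bar = T @ A @ np.linalg.inv(T)             # regular-form system matrix
A11, A12 = A_bar[:n-m, :n-m], A_bar[:n-m, n-m:]
A21, A22 = A_bar[n-m:, :n-m], A_bar[n-m:, n-m:]
print("A11 =\n", np.round(A11, 3))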
IV. THE EQUATIONS OF MOTION FOR ALV
Newton's second law can be employed to extract the motion equations of an ALV. Assuming a rigid airframe for the vehicle, the 6DOF equations of motion are obtained as follows [2]:
Fx = m(U̇ + qW − rV)
Fy = m(V̇ + rU − pW)
Fz = m(Ẇ + pV − qU)
Mx = Ix ṗ
My = Iy q̇ + (Ix − Iy) p r
Mz = Iz ṙ + (Iy − Ix) p q    (14)
Since attitude control systems of an ALV are usually simplified into a linear set of dynamical equations, a linear ALV model is developed in this article. Considering small perturbations, the linearized equations of motion can be obtained as follows [1]:
v̇z = Zv vz + Zq q + Zθ θ + Zδe δe
q̇ = Mvz vz + Mq q + Mδe δe
v̇y = Zv vy + Zr r + Zθ θ + Zδr δr
ṙ = Mvy vy + Mr r + Mδr δr
ṗ = Mp p + Mδa δa    (15)
where Z and M are dynamic coefficients and δ is the deflection of the thrust vector.
Since the control objective is to track the guidance command in the pitch channel, the first two equations will be regarded as the required dynamics and the other three, belonging to the yaw and roll channels, are waived. The time-varying coefficients of the pitch dynamics in Eq. (15) are shown in Figure 1.
The thrust vector deflection servo dynamics can be described as:
TF_servo = δ/δc = 1/(0.1s + 1)    (16)
with a rate limit of |dδ/dt| ≤ 25 deg/sec. Because the reference signal is the pitch rate, a rate gyro is selected as follows:
TF_gyro = (80π)² / (s² + (40π)s + (80π)²)    (17)

V. AUTOPILOT DESIGN AND SIMULATION RESULTS
In this section, both CSM and DSM controls are designed for the time-varying ALV pitch (longitudinal) channel and the excellent performance of DSM in comparison to that of CSM is demonstrated.

A. CSM Control Design
The goal is to generate the control δe to enforce state motion on the CSM:
S = θ̇e + K θe    (18)
where θe = θc − θ and K = const. is chosen in order to make the ALV track the commanded pitch rate qc, so that the state trajectory of the system asymptotically converges to the sliding manifold S = 0.
Using the Lyapunov function (9), its derivative is derived as:
V̇ = Sᵀ Ṡ = Sᵀ [ q̇c − Mvz vz − Mq q − Mδe δe + K θ̇e ]    (19)
By utilizing the equality form of (10) to ensure asymptotic stability of the system, the necessary control is given as:
δe = Mδe⁻¹ [ q̇c + K qe − Mvz vz − Mq q + ρ sign(S) ]    (20)
where K = 1 and ρ = 0.01 have been selected.
The control law (20) is discontinuous and will cause chattering on the manifold (18). To remove this undesired phenomenon, the discontinuous term sign(S) in Eq. (20) is replaced by the continuous term sat(S/ε), where ε is a small real constant whose value is chosen as 10⁻³ in this research.
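A minimal simulation sketch of the CSM pitch-channel law (18)–(20) with the boundary-layer substitution sign(S) → sat(S/ε) is given below; the constant dynamic coefficients, the commanded pitch-rate profile and the Euler integration are illustrative assumptions (the paper uses time-varying coefficients and an offline-designed program).

import numpy as np

def sat(x):
    return np.clip(x, -1.0, 1.0)

# Illustrative constant pitch-channel coefficients (the paper's are time-varying).
Zv, Zq, Zth, Zde = -0.5, 1.0, -0.1, 0.2
Mvz, Mq, Mde = -1.5, -0.8, -6.0
K, rho, eps = 1.0, 0.01, 1e-3

dt, tf = 1e-3, 10.0
t = np.arange(0.0, tf, dt)
qc = 0.05 * np.sin(0.5 * t)                 # assumed commanded pitch-rate program
qc_dot = np.gradient(qc, dt)

vz, q, theta, theta_c = 0.0, 0.0, 0.0, 0.0
for k in range(len(t)):
    theta_c += qc[k] * dt                   # commanded pitch angle
    theta_e = theta_c - theta               # pitch-angle tracking error
    S = (qc[k] - q) + K * theta_e           # sliding variable (18), theta_e_dot = qc - q
    # control law (20) with sat(S/eps) replacing sign(S)
    de = (1.0 / Mde) * (qc_dot[k] + K * (qc[k] - q) - Mvz * vz - Mq * q
                        + rho * sat(S / eps))
    # linearized pitch dynamics, Eq. (15), integrated with forward Euler
    vz += (Zv * vz + Zq * q + Zth * theta + Zde * de) * dt
    q += (Mvz * vz + Mq * q + Mde * de) * dt
    theta += q * dt

print("final pitch-rate error:", qc[-1] - q)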
Figure 1. Longitudinal dynamic coefficients

B. DSM Control Design
Following the procedure in [7], the design procedure for the dynamic sliding manifold is presented. Note that to transform the ALV longitudinal equations of motion to the regular form (8) and to avoid singularity, the servo dynamic equation (16) is added to the plant equations. Thus, δ is converted to one of the system states and δc will be the control effort.
Based on Eq. (12), the following expression for the dynamic sliding manifold is defined:
Φ = δ + W(s) e    (21)
where W(s) can be selected as below:
W(s) = (a₁s² + a₂s + a₃) / (b₁s² + b₂s)    (22)
where a₁, a₂, a₃, b₁, b₂ are real coefficients determined at each iteration. In order to obtain these coefficients, the tracking error is obtained as:
e = qc / (1 + W(s) G(s))    (23)
where G(s) is the transfer function of q(s) relative to δ(s). By comparing the characteristic equation of (23) with the integral-of-time-multiplied-by-absolute-error (ITAE) criterion
s⁵ + 2.8 wn s⁴ + 5 wn² s³ + 5.5 wn³ s² + 3.4 wn⁴ s + wn⁵ = 0    (24)
where wn = 10 Hz is chosen, the related parameters are identified at each moment. Also, the control δc is given as follows:
δc = −ρ sat(Φ/ε)    (25)
where ρ = 1 and ε = 10⁻³ are considered.
In this paper, the pitch rate program has been designed offline, as shown in Figure 2, and it is desired to be tracked during the flight envelope. Since sliding mode control can properly accommodate matched disturbances [4], the simulation was run in the presence of the unmatched disturbances depicted in Figure 3, such that f11 and f12 are exerted on the first and second expressions in Eq. (15), respectively. The simulation results with dynamic and conventional SMC are illustrated in Figure 4 and Figure 5, respectively. It is shown that DSM, unlike CSM, can follow the nominal trajectory very closely and withstands the unmatched perturbations.
In this research, it is assumed that a₁ = b₁ = 1, and the other three coefficients are determined by the corresponding algorithm at each moment during the ALV flight time; their variations are illustrated in Figure 6.

Figure 2. Desired pitch and pitch rate to be tracked
Figure 3. Unmatched disturbance profiles
Figure 4. Pitch angle error, pitch rate error and controller command obtained from CSM autopilot
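For completeness, a discrete-time sketch of the actuator model (16) with its 25 deg/s rate limit is shown below; the step size, step command and Euler update are illustrative choices, not from the paper.

import numpy as np

def servo_step(delta, delta_c, dt, tau=0.1, rate_limit_deg=25.0):
    # One Euler step of the first-order actuator (16), delta/delta_c = 1/(0.1 s + 1),
    # with the stated |d(delta)/dt| <= 25 deg/s rate limit.
    rate = (delta_c - delta) / tau
    rate = np.clip(rate, -np.radians(rate_limit_deg), np.radians(rate_limit_deg))
    return delta + rate * dt

# Example: response of the rate-limited servo to a 10 deg step command.
dt, delta, delta_c = 1e-3, 0.0, np.radians(10.0)
for _ in range(1000):
    delta = servo_step(delta, delta_c, dt)
print("deflection after 1 s [deg]:", np.degrees(delta))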
Figure 5. Pitch angle error, pitch rate error and controller command obtained from DSM autopilot
Figure 6. Variations of the W(s) coefficients calculated on-line

VI. CONCLUSION
In this research, commanded pitch rate tracking in the presence of unmatched disturbances for the atmospheric flight of a time-varying ALV was considered in the SMC framework. Both conventional and dynamic SMC were designed and the closed-loop operations of these methods were compared. The results show that dynamic SMC can accommodate unmatched disturbances and its output tracking errors are much smaller than those of CSM, while conventional SMC does not operate properly and cannot satisfy the system performance requirements. The simple and straightforward design procedure, together with the encouraging robustness against unmatched disturbances, invites further application of this approach.
REFERENCES
[1] V. V. Malyshev, “Aerospace vehicle Control: Modern
Theory and Applications,” IAE and MAI Russia
Cooperation, Feb. 1985.
[2] J. H. Blacklock, “Automatic Control of Aircraft and
Missiles,” Wiley, 1991.
[3] V. V. Malyshev, and M. N. Krasilshikov, “Aerospace
vehicle Navigation and Control,” FAPESP Publication,
Sao-Paulo, Brazil, 1996.
[4] V. I. Utkin, “Sliding Modes in Control and
Optimization,” 111-130, Springer-Verlag, Berlin, 1992.
[5] M. Esfahanian, B. Ebrahimi, J. Roshanian, and M.
Bahrami, “Time-Varying Dynamic Sliding Mode Control
of a Nonlinear Aerospace Launch Vehicle,” Canadian
Aeronautics and Space Journal, Vol. 57, No. 2, 2011.
[6] Y. B. Shtessel, “Nonlinear Sliding Manifolds in
Nonlinear Output Tracking Problem,” In the Proceeding of
the American Control Conference Seattle, Washington,
June, 1995.
[7] Y. B. Shtessel, “Nonlinear Output Tracking in
Conventional and Dynamic Sliding Manifolds,” IEEE
Transactions on Automatic Control, Vol. 42, No. 9, 1997,
pp. 1282-1286.
[8] A. J. Koshkouei, K. J. Burnham and A. S. I. Zinober,
“Dynamic Sliding Mode Control Design,” IEEE
Proceedings-Control Theory and Applications, Vol. 152,
No. 4, pp. 392_396, 2005.
[9] M. Esfahanian, H. Zhuang, and N. Erdol, “On
Contour-Based Classification of Dolphin Whistles by
Type,” Journal of Applied Acoustics, Vol. 76, pp. 274-279,
Feb 2014.
[10] W. C. L. Filho and L. Hsu, “Adaptive Control of
Missile Attitude,” IFAC Adaptive Systems in Control and
Signal Processing, Lund, Sweden, 1995.
[11] M. Esfahanian, H. Zhuang, and N. Erdol, and E.
Gerstein, “Comparison of two methods for detection of
north Atlantic right whale upcalls,” 2015 IEEE European
Signal Processing Conference, Aug 31-Sep 4, Nice, France,
2015.
[12] A. Nayir, E. Rosolowski, and L. Jedut, “New trends in
wind energy modeling and wind turbine control,” IJTPE
journal, Vol. 2, No. 4, pp. 51-59, September 2010.
GRADED INTEGRAL CLOSURES
FRED ROHRER
arXiv:1302.1659v2 [math.AC] 8 Apr 2013
Abstract. It is investigated how graded variants of integral and complete integral
closures behave under coarsening functors and under formation of group algebras.
Introduction
Let G be a group, let R be a G-graded ring, and let S be a G-graded R-algebra.
(Throughout, monoids, groups and rings are understood to be commutative, and algebras
are understood to be commutative, unital and associative.) We study a graded variant
of (complete) integral closure, defined as follows: We denote by Int(R, S) (or CInt(R, S),
resp.) the G-graded sub-R-algebra of S generated by the homogeneous elements of S that
are (almost) integral over R and call this the (complete) integral closure of R in S. If R is
entire (as a G-graded ring, i.e., it has no homogeneous zero-divisors) we consider its graded
field of fractions Q(R), i.e., the G-graded R-algebra obtained by inverting all nonzero
homogeneous elements, and then Int(R) = Int(R, Q(R)) (or CInt(R) = CInt(R, Q(R)),
resp.) is called the (complete) integral closure of R. These constructions behave similar
to their ungraded relatives, as long as we stay in the category of G-graded rings. But
the relation between these constructions and their ungraded relatives, and more generally
their behaviour under coarsening functors, is less clear; it is the main object of study in
the following.
For an epimorphism of groups ψ : G ։ H we denote by •[ψ] the ψ-coarsening functor
from the category of G-graded rings to the category of H-graded rings. We ask for
conditions ensuring that ψ-coarsening commutes with relative (complete) integral closure,
i.e., Int(R, S)[ψ] = Int(R[ψ] , S[ψ] ) or CInt(R, S)[ψ] = CInt(R[ψ] , S[ψ] ), or – if R and R[ψ] are
entire – that ψ-coarsening commutes with (complete) integral closure, i.e., Int(R)[ψ] =
Int(R[ψ] ) or CInt(R)[ψ] = CInt(R[ψ] ). Complete integral closure being a more delicate
notion than integral closure, it is not astonishing that the questions concerning the former
are harder than the other ones. Furthermore, the case of integral closures of entire graded
rings in their graded fields of fractions turns out to be more complicated than the relative
case, because Q(R)[ψ] almost never equals Q(R[ψ] ), hence in addition to the coarsening
we also change the overring in which we form the closure.
The special case H = 0 of parts of these questions was already studied by several
authors. Bourbaki ([3, V.1.8]) treats torsionfree groups G, Van Geel and Van Oystaeyen
([13]) consider G = Z, and Swanson and Huneke ([11, 2.3]) discuss the case that G is of
finite type. Our main results, generalising these partial results, are as follows.
(2010 Mathematics Subject Classification: Primary 13B22; Secondary 13A02, 16S34. Key words and
phrases: graded ring, coarsening, integral closure, integrally closed ring, complete integral closure,
completely integrally closed ring, group algebra. The author was supported by the Swiss National
Science Foundation.)
Theorem 1 Let ψ : G ։ H be an epimorphism of groups and let R be a G-graded ring.
a) If Ker(ψ) is contained in a torsionfree direct summand of G then ψ-coarsening
commutes with relative (complete) integral closure.
b) Suppose that R is entire. If G is torsionfree, or if Ker(ψ) is contained in a torsionfree direct summand of G and the degree support of R generates G, then ψ-coarsening
commutes with integral closure.
The questions above are closely related to the question of how (complete) integral closure behaves under formation of group algebras. If F is a group, there is a canonical
G ⊕ F -graduation on the algebra of F over R; we denote the resulting G ⊕ F -graded
ring by R[F ]. We ask for conditions ensuring that formation of graded group algebras
commutes with relative (complete) integral closure, i.e., Int(R, S)[F ] = Int(R[F ], S[F ])
or CInt(R, S)[F ] = CInt(R[F ], S[F ]), or – if R is entire – that formation of graded
group algebras commutes with (complete) integral closure, i.e., Int(R)[F ] = Int(R[F ])
or CInt(R)[F ] = CInt(R[F ]). Our main results are the following.
Theorem 2 Let G be a group and let R be a G-graded ring. Formation of graded group
algebras over R commutes with relative (complete) integral closure. If R is entire then
formation of graded group algebras over R commutes with (complete) integral closure.
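The following small example (added here for orientation; it is not part of the paper) illustrates the graded notion of integrality behind Theorems 1 and 2:

% Illustrative example: let K be a field, G = \mathbb{Z}, and
% R = K[t^2] \subseteq S = K[t] with \deg t = 1. The homogeneous element t
% satisfies the monic equation
\[
  X^2 - t^2 = 0, \qquad t^2 \in R^{\mathrm{hom}},
\]
% so t is integral over R and \mathrm{Int}(R,S) = S. Theorem 2 then asserts
\[
  \mathrm{Int}(R,S)[F] \;=\; \mathrm{Int}(R[F], S[F]) \;=\; S[F]
\]
% for every group F.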
It is maybe more interesting to consider a coarser graduation on group algebras, namely
the G-graduation obtained from R[F ] by coarsening with respect to the canonical projection G ⊕ F ։ G; we denote the resulting G-graded R-algebra by R[F ][G] and call it the
coarsely graded algebra of F over R. We ask for conditions ensuring that formation of
coarsely graded group algebras commutes with relative (complete) integral closure, i.e.,
Int(R, S)[F ][G] = Int(R[F ][G] , S[F ][G] ) or CInt(R, S)[F ][G] = CInt(R[F ][G] , S[F ][G]), or – if
R and R[F ][G] are entire – that formation of coarsely graded group algebras commutes
with (complete) integral closure, i.e., Int(R)[F ][G] = Int(R[F ][G] ) or CInt(R)[F ][G] =
CInt(R[F ][G] ). Ungraded variants of these questions (i.e., for G = 0) for a torsionfree
group F were studied extensively by Gilmer ([6, §12]). On use of Theorems 1 and 2 we
will get the following results.
Theorem 3 Let G and F be groups and let R be a G-graded ring. Formation of the
coarsely graded group algebra of F over R commutes with relative (complete) integral
closure if and only if F is torsionfree. If R is entire and F is torsionfree then formation
of the coarsely graded group algebra of F over R commutes with integral closure.
Some preliminaries on graded rings, coarsening functors, and algebras of groups are
collected in Section 1. Relative (complete) integral closures are treated in Section 2,
and (complete) integral closures of entire graded rings in their graded fields of fractions
are treated in Section 3. Our notation and terminology follows Bourbaki’s Éléments de
mathématique.
Before we start, a remark on notation and terminology may be appropriate. Since we try
to never omit coarsening functors (and in particular forgetful functors) from our notations
it seems conceptually better and in accordance with the general yoga of coarsening to not
furnish names of properties of G-graded rings or symbols denoting objects constructed
from G-graded rings with additional symbols that highlight the dependence on G or on
the graded structure. For example, if R is a G-graded ring then we will denote by Nzd(R)
(instead of, e.g., NzdG (R)) the monoid of its homogeneous non-zerodivisors, and we call
R entire (instead of, e.g., G-entire) if Nzd(R) consists of all homogeneous elements of R
different from 0. Keeping in mind that in this setting the symbol “R” always denotes
a G-graded ring (and never, e.g., its underlying ungraded ring), this should not lead to
confusions (whereas mixing up different categories might do so).
Throughout the following let G be a group.
1. Preliminaries on graded rings
First we recall our terminology for graded rings and coarsening functors.
(1.1) By a G-graded ring we mean a pair (R, (Rg )g∈G ) consisting of a ring R and a
family (Rg )g∈G of subgroups of the additive group of R whose direct sum equals the
additive group of R such that Rg Rh ⊆ Rg+h for g, h ∈ G. If no confusion can arise we
denote a G-graded ring (R, (Rg )g∈G ) just by R. Accordingly, for a G-graded ring R and
S
g ∈ G we denote by Rg the component of degree g of R. We set Rhom := g∈G Rg and call
degsupp(R) := {g ∈ G | Rg 6= 0} the degree support of R. We say that R has full support
if degsupp(R) = G and that R is trivially G-graded if degsupp(R) = {0}. Given G-graded
rings R and S, by a morphism of G-graded rings from R to S we mean a morphism of
rings u : R → S such that u(Rg ) ⊆ Sg for g ∈ G. By a G-graded R-algebra we mean
a G-graded ring S together with a morphism of G-graded rings R → S. We denote by
GrAnnG the category of G-graded rings with this notion of morphism. This category has
inductive and projective limits. In case G = 0 we canonically identify GrAnnG with the
category of rings.
(1.2) Let ψ : G ։ H be an epimorphism of groups. For a G-graded ring R we define an
H-graded ring R[ψ] , called the ψ-coarsening of R; its underlying ring is the ring underlying
L
R, and its H-graduation is given by (R[ψ] )h =
g∈ψ−1 (h) Rg for h ∈ H. A morphism
u : R → S of G-graded rings can be considered as a morphism of H-graded rings R[ψ] →
S[ψ] , and as such it is denoted by u[ψ] . This gives rise to a functor •[ψ] : GrAnnG → GrAnnH .
This functor has a right adjoint, hence commutes with inductive limits, and it has a left
adjoint if and only if Ker(ψ) is finite ([10, 1.6; 1.8]). For a further epimorphism of groups
ϕ : H ։ K we have •[ϕ◦ψ] = •[ϕ] ◦ •[ψ] .
(1.3) We denote by
that G = limU ∈F U.
−→
G
FG the set of subgroups of finite type of G, ordered by inclusion, so
4
FRED ROHRER
(1.4) Let F ⊆ G be a subgroup. For a G-graded ring R we define an F -graded ring R(F )
L
with underlying ring the subring g∈F Rg ⊆ R and with F -graduation (Rg )g∈F . For an
F -graded ring S we define a G-graded ring S (G) with underlying ring the ring underlying
(G)
(G)
S and with G-graduation given by Sg = Sg for g ∈ F and Sg = 0 for g ∈ G \ F . If R
is a G-graded ring and F is a set of subgroups of G, ordered by inclusion, whose inductive
limit is G, then R = limF ∈F ((R(F ) )(G) ).
−→
The next remark recalls the two different notions of graded group algebras and, more
general, of graded monoid algebras.
(1.5) Let M be a cancellable monoid, let F be its group of differences, and let R be a
G-graded ring. The algebra of M over R, furnished with its canonical G ⊕ F -graduation,
is denoted by R[M] and called the finely graded algebra of M over R, and we denote by
(ef )f ∈F its canonical basis. So, for (g, f ) ∈ G ⊕ F we have R[M](g,f ) = Rg ef . Denoting
by π : G ⊕ F ։ G the canonical projection we set R[M][G] := R[M][π] and call this
the coarsely graded algebra of M over R. If S is a G-graded R-algebra then S[M] is
a G ⊕ F -graded R[M]-algebra, and S[M][G] is a G-graded R[M][G] -algebra. We have
R[F ] = limU ∈F R[U](G⊕F ) and R[F ][G] = limU ∈F R[U][G] (1.3).
−→
−→
F
F
We will need some facts about graded variants of simplicity (i.e., the property of “being
a field”) and entirety. Although they are probably well-known, we provide proofs for the
readers convenience. Following Lang we use the term “entire” instead of “integral” (to
avoid confusion with the notion of integrality over some ring which is central in this
article) or “domain” (to avoid questions as whether a “graded domain” is the same as
a “domain furnished with a graduation”), and we use the term “simple” (which is more
common in noncommutative algebra) instead of “field” for similar reasons.
(1.6) Let R be a G-graded ring. We denote by R∗ the multiplicative group of invertible
homogeneous elements of R and by Nzd(R) the multiplicative monoid of homogeneous
non-zerodivisors of R. We call R simple if R∗ = Rhom \ 0, and entire if Nzd(R) = Rhom \ 0.
If R is entire then the G-graded ring of fractions Nzd(R)−1 R is simple; we denote it by
Q(R) and call it the (graded) field of fractions of R. If ψ : G ։ H is an epimorphism of
groups and R[ψ] is simple or entire, then R is so. Let F ⊆ G be a subgroup. If R is simple
or entire, then R(F ) is so, and an F -graded ring S is simple or entire if and only if S (G) is
so.
(1.7) Let I be a nonempty right filtering preordered set, and let ((Ri )i∈I , (ϕij )i≤j ) be an
inductive system in GrAnnG over I. Analogously to [2, I.10.3 Proposition 3] we see that if
Ri is simple or entire for every i ∈ I, then limi∈I Ri is simple or entire. If Ri is entire for
−→
every i ∈ I and ϕij is a monomorphism for all i, j ∈ I with i ≤ j, then by [8, 0.6.1.5] we
get an inductive system (Q(Ri ))i∈I in GrAnnG over I with limi∈I Q(Ri ) = Q(limi∈I Ri ).
−→
−→
(1.8) Let F ⊆ G be a subgroup and let ≤ be an ordering on F that is compatible
with its structure of group. The relation “g − h ∈ F≤0 ” is the finest ordering on G that
is compatible with its structure of group and induces ≤ on F ; we call it the canonical
GRADED INTEGRAL CLOSURES
5
extension of ≤ to G. If ≤ is a total ordering then its canonical extension to G induces a
total ordering on every equivalence class of G modulo F .
(1.9) Lemma Let ψ : G ։ H be an epimorphism of groups such that Ker(ψ) is torhom
sionfree, let R be an entire G-graded ring, and let x, y ∈ R[ψ]
\ 0 with xy ∈ Rhom . Then,
x, y ∈ Rhom and xy 6= 0.
Proof. (cf. [2, II.11.4 Proposition 8]) By [2, II.11.4 Lemme 1] we can choose a total ordering
on Ker(ψ) that is compatible with its structure of group. Let ≤ denote its canonical
extension to G (1.8). Let h := deg(x) and h′ := deg(y). There exist strictly increasing
finite sequences (gi )ni=0 in ψ −1 (h) and (gj′ )m
in ψ −1 (h′ ), xi ∈ Rgi \ 0 for i ∈ [0, n], and
j=0
Pn
P
yj ∈ Rgj′ \ 0 for j ∈ [0, m] such that x = i=0 xi and y = m
j=0 yj . If k ∈ [0, n] and
′
′
l ∈ [0, m] with gk + gl = gn + gm then k = n and l = m by [2, VI.1.1 Proposition 1]. This
′
implies that the component of xy of degree gn + gm
equals xn ym 6= 0, so that xy 6= 0. As
hom
′
′
, hence n = m = 0 and therefore
x0 y0 6= 0 and xy ∈ R
it follows g0 + g0 = gn + gm
hom
x, y ∈ R .
(1.10) Corollary Let ψ : G ։ H be an epimorphism of groups such that Ker(ψ) is
∗
.
torsionfree, and let R be an entire G-graded ring. Then, R∗ = R[ψ]
hom
hom
\ 0 with
\ 0 then there exists y ∈ R[ψ]
Proof. Clearly, R∗ ⊆ (R[ψ] )∗ . If x ∈ (R[ψ] )∗ ⊆ R[ψ]
hom
hom
∗
xy = 1 ∈ R , so 1.9 implies x ∈ R , hence x ∈ R .
(1.11) Let R be a G-graded ring and let F be a group. It is readily checked that R[F ]
is simple or entire if and only if R is so. Analogously to [6, 8.1] it is seen that R[F ][G] is
entire if and only if R is entire and F is torsionfree. Together with 1.10 it follows that
R[F ][G] is simple if and only if R is simple and F = 0.
(1.12) Proposition Let ψ : G ։ H be an epimorphism of groups.
a) The following statements are equivalent: (i) ψ is an isomorphism; (ii) ψ-coarsening
preserves simplicity.
b) The following statements are equivalent: (i) Ker(ψ) is torsionfree; (ii) ψ-coarsening
preserves entirety; (iii) ψ-coarsening maps simple G-graded rings to entire H-graded
rings.1
Proof. If K is a field and R = K[Ker(ψ)](G) , then R is simple and R[ψ] is trivially Hgraded, hence R[ψ] is simple or entire if and only if R[0] is so (1.11, 1.6). If Ker(ψ) 6= 0
then R[0] is not simple, and if Ker(ψ) is not torsionfree then R[0] is not entire (1.11). This
proves a) and the implication (iii)⇒(i) in b). The remaining claims follow from 1.9.
(1.13) A G-graded ring R is called reduced if 0 is its only nilpotent homogeneous element.
With arguments similar to those above one can show that statements (i)–(iii) in 1.12 b) are
also equivalent to the following: (iv) ψ-coarsening preserves reducedness; (v) ψ-coarsening
maps simple G-graded rings to reduced H-graded rings. We will make no use of this fact.
1In
case H = 0 the implication (i)⇒(ii) is [2, II.11.4 Proposition 8].
6
FRED ROHRER
Finally we make some remarks on a graded variant of noetherianness.
(1.14) Let R be a G-graded ring. We call R noetherian if ascending sequences of Ggraded ideals of R are stationary, or – equivalently – if every G-graded ideal of R is of
finite type. Analogously to the ungraded case one can prove a graded version of Hilbert’s
Basissatz: If R is noetherian then so are G-graded R-algebras of finite type. If ψ : G ։ H
is an epimorphism of groups and R[ψ] is noetherian, then R is noetherian. Let F ⊆ G be
a subgroup. It follows from [4, 2.1] that if R is noetherian then so is R(F ) . Moreover, an
F -graded ring S is noetherian if and only if S (G) is so.
If F is a group then it follows from [4, 2.1] and the fact that ef ∈ R[F ]∗ for f ∈ F that
R[F ] is noetherian if and only if R is so. Analogously to [6, 7.7] one sees that R[F ][G] is
noetherian if and only if R is noetherian and F is of finite type. More general, it follows
readily from a result by Goto and Yamagishi ([4, 1.1]) that G is of finite type if and only if
ψ-coarsening preserves noetherianness for every epimorphism of groups ψ : G ։ H. (This
was proven again two years later by Nǎstǎsescu and Van Oystaeyen ([9, 2.1]).)
2. Relative integral closures
We begin this section with basic definitions and first properties of relative (complete)
integral closures.
(2.1) Let R be a G-graded ring and let S be a G-graded R-algebra. An element x ∈ S hom
is called integral over R if it is a zero of a monic polynomial in one indeterminate with
coefficients in Rhom . This is the case if and only if x, considered as an element of S[0] , is
integral over R[0] , as is seen analogously to the first paragraph of [3, V.1.8]. Hence, using
[3, V.1.1 Théorème 1] we see that for x ∈ S hom the following statements are equivalent:
(i) x is integral over R; (ii) the G-graded R-module underlying the G-graded R-algebra
R[x] is of finite type; (iii) there exists a G-graded sub-R-algebra of S containing R[x]
whose underlying G-graded R-module is of finite type.
An element x ∈ S hom is called almost integral over R if there exists a G-graded sub-Rmodule T ⊆ S of finite type containing R[x]. This is the case if and only if x, considered
as an element of S[0] , is almost integral over R[0] . Indeed, this condition is obviously
necessary. It is also sufficient, for if T ⊆ S[0] is a sub-R[0] -module of finite type containing
R[0] [x] then the G-graded sub-R-module T ′ ⊆ S generated by the set of homogeneous
components of elements of T is of finite type and contains T , hence R[x]. It follows from
the first paragraph that if x ∈ S hom is integral over R then it is almost integral over R;
analogously to [3, V.1.1 Proposition 1 Corollaire] it is seen that the converse is true if R
is noetherian (1.14).
(2.2) Let R be a G-graded ring and let S be a G-graded R-algebra. The G-graded
sub-R-algebra of S generated by the set of elements of S hom that are (almost) integral
over R is denoted by Int(R, S) (or CInt(R, S), resp.) and is called the (complete) integral
closure of R in S. We have Int(R, S) ⊆ CInt(R, S), with equality if R is noetherian
(2.1). For an epimorphism of groups ψ : G ։ H we have Int(R, S)[ψ] ⊆ Int(R[ψ] , S[ψ] ) and
CInt(R, S)[ψ] ⊆ CInt(R[ψ] , S[ψ] ) (2.1).
GRADED INTEGRAL CLOSURES
7
Let R′ denote the image of R in S. We say that R is (completely) integrally closed in
S if R′ = Int(R, S) (or R′ = CInt(R, S), resp.), and that S is (almost) integral over R if
Int(R, S) = S (or CInt(R, S) = S, resp.). If R is completely integrally closed in S then
it is integrally closed in S, and if S is integral over R then it is almost integral over R;
the converse statements are true if R is noetherian. If ψ : G ։ H is an epimorphism of
groups, then S is (almost) integral over R if and only if S[ψ] is (almost) integral over R[ψ] ,
and if R[ψ] is (completely) integrally closed in S[ψ] then R is (completely) integrally closed
in S. If G ⊆ F is a subgroup then Int(R, S)(F ) = Int(R(F ) , S (F ) ) and CInt(R, S)(F ) =
CInt(R(F ) , S (F ) ), hence R is (completely) integrally closed in S if and only if R(F ) is
(completely) integrally closed in S (F ) .
From [3, V.1.1 Proposition 4 Corollaire 1] and [12, §135, p. 180]² we know that sums and
products of elements of S[0] that are (almost) integral over R[0] are again (almost) integral
over R[0] . Hence, Int(R, S)hom (or CInt(R, S)hom , resp.) equals the set of homogeneous
elements of S that are (almost) integral over R, and thus Int(R, S) (or CInt(R, S), resp.)
is (almost) integral over R by the above. Moreover, Int(R, S) is integrally closed in S by
[3, V.1.2 Proposition 7]. One should note that CInt(R, S) is not necessarily completely
integrally closed in S, not even if R is entire and S = Q(R) ([7, Example 1]).
²Note that van der Waerden calls “integral” what we call “almost integral”.
(2.3) Suppose we have a commutative diagram of G-graded rings
$$\begin{array}{ccc} R & \longrightarrow & \overline{R} \\ \downarrow & & \downarrow \\ S & \overset{h}{\longrightarrow} & \overline{S}. \end{array}$$
If x ∈ S^hom is (almost) integral over R, then h(x) ∈ $\overline{S}^{hom}$ is (almost) integral over $\overline{R}$
(2.1, [3, V.1.1 Proposition 2], [5, 13.5]). Hence, if the diagram above is cartesian and $\overline{R}$
is (completely) integrally closed in $\overline{S}$, then R is (completely) integrally closed in S.
(2.4) Let R be a G-graded ring, let S be a G-graded R-algebra, and let T ⊆ R^hom
be a subset. Analogously to [3, V.1.5 Proposition 16] one shows that T^{-1}Int(R, S) =
Int(T^{-1}R, T^{-1}S). Hence, if R is integrally closed in S then T^{-1}R is integrally closed in
T^{-1}S.
Note that there is no analogous statement for complete integral closures. Although
T^{-1}CInt(R, S) ⊆ CInt(T^{-1}R, T^{-1}S) by 2.3, this need not be an equality. In fact, by [3,
V.1 Exercice 12] there exists an entire ring R that is completely integrally closed in Q(R)
and a subset T ⊆ R \ 0 such that Q(R) is the complete integral closure of T^{-1}R.
(2.5) Let I be a right filtering preordered set, and let (u_i)_{i∈I} : (R_i)_{i∈I} → (S_i)_{i∈I} be a
morphism of inductive systems in GrAnn^G over I. By 2.3 we have inductive systems
(Int(R_i, S_i))_{i∈I} and (CInt(R_i, S_i))_{i∈I} in GrAnn^G over I, and we can consider the sub-$\varinjlim_{i\in I} R_i$-algebras
$$\varinjlim_{i\in I} \mathrm{Int}(R_i, S_i) \subseteq \varinjlim_{i\in I} \mathrm{CInt}(R_i, S_i) \subseteq \varinjlim_{i\in I} S_i$$
and compare them with the sub-$\varinjlim_{i\in I} R_i$-algebras
$$\mathrm{Int}(\varinjlim_{i\in I} R_i, \varinjlim_{i\in I} S_i) \subseteq \mathrm{CInt}(\varinjlim_{i\in I} R_i, \varinjlim_{i\in I} S_i) \subseteq \varinjlim_{i\in I} S_i.$$
Analogously to [8, 0.6.5.12] it follows $\varinjlim_{i\in I} \mathrm{Int}(R_i, S_i) = \mathrm{Int}(\varinjlim_{i\in I} R_i, \varinjlim_{i\in I} S_i)$, hence if
R_i is integrally closed in S_i for every i ∈ I then $\varinjlim_{i\in I} R_i$ is integrally closed in $\varinjlim_{i\in I} S_i$.
Note that there is no analogous statement for complete integral closures. Although
$\varinjlim_{i\in I} \mathrm{CInt}(R_i, S_i) \subseteq \mathrm{CInt}(\varinjlim_{i\in I} R_i, \varinjlim_{i\in I} S_i)$ by 2.3, this need not be an equality (but
cf. 3.2). In fact, by [3, V.1 Exercice 11 b)] there exist a field K and an increasing family
(R_n)_{n∈N} of subrings of K such that R_n is completely integrally closed in Q(R_n) = K for
every n ∈ N and that $\varinjlim_{n\in\mathbb{N}} R_n$ is not completely integrally closed in $Q(\varinjlim_{n\in\mathbb{N}} R_n) = K$.
Now we turn to the behaviour of finely and coarsely graded group algebras with respect
to relative (complete) integral closures.
(2.6) Theorem Let R be a G-graded ring.
a) Formation of finely graded group algebras over R commutes with relative (complete)
integral closure.
b) Let S be a G-graded R-algebra, and let F be a group. Then, R is (completely)
integrally closed in S if and only if R[F ] is (completely) integrally closed in S[F ].
Proof. a) Let F be a group, and let S be a G-graded R-algebra. Let x ∈ S[F ]hom . There
are s ∈ S hom and f ∈ F with x = sef . If x ∈ Int(R, S)[F ]hom then s ∈ Int(R, S)hom ,
hence s ∈ Int(R[F ], S[F ]) (2.3), and as ef ∈ Int(R[F ], S[F ]) it follows x = sef ∈
Int(R[F ], S[F ]). This shows Int(R, S)[F ] ⊆ Int(R[F ], S[F ]). Conversely, suppose that
x ∈ Int(R[F ], S[F ])hom . As ef ∈ R[F ]∗ it follows s ∈ Int(R[F ], S[F ])hom . So, there is a
finite subset E ⊆ S[F ]hom such that the G ⊕F -graded sub-R[F ]-algebra of S[F ] generated
by E contains R[F ][s]. As eh ∈ R[F ]∗ for every h ∈ F we can suppose E ⊆ S. If n ∈ N
then sn is an R[F ]-linear combination of products in E, and comparing the coefficients
of e0 shows that sn is an R-linear combination of products in E. Thus, R[s] is contained
in the G-graded sub-R-algebra of S generated by E, hence s ∈ Int(R, S), and therefore
x ∈ Int(R, S)[F ]. This shows Int(R[F ], S[F ]) ⊆ Int(R, S)[F ]. The claim for complete
integral closures follows analogously. b) follows immediately from a).
(2.7) Theorem Let F be a group. The following statements are equivalent:³
(i) Formation of coarsely graded algebras of F over G-graded rings commutes with relative integral closure;
(i') Formation of coarsely graded algebras of F over G-graded rings commutes with relative complete integral closure;
(ii) If R is a G-graded ring and S is a G-graded R-algebra, then R is integrally closed in S if and only if R[F][G] is integrally closed in S[F][G];
(ii') If R is a G-graded ring and S is a G-graded R-algebra, then R is completely integrally closed in S if and only if R[F][G] is completely integrally closed in S[F][G];
(iii) F is torsionfree.
³In case G = 0 the implication (iii)⇒(i) is [3, V.1 Exercice 24].
Proof. “(i)⇒(ii)” and “(i’)⇒(ii’)”: Immediately from 2.3.
“(ii)⇒(iii)” and “(ii’)⇒(iii)”: Suppose that F is not torsionfree. It suffices to find a
noetherian ring R and an R-algebra S such that R is integrally closed in S and that R[F ][0]
is not integrally closed in S[F ][0] , for then furnishing R and S with trivial G-graduations
it follows that R[F ][G] is not integrally closed in S[F ][G] (2.2). The ring Z is noetherian
and integrally closed in Q. By hypothesis there exist g ∈ F \ 0 and n ∈ N>1 with ng = 0,
so that $e_{ng} = 1 \in \mathbb{Q}[F]_{[0]}$. It is readily checked that $f := \sum_{i=0}^{n-1} \tfrac{1}{n} e_g^i \in \mathbb{Q}[F]_{[0]} \setminus \mathbb{Z}[F]_{[0]}$ is
idempotent. Setting $c := 1 + (n-1)e_g^{n-1} \in \mathbb{Z}[F]_{[0]}$ we get
$$d := fc = f + (n-1)f e_g^{n-1} = f + \sum_{i=0}^{n-1} \tfrac{n-1}{n} e_g^{i+n-1} = f + \sum_{i=0}^{n-1} \tfrac{n-1}{n} e_g^{i-1}$$
$$= f + \tfrac{n-1}{n} e_g^{-1} + \sum_{i=0}^{n-2} \tfrac{n-1}{n} e_g^{i} + \tfrac{n-1}{n} e_g^{n-1} - \tfrac{n-1}{n} e_g^{n-1} = f + (n-1)f = nf \in \mathbb{Z}[F]_{[0]}.$$
Therefore, $f^2 + (c-1)f - d = f + d - f - d = 0$ yields an integral equation for f over
Z[F][0]. Thus, Z[F][0] is not integrally closed in Q[F][0].
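For instance (a small illustration of the computation above, added here and not part of the original proof), take n = 2, so 2g = 0 and $e_g^2 = 1$ in Q[F][0]. Then
$$f = \tfrac{1}{2}(1 + e_g), \qquad f^2 = \tfrac{1}{4}(1 + 2e_g + e_g^2) = \tfrac{1}{2}(1 + e_g) = f,$$
$$c = 1 + e_g, \qquad d = fc = \tfrac{1}{2}(1 + e_g)^2 = 1 + e_g = 2f \in \mathbb{Z}[F]_{[0]},$$
so the element f, which does not lie in Z[F][0], satisfies the monic equation $X^2 + (c-1)X - d = 0$ with coefficients in Z[F][0].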
“(iii)⇒(i)” and “(iii)⇒(i’)”: Without loss of generality suppose G = 0 (2.2). Suppose that F is torsionfree, let R be a ring, and let S be an R-algebra. If n ∈ N then
Int(R, S)[N^n][0] = Int(R[N^n][0], S[N^n][0]) ([3, V.1.3 Proposition 12]), hence
Int(R[Z^n][0], S[Z^n][0]) = Int(R, S)[Z^n][0]
(2.4). This proves (i) in case F is of finite type, and so we get (i) in general by 1.5, 2.2
and 2.5. It remains to show (i’). The inclusion CInt(R, S)[F ][0] ⊆ CInt(R[F ][0] , S[F ][0] )
follows immediately from 2.2 and 2.3. We prove the converse inclusion analogously to [7,
Proposition 1]. Since F is torsionfree we can choose a total ordering ≤ on F that is compatible with its structure of group ([2, II.11.4 Lemme 1]). Let x ∈ CInt(R[F][0], S[F][0]).
There are n ∈ N, a family (x_i)_{i=1}^{n} in S \ 0, and a strictly increasing family (f_i)_{i=1}^{n} in F
with $x = \sum_{i=1}^{n} x_i e_{f_i}$. We prove by induction on n that x ∈ CInt(R, S)[F][0]. If n = 0 this
is clear. Suppose that n > 0 and that the claim is true for strictly smaller values of n.
There exists a finite subset P ⊆ S[F][0] with R[F][0][x] ⊆ ⟨P⟩_{R[F]}. Let Q denote the finite
set of coefficients of elements of P. Let k ∈ N. There exists a family (s_p)_{p∈P} in R[F] with
$x^k = \sum_{p\in P} s_p p$. By means of the ordering of F we see that $x_n^k$ is the coefficient of $e_{kf_n}$ in
$x^k$, hence the coefficient of $e_{kf_n}$ in $\sum_{p\in P} s_p p$. This latter being an R-linear combination
of Q we get $x_n^k \in \langle Q\rangle_R$. It follows R[x_n] ⊆ ⟨Q⟩_R, and thus x_n ∈ CInt(R, S). So, we
get $x_n e_{f_n} \in \mathrm{CInt}(R[F]_{[0]}, S[F]_{[0]})$ (2.2, 2.3), hence $x - x_n e_{f_n} \in \mathrm{CInt}(R[F]_{[0]}, S[F]_{[0]})$, thus
$x - x_n e_{f_n} \in \mathrm{CInt}(R, S)[F]_{[0]}$ by our hypothesis, and therefore x ∈ CInt(R, S)[F][0] as
desired.
(2.8) The proof above shows that 2.7 remains true if we replace “If R is a G-graded
ring” by “If R is a noetherian G-graded ring” in (ii) and (ii’).
The rest of this section is devoted to the study of the behaviour of relative (complete)
integral closures under arbitrary coarsening functors. Although we are not able to characterise those coarsenings with good behaviour, we identify two properties of the coarsening,
one that implies good behaviour of (complete) integral closures, and one that is implied
by good behaviour of (complete) integral closures.
(2.9) Let ψ : G ։ H be an epimorphism of groups. We say that ψ-coarsening commutes
with relative (complete) integral closure if Int(R, S)[ψ] = Int(R[ψ] , S[ψ] ) (or CInt(R, S)[ψ] =
CInt(R[ψ] , S[ψ] ), resp.) for every G-graded ring R and every G-graded R-algebra S.
(2.10) Proposition Let ψ : G ։ H be an epimorphism of groups. We consider the
following statements:
(1) ψ-coarsening commutes with relative integral closure;
(1’) ψ-coarsening commutes with relative complete integral closure;
(2) If R is a G-graded ring, S is a G-graded R-algebra, and x ∈ S[ψ]^hom, then x is integral over R[ψ] if and only if all its homogeneous components (with respect to the G-graduation) are integral over R;
(2') If R is a G-graded ring, S is a G-graded R-algebra, and x ∈ S[ψ]^hom, then x is almost integral over R[ψ] if and only if all its homogeneous components (with respect to the G-graduation) are almost integral over R;
(3) If R is a G-graded ring and S is a G-graded R-algebra, then R is integrally closed in
S if and only if R[ψ] is integrally closed in S[ψ] .
(3’) If R is a G-graded ring and S is a G-graded R-algebra, then R is completely integrally
closed in S if and only if R[ψ] is completely integrally closed in S[ψ] .
Then, we have (1)⇔(2)⇔(3) and (1’)⇔(2’)⇒(3’).
Proof. The implications “(1)⇔(2)⇒(3)” and “(1’)⇔(2’)⇒(3’)” follow immediately from
the definitions. Suppose (3) is true, let R be a G-graded ring, and let S be a G-graded R-algebra. As Int(R, S) is integrally closed in S (2.2) it follows that Int(R, S)[ψ]
is integrally closed in S[ψ] , implying
Int(R[ψ] , S[ψ] ) ⊆ Int(Int(R, S)[ψ] , S[ψ] ) = Int(R, S)[ψ] ⊆ Int(R[ψ] , S[ψ] )
(2.2) and thus the claim.
The argument above showing that (3) implies (1) cannot be used to show that (3’)
implies (1’), as CInt(R, S) is not necessarily completely integrally closed in S (2.2).
(2.11) Let ψ : G ։ H be an epimorphism of groups, suppose that there exists a section
π : H → G of ψ in the category of groups, and let R be a G-graded ring. For g ∈ G there
is a morphism of groups
$$j^{\pi}_{R,g} : R_g \to R_{[0]}[\mathrm{Ker}(\psi)], \quad x \mapsto x e_{g+\pi(\psi(g))}.$$
The family $(j^{\pi}_{R,g})_{g\in G}$ induces a morphism of groups $j^{\pi}_R : \bigoplus_{g\in G} R_g \to R_{[0]}[\mathrm{Ker}(\psi)]_{[0]}$ that
is readily checked to be a morphism $j^{\pi}_R : R_{[\psi]} \to R_{[\psi]}[\mathrm{Ker}(\psi)]_{[H]}$ of H-graded rings.
(2.12) Theorem Let ψ : G ։ H be an epimorphism of groups.
a) If Ker(ψ) is contained in a torsionfree direct summand of G then ψ-coarsening
commutes with relative (complete) integral closure.
b) If ψ-coarsening commutes with relative (complete) integral closure then Ker(ψ) is
torsionfree.
Proof. a) First, we consider the case that Ker(ψ) itself is a torsionfree direct summand
of G. Let R be a G-graded ring, let S be a G-graded R-algebra, and let x ∈ S[ψ]^hom be
(almost) integral over R[ψ]. As Ker(ψ) is a direct summand of G there exists a section π
of ψ in the category of groups. So, we have a commutative diagram
$$\begin{array}{ccc} R_{[\psi]} & \longrightarrow & S_{[\psi]} \\ j^{\pi}_{R} \big\downarrow & & \big\downarrow j^{\pi}_{S} \\ R_{[\psi]}[\mathrm{Ker}(\psi)]_{[H]} & \longrightarrow & S_{[\psi]}[\mathrm{Ker}(\psi)]_{[H]} \end{array}$$
of H-graded rings (2.11). Since Ker(ψ) is torsionfree it follows
$$j^{\pi}_{S}(x) \in \mathrm{Int}(R_{[\psi]}[\mathrm{Ker}(\psi)]_{[H]}, S_{[\psi]}[\mathrm{Ker}(\psi)]_{[H]}) = \mathrm{Int}(R_{[\psi]}, S_{[\psi]})[\mathrm{Ker}(\psi)]_{[H]}$$
(and similarly for complete integral closures) by 2.3 and 2.7. By the construction of $j^{\pi}_{S}$
this implies $x_g \in \mathrm{Int}(R_{[\psi]}, S_{[\psi]})$ (or $x_g \in \mathrm{CInt}(R_{[\psi]}, S_{[\psi]})$, resp.) for every g ∈ G, and thus
the claim (2.10).
Next, we consider the general case. Let F be a torsionfree direct summand of G
containing Ker(ψ), let χ : G ։ G/F be the canonical projection and let λ : H ։ G/F be
the induced epimorphism of groups, so that λ ◦ ψ = χ. Let R be a G-graded ring, and let
S be a G-graded R-algebra. By 2.9 and the first paragraph,
Int(R[χ] , S[χ] ) = Int(R, S)[χ] = (Int(R, S)[ψ] )[λ] ⊆
Int(R[ψ] , S[ψ] )[λ] ⊆ Int((R[ψ] )[λ] , (S[ψ] )[λ] ) = Int(R[χ] , S[χ] ),
hence (Int(R, S)[ψ] )[λ] = Int(R[ψ] , S[ψ] )[λ] and therefore Int(R, S)[ψ] = Int(R[ψ] , S[ψ] ) (or the
analogous statement for complete integral closures) as desired.
b) Suppose K := Ker(ψ) is not torsionfree. By 2.7 and 2.8 there exist a noetherian ring
R and an R-algebra S such that R is integrally closed in S (hence completely integrally
closed in S) and that R[K][0] is not integrally closed in S[K][0] (hence not completely
integrally closed in S[K][0] (2.2)). Then, R[K] is completely integrally closed in S[K]
(2.6). Extending the K-graduations of R and S trivially to G-graduations it follows that
R[K][G] is completely integrally closed in S[K][G] , while (R[K][G] )[ψ] is not integrally closed
in (S[K][G] )[ψ] . This proves the claim.
(2.13) Corollary Let ψ : G ։ H be an epimorphism of groups. If G is torsionfree then
ψ-coarsening commutes with relative (complete) integral closure.⁴
Proof. Immediately from 2.12.
⁴In case H = 0 the statement about relative integral closures is [3, V.1 Exercice 25].
(2.14) Supposing that the torsion subgroup T of G is a direct summand of G, it is
readily checked that a subgroup F ⊆ G is contained in a torsionfree direct summand of G
if and only if the composition of canonical morphisms T ֒→ G ։ G/F has a retraction.
A torsionfree subgroup F ⊆ G is not necessarily contained in a torsionfree direct
summand of G, not even if G is of finite type. Using the criterion above one checks that
a counterexample is provided by G = Z ⊕ Z/nZ for n ∈ N>1 and F = ⟨(n, 1)⟩_Z ⊆ G.
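To see concretely why this F fails the criterion, here is a short verification sketch (ours, not part of the original text). The map $G \to \mathbb{Z}/n^2\mathbb{Z}$, $(a, b) \mapsto a - nb \pmod{n^2}$, is well defined and has kernel F, so $G/F \cong \mathbb{Z}/n^2\mathbb{Z}$. Under this identification the composition $T \hookrightarrow G \twoheadrightarrow G/F$ of the torsion subgroup $T = \mathbb{Z}/n\mathbb{Z}$ with the canonical projection becomes, up to sign, the inclusion
$$\mathbb{Z}/n\mathbb{Z} \to \mathbb{Z}/n^2\mathbb{Z}, \quad b \mapsto nb.$$
A retraction r of this inclusion would satisfy $1 = r(n \cdot 1) = n \cdot r(1) = 0$ in $\mathbb{Z}/n\mathbb{Z}$, which is impossible; so by the criterion above F is not contained in a torsionfree direct summand of G.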
(2.15) Questions Let ψ : G ։ H be an epimorphism of groups. The above result gives
rise to the following questions:
a) If Ker(ψ) is torsionfree, does ψ-coarsening commute with (complete) integral closure?
b) If ψ-coarsening commutes with (complete) integral closure, is then Ker(ψ) contained
in a torsionfree direct summand of G?
Note that, by 2.14, at most one of these questions has a positive answer.
3. Integral closures of entire graded rings
In this section we consider (complete) integral closures of entire graded rings in their
graded fields of fractions. We start with the relevant definitions and basic properties.
(3.1) Let R be an entire G-graded ring. The (complete) integral closure of R in Q(R)
is denoted by Int(R) (or CInt(R), resp.) and is called the (complete) integral closure of
R. We say that R is (completely) integrally closed if it is (completely) integrally closed in
Q(R). Keep in mind that Int(R) is integrally closed, but that CInt(R) is not necessarily
completely integrally closed (2.2). If ψ : G ։ H is an epimorphism of groups and R is a
G-graded ring such that R[ψ] (and hence R) is entire, then Q(R)[ψ] is entire and
R[ψ] ⊆ Q(R)[ψ] ⊆ Q(Q(R)[ψ] ) = Q(R[ψ] ).
From 2.9 it follows Int(R)[ψ] ⊆ Int(R[ψ] ) and CInt(R)[ψ] ⊆ CInt(R[ψ] ). Hence, if R[ψ] is
(completely) integrally closed then so is R. We say that ψ-coarsening commutes with
(complete) integral closure if whenever R is an entire G-graded ring then R[ψ] is entire
and Int(R)[ψ] = Int(R[ψ] ) (or CInt(R)[ψ] = CInt(R[ψ] ), resp.). Clearly, if ψ-coarsening
commutes with (complete) integral closure then Ker(ψ) is torsionfree (1.12 b)). Let
F ⊆ G be a subgroup. An F -graded ring S is entire and (completely) integrally closed if
and only if S (G) is so.
(3.2) Let I be a nonempty right filtering preordered set, and let ((R_i)_{i∈I}, (ϕ_{ij})_{i≤j}) be an
inductive system in GrAnn^G over I such that R_i is entire for every i ∈ I and that ϕ_{ij} is
a monomorphism for all i, j ∈ I with i ≤ j. By 2.5 and 1.7 we have inductive systems
(Int(R_i))_{i∈I} and (CInt(R_i))_{i∈I} in GrAnn^G over I, and we can consider the sub-$\varinjlim_{i\in I} R_i$-algebras
$$\varinjlim_{i\in I} \mathrm{Int}(R_i) \subseteq \varinjlim_{i\in I} \mathrm{CInt}(R_i) \subseteq Q(\varinjlim_{i\in I} R_i)$$
and compare them with the sub-$\varinjlim_{i\in I} R_i$-algebras
$$\mathrm{Int}(\varinjlim_{i\in I} R_i) \subseteq \mathrm{CInt}(\varinjlim_{i\in I} R_i) \subseteq Q(\varinjlim_{i\in I} R_i).$$
It follows immediately from 2.5 and 1.7 that $\varinjlim_{i\in I} \mathrm{Int}(R_i) = \mathrm{Int}(\varinjlim_{i\in I} R_i)$. Hence, if R_i
is integrally closed for every i ∈ I then $\varinjlim_{i\in I} R_i$ is integrally closed, and $\varinjlim_{i\in I} \mathrm{CInt}(R_i) \subseteq \mathrm{CInt}(\varinjlim_{i\in I} R_i)$.
Suppose now in addition that $(\varinjlim_{i\in I} R_i) \cap Q(R_i) = R_i$ for every i ∈ I. Then, analogously to [1, 2.1] one sees that $\varinjlim_{i\in I} \mathrm{CInt}(R_i) = \mathrm{CInt}(\varinjlim_{i\in I} R_i)$, hence if R_i is completely
integrally closed for every i ∈ I then so is $\varinjlim_{i\in I} R_i$. This additional hypothesis is fulfilled
in case R is a G-graded ring, F is a group, I = F_F (1.3), and R_U equals R[U]^{(G⊕F)} or
R[U][G] for U ∈ F_F.
(3.3) Let R, S and T be G-graded rings such that R ⊆ S ⊆ T as graded subrings.
Clearly, CInt(R, S) ⊆ CInt(R, T ) ∩ S. Gilmer and Heinzer found an (ungraded) example
showing that this is not necessarily an equality ([7, Example 2]), not even if R, S and T
are entire and have the same field of fractions. In [7, Proposition 2] they also presented
the following criterion for this inclusion to be an equality, whose graded variant is proven
analogously: If for every G-graded sub-S-module of finite type M ⊆ T with R ⊆ M there
exists a G-graded S-module N containing M such that R is a direct summand of N, then
CInt(R, S) = CInt(R, T ) ∩ S.
In [7, Remark 2] Gilmer and Heinzer claim (again in the ungraded case) that this
criterion applies if S is principal. As this would be helpful to us later (3.6, 3.10) we take
the opportunity to point out that it is wrong. Namely, suppose that S is not simple, that
T = Q(S), and that the hypothesis of the criterion is fulfilled. Let x ∈ S \ 0. There
exists an S-module N containing ⟨x^{-1}⟩_S ⊆ T such that S is a direct summand of N. The
tensor product with S/⟨x⟩_S of the canonical injection S ֒→ N has a retraction, but it also
factors over the zero morphism S/⟨x⟩_S → ⟨x^{-1}⟩_S/S. This implies x ∈ S*, yielding the
contradiction that S is simple.
Now we consider graded group algebras. We will show that both variants behave well
with integral closures and that the finely graded variant behaves also well with complete
integral closure.
(3.4) Theorem a) Formation of finely graded group algebras over entire G-graded rings
commutes with (complete) integral closure.
b) If F is a group, then an entire G-graded ring R is (completely) integrally closed if
and only if R[F ] is so.
Proof. Keeping in mind that Q(R)[F ] = Q(R[F ]) (1.11) this follows immediately from
2.6.
(3.5) Lemma If R is a simple G-graded ring then R[Z][G] is entire and completely
integrally closed.
Proof. First we note that S := R[Z][G] is entire (1.11). The argument in [2, IV.1.6 Proposition 10] shows that S allows a graded version of euclidean division, i.e., for f, g ∈ S^hom
with f ≠ 0 there exist unique u, v ∈ S^hom with g = uf + v and deg_Z(v) < deg_Z(f), where
deg_Z denotes the usual Z-degree of polynomials over R. Using this we see analogously to
[2, IV.1.7 Proposition 11] that every G-graded ideal of S has a homogeneous generating
set of cardinality 1. Next, developing a graded version of the theory of divisibility in entire
rings along the line of [2, VI.1], it follows analogously to [2, VI.1.11 Proposition 9 (DIV);
VII.1.2 Proposition 1] that for every x ∈ Q(S)^hom there exist coprime a, b ∈ S^hom with
x = a/b. So, the argument in [3, V.1.3 Proposition 10] shows that S is integrally closed.
As it is noetherian the claim is proven (2.2).
(3.6) Theorem a) Formation of coarsely graded algebras of torsionfree groups over
entire G-graded rings commutes with integral closure.
b) If F is a torsionfree group, then an entire G-graded ring R is integrally closed if and
only if R[F][G] is so.⁵
⁵In case G = 0 the statement that integral closedness of R implies integral closedness of R[F][G] is [3, V.1 Exercice 24].
Proof. It suffices to prove the first claim. We can without loss of generality suppose
that F is of finite type, hence free of finite rank (1.3, 1.5, 3.2). By induction on the
rank of F we can furthermore suppose F = Z. We have Int(R[Z][G] ) ∩ Q(R)[Z][G] =
Int(R[Z][G] , Q(R)[Z][G] ). Since Q(R)[Z][G] is integrally closed (3.5) we get
Int(R[Z][G] ) ⊆ Int(Q(R)[Z][G] , Q(R[Z][G] )) = Q(R)[Z][G] .
It follows
Int(R[Z][G] ) = Int(R[Z][G] ) ∩ Q(R)[Z][G] = Int(R[Z][G] , Q(R)[Z][G] ) = Int(R)[Z][G]
(2.7) and thus the claim.
(3.7) Let F be a torsionfree group, let R be an entire G-graded ring, and suppose that
CInt(R[Z][G] ) ∩ Q(R)[Z][G] = CInt(R[Z][G] , Q(R)[Z][G] ). Then, the same argument as in
3.6 (keeping in mind 3.2) yields CInt(R)[F ][G] = CInt(R[F ][G] ), hence R is completely
integrally closed if and only if R[F ][G] is so. However, although R[Z][G] is principal by the
proof of 3.5, we have seen in 3.3 that it is unclear whether CInt(R[Z][G] ) ∩ Q(R)[Z][G] and
CInt(R[Z][G] , Q(R)[Z][G] ) are equal in general.
(3.8) Corollary a) Formation of coarsely graded algebras of torsionfree groups over
noetherian entire G-graded rings commutes with complete integral closure.
b) If F is a torsionfree group, then a noetherian entire G-graded ring R is completely
integrally closed if and only if R[F ][G] is so.
Proof. We can without loss of generality suppose that F is of finite type (1.3, 1.5, 3.2).
Then, R[F ][G] is noetherian (1.14), and the claim follows from 3.6 and 2.2.
In the rest of this section we study the behaviour of (complete) integral closures under
arbitrary coarsening functors, also using the results from Section 2.
(3.9) Proposition Let ψ : G ։ H be an epimorphism of groups.
a) ψ-coarsening commutes with integral closure if and only if a G-graded ring R is
entire and integrally closed if and only if R[ψ] is so.
b) If ψ-coarsening commutes with complete integral closure, then a G-graded ring R is
entire and completely integrally closed if and only if R[ψ] is so.
Proof. If ψ-coarsening commutes with (complete) integral closure then it is clear that
a G-graded ring R is entire and (completely) integrally closed if and only if R[ψ] is so.
Conversely, suppose that ψ-coarsening preserves the property of being entire and integrally
closed. Let R be an entire G-graded ring. Since simple G-graded rings are entire and
integrally closed, R[ψ] is entire (1.12 b)). As Int(R) is integrally closed (2.2) the same is
true for Int(R)[ψ] , implying
Int(R[ψ] ) = Int(R[ψ] , Q(R[ψ] )) ⊆ Int(Int(R)[ψ] , Q(R[ψ] )) =
Int(Int(R)[ψ] ) = Int(R)[ψ] ⊆ Int(R[ψ] )
(3.1) and thus the claim.
The argument used in a) cannot be used to prove the converse of b), as CInt(R) is not
necessarily completely integrally closed (3.1).
(3.10) Proposition Let ψ : G ։ H be an epimorphism of groups. Suppose that ψ-coarsening commutes with relative integral closure and maps simple G-graded rings to
entire and integrally closed H-graded rings. Then, ψ-coarsening commutes with integral
closure.
Proof. If R is an entire G-graded ring, then R[ψ] is entire (2.12 b), 1.12 b)) and Q(R)[ψ]
is integrally closed, and as Q(Q(R)[ψ] ) = Q(R[ψ] ) (3.1) it follows
Int(R[ψ] ) = Int(R[ψ] , Q(R[ψ] )) ⊆ Int(Q(R)[ψ] , Q(R[ψ] )) = Int(Q(R)[ψ] ) = Q(R)[ψ] ,
hence Int(R[ψ] ) = Int(R[ψ] , Q(R)[ψ] ) = Int(R, Q(R))[ψ] = Int(R)[ψ] .
(3.11) We have seen in 3.3 that it is (in the notations of the proof of 3.10) not clear that
CInt(R[ψ] ) ⊆ Q(R)[ψ] implies CInt(R[ψ] ) = CInt(R[ψ] , Q(R)[ψ] ). Therefore, the argument
from that proof cannot be used to get an analogous result for complete integral closures.
(3.12) Lemma Let F be a free direct summand of G, let H be a complement of F in G,
let ψ : G ։ H be the canonical projection, let R be a simple G-graded ring, and suppose
that ψ(degsupp(R)) ⊆ degsupp(R). Then, R[ψ] ≅ R^(H)[degsupp(R) ∩ F][H] in GrAnn^H.
Proof. We set D := degsupp(R). As F is free the same is true for D ∩ F. Let E be a basis
of D ∩ F. If e ∈ E then R_e ≠ 0, so that we can choose y_e ∈ R_e \ 0 ⊆ R*. For f ∈ D ∩ F
there exists a unique family (r_e)_{e∈E} of finite support in Z with $f = \sum_{e\in E} r_e e$, and we set
$y_f := \prod_{e\in E} y_e^{r_e} \in R_f \setminus 0$; in case f ∈ E we recover the element y_f defined above.
As (R(H) )[0] is a subring of R[0] there exists a unique morphism of (R(H) )[0] -algebras
p : R(H) [D ∩ F ][0] → R[0] with p(ef ) = yf for f ∈ D ∩ F . If h ∈ H, then for f ∈ D ∩ F and
x ∈ Rh we have p(xef ) = xyf ∈ Rh+f ⊆ (R[ψ] )h , so that p((R(H) [D ∩ F ][H] )h ) ⊆ (R[ψ] )h ,
and therefore we have a morphism p : R(H) [D ∩ F ][H] → R[ψ] in GrAnnH .
Let χ : G ։ F denote the canonical projection. For g ∈ G with χ(g) ∈ D there is a
morphism of groups
$$q_g : R_g \to R^{(H)}[D \cap F], \quad x \mapsto \tfrac{x}{y_{\chi(g)}}\, e_{\chi(g)},$$
and for g ∈ G with χ(g) ∉ D we denote by q_g the zero morphism of groups R_g →
R^(H)[D ∩ F]. For h ∈ H the morphisms q_g with g ∈ ψ^{-1}(h) induce a morphism of groups
q_h : (R[ψ])_h → R^(H)[D ∩ F]. So, we get a morphism of groups
$$q := \bigoplus_{h\in H} q_h : R_{[\psi]} \to R^{(H)}[D \cap F].$$
Let g ∈ G and x ∈ R_g. If χ(g) ∉ D then g ∉ D, hence x = 0, and therefore
p(q(x)) = x. Otherwise, $p(q(x)) = p(\tfrac{x}{y_{\chi(g)}} e_{\chi(g)}) = \tfrac{x}{y_{\chi(g)}}\, p(e_{\chi(g)}) = \tfrac{x}{y_{\chi(g)}}\, y_{\chi(g)} = x$. This
shows that q is a right inverse of p. If x ∈ R^(H) then q(p(x)) = x, and if f ∈ D ∩ F then
$q(p(e_f)) = q(y_f) = \tfrac{y_f}{y_f} e_f = e_f$, hence q is a left inverse of p. Therefore, q is an inverse of
p, and thus p is an isomorphism.
(3.13) Proposition Let ψ : G ։ H be an epimorphism of groups, let R be a simple
G-graded ring, and suppose that one of the following conditions is fulfilled:
i) G is torsionfree;
ii) Ker(ψ) is contained in a torsionfree direct summand of G and R has full support.
Then, R[ψ] is entire and completely integrally closed.
Proof. First, we note that R[ψ] is entire (1.12 b)). In case i) it suffices to show that R[0] is
integrally closed, so we can replace H with 0 and hence suppose Ker(ψ) = G. In case ii),
by the same argument as in the proof of 2.12 (and keeping in mind 3.1) we can suppose
without loss of generality that K := Ker(ψ) itself is a torsionfree direct summand of G
and hence consider H as a complement of K in G. In both cases, as $K = \varinjlim_{L\in F_K} L$ (1.3)
we have $G = K \oplus H = \varinjlim_{L\in F_K}(L \oplus H)$, hence $R = \varinjlim_{L\in F_K}((R^{(L\oplus H)})^{(G)})$. Setting ψ_L :=
ψ↾_{L⊕H} : L ⊕ H ։ H we get $R_{[\psi]} = \varinjlim_{L\in F_K}((R^{(L\oplus H)})_{[\psi_L]})$ (1.2). Hence, if (R^{(L⊕H)})_{[ψ_L]}
is integrally closed for every L ∈ F_K then R[ψ] is integrally closed (3.2). Therefore, as
R^{(L⊕H)} is simple for every L ∈ F_K (1.6) we can suppose that K is of finite type, hence
free. As R is simple it is clear that D := degsupp(R) ⊆ G is a subgroup, hence D ∩ K ⊆ K
is a subgroup, and thus D ∩ K is free. In both cases, our hypotheses ensure ψ(D) ⊆ D,
so that 3.12 implies $R_{[\psi]} \cong R^{(H)}[D \cap K]_{[H]}$. As R is simple it is completely integrally
closed, hence R(H) is completely integrally closed (3.1), thus R(H) [D ∩ K][H] is completely
integrally closed (3.6), and so the claim is proven.
(3.14) Theorem Let ψ : G ։ H be an epimorphism of groups, let R be an entire
G-graded ring, and suppose that one of the following conditions is fulfilled:
i) G is torsionfree;
ii) Ker(ψ) is contained in a torsionfree direct summand of G and ⟨degsupp(R)⟩_Z = G.
Then, Int(R)[ψ] = Int(R[ψ]), and R is integrally closed if and only if R[ψ] is so.⁶
⁶In case i) and H = 0 this is [3, V.1 Exercice 25].
Proof. As degsupp(Q(R)) = ⟨degsupp(R)⟩_Z this follows immediately from 3.10, 3.13 and
2.12.
(3.15) Questions Let R be an entire G-graded ring. The above, especially 3.7 and 3.11,
gives rise to the following questions:
a) Let ψ : G ։ H be an epimorphism of groups such that Ker(ψ) is torsionfree. Do we
have CInt(R[ψ] ) ∩ Q(R)[ψ] = CInt(R[ψ] , Q(R)[ψ] )?
b) Do we have CInt(R[Z][G] ) ∩ Q(R)[Z][G] = CInt(R[Z][G] , Q(R)[Z][G] )?
If both these questions could be answered positively, then the same arguments as above
would yield statements for complete integral closures analogous to 3.6, 3.10, and 3.14.
Acknowledgement: I thank Benjamin Bechtold and the reviewer for their comments
and suggestions. The remarks in 2.14 were suggested by Micha Kapovich and Will Sawin
on http://mathoverflow.net/questions/108354. The counterexample in 3.3 is due to
an anonymous user on http://mathoverflow.net/questions/110998.
References
[1] D. F. Anderson, Graded Krull domains. Comm. Algebra 7 (1979), 79–106.
[2] N. Bourbaki, Éléments de mathématique. Algèbre. Chapitres 1 à 3. Masson, 1970; Chapitres 4 à 7.
Masson, 1981.
[3] N. Bourbaki, Éléments de mathématique. Algèbre commutative. Chapitres 5 à 7. Hermann, 1975.
[4] S. Goto, K. Yamagishi, Finite generation of noetherian graded rings. Proc. Amer. Math. Soc. 89
(1983), 41–44.
[5] R. Gilmer, Multiplicative ideal theory. Pure Appl. Math. 12, Marcel Dekker, 1972.
[6] R. Gilmer, Commutative semigroup rings. Chicago Lectures in Math., Univ. Chicago Press, 1984.
[7] R. Gilmer, W. J. Heinzer, On the complete integral closure of an integral domain. J. Aust. Math.
Soc. 6 (1966), 351–361.
[8] A. Grothendieck, J. A. Dieudonné, Éléments de géométrie algébrique. I: Le langage des schémas
(Seconde édition). Grundlehren Math. Wiss. 166, Springer, 1971.
[9] C. Nǎstǎsescu, F. Van Oystaeyen, Graded rings with finiteness conditions II. Comm. Algebra 13
(1985), 605–618.
[10] F. Rohrer, Coarsenings, injectives and Hom functors. Rev. Roumaine Math. Pures Appl. 57 (2012),
275–287.
[11] I. Swanson, C. Huneke, Integral closure of ideals, rings, and modules. London Math. Soc. Lecture
Note Ser. 336, Cambridge Univ. Press, 2006.
[12] B. L. van der Waerden, Algebra. Zweiter Teil. (Fünfte Auflage). Heidelb. Taschenb. 23, Springer,
1967.
[13] J. Van Geel, F. Van Oystaeyen, About graded fields. Indag. Math. (N.S.) 84 (1981), 273–286.
Universität Tübingen, Fachbereich Mathematik, Auf der Morgenstelle 10, 72076 Tübingen, Germany
E-mail address: [email protected]
Hygienic Source-Code Generation Using Functors
arXiv:1801.01579v1 [cs.PL] 4 Jan 2018
Karl Crary
Carnegie Mellon University
Abstract
Existing source-code-generating tools such as Lex and Yacc
suffer from practical inconveniences because they use disembodied code to implement actions. To prevent this problem,
such tools could generate closed functors that are then instantiated by the programmer with appropriate action code.
This results in all code being type checked in its appropriate
context, and it assists the type checker in localizing errors
correctly. We have implemented a lexer generator and parser
generator based on this technique for Standard ML, OCaml,
and Haskell.
1
Introduction
Compiler implementers have a love-hate relationship with
source-code-generating tools such as Lex [9] (which generates lexers from regular expressions) and Yacc [7] (which
generates shift-reduce parsers from context-free grammars). These tools automate some of the most tedious parts of
These tools automate the some of the most tedious parts of
implementing a parser, but they can be awkward to use.
One of the main awkward aspects of such tools is the
disembodied code problem. To build a lexer or a parser, these
tools cobble together snippets of code (each implementing
an action of the lexer/parser) supplied by the programmer
in a lexer/parser specification file. Unfortunately, the code
snippets, as they appear in the specification file, are divorced
from their ultimate context. The tools manipulate them as
simple strings.1
This makes programming awkward in several ways.
Functions and other values are passed into the snippets using identifiers that are bound nowhere in the programmer’s
code, nor even introduced by a pseudo-binding such as open.
Rather, the snippet is copied into a context in which such
identifiers are in scope. This can make code difficult to read.
More importantly, disembodied code makes debugging
challenging, because the code seen by the compiler bears
little resemblance to the code written by the programmer.
For example, consider the following line from an ML-Lex [1]
specification:
{whitespace}+ => ( lex () );
This line tells the lexer to skip any whitespace it encounters
by matching it and then calling itself recursively to continue.
¹Such strings may even include syntax errors, which are duly copied into the output code. Typically the tool does not even ensure that delimiters are matched.
(Note that lex is an example of an identifier introduced implicitly when the snippet is copied.) ML-ULex2 [13] converts
the line into the Standard ML code:
fun yyAction0 (strm, lastMatch : yymatch) =
(yystrm := strm; ( lex () ))
This output code already is not very easy to read. However, the problem is greatly exacerbated by the familiar phenomenon in typed functional languages that type checkers
are often bad at identifying the true source of a type error. Suppose we introduce an error into the specification by
omitting the argument to lex:
{whitespace}+ => ( lex );
We now obtain3 several pages of error messages looking
like:
foo.lex.sml:1526.25-1526.70 Error: operator and
operand don’t agree [tycon mismatch]
operator domain: yyInput.stream * action * yymatch
operand:
yyInput.stream *
(yyInput.stream * yymatch -> unit
-> (?.svalue,int) ?.token)
* yymatch
in expression:
yyMATCH (strm,yyAction0,yyNO_MATCH)
or like:
foo.lex.sml:1686.20-1692.47 Error: types of if
branches do not agree [tycon mismatch]
then branch: (?.svalue,int) ?.token
else branch: unit -> (?.svalue,int) ?.token
in expression:
if inp = #"\n"
then yyQ38 (strm’,lastMatch)
else
if inp < #"\n"
then if inp = #"\t" then yyQ37 (<exp>,<exp>)
else yyQ36 (<exp>,<exp>)
else yyQ37 (strm’,lastMatch)
²The lexer generator (compatible with ML-Lex) that Standard ML of New Jersey uses.
³Using Standard ML of New Jersey v100.68.
and none of the errors is anywhere near the copied snippet
containing the error.
The problem is related to the issue of variable hygiene in
macro expansion [8]. In both cases, the programmer writes
code (a lexer/parser action, or macro argument) divorced
from its ultimate context and then—after processing—that
code is dropped verbatim into its ultimate context. In the
setting of macros, this sets the scene for variable capture to
occur, which is nearly always erroneous. In lexer generators,
variable capture often is actually desired (consider the lex
identifier), but, as observed above, it is nevertheless difficult
to reason about and to debug.
Accordingly, we are interested in source-code generation
in which all code is type-checked in the same context in
which it is written. We call this hygienic source-code generation by analogy to hygienic macro expansion, which ensures
the same thing for macros.
An obvious way to accomplish hygienic source-code generation is to have the tool type-check every snippet before
it assembles them into output code. But, this approach is
unattractive in practice, because it necessitates including all
the apparatus of parsing, elaboration, and type-checking as
part of a tool that does not otherwise need all that apparatus.
We propose a simpler and cleaner alternative: Rather
than type-check disembodied code in context, we dispense
with disembodied code altogether. To accomplish this, the
tool—rather than assembling snippets of source code into a
program—generates a functor that abstracts over the code
that used to reside in snippets. The programmer then applies the functor in order to instantiate the lexer or parser
with specific action implementations.
A third alternative, arguably more principled than ours,
is to implement the lexer/parser generator in a type-safe
metaprogramming language such as MetaML [12] or its
cousins. With such an approach, as in ours, the action implementations would be type-checked in context, without
any need to duplicate compiler apparatus. Furthermore,
it would remove the need to write the lexer/parser specification and action implementations in two separate places,
as our proposal requires. On the other hand, this alternative requires one to use a special programming language.
We want an approach compatible with pre-existing, conventional functional programming languages, specifically ML
and Haskell.
Finally, in some problem domains one may consider
avoiding generated source code entirely. For example, in
parsing, some programmers find parser combinators [5, 6]
to be a suitable or even preferable alternative to Yacc-like
tools. Nevertheless, many programmers prefer traditional
LR parser generators for various reasons including error reporting and recovery, and ambiguity diagnostics. In this
work we take it as given that source-code generation is preferred, for whatever reason.
Employing this design, we have implemented a lexer
generator, called CM-Lex, and a parser generator, called
CM-Yacc. Each tool supports Standard ML, OCaml, and
Haskell.4 Both tools are available on-line at:
www.cs.cmu.edu/~crary/cmtool/
In the remainder of the paper we describe how the tools
work, taking the lexer generator as our primary example.
2
Lexing Functor Generation
The following is a very simple CM-Lex specification:
sml
name LexerFun
alphabet 128
function f : t =
(seq ’a ’a) => aa
(seq ’a (* ’b) ’c) => abc
The specification’s first line indicates that CM-Lex
should generate Standard ML code. The next two lines
indicate that CM-Lex should produce a functor named
LexerFun, and that it should generate a 7-bit parser (any
symbols outside the range 0 . . . 127 will be rejected automatically).
The remainder gives the specification of a lexing function
named f. The function will return a value of type t, and it
is defined by two regular expressions. Regular expressions
are given as S-expressions using the Scheme Shell’s SRE
notation5 [11].
Thus, the first arm activates an action named aa when
the regular expression aa is recognized. The second activates
an action named abc when the regular expression ab∗ c is
recognized.
Observe that the specification contains no disembodied
code. The actions are simply given names, which are instantiated when the resulting functor is applied.
From this specification, CM-Lex generates the following
Standard ML code:6
functor LexerFun
(structure Arg :
sig
type t
type info = { match : char list,
follow : char stream }
val aa : info -> t
val abc : info -> t
end)
At first blush, our proposal might seem to replace one
sort of disembodied code with another. This is true in a
sense, but there is a key difference. The code in which the
functor is applied is ordinary code, submitted to an ordinary
compiler. That compiler then type checks the action code
(that formerly resided in snippets) in the context in which
it now appears, which is the functor’s argument.
As a practical matter, each action becomes a distinct
field of the functor argument, and consequently each action
is type-checked independently, as desired. The type of the functor is already known, so an error in one action will not be misinterpreted as an error in all the other actions.
:>
sig
val f : char stream -> Arg.t
end
= . . . implementation . . .
⁴The tool is implemented in Standard ML.
⁵Although SREs are less compact than some other notations, we find their syntax is much easier to remember.
⁶We simplify here and in the following examples for the sake of exposition.
fun aa ({follow, ...}:info) =
( print "matched aa\n" )
signature STREAM =
sig
type ’a stream
datatype ’a front =
Nil
| Cons of ’a * ’a stream
The type checker is able to identify the source of the error
precisely, finding that aa has the type unit instead of t:
example.sml:8.4-29.12 Error: value type in
structure doesn’t match signature spec
name: aa
spec:
?.Arg.info -> ?.Arg.t
actual: ?.Arg.info -> unit
val front : ’a stream -> ’a front
val lazy : (unit -> ’a front) -> ’a stream
end
Figure 1: Lazy Streams
2.1
An expanded specification
We may add a second function to the lexer by simply adding
another function specification:
When the programmer calls the functor, he provides the
type t and the actions aa and abc, both of which produce
a t from a record of matching information. The functor
then returns a lexing function f, which produces a t from a
stream of characters.
Although the programmer-supplied actions can have side
effects, the lexer itself is purely functional. The input is processed using lazy streams (the signature for which appears
in Figure 1). Each action is given the portion of the stream
that follows the matched string as part of the matching information.
function g : u =
(or (seq ’b ’c) (seq ’b ’d)) => bcbd
epsilon => error
In the parlance of existing lexer generators, multiple
functions are typically referred to as multiple start conditions or start states, but we find it easier to think about
them as distinct functions that might or might not share
some actions. In this case, the function g is specified to return a value of type u. Since u might not be the same type
as t, g cannot share any actions with f.
The first arm activates an action named bcbd when the
regular expression bc + bd is recognized. The second arm
activates an action named error when the empty string is
recognized. Like other lexer generators, CM-Lex prefers the
longest possible match, so an epsilon arm will only be used
when the input string fails to match any other arm. Thus,
the latter arm serves as an error handler.7
From the expanded specification, CM-Lex generates the
functor:
As an illustration of how the functor might be applied,
the following program processes an input stream, printing a
message each time it recognizes a string:
structure Lexer =
LexerFun
(structure Arg =
struct
type t = char stream
type info = { match : char list,
follow : char stream }
functor LexerFun
(structure Arg :
sig
type t
type u
fun aa ({follow, ...}:info) =
( print "matched aa\n"; follow )
fun abc ({follow, ...}:info) =
( print "matched ab*c\n"; follow )
end)
type info = { match : char list,
follow : char stream }
val
val
val
val
end)
fun loop (strm : char stream) =
(case front strm of
Nil => ()
| Cons _ => loop (Lexer.f strm))
The function Lexer.f matches its argument against the
two regular expressions and calls the indicated action, each
of which prints a message and returns the remainder of the
stream.
Observe that the implementations of the actions (the
fields aa and abc of the argument structure) are ordinary
ML code. As one consequence, the action code faces the
standard type checker. Moreover, each action’s required
type is unambiguously given by LexerFun’s signature and
the type argument t, so error identification is much more
accurate.
For example, suppose we replace the aa action with an
erroneous implementation that fails to return the remainder
of the stream:
aa : info -> t
abc : info -> t
bcbd : info -> u
error : info -> u
:>
sig
val f : char stream -> Arg.t
val g : char stream -> Arg.u
end
= . . . implementation . . .
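As an illustration (our own instantiation, not output produced by CM-Lex; the choices of t, u, and the printed messages are ours), the expanded functor might be applied as follows:

structure Lexer =
  LexerFun
  (structure Arg =
     struct
       type t = char stream
       type u = unit

       type info = { match : char list,
                     follow : char stream }

       (* actions for f: report the match and return the rest of the stream *)
       fun aa ({follow, ...}:info) =
           ( print "matched aa\n"; follow )
       fun abc ({follow, ...}:info) =
           ( print "matched ab*c\n"; follow )

       (* actions for g: report the match and return unit *)
       fun bcbd (_:info) = print "matched bc or bd\n"
       fun error (_:info) = print "lexical error\n"
     end)

Each action is checked against the type demanded by the functor's argument signature, so a mistake in, say, bcbd is reported against u rather than against the lexer as a whole.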
3
Recursion in actions
One important functionality for a lexer generator is the ability for actions to invoke the lexer recursively. For example,
⁷In contrast, the specification for f was inexhaustive, so CM-Lex added a default error handler that raises an exception.
it is common for a lexer, upon encountering whitespace, to
skip the whitespace and call itself recursively (as in the example in Section 1).8
This can be problematic because it requires recursion
between the lexer functor’s argument and its result.
For example, consider a lexer that turns a stream of characters into a stream of words. The CM-Lex specification is:
structure rec Arg =
struct
type t = string stream
type info = { match : char list,
follow : char stream }
fun whitespace ({follow, ...}:info) =
Words.f follow
sml
name WordsFun
alphabet 128
fun word ({match, follow, ...}:info) =
lazy
(fn () => Cons (String.implode match,
Words.f follow))
end)
set whitechar =
(or 32 9 10) /* space, tab, lf */
set letter = (range ’a ’z)
function f : t =
(+ whitechar) => whitespace
(+ letter) => word
and Words = WordsFun (structure Arg = Arg)
Unfortunately, recursive modules bring about a variety
of thorny technical issues [2, 10, 4]. Although some dialects
of ML support recursive modules, Standard ML does not.
As a workaround, CM-Lex provides recursive access to
the lexer via a self field passed to each action. The info
type is extended with a field self : self, where the type
self is a record containing all of the lexing functions being
defined. In this case:
CM-Lex generates the functor:
functor WordsFun
(structure Arg :
sig
type t
type info = { match : char list,
follow : char stream }
type self = { f : char stream -> t }
Using the self-augmented functor, we can implement
the lexer as follows:
val whitespace : info -> t
val word : info -> t
end)
structure Words =
WordsFun
(structure Arg =
struct
type t = string stream
:>
sig
val f : char stream -> Arg.t
end
= . . . implementation . . .
type self = { f : char stream -> t }
type info = { match : char list,
follow : char stream,
self : self }
A natural way9 to implement the desired lexer would be
with a recursive module definition:
⁸One way to accomplish this would be to structure the lexer with a driver loop (such as the function loop in Section 2), and for the action to signal the driver loop to discard the action's result and recurse. However, the earlier example notwithstanding, this is usually not the preferred way to structure a lexer.
⁹This simple implementation does not result in the best behavior from the lazy streams, because forcing the output stream causes the lexer to examine more of the input stream than is necessary to determine the output stream's first element. We illustrate a better way to manage laziness in Appendix A. In any case, laziness is orthogonal to the issue being discussed here.
fun whitespace
({match, follow, self, ...}:info) =
#f self follow
fun word
({match, follow, self, ...}:info) =
lazy
(fn () => Cons (String.implode match,
#f self follow))
end)
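To see the word lexer in action, here is a small driver of our own (analogous to the loop in Section 2), assuming the stream operations of Figure 1 are in scope:

fun printWords (strm : string stream) =
    (case front strm of
        Nil => ()
      | Cons (w, rest) => (print (w ^ "\n"); printWords rest))

Applying printWords to the result of Words.f on an input character stream prints one recognized word per line.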
4
Parsing Functor Generation
The parser generator, CM-Yacc, works in a similar fashion
to CM-Lex. A CM-Yacc specification for a simple arithmetic
parser is:
sml
name ArithParseFun
terminal
terminal
terminal
terminal
terminal
datatype terminal =
NUMBER of t
| PLUS
| TIMES
| LPAREN
| RPAREN
NUMBER of t
PLUS
TIMES
LPAREN
RPAREN
nonterminal Term : t =
1:NUMBER => number_term
1:Term PLUS 2:Term => plus_term
1:Term TIMES 2:Term => times_term
LPAREN 1:Term RPAREN => paren_term
structure Parser =
ArithParseFun
(structure Arg =
struct
type t = int
start Term
fun
fun
fun
fun
The specification says that the functor should be named
ArithParseFun, and it declares five terminals, one of which
carries a value of type t.
The specification then declares one nonterminal called
Term, indicates that a term carries a value of type t, and
gives four productions that produce terms.10 Numbers are
attached to the symbols on the left-hand-side of a production that carry values that should be passed to the production’s action. The number itself indicates the order in which
values should be passed. Thus plus term is passed a pair
containing the first and third symbols’ values.
The final line specifies that the start symbol is Term.
From this specification, CM-Yacc generates the following
Standard ML code:
datatype terminal = datatype terminal
fun error _ = Fail "syntax error"
end)
Then our parser is Parser.parse : terminal stream -> int.
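For instance (an illustrative snippet of ours, assuming the stream constructors of Figure 1 are in scope and that the grammar's conflicts have been resolved as discussed in the footnote), one can build a small terminal stream and parse it:

(* convert a list of terminals into a lazy stream *)
fun fromList [] = lazy (fn () => Nil)
  | fromList (x :: xs) = lazy (fn () => Cons (x, fromList xs))

(* parses "(1 + 2)" and evaluates to 3 *)
val three =
  Parser.parse
    (fromList [LPAREN, NUMBER 1, PLUS, NUMBER 2, RPAREN])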
Note that we use datatype copying (a little-known feature of Standard ML) to copy the terminal datatype into
the Arg structure. If the datatype were defined within the
Arg structure, there would be no way to use it outside.
OCaml does not support datatype copying, but one can
get the same effect by including a module that contains the
datatype.
functor ArithParseFun
(structure Arg :
sig
type t
val
val
val
val
number_term x = x
plus_term (x, y) = x + y
times_term (x, y) = x * y
paren_term x = x
5
Functors in Haskell
In broad strokes the Haskell versions of CM-Lex and CMYacc are similar to the ML versions. In one regard, they are
simpler: In Haskell all definitions are mutually recursive, so
no special functionality is required to allow lexer actions to
reinvoke the lexer.
However, Haskell does not support functors, the central
mechanism we exploit here. Instead, we built an ersatz functor from polymorphic functions.
Recall the CM-Lex specification from Section 2.1,
reprised in Figure 2. From that specification, CM-Lex generates a module (in the Haskell sense) named LexerFun with
the following exports:
number_term : t -> t
plus_term : t * t -> t
times_term : t * t -> t
paren_term : t -> t
datatype terminal =
NUMBER of t
| PLUS
| TIMES
| LPAREN
| RPAREN
val error : terminal stream -> exn
end)
data LexInfo =
LexInfo
{ match :: [Char],
follow :: [Char] }
:>
sig
val parse : Arg.terminal stream -> Arg.t
end
= . . . implementation . . .
data Arg t u =
Arg { t :: Proxy t,
u :: Proxy u,
As before, the programmer supplies the type t and the
actions. (The actions need not be passed a self argument,
because parser actions do not commonly need to reinvoke
the parser.) He also supplies the terminal datatype and an
error action, the latter of which takes the terminal stream at which a syntax error is detected and returns an exception for the parser to raise. For example:
aa :: LexInfo -> t,
abc :: LexInfo -> t,
bcbd :: LexInfo -> u,
error :: LexInfo -> u }
¹⁰This grammar is ambiguous, resulting in shift-reduce conflicts. The ambiguity can be resolved in either of the standard manners: by specifying operator precedences, or by refactoring the grammar.
f :: Arg t u -> [Char] -> t
g :: Arg t u -> [Char] -> u
Compare this with the ML version, also reprised in Figure 2. The type Arg represents the argument to the functor.
It contains implementations for the four actions aa, abc,
bcbc, and error.
It also contains implementations for the two types t and
u. Haskell does not support type fields like an ML structure,
but we can get a similar effect by including proxy fields with
the types Proxy t and Proxy u. The programmer then fills
them in with the term Proxy :: Proxy T for some T.11
Proxy [3] is a type constructor in the Haskell standard
library that is designed for this sort of use. For any type
constructor C, the type Proxy C has a single data constructor
Proxy. The Proxy type constructor is poly-kinded, so C need
not have kind *.
An alternative would be to leave out the type fields altogether and allow type inference to fill them automatically.
We believe it would be a misstep to do so. The type implementations are critical documentation that should be given
explicitly in the program. Moreover, leaving out the type
implementations would bring back the possibility that the
type checker would misattribute the source of a type error.
The functor’s output is factored into two separate polymorphic functions that each take the functor argument as
an argument. Since the type arguments t and u are propagated to the result types of the lexing functions, they must
also appear as explicit parameters of the type Arg.
sml
name LexerFun
alphabet 128
function f : t =
(seq ’a ’a) => aa
(seq ’a (* ’b) ’c) => abc
function g : u =
(or (seq ’b ’c) (seq ’b ’d)) => bcbd
epsilon => error
. . . became . . .
The Haskell version of CM-Yacc builds an ersatz functor
in a similar fashion. However, while the ML version specified
the terminal type as an input to the parser functor, there
is no way to specify a datatype as an input to an ersatz
functor. Instead, the parsing module defines the terminal
datatype and passes it out.
In the example above, CM-Lex was used in purely functional mode. Consequently, the input stream was simply a
character list, since Haskell lists are lazy already. Alternatively, CM-Lex and CM-Yacc can be directed to generate
monadic code, which allows the lexer or parser to deal with
side effects, either in the generation of the input stream (e.g.,
input read from a file) or in the actions. Doing so incurs
some complications — it is important that the input stream
be memoizing and not every monad is capable of supporting
the proper sort of memoization12 — but these complications
are orthogonal to the functor mechanism discussed here and
are beyond the scope of this paper.
functor LexerFun
(structure Arg :
sig
type t
type u
type info = { match : char list,
follow : char stream }
val
val
val
val
end)
aa : info -> t
abc : info -> t
bcbd : info -> u
error : info -> u
:>
sig
val f : char stream -> Arg.t
val g : char stream -> Arg.u
end
= . . . implementation . . .
6
Conclusion
We argue that functor generation is a cleaner mechanism
for source-code-generating tools than assembling snippets
of disembodied code. The resulting functor makes no demands on the surrounding code (other than a few standard
libraries), and so it is guaranteed to type check.13 The programmer never need look at the generated code.
Figure 2: The example from Section 2.1
¹¹Alternatively, one could give the proxy fields the bare types t and u and fill them in with undefined :: T, but that approach would be more awkward in the monadic case in which we also need to specify a monad. A monad has kind * -> * and therefore does not have elements.
¹²Monads such as IO and ST that support references also support memoization, and Identity supports it trivially (since no memoization need ever be done), but most others do not.
¹³More precisely, it is guaranteed to type check in an initial context containing standard libraries and other module definitions. Unfortunately, Standard ML does not quite enjoy the weakening property, so the resulting functor is not guaranteed to type check in any context. Pollution of the namespace with datatype constructors and/or infix declarations for identifiers that are used within the generated functor will prevent it from parsing correctly. This is one reason why it is considered good practice in SML for all code to reside within modules.
and the parser specification is:
In contrast, with a snippet-assembling tool, an error in
any snippet will — even in the best case — require the programmer to look at generated code containing the snippet.
More commonly, the programmer will need to look at lots
of generated code having nothing to do with the erroneous
snippet.
We have demonstrated the technique for lexer and parser
generation, but there do not seem to be any limitations that
would preclude its use for any other application of sourcecode generation.
A
sml
name CalcParseFun
terminal
terminal
terminal
terminal
terminal
nonterminal Atom : t =
1:NUMBER => number_atom
LPAREN 1:Term RPAREN => paren_atom
A Full Example
As a more realistic example, we implement a calculator that
processes an input stream and returns its value. For simplicity, the calculator stops at the first illegal character (which
might be the end of the stream). The lexer specification is:
nonterminal Factor : t =
1:Atom => atom_factor
1:Atom TIMES 2:Factor => times_factor
sml
name CalcLexFun
alphabet 128
nonterminal Term : t =
1:Factor => factor_term
1:Factor PLUS 2:Term => plus_term
set digit = (range ’0 ’9)
set whitechar =
(or 32 9 10) /* space, tab, lf */
start Term
which generates:
function lex : t =
(+ digit) => number
’+ => plus
’* => times
’( => lparen
’) => rparen
(+ whitechar) => whitespace
functor CalcParseFun
(structure Arg :
sig
type t
val
val
val
val
val
val
/* Stop at the first illegal character */
epsilon => eof
which generates:
functor CalcLexFun
(structure Arg :
sig
type t
number_atom : t -> t
paren_atom : t -> t
atom_factor : t -> t
times_factor : t * t -> t
factor_term : t -> t
plus_term : t * t -> t
datatype terminal =
NUMBER of t
| PLUS
| TIMES
| LPAREN
| RPAREN
type self = { lex : char stream -> t }
type info = { match : char list,
follow : char stream,
self : self }
val
val
val
val
val
val
val
end)
NUMBER of t
PLUS
TIMES
LPAREN
RPAREN
val error : terminal stream -> exn
end)
:>
sig
val parse : Arg.terminal stream -> Arg.t
end
= . . . implementation . . .
number : info -> t
plus : info -> t
times : info -> t
lparen : info -> t
rparen : info -> t
whitespace : info -> t
eof : info -> t
We then assemble the calculator as follows:
structure Calculator
:> sig
val calc : char stream -> int
end =
struct
datatype terminal =
NUMBER of int
| PLUS
| TIMES
| LPAREN
| RPAREN
:>
sig
val lex : char stream -> Arg.t
end
= . . . implementation . . .
References
[1] Andrew W. Appel, James S. Mattson, and
David R. Tarditi.
A lexical analyzer generator
for Standard ML, October 1994.
Available at
www.smlnj.org/doc/ML-Lex/manual.html.
structure Lexer =
CalcLexFun
(structure Arg =
struct
type t = terminal front
[2] Karl Crary, Robert Harper, and Sidd Puri. What
is a recursive module? In 1999 SIGPLAN Conference on Programming Language Design and Implementation, pages 50–63, Atlanta, May 1999.
type self = { lex : char stream -> t }
type info = { match : char list,
follow : char stream,
self : self }
[3] Data.Proxy. Online documentation at https://hackage.haskell.org/package/base-4.7.0.0/docs/Data-Proxy.html, 2014. Retrieved October 21, 2017.
fun number
({ match, follow, self }:info) =
Cons (NUMBER
(Option.valOf
(Int.fromString
(String.implode match))),
lazy (fn () => #lex self follow))
[4] Derek Dreyer. Understanding and Evolving the ML
Module System. PhD thesis, Carnegie Mellon University, School of Computer Science, Pittsburgh, Pennsylvania, May 2005.
[5] Richard Frost and John Launchbury. Constructing natural language interpreters in a lazy functional language.
The Computer Journal, 32(2), 1989.
fun simple terminal
({ follow, self, ... }:info) =
Cons (terminal,
lazy (fn () => #lex self follow))
val
val
val
val
[6] Graham Hutton. Higher-order functions for parsing.
Journal of Functional Programming, 2(3), 1992.
plus = simple PLUS
times = simple TIMES
lparen = simple LPAREN
rparen = simple RPAREN
[7] Stephen C. Johnson. Yacc — yet another compiler compiler. Technical Report 32, Bell Laboratories Computing Science, Murray Hill, New Jersey, July 1975.
[8] Eugene Kohlbecker, Daniel P. Friedman, Matthias
Felleisen, and Bruce Duba. Hygienic macro expansion.
In 1986 ACM Conference on Lisp and Functional Programming, pages 161–161, 1986.
fun whitespace
({ follow, self, ... }:info) =
#lex self follow
[9] M. E. Lesk. Lex — a lexical analyzer generator. Technical Report 39, Bell Laboratories Computing Science,
Murray Hill, New Jersey, October 1975.
fun eof _ = Nil
end)
structure Parser =
CalcParseFun
(structure Arg =
struct
type t = int
[10] Claudio V. Russo. Types for Modules. Number 60 in
Electronic Notes in Theoretical Computer Science. Elsevier, January 2003.
[11] Olin Shivers.
The SRE regular-expression
notation,
August
1998.
Post
to
the
comp.lang.scheme newsgroup, now archived at
http://www.scsh.net/docu/post/sre.html.
fun id x = x
val
val
val
fun
val
fun
number_atom = id
paren_atom = id
atom_factor = id
times_factor (x, y) = x * y
factor_term = id
plus_term (x, y) = x + y
[12] Walid Taha and Tim Sheard. MetaML and multi-stage
programming with explicit annotations. Theoretical
Computer Science, 248(1–2), 2000.
[13] Aaron Turon and John Reppy. SML/NJ Language Processing Tools: User Guide, September 2015. Available
at www.smlnj.org/doc/ml-lpt/manual.pdf.
datatype terminal = datatype terminal
fun error _ = Fail "syntax error"
end)
fun calc strm =
Parser.parse
(lazy (fn () => Lexer.lex strm))
end
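To exercise the assembled calculator one needs a char stream to feed to calc. The helper below is only a sketch and is not part of the paper's code; it assumes the lazy-stream interface used above, namely a type 'a stream with a destructor type 'a front whose constructors are Nil and Cons, and a suspension constructor lazy : (unit -> 'a front) -> 'a stream.

fun fromString s =
    let
        fun front i =
            if i >= String.size s
            then Nil
            else Cons (String.sub (s, i),
                       lazy (fn () => front (i + 1)))
    in
        lazy (fn () => front 0)
    end

(* Assuming the interface above, Calculator.calc (fromString "1+2*3") evaluates
   to 7: TIMES binds tighter than PLUS because Factor, not Term, is the operand
   of times_factor in the grammar. *)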
| 6 |
Words and characters in finite p-groups
by
arXiv:1406.5395v3 [math.GR] 14 Jun 2016
Ainhoa Iñiguez
Mathematical Institute
University of Oxford
Andrew Wiles Building
Woodstock Road
OX2 6GG, Oxford
UNITED KINGDOM
E-mail: [email protected]
and
Josu Sangroniz1
Departamento de Matemáticas
Facultad de Ciencia y Tecnologı́a
Universidad del Paı́s Vasco UPV/EHU
48080 Bilbao
SPAIN
E-mail: [email protected]
Abstract
Given a group word w in k variables, a finite group G and g ∈ G, we consider the number N_{w,G}(g) of k-tuples g_1, . . . , g_k of elements of G such that w(g_1, . . . , g_k) = g. In this work we study the functions N_{w,G} for the class of nilpotent groups of nilpotency class 2. We show that, for the groups in this class, N_{w,G}(1) ≥ |G|^{k−1}, an inequality that can be improved to N_{w,G}(1) ≥ |G|^k/|G_w| (G_w is the set of values taken by w on G) if G has odd order. This last result is explained by the fact that the functions N_{w,G} are characters of G in this case. For groups of even order, all that can be said is that N_{w,G} is a generalized character, something that is false in general for groups of nilpotency class greater than 2. We characterize group-theoretically when N_{x^n,G} is a character if G is a 2-group of nilpotency class 2. Finally we also address the (much harder) problem of studying if N_{w,G}(g) ≥ |G|^{k−1} for g ∈ G_w, proving that this is the case for the free p-groups of nilpotency class 2 and exponent p.
Keywords: p-groups; words; characters
MSC: 20D15, 20F10
Both authors are supported by the MINECO (grants MTM2011-28229-C02-01 and MTM2014-53810-C22-P). The second author is also supported by the Basque Government (grants IT753-13 and IT974-16).
1
Corresponding author
1
Introduction
Given a group word w in k variables x1 , . . . , xk , that is an element in the free group
Fk on x1 , . . . , xk , we can define for any k elements g1 , . . . , gk in a group G the element
w(g1, . . . , gk ) ∈ G by applying to w the group homomorphism from Fk to G sending xi to
gi . For G a finite group and g ∈ G we consider the number
N_{w,G}(g) = |{(g_1, . . . , g_k) ∈ G^{(k)} | w(g_1, . . . , g_k) = g}|.     (1)

(G^{(k)} denotes the k-fold cartesian product of G with itself.) So N_{w,G}(g) can be thought
of as the number of solutions of the equation w = g. The set of word values of w on G,
i. e., the set of elements g ∈ G such that the equation w = g has a solution in G(k) , is
denoted Gw .
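A small example (not taken from the paper) may help fix the notation: take k = 1 and w = x^n on a cyclic group G = C_m. The equation x^n = g is solvable exactly when g lies in G_w = G^n, a subgroup of index gcd(n, m), and in that case it has precisely gcd(n, m) solutions. Thus N_{x^n,C_m}(g) = gcd(n, m) for g ∈ G_w and 0 otherwise, and summing over g recovers \sum_g N_{x^n,C_m}(g) = |G_w| · gcd(n, m) = m = |G|^k.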
There is an extensive literature on the functions N_{w,G}, sometimes expressed in terms of the probabilistic distribution P_{w,G} = N_{w,G}/|G|^k. For example Nikolov and Segal gave in [9] a characterization of the finite nilpotent (and also solvable) groups based on these probabilities: G is nilpotent if and only if inf_{w,g} P_{w,G}(g) > p^{−|G|}, where w and g range over all words and G_w, respectively, and p is the largest prime dividing |G|. One of the implications is easy: if G is not nilpotent the infimum is in fact zero. Indeed, we can consider the k-th left-normed lower central word γ_k = [[[x_1, x_2], x_3], . . . , x_k]. Since G is not nilpotent, there exists some non-trivial element g ∈ G_{γ_k} (for any k). Since γ_k(g_1, . . . , g_k) = 1 if some g_i = 1, we have that P_{γ_k,G}(g) ≤ (|G| − 1)^k/|G|^k, which can be made arbitrarily small. On the other hand the estimation P_{w,G}(g) > p^{−|G|} for g ∈ G_w seems to be far from sharp and Amit in an unpublished work has conjectured that if G is nilpotent, P_{w,G}(1) ≥ 1/|G|.
We prefer to give our results in terms of the functions N_{w,G}. In this paper we focus our attention on finite nilpotent groups of nilpotency class 2, which we take to be p-groups right from the outset, so all the results quoted here are referred to this family of groups. In Section 2 we consider a natural equivalence relation for words that enables us to assume that they have a special simple form. Then it is not difficult to prove Amit's conjecture N_{w,G}(1) ≥ |G|^{k−1} for w ∈ F_k. This result has been proved independently by Levy in [6] using a similar procedure, although our approach to the concept of word equivalence is different. In the next two sections we show that the functions N_{w,G} are generalized characters, a result that is false for nilpotent groups of nilpotency class greater than 2, and even more, if G has odd order, they are genuine characters. In particular we obtain an improvement of Amit's conjectured bound, namely, N_{w,G}(1) ≥ |G|^k/|G_w|. For 2-groups, there are easy examples where N_{x^2,G} fails to be a character and we actually characterize group-theoretically when this happens for the power words w = x^n (always within the class of nilpotent 2-groups of nilpotency class 2). In the last section we consider briefly the conjecture N_{w,G}(g) ≥ |G|^{k−1} for w ∈ F_k and g ∈ G_w. This problem is much harder than the case g = 1 and only some partial results have been obtained, for instance confirming the conjecture if G is a free nilpotent p-group of nilpotency class 2 and exponent p.
2
Words in p-groups of nilpotency class 2
Since we are going to work with p-groups of nilpotency class 2, it is more convenient for
us to define a word in the variables x1 , . . . , xk as an element in the free pro-p group of
nilpotency class 2 on the variables x1 , . . . , xk , Fk . Thus, if w ∈ Fk is a word, it can be
represented in a unique way as
w = x_1^{z_1} · · · x_k^{z_k} \prod_{1 ≤ i < j ≤ k} [x_i, x_j]^{z_{ij}},
where the exponents zl , zij are p-adic integers. Of course, if G is a finite p-group (or pro-p
group) of nilpotency class 2 and g1 , . . . , gk ∈ G, it makes sense to evaluate w on g1 , . . . , gk
by applying the homomorphism π : Fk −→ G given by xi 7→ gi . As in the introduction,
we denote this element w(g1 , . . . , gk ) and define the function Nw = Nw,G by (1).
If σ is an automorphism of Fk , σ is determined by the images of the generators
x1 , . . . , xk , which we denote w1 , . . . , wk . Then the image of w ∈ Fk is the word w(w1 , . . . , wk ),
the result of evaluating w on w1 , . . . , wk . Since σ is an automorphism, there exist
w1′ , . . . , wk′ ∈ Fk such that wi′ (w1 , . . . , wk ) = xi , 1 ≤ i ≤ k, and the inverse automorphism
is given by xi 7→ wi′ . If G is a finite p-group (or pro-p group) of nilpotency class 2, we can
define the map ϕ : G(k) −→ G(k) by ϕ(g1 , . . . , gk ) = (w1 (g1 , . . . , gk ), . . . , wk (g1 , . . . , gk ))
and it is clear that this map is a bijection with the inverse map given by (g1 , . . . , gk ) 7→
(w1′ (g1 , . . . , gk ), . . . , wk′ (g1 , . . . , gk )). If w ′ = w(w1 , . . . , wk ), it is clear that w ′ (g1 , . . . , gk ) =
g if and only if w(ϕ(g1, . . . , gk )) = g, thus ϕ is a bijection between the solutions of w ′ = g
and w = g and in particular, N_{w,G} = N_{w′,G}.
Definition 2.1. We will say that two words w, w ′ ∈ Fk are equivalent if they belong to
the same orbit under the action of the automorphism group of Fk .
Therefore we have proved the following result.
Proposition 2.1. If w, w ′ ∈ Fk are two equivalent words, Nw,G = Nw′ ,G for any finite
p-group G of nilpotency class 2.
Next we want to find a set of representatives of the equivalence classes of words. There
are natural homomorphisms
Aut(F_k) ↠ Aut(F_k/F_k′) ≅ GL_k(Z_p) → Aut(F_k′),     (2)
where the composite map is the restriction. For the middle isomorphism, given an automorphism, we write the coordinates of the images of the vectors in a basis of the Z_p-module F_k/F_k′ ≅ Z_p^{(k)} as rows of the corresponding matrix. The last homomorphism comes from the identification of F_k′ with the exterior square of F_k/F_k′. More explicitly, we identify F_k′ with the additive group of the k × k antisymmetric matrices over Z_p, A_k, via w = \prod_{i<j} [x_i, x_j]^{z_{ij}} ↦ A, where A ∈ A_k has entries z_{ij} for 1 ≤ i < j ≤ k. Then, if X ∈ GL_k(Z_p), the action of X on A_k is given by A ↦ X^t A X. This action is better understood if we interpret A as an alternating bilinear form on the free Z_p-module Z_p^{(k)}. Notice however that, under a change of basis, the matrix A is now transformed into P A P^t, where P is the matrix associated to the change of basis, writing the coordinates of the vectors in the new basis as rows of P.
We start by analyzing the action of GL_k(Z_p) and the affine subgroup

Aff_{k−1}(Z_p) = \left\{ \begin{pmatrix} 1 & 0 \\ u^t & X \end{pmatrix} : u ∈ Z_p^{(k−1)},\ X ∈ GL_{k−1}(Z_p) \right\}

(t means transposition), on A_k, giving a set of representatives of the orbits.
about the action of GLk (Zp ) generalizes naturally if we replace Zp by any principal ideal
domain (see for example [8, Theorem IV.1]), but a more elaborate proof is required. For
completeness we include a proof here that takes advantage of the fact that Zp is a discrete
valuation ring and can be later adapted to the case when the acting group is Aff k−1 (Zp ).
Lemma 2.2.
(i) Each orbit of the action of GL_k(Z_p) on A_k contains a unique block diagonal matrix with diagonal non-zero blocks p^{s_i} H, where H = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}, 1 ≤ i ≤ r, and 0 ≤ s_1 ≤ · · · ≤ s_r (0 ≤ r ≤ k/2).
(ii) Each orbit of the action of the affine group Aff_{k−1}(Z_p) on A_k contains a unique tridiagonal matrix A (that is, all the entries a_{ij} of A with |i − j| > 1 are zero) with the non-zero entries above the main diagonal a_{i,i+1} = p^{s_i}, 1 ≤ i ≤ r, and 0 ≤ s_1 ≤ s_2 ≤ · · · ≤ s_r (0 ≤ r < k).
Proof. Given A in A_k, we consider a basis {e_1, . . . , e_k} (for instance, the canonical basis) in the free Z_p-module Z_p^{(k)} and the alternating bilinear form ( , ) defined by the matrix A with respect to this basis. There is nothing to prove if A is the zero matrix, so we can suppose that (e_i, e_j) ≠ 0 for some 1 ≤ i < j ≤ k and we can assume that its p-adic valuation is minimum among the valuations of all the (non-zero) (e_r, e_s). After reordering the basis, we can suppose that i = 1 and j = 2 and moreover, by multiplying e_1 or e_2 by a p-adic unit, we can suppose that (e_1, e_2) = p^{s_1} for some s_1 ≥ 0. Notice that any (non-zero) (u, v) has p-adic valuation greater than or equal to s_1.
Now for each i ≥ 3 we set e′_i = e_i + α_i e_1 + β_i e_2, where α_i, β_i ∈ Z_p are chosen so that (e′_i, e_1) = (e′_i, e_2) = 0. The elements α_i, β_i exist because the valuation of (e_1, e_2) is less than or equal to the valuations of (e_i, e_2) and (e_i, e_1). By replacing e_i by e′_i we can suppose that ⟨e_1, e_2⟩ is orthogonal to ⟨e_3, . . . , e_k⟩. Proceeding inductively, we obtain a basis {e′_1, . . . , e′_k} such that, for some 0 ≤ r ≤ k/2, the subspaces ⟨e′_{2i−1}, e′_{2i}⟩ are pairwise orthogonal for 1 ≤ i ≤ r, the remaining vectors are in the kernel of the form and (e′_{2i−1}, e′_{2i}) = p^{s_i}, 1 ≤ i ≤ r, with 0 ≤ s_1 ≤ · · · ≤ s_r. It is clear that with respect to this new basis the matrix associated to the form ( , ) has the desired form.
To prove uniqueness suppose that A and A′ are block diagonal matrices with (non-zero) diagonal blocks p^{s_1} H, . . . , p^{s_r} H, 0 ≤ s_1 ≤ · · · ≤ s_r, and p^{s′_1} H, . . . , p^{s′_t} H, 0 ≤ s′_1 ≤ · · · ≤ s′_t, respectively, and A′ = X^t A X for some X ∈ GL_k(Z_p). The matrices A, A′ and X can be viewed as endomorphisms of the abelian groups R_n = (Z/p^n Z)^{(k)}, n ≥ 1. Since X defines in fact an automorphism of R_n, the image subgroups of A and A′ (as endomorphisms of R_n) have the same order. For A this order is p^{2s}, where s = \sum_{s_i ≤ n} (n − s_i), and similarly for A′. We conclude that, for any n ≥ 1, \sum_{s_i ≤ n} (n − s_i) = \sum_{s′_i ≤ n} (n − s′_i), whence r = t and s_i = s′_i for all 1 ≤ i ≤ r, that is, A = A′.
For the existence part in (ii) we have to show that, given an alternating form ( , ) on Z_p^{(k)} and a basis {e_1, . . . , e_k}, there exists another basis {e′_1, . . . , e′_k} such that e′_1 ∈ e_1 + ⟨e_2, . . . , e_k⟩, ⟨e′_2, . . . , e′_k⟩ = ⟨e_2, . . . , e_k⟩ and (e′_i, e′_j) = 0 for |i − j| > 1, (e_i, e_{i+1}) = p^{s_i}, 0 ≤ s_1 ≤ · · · ≤ s_r, (e_i, e_{i+1}) = 0, r < i < k. We can suppose that ( , ) is not the trivial form and then consider the minimum valuation s_1 of all the (non-zero) (e_i, e_j). If this minimum is attained for some (e_1, e_j) we interchange e_2 and e_j. Otherwise this minimum is attained for some (e_i, e_j), 2 ≤ i < j ≤ k, and (e_1 + e_i, e_j) has still valuation s_1 (because the valuation of (e_1, e_j) is strictly greater than s_1). By replacing e_1 by e_1 + e_i, interchanging e_2 and e_j, and adjusting units, we can suppose that (e_1, e_2) = p^{s_1}. Now we can replace e_i, i ≥ 3, by e′_i = e_i + α_i e_2, where α_i is chosen so that (e_1, e′_i) = 0. Thus we can assume (e_1, e_i) = 0 for i ≥ 3. Now we iterate the procedure with the basis elements e_2, . . . , e_k.
We prove uniqueness with a similar counting argument as in (i) but this time considering the order of the images of the subgroup of R_n, S_n = {0} × (Z/p^n Z)^{(k−1)}. So we assume that A′ = X^t A X with A and A′ as in (ii) and X ∈ Aff_{k−1}(Z_p). Notice that, as an automorphism of R_n, X fixes S_n, so the images of S_n by A and A′ must have the same order. These orders are p^s and p^{s′}, where s = \sum_{s_i ≤ n} (n − s_i) and similarly for s′, so s = s′ and, since this must happen for any n ≥ 1, it follows that s_i = s′_i for all i, that is, A = A′.
Proposition 2.3. The following words are a system of representatives of the action of Aut F_k on F_k:

[x_1, x_2]^{p^{s_1}} · · · [x_{2r−1}, x_{2r}]^{p^{s_r}},   0 ≤ r ≤ k/2,  0 ≤ s_1 ≤ · · · ≤ s_r,     (3)

x_1^{p^{s_1}} [x_1, x_2]^{p^{s_2}} [x_2, x_3]^{p^{s_3}} · · · [x_{r−1}, x_r]^{p^{s_r}},   1 ≤ r ≤ k,  s_1 ≥ 0,  0 ≤ s_2 ≤ · · · ≤ s_r.     (4)
Proof. As explained above, the action of Aut F_k on F_k′ can be suitably identified with the action of GL_k(Z_p) on A_k, thus it follows directly from part (i) of the previous lemma that the words (3) are representatives for the orbits contained in F_k′.

Now suppose w ∈ F_k \ F_k′. Then w = (x_1^{z_1} · · · x_k^{z_k})^{p^{s_1}} \prod_{1 ≤ i < j ≤ k} [x_i, x_j]^{z_{ij}}, where s_1 ≥ 0 and some z_i is a p-adic unit. After applying the inverse of the automorphism x_1 ↦ x_1^{z_1} · · · x_k^{z_k}, x_i ↦ x_1, x_j ↦ x_j, for j ≠ 1, i, we can assume that x_1^{z_1} · · · x_k^{z_k} = x_1. Now we consider the action of the stabilizer of x_1, Aut_{x_1} F_k. The image of this subgroup by the first map in (2) is Aff_{k−1}(Z_p), so we can identify the action of Aut_{x_1} F_k on F_k′ with the action of Aff_{k−1}(Z_p) on A_k. It follows from Lemma 2.2 (ii) that w is equivalent to a word in (4). Notice also that if w′ = σ(w) for two of these words w and w′ and some σ ∈ Aut F_k, it would follow by passing to F_k/F_k′ that σ(x_1)^{p^{s_1}} = x_1^{p^{s′_1}}. Since σ induces automorphisms of (F_k/F_k′)^{p^s} and this chain of subgroups of F_k/F_k′ is strictly decreasing, we conclude that s_1 = s′_1. But F_k/F_k′ is torsion-free, so σ fixes x_1, that is, σ(x_1) = x_1 z for some z ∈ F_k′. Composing σ with the automorphism x_1 ↦ x_1 z^{−1}, x_i ↦ x_i, i ≥ 2, we can suppose that σ ∈ Aut_{x_1} F_k. Thus, the two matrices in A_k associated to x_1^{−p^{s_1}} w and x_1^{−p^{s_1}} w′ are in the same orbit by Aff_{k−1}(Z_p), and so they coincide by Lemma 2.2 (ii). We conclude that w = w′.
Theorem 2.4. Let G be a finite p-group of nilpotency class 2 and w a word in k variables. Then N_w(1) ≥ |G|^{k−1}.

Proof. We can suppose that w is as in the last proposition. Write k_0 = ⌊k/2⌋ and fix g_2, g_4, . . . , g_{2k_0} ∈ G. Then the map G′ × G^{(k−k_0−1)} → G′ given by (x_1, x_3, . . . , x_{2(k−k_0)−1}) ↦ w(x_1, g_2, x_3, . . . ) is a group homomorphism whose kernel has size at least |G|^{k−k_0−1}. Since there are |G|^{k_0} choices for g_2, g_4, . . . , g_{2k_0}, we get at least |G|^{k−1} solutions to the equation w(x_1, . . . , x_k) = 1.
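As a concrete check (not part of the original text), take w = [x_1, x_2] (so k = 2) and let G be any nonabelian group of order p^3; such a group has nilpotency class 2 with |Z(G)| = |G′| = p. Then N_w(1) is the number of commuting pairs, N_w(1) = \sum_{x ∈ G} |C_G(x)| = p · p^3 + (p^3 − p) · p^2 = p^5 + p^4 − p^3, which indeed exceeds |G|^{k−1} = p^3.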
3
The functions Nw from a character-theoretical point
of view
In this section, unless otherwise stated, we consider an arbitrary finite group G and a
word w that is thought now as an element in the free group with, say, free generators
x1 , . . . , xk . The function Nw = Nw,G is of course a class function, so it can be written as
a linear combination of the irreducible characters of G:
N_w = \sum_{χ ∈ Irr(G)} N_w^χ χ,     (5)

where

N_w^χ = (N_w, χ) = \frac{1}{|G|} \sum_{g ∈ G} N_w(g) χ(g) = \frac{1}{|G|} \sum_{(g_1,...,g_k) ∈ G^{(k)}} χ(w(g_1, . . . , g_k)).     (6)
It is a natural question to study when the function N_w is a character of G. Notice that in this case N_w(g) ≤ N_w(1) for any element g ∈ G, so N_w(1) will be at least the average of the non-zero values of the function N_w, that is

N_w(1) ≥ \frac{1}{|G_w|} \sum_{g ∈ G_w} N_w(g) = \frac{|G|^k}{|G_w|},     (7)

which of course improves the bound conjectured by Amit, N_w(1) ≥ |G|^{k−1}.
It is easy to find examples where N_w is not a character. Probably the simplest is Q_8, the quaternion group of order 8, and w = x^2: N_{x^2,Q_8}(1) = 2 but N_{x^2,Q_8}(z) = 6 for the unique involution z ∈ Q_8. For p odd we can construct a p-analogue of Q_8 as G = (⟨g_1, . . . , g_{p−1}⟩ ⋊ ⟨g_p⟩)/⟨g_{p−1}^p g_p^{−p}⟩, where ⟨g_1, . . . , g_{p−1}⟩ ≅ C_p × · · · × C_p × C_{p^2}, ⟨g_p⟩ ≅ C_{p^2} (C_n denotes a cyclic group of order n) and g_i^{g_p} = g_i g_{i+1}, 1 ≤ i < p − 2, g_{p−2}^{g_p} = g_{p−2} g_{p−1}^p and g_{p−1}^{g_p} = g_{p−1} g_1^{−1}. It can be checked that N_{x^p,G}(1) = p^{p−1} but N_{x^p,G}(z) = p^p + p^{p−1} for any non-trivial element z ∈ Z(G) = G_{x^p} = ⟨g_p^p⟩. Notice that |G| = p^{p+1} and this is the smallest order for which N_{x^p,G} can fail to be a character, since p-groups of order at most p^p are regular and, for regular p-groups, N_{x^p,G} is the regular character of G/G^p. Also, for the quaternion group and its p-analogues, (7) is false. However, it is a well known result that in general the functions N_{x^n,G} are generalized characters (i. e., Z-linear combinations of irreducible characters), see [4, Problem 4.7].
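Spelled out for Q_8 (a verification, not in the original text): N_{x^2,Q_8} vanishes outside Z(Q_8) = {1, z}, and the unique irreducible character χ of degree 2 satisfies χ(1) = 2 and χ(z) = −2, so by (6), N_{x^2,Q_8}^χ = (1/8)(2 · 2 + 6 · (−2)) = −1. The coefficient is a negative integer, so N_{x^2,Q_8} is a generalized character, as Theorem 3.2 below guarantees, but not a character.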
For words w in more than one variable the situation is different and there are examples
of groups G and words w where Nw,G is not a generalized character, even among nilpotent
groups. As for non-solvable examples one can take G = PSL_2(11) and the 2-Engel word w = [x, y, y] (see [10] for another choice of w). Then the coefficients N_w^χ for the two irreducible characters χ of degree 12 are 305 ± 23√5. More examples can be obtained
using the following result by A. Lubotzky [7]: if G is a simple group and 1 ∈ A ⊆ G is
a subset invariant under the group of automorphisms of G, then A = Gw for some word
w in two variables. Notice that if A contains an element a such that ai 6∈ A, for some
i coprime with the order of a, then Nw (ai ) = 0 but Nw (a) 6= 0, something that cannot
happen if Nw is a generalized character.
Examples of p-groups where Nw is not a generalized character are provided by the
free p-groups of nilpotency class 4 and exponent p with p > 2 and p ≡ 1 (mod 4) and
w = [x, y, x, y] (which settles in the negative a question of Parzanchevski who had asked
if the functions Nw were always generalized characters for solvable or nilpotent groups).
We realize these groups as 1 + J, where J = I/I^4 and I is the ideal generated by X
and Y in the algebra of the polynomials in the non-commuting indeterminates X and Y
with coefficients in Fp . If x = 1 + X and y = 1 + Y and u = [x, y, x, y], then certainly
u ∈ Gw but we claim that if i is not a quadratic residue modulo p, then ui 6∈ Gw (so
Nw (u) 6= 0 but Nw (ui ) = 0). Indeed, one can check directly (or by appealing to the Lazard
correspondence, but in this case, only for p > 3, see [5, §10.2]) that γ4 (G) = 1 + γ4 (J),
where γ4 (J) is the fourth term in the descending central series of the Lie algebra J. Now
ui ∈ Gw if and only if
i[X, Y, X, Y ] = [aX + bY, cX + dY, aX + bY, cX + dY ]
for some a, b, c, d ∈ Fp , and one can see that this equation has no solution if i is not a
quadratic residue modulo p.
In contrast to the previous example, we shall show that the functions Nw are always
generalized characters for p-groups of nilpotency class 2 and, in the next section, that
they are actually genuine characters for odd p. Before that we recall briefly that for some
words w the functions Nw are known to be characters for any group, notably, for the
commutator word w = [x, y] (this is basically [4, Problem 3.10]). This classical result due
to Frobenius can be extended in various ways: when w is an admissible word (i. e., a word
in which all the variables appear exactly twice, once with exponent 1 and once with −1)
[3] or when w = [w ′ , y], where y is a variable which does not occur in w ′ (so in particular,
for γk = [x1 , x2 , . . . , xk ], the k-th left-normed lower central word) [12]. It is also clear that
if Nw and Nw′ are characters (or generalized characters), so is Nww′ if w and w ′ have no
common variables. The reason is of course that Nww′ = Nw ∗ Nw′ is the convolution of
the two functions Nw and Nw′ . More information in this direction is given in [10].
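To make the convolution explicit: since w and w′ share no variables, a solution of ww′ = g is exactly a solution of w = h together with a solution of w′ = h^{−1}g for some h ∈ G, so N_{ww′}(g) = \sum_{h ∈ G} N_w(h) N_{w′}(h^{−1}g).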
As promised we finish this section by proving that the functions Nw are generalized
characters for p-groups of nilpotency class 2. We give first a characterization of when Nw
is a generalized character. We have already used before that a necessary condition is that
Nw (g) = Nw (g i ) for any i coprime with the order of g and we are going to see that this
condition is in fact sufficient. The proof is standard once we know that the coefficients Nwχ
in (5) are algebraic integers (as Amit and Vishne point out in [1] this follows immediately
from (6) and the result of Solomon’s in [11] that Nw (g) is always a multiple of |CG (g)|).
Lemma 3.1. Let G be a group and w a word. Then Nw = Nw,G is a generalized character
of G if and only if Nw (g) = Nw (g i ) for any g ∈ G and i coprime with the order of G.
Proof. We only need to prove sufficiency. Since the coefficients N_w^χ are algebraic integers it suffices to see that they are rational numbers. But, by elementary character and Galois theory, if f is a rational-valued class function of a group G, f is a Q-linear combination of irreducible characters if and only if f(g) = f(g^i) for any g ∈ G and i coprime with the order of G. Indeed, if f = \sum_{χ ∈ Irr(G)} a_χ χ with a_χ ∈ Q, and σ is the automorphism of the cyclotomic extension Q(ε)/Q sending ε to ε^i, where ε is a primitive |G|-th root of unity, we have

f(g) = f(g)^σ = \sum_{χ ∈ Irr(G)} a_χ χ^σ(g) = \sum_{χ ∈ Irr(G)} a_χ χ(g^i) = f(g^i).

And conversely, if f(g) = f(g^i),

f(g) = f(g)^{σ^{−1}} = f(g^i)^{σ^{−1}} = \Big( \sum_{χ ∈ Irr(G)} a_χ χ(g^i) \Big)^{σ^{−1}} = \sum_{χ ∈ Irr(G)} a_χ^{σ^{−1}} χ(g).

By the linear independence of the irreducible characters, we conclude that a_χ = a_χ^{σ^{−1}} for any automorphism σ, so a_χ ∈ Q.
Theorem 3.2. Let G be a p-group of nilpotency class 2 and w a word. Then the function
Nw = Nw,G is a generalized character of G.
Proof. By Proposition 2.3 we can suppose that w has the form (3) or (4). Now we observe that, if i is not a multiple of p, the map (g_1, g_2, . . . , g_k) ↦ (g_1^i, g_2, g_3^i, . . . ) is a bijection from the set of solutions of w = g to the set of solutions of w = g^i, so in particular N_w(g) = N_w(g^i) and the result follows from the previous lemma.
4
The functions Nw for odd p-groups of nilpotency
class 2
The goal of this section is to show that Nw,G is a genuine character of a p-group G of
nilpotency class 2 if p is odd. We begin with a general result.
Lemma 4.1. Let χ ∈ Irr(G) with kernel K and w a word in k variables. Then N_w^χ = |K|^{k−1} N_{w,G/K}^{\overline{χ}}, where \overline{χ} is the character of G/K defined naturally by χ.

Proof. We have

N_w^χ = (N_w, χ) = (\tilde{N}_w, \overline{χ})_{G/K},

where \tilde{N}_w is the average function defined by \tilde{N}_w(g) = \frac{1}{|K|} \sum_{n ∈ K} N_w(gn), viewed as a function on G/K. As such a function it is clear that \tilde{N}_w = |K|^{k−1} N_{w,G/K}, so the result is clear.
We assume now that G is a p-group of nilpotency class 2. The following technical
result characterizes when Nw is a character.
Lemma 4.2. Let G be a p-group of nilpotency class 2. Then Nw is a character of G
if and only if for any (non-trivial) epimorphic image of G, say G1 , with cyclic center,
Nw,G1 (1) ≥ Nw,G1 (z), where z is a central element of G1 of order p.
Proof. By Theorem 3.2 we know that N_w is a generalized character, that is, all the numbers N_w^χ are integers, so N_w is a character of G if and only if these numbers are non-negative.

We recall that a group G with a faithful irreducible character χ has cyclic center. We claim that if G is a p-group of nilpotency class 2 and χ is a faithful irreducible character then N_w^χ ≥ 0 if and only if N_w(1) ≥ N_w(z), where z ∈ Z(G) = Z has order p. Indeed, it is well known that χ(1)χ = η^G, where η is a (faithful) linear character of Z(χ) = Z (see for instance [4, Theorem 2.31 and Problem 6.3]). Then

N_w^χ = \frac{1}{χ(1)} (N_w, η^G) = \frac{1}{χ(1)} (N_w|_Z, η)_Z.

If the order of Z is p^r and z_1 is a generator with z = z_1^{p^{r−1}}, we have

(N_w|_Z, η)_Z = \frac{1}{|Z|} \sum_{0 < i ≤ p^r} N_w(z_1^i) ε^i = \frac{1}{|Z|} \sum_{0 ≤ j ≤ r} N_w(z_1^{p^j}) \Big( \sum_{\substack{0 < i ≤ p^{r−j} \\ (p,i)=1}} (ε^{p^j})^i \Big),     (8)

where ε = η(z_1) is a primitive p^r-th root of unity and we have used Lemma 3.1 for the second equality. Notice that the innermost sum of (8) is the sum of all the primitive p^{r−j}-th roots of unity, which is always zero except in the cases p^{r−j} = 1 or p, when it is 1 or −1, respectively. We conclude that

(N_w|_Z, η)_Z = \frac{1}{|Z|} (N_w(1) − N_w(z_1^{p^{r−1}})) = \frac{1}{|Z|} (N_w(1) − N_w(z))

and our claim follows.

Now we prove the sufficiency part in the lemma. Let χ ∈ Irr(G), K = ker(χ) and G_1 = G/K. Of course we can suppose that χ ≠ 1_G (because N_w^{1_G} = |G|^{k−1} ≥ 0, k is the number of variables of w). By hypothesis N_{w,G_1}(1) ≥ N_{w,G_1}(z), where z is a central element of G_1 of order p. We can view χ as a faithful character \overline{χ} of G_1 and then our previous claim implies that N_{w,G_1}^{\overline{χ}} ≥ 0. By Lemma 4.1, N_w^χ ≥ 0, which shows that N_w is a character.

Conversely, suppose that N_w is a character, that is, all the numbers N_w^χ are non-negative, and consider an epimorphic image G_1 = G/N with cyclic center and a central element z ∈ G_1 of order p. Then G has an irreducible character χ with kernel N that is faithful when considered as a character \overline{χ} of G_1. Since N_w^χ ≥ 0, again by Lemma 4.1, N_{w,G_1}^{\overline{χ}} ≥ 0 and, by our initial claim, the inequality N_{w,G_1}(1) ≥ N_{w,G_1}(z) follows.
Theorem 4.3. Let G be a p-group of nilpotency class 2, p odd, and w a word. Then Nw
is a character of G.
Proof. By the last result it suffices to show that if G has cyclic center Z and z ∈ Z has order p, N_w(1) ≥ N_w(z). Also we can assume that w has the form (4) (if w is as in (3), skip the next two paragraphs).

If Z^{p^{s_1}} ≠ 1, we can write z = z_1^{p^{s_1}} for some z_1 ∈ Z and then the map (g_1, g_2, . . . , g_k) ↦ (g_1 z_1, g_2, . . . , g_k) is a bijection between the sets of solutions of w = 1 and w = z, so N_w(1) = N_w(z).

Now we suppose that Z^{p^{s_1}} = 1 and notice that, since G has nilpotency class 2 and p is odd,

(xy)^{p^{s_1}} = x^{p^{s_1}} y^{p^{s_1}} [y, x]^{\binom{p^{s_1}}{2}} = x^{p^{s_1}} y^{p^{s_1}}.     (9)
Therefore if we fix g2 , g4 , · · · ∈ G, the map (g1 , g3 , . . . ) 7→ w(g1 , g2 , . . . , gk ) is a group
homomorphism ϕg2 ,g4 ,.... Obviously there is a bijection between the kernel of this homomorphism and the set of solutions of w = 1 with x2i = g2i . As for the solutions of w = z
with x2i = g2i , either this set is empty or else its elements are in one-to-one correspondence with the elements in a coset of the kernel of ϕg2 ,g4,... . In any case, considering only
solutions with x2i = g2i , the number of solutions of w = 1 is greater than or equal to the
number of solutions of w = z. Varying g2 , g4 ,. . . , we conclude Nw (1) ≥ Nw (z), as desired.
5
The functions N_{x^n} for 2-groups of nilpotency class 2

In this section we study the functions N_{x^n} for 2-groups of nilpotency class 2 and characterize when this function is a character. As we already pointed out in Section 3, the function N_{x^2,Q_8} is not a character and in fact for each r ≥ 1 we can define a 2-group Q_{2^{3r}} of order 2^{3r}, which is the usual quaternion group Q_8 when r = 1, such that N_{x^{2^r}, Q_{2^{3r}}} is not a character. We shall see that this group is in some sense involved in G whenever N_{x^{2^r}, G} is not a character. We shall also need to introduce another family of groups, denoted D_{2^{3r}}, that, for r = 1, is the usual dihedral group of order 8.

Definition 5.1. Let r ≥ 1. We define the quasi-dihedral and quasi-quaternion groups, D_{2^{3r}} and Q_{2^{3r}}, as

D_{2^{3r}} = ⟨x, z_1, y | x^{2^r} = z_1^{2^r} = y^{2^r} = 1, [x, z_1] = 1, [x, y] = z_1, [z_1, y] = 1⟩,

Q_{2^{3r}} = ⟨x, z_1, y | z_1^{2^r} = 1, x^{2^r} = z_1^{2^{r−1}} = y^{2^r}, [x, z_1] = 1, [x, y] = z_1, [z_1, y] = 1⟩.

One can check that, if G = D_{2^{3r}} or Q_{2^{3r}}, G has order 2^{3r}, exponent 2^{r+1} and G′ = Z(G) = ⟨z_1⟩ is cyclic of order 2^r. Moreover, if z = z_1^{2^{r−1}} is the central involution, in the (quasi-)dihedral case N_{x^{2^r}}(1) = 3 × 2^{3r−2} and N_{x^{2^r}}(z) = 2^{3r−2}, whereas in the quaternion case the numbers are in the reverse order: N_{x^{2^r}}(1) = 2^{3r−2} and N_{x^{2^r}}(z) = 3 × 2^{3r−2} (and so N_{x^{2^r}} is not a character of Q_{2^{3r}}).
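For r = 1 these counts can be verified directly (a check not in the original text): in the dihedral group of order 8 the identity, the central involution and the four reflections square to 1 while the two rotations of order 4 square to z, so N_{x^2}(1) = 6 = 3 × 2 and N_{x^2}(z) = 2; in Q_8 only ±1 square to 1 while the six elements ±i, ±j, ±k square to z, giving N_{x^2}(1) = 2 and N_{x^2}(z) = 6, in reverse order as claimed.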
If T and H are 2-groups with cyclic center and |Z(T)| ≤ |Z(H)|, we shall denote T ∗ H the central product of T and H with Z(T) amalgamated with the corresponding subgroup of Z(H). Notice that if all the generators of Z(T) are in the same orbit under the action of the automorphism group of T (or if a similar situation holds in H), the group T ∗ H is unique up to isomorphism and this is what happens if T = D_{2^{3r}} or Q_{2^{3r}}. Also, for a p-group G, Ω_r(G) is the subgroup generated by the elements of order at most p^r.
Lemma 5.1. Let G be a 2-group of nilpotency class 2 and cyclic center Z of order 2r .
Suppose that Ωr+1 (G)′ = Z. Then G = T ∗ H, where T is isomorphic to D23r or Q23r .
Proof. Since G has nilpotency class 2, Ωr+1 (G)′ is generated by the commutators of elements of order at most 2r+1 and it is cyclic, because it is contained in Z, which is cyclic,
so it is generated by one of these commutators, say [x, y]. The orders of x and y have to
be 2r or 2r+1 (because [x, y] has order 2r ). If both have order 2r it is clear that T = hx, yi
is isomorphic to D23r and, if both have order 2r+1 , is isomorphic to Q23r (notice that
r
r
r
G2 ⊆ Z, so x2 = y 2 ). On the other hand, if one is of order 2r and the other of order
2r+1 , their product has order 2r and T is isomorphic to D23r again.
Now it suffices to check that G = T CG (T ) (because obviously T ∩ CG (T ) = Z(T )
has order 2r , and so is the center of G). Indeed, the conjugacy class of x has order
|[x, G]| = |G′| = |Z| = pr and the same for y, so
|G : CG (T )| = |G : CG (x) ∩ CG (y)| ≤ |G : CG (x)||G : CG (y)| = 22r .
But
|T CG (T ) : CG (T )| = |T : T ∩ CG (T )| = |T : Z| = 22r ,
so G = T CG (T ), as claimed.
One can check that, as it happens with the usual dihedral and quaternion groups,
D23r ∗ D23r and Q23r ∗ Q23r are isomorphic. Using this result and iterating Lemma 5.1 we
get the following.
Proposition 5.2. Let G be a 2-group of nilpotency class 2 and cyclic center Z of order 2^r. Then G is isomorphic to a group D_{2^{3r}} ∗ ⋯ ∗ D_{2^{3r}} ∗ H or D_{2^{3r}} ∗ ⋯ ∗ D_{2^{3r}} ∗ Q_{2^{3r}} ∗ H, with n ≥ 0 factors D_{2^{3r}}, where H has cyclic center of order 2^r and Ω_{r+1}(H)′ is properly contained in the center of H.
Now suppose that G = T ∗ H, where T = D_{2^{3r}} or Q_{2^{3r}} and H is a 2-group of nilpotency class 2 and cyclic center of order 2^r. For any g ∈ T, g^{2^r} = 1 or z, thus if h ∈ H, (gh)^{2^r} = 1 if and only if g^{2^r} = h^{2^r} = 1 or z. Similarly, (gh)^{2^r} = z if and only if g^{2^r} = 1 and h^{2^r} = z or the other way round. This means that

N_{x^{2^r}, G}(1) = (N_{x^{2^r}, T}(1) N_{x^{2^r}, H}(1) + N_{x^{2^r}, T}(z) N_{x^{2^r}, H}(z))/2^r
N_{x^{2^r}, G}(z) = (N_{x^{2^r}, T}(1) N_{x^{2^r}, H}(z) + N_{x^{2^r}, T}(z) N_{x^{2^r}, H}(1))/2^r,

whence

N_{x^{2^r}, G}(1) = 2^{2r−2} (3 N_{x^{2^r}, H}(1) + N_{x^{2^r}, H}(z))  or  2^{2r−2} (N_{x^{2^r}, H}(1) + 3 N_{x^{2^r}, H}(z))
N_{x^{2^r}, G}(z) = 2^{2r−2} (3 N_{x^{2^r}, H}(z) + N_{x^{2^r}, H}(1))  or  2^{2r−2} (N_{x^{2^r}, H}(z) + 3 N_{x^{2^r}, H}(1)),

depending on whether T = D_{2^{3r}} or Q_{2^{3r}}, respectively. It follows that, in the former case, N_{x^{2^r}, G}(1) ≥ N_{x^{2^r}, G}(z) if and only if N_{x^{2^r}, H}(1) ≥ N_{x^{2^r}, H}(z) but, in the latter case, this holds if and only if N_{x^{2^r}, H}(1) ≤ N_{x^{2^r}, H}(z) (and the same equivalences hold if inequalities are replaced by equalities). The same happens if T = D_{2^{3r}} ∗ ⋯ ∗ D_{2^{3r}} or D_{2^{3r}} ∗ ⋯ ∗ D_{2^{3r}} ∗ Q_{2^{3r}}, n ≥ 0. The combination of this with Proposition 5.2 basically reduces our problem to the groups G with Ω_{r+1}(G)′ properly contained in the center, which is the situation considered in the next lemma.
Lemma 5.3. Let G be a 2-group of nilpotency class 2 and cyclic center of order 2^r and let z be the unique central involution. Suppose that Ω_{r+1}(G)′ is properly contained in Z = Z(G). Then N_{x^{2^r}}(1) > N_{x^{2^r}}(z) if and only if G has exponent 2^r. Otherwise N_{x^{2^r}}(1) = N_{x^{2^r}}(z).
r
r
Proof. Since (G′ )2 ⊆ Z 2 = 1, raising to the 2r+1-th power is a group endomorphism
r+1
of G by (9) and Ωr+1 (G) = {x ∈ G | x2
= 1}. Moreover, Ωr+1 (G)′ is contained in
r−1
Z 2 , so (Ωr+1 (G)′ )2
= 1 and raising to the 2r -th power is a group endomorphism of
r
Ωr+1 (G) with kernel Ωr (G) = {x ∈ G | x2 = 1}. It is clear now that Nx2r (1) = |Ωr (G)|
r
and Nx2r (z) = |Ωr+1 (G)| − |Ωr (G)| (for any element x in Ωr+1 (G)\Ωr (G), x2 is a central
involution, so it is z), so Nx2r (1) > Nx2r (z) if and only if |Ωr+1 (G) : Ωr (G)| < 2, that
r
r
is Ωr+1 (G) = Ωr (G) = G, i. e., G has exponent 2r . Otherwise 1 6= Ωr+1 (G)2 = {x2 |
r
x ∈ Ωr+1 (G)}, thus z ∈ Ωr+1 (G)2 is in the image of the 2r -th power endomorphism of
Ωr+1 (G) and Nx2r (z) = |Ωr (G)| = Nx2r (1).
Proposition 5.4. Let G be a 2-group of nilpotency class 2, cyclic center of order 2^r and central involution z. Then N_{x^{2^r}}(1) < N_{x^{2^r}}(z) if and only if G is isomorphic to a group D_{2^{3r}} ∗ ⋯ ∗ D_{2^{3r}} ∗ Q_{2^{3r}} ∗ H, with n ≥ 0 factors D_{2^{3r}}, where H has cyclic center of order 2^r and exponent 2^r.
Proof. By Proposition 5.2, G has two possible decompositions as a central product with one factor H satisfying the hypotheses of Lemma 5.3. If Q_{2^{3r}} does not occur in the decomposition of G, we know that N_{x^{2^r}}(1) < N_{x^{2^r}}(z) if and only if N_{x^{2^r}, H}(1) < N_{x^{2^r}, H}(z), something that, according to Lemma 5.3, never happens. Therefore Q_{2^{3r}} does occur in the decomposition of G. In this case, we know that N_{x^{2^r}}(1) < N_{x^{2^r}}(z) if and only if N_{x^{2^r}, H}(1) > N_{x^{2^r}, H}(z), which, by Lemma 5.3, is equivalent to H having exponent 2^r.
It is not difficult to classify the groups H in the previous proposition.
Lemma 5.5. Let H be a 2-group of nilpotency class 2, cyclic center of order 2^r and exponent 2^r. Then H ≅ D_{2^{3r_1}} ∗ · · · ∗ D_{2^{3r_n}} ∗ C_{2^r} with r_1 ≤ · · · ≤ r_n < r.
Proof. The result is trivially true if H is abelian, so suppose H ′ = h[x, y]i =
6 1 has order
s
2r
2s
2s
2 . Since (xy) = 1, (9) yields s < r. The elements x and y are central with orders
s
at most 2r−s , so they lie in hz12 i, where Z(H) = hz1 i, and, for suitable i and j, xz1i and
yz1j have order exactly 2s . By replacing x and y by these elements, we can suppose that
T = hx, yi ∼
= D23s . Arguing as in the last part of the proof of Lemma 5.1, H = T CH (T )
and T ∩ CH (T ) = Z(T ) is cyclic of order 2s . Since Z(H) ≤ CH (T ), the hypotheses still
hold in CH (T ), so we can apply induction.
The last two results show that, with the hypotheses of Proposition 5.4, N_{x^{2^r}}(1) < N_{x^{2^r}}(z) if and only if G ≅ D_{2^{3r_1}} ∗ · · · ∗ D_{2^{3r_n}} ∗ Q_{2^{3r}}, n ≥ 0, r_1 ≤ · · · ≤ r_n ≤ r. Notice simply that the cyclic factor of H is absorbed by Q_{2^{3r}}.

Theorem 5.6. Let G be a 2-group of nilpotency class 2. Then N_{x^{2^r}} is a character of G if and only if G has no epimorphic image isomorphic to D_{2^{3r_1}} ∗ · · · ∗ D_{2^{3r_n}} ∗ Q_{2^{3r}}, n ≥ 0, r_1 ≤ · · · ≤ r_n ≤ r.
Proof. If G has an epimorphic image G1 of the indicated type, then by the last remark,
Nx2r, G1 (1) < Nx2r, G1 (z) (z is the central involution of G1 ) and, by Lemma 4.2, Nx2r is
not a character of G. Conversely, if Nx2r is not a character of G, by the same lemma,
G has an epimorphic image G1 with cyclic center such that Nx2r, G1 (1) < Nx2r, G1 (z). We
claim that the center Z of G1 has order 2r (and then, again by the last remark, G1 is the
r
desired epimorphic image). The map x 7→ x2 cannot be a group endomorphism of G1
(this would immediately imply that either Nx2r, G1 (z) = 0 or else Nx2r, G1 (1) = Nx2r, G1 (z)),
r−1
r−1
hence (G′1 )2
6= 1 and Z 2
6= 1, that is |Z| ≥ 2r (here Z is the center of G1 ). If
r
|Z| > 2r , z = z12 for some z1 ∈ Z and then Nx2r, G1 (1) = Nx2r, G1 (z) because x 7→ xz1 maps
r
r
bijectively the solutions of x2 = 1 to the solutions of x2 = z. Thus |Z| = 2r and the
proof is complete.
6
p-groups with central Frattini subgroup
In this section we consider a d-generated p-group G such that Φ(G) ≤ Z(G), that is, G
has nilpotency class 2 and elementary abelian derived subgroup. We shall show that for
the words w_k = [x_1, x_2] · · · [x_{2k−1}, x_{2k}], with k ≥ d_0 = ⌊d/2⌋,

N_{w_k}(g) ≥ |G|^{2k−1} for all g ∈ G_{w_k}.     (10)
For k = 1, (10) can be easily proved for any p-group of nilpotency class 2. Indeed, if
g = [x, y], we can multiply x by any element commuting with y and y by any central
element, thus Nw1 (g) ≥ |CG (y)||Z(G)| = |G||Z(G) : [y, G]| ≥ |G|.
If V = G/Φ(G), viewed as a vector space over F_p, there is a natural surjective linear map π from ⋀² = ⋀²V, the exterior square of V, onto G′ given by x ∧ y ↦ [x, y]. For a fixed ω ∈ ⋀² and k ≥ 0, it is then natural to consider the number N_{w_k}(ω) = N_{w_k,V}(ω) of solutions in V^{(2k)} of the equation

w_k(x_1, . . . , x_{2k}) = x_1 ∧ x_2 + · · · + x_{2k−1} ∧ x_{2k} = ω.     (11)

The set of values of w_k, that is, the set {w_k(v_1, . . . , v_{2k}) | v_i ∈ V} ⊆ ⋀², will be denoted simply ⋀²_{w_k}.
Similarly as in Section 2, if we fix a basis {e_1, . . . , e_d} of V there is a one-to-one correspondence between ⋀² and A_d, the set of d × d antisymmetric matrices over the field F_p, given by \sum_{1 ≤ i < j ≤ d} a_{ij} (e_i ∧ e_j) ↦ A, where A ∈ A_d has entries a_{ij} for 1 ≤ i < j ≤ d. Then solving (11) amounts to solving the matrix equation X^t J_k X = A, where X represents a 2k × d matrix, J_k is the 2k × 2k block diagonal matrix with repeated diagonal block \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}, and A ∈ A_d. Moreover, the number of solutions only depends on the rank of A, so there is no loss to assume that A = \begin{pmatrix} J_r & 0 \\ 0 & 0 \end{pmatrix} with r ≤ k (otherwise there are no solutions). This number, which is denoted N(K_{2k}, K_{d,2r}) in [13], was originally considered by Carlitz [2] and later on by other authors (see [13]). If ω ∈ ⋀²_{w_r} \ ⋀²_{w_{r−1}}, the corresponding antisymmetric matrix A has rank 2r, so the number N(K_{2k}, K_{d,2r}) is, in our notation, N_{w_k}(ω).
5] that, for r ≤ k ≤ d0 = ⌊d/2⌋, can be written as
N(K2k , Kd,2r ) = p
k
Y
r(2k−r)
2i
(p − 1)
i=k−r+1
k−r
X
j=0
j
d
−
2r
)
(
p2
j
p
k−r
Y
(p2i − 1),
(12)
i=k−r−j+1
n
n
n
n−j+1
j
= 1. If we
= (p − 1) . . . (p
− 1)/(p − 1) . . . (p − 1) for j > 0 and
where
0 p
j p
just consider in (12) the summand corresponding to j = k − r we get the inequality
N(K2k , Kd,2r ) ≥ p
r(2k−r)
k
Y
k−r
d
−
2r
2i
.
(p − 1)p( 2 )
k−r p
(13)
i=1
Q
Q
n
> ( ni=n−j+1 pi−1 )/( ji=1 pi ) = pnj ,
= p , and
But i=1 (p − 1) > i=1 p
j p
therefore
k−r
2
N(K2k , Kd,2r ) > pr(2k−r)+k +( 2 )+(d−2r)(k−r) .
Qk
2i
Qk
2i−1
k2
The quadratic function of r in the exponent of p in this formula is decreasing in the interval 0 ≤ r ≤ k, so we conclude that

N_{w_k}(ω) = N(K_{2k}, K_{d,2r}) > p^{2k²} for any ω ∈ ⋀²_{w_k}.     (14)
Since the rank of a d × d antisymmetric matrix is at most 2d_0 we have that ⋀²_{w_k} = ⋀² for k ≥ d_0. But for any k, π maps ⋀²_{w_k} onto G_{w_k}, thus G_{w_k} = G′ for k ≥ d_0. Now it is easy to show that if (10) holds for w = w_{d_0}, it also holds for w = w_k for any k ≥ d_0. Indeed, if k ≥ d_0, we have N_{w_k} = N_{w_{d_0}} ∗ N_{w_{k−d_0}} and, since we are assuming that N_{w_{d_0}}(x) ≥ |G|^{2d_0−1} for any x ∈ G′, if g ∈ G′,

N_{w_k}(g) = \sum_{y ∈ G_{w_{k−d_0}}} N_{w_{d_0}}(g y^{−1}) N_{w_{k−d_0}}(y) ≥ |G|^{2d_0−1} \sum_{y ∈ G_{w_{k−d_0}}} N_{w_{k−d_0}}(y) = |G|^{2d_0−1} |G|^{2(k−d_0)} = |G|^{2k−1}.
So in order to prove (10) for wk we can always assume that 1 ≤ k ≤ d0 .
It is clear that if g ∈ G′ and π(ω) = g, the solutions of w_k(x_1, . . . , x_{2k}) = ω in V can be lifted to solutions of w_k(x_1, . . . , x_{2k}) = g in G and of course, all solutions of the equation in G occur in this way, so

N_{w_k}(g) = |Φ(G)|^{2k} \sum_{ω ∈ π^{−1}(g)} N_{w_k,V}(ω)

and (10) can be written now as

|Φ(G)| \sum_{ω ∈ π^{−1}(g)} N_{w_k,V}(ω) ≥ |G : Φ(G)|^{2k−1} = p^{d(2k−1)}     (15)

for g ∈ G_{w_k}. Obviously only the ω's in π^{−1}(g) ∩ ⋀²_{w_k} contribute to this sum and for them we can use the estimation (14). Thus, since |G′| = p^{d(d−1)/2}/|ker(π)| ≤ |Φ(G)|, the inequality

p^{d(d−1)/2 + 2k² − d(2k−1)} |π^{−1}(g) ∩ ⋀²_{w_k}| ≥ |ker(π)|     (16)

implies (15). If k = d_0, |π^{−1}(g) ∩ ⋀²_{w_k}| = |π^{−1}(g)| = |ker(π)| and (16) holds because the exponent of p is positive (it is positive for any k). We conclude the following result.
Proposition 6.1. Let G be a d-generated p-group with Φ(G) ≤ Z(G). Then for any k ≥ ⌊d/2⌋ and g ∈ G′, N_{w_k}(g) ≥ |G|^{2k−1}.
Another situation in which |π^{−1}(g) ∩ ⋀²_{w_k}| = |ker(π)| is when π is an isomorphism, so we also have the following result.

Proposition 6.2. Let G be a d-generated p-group with Φ(G) ≤ Z(G) and |G′| = p^{d(d−1)/2}. Then for any k ≥ 1 and g ∈ G_{w_k}, N_{w_k}(g) ≥ |G|^{2k−1}.
Notice that the last proposition applies in particular to the free p-groups of nilpotency class 2 and exponent p and, in this case, the inequality N_w(g) ≥ |G|^{k−1}, g ∈ G_w, is in fact true for any word w ∈ F_k. This is clear if w ∈ F_k′ and otherwise we can suppose that s_1 = 0 in (4). But in this case all equations w = g have the same number of solutions, namely, |G|^{k−1}.
References
[1] A. Amit, U. Vishne, Characters and solutions to equations in finite groups, J. Algebra
Appl. 10, no. 4, (2011) 675–686.
[2] L. Carlitz, Representations by skew forms in a finite field, Arch. Math. 5 (1954)
19–31.
[3] A. K. Das, R. K. Nath, On solutions of a class of equations in a finite group, Comm.
Algebra 37, no. 11, (2009) 3904–3911.
[4] I. M. Isaacs, Character Theory of Finite Groups, Dover Publications INC., New York,
1994.
[5] E. Khukhro, p-automorphisms of finite p-groups, Cambridge University Press, Cambridge, 1998.
[6] M. Levy, On the probability of satisfying a word in nilpotent groups of class 2,
arXiv:1101.4286 (2011).
[7] A. Lubotzky, Images of word maps in finite simple groups, arXiv:1211.6575 (2012).
[8] M. Newman, Integral matrices, Academic Press, New York, 1972.
[9] N. Nikolov, D. Segal, A characterization of finite soluble groups, Bull. Lond. Math.
Soc. 39 (2007) 209–213.
[10] O. Parzanchevski, G. Schul, On the Fourier expansion of word maps, Bull. Lond.
Math. Soc. 46 (2014) 91–102.
[11] L. Solomon, The solution of equations in groups, Arch. Math. 20 (1969) 241–247
[12] S. P. Strunkov, On the theory of equations on finite groups, Izv. Math. 59, no. 6,
(1995) 1273–1282.
[13] J. Wei, Y. Zhang, The number of solutions to the alternate matrix equation over a
finite field and a q-identity, J. Statist. Plann. Inference 94, no. 2, (2001) 349–358.
| 4 |
1-String B2-VPG Representation of Planar
Graphs∗
Therese Biedl1 and Martin Derka1
1
David R. Cheriton School of Computer Science, University of Waterloo
200 University Ave W, Waterloo, ON N2L 3G1, Canada
{biedl,mderka}@uwaterloo.ca
arXiv:1411.7277v2 [cs.CG] 3 Dec 2015
Abstract
In this paper, we prove that every planar graph has a 1-string B2 -VPG representation—a string
representation using paths in a rectangular grid that contain at most two bends. Furthermore,
two paths representing vertices u, v intersect precisely once whenever there is an edge between
u and v. We also show that only a subset of the possible curve shapes is necessary to represent
4-connected planar graphs.
1998 ACM Subject Classification I.3.5 Computational Geometry and Object Modeling
Keywords and phrases Graph drawing, string graphs, VPG graphs, planar graphs
Digital Object Identifier 10.4230/LIPIcs.xxx.yyy.p
1
Preliminaries
One way of representing graphs is to assign to every vertex a curve so that two curves cross
if and only if there is an edge between the respective vertices. Here, two curves u, v cross
means that they share a point s internal to both of them and the boundary of a sufficiently
small closed disk around s is crossed by u, v, u, v (in this order). Such a representation of
graphs using crossing curves is referred to as a string representation, and graphs that can be
represented in this way are called string graphs.
In 1976, Ehrlich, Even and Tarjan showed that every planar graph has a string representation [8]. It is only natural to ask if this result holds if one is restricted to using only
some “nice” types of curves. In 1984, Scheinerman conjectured that all planar graphs can be
represented as intersection graphs of line segments [12]. This was proved first for bipartite
planar graphs [7, 10] with the strengthening that every segment is vertical or horizontal.
The result was extended to triangle-free planar graphs, which can be represented by line
segments with at most three distinct slopes [6].
Since Scheinerman’s conjecture seemed difficult to prove for all planar graphs, interest
arose in possible relaxations. Note that any two line segments can intersect at most once.
Define 1-String to be the class of graphs that are intersection graphs of curves (of arbitrary
shape) that intersect at most once. We also say that graphs in this class have a 1-string
representation. The original construction of string representations for planar graphs given
in [8] requires curves to cross multiple times. In 2007, Chalopin, Gonçalves and Ochem
showed that every planar graph is in 1-String [3, 4]. With respect to Scheinerman’s
conjecture, while the argument of [3, 4] shows that the prescribed number of intersections
can be achieved, it provides no idea on the complexity of curves that is required.
∗
Research supported by NSERC. The second author was supported by the Vanier CGS. A preliminary
version appeared at the Symposium on Computational Geometry 2015.
© Therese Biedl and Martin Derka;
licensed under Creative Commons License CC-BY
Conference title on which this volume is based on.
Editors: Billy Editor and Bill Editors; pp. 1–28
Leibniz International Proceedings in Informatics
Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany
Another way of restricting curves in string representations is to require them to be
orthogonal, i.e., to be paths in a grid. Call a graph a VPG-graph (as in “Vertex-intersection
graph of Paths in a Grid”) if it has a string representation with orthogonal curves. It is
easy to see that all planar graphs are VPG-graphs (e.g. by generalizing the construction
of Ehrlich, Even and Tarjan). For bipartite planar graphs, curves can even be required to
have no bends [7, 10]. For arbitrary planar graphs, bends are required in orthogonal curves.
Recently, Chaplick and Ueckerdt showed that two bends per curve always suffice [5]. Let
B2 -VPG be the graphs that have a string representation where curves are orthogonal and
have at most two bends; the result in [5] then states that planar graphs are in B2 -VPG.
Unfortunately, in Chaplick and Ueckerdt’s construction, curves may cross each other twice,
and so it does not prove that planar graphs are in 1-String.
The conjecture of Scheinerman remained open until 2009 when it was proved true by
Chalopin and Gonçalves [2].
Our Results: In this paper, we show that every planar graph has a string representation
that simultaneously satisfies the requirements for 1-String (any two curves cross at most
once) and the requirements for B2 -VPG (any curve is orthogonal and has at most two bends).
Our result hence re-proves, in one construction, the results by Chalopin et al. [3, 4] and the
result by Chaplick and Ueckerdt [5].
Theorem 1. Every planar graph has a 1-string B2-VPG representation.
In addition to Theorem 1, we show that for 4-connected planar graphs, only a subset of
orthogonal curves with 2 bends is needed:
Theorem 2. Every 4-connected planar graph has a 1-string B2-VPG representation where
all curves have a shape of C or Z (including their horizontal mirror images).
Our approach is inspired by the construction of 1-string representations from 2007 [3, 4].
The authors proved the result in two steps. First, they showed that triangulations without
separating triangles admit 1-string representations. By induction on the number of separating
triangles, they then showed that a 1-string representation exists for any planar triangulation,
and consequently for any planar graph.
In order to show that triangulations without separating triangles have 1-string representations, Chalopin et al. [4] used a method inspired by Whitney’s proof that 4-connected
planar graphs are Hamiltonian [13]. Asano, Saito and Kikuchi later improved Whitney’s
technique and simplified his proof [1]. Our paper uses the same approach as [4], but borrows
ideas from [1] and develops them further to reduce the number of cases.
2
Definitions and Basic Results
Let us begin with a formal definition of a 1-string B2 -VPG representation.
Definition 3 (1-string B2-VPG representation). A graph G has a 1-string B2-VPG representation if every vertex v of G can be represented by a curve v such that:
1. Curve v is orthogonal, i.e., it consists of horizontal and vertical segments.
2. Curve v has at most two bends.
3. Curves u and v intersect at most once, and u intersects v if and only if (u, v) is an edge
of G.
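For a concrete illustration (not taken from the paper), consider the triangle on vertices u, v, w. Taking u to be the horizontal segment from (0, 2) to (3, 2), v the vertical segment from (1, 0) to (1, 3), and w the path that runs from (2, 3) down to (2, 1) and then left to (0, 1), the pairs u, v; u, w; and v, w cross exactly once each (at (1, 2), (2, 2) and (1, 1), respectively), u and v have no bends, and w has a single bend, so this is a 1-string B2-VPG (indeed B1-VPG) representation of the triangle.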
We always use v to denote the curve of vertex v, and write vR if the representation R is
not clear from the context. We also often omit “1-string B2 -VPG” since we do not consider
any other representations.
Our technique for constructing representations of a graph uses an intermediate step
referred to as a “partial 1-string B2 -VPG representation of a W-triangulation that satisfies
the chord condition with respect to three chosen corners.” We define these terms, and related
graph terms, first.
A planar graph is a graph that can be embedded in the plane, i.e., it can be drawn so
that no edges intersect except at common endpoints. All graphs in this paper are planar.
We assume throughout the paper that one combinatorial embedding of the graph has been
fixed by specifying the clockwise (CW) cyclic order of incident edges around each vertex.
Subgraphs inherit this embedding, i.e., they use the induced clockwise orders. A facial
region is a connected region of R2 − Γ where Γ is a planar drawing of G that conforms
with the combinatorial embedding. The circuit bounding this region can be read from the
combinatorial embedding of G and is referred to as a facial circuit. We sometimes refer to
both facial circuit and facial region as a face when the precise meaning is clear from the
context. The outer-face is the one that corresponds to the unbounded region; all others
are called interior faces. The outer-face cannot be read from the embedding; we assume
throughout this paper that the outer-face of G has been specified. Subgraphs inherit the
outer-face by using as outer-face the one whose facial region contains the facial region of the
outer-face of G. An edge of G is called interior if it does not belong to the outer-face.
A triangulated disk is a planar graph G for which the outer-face is a simple cycle and
every interior face is a triangle. A separating triangle is a cycle C of length 3 such that G has
vertices both inside and outside the region bounded by C (with respect to the fixed embedding
and outer-face of G). Following the notation of [4], a W-triangulation is a triangulated disk
that does not contain a separating triangle. A chord of a triangulated disk is an interior edge
for which both endpoints are on the outer-face.
Let X, Y be two vertices on the outer-face of a connected planar graph so that neither of
them is a cut vertex. Define PXY to be the counter-clockwise (CCW) path on the outer-face
from X to Y (X and Y inclusive). We often study triangulated disks with three specified
distinct vertices A, B, C called the corners. A, B, C must appear on the outer-face in CCW
order. We denote PAB = (a1 , a2 , . . . , ar ), PBC = (b1 , b2 , . . . , bs ) and PCA = (c1 , c2 , . . . , ct ),
where ct = a1 = A, ar = b1 = B and bs = c1 = C.
Definition 4 (Chord condition). A W-triangulation G satisfies the chord condition with
respect to the corners A, B, C if G has no chord within PAB , PBC or PCA , i.e., no interior
edge of G has both ends on PAB , or both ends on PBC , or both ends on PCA .1
Definition 5 (Partial 1-string B2-VPG representation). Let G be a connected planar graph
and E 0 ⊆ E(G) be a set of edges. An (E 0 )-1-string B2 -VPG representation of G is a 1-string
B2 -VPG representation of the subgraph (V (G), E 0 ), i.e., curves u, v cross if and only if (u, v)
is an edge in E 0 . If E 0 consists of all interior edges of G as well as some set of edges F on
the outer-face, then we write (int ∪ F ) representation instead.
In our constructions, we use (int ∪ F ) representations with F = ∅ or F = {e}, where e is
an outer-face edge incident to corner C of a W-triangulation. Edge e is called the special
1
For readers familiar with [4] or [1]: A W-triangulation that satisfies the chord condition with respect
to corners A, B, C is called a W-triangulation with 3-boundary PAB , PBC , PCA in [4], and the chord
condition is the same as Condition (W2b) in [1].
edge, and we sometimes write (int ∪ e) representation, rather than (int ∪ {e}) representation.
2.1
2-Sided, 3-Sided and Reverse 3-Sided Layouts
To create representations where vertex-curves have few bends, we need to impose geometric
restrictions on representations of subgraphs. Unfortunately, no one type of layout seems
sufficient for all cases, and we will hence have three different layout types illustrated in
Figure 1.
Definition 6 (2-sided layout). Let G be a connected planar graph and A, B be two distinct
outer-face vertices neither of which is a cut vertex in G. Furthermore, let G be such that
all cut vertices separate it into at most two connected components. An (int ∪ F ) B2 -VPG
representation of G (for some set F ) has a 2-sided layout (with respect to corners A, B) if:
1. There exists a rectangle Θ that contains all intersections of curves and such that
(i) the top of Θ is intersected, from right to left in order, by the curves of the vertices
of PAB ,
(ii) the bottom of Θ is intersected, from left to right in order, by the curves of the
vertices of PBA .
2. Any curve v of an outer-face vertex v has at most one bend. (By 1., this implies that A
and B have no bends.)
Definition 7 (3-sided layout). Let G be a W-triangulation and A, B, C be three distinct
vertices in CCW order on the outer-face of G. Let F be a set of exactly one outer-face edge
incident to C. An (int ∪ F ) B2 -VPG representation of G has a 3-sided layout (with respect
to corners A, B, C) if:
1. There exists a rectangle Θ containing all intersections of curves so that
(i) the top of Θ is intersected, from right to left in order, by the curves of the vertices
on PAB ;
(ii) the left side of Θ is intersected, from top to bottom in order, by the curves of the
vertices on PBbs−1 , possibly followed by C; 2
(iii) the bottom of Θ is intersected, from right to left in order, by the curves of vertices
on Pc2 A in reversed order, possibly followed by C;2
(iv) curve bs = C = c1 intersects the boundary of Θ exactly once; it is the bottommost
curve to intersect the left side of Θ if the special edge in F is (C, c2 ), and C is the
leftmost curve to intersect the bottom of Θ if the special edge in F is (C, bs−1 ).
2. Any curve v of an outer-face vertex v has at most one bend. (By 1., this implies that B
has precisely one bend.)
3. A and C have no bends.
We also need the concept of a reverse 3-sided layout, which is similar to the 3-sided layout
except that B is straight and A has a bend. Formally:
I Definition 8 (Reverse 3-sided layout). Let G be a connected planar graph and A, B, C
be three distinct vertices in CCW order on the outer-face of G. Let F be a set of exactly
one outer-face edge incident to C. An (int ∪ F ) B2 -VPG representation of G has a reverse
3-sided layout (with respect to corners A, B, C) if:
2 Recall that (bs−1 , C) and (C, c2 ) are the two incident edges of C on the outer-face.
1. There exists a rectangle Θ containing all intersections of curves so that
(i) the right side of Θ is intersected, from bottom to top in order, by the curves of the
vertices on PAB ;
(ii) the left side of Θ is intersected, from top to bottom in order, by the curves of the
vertices on PBbs−1 , possibly followed by C;
(iii) the bottom of Θ is intersected, from right to left in order, by the curves of vertices
on Pc2 A in reversed order, possibly followed by C;
(iv) curve bs = C = c1 intersects the boundary of Θ exactly once; it is the bottommost
curve to intersect the left side of Θ if the special edge in F is (C, c2 ), and C is the
leftmost curve to intersect the bottom of Θ if the special edge in F is (C, bs−1 ).
2. Any curve v of an outer-face vertex v has at most one bend. (By 1., this implies that A
has precisely one bend.)
3. B and C have no bends.
Figure 1 Illustration of a 2-sided layout, 3-sided layout, and reverse 3-sided layout.
We sometimes refer to the rectangle Θ for these representations as a bounding box. Figure 2
(which will serve as base case later) shows such layouts for a triangle and varying choices of
F.
2.2 Private Regions
Our proof starts by constructing a representation for triangulations without separating
triangles. The construction is then extended to all triangulations by merging representations
of subgraphs obtained by splitting at separating triangles. To permit the merge, we apply
the technique used in [4] (and also used independently in [9]): With every triangular face,
create a region that intersects the curves of vertices of the face in a predefined way and does
not intersect anything else, specifically not any other such region. Following the notation
of [9], we call this a private region (but we use a different shape).
I Definition 9 (Chair-shape). A chair-shaped area is a region bounded by a 10-sided orthogonal polygon with CW (clockwise) or CCW (counter-clockwise) sequence of interior angles
90°, 90°, 270°, 270°, 90°, 90°, 90°, 90°, 270°, 90°. See also Figure 3.
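As a small sanity check of this definition (our own illustration; the coordinates below are one possible chair and are not taken from Figure 3), one can build an orthogonal polygon and recompute its interior angles:

```python
# Sketch (ours): recompute the 90/270 interior-angle sequence of an axis-aligned
# polygon given in counter-clockwise order, and test it on one possible chair shape.

def interior_angles(poly):
    n = len(poly)
    angles = []
    for i in range(n):
        ax, ay = poly[i - 1]
        bx, by = poly[i]
        cx, cy = poly[(i + 1) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        angles.append(90 if cross > 0 else 270)  # left turn is convex, right turn is reflex
    return angles

# One possible chair: a tall "back" on the left, a "seat" to the right, one front "leg".
chair = [(0, 0), (1, 0), (1, 2), (4, 2), (4, 0),
         (5, 0), (5, 3), (1, 3), (1, 5), (0, 5)]
print(interior_angles(chair))
# [90, 90, 270, 270, 90, 90, 90, 270, 90, 90]; read clockwise from a suitable start
# vertex this is exactly the sequence 90,90,270,270,90,90,90,90,270,90 of Definition 9.
```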
I Definition 10 (Private region). Let G be a planar graph with a partial 1-string B2 -VPG
representation R and let f be a facial triangle in G. A private region of f is a chair-shaped
area Φ inside R such that:
1. Φ is intersected by no curves except for the ones representing vertices on f .
2. All the intersections of R are located outside of Φ.
3. For a suitable labeling of the vertices of f as {a, b, c}, Φ is intersected by two segments of
a and one segment of b and c. The intersections between these segments and Φ occur at
the edges of Φ as depicted in Figure 3.
Figure 2 (int ∪ F ) representations of a triangle: (Top) 2-sided representations for F ∈ {{(A, C)}, {(B, C)}, ∅}. (Bottom) 3-sided and reverse 3-sided representations for F ∈ {{(A, C)}, {(B, C)}}. Private regions are shaded in grey.
2.3 The Tangling Technique
Our constructions will frequently use the following “tangling technique”. Consider a set of
k vertical downward rays s1 , s2 , s3 , . . . , sk placed beside each other in left to right order.
The operation of bottom-tangling from s1 to sk rightwards stands for the following (see also
Figure 4):
1. For 1 < i ≤ k, stretch si downwards so that it ends below si−1 .
2. For 1 ≤ i < k, bend si rightwards and stretch it so that it crosses si+1 , but so that it
does not cross si+2 .
We similarly define right-tangling upwards, top-tangling leftwards and left-tangling
downwards as rotation of bottom-tangling rightwards by 90°, 180° and 270° CCW. We
define bottom-tangling leftwards as a horizontal flip of bottom-tangling rightwards, and
right-tangling downwards, top-tangling rightwards and left-tangling upwards as 90°, 180°
and 270° CCW rotations of bottom-tangling leftwards.
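The following Python sketch (ours, not from the paper) makes the two steps of bottom-tangling rightwards concrete on rays stored as polylines; the coordinates and the rule of stopping halfway to the next ray are illustrative assumptions.

```python
# Sketch (ours): bottom-tangling from s1 to sk rightwards.
# Ray i starts as a vertical segment from (x[i], top) down to (x[i], 0).

def bottom_tangle_rightwards(xs, top=10):
    k = len(xs)
    curves = [[(x, top), (x, 0)] for x in xs]
    # Step 1: stretch s_i downwards so that it ends below s_{i-1}.
    for i in range(k):
        curves[i][-1] = (xs[i], -(i + 1))
    # Step 2: bend s_i rightwards so that it crosses s_{i+1} but not s_{i+2}.
    for i in range(k - 1):
        x_stop = (xs[i + 1] + xs[i + 2]) / 2 if i + 2 < k else xs[i + 1] + 1
        curves[i].append((x_stop, -(i + 1)))
    return curves

for c in bottom_tangle_rightwards([0, 2, 4, 6]):
    print(c)  # each consecutive pair of rays now crosses exactly once
```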
3 2-Sided Constructions for W-Triangulations
We first show the following lemma, which is the key result for Theorem 2, and will also be
used as an ingredient for the proof of Theorem 1. The reader is also referred to the appendix,
where we give an example of a (3-sided) construction for a graph, which in the recursive
cases uses some of the cases of the proof of Lemma 11.
I Lemma 11. Let G be a W-triangulation. Let A, B, C be any three corners with respect
to which G satisfies the chord condition, and let F be a set of at most one outer-face edge
incident to C. Then G has an (int ∪ F ) 1-string B2 -VPG representation with 2-sided layout
with respect to A, B. Furthermore, this representation has a chair-shaped private region for
every interior face of G.
Figure 3 The chair-shaped private region of a triangle a, b, c with possible rotations and flips. Note that labels of a, b, c can be arbitrarily permuted—the curve intersecting the “base” of the chair does not need to be named c.
Figure 4 Bottom-tangling from s1 to sk rightwards.
We prove Lemma 11 by induction on the number of vertices.
First, let us make an observation that will greatly help to reduce the number of cases of
the induction step. Define G rev to be the graph obtained from graph G by reversing the
combinatorial embedding, but keeping the same outer-face. This effectively switches corners
A and B, and replaces special edge (C, c2 ) by (C, bs−1 ) and vice versa. If G satisfies the chord
condition with respect to corners (A, B, C), then G rev satisfies the chord condition with
respect to corners (B, A, C). (With this new order, the corners are CCW on the outer-face
of G rev , as required.)
Presume we have a 2-sided representation of G rev . Then we can obtain a 2-sided
representation of G by flipping the 2-sided one of G rev horizontally, i.e., along the y-axis.
Hence for all the following cases, we may (after possibly applying the above flipping operation)
make a restriction on which edge the special edge is.
Now we begin the induction. In the base case, n = 3, so G is a triangle, and the
three corners A, B, C must be the three vertices of this triangle. The desired (int ∪ F )
representations for all possible choices of F are depicted in Figure 2.
The induction step for n ≥ 4 is divided into three cases which we describe in separate
subsections.
3.1 C has degree 2
Since G is a triangulated disk with n ≥ 4, (bs−1 , c2 ) is an edge. Define G′ := G − {C} and
F ′ := {(bs−1 , c2 )}. We claim that G′ satisfies the chord condition for corners A′ := A, B ′ := B
and a suitable choice of C ′ ∈ {bs−1 , c2 }, and argue this as follows. If c2 = A or c2 is incident
to a chord that ends on PBC other than (bs−1 , c2 ) (thus bs−1 ≠ B), then set C ′ := bs−1 .
The chord condition holds for G′ as bs−1 cannot be incident to a chord by planarity and the
chord condition for G. Otherwise, c2 is not incident to a chord that ends in an interior vertex
of PBC other than bs−1 , so set C ′ := c2 ; clearly the chord condition holds for G′ . Thus in
either case, we can apply induction to G′ .
To create a 2-sided representation of G, we use a 2-sided (int ∪ F ′ ) representation R′
of G′ constructed with respect to the aforementioned corners. We introduce a new vertical
curve C placed between bs−1 and c2 below R′ . Add a bend at the upper end of C and
extend it leftwards or rightwards. If the special edge e exists, then extend C until it hits the
curve of the other endpoint of e; else extend it only far enough to allow for the creation of
the private region. See also Figure 5.
Figure 5 Case 1: 2-sided construction if C has degree 2 and (left) F = {}, (middle) F = {(C, c2 )} and (right) F = {(bs−1 , C)}.
3.2 G has a chord incident to C
We may (after applying the reversal trick) assume that the special edge, if it exists, is
(C, bs−1 ).
By the chord condition, the chord incident to C has the form (C, ai ) for some 1 < i < r.
The graph G can be split along the chord (C, ai ) into two graphs G1 and G2 . Both G1 and
G2 are bounded by simple cycles, hence they are triangulated disks. No edges were added,
so neither G1 nor G2 contains a separating triangle. So both of them are W-triangulations.
Figure 6 Case 2(a): Constructing an (int ∪ (C, bs−1 )) representation when C is incident to a chord, in 2-sided (middle) and 3-sided (right) layout.
We select (C, A, ai ) as corners for G1 and (ai , B, C) as corners for G2 and can easily
verify that G1 and G2 satisfy the chord condition with respect to those corners:
G1 has no chords on PAai or PCA as they would violate the chord condition in G. There
is no chord on Pai C as it is a single edge.
G2 has no chords on Pai B or PBC as they would violate the chord condition in G. There
is no chord on Pai C as it is a single edge.
Inductively construct a 2-sided (int ∪ (C, ai )) representation R1 of G1 and a 2-sided
(int ∪ F ) representation R2 of G2 , both with the aforementioned corners. Note that CR2
and aiR2 are on the bottom side of R2 with CR2 to the left of aiR2 .
Rotate R1 by 180°, and translate it so that it is below R2 with aiR1 in the same column
as aiR2 . Stretch R1 and R2 horizontally as needed until CR1 is in the same column as
CR2 . Then aiR and CR for R ∈ {R1 , R2 } can each be unified without adding bends by
adding vertical segments. The curves of outer-face vertices of G then cross (after suitable
lengthening) the bounding box in the required order. See also Figure 6.
Every interior face f of G is contained in G1 or G2 and hence has a private region in R1
or R2 . As our construction does not make any changes inside the bounding boxes of R1 and
R2 , the private region of f is contained in R as well.
3.3 G has no chords incident to C and deg(C) ≥ 3
We may (after applying the reversal trick) assume that the special edge, if it exists, is (C, c2 ).
In this case we split G in a more complicated fashion illustrated in Figure 7. Let u1 , . . . , uq
be the neighbours of vertex C in clockwise order, starting with bs−1 = u1 and ending with
c2 = uq . We know that q = deg(C) ≥ 3 and that u2 , . . . , uq−1 are not on the outer-face, since
C is not incident to a chord. Let uj be a neighbour of C that has at least one neighbour
other than C on PCA , and among all those, choose j to be minimal. Such a j exists because
G is a triangulated disk and therefore uq−1 is adjacent to both C and uq .
We distinguish two sub-cases.
Case 3(a): j ≠ 1. Denote the neighbours of uj on Pc2 A by t1 , . . . , tx in the order in which
they appear on Pc2 A . Separate G into subgraphs as follows (see also Figure 7):
The right graph GR is bounded by (A, PAB , B, PBu1 , u1 , u2 , . . . , uj , tx , Ptx A , A).
Let GB be the graph bounded by (uj , t1 , Pt1 tx , tx , uj ). We are chiefly interested in its
subgraph GQ := GB − uj .
Let GL be the graph bounded by (C, PCt1 , t1 , uj , C). We are chiefly interested in its
subgraph G0 := GL − {uj , C}.
Figure 7 Case 3(a): Splitting the graph when deg(C) ≥ 3, no chord is incident to C, and j > 1. (Left) j < q − 1; G0 is non-trivial. (Right) j = q − 1; G0 = {c2 }.
The idea is to obtain representations of these subgraphs and then to combine them
suitably. We first explain how to obtain the representation RR used for GR . Clearly GR is a
W-triangulation, since u2 , . . . , uj are interior vertices of G, and hence the outer-face of GR is
a simple cycle. Set AR := A and BR := B. If B ≠ u1 then set CR := u1 and observe that
GR satisfies the chord condition with respect to these corners:
GR does not have any chords with both ends on PAR BR = PAB , PBR u1 ⊆ PBC , or
Ptx AR ⊆ PCA since G satisfies the chord condition.
If there were any chords between a vertex in u1 , . . . , uj and a vertex on PCR AR , then by
CR = u1 the chord would either connect two neighbours of C (hence giving a separating
triangle of G), or connect some ui for i < j to PCA (contradicting the minimality of j),
or connect uj to some other vertex on Ptx A (contradicting that tx is the last neighbour
of uj on PCA ). Hence no such chord can exist either.
If B = u1 , then set CR := u2 (which exists by q ≥ 3) and similarly verify that it satisfies
the chord condition as PBR CR is the edge (B, u2 ). Since CR ∈ {u1 , u2 } in both cases, we can
apply induction on GR and obtain a 2-sided (int ∪ (u1 , u2 )) representation RR with respect
to the aforementioned corners.
Next we obtain a representation for the graph G0 , which is bounded by uj+1 , . . . , uq , Pc2 t1
and the neighbours of uj in CCW order between t1 and uj+1 . We distinguish two cases:
(1) j = q − 1, and hence t1 = uq = c2 and G0 consists of only c2 . In this case, the
representation of R0 consists of a single vertical line segment c2 .
(2) j < q − 1, so G0 contains at least three vertices uq−1 , uq and t1 . Then G0 is a W-triangulation since C is not incident to a chord and by the choice of t1 . Also, it satisfies
the chord condition with respect to corners A0 := c2 , B0 := t1 and C0 := uj+1 since the
three paths on its outer-face are sub-paths of PCA or contained in the neighbourhood of
C or uj . In this case, construct a 2-sided (int ∪ (uj+1 , uj+2 )) representation R0 of G0
with respect to these corners inductively.
Finally, we create a representation RQ of GQ . If GQ is a single vertex or a single edge,
then simply use vertical segments for the curves of its vertices (recall that there is no special
edge here). Otherwise, we can show:
I Claim 12. GQ has a 2-sided (int ∪ ∅) 1-string B2 -VPG representation with respect to
corners t1 and tx .
Proof. GQ is not necessarily 2-connected, so we cannot apply induction directly. Instead
we break it into x − 1 graphs G1 , . . . , Gx−1 , where for i = 1, . . . , x − 1 graph Gi is bounded
by Pti ti+1 as well as the neighbours of uj between ti and ti+1 in CCW order. Note that Gi
is either a single edge, or it is bounded by a simple cycle since uj has no neighbours on
PCA between ti and ti+1 . In the latter case, use Bi := ti , Ai := ti+1 , and Ci an arbitrary
third vertex on Pti ti+1 ⊆ PCA , which exists since the outer-face of Gi is a simple cycle and
(ti , ti+1 , uj ) is not a separating triangle. Observe that Gi satisfies the chord condition since
all paths on the outer-face of Gi are either part of PCA or in the neighbourhood of uj . Hence
by induction there exists a 2-sided (int ∪ ∅) representation Ri of Gi with respect to the
corners of Gi . If Gi is a single edge (ti , ti+1 ), then let Ri consist of two vertical segments ti
and ti+1 .
Since each representation Ri has at its leftmost end a vertical segment ti and at its
rightmost end a vertical segment ti+1 , we can combine all these representations by aligning
ti+1Ri and ti+1Ri+1 horizontally and filling in the missing segment. See also Figure 8. One easily
verifies that the result is a 2-sided (int ∪ ∅) representation of GQ .
J
Figure 8 Left: Graph GB . The boundary of GQ is shown bold. Right: Merging 2-sided (int ∪ ∅) representations of Gi , 1 ≤ i ≤ 3, into a 2-sided (int ∪ ∅) representation of GQ .
We now explain how to combine these three representations RR , RQ and R0 ; see also
Figure 9. Translate RQ so that it is below RR with txRR and txRQ in the same column; then
connect these two curves with a vertical segment. Rotate R0 by 180° and translate it so
that it is below RR and to the left and above RQ , and t1R0 and t1RQ are in the same column;
then connect these two curves with a vertical segment. Notice that the vertical segments
of u2RR , . . . , ujRR are at the bottom left of RR . Horizontally stretch R0 and/or RR so that
u2RR , . . . , ujRR are to the left of the vertical segment of uj+1R0 , but to the right (if j < q − 1)
of the vertical segment of uj+2R0 . There are such segments by j > 1.
Introduce a new horizontal segment C and place it so that it intersects curves uq , . . . , uj+2 ,
u2 , . . . , uj , uj+1 (after lengthening them, if needed). Attach a vertical segment to C. If
j < q − 1, then top-tangle uq , . . . , uj+2 rightwards. (Recall from Section 2.3 that this creates
intersections among all these curves.) Bottom-tangle u2 , . . . , uj rightwards. The construction
hence creates intersections for all edges in the path u1 , . . . , uq , except for (uj+2 , uj+1 ) (which
was represented in R0 ) and (u2 , u1 ) (which was represented in RR ).
Bend and stretch ujRR rightwards so that it crosses the curves of all its neighbours in
G0 ∪ GQ . Finally, consider the path between the neighbours of uj CCW from uj+1 to tx .
Top-tangle curves of these vertices rightwards, but omit the intersection if the edge is on the
outer-face (see e.g. (t2 , t3 ) in Figure 9).
One verifies that the curves intersect the bounding boxes as desired. The constructed
representations contain private regions for all interior faces of GR , GQ and G0 by induction.
The remaining faces are of the form (C, ui , ui+1 ), 1 ≤ i < q, and (uj , wk , wk+1 ) where wk and
wk+1 are two consecutive neighbours of uj on the outer-face of G0 or GQ . Private regions
for those faces are shown in Figure 9.
Figure 9 Combining subgraphs in Case 3(a). 2-sided construction, for F = {(C, c2 )} and F = ∅. The construction matches the graph depicted in Figure 7 left.
Case 3(b): j = 1, i.e., there exists a chord (bs−1 , ci ). In this case we cannot use the
above construction directly since we need to bend uj = u1 = bs−1 horizontally rightwards
to create intersections, but then it no longer extends vertically downwards as required for
bs−1 . Instead we use a different construction.
Edge (bs−1 , ci ) is a chord from PBC to PCA . Let (bk , c` ) be a chord from PBC to PCA
that maximizes k − `, i.e., is furthest from C (our construction in this case actually works for
any chord from PBC to PCA —it is not necessary that k = s − 1). Note that possibly ` = t
(i.e., the chord is incident to A) or k = 1 (i.e., the chord is incident to B), but not both by
the chord condition. We assume here that ` < t, the other case is symmetric.
Figure 10 Case 3(b): Construction of a 2-sided (int ∪ (C, c2 )) representation of G with a chord (bk , c` ).
In order to construct a 2-sided (int ∪ F ) representation of G, split the graph along (bk , c` )
into two W-triangulations G1 (which includes C and the special edge, if any) and G2 (which
includes A). Set (A, B, c` ) as corners for G1 (these are three distinct vertices by c` 6= A) and
set (c` , bk , C) as corners for G2 and verify the chord condition:
G1 has no chords on either PCc` ⊆ PCA or Pbk C ⊆ PBC as they would contradict the
chord condition in G. The third side is a single edge (bk , c` ) and so it does not have any
chords either.
G2 has no chords on either Pc` A ⊆ PCA or PAB as they would violate the chord condition
in G. It does not have any chords on the path PBc` due to the selection of the chord
(bk , c` ) and by the chord condition in G.
Thus, by induction, G1 has a 2-sided (int ∪ F ) representation R1 and G2 has a 2-sided
(int ∪ (bk , c` )) representation R2 with respect to the aforementioned corners. Translate and
horizontally stretch R1 and/or R2 so that bkR1 and c`R1 are aligned with bkR2 and c`R2 ,
respectively, and connect each pair of curves with a vertical segment. Since bkR1 and c`R1
have no bends, this does not increase the number of bends on any curve and produces a
2-sided (int ∪ F ) representation of G. All the faces in G have a private region inside one of
the representations of G1 or G2 .
This ends the description of the construction in all cases, and hence proves Lemma 11.
We now show how Lemma 11 implies Theorem 2:
Proof of Theorem 2. Let G be a 4-connected planar graph. Assume first that G is triangulated, which means that it is a W-triangulation. Let (A, B, C) be the outer-face vertices
and start with an (int ∪ (B, C))-representation of G (with respect to corners (A, B, C)) that
exists by Lemma 11. The intersections of the other two outer-face edges (A, C) and (A, B)
can be created by tangling B, A and C, A suitably (see Figure 11).
Theorem 2 also stipulates that every curve used in a representation has at most one
vertical segment. This is true for all curves added during the construction. Furthermore, we
join two copies of a curve only by aligning and connecting their vertical ends, so all curves
have at most one vertical segment.
This proves Theorem 2 for 4-connected triangulations. To handle an arbitrary 4-connected
planar graph, stellate the graph, i.e., insert into each non-triangular face f a new vertex v
and connect it to all vertices on f . By 4-connectivity this creates no separating triangle and
the graph is triangulated afterwards. Finding a representation of the resulting graph and
deleting the curves of all added vertices yields the result.
J
Figure 11 Completing a 2-sided (int ∪ (B, C)) representation by adding intersections for (A, B) and (A, C).
4 3-Sided Constructions for W-Triangulations
Our key tool for proving Theorem 1 is the following lemma:
I Lemma 13. Let G be a W-triangulation and let A, B, C be any three corners with respect
to which G satisfies the chord condition. For any e ∈ {(C, bs−1 ), (C, c2 )}, G has an (int ∪ e)
1-string B2 -VPG representation with 3-sided layout and an (int ∪ e) 1-string B2 -VPG
representation with reverse 3-sided layout. Both representations have a chair-shaped private
region for every interior face.
The proof of Lemma 13 will use induction on the number of vertices. To combine the
representations of subgraphs, we sometimes need them to have a 2-sided layout, and hence
we frequently use Lemma 11 proved in Section 3. Also, notice that for Lemma 13 the special
edge must exist (this is needed in Case 1 to find private regions), while for Lemma 11, F is
allowed to be empty.
We again reduce the number of cases in the proof of Lemma 13 by using the reversal
trick. Define G rev as in Section 3. Presume we have a 3-sided/reverse 3-sided representation
of G rev . We can obtain a 3-sided/reverse 3-sided representation of G by flipping the reverse
3-sided/3-sided representation of G rev diagonally (i.e., along the line defined by (x = y)).
Again, this effectively switches corners A and B (corner C remains the same), and replaces
special edge (C, c2 ) by (C, bs−1 ) and vice versa. If G satisfies the chord condition with respect
to corners (A, B, C), then G rev satisfies the chord condition with respect to corners (B, A, C).
Hence for all the following cases, we may again (after possibly applying the above flipping
operation) make a restriction on which edge the special edge is. Alternatively, we only need
to give the construction for the 3-sided, but not for the reverse 3-sided layout.
So let G and a special edge e be given, and set F = {e}. In the base case, n = 3, so
G is a triangle, and the three corners A, B, C must be the three vertices of this triangle.
The desired (int ∪ F ) representations for all possible choices of F are depicted in Figure 2.
The induction step for n ≥ 4 uses the same case distinctions as the proof of Lemma 11; we
describe these cases in separate subsections.
4.1 C has degree 2
Since G is a triangulated disk with n ≥ 4, (bs−1 , c2 ) is an edge. Define G′ as in Section 3.1
to be G − {C} and recall that G′ satisfies the chord condition for corners A′ := A, B ′ := B
and a suitable choice of C ′ ∈ {bs−1 , c2 }. Thus, we can apply induction to G′ .
To create a 3-sided representation of G, we use a 3-sided (int ∪ F ′ ) representation R′
of G′ , where F ′ = {(bs−1 , c2 )}. Note that regardless of which vertex is C ′ , we have bs−1
as bottommost curve on the left and c2 as leftmost curve on the bottom. Introduce a new
horizontal segment representing C which intersects c2 if F = {(C, c2 )}, or a vertical segment
which intersects bs−1 if F = {(C, bs−1 )}.
After suitable lengthening, the curves intersect the bounding box in the required order.
One can find the chair-shaped private region for the only new face {C, c2 , bs−1 } as shown
in Figure 12. Observe that no bends were added to the curves of R′ and that C has the
required number of bends.
Since we have given the constructions for both possible special edges, we can obtain the
reverse 3-sided representation by diagonally flipping a 3-sided representation of G rev .
4.2 G has a chord incident to C
Let (C, ai ) be a chord that minimizes i (i.e., is closest to A). Define W-triangulations G1
and G2 with corners (C, A, ai ) for G1 and (ai , B, C) for G2 as in Section 3.2, and recall
that they satisfy the chord condition. So, we can apply induction to both G1 and G2 ,
obtain representations R1 and R2 (with respect to the aforementioned corners) for them, and
combine them suitably. We will do so for both possible choices of special edge, and hence
need not give the constructions for reverse 3-sided layout due to the reversal trick.
Figure 12 Case 1: 3-sided representation if C has degree 2.
Case 2(a): F = {(C, bs−1 )}. Using Lemma 11, construct a 2-sided (int ∪ (C, ai ))
representation R1 of G1 with respect to the aforementioned corners of G1 . Inductively,
construct a 3-sided (int ∪ F ) representation R2 of G2 with respect to the corners of G2 . Note
that CR2 and aiR2 are on the bottom side of R2 with CR2 to the left of aiR2 .
First, rotate R1 by 180°. We can now merge R1 and R2 as described in Section 3.1 since
all relevant curves end vertically in R1 and R2 . The curves of outer-face vertices of G then
cross (after suitable lengthening) the bounding box in the required order. See also Figure 13.
Figure 13 Case 2(a): Constructing a 3-sided (int ∪ (C, bs−1 )) representation when C is incident to a chord.
Case 2(b): F = {(C, c2 )}. For the 3-sided construction, it does not seem possible to
merge suitable representations of G1 and G2 directly, since the geometric restrictions imposed
onto curves A, B, C, c2 and ai by the 3-sided layout cannot be satisfied using 3-sided and
2-sided representations of G1 and G2 . We hence use an entirely different approach that
splits the graph further; it resembles Case 1 in [1, Proof of Lemma 2]. Let GQ = G1 − C,
and observe that it is bounded by Pc2 A , PA,ai , and the path formed by the neighbours
c2 = u1 , u2 , . . . , uq = ai of C in G1 in CCW order. We must have q ≥ 2, but possibly G1 is
a triangle {C, A, ai } and GQ then degenerates into an edge. If GQ contains at least three
vertices, then u2 , . . . , uq−1 are interior since chord (C, ai ) was chosen closest to A, and so
GQ is a W-triangulation.
We divide the proof into two subcases, depending on whether A ≠ c2 or A = c2 . See also
Figures 14 and 15.
Case 2(b)1: A ≠ c2 . Select the corners of GQ as (AQ := c2 , BQ := A, CQ := ai = uq ), and
observe that it satisfies the chord condition since the three corners are distinct and the three
outer-face paths are sub-paths of PCA and PAB or in the neighbourhood of C, respectively.
Apply Lemma 11 to construct a 2-sided (int ∪ (uq , uq−1 )) representation RQ of GQ with
respect to the corners of GQ . Inductively, construct a 3-sided (int ∪ (C, ai )) representation
R2 of G2 with respect to the corners of G2 .
To combine RQ with R2 , rotate RQ by 180°. Appropriately stretch RQ and translate it so
that it is below R2 with aiRQ and aiR2 in the same column, and so that the vertical segment
of each of the curves uq−1 , . . . , u1 = c2 is to the left of the bounding box of R2 . Then
aiRQ and aiR2 can be unified without adding bends by adding a vertical segment. Curves
uq−1 , . . . , u1 = c2 in the rotated RQ can be appropriately stretched upwards, intersected by
CR2 after stretching it leftwards, and then top-tangled leftwards. All the curves of outer-face
vertices of G then cross (after suitable lengthening) a bounding box in the required order.
All faces in G that are not interior to GQ or G2 are bounded by (C, uk , uk+1 ), 1 ≤ k < q.
The chair-shaped private regions for such faces can be found as shown in Figure 14.
Case 2(b)2: A = c2 . In this case the previous construction cannot be applied since the
corners for GQ would not be distinct. We give an entirely different construction.
If GQ has at least 3 vertices, then q ≥ 3 since otherwise by A = c2 = u1 edge (A, uq )
would be a chord on PAB . Choose as corners for GQ the vertices AQ := A, BQ := ai = uq
and CQ := uq−1 and observe that the chord condition holds since all three paths on the
outer-face belong to PAB or are in the neighbourhood of C. By Lemma 11, GQ has a 2-sided
(int ∪ (uq , uq−1 )) representation RQ with the respective corners and private region for every
interior face of GQ . If GQ has at most 2 vertices, then GQ consists of edge (A, a2 ) only, and
we use as representation RQ two parallel vertical segments a2 and A.
We combine RQ with a representation R2 of G2 that is different from the one used in the
previous cases; in particular we rotate corners. Construct a reverse 3-sided layout R2 of G2
with respect to corners C2 := ai , A2 := B and B2 := C. Rotate R2 by 180°, and translate
it so that it is situated below RQ with aiRQ and aiR2 in the same column. Then, extend
CR2 until it crosses uq−1RQ , . . . , u1RQ (after suitable lengthening), and then bottom-tangle
uq−1RQ , . . . , u1RQ rightwards. This creates intersections for all edges in path uq , uq−1 , . . . , u1 ,
, . . . , u1 Q rightwards. This creates intersections for all edges in path uq , uq−1 , . . . , u1 ,
except for (uq , uq−1 ), which is either on the outer-face (if q = 2) or had an intersection in
RQ . One easily verifies that the result is a 3-sided layout, and private regions can be found
for the new interior faces as shown in Figure 15.
Figure 14 Case 2(b)1: C is incident to a chord, F = {(C, c2 )}, and c2 ≠ A.
Figure 15 Case 2(b)2: Construction when C is incident to a chord, c2 = A, F = {(C, c2 )} and (A, ai , C) is not a face (top), or when (A, ai , C) is a face (bottom).
4.3 G has no chords incident to C and deg(C) ≥ 3
We will give explicit constructions for 3-sided and reverse 3-sided layout, and may hence
(after applying the reversal trick) assume that the special edge is (C, c2 ).
As in Section 3.3, let u1 , . . . , uq be the neighbours of C and let j be minimal such that
uj has another neighbour on PAC . We again distinguish two sub-cases.
Case 3(a): j ≠ 1. As in Section 3.3, define t1 , . . . , tx , GR , GB , GQ , GL and G0 . See also
Figure 7. Recall that GR satisfies all conditions with respect to corners AR := A, BR := B
and CR ∈ {u1 , u2 }. Apply induction on GR and obtain an (int ∪ (u1 , u2 )) representation
RR with respect to the corners of GR . We use as layout for RR the type that we want for
G, i.e., use a 3-sided/reverse 3-sided layout if we want G to have a 3-sided/reverse 3-sided
representation.
For G0 and GQ , we use exactly the same representations R0 and RQ as in Section 3.3.
Combine now these three representations RR , RQ and R0 as described in Section 3.3,
Case 3(a); this can be done since the relevant curves u2RR , . . . , utRR all end vertically in
RR . See also Figure 16. The only change occurs at curve C; in Section 3.3 this received
a bend and a downward segment, but here we omit this bend and segment and let C end
horizontally as desired.
One easily verifies that the curves intersect the bounding boxes as desired. The constructed
representations contain private regions for all interior faces of GR , GQ and G0 by induction.
The remaining faces are of the form (C, ui , ui+1 ), 1 ≤ i < q, and (uj , wk , wk+1 ) where wk and
wk+1 are two consecutive neighbours of uj on the outer-face of G0 or GQ . Private regions
for those faces are shown in Figure 16.
Case 3(b): j = 1, i.e., there exists a chord (bs−1 , ci ). In this case we cannot use the
above construction directly since we need to bend uj = u1 = bs−1 horizontally rightwards to
create intersections, but then it no longer extends vertically downwards as required for bs−1 .
The simple construction described in Section 3.3, Case 3(b) does not apply either. However,
if we use a different vertex as uj (and argue carefully that the chord condition holds), then
the same construction works.
Figure 16 Case 3(a): 3-sided representation when deg(C) ≥ 3, there is no chord incident to C, F = {(C, c2 )}, and j > 1. The construction matches the graph depicted in Figure 7 left.
Figure 17 Case 3(b): Splitting the graph when deg(C) ≥ 3, no chord is incident to C, and j = 1.
Recall that u1 , . . . , uq are the neighbours of corner C in CW order starting with bs−1
and ending with c2 . We know that q ≥ 3 and u2 , . . . , uq−1 are not on the outer-face. Now
define j ′ as follows: Let uj ′ , j ′ > 1 be a neighbour of C that has at least one neighbour on
PCA other than C, and choose uj ′ so that j ′ is minimal while satisfying j ′ > 1. Such a j ′
exists since uq−1 has another neighbour on PCA , and by q ≥ 3 we have q − 1 > 1. Now,
separate G as in the previous case, except use j ′ in place of j. Thus, define t1 , . . . , tx to be
the neighbours of uj ′ on Pc2 A , in order, and separate G into three graphs as follows:
The right graph GR is bounded by (A, PAB , B, PBu1 , u1 , u2 , . . . , uj ′ , tx , Ptx A , A).
Let GB be the graph bounded by (uj ′ , t1 , Pt1 tx , tx , uj ′ ). Define GQ := GB − uj ′ .
Let GL be the graph bounded by (C, PCt1 , t1 , uj ′ , C). Define G0 := GL − {uj ′ , C}.
Observe that the boundaries of all the graphs are simple cycles, and thus they are
W-triangulations. Select (AR := A, BR := B, CR := u2 ) to be the corners of GR and argue
the chord condition as follows:
GR does not have any chords on PCR AR as such chords would either contradict the
minimality of j ′ , or violate the chord condition in G.
GR does not have any chords on PAR BR = PAB .
GR does not have any chords on PBbs−1 as it is a sub-path of PBC and they would violate
the chord condition in G. It also does not have any chords in the form (CR = u2 , b` ), 1 ≤
` < s − 1 as they would have to intersect the chord (bs−1 , ci ), violating the planarity of
G. Hence, GR does not have any chords on PBR CR .
Notice in particular that the chord (u1 , ci ) of GR is not a violation of the chord condition
since we chose u2 as a corner.
Hence, we can obtain a representation RR of GR with 3-sided or reverse 3-sided layout
and special edge (u1 = bs−1 , u2 ). For graphs GQ and G0 the corners are chosen, the chord
condition is verified, and the representations are obtained exactly as in Case 3(a). Since the
special edge of GR is (u1 , u2 ) as before, curves u1 and u2 are situated precisely as in Case
3(a), and we merge representations and find private regions as before.
This ends the description of the construction in all cases, and hence proves Lemma 13.
5 From 4-Connected Triangulations to All Planar Graphs
In this section, we prove Theorem 1. Observe that Lemma 13 essentially proves it for
4-connected triangulations. As in [4] we extend it to all triangulations by induction on the
number of separating triangles.
Figure 18 Completing a 3-sided (int ∪ (B, C)) representation by adding intersections for (A, B) and (A, C).
I Theorem 14. Let G be a triangulation with outer-face (A, B, C). G has a 1-string B2 -VPG
representation with a chair-shaped private region for every interior face f of G.
Proof. Our approach is exactly the same as in [4], except that we must be careful not to add
too many bends when merging subgraphs at separating triangles, and hence must use 3-sided
layouts. Formally, we proceed by induction on the number of separating triangles. In the
base case, G has no separating triangle, i.e., it is 4-connected. As the outer-face is a triangle,
G clearly satisfies the chord condition. Thus, by Lemma 13, it has a 3-sided (int ∪ (B, C))
representation R with private region for every face. R has an intersection for every edge
except for (A, B) and (A, C). These intersections can be created by tangling B, A and C, A
suitably (see Figure 18). Recall that A initially did not have any bends, so it has 2 bends
in the constructed representation of G. The existence of private regions is guaranteed by
Lemma 13.
Now assume for induction that G has k + 1 separating triangles. Let ∆ = (a, b, c) be
an inclusion-wise minimal separating triangle of G. It was shown in [4] that the subgraph
G2 induced by the vertices inside ∆ is either an isolated vertex, or a W-triangulation with
corners (A, B, C) such that the vertices on PAB are adjacent to b, the vertices on PBC are
adjacent to c, and the vertices on PCA are adjacent to a. Furthermore, G2 satisfies the chord
condition. Also, graph G1 = G − G2 is a W-triangulation that satisfies the chord condition
and has k separating triangles. By induction, G1 has a representation R1 (with respect to
the corners of G1 ) with a chair-shaped private region for every interior face f . Let Φ be the
private region for face ∆. Permute a, b, c, if needed, so that the naming corresponds to the
one needed for the private region and, in particular, the vertical segment of c intersects the
private region of ∆ as depicted in Figure 19.
Case 1: G2 is a single vertex v. Represent v by inserting into Φ an orthogonal curve v
with 2 bends that intersects a, b and c. The construction, together with private regions for
the newly created faces (a, b, v), (a, c, v) and (b, c, v), is shown in Figure 19.
Case 2: G2 is a W-triangulation. Recall that G2 satisfies the chord condition with
respect to corners (A, B, C). Apply Lemma 13 to construct a 3-sided (int ∪ (C, bs−1 ))
representation R2 of G2 with respect to the corners of G2 . Let us assume that (after possible
rotation) Φ has the orientation shown in Figure 19 (right); if it had the symmetric orientation
then we would do a similar construction using a reverse 3-sided representation of G2 . Place
R2 inside Φ as shown in Figure 19 (right). Stretch the curves representing vertices on PCA ,
PAB and PBbs−1 downwards, upwards and leftwards respectively so that they intersect a, b
and c. Top-tangle leftwards the curves A = a1 , a2 , . . . , ar = B. Left-tangle downwards
the curves B = b1 , b2 , . . . , bs−1 and bend and stretch C downwards so that it intersects
a. Bottom-tangle leftwards the curves C = c1 , . . . , ct = A. It is easy to verify that the
construction creates intersections for all the edges between vertices of ∆ and the outer-face
of G2 . The tangling operation then creates intersections for all the outer-face edges of G2
except edge (C, bs−1 ), which is already represented in R2 .
Every curve that receives a new bend represents a vertex on the outer-face of G2 , which
means that it initially had at most 1 bend. Curve A is the only curve that receives 2 new
bends, but this is allowed as A does not have any bends in R2 . Hence, the number of bends
for every curve does not exceed 2.
Private regions for faces formed by vertices a, b, c and vertices on the outer-face of G2
can be found as shown in Figure 19 right.
J
With Theorem 14 in hand, we can show our main result: every planar graph has a 1-string
B2 -VPG representation.
Proof of Theorem 1. If G is a planar triangulated graph, then the claim holds by Theorem 14. To handle an arbitrary planar graph, repeatedly stellate the graph (recall that
this means inserting into each non-triangular face a new vertex connected to all vertices of
the face). It is easily shown that one stellation makes the graph connected, a second one
makes it 2-connected, and a third one makes it 3-connected and triangulated. Thus after 3
stellations we have a 3-connected triangulated graph G0 such that G is an induced subgraph
of G0 . Apply Theorem 14 to construct a 1-string B2 -VPG representation R0 of G0 (with the
three outer-face vertices chosen as corners). By removing curves representing vertices that
are not in G, we obtain a 1-string B2 -VPG representation of G.
J
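The stellation step used above is purely combinatorial; the following Python sketch (ours; it assumes the faces of a plane embedding are available as vertex tuples) illustrates it.

```python
# Sketch (ours): insert a new vertex into every non-triangular face and join it to
# all vertices of that face, returning the enlarged edge set and the new face list.

def stellate(edges, faces):
    edges = set(edges)
    new_faces = []
    for count, face in enumerate(faces):
        if len(face) == 3:
            new_faces.append(face)
            continue
        v = ('stell', count)              # a brand-new vertex for this face
        for u in face:
            edges.add((u, v))
        for i in range(len(face)):        # the face is replaced by triangles around v
            new_faces.append((face[i], face[(i + 1) % len(face)], v))
    return edges, new_faces

# A 4-cycle with its two quadrilateral faces becomes a triangulation after one stellation.
e, f = stellate({(1, 2), (2, 3), (3, 4), (4, 1)}, [(1, 2, 3, 4), (1, 4, 3, 2)])
print(len(e), len(f))  # 12 edges, 8 triangular faces
```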
6 Conclusions and Outlook
We showed that every planar graph has a 1-string B2 -VPG representation, i.e., a representation as an intersection graph of strings where strings cross at most once and each string
is orthogonal with at most two bends. One advantage of this is that the coordinates to
describe such a representation are small, since orthogonal drawings can be deformed easily
Figure 19 A separating triangle enclosing one vertex and the construction (top), and a separating triangle enclosing a W-triangulation and the corresponding construction (bottom).
such that all bends are at integer coordinates. Every vertex curve has at most two bends
and hence at most 3 segments, so the representation can be made to have coordinates in an
O(n) × O(n)-grid with perimeter at most 3n. Note that none of the previous results provided
an intuition of the required size of the grid.
Following the steps of our proof, it is not hard to see that our representation can be
found in linear time, since the only non-local operation is to test whether a vertex has a
neighbour on the outer-face. This can be tested by marking such neighbours whenever they
become part of the outer-face. Since no vertex ever is removed from the outer-face (updating
the outer-face markers upon removing such vertices could increase the time complexity), this
takes overall linear time.
The representation constructed in this paper uses curves of 8 possible shapes for planar
graphs. For 4-connected planar graphs, the shapes that have at most one vertical segment
suffice. A natural question is if one can restrict the number of shapes required to represent
all planar graphs.
Bringing this effort further, is it possible to restrict the curves even more? The existence
of 1-string B1 -VPG representations for planar graphs is open. Furthermore, Felsner et
al. [9] asked the question whether every planar graph is the intersection graph of only two
shapes, namely {L, Γ}. As they point out, a positive result would provide a different proof of
Scheinerman’s conjecture (see [11] for details). Somewhat inbetween: is every planar graph
the intersection graph of xy-monotone orthogonal curves, preferably in the 1-string model
and with few bends?
References
1 Takao Asano, Shunji Kikuchi, and Nobuji Saito. A linear algorithm for finding Hamiltonian cycles in 4-connected maximal planar graphs. Discr. Applied Mathematics, 7(1):1–15, 1984.
2 Jérémie Chalopin and Daniel Gonçalves. Every planar graph is the intersection graph of segments in the plane: extended abstract. In ACM Symposium on Theory of Computing, STOC 2009, pages 631–638. ACM, 2009.
3 Jérémie Chalopin, Daniel Gonçalves, and Pascal Ochem. Planar graphs are in 1-string. In ACM-SIAM Symposium on Discrete Algorithms, SODA '07, pages 609–617. SIAM, 2007.
4 Jérémie Chalopin, Daniel Gonçalves, and Pascal Ochem. Planar graphs have 1-string representations. Discrete & Computational Geometry, 43(3):626–647, 2010.
5 Steven Chaplick and Torsten Ueckerdt. Planar graphs as VPG-graphs. J. Graph Algorithms Appl., 17(4):475–494, 2013.
6 Natalia de Castro, Francisco Javier Cobos, Juan Carlos Dana, Alberto Márquez, and Marc Noy. Triangle-free planar graphs and segment intersection graphs. J. Graph Algorithms Appl., 6(1):7–26, 2002.
7 Hubert de Fraysseix, Patrice Ossona de Mendez, and János Pach. Representation of planar graphs by segments. Intuitive Geometry, 63:109–117, 1991.
8 Gideon Ehrlich, Shimon Even, and Robert Endre Tarjan. Intersection graphs of curves in the plane. J. Comb. Theory, Ser. B, 21(1):8–20, 1976.
9 Stefan Felsner, Kolja B. Knauer, George B. Mertzios, and Torsten Ueckerdt. Intersection graphs of L-shapes and segments in the plane. In Mathematical Foundations of Computer Science (MFCS'14), Part II, volume 8635 of Lecture Notes in Computer Science, pages 299–310. Springer, 2014.
10 Irith Ben-Arroyo Hartman, Ilan Newman, and Ran Ziv. On grid intersection graphs. Discrete Mathematics, 87(1):41–52, 1991.
11 Matthias Middendorf and Frank Pfeiffer. The max clique problem in classes of string-graphs. Discrete Mathematics, 108(1-3):365–372, 1992.
12 Edward R. Scheinerman. Intersection Classes and Multiple Intersection Parameters of Graphs. PhD thesis, Princeton University, 1984.
13 Hassler Whitney. A theorem on graphs. The Annals of Mathematics, 32(2):387–390, 1931.
A Example
Here we provide an example of constructing an (int ∪(18, 16)) 1-string B2 -VPG representation
R of the W-triangulation shown in Figure 20. We use numbers and colors to distinguish
vertices. We use letters to indicate special vertices such as corners; note that the designation
as such a corner may change as the subgraph gets divided further. The special edge is marked
with hatches.
One can verify that the graph with the chosen corners (1,4,18) satisfies the chord condition.
Vertex C has degree 3, but it is not incident to a chord, so one applies the construction from
Section 4.3. Finding vertex uj , we can see that j > 1, so Case 3(a) applies. Figure 20 shows
the graphs GR , GQ and G0 , and how to construct R from their representations RR , RQ and
R0 .
The construction of RQ is shown in Figure 21. The representation should have a 2-sided
layout and no special edge. Graph GQ decomposes into three subgraphs G1 , G2 , G3 . Their
2-sided representations are found separately and combined as described in the proof of
Claim 12.
The construction of RR is shown in Figure 22 (decomposition of GR ) and 23 (combining
the representations). Representation RR is supposed to be 3-sided. We first apply Case
1 (Section 4.1) twice, since corner C has degree 2. Then corner C becomes incident to a
chord, so we are in Case 2, and use sub-case Case 2(a) (Section 4.2) since the special edge
is (C, bs−1 = B). This case calls for a 3-sided representation of G2 (which is a triangle
in this case, so the base case applies). It also calls for a 2-sided representation of G1 with
special edge (C, A = c2 ). This is Case 2 (Section 3.2) and we need to apply the reversal
trick—we flip the graph and relabel the corners. After obtaining the representation, it must
be flipped horizontally in order to undo the reversal. The construction decomposes the graph
further, using Case 2 repeatedly, which breaks the graphs into elementary triangles. Their
2-sided representations are obtained using the base case and composed as stipulated by the
construction.
Figure 24 shows the complete 3-sided (int ∪ (18, 16)) representation of the graph.
Figure 20 Illustration of the example. The goal is to find an (int ∪ (18, 16)) 1-string B2 -VPG representation of the W-triangulation shown on top, using corners (1,4,18).
Figure 21 Illustration of the example: Finding RQ (top right).
Figure 22 Illustration of the example: Decomposing graph GR .
Figure 23 Illustration of the example: Composing representation RR .
Figure 24 Illustration of the example: Complete 3-sided (int ∪ (18, 16)) representation.
| 1 |
arXiv:1803.00673v1 [math.CO] 2 Mar 2018
An efficient algorithm to test forcibly-connectedness of
graphical degree sequences
Kai Wang∗
March 5, 2018
Abstract
We present an algorithm to test whether a given graphical degree sequence is
forcibly connected or not and prove its correctness. We also outline the extensions of
the algorithm to test whether a given graphical degree sequence is forcibly k-connected
or not for every fixed k ≥ 2. We show through experimental evaluations that the algorithm is efficient on average, though its worst case run time is probably exponential.
We also adapt Ruskey et al.'s classic algorithm to enumerate zero-free graphical degree
sequences of length n and Barnes and Savage’s classic algorithm to enumerate graphical
partitions of even integer n by incorporating our testing algorithm into theirs and then
obtain some enumerative results about forcibly connected graphical degree sequences
of given length n and forcibly connected graphical partitions of given even integer n.
Based on these enumerative results we make some conjectures such as: when n is large,
(1) almost all zero-free graphical degree sequences of length n are forcibly connected;
(2) almost none of the graphical partitions of even n are forcibly connected.
Keywords— graphical degree sequence, graphical partition, forcibly connected, forcibly
k-connected, co-NP
1 Introduction
A graphical degree sequence of finite length n is a non-increasing sequence of non-negative
integers d1 ≥ d2 ≥ · · · ≥ dn such that it is the vertex degree sequence of some simple graph
(i.e. a finite undirected graph without loops or multiple edges). Given an arbitrary non-increasing sequence of non-negative integers a1 ≥ a2 ≥ · · · ≥ an , it is easy to test whether
it is a graphical degree sequence by using the Erdős-Gallai criterion [7] or the Havel-Hakimi
algorithm [11, 9]. Seven equivalent criteria to characterize graphical degree sequences are
summarized by Sierksma and Hoogeveen [19]. The notion of partition of an integer is well
known in number theory, and is defined to be a non-increasing sequence of positive integers
whose sum is the given integer. An integer partition is called a graphical partition if it is
the vertex degree sequence of some simple graph. Essentially a zero-free graphical degree
sequence and a graphical partition are the same thing.
∗ Department of Computer Sciences, Georgia Southern University, Statesboro, GA 30460, USA. [email protected]
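For concreteness, a short Python sketch (ours, not taken from [7] or [11, 9]) of such a graphicality test based on the Erdős-Gallai criterion might look as follows; it assumes the standard statement of the criterion (even degree sum plus the prefix inequalities).

```python
# Sketch (ours): Erdős-Gallai test. A non-increasing sequence of non-negative integers
# is graphical iff its sum is even and, for every r,
#   d_1 + ... + d_r <= r(r-1) + sum_{i>r} min(d_i, r).

def is_graphical(seq):
    d = sorted(seq, reverse=True)
    n = len(d)
    if sum(d) % 2 != 0:
        return False
    for r in range(1, n + 1):
        if sum(d[:r]) > r * (r - 1) + sum(min(x, r) for x in d[r:]):
            return False
    return True

print(is_graphical([3, 3, 3, 3, 2, 2, 2]))  # True
print(is_graphical([3, 3, 1]))              # False (odd sum)
```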
It is often interesting to observe the properties of all the graphs having the same vertex
degree sequence. A graph G with degree sequence d = (d1 ≥ d2 ≥ · · · ≥ dn ) is called a
realization of d. Let P be any property of graphs (e.g. being bipartite, connected, planar,
triangle-free, Hamiltonian, etc). A degree sequence d is called potentially P-graphic if it has
at least one realization having the property P and forcibly P-graphic if all its realizations have
the property P [15]. In this paper we only consider the property of k-connectedness (k ≥ 1
is fixed). Wang and Kleitman [20] give a simple characterization of potentially k-connected
graphical degree sequences of length n, through which we can easily test whether a given
graphical degree sequence is potentially connected. However, to the best of our knowledge no
simple characterization of forcibly k-connected graphical degree sequences has been found so
far and no algorithm has been published to test whether a given graphical degree sequence
is forcibly connected or forcibly k-connected with given k. Some sufficient (but unnecessary)
conditions are known for a graphical degree sequence to be forcibly connected or forcibly
k-connected [5, 3, 6].
In the rest of this paper we will present a straightforward algorithm to characterize
forcibly connected graphical degree sequences and outline the extensions of the algorithm to
test forcibly k-connectedness of graphical degree sequences for fixed k ≥ 2. We will demonstrate the efficiency of the algorithm through some computational experiments and then
present some enumerative results regarding forcibly connected graphical degree sequences of
given length n and forcibly connected graphical partitions of given even integer n. Based on
the observations on these available enumerative results we make some conjectures about the
relative asymptotic behavior of considered functions and the unimodality of certain associated integer sequences.
2 The testing algorithm
2.1 Preliminaries
Based on a result of Wang and Kleitman [20], a graphical degree sequence d1 ≥ d2 ≥ · · · ≥ dn
is potentially k-connected if and only if dn ≥ k and
\sum_{i=1}^{n} d_i \ge 2n - 2\binom{k}{2} - 2 + 2\sum_{i=1}^{k-1} d_i .
Taking k = 1, we get that a zero-free graphical degree sequence d1 ≥ d2 ≥ · · · ≥ dn is
potentially connected if and only if \sum_{i=1}^{n} d_i \ge 2n - 2.
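A direct transcription of this condition into Python could look as follows (our sketch; it presupposes that d is already known to be graphical and sorted in non-increasing order, and it simply evaluates the inequality displayed above).

```python
# Sketch (ours): evaluate the potential k-connectivity condition quoted above.
from math import comb

def potentially_k_connected(d, k):
    # d: a graphical degree sequence in non-increasing order (graphicality not re-checked)
    n = len(d)
    return d[-1] >= k and sum(d) >= 2 * n - 2 * comb(k, 2) - 2 + 2 * sum(d[:k - 1])

print(potentially_k_connected([3, 3, 3, 3, 2, 2, 2], 1))  # True: 18 >= 2*7 - 2
print(potentially_k_connected([3, 3, 3, 3, 3, 3], 3))     # True: realized e.g. by K_{3,3}
```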
Note that any graphical degree sequence with a 0 in it can be neither potentially nor
forcibly connected. We will design an algorithm to test whether a zero-free graphical degree
sequence d is forcibly connected based on the simple observation that d is forcibly connected
if and only if it is not potentially disconnected, i.e., it does not have any disconnected
realization. Equivalently we need to test whether d can be decomposed into two sub graphical
degree sequences. For example, 3,3,3,3,2,2,2 is a potentially connected graphical degree
sequence of length 7. It is not forcibly connected since it can be decomposed into two
sub graphical degree sequences 3,3,3,3 and 2,2,2. Note also that when a graphical degree
sequence can be decomposed into two sub graphical degree sequences, the terms in each
sub sequence need not be consecutive in the original sequence. For example, the graphical
degree sequence 4,4,3,3,3,2,2,2,1 can be decomposed into two sub graphical degree sequences
4,4,3,3,3,1 and 2,2,2 or into 4,4,3,3,2 and 3,2,2,1. We say that the graphical degree sequence
3,3,3,3,2,2,2 has a natural decomposition because it has a decomposition in which the terms
in each sub sequence are consecutive in the original sequence. The graphical degree sequence
4,4,3,3,3,2,2,2,1 is not forcibly connected but does not have a natural decomposition. On the
other hand, the graphical degree sequence 6,6,6,5,5,5,5,4,4 is forcibly connected since there
is no way to decompose it into two sub graphical degree sequences.
2.2 Pseudo-code and the proof of its correctness
In this section we will present the pseudo-code of our Algorithm 1 to test forcibly-connectedness
of a given zero-free graphical degree sequence. We then prove that it correctly identifies
such graphical degree sequences.
We assume the input is a zero-free graphical degree sequence already sorted in nonincreasing order. In case an input that does not satisfy this condition is given, we can
still easily test whether it is graphical by the Erdős-Gallai criterion [7] or the Havel-Hakimi
algorithm [11, 9]. The output will be True if the input is forcibly connected and False
otherwise. The output can also include a way to decompose the input in case it is not
forcibly connected and such a decomposition is desired.
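Since graphicality tests are used repeatedly below, here is a minimal Python sketch of the Erdős–Gallai criterion for reference; it is only an illustration (the paper's C++ implementation uses the equivalent linear time algorithm of [12]), and the function name is ours:

def is_graphical(d):
    """Erdos-Gallai test; d is a non-increasing sequence of non-negative integers."""
    n = len(d)
    if n == 0:
        return True
    if sum(d) % 2 != 0:
        return False
    for j in range(1, n + 1):
        lhs = sum(d[:j])
        rhs = j * (j - 1) + sum(min(x, j) for x in d[j:])
        if lhs > rhs:
            return False
    return True

# The decomposition example from Section 2.1: both halves are graphical.
print(is_graphical([3, 3, 3, 3]), is_graphical([2, 2, 2]))  # True True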
Algorithm 1: Pseudo-code to test forcibly-connectedness of a graphical degree sequence
Input: A zero-free graphical degree sequence d = (d1 ≥ d2 ≥ · · · ≥ dn)
Output: True or False, indicating whether d is forcibly connected or not
1  if d1 ≥ n − 2 or dn ≥ ⌊n/2⌋ then
2      return True
3  if d1 = dn then
4      return False
5  su ← max{s : s < n − ds+1};  // 2 ≤ su ≤ n − dn − 1
6  if there exists an s such that d1 + 1 ≤ s ≤ su and d1 = (d1 ≥ d2 ≥ · · · ≥ ds) and
       d2 = (ds+1 ≥ ds+2 ≥ · · · ≥ dn) are both graphical then
7      return False
8  for l ← dn + 1 to min{⌊n/2⌋, n − d1 − 1} do
9      if dn+1−l < l then
10         m ← min{i : di < l};  // 1 ≤ m ≤ n − l + 1
11         if l ≤ n − m then
12             Form all candidate decompositions of d into s1 and s2 such that s1 is taken
               from dL = (dm ≥ dm+1 ≥ · · · ≥ dn) of length l and s2 = d − s1 is of length
               n − l and both with even sum. If both s1 and s2 are graphical, return False
13 return True
Now we show why Algorithm 1 correctly identifies whether d is forcibly connected or
not. The conditional test on line 1 works as follows.
• If d1 ≥ n − 2, then in any realization G of d the vertex v1 with degree d1 will be in a
connected component with at least n − 1 vertices, leaving at most 1 vertex to be in any
other connected component should the input d be non forcibly connected. However, a
graph with a single vertex has the degree sequence 0, which contradicts the assumption
that d is zero-free. Thus in this case d must be forcibly connected.
• If dn ≥ ⌊n/2⌋, then the vertex vn with degree dn will be in a connected component with
at least 1 + ⌊n/2⌋ vertices. Should the input d be non forcibly connected, there will
be another connected component not containing vn and also having at least 1 + ⌊n/2⌋
vertices since each vertex in that connected component has degree at least dn ≥ ⌊n/2⌋.
This will result in a realization with at least 2 + 2⌊n/2⌋ > n vertices, a contradiction.
Thus in this case d must also be forcibly connected.
The conditional test on line 3 works as follows. Let d1 = dn = d.
• If n is odd, then d must be even since the degree sum of a graphical degree sequence
must be even. When we reach line 3, we must have d < ⌊n/2⌋. Now d can be
decomposed into two sub graphical degree sequences of length (n − 1)/2 and (n + 1)/2
respectively since d < (n − 1)/2 = ⌊n/2⌋. Thus it is not forcibly connected.
• If n is even, we consider two cases.
Case (A): n/2 is even, i.e. n ≡ 0 mod 4. In this case d can be decomposed into two
graphical degree sequences of length n/2 since d < ⌊n/2⌋ = n/2. Thus it is not forcibly
connected.
Case (B): n/2 is odd, i.e. n ≡ 2 mod 4. Further consider two sub cases. (B1): if d is
odd, then d can be decomposed into two graphical degree sequences of length n/2 − 1
and n/2 + 1 respectively since d < n/2 − 1 as a result of d and n/2 being both odd
and d < n/2. (B2): if d is even, then d can be decomposed into two graphical degree
sequences of length n/2 since d < n/2. Thus in this case d is not forcibly connected.
Lines 5 to 7 try to find if d can be decomposed into two sub graphical degree sequences
such that each sub sequence contains terms consecutive in the original sequence d, i.e. if
the input d has a natural decomposition. For each given s, the two sub sequences d1 =
(d1 ≥ d2 ≥ · · · ≥ ds ) and d2 = (ds+1 ≥ ds+2 ≥ · · · ≥ dn ) can be tested whether they
are graphical by utilizing a linear time algorithm [12] that is equivalent to the Erdős-Gallai
criterion. The smallest s that needs to be tested is d1 + 1 since d1 can only be in a graphical
degree sequence of length at least d1 + 1 and d1 has length s. The largest s that needs to be
tested is at most n − dn − 1 since dn can only be in a graphical degree sequence of length at
least dn + 1 and d2 has length n − s. Actually the upper bound of the tested s can be chosen
to be at most the largest s such that s < n − ds+1 since ds+1 can only be in a graphical
degree sequence of length at least ds+1 + 1. Let su be the largest integer that satisfies this
inequality. Note su ≥ 2 since s = 2 satisfies the inequality at the point of line 5. Also note
that su ≤ n − dn − 1 because if su ≥ n − dn then n − dn ≤ su < n − dsu +1 , which leads
to dsu +1 < dn , a contradiction. Therefore the upper bound of tested s is chosen to be su .
Additional data structures can be maintained to skip the tests of those s for which each of
the two sub sequences d1 and d2 has odd sum. Clearly a necessary condition for the input
d to have a natural decomposition is su ≥ d1 + 1. A weaker necessary condition easier to
check is n − dn − 1 ≥ d1 + 1, i.e. d1 + dn ≤ n − 2.
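A hedged Python sketch of this natural-decomposition scan (lines 5 to 7), reusing the is_graphical helper sketched earlier in Section 2.2 and omitting the parity-based skipping, might look as follows (names are ours):

def has_natural_decomposition(d):
    """Scan the split points s from lines 5-7 of Algorithm 1
    (d is a zero-free graphical degree sequence in non-increasing order)."""
    n = len(d)
    # su = max{s : s < n - d_{s+1}}; with 0-based indexing d[s] is d_{s+1}
    candidates = [s for s in range(1, n) if s < n - d[s]]
    su = max(candidates) if candidates else 0
    for s in range(d[0] + 1, su + 1):
        if is_graphical(d[:s]) and is_graphical(d[s:]):
            return True          # a natural decomposition exists
    return False

print(has_natural_decomposition([3, 3, 3, 3, 2, 2, 2]))  # True: 3,3,3,3 and 2,2,2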
The for loop starting from line 8 is to test whether the input d can be decomposed
into two sub graphical degree sequences of length l and n − l respectively, whether the
decomposition is natural or not. At first glance we need to test the range of l in 2 ≤ l ≤ n−2
since the shortest zero-free graphical degree sequence has length 2. By symmetry we do not
need to test those l beyond ⌊n/2⌋. Actually we only need to test the range of l from dn + 1
to min{⌊n/2⌋, n − d1 − 1}. We can start the loop with l = dn + 1 since, should the input d be
decomposable, dn must be in a sub graphical degree sequence of length at least dn + 1 and
the other sub graphical degree sequence not containing dn must also be of length at least
dn + 1 due to all its terms being at least dn . There is no need to test those l > n − d1 − 1
since, should the input d be decomposable, d1 must be in a sub graphical sequence of length
at least d1 + 1 and the other sub graphical sequence not containing d1 must have length at
most n − d1 − 1.
The condition tested on line 9 (dn+1−l < l) is necessary for d to be decomposable into
two sub graphical degree sequences of length l and n − l respectively. A zero-free graphical
degree sequence of length l must have all its terms less than l. If d is decomposable into
two sub graphical degree sequences of length l and n − l respectively, d must have at least
l terms less than l and n − l terms less than n − l. Therefore, the l smallest terms of d
(dn−l+1 ≥ dn−l+2 ≥ · · · ≥ dn ) must be all less than l and the n − l smallest terms of d
(dl+1 ≥ dl+2 ≥ · · · ≥ dn ) must be all less than n − l. These translate to the necessary
conditions dn−l+1 < l and dl+1 < n − l for d to be decomposable. The condition dl+1 < n − l
has already been satisfied since d1 < n − l based on the loop range of l on line 8.
Lines 10 to 12 first find out the sub sequence dL of d consisting exactly of those terms
less than l and then exhaustively enumerate all sub sequences s1 of dL with length l and
even sum, trying to find a valid decomposition of d into s1 and s2 = d − s1 with length n−l,
consisting of the terms of d not in s1 . Note that the l terms of s1 need not be consecutive
in dL . The motivation for the construction of m and dL = (dm ≥ dm+1 ≥ · · · ≥ dn ) is that,
should the input d be decomposable into two sub graphical degree sequences of length l and
n − l respectively, the sub graphical degree sequence with length l must have all its terms
coming from dL . For each such sub sequence s1 of dL with length l (we can always choose
such an s1 since dL has length n − m + 1 ≥ l due to the definition of m on line 10), let
the remaining terms of d form a sub sequence s2 = d − s1 of length n − l. If both s1 and
s2 are graphical degree sequences, then the input d is not forcibly connected since we have
found a valid decomposition of d into s1 and s2 and we may return False on line 12. The
conditional test on line 11 (l ≤ n − m) is added because at this point we know d cannot be
naturally decomposed and we can therefore exclude the consideration of l = n − m + 1 since
under this condition there is only one possible choice of s1 from dL and consequently only
one possible decomposition of d into two sub sequences of length l and n − l respectively,
which is also a natural decomposition. If we remove the natural decomposition test from
lines 5 to 7 and also remove the conditional test on line 11, the algorithm would obviously
still be correct. If in the for loop from lines 8 to 12 we never return False on line 12, this
means there is no way to decompose the input d into two sub graphical degree sequences
whatsoever and we should return True on line 13. If we return False on line 4, 7, or 12 then
a valid decomposition can also be returned if desired.
Later we will show that there is a computable threshold M(n) given the length n of the
input d such that if d1 is below this threshold the algorithm can immediately return False
without any exhaustive enumerations. However, our computational experience suggests that
if the input satisfies d1 < M(n) then Algorithm 1 already runs fast and it might not be
worthwhile to add the computation of the additional threshold M(n) to it.
2.3 Extensions of the algorithm
In this section we show how to extend Algorithm 1 to perform additional tasks such as
listing all possible decompositions of a graphical degree sequence and testing forcibly
k-connectedness of graphical degree sequences for fixed k ≥ 2.
2.3.1 Enumeration of all possible decompositions
Algorithm 1 can be easily extended to give all possible decompositions of the input d into
two sub graphical degree sequences in case it is not forcibly connected. We simply need to
report a valid decomposition found on lines 3, 6 and 12 and continue without returning False
immediately. Such an enumerative algorithm to find all valid decompositions of the input d
can be useful when we want to explore the possible realizations of d and their properties.
2.3.2 Testing forcibly k-connectedness of d when k ≥ 2
It is also possible to extend Algorithm 1 to decide whether a given graphical degree sequence
d is forcibly biconnected or not. We know that a connected graph is biconnected (nonseparable)
if and only if it does not have a cut vertex. This characterization leads us to observe that if,
in a forcibly connected graphical degree sequence d, the removal of some term di and the
reduction by 1 of each element of some collection dS of di elements from the remaining
sequence d − {di} results in a non forcibly connected graphical degree sequence d′, then
from the remaining sequence can be found whose removal/reduction results in a non forcibly
connected graphical degree sequence, then d is forcibly biconnected. We give a pseudo-code
framework in Algorithm 2 to decide whether a given graphical degree sequence d is forcibly
biconnected or not. To simplify our description, we call the above mentioned combination
of removal/reduction operations a generalized Havel-Hakimi (GHH) operation, notationally
d′ = GHH(d, di , dS ). We remark that if the d′ obtained on line 4 of Algorithm 2 is not a
graphical degree sequence then the condition on line 5 is not satisfied and the algorithm will
not return False at the moment.
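The following Python sketch illustrates a single GHH operation as described above; it is our illustrative rendering (in particular, we pass dS as a set of index positions into the remaining sequence), not the authors' implementation:

def ghh(d, i, S):
    """Generalized Havel-Hakimi step (illustrative rendering): remove d[i],
    reduce by 1 the entries of the remaining sequence at the index positions
    in S (|S| must equal d[i]), and return the result in non-increasing order."""
    rest = d[:i] + d[i + 1:]
    assert len(S) == d[i]
    S = set(S)
    reduced = [x - 1 if j in S else x for j, x in enumerate(rest)]
    return sorted(reduced, reverse=True)

# One GHH step on 3,3,2,2,2,2: remove a 3 and reduce three of the remaining terms by 1.
print(ghh([3, 3, 2, 2, 2, 2], 0, [0, 1, 2]))  # [2, 2, 2, 1, 1]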
Similarly we can test whether a given graphical degree sequence d is forcibly k-connected
or not for k ≥ 3 iteratively as long as we already have a procedure to test whether a graphical
degree sequence is forcibly (k − 1)-connected or not. Suppose we already know an input d
is potentially k-connected and forcibly (k − 1)-connected. We can proceed to choose a term
di and a collection dS of size di from the remaining sequence d − {di } and perform a GHH
operation on d. If the resulting sequence d′ is a graphical degree sequence and is non forcibly
(k − 1)-connected, then d is not forcibly k-connected. If no such term and a corresponding
collection of elements from the remaining sequence can be found whereby a GHH operation
Algorithm 2: Pseudo-code to test whether a graphical degree sequence is forcibly
biconnected (See text for the description of the GHH operation)
Input: A zero-free graphical degree sequence d = (d1 ≥ d2 ≥ · · · ≥ dn)
Output: True or False, indicating whether d is forcibly biconnected or not
1 if d is not potentially biconnected or not forcibly connected then
2     return False
3 for each di and each collection dS of size di from d − {di} do
4     d′ ← GHH(d, di, dS);
5     if d′ is a non forcibly connected graphical degree sequence then
6         return False
7 return True
can be performed on d to result in a non forcibly (k−1)-connected graphical degree sequence,
then d is forcibly k-connected. We give a pseudo-code framework in Algorithm 3 to decide
whether a given graphical degree sequence d is forcibly k-connected or not.
Algorithm 3: Pseudo-code to test whether a graphical degree sequence is forcibly
k-connected (See text for the description of the GHH operation)
Input: A zero-free graphical degree sequence d = (d1 ≥ d2 ≥ · · · ≥ dn) and an integer k ≥ 2
Output: True or False, indicating whether d is forcibly k-connected or not
1 if d is not potentially k-connected or not forcibly (k − 1)-connected then
2     return False
3 for each di and each collection dS of size di from d − {di} do
4     d′ ← GHH(d, di, dS);
5     if d′ is a non forcibly (k − 1)-connected graphical degree sequence then
6         return False
7 return True

3 Complexity analysis
We conjecture that Algorithm 1 runs in time polynomial in n on average. The worst case
run time complexity is probably still exponential in n. We are unable to provide a rigorous
proof at this time, but we will later show through experimental evaluations that it runs fast
on randomly generated long graphical degree sequences most of the time.
Now we give a discussion of the run time behavior of Algorithm 1. Observe that lines 1
to 4 take constant time. Lines 5 to 7 take O(n^2) time if we use the linear time algorithm
from [12] to test whether an integer sequence is graphical. Lines 9 to 11 combined take O(n)
time and they are executed O(n) times. So the overall time complexity is O(n^2) excluding
the time on line 12.
Next consider all candidate decompositions of d into s1 and s2 on line 12. The sub
sequence s1 is taken from dL = (dm ≥ dm+1 ≥ · · · ≥ dn ) whose length could be as large as
n and the length
l of s1 could be as large as ⌊n/2⌋. Therefore in the worst case we may have up to
C(n, n/2) candidate decompositions, which could make the run time of Algorithm 1
exponential in n.
A careful implementation of Algorithm 1 will help reduce running time by noting that
dL is a multi-set and provides us an opportunity to avoid duplicate enumerations of s1
because different l-combinations of the indices (m, m + 1, · · · , n) could produce the same
sub sequence s1 . For this purpose, we can assume the input d is also provided in another
format (e1 , f1 ), (e2 , f2 ), · · · , (eq , fq ) where d contains fi copies of ei for i = 1, · · · , q and
e1 > e2 > · · · > eq > 0. (Clearly d1 = e1 and dn = eq .) Now enumerating s1 of length l from
dL can be equivalently translated to the following problem of enumerating all non-negative
integer solutions of Equation (1) subject to constraints (2),
Σ_{i=1}^{k} xi = l,    (1)
0 ≤ xi ≤ fq−k+i , for i = 1, · · · , k,    (2)
where k is the number of distinct elements in dL = (dm ≥ dm+1 ≥ · · · ≥ dn ) which can
also be represented as (eq−k+1, fq−k+1 ), (eq−k+2, fq−k+2 ), · · · , (eq , fq ) and k satisfies k ≤ q and
k ≤ l − dn since all the elements of dL are < l and ≥ dn . In this context m and k vary
with l as the for loop from lines 8 to 12 progresses. Each solution of Equation (1) represents
a candidate choice of s1 out of dL with length l by taking xi copies of eq−k+i . Further
improvement could be achieved by noting the odd terms among eq−k+1 , eq−k+2, · · · , eq since
we must have an even number of odd terms in s1 for it to have even sum. We can categorize
the xi variables of Equation (1) into two groups based on the parity of the corresponding ei
and enumerate only its solutions having an even sum of the xi ’s belonging to the odd group.
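A minimal Python sketch of this enumeration, i.e. of the solutions of Equation (1) under the constraints (2), is shown below; it is a plain recursive generator without the parity optimization, and all names are ours:

def bounded_compositions(l, bounds):
    """Yield all tuples (x_1, ..., x_k) with sum x_i = l and 0 <= x_i <= bounds[i]."""
    if not bounds:
        if l == 0:
            yield ()
        return
    for x in range(min(l, bounds[0]) + 1):
        for rest in bounded_compositions(l - x, bounds[1:]):
            yield (x,) + rest

# Choices of s1 of length l = 3 when dL has two distinct values with multiplicities 2 and 3.
for sol in bounded_compositions(3, [2, 3]):
    print(sol)   # (0, 3), (1, 2), (2, 1)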
The number of solutions of Equation (1) can be exponential in n. For example, let
l = n/2, k = n/4 and let fj = 4 for j = q − k + 1, · · · , q. Then the number of solutions of
Equation (1) will be at least C(n/4, n/8) by taking half of all x1 , · · · , xk to be 4 and the
remaining half to be 0. However, in practice we rarely find that such a large number of solutions are actually
all enumerated before Algorithm 1 returns.
To the best of our knowledge, the computational complexity of the decision problem of
whether a given graphical degree sequence is forcibly connected is unknown. The problem is
clearly in co-NP since a short certificate to prove that a given input is not forcibly connected
is a valid decomposition of the input sequence. But is it co-NP-hard? As far as we know,
this is an open problem.
The time complexity of the extension Algorithms 2 and 3 to test whether a given graphical
degree sequence d is forcibly k-connected or not for k ≥ 2 is apparently exponential due to
the exhaustive enumeration of the candidate collection dS of size di from the remaining
sequence d − {di } and the ultimate calls to Algorithm 1 possibly an exponential number of
times.
The computational complexity of the decision problem of whether a given graphical
degree sequence is forcibly k-connected (k ≥ 2) is also unknown to us. Clearly the problem
is still in co-NP when k is fixed, since to prove that a graphical degree sequence d is not
forcibly k-connected it suffices to give a sequence of k − 1 certificates, each consisting of a
pair (di , dS ), together with a k-th certificate consisting of a valid decomposition, showing
that the final resulting sequence is decomposable after a sequence of k − 1 GHH operations
on d. However, we do not know if it is inherently any harder than the decision problem for k = 1.
4 Computational results
In this section we will first present some results on the experimental evaluation of the
performance of Algorithm 1 on randomly generated long graphical degree sequences. We
will then provide some enumerative results about the number of forcibly connected graphical
degree sequences of given length and the number of forcibly connected graphical partitions
of a given even integer. Based on the available enumerative results we will make some
conjectures about the asymptotic behavior of related functions and the unimodality of certain
associated integer sequences.
4.1 Performance evaluations of Algorithm 1
In order to evaluate how efficient Algorithm 1 is, we aim to generate long testing instances
with length n in the range of thousands and see how Algorithm 1 performs on these instances.
Our experimental methodology is as follows. Choose a constant ph in the range [0.1,0.95]
and a constant pl in the range of [0.001,min{ph − 0.01, 0.49}] and generate 100 random
graphical degree sequences of length n with largest term around ph n and smallest term
around pl n. Each such graphical degree sequence is generated by first uniformly sampling
random integer partitions with the specified number of parts n and the specified largest
and smallest parts, and then accepting a sampled partition as input for Algorithm 1 if it is
a graphical degree sequence. We run Algorithm 1 on these random instances and record the average
performance and note the proportion of them that are forcibly connected. Table 1 lists the
tested ph and pl . The largest tested pl is 0.49 since any graphical degree sequence of length
n and smallest term at least 0.5n will cause Algorithm 1 to return True on line 2.
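The sketch below imitates this instance generation in Python in a simplified way: it draws random non-increasing sequences with the prescribed length, largest and smallest terms and keeps the graphical ones (using the is_graphical helper sketched earlier). Unlike the experiments described here, it makes no attempt to sample the underlying integer partitions uniformly, so it is only a rough stand-in:

import random

def random_graphical_instance(n, ph, pl, max_tries=10000):
    """Rejection-sample a graphical degree sequence of length n with largest term
    about ph*n and smallest term about pl*n (simplified, not uniform over partitions)."""
    hi, lo = max(1, round(ph * n)), max(1, round(pl * n))
    for _ in range(max_tries):
        d = sorted((random.randint(lo, hi) for _ in range(n - 2)), reverse=True)
        d = [hi] + d + [lo]                    # pin the largest and smallest terms
        if sum(d) % 2 == 0 and is_graphical(d):
            return d
    return None

print(random_graphical_instance(50, 0.6, 0.2))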
We implemented our Algorithm 1 using C++ and compiled it using g++ with optimization level -O3. The experimental evaluations are performed on a common Linux workstation.
We summarize our experimental results for the length n = 1000 as follows.
1. For those instances with ph in the range from 0.1 to 0.5, Algorithm 1 always finishes
instantly (run time < 0.01s) and all the tested instances are non forcibly connected. This
does not necessarily mean that there are no forcibly connected graphical degree sequences
of length n = 1000 with largest term around ph n with ph in this range. It only suggests that
forcibly connected graphical degree sequences are relatively rare in this range.
2. For each ph in the range from 0.55 to 0.95, we observed a transition interval It of pl for
each fixed ph . See Table 2 for a list of observed transition intervals. All those instances with
pl below the range It are non forcibly connected and all those instances with pl above the
range It are forcibly connected. Those instances with pl in the range It exhibit the behavior
that the proportion of forcibly connected among all tested 100 instances gradually increases
from 0 to 1 as pl increases in the range It . For example, based on the results of Table 2,
Table 1: Chosen ph and pl in the experimental performance evaluation of Algorithm 1.
ph      pl
0.10    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.09
0.20    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.19
0.30    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.29
0.40    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.39
0.50    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.55    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.60    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.65    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.70    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.75    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.80    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.85    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.90    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
0.95    0.001,0.002,0.003,...,0.01,0.02,0.03,...,0.49
Table 2: Transition interval It of pl for each ph (for n = 1000).
ph      It of pl
0.55    0.30 to 0.40
0.60    0.20 to 0.30
0.65    0.15 to 0.24
0.70    0.09 to 0.17
0.75    0.05 to 0.12
0.80    0.03 to 0.09
0.85    0.01 to 0.07
0.90    0.003 to 0.04
0.95    0.001 to 0.03
the proportions of forcibly connected graphical degree sequences of length 1000 with largest
term around 850 (ph = 0.85) and smallest term below around 10 (pl = 0.01) are close to
0. If the smallest term is above around 70 (pl = 0.07) then the proportion is close to 1.
When the smallest term is between 10 and 70 (pl in the range from 0.01 to 0.07) then the
proportion transitions from 0 to 1. Again these results should be interpreted as relative
frequency instead of absolute law.
3. Algorithm 1 is efficient most of the time but encounters bottlenecks at occasions. For
ph from 0.80 to 0.95 and pl near the lower end of the transition interval It , Algorithm 1
does perform poorly on some of the tested instances with run time from a few seconds to
more than a few hours (time out). The exact range of pl near the lower end of It where
Algorithm 1 could perform poorly varies. We observed that this range of pl for which the
algorithm could perform poorly is quite narrow. For example, when n = 1000, ph = 0.9, this
range of pl we observed is from 0.001 to 0.01. We observed that the frequency at which the
algorithm performs poorly also varies. We believe that this is because among all possible
instances with given length and given largest and smallest terms there is still great variety
in terms of difficulty of testing their property of forcibly connectedness using Algorithm 1.
In particular, some instances will trigger the exhaustive behavior of Algorithm 1 on line 12,
making it enumerate a lot of candidate decompositions without returning.
We have also performed experimental evaluations of Algorithm 1 for lengths n =
2000, 3000, ..., 10000, without being able to finish all the same ph , pl choices as for n = 1000
because of time constraints. The behavior of Algorithm 1 on inputs of these longer lengths
is similar to the case of n = 1000 but with different transition intervals It and varied range
of pl near the lower end of the transition interval It for which it could perform poorly.
To sum up, we believe that the average case run time of Algorithm 1 is polynomial. We
estimate that more than half of all zero-free graphical degree sequences of length n can be
tested in constant time on line 1. However, its worst case run time should be exponential.
As mentioned above, the computational complexity of the decision problem itself is unknown
to us.
Currently we have a very rudimentary implementation of Algorithm 2 and do not have
an implementation of Algorithm 3 for any k ≥ 3 yet. Algorithm 2 can start to encounter
bottlenecks for input length n around 40 to 50, which is much shorter than the input lengths
Algorithm 1 can handle. We suspect that to handle input length n ≥ 100 when k = 3 will be
very difficult unless significant enhancement to avoid many of those exhaustive enumerations
can be introduced.
4.2 Enumerative results
In this section we will present some enumerative results related to forcibly connected graphical degree sequences of given length and forcibly connected graphical partitions of given
even integer. We also make some conjectures based on these enumerative results. For the
reader’s convenience, we summarize the notations used in this section in Table 3.
In a previous manuscript [21] we have presented efficient algorithms for counting the number of graphical degree sequences of length n and the number of graphical degree sequences
of k-connected graphs with n vertices (or graphical degree sequences of length n that are
potentially k-connected). It is proved there that the asymptotic orders of the number D(n)
of zero-free graphical degree sequences of length n and the number Dc (n) of potentially
connected graphical degree sequences of length n are equivalent. That is, limn→∞ Dc (n)/D(n) = 1. In
order to investigate how the number Df (n) of forcibly connected graphical degree sequences
of length n grows compared to D(n) we conduct computations to count such graphical degree sequences. We do not have any algorithm that can get the counting result without
actually generating the sequences. The fastest algorithm we know of that can generate all
zero-free graphical degree sequences of length n is from Ruskey et al [18]. We adapted this
algorithm to incorporate the test in Algorithm 1 to count those that are forcibly connected.
Since D(n) grows as an exponential function of n based on the bounds given by Burns [4]
(4^n/(c1 · n) ≤ D(n) ≤ 4^n/((log n)^c2 · √n) for all sufficiently large n, with c1 , c2 positive
constants), it is unlikely to get the value of Df (n) for large n using an exhaustive generation
algorithm. We only have counting results of Df (n) for n up to 26 due to the long running
Table 3: Terminology used in this section
Term     Meaning
D(n)     number of zero-free graphical sequences of length n
Dc(n)    number of potentially connected graphical sequences of length n
Df(n)    number of forcibly connected graphical sequences of length n
Cn[N]    number of potentially connected graphical degree sequences of length n with degree sum N
Fn[N]    number of forcibly connected graphical degree sequences of length n with degree sum N
Ln[j]    number of forcibly connected graphical degree sequences of length n with largest term j
M(n)     minimum largest term in any forcibly connected graphical sequence of length n
g(n)     number of graphical partitions of even n
gc(n)    number of potentially connected graphical partitions of even n
gf(n)    number of forcibly connected graphical partitions of even n
cn[j]    number of potentially connected graphical partitions of even n with j parts
fn[j]    number of forcibly connected graphical partitions of even n with j parts
ln[j]    number of forcibly connected graphical partitions of n with largest term j
m(n)     minimum largest term of forcibly connected graphical partitions of n
time of our implementation. The results together with the proportion of them in all zero-free graphical degree sequences are listed in Table 4. From the table it seems reasonable to
conclude that the proportion Df (n)/D(n) will increase when n ≥ 8 and it might tend to the
limit 1.
Since our adapted algorithm from Ruskey et al [18] for computing Df (n) actually generates all forcibly connected graphical degree sequences of length n it is trivial to also output
the individual counts based on the degree sum N or the largest degree ∆. That is, we can
output the number of forcibly connected graphical degree sequences of length n with degree
sum N or largest term ∆. In Table 5 we show itemized potentially and forcibly connected
graphical degree sequences of length 7 based on the degree sum N. The counts for N < 12
are not shown because those counts are all 0. The highest degree sum is 42 for any graphical
degree sequence of length 7. From the table we see that the individual counts based on the
degree sum N that contribute to Dc (7) (row C7 [N]) and Df (7) (row F7 [N]) both form a
unimodal sequence. Counts for other degree sequence lengths from 5 to 26 exhibit similar
behavior. Based on the available enumerative results we find that for any given n the ranges
of even N for which Cn [N] and Fn [N] are nonzero are exactly the same (between
2n − 2 and n(n − 1)). In fact, this can be proved as the following proposition.
Proposition 4.1. An even N has a potentially connected graphical partition with n parts if
and only if it has a forcibly connected graphical partition with n parts.
Table 4: Number of forcibly connected graphical degree sequences of length n and their
proportions in zero-free graphical degree sequences of length n.
n    D(n)              Df(n)             Df(n)/D(n)
4    7                 6                 0.857143
5    20                18                0.900000
6    71                63                0.887324
7    240               216               0.900000
8    871               783               0.898967
9    3148              2843              0.903113
10   11655             10535             0.903904
11   43332             39232             0.905382
12   162769            147457            0.905928
13   614198            556859            0.906644
14   2330537           2113982           0.907079
15   8875768           8054923           0.907518
16   33924859          30799063          0.907861
17   130038230         118098443         0.908182
18   499753855         454006818         0.908461
19   1924912894        1749201100        0.908717
20   7429160296        6752721263        0.908948
21   28723877732       26114628694       0.909161
22   111236423288      101153550972      0.909356
23   431403470222      392377497401      0.909537
24   1675316535350     1524043284254     0.909705
25   6513837679610     5926683351876     0.909860
26   25354842100894    23073049582134    0.910006
Table 5: Number of potentially (row C7 [N]) and forcibly (row F7 [N]) connected graphical
degree sequences of length 7 with given degree sum N.
degree sum N   12  14  16  18  20  22  24  26  28  30  32  34  36  38  40  42
C7[N]          7   11  15  22  26  29  29  26  23  18  13  8   5   2   1   1
F7[N]          3   5   10  19  25  28  29  26  23  18  13  8   5   2   1   1
Table 6: Number L15 [j] of forcibly connected graphical degree sequences of length 15 with
given largest term j.
largest part j   14        13        12        11       10       9      8     7     6
L15[j]           3166852   2624083   1398781   600406   201128   52903  9718  1031  21
Proof. Sufficiency is obvious by definition. In the following we show the necessity.
Suppose an even N has a potentially connected graphical partition with n parts. From
the Wang and Kleitman characterization [20] we know that N must be between 2n − 2 and
n(n − 1) for it to have a potentially connected graphical partition of n parts. Now construct
a partition π of N with n parts as follows. Let the largest part be n−1 and let the remaining
n − 1 parts be as evenly as possible. That is, let b = ⌊(N − n + 1)/(n − 1)⌋ and
a = N − (n − 1)(b + 1).
Then the smallest n − 1 parts of π consist of a copies of b + 1 and n − 1 − a copies of b.
With 2n − 2 ≤ N ≤ n(n − 1), we have 0 < b ≤ n − 1 and 0 ≤ a < n − 1. Based on the
Nash-Williams condition it is easy to verify that π is a graphical partition of N with n parts
and it is forcibly connected since its largest part is n − 1.
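A short Python sketch of this construction (function name ours) is given below; for an even N in the stated range it returns the forcibly connected n-part partition used in the proof:

def forcibly_connected_partition(N, n):
    """Construct the n-part graphical partition of even N from the proof:
    largest part n-1, remaining n-1 parts as equal as possible.
    Requires 2n - 2 <= N <= n(n - 1) and N even."""
    assert N % 2 == 0 and 2 * n - 2 <= N <= n * (n - 1)
    b = (N - n + 1) // (n - 1)
    a = N - (n - 1) * (b + 1)
    return [n - 1] + [b + 1] * a + [b] * (n - 1 - a)

print(forcibly_connected_partition(20, 6))  # [5, 3, 3, 3, 3, 3]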
In Table 6 we show itemized numbers of forcibly connected graphical degree sequences
of length 15 based on the largest degree. The counts for largest degrees less than 6 are not
shown because those counts are all 0. From the table we can see that the counts decrease
with the largest degree. For other degree sequence lengths from 5 to 26 we observed similar
behavior. The table also indicates that there are no forcibly connected graphical degree
sequences of length 15 with largest degree less than 6. In fact, we can define M(n) to be the
minimum largest term in any forcibly connected graphical sequence of length n. That is,
M(n) = min{∆ : ∆ is the largest term of some forcibly connected graphical degree sequence
of length n}. Clearly we have M(n) ≤ n/2 since for even n the sequence n/2, n/2, · · · , n/2
of length n is forcibly connected. We can show a lower bound of M(n) as follows.
Theorem 4.2. For M(n) defined above, we have M(n) = Ω(√n). That is, there is a
constant c > 0 such that M(n) > c√n for all sufficiently large n.
Proof. For the purpose of deriving a contradiction assume there is a forcibly connected
graphical degree sequence ß = (d1 ≥ d2 ≥ · · · ≥ dn ) of length n with the largest term
d1 = M(n) = o(√n).
Let us first consider the case that n is even. Let ßH be the higher half (of length n/2)
of ß and ßL be the lower half (of length n/2) of ß. If both ßH and ßL have even sums, then
they can be shown to be both graphical degree sequences based on the Nash-Williams condition
[17, 16, 19] as follows. Suppose the Durfee square size of ß is s where s ≤ d1 = M(n) by
the definition of Durfee square. Since ß is graphical it satisfies the Nash-Williams condition,
which can be represented as s inequalities:
Σ_{i=1}^{j} (d′i − di) ≥ j, j = 1, · · · , s,
where d′1 , · · · , d′s are the largest s parts of the conjugate partition of the partition ß
(d′1 = n). Now ß and ßH have the same Durfee square size by our assumption that s = o(√n) and they
have the same s largest parts. Let the s largest parts of the conjugate of ßH be d′′1 , · · · , d′′s
with d′′1 = n/2 by our construction. To show that ßH is graphical, we only need to show that
the following s inequalities hold:
Σ_{i=1}^{j} (d′′i − di) ≥ j, j = 1, · · · , s.    (3)
The first of these inequalities, d′′1 − d1 = n/2 − M(n) ≥ 1, is clearly satisfied since
M(n) = o(√n). We also have the following inequalities,
d′′j ≥ s and dj ≤ M(n), j = 2, · · · , s,
so we have d′′j − dj ≥ s − M(n), j = 2, · · · , s. Even if d′′j − dj , j = 2, · · · , s are all negative,
their sum will be of order o(n) since s ≤ M(n) = o(√n). Clearly the s inequalities in (3) are
all satisfied since d′′1 − d1 = n/2 − M(n) is of order Ω(n). This shows that ßH is graphical.
By the same argument ßL is graphical and we have found that ß can be decomposed into
two sub graphical degree sequences ßH and ßL . This contradicts our assumption that ß is
forcibly connected.
If both ßH and ßL have odd sums (we cannot have one of them having even sum and
the other having odd sum since the sum of all the terms of ß is even), then it must be the
case that both ßH and ßL have an odd number of odd terms. Construct two new sequences
ß′H and ß′L from ßH and ßL by removing the largest odd term from ßL and adding it to ßH .
Now clearly ß′H and ß′L is a decomposition of ß into two sub sequences of length n/2 + 1 and
n/2 − 1 respectively and both having even sums. Again they are guaranteed to be graphical
degree sequences by the Nash-Williams condition using a similar argument as above, which
contradicts the assumption that ß is forcibly connected.
The case for n odd can be proved in a similar way. The conclusion that M(n) cannot be
of lower order than √n then follows.
We do not have any theory or algorithm to efficiently obtain M(n) for any given n. Other
than recording the minimum largest term while enumerating all forcibly connected graphical
degree sequences of length n, a naive approach would be to let ∆ start from 3 upward and
test if there is a forcibly connected graphical degree sequence of length n with largest term ∆
and stop incrementing ∆ when we have found one. Obviously this works but is not efficient.
Any efficient algorithm for M(n) might be worthwhile to be added into Algorithm 1 so that
it can immediately return False if d1 < M(n). However, while conducting performance
evaluations of Algorithm 1 we found that a random graphical degree sequence of length n
with d1 ≤ n/2 can most likely be decided instantly by Algorithm 1. Therefore we believe
that an efficient algorithm for M(n) will not help much on average. We show the values of
M(n) based on our enumerative results in Table 7. The fact that M(15) = 6 agrees with the
results of Table 6 where the counts L15 [j] = 0 for all j < 6. As a side note, the minimum
largest term in any potentially connected graphical sequence of length n is clearly 2 since the
degree sequence 2, 2, · · · , 2 (n copies) is potentially connected while 1, 1, · · · , 1 (n copies) is
not potentially connected.
Table 7: Minimum largest term M(n) of forcibly connected graphical sequences of length n.
n      3  4  5  6  7  8  9  10 11 12 13 14
M(n)   2  2  2  3  3  3  4  4  5  5  5  6
n      15 16 17 18 19 20 21 22 23 24 25 26
M(n)   6  6  7  7  7  7  8  8  8  8  8  9
We also investigated the number gf (n) of forcibly connected graphical partitions of a
given even integer n. There is a highly efficient Constant Amortized Time (CAT) algorithm
of Barnes and Savage [2] to generate all graphical partitions of a given even n. And there
are efficient counting algorithms of Barnes and Savage [1] and Kohnert [13] to count the
number g(n) of graphical partitions of even n without generating them. It is known from
Erdős and Richmond [8] that the number gc (n) of potentially connected graphical partitions
of n and g(n) are of equivalent order, i.e. limn→∞ gc (2n)/g(2n) = 1. It is also known from Pittel
[14] that the proportion of graphical partitions among all partitions of an integer n tends
to 0.
since Hardy and Ramanujan [10], the exact asymptotic order of g(n) is still unknown. We
know of no algorithm to count the number gf (n) of forcibly connected graphical partitions
of n without generating them. Using a strategy similar to that we employed in computing
Df (n), we adapted the algorithm of Barnes and Savage [2] and incorporated the test of
forcibly connectedness from Algorithm 1 and then count those that are forcibly connected.
The growth of g(n) is quick and we only have numerical results of gf (n) for n up to 170. The
results together with the proportion of them in all graphical partitions are listed in Table 8.
For the purpose of saving space we only show the results in increments of 10 for n. From
the table it seems reasonable to conclude that the proportion gf (n)/g(n) will decrease when
n is beyond some small threshold and it might tend to the limit 0.
Table 8: Number of forcibly connected graphical partitions of n and their proportions in all
graphical partitions of n
n      g(n)           gf(n)         gf(n)/g(n)
10     17             8             0.470588
20     244            81            0.331967
30     2136           586           0.274345
40     14048          3308          0.235478
50     76104          15748         0.206927
60     357635         66843         0.186903
70     1503172        256347        0.170537
80     5777292        909945        0.157504
90     20614755       3026907       0.146832
100    69065657       9512939       0.137738
110    219186741      28504221      0.130045
120    663394137      81823499      0.123341
130    1925513465     226224550     0.117488
140    5383833857     604601758     0.112299
150    14555902348    1567370784    0.107679
160    38173235010    3951974440    0.103527
170    97368672089    9714690421    0.099772
Table 9: Number of potentially (row c20 [j]) and forcibly (row f20 [j]) connected graphical
partitions of 20 with given number of parts j.
number of parts j   5  6  7   8   9   10  11
c20[j]              1  9  26  38  37  36  30
f20[j]              1  9  25  22  10  9   5
Table 10: Number l20 [j] of forcibly connected graphical partitions of 20 with given largest
term j.
largest part j   3  4   5   6   7   8  9  10
l20[j]           1  14  26  20  12  5  2  1
Table 11: Minimum largest term m(n) of forcibly connected graphical partitions of n.
n      10  20  30  40  50  60  70  80  90  100
m(n)   2   3   4   5   5   6   6   6   7   7
Like the situation for Df (n) the adapted algorithm from Barnes and Savage [2] to compute
gf (n) actually generates all forcibly connected graphical partitions of n so it is trivial to also
output the individual counts based on the number of parts or the largest part. In Table 9 we
show the individual counts of potentially and forcibly connected graphical partitions of 20
based on the number of parts. Counts for the number of parts less than 5 or greater than 11
are not shown since those counts are all 0. The ranges of the number of parts j for which the
number cn [j] of potentially connected graphical partitions of n with j parts and the number
fn [j] of forcibly connected graphical partitions of n with j parts are nonzero are exactly the
same based on Proposition 4.1. The smallest number of parts j for which cn [j] and fn [j] are
both nonzero is the smallest positive integer t(n) such that t(n)(t(n) − 1) ≥ n and this is
also the smallest number of parts for which a partition of n with this many parts might be
graphical. The largest number of parts j for which cn [j] and fn [j] are both nonzero is n/2 + 1
based on the Wang and Kleitman characterization [20]. In Table 10 we show the individual
counts of forcibly connected graphical partitions of 20 based on the largest part. Counts for
the largest part less than 3 or greater than 10 are not shown since those counts are all 0.
Clearly n/2 is the maximum largest part of any forcibly connected graphical partition of n
since n/2, 1, 1, · · · , 1 (n/2 copies of 1) is a forcibly connected graphical partition of n and no
graphical partition of n has its largest part greater than n/2. However, similar to the case
of M(n), the minimum largest part, m(n), of any forcibly connected graphical partition of
n does not seem to be easily obtainable. Clearly m(n) grows at most like √n since for every
large even n it has a graphical partition with about √n parts and all parts about √n − 1
and this graphical partition is forcibly connected. In Table 11 we show several values of
m(n). They are obtained while we exhaustively generate all graphical partitions of n and
keep a record of the minimum largest part. The fact that m(20) = 3 agrees with the results
of Table 10 where l20 [j] = 0 for all j < 3. As a side note, the minimum largest part of any
potentially connected graphical partition of n is clearly 2 since 2, 2, · · · , 2 (n/2 copies) is a
potentially connected graphical partition of n while 1, 1, · · · , 1 (n copies) is not.
4.3 Questions and conjectures
Based on the available enumerative results we ask the following questions and make certain
conjectures:
1. What is the growth order of Df (n) relative to D(n)? Note that the exact asymptotic
order of D(n) is unknown yet. (Some upper and lower bounds of D(n) are known. See
Burns [4].) We conjecture limn→∞ Df (n)/D(n) = 1. That is, almost all zero-free graphical degree
sequences of length n are forcibly connected. If this is true, then it is a stronger result than
the known result (see [21]) limn→∞ Dc (n)/D(n) = 1 since Df (n) ≤ Dc (n) ≤ D(n). Furthermore,
we conjecture that Df (n)/D(n) is monotonically increasing when n ≥ 8. Let Dck (n) and
Dfk (n) denote the number of potentially and forcibly k-connected graphical degree sequences
of length n respectively. It is already known from [21] that limn→∞ Dck (n)/D(n) ≠ 1 when k ≥ 2.
Clearly we also have limn→∞ Dfk (n)/D(n) ≠ 1 when k ≥ 2. What can be said about the relative
orders of Dck (n), Dfk (n) and D(n) when k ≥ 2?
2. What is the growth order of gf (2n) relative to g(2n)? Note that the exact asymptotic
order of g(2n) is unknown yet. We conjecture limn→∞ gf (2n)/g(2n) = 0. That is, almost none
of the graphical partitions of 2n are forcibly connected. Furthermore, we conjecture that
gf (2n)/g(2n) is monotonically decreasing when n ≥ 5. Let gck (n) and gfk (n) denote the
number of potentially and forcibly k-connected graphical partitions of n respectively. What
can be said about the relative orders of gck (n), gfk (n) and g(n) when k ≥ 2?
3. We conjecture that the numbers of forcibly connected graphical partitions of N with
exactly n parts, when N runs through 2n − 2, 2n, · · · , n(n − 1), give a unimodal sequence.
4. Let t(n) be the smallest positive integer such that t(n)(t(n) − 1) ≥ n. We conjecture
that the numbers of forcibly connected graphical partitions of n with j parts, when j runs
through t(n), t(n) + 1, · · · , n/2 + 1, give a unimodal sequence.
5. What is the growth order of M(n), the minimum largest term in any forcibly connected
graphical sequence of length n? Is there a constant C > 0 such that limn→∞ M(n)/n = C? Is
there an efficient algorithm to compute M(n)?
6. What is the growth order of m(n), the minimum largest term in any forcibly connected
graphical partition of n? Is there a constant C > 0 such that limn→∞ m(n)/√n = C? Is there an
efficient algorithm to compute m(n)?
7. We conjecture that the numbers of forcibly connected graphical partitions of an even
n with the largest part exactly ∆, when ∆ runs through m(n), m(n) + 1, · · · , n/2, give a
unimodal sequence.
8. We showed all these decision problems to test whether a given graphical degree
sequence is forcibly k-connected to be in co-NP for fixed k ≥ 1. Are they co-NP-hard?
Is the decision problem for k + 1 inherently harder than for k?
5 Conclusions
In this paper we presented an efficient algorithm to test whether a given graphical degree
sequence is forcibly connected or not and its extensions to test forcibly k-connectedness of
graphical degree sequences for fixed k ≥ 2. Through performance evaluations on a wide range
of long random graphical degree sequences we demonstrated its average case efficiency and
we believe that it runs in polynomial time on average. We then incorporated this testing
algorithm into existing algorithms that enumerate zero-free graphical degree sequences of
length n and graphical partitions of an even integer n to obtain some enumerative results
about the number of forcibly connected graphical degree sequences of length n and forcibly
connected graphical partitions of n. We proved some simple observations related to the
available numerical results and made several conjectures. We are excited that there is a lot
of room for further research in this direction.
References
[1] Tiffany M. Barnes and Carla D. Savage. A Recurrence for Counting Graphical Partitions. Electronic Journal of Combinatorics, 2, 1995.
[2] Tiffany M. Barnes and Carla D. Savage. Efficient generation of graphical partitions.
Discrete Applied Mathematics, 78(1–3):17–26, 1997.
[3] F. T. Boesch. The strongest monotone degree condition for n-connectedness of a graph.
Journal of Combinatorial Theory, Series B, 16(2):162–165, 1974.
[4] Jason Matthew Burns. The Number of Degree Sequences of Graphs. PhD thesis, Massachusetts Institute of Technology. Dept. of Mathematics, 2007.
[5] Gary Chartrand, S. F. Kapoor, and Hudson V. Kronk. A sufficient condition for n-connectedness of graphs. Mathematika, 15(1):51–52, 1968.
[6] S. A. Choudum. On forcibly connected graphic sequences. Discrete Mathematics,
96(3):175–181, 1991.
[7] P. Erdős and T. Gallai. Graphs with given degree of vertices. Mat. Lapok, 11:264–274,
1960.
[8] P. Erdős and L. B. Richmond. On graphical partitions. Combinatorica, 13(1):57–63,
1993.
[9] S. L. Hakimi. On realizability of a set of integers as degrees of the vertices of a linear
graph. I. Journal of the Society for Industrial and Applied Mathematics, 10(3):496–506,
1962.
[10] G. H. Hardy and S. Ramanujan. Asymptotic formulae in combinatory analysis. Proc.
London Math. Soc., 17:75–115, 1918.
[11] Václav Havel. A remark on the existence of finite graphs. Časopis pro pěstování matematiky, 80(4):477–480, 1955.
[12] Antal Iványi, Gergő Gombos, Loránd Lucz, and Tamás Matuszka. Parallel enumeration
of degree sequences of simple graphs II. Acta Universitatis Sapientiae, Informatica,
5(2):245–270, 2013.
[13] Axel Kohnert. Dominance Order and Graphical Partitions. Electronic Journal of Combinatorics, 11(1), 2004.
[14] Boris Pittel. Confirming Two Conjectures About the Integer Partitions. Journal of
Combinatorial Theory, Series A, 88(1):123–135, 1999.
[15] S. B. Rao. A survey of the theory of potentially P-graphic and forcibly P-graphic degree
sequences, pages 417–440. Springer Berlin Heidelberg, 1981.
[16] C. C. Rousseau and F. Ali. A Note on Graphical Partitions. Journal of Combinatorial
Theory, Series B, 64(2):314–318, 1995.
[17] E. Ruch and I. Gutman. The Branching Extent of Graphs. Journal of Combinatorics,
Information and System Sciences, 4(4):285–295, 1979.
[18] Frank Ruskey, Robert Cohen, Peter Eades, and Aaron Scott. Alley CATs in search of
good homes. In 25th S.E. Conference on Combinatorics, Graph Theory, and Computing,
volume 102, pages 97–110. Congressus Numerantium, 1994.
[19] Gerard Sierksma and Han Hoogeveen. Seven Criteria for Integer Sequences Being
Graphic. Journal of Graph Theory, 15(2):223–231, 1991.
[20] D. L. Wang and D. J. Kleitman. On the Existence of n-Connected Graphs with Prescribed Degrees (n≥2). Networks, 3(3):225–239, 1973.
[21] Kai Wang. Efficient counting of degree sequences. https://arxiv.org/abs/1604.04148,
Under Review.
| 0 |
Disruptive Event Classification using PMU
Data in Distribution Networks
I. Niazazari and H. Livani, Member, IEEE
Abstract— Proliferation of advanced metering devices with high
sampling rates in distribution grids, e.g., micro-phasor
measurement units (μPMU), provides unprecedented potentials
for wide-area monitoring and diagnostic applications, e.g.,
situational awareness, health monitoring of distribution assets.
Unexpected disruptive events interrupting the normal operation
of assets in distribution grids can eventually lead to permanent
failure with expensive replacement cost over time. Therefore,
disruptive event classification provides useful information for
preventive maintenance of the assets in distribution networks.
Preventive maintenance provides wide range of benefits in terms
of time, avoiding unexpected outages, maintenance crew
utilization, and equipment replacement cost. In this paper, a PMUdata-driven framework is proposed for classification of disruptive
events in distribution networks. The two disruptive events, i.e.,
malfunctioned capacitor bank switching and malfunctioned
regulator on-load tap changer (OLTC) switching are considered
and distinguished from the normal abrupt load change in
distribution grids. The performance of the proposed framework is
verified using the simulation of the events in the IEEE 13-bus
distribution network. The event classification is formulated using
two different algorithms as; i) principal component analysis (PCA)
together with multi-class support vector machine (SVM), and ii)
autoencoder along with softmax classifier. The results
demonstrate the effectiveness of the proposed algorithms and
satisfactory classification accuracies.
Index Terms—Classification, PMU, Event Detection, SVM
I. INTRODUCTION
With the advent of advanced metering devices, e.g., phasor
measurement units (PMUs), and micro-PMUs (μPMUs), power
transmission and distribution networks have become more
intelligent, reliable and efficient [1], [2]. Most of the early
measurement devices have had the limited measuring capacity,
while the multi-functional capabilities of PMUs have made
them important monitoring assets to power networks. PMUs
have been used for several monitoring and control applications
in transmission grids, e.g., state estimation [3], dynamic
stability assessment [4], event diagnostics and classification [5].
Recently, the use of PMUs and μPMUs for several monitoring
and control applications in distribution networks have been
introduced. In [6], a fault location algorithm in distribution
networks have been proposed using PMU data. In [7], PMUs
are utilized in distribution networks with distributed generation
for three different applications, namely, state estimation,
protection and instability prediction. In [8], PMUs are deployed
in distribution grids for measurement of synchronized harmonic
phasors. Additionally, PMUs can help power grids to restore
quicker in case of cutting off the energy supply by providing
voltage, current and frequency measurements for reclosing the
circuit breakers, e.g., 2008 Florida blackout [9]. In [10], PMU
deployment for state estimation in active distribution networks
is discussed. Reference [11] presents the use of PMU data for
abnormal situation detection in distribution networks.
Event detection is an ongoing field of study, and can be a
challenging task due to different types of correlated events in
distribution grids, e.g., switching vs. load changing. Therefore,
distinguishing the disruptive events from one another, and
differentiating them from a normal condition of the network,
requires advanced data-driven frameworks. Several different
techniques are proposed in the literature for classification of
events in power networks. In [12], a probabilistic neural
network along with S-transform is utilized to classify power
quality disturbances. Partial discharge pattern recognition is
conducted by applying fuzzy decision tree method [13], and
sparse representation and artificial neural network [14].
Support vector machine (SVM) is a widely used method for
event classification, e.g., fault location [15], power system
security assessment [16], and transient stability analysis [17].
Accurate distribution event detection and classification
results in accurate preventive maintenance scheduling of the
critical assets based on the warning signs of the pending
electrical failures. Preventive maintenance is a beneficial task
in terms of time, equipment replacement cost, maintenance
crew utilization, avoiding unexpected outages, and
consequently, extending the life of the critical assets. Real-time
data analytics can help to detect multiple failures, along with
offering online monitoring of feeder operations. Therefore, it
can provide utilities with useful information about faulty
equipment in particular parts of the network. In [18], the authors
have used highly sensitive waveform recorders for gathering
data and improving feeders’ visibility and operational
efficiency. The collected data from waveform recorders is used
for incipient equipment failures detection on distribution
feeders [19]. In [20], a data-driven methodology is presented
for classification of five disruptive events, i.e., cable failure,
hot-line clamp failure, vegetation intrusion resulting in frequent
fault, fault induced conductor slap, and capacitor controller
malfunction.
In this paper, we propose a framework for classification of
two disruptive events from the normal abrupt load change in
distribution networks using PMU data. These classes are
malfunctioned capacitor bank switching, malfunctioned
regulator on-load tap changer (OLTC) switching, and abrupt
load changing. The disruptive events may not cause immediate
failure, however, they will cause permanent equipment failure
and expensive replacement cost over time. The classification of
these events prioritizes the preventive maintenance scheduling
and leads to higher life expectancy of distribution assets. In this
paper, the classification algorithms are developed using (1)
principal component analysis (PCA) along with SVM, and (2)
autoencoder along with softmax layer.
The rest of this paper is organized as follows: in Section II,
the problem statement and the proposed methodology is
presented. Section III presents the simulation results, and the
conclusion and future works are presented in section IV.
II. PROBLEM STATEMENT AND METHODOLOGY
Data analytics plays a major role in power system post-event
analysis, such as event diagnosis and preventive maintenance
scheduling. These applications help save maintenance time and cost,
and prevent unexpected outages due to permanent failure of critical
assets. In this paper, two different disruptive events, i.e.,
malfunctioned capacitor bank switching and malfunctioned regulator
OLTC switching, along with normal abrupt load changing, are
categorized as three different classes. Malfunctioned capacitor bank
switching events are caused by failures in mechanical switches and
can happen in less than 2 cycles, i.e., 32 msec. Malfunctioned
regulator OLTC switching can be caused by ageing and degradation of
the selector switches. In a malfunctioned regulator OLTC switching
event, the tap is dislocated and, after a while, relocated to its
original position. In this paper, we propose a PMU-data-driven
classification framework to distinguish these two disruptive events
from normal abrupt load changing in distribution networks. The
rationale is that normal abrupt load changing has similar signatures
in the PMU data, so advanced data analytics is required to
distinguish the disruptive events from normal load changes.
The classification input consists of six features extracted from the
PMU data: the change of voltage magnitude between two consecutive
samples (v(n+1)-v(n)), the change of voltage angle between two
consecutive samples (δv(n+1)-δv(n)), the current magnitude (p.u.),
the current angle (degree), the change of current magnitude between
two consecutive samples (i(n+1)-i(n)), and the change of current
angle between two consecutive samples (δi(n+1)-δi(n)). Since these
features change over time, the input is a feature matrix, shown in
(1), rather than a feature vector:

    X = [ f_1^(1)  ⋯  f_1^(p)
            ⋮      ⋱     ⋮
          f_n^(1)  ⋯  f_n^(p) ]_(n×p)                         (1)

where X is the feature matrix and f_i^(t) is the value of feature i
at time t.
In this paper, we consider PMU data with two reporting rates: i) 60
samples per second (sps), e.g., SEL 651 [21], and ii) 120 sps, e.g.,
micro-PMUs (μPMUs) developed at the University of California,
Berkeley [22]. Additionally, it is assumed that capacitor bank
switching takes about 1 cycle (16.67 ms) [23], that on-load tap
changer switching takes about 30-200 ms [24], and that the PMUs are
capable of capturing these events.
In this paper, we present disruptive event classification using two
different classification methods: (1) PCA along with a multi-class
SVM, and (2) a neural network-based toolbox, i.e., an autoencoder,
along with a softmax layer classifier. Figure 1 illustrates the
flowchart of these two methods, which are discussed in detail in the
next subsections.
Fig. 1. a) PCA+SVM method (left), and b) Autoencoder+Softmax (right)
events classification flowcharts
A. PCA+SVM event classification algorithm
The features extracted from PMU data change over time, so they are
presented as the feature matrix in (1). On the other hand, the input
to the SVM classifier is a vector, so the matrices must be
transformed into vectors. In this paper, PCA is utilized to obtain
the dominant eigenvalues of the feature matrix, which are used as the
input to a multi-class SVM. A summary of PCA and SVM is presented
below.
A.1 Principal Component Analysis (PCA)
Principal component analysis (PCA) is a technique for reducing the
dimensionality of a problem and extracting the dominant features of a
system. It can also be used for pattern recognition in
high-dimensional data and has a wide range of applications, e.g.,
image processing, face recognition, and data mining. For further
details refer to [25].
A.2 Support Vector Machine (SVM)
Support vector machine (SVM) is a supervised classification algorithm
that uses linear or nonlinear hyperplanes to separate classes from
each other. The goal is to maximize the margin between the
hyperplanes, and therefore the problem is formulated as a constrained
optimization problem. Moreover, SVM can be applied to nonlinear data
sets by incorporating the kernel trick, which maps the original data
into a higher-dimensional space. A Gaussian kernel function is used
in this paper. Additional discussion can be found in [26].
For multi-class classification, several algorithms have been proposed
in the past. In this paper, a one-against-all algorithm is used,
which classifies each event class against all the other classes using
several binary SVMs.
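As a concrete, purely illustrative sketch of this pipeline, the following Python snippet uses scikit-learn (our own choice of tooling, not specified in the paper) to flatten each feature matrix, reduce it with PCA, and train one-against-all SVMs with a Gaussian (RBF) kernel. Note that the paper describes feeding the dominant eigenvalues of the feature matrix to the SVM; here principal-component scores are used as a common variant, and all array sizes and labels below are synthetic placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_experiments, n_features, n_samples = 450, 6, 60        # e.g. 6 PMU features over 1 s at 60 sps
X_raw = rng.normal(size=(n_experiments, n_features, n_samples))  # stand-in for the feature matrices in (1)
y = rng.integers(1, 4, size=n_experiments)               # classes 1, 2, 3

X = X_raw.reshape(n_experiments, -1)                     # flatten each n x p feature matrix into a vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.5, random_state=0)

clf = make_pipeline(
    PCA(n_components=10),                                # keep the dominant components
    OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),  # one-against-all Gaussian-kernel SVMs
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

With real feature matrices in place of the random stand-ins, the same pipeline reproduces the 50%-training scenario evaluated in Section III.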
B. Autoencoder + Softmax event classification algorithm
As the second method, the event classification is carried out using a
neural network-based toolbox, i.e., an autoencoder, and a softmax
layer. In this method, the new feature matrix is first normalized
using the mean and standard deviation of the historical feature
matrices. The rows of the normalized feature matrix are then stacked
on top of each other to create an input vector. This feature vector
is used as the input to the autoencoder, which compresses the input
vector. The softmax layer then classifies the events using the
compressed vector. Fig. 1.b shows the flowchart, and a summary is
presented in the following.
B.1 Autoencoder
An autoencoder belongs to the artificial neural network
family and is used as an unsupervised learning algorithm. It
takes the data as input and tries to reconstruct the data through
two layers of coding and decoding. The learning process is
carried out using back propagation algorithms and the goal of
training is to minimize the reconstruction error.
An autoencoder takes x ∈ R^d as input and maps it onto z ∈ R^d' as

    z = s_1(W x + b)                                          (3)

where s_1, W, and b are the element-wise sigmoid function, the weight
matrix, and the bias term, respectively. Then, z is mapped back onto
R^d to reconstruct the input using

    x' = s_2(W' z + b')                                       (4)

where s_2, W', and b' are the element-wise sigmoid function, the
weight matrix, and the bias term of the decoder, respectively [27].
B.2 Softmax Classifier
The softmax function is a generalization of the logistic function
that outputs a multiclass probability distribution as opposed to a
binary probability distribution. It serves as the output layer of the
autoencoder. It takes an m-vector x as input and outputs a vector of
real values between 0 and 1, defined as

    f_j(x) = e^{x_j} / Σ_{i=1}^{m} e^{x_i},   for j = 1, 2, ..., m        (5)

where f_j(x) is the predicted probability for class j. Further
discussion can be found in [28].
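The following PyTorch sketch (our own illustrative choice of framework and hyperparameters, not the paper's implementation) mirrors Eqs. (3)-(5): a single-hidden-layer autoencoder with sigmoid activations is first trained to minimize reconstruction error, and a softmax output layer is then trained on the compressed codes.

import torch
import torch.nn as nn

torch.manual_seed(0)
d, d_hidden, n_classes = 360, 32, 3          # illustrative sizes (stacked feature-vector length, code size)
X = torch.randn(450, d)                      # stand-in for the normalized, stacked feature vectors
y = torch.randint(0, n_classes, (450,))      # class labels

encoder = nn.Sequential(nn.Linear(d, d_hidden), nn.Sigmoid())   # z = s1(W x + b), cf. (3)
decoder = nn.Sequential(nn.Linear(d_hidden, d), nn.Sigmoid())   # x' = s2(W' z + b'), cf. (4)
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):                         # unsupervised training to minimize reconstruction error
    ae_opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    ae_opt.step()

softmax_layer = nn.Linear(d_hidden, n_classes)       # the softmax (5) is applied inside cross_entropy
clf_opt = torch.optim.Adam(softmax_layer.parameters(), lr=1e-2)
for _ in range(200):
    clf_opt.zero_grad()
    logits = softmax_layer(encoder(X).detach())
    loss = nn.functional.cross_entropy(logits, y)
    loss.backward()
    clf_opt.step()

pred = softmax_layer(encoder(X)).argmax(dim=1)
print("training accuracy:", (pred == y).float().mean().item())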
III. SIMULATION AND RESULTS
In this paper, the proposed disruptive event classification is
evaluated using simulation studies. The PMU data for classification
is generated by simulating the IEEE 13-node distribution system shown
in Fig. 2. This distribution network has three voltage levels: 115
kV, 4.16 kV, and 0.48 kV. The downstream network is connected to the
upstream network via a 5000 kVA transformer. In this network there
are (1) three single-phase regulators between bus 650 and bus 632,
(2) a transformer between bus 633 and bus 634, (3) a three-phase
capacitor bank at bus 675, (4) a single-phase capacitor at bus 611,
and (5) 15 distributed loads at different buses. We assume that one
PMU is placed at bus 671, measuring the voltage at this bus and the
current from bus 637 to bus 671. Gaussian noise with zero mean and a
standard deviation of 1% of the measured values is then added to the
voltage and current measurements (magnitudes and angles) to model the
PMU inaccuracies.
Fig. 2. The IEEE 13 Node Test Feeder with one PMU [29]
Figure 3 shows the PMU voltage magnitude over one second for the
three classes, i.e., malfunctioned capacitor bank switching,
malfunctioned OLTC switching of the voltage regulator, and abrupt
load changing. These plots show the phase-a voltage magnitude
measurement.
Fig. 3. PMU voltage magnitudes for the three classes: a) malfunctioned
capacitor bank switching, b) malfunctioned OLTC switching, c) abrupt
load changing
In order to create enough experiments for class 1, the malfunctioned
three-phase capacitor bank switching event at bus 671 is simulated at
different loading levels. There are 15 different loads in the system,
and for each of these loads, 10 loading levels ranging from 50% up to
95% in 5% intervals are considered. Therefore, 150 different
experiments are simulated for class 1. Similarly, the same number of
experiments is simulated for the malfunctioned OLTC switching event
as the second class. Class 3 corresponds to normal abrupt load
changing, where it is assumed that one of the loads has a sudden
change at a time. The abrupt load change is simulated using 5%, 10%,
15%, 20%, and 25% increases or decreases of active and reactive
power. Therefore, the total number of class 3 experiments is also
150, and 450 experiments are generated across the three classes.
The proposed multi-class classification algorithms are then trained
and evaluated using the simulated PMU data. The classifiers are
trained using x percent of the data, x ∈ (10, 90), selected randomly,
and the remaining data set is used to evaluate the classification
accuracy as

    Accuracy = (number of accurate classifications) / (total number of test cases)        (6)
In this paper, we have gradually increased the percentage of the
training data set and evaluated the confusion matrices and the
classification accuracies. Tables 1 and 2 show the confusion matrices
for the scenario with 50% of the data used for training and the rest
for evaluation, using PCA+SVM and autoencoder+softmax, respectively.
Table 1. Confusion matrix for the PCA+SVM method, with 50% of the data used for training and a 60 sps PMU

                                Predicted
Actual        Class 1        Class 2        Class 3        Non-classified
Class 1       53 (23.56%)    7 (3.11%)      12 (5.33%)     3 (1.33%)
Class 2       8 (3.56%)      54 (24%)       6 (2.67%)      7 (3.11%)
Class 3       10 (4.44%)     2 (0.88%)      62 (27.56%)    1 (0.44%)

Table 2. Confusion matrix for the Autoencoder+Softmax method, with 50% of the data used for training and a 60 sps PMU

                                Predicted
Actual        Class 1        Class 2        Class 3        Non-classified
Class 1       63 (28%)       6 (2.66%)      6 (2.66%)      0 (0%)
Class 2       9 (4%)         59 (26.22%)    7 (3.11%)      0 (0%)
Class 3       5 (2.22%)      3 (1.33%)      67 (29.77%)    0 (0%)
As observed in the first row of Table 1, of the 75 experiments
corresponding to class 1, only 53 experiments are accurately
classified, while 7, 12, and 3 experiments are misclassified as class
2, class 3, and non-classified, respectively. The percentage next to
each number is that number's share of all test cases.
The classification accuracy is then calculated using a leave-one-out
scenario, which is a standard test for evaluating machine learning
techniques [30]. In this scenario, all the experiments except one,
i.e., 449 of the experiments, are used for training the classifiers,
and the single remaining experiment is tested using the trained
classifiers. This process is repeated for every experiment, i.e., 450
times, starting from the first experiment up to the last one. The
accuracy is then calculated as the number of accurately classified
experiments divided by the total number of experiments. The
leave-one-out accuracies are 86.1% and 91.2% using PCA + SVM and
Autoencoder + Softmax, respectively, which shows the better
performance of the latter method for disruptive event classification
in distribution grids.
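A minimal sketch of this leave-one-out protocol follows, using scikit-learn's LeaveOneOut splitter (our own choice of tooling); make_classifier is a hypothetical factory that returns a fresh instance of either of the two classifiers above, and X, y are NumPy arrays of shape (450, d) and (450,).

from sklearn.model_selection import LeaveOneOut

def leave_one_out_accuracy(X, y, make_classifier):
    """Train on all experiments except one, test on the held-out one, and repeat for every experiment."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = make_classifier()
        clf.fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)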
Finally, the classification accuracies are calculated for different
percentages of training cases, starting from 20% up to 90% in 10%
intervals. Fig. 4 shows the results for the PCA+SVM method for the
two sampling rates, 60 sps and 120 sps. As observed from Fig. 4, the
classification accuracies increase as more training data is used.
Additionally, PMUs with 60 sps result in (a) 62% accuracy with 20% of
the data used for training, and (b) 84% accuracy with 90% of the data
used for training, while PMUs with 120 sps result in higher
accuracies for the same percentage of training experiments. As the
results verify, higher sampling rates lead to better capturing of the
events and, consequently, better classification of the disruptive
events.
Fig. 4. Classification accuracy of the PCA + SVM method for different
training percentages and two different sampling rates
Fig. 5 shows the results for the autoencoder+softmax method for the
two sampling rates, i.e., 60 and 120 sps. Similar to PCA+SVM, as the
training percentage increases, the accuracy increases. Additionally,
as the sampling rate gets higher, the classification accuracies
improve for the same percentage of training cases. Furthermore,
comparing Figs. 4 and 5 verifies that the autoencoder+softmax method
outperforms the PCA+SVM method in all scenarios.
Fig. 5. Classification accuracy of autoencoder and softmax method for
different training percentage and two different sampling rates
IV. CONCLUSION AND FUTURE WORKS
Data-driven event detection in distribution grids provides essential
operational and maintenance tools for next-generation smart grids
with advanced measurement devices, e.g., micro-phasor measurement
units (μPMUs). This paper presents a new framework for classification
of disruptive events using PMU data in distribution grids. Two
disruptive events are considered, malfunctioned capacitor bank
switching and malfunctioned regulator on-load tap changer (OLTC)
switching, both of which produce PMU data patterns similar to normal
abrupt load changes. The end result of this paper is a framework that
can be used for preventive maintenance scheduling of critical assets
in distribution grids. The event classification is developed using
two multi-class classification algorithms for distinguishing between
the two disruptive events and the normal load changing event. The
first method is based on principal component analysis (PCA) along
with a multi-class support vector machine (SVM), and the second
method is designed using an autoencoder accompanied by a softmax
layer classifier. The data for training and testing is generated by
simulating the IEEE 13-bus distribution network with a PMU reporting
at 60 or 120 samples per second (sps). The classification results
verify the acceptable performance of the proposed framework for
distinguishing the two disruptive events from normal load changes,
and they also show the superiority of the second method over the
first. For future work, the authors will apply the proposed framework
to larger standard networks with more disruptive events, e.g.,
external voltage disturbances, voltage sags, and reconfiguration.
Furthermore, early-fusion and late-fusion techniques will be utilized
to combine data from several PMUs for wide-area disruptive event
classification and localization frameworks.
REFERENCES
[1] B. Singh, N.K. Sharma, A.N. Tiwari, K.S. Verma, and S.N. Singh, "Applications of phasor measurement units (PMUs) in electric power system networks incorporated with FACTS controllers," International Journal of Engineering, Science and Technology, vol. 3, no. 3, pp. 64-82, Jun. 2011.
[2] N.H. Abbasy and H.M. Ismail, "A unified approach for the optimal PMU location for power system state estimation," IEEE Transactions on Power Systems, vol. 24, no. 2, pp. 806-813, May 2009.
[3] R. Jalilzadeh Hamidi, H. Khodabandehlou, H. Livani, and M. Sami Fadali, "Application of distributed compressive sensing to power system state estimation," NAPS, pp. 1-6, Oct. 2015.
[4] M. Zhou, V.A. Centeno, J.S. Thorp, and A.G. Phadke, "An alternative for including phasor measurements in state estimators," IEEE Transactions on Power Systems, vol. 21, no. 4, pp. 1930-1937, Nov. 2006.
[5] A.L. Liao, E.M. Stewart, and E.C. Kara, "Micro-synchrophasor data for diagnosis of transmission and distribution level events," IEEE PES Transmission and Distribution Conference and Exposition (T&D), pp. 1-5, May 2016.
[6] M. Majidi, M. Etezadi-Amoli, and M.S. Fadali, "A novel method for single and simultaneous fault location in distribution networks," IEEE Transactions on Power Systems, vol. 30, no. 6, pp. 3368-3376, Nov. 2015.
[7] F. Ding and C.D. Booth, "Applications of PMUs in power distribution networks with distributed generation," 46th International Universities' Power Engineering Conference (UPEC), pp. 1-5, Sep. 2011.
[8] A. Carta, N. Locci, and C. Muscas, "A PMU for the measurement of synchronized harmonic phasors in three-phase distribution networks," IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 10, pp. 3723-3730, Oct. 2009.
[9] M. Wache and D.C. Murray, "Application of synchrophasor measurements for distribution networks," IEEE Power and Energy Society General Meeting, pp. 1-4, Jul. 2011.
[10] J. Liu, J. Tang, F. Ponci, A. Monti, C. Muscas, and P.A. Pegoraro, "Trade-offs in PMU deployment for state estimation in active distribution grids," IEEE Transactions on Smart Grid, vol. 3, no. 2, pp. 915-924, Jun. 2012.
[11] D. Phillips and T. Overbye, "Distribution system event detection and classification using local voltage measurements," Power and Energy Conference at Illinois (PECI), pp. 1-4, Feb. 2014.
[12] S. Mishra, C.N. Bhende, and B.K. Panigrahi, "Detection and classification of power quality disturbances using S-transform and probabilistic neural network," IEEE Transactions on Power Delivery, vol. 23, no. 1, pp. 280-287, Jan. 2008.
[13] T.K. Abdel-Galil, R.M. Sharkawy, M.M. Salama, and R. Bartnikas, "Partial discharge pattern classification using the fuzzy decision tree approach," IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 6, pp. 2258-2263, Dec. 2005.
[14] M. Majidi, M.S. Fadali, M. Etezadi-Amoli, and M. Oskuoee, "Partial discharge pattern recognition via sparse representation and ANN," IEEE Transactions on Dielectrics and Electrical Insulation, vol. 22, no. 2, pp. 1061-1070, Apr. 2015.
[15] H. Livani and C.Y. Evrenosoğlu, "A fault classification and localization method for three-terminal circuits using machine learning," IEEE Transactions on Power Delivery, vol. 28, no. 4, pp. 2282-2290, Oct. 2013.
[16] S. Kalyani and K.S. Swarup, "Classification and assessment of power system security using multiclass SVM," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 41, no. 5, pp. 753-758, Sep. 2011.
[17] L.S. Moulin, A.A. Da Silva, M.A. El-Sharkawi, and R.J. Marks, "Support vector machines for transient stability analysis of large-scale power systems," IEEE Transactions on Power Systems, vol. 19, no. 2, pp. 818-825, May 2004.
[18] J.A. Wischkaemper, C.L. Benner, B.D. Russell, and K.M. Manivannan, "Waveform analytics-based improvements in situational awareness, feeder visibility, and operational efficiency," IEEE PES T&D Conference and Exposition, pp. 1-5, Apr. 2014.
[19] J.S. Bowers, A. Sundaram, C.L. Benner, and B.D. Russell, "Outage avoidance through intelligent detection of incipient equipment failures on distribution feeders," IEEE Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, pp. 1-7, Jul. 2008.
[20] J.A. Wischkaemper, C.L. Benner, and B.D. Russell, "A new monitoring architecture for distribution feeder health monitoring, asset management, and real-time situational awareness," IEEE PES Innovative Smart Grid Technologies (ISGT), pp. 1-7, Jan. 2012.
[21] SEL 611 relay. [Online]. Available: https://selinc.com/
[22] Y. Zhou, R. Arghandeh, I. Konstantakopoulos, S. Abdullah, A. von Meier, and C.J. Spanos, "Abnormal event detection with high resolution micro-PMU data," Power Systems Computation Conference (PSCC), pp. 1-7, Jun. 2016.
[23] L.P. Hayes, First Energy Corp., Akron, OH, 2005. [Online]. Available: http://castlepowersolutions.biz/Components/Joslyn%20HiVoltage/Technical%20Papers/First_Energy_Corporation_Paper_Doble_Conference_2005_VBM_Failures.pdf
[24] R. Jongen, E. Gulski, and J. Erbrink, "Degradation effects and diagnosis of on-load tap changer in power transformers," 2014. [Online]. Available: http://www.unistuttgart.de/ieh/forschung/veroeffentlichungen/2014_Wild_CMD_Korea_Degradation_Effects_and_Diagnosis_of_Onload_Tap_Changer....pdf
[25] S. Wold, K. Esbensen, and P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1-3, pp. 37-52, Aug. 1987.
[26] H. Livani and C.Y. Evrenosoğlu, "A fault classification method in power systems using DWT and SVM classifier," IEEE PES Transmission and Distribution Conference and Exposition (T&D), pp. 1-5, May 2012.
[27] P. Vincent, H. Larochelle, Y. Bengio, and P.A. Manzagol, "Extracting and composing robust features with denoising autoencoders," Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103, Jul. 2008.
[28] R.S. Sutton and A.G. Barto, "Reinforcement learning: An introduction," vol. 1, no. 1, Cambridge: MIT Press, Mar. 1998.
[29] W.H. Kersting, "Radial distribution test feeders," IEEE Power Engineering Society Winter Meeting, vol. 2, pp. 908-912, 2001.
[30] G.C. Cawley and N.L. Talbot, "Efficient leave-one-out cross-validation of kernel Fisher discriminant classifiers," Pattern Recognition, vol. 36, no. 11, pp. 2585-2592, Nov. 2003.
Implicit Regularization in Nonconvex Statistical Estimation:
Gradient Descent Converges Linearly for Phase Retrieval,
Matrix Completion and Blind Deconvolution
arXiv:1711.10467v2 [cs.LG] 14 Dec 2017
Cong Ma∗   Kaizheng Wang∗   Yuejie Chi†   Yuxin Chen‡
November 2017; Revised December 2017
Abstract
Recent years have seen a flurry of activities in designing provably efficient nonconvex procedures for
solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g. trimming, regularized cost, projection) in
order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior
theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks
performance guarantees.
This paper uncovers a striking phenomenon in nonconvex optimization: even in the absence of explicit
regularization, gradient descent enforces proper regularization implicitly under various statistical models.
In fact, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting
of points incoherent with the sampling mechanism. This “implicit regularization” feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results
in substantial computational savings. Focusing on three fundamental statistical estimation problems,
i.e. phase retrieval, low-rank matrix completion, and blind deconvolution, we establish that gradient
descent achieves near-optimal statistical and computational guarantees without explicit regularization.
In particular, by marrying statistical modeling with generic optimization theory, we develop a general
recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument. As
a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal
error control — measured entrywise and by the spectral norm — which might be of independent interest.
∗ Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA; Email: {congm, kaizheng}@princeton.edu.
† Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Email: [email protected].
‡ Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA; Email: [email protected].

Contents
1 Introduction
  1.1 Nonlinear systems and empirical loss minimization
  1.2 Nonconvex optimization via regularized gradient descent
  1.3 Regularization-free procedures?
  1.4 Numerical surprise of unregularized gradient descent
  1.5 This paper
  1.6 Notations
2 Implicit regularization – a case study
  2.1 Gradient descent theory revisited
  2.2 Local geometry for solving random quadratic systems
  2.3 Which region enjoys nicer geometry?
  2.4 Implicit regularization
  2.5 A glimpse of the analysis: a leave-one-out trick
3 Main results
  3.1 Phase retrieval
  3.2 Low-rank matrix completion
  3.3 Blind deconvolution
4 Related work
5 A general recipe for trajectory analysis
  5.1 General model
  5.2 Outline of the recipe
6 Analysis for phase retrieval
  6.1 Step 1: characterizing local geometry in the RIC
    6.1.1 Local geometry
    6.1.2 Error contraction
  6.2 Step 2: introducing the leave-one-out sequences
  6.3 Step 3: establishing the incoherence condition by induction
  6.4 The base case: spectral initialization
7 Analysis for matrix completion
  7.1 Step 1: characterizing local geometry in the RIC
    7.1.1 Local geometry
    7.1.2 Error contraction
  7.2 Step 2: introducing the leave-one-out sequences
  7.3 Step 3: establishing the incoherence condition by induction
  7.4 The base case: spectral initialization
8 Analysis for blind deconvolution
  8.1 Step 1: characterizing local geometry in the RIC
    8.1.1 Local geometry
    8.1.2 Error contraction
  8.2 Step 2: introducing the leave-one-out sequences
  8.3 Step 3: establishing the incoherence condition by induction
  8.4 The base case: spectral initialization
9 Discussions
A Proofs for phase retrieval
  A.1 Proof of Lemma 1 — A.6 Proof of Lemma 6
B Proofs for matrix completion
  B.1 Proof of Lemma 7 — B.7 Proof of Lemma 13 (including Lemmas 22-25)
C Proofs for blind deconvolution
  C.1 Proof of Lemma 14 — C.7 Proof of Lemma 21 (including Lemmas 26-29 and Claim (224))
D Technical lemmas
  D.1 Technical lemmas for phase retrieval (matrix concentration inequalities; matrix perturbation bounds)
  D.2 Technical lemmas for matrix completion (orthogonal Procrustes problem; matrix concentration inequalities; matrix perturbation bounds)
  D.3 Technical lemmas for blind deconvolution (Wirtinger calculus; discrete Fourier transform matrices; complex-valued alignment; matrix concentration inequalities; matrix perturbation bounds)

1 Introduction
1.1 Nonlinear systems and empirical loss minimization
A wide spectrum of science and engineering applications calls for solutions to a nonlinear system of equations.
Imagine we have collected a set of data points y = {y_j}_{1≤j≤m}, generated by a nonlinear sensing system,

    y_j ≈ A_j(x^♮),  1 ≤ j ≤ m,

where x^♮ is the unknown object of interest, and the A_j's are certain nonlinear maps known a priori. Can we reconstruct the underlying object x^♮ in a faithful yet efficient manner? Problems of this kind abound in information and statistical science, prominent examples including low-rank matrix recovery [KMO10a, CR09], robust principal component analysis [CSPW11, CLMW11], phase retrieval [CSV13, JEH15], and neural networks [SJL17, ZSJ+17], to name just a few.
In principle, it is possible to attempt reconstruction by searching for a solution that minimizes the empirical loss, namely,

    minimize_x  f(x) = Σ_{j=1}^{m} | y_j − A_j(x) |^2.    (1)
Unfortunately, this empirical loss minimization problem is, in many cases, nonconvex, making it NP-hard in
general. This issue of non-convexity comes up in, for example, several representative problems that epitomize
the structures of nonlinear systems encountered in practice.1
• Phase retrieval / solving quadratic systems of equations. Imagine we are asked to recover an unknown object x^♮ ∈ R^n, but are only given the square modulus of certain linear measurements about the object, with all sign/phase information of the measurements missing. This arises, for example, in X-ray crystallography [CESV13], and in latent-variable models where the hidden variables are captured by the missing signs [CYC14]. To fix ideas, assume we would like to solve for x^♮ ∈ R^n in the following quadratic system of m equations

    y_j = (a_j^⊤ x^♮)^2,  1 ≤ j ≤ m,

where {a_j}_{1≤j≤m} are the known design vectors. One strategy is thus to solve the following problem

    minimize_{x∈R^n}  f(x) = (1/(4m)) Σ_{j=1}^{m} [ y_j − (a_j^⊤ x)^2 ]^2.    (2)
• Low-rank matrix completion. In many scenarios such as collaborative filtering, we wish to make predictions about all entries of an (approximately) low-rank matrix M^♮ ∈ R^{n×n} (e.g. a matrix consisting of users' ratings about many movies), yet only a highly incomplete subset of the entries are revealed to us [CR09]. For clarity of presentation, assume M^♮ to be rank-r (r ≪ n) and positive semidefinite (PSD), i.e. M^♮ = X^♮ X^♮⊤ with X^♮ ∈ R^{n×r}, and suppose we have only seen the entries

    Y_{j,k} = M^♮_{j,k} = (X^♮ X^♮⊤)_{j,k},  (j, k) ∈ Ω

within some index subset Ω of cardinality m. These entries can be viewed as nonlinear measurements about the low-rank factor X^♮. The task of completing the true matrix M^♮ can then be cast as solving

    minimize_{X∈R^{n×r}}  f(X) = (n^2/(4m)) Σ_{(j,k)∈Ω} ( Y_{j,k} − e_j^⊤ X X^⊤ e_k )^2,    (3)

where the e_j's stand for the canonical basis vectors in R^n.
1 Here, we choose different pre-constants in front of the empirical loss in order to be consistent with the literature of the respective problems. In addition, we only introduce the problem in the noiseless case for simplicity of presentation.
• Blind deconvolution / solving bilinear systems of equations. Imagine we are interested in estimating two signals of interest h^♮, x^♮ ∈ C^K, but only get to collect a few bilinear measurements about them. This problem arises from mathematical modeling of blind deconvolution [ARR14, LLSW16], which frequently arises in astronomy, imaging, communications, etc. The goal is to recover two signals from their convolution. Put more formally, suppose we have acquired m bilinear measurements taking the following form

    y_j = b_j^* h^♮ x^{♮*} a_j,  1 ≤ j ≤ m,

where a_j, b_j ∈ C^K are distinct design vectors (e.g. Fourier and/or random design vectors) known a priori. In order to reconstruct the underlying signals, one asks for solutions to the following problem

    minimize_{h,x∈C^K}  f(h, x) = Σ_{j=1}^{m} | y_j − b_j^* h x^* a_j |^2.

1.2 Nonconvex optimization via regularized gradient descent
First-order methods have been a popular heuristic in practice for solving nonconvex problems including (1). For instance, a widely adopted procedure is gradient descent, which follows the update rule

    x^{t+1} = x^t − η_t ∇f(x^t),  t ≥ 0,    (4)

where η_t is the learning rate (or step size) and x^0 is some proper initial guess. Given that it only performs a single gradient calculation ∇f(·) per iteration (which typically can be completed within near-linear time), this paradigm emerges as a candidate for solving large-scale problems. The concern is: whether x^t converges to the global solution and, if so, how long it takes for convergence, especially since (1) is highly nonconvex.
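To make the generic update rule (4) concrete, the following NumPy sketch runs vanilla gradient descent on the phase retrieval loss (2) with a standard spectral initialization. This is only an illustrative sketch under assumed problem sizes and step size (mirroring the numerical experiments in Section 1.4), not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
n, m, eta, T = 100, 1000, 0.1, 500
x_true = rng.normal(size=n)
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))          # rows are the design vectors a_j
y = (A @ x_true) ** 2                # quadratic measurements y_j = (a_j^T x)^2

def grad(x):
    # gradient of f(x) = (1/4m) * sum_j [(a_j^T x)^2 - y_j]^2
    Ax = A @ x
    return A.T @ ((Ax ** 2 - y) * Ax) / m

# Spectral initialization: leading eigenvector of (1/m) * sum_j y_j a_j a_j^T,
# rescaled by the estimated signal norm sqrt(mean(y)).
Y = (A.T * y) @ A / m
eigvals, eigvecs = np.linalg.eigh(Y)
x = eigvecs[:, -1] * np.sqrt(np.mean(y))

for _ in range(T):                   # vanilla gradient descent, cf. (4)
    x = x - eta * grad(x)

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("relative l2 error:", err / np.linalg.norm(x_true))

With m = 10n and η_t = 0.1, this small sketch already exhibits the fast, linear-looking convergence discussed in Section 1.4.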
Fortunately, despite the worst-case hardness, appealing convergence properties have been discovered in
various statistical estimation problems; the blessing being that the statistical models help rule out ill-behaved
instances. For the average case, the empirical loss often enjoys benign geometry, in a local region (or at least
along certain directions) surrounding the global optimum. In light of this, an effective nonconvex iterative
method typically consists of two stages:
1. a carefully-designed initialization scheme (e.g. spectral method);
2. an iterative refinement procedure (e.g. gradient descent).
This strategy has recently spurred a great deal of interest, owing to its promise of achieving computational
efficiency and statistical accuracy at once for a growing list of problems (e.g. [KMO10a, JNS13, CW15, SL16,
CLS15,CC17,LLSW16,LLB17]). However, rather than directly applying gradient descent (4), existing theory
often suggests enforcing proper regularization. Such explicit regularization enables improved computational
convergence by properly “stabilizing” the search directions. The following regularization schemes, among
others, have been suggested to obtain or improve computational guarantees. We refer to these algorithms
collectively as Regularized Gradient Descent.
• Trimming/truncation, which discards/truncates a subset of the gradient components when forming the descent direction. For instance, when solving quadratic systems of equations, one can modify the gradient descent update rule as

    x^{t+1} = x^t − η_t T(∇f(x^t)),    (5)

where T is an operator that effectively drops samples bearing too much influence on the search direction. This strategy [CC17, ZCL16, WGE17] has been shown to enable exact recovery with linear-time computational complexity and optimal sample complexity.
• Regularized loss, which attempts to optimize a regularized empirical risk

    x^{t+1} = x^t − η_t [ ∇f(x^t) + ∇R(x^t) ],    (6)

where R(x) stands for an additional penalty term in the empirical loss. For example, in low-rank matrix completion R(·) imposes penalty based on the ℓ2 row norm [KMO10a, SL16] as well as the Frobenius norm [SL16] of the decision matrix, while in blind deconvolution, it penalizes the ℓ2 norm as well as certain component-wise incoherence measure of the decision vectors [LLSW16, HH17, LS17].
• Projection, which projects the iterates onto certain sets based on prior knowledge, that is,

    x^{t+1} = P( x^t − η_t ∇f(x^t) ),    (7)

where P is a certain projection operator used to enforce, for example, incoherence properties. This strategy has been employed in both low-rank matrix completion [CW15, ZL16] and blind deconvolution [LLSW16].

Table 1: Prior theory for gradient descent (with spectral initialization)

                      Vanilla gradient descent                  Regularized gradient descent
                      sample       iteration     step           sample         iteration     type of
                      complexity   complexity    size           complexity     complexity    regularization
Phase retrieval       n log n      n log(1/ε)    1/n            n              log(1/ε)      trimming [CC17, ZCL16]
Matrix completion     n/a          n/a           n/a            nr^7 log n     r log(1/ε)    regularized loss [SL16]
                                                                nr^2 log^2 n   r^2 log(1/ε)  projection [CW15, ZL16]
Blind deconvolution   n/a          n/a           n/a            K poly log m   m log(1/ε)    regularized loss & projection [LLSW16]
Equipped with such regularization procedures, existing works uncover appealing computational and statistical properties under various statistical models. Table 1 summarizes the performance guarantees derived
in the prior literature; for simplicity, only orderwise results are provided.
Remark 1. There is another role of regularization commonly studied in the literature, which exploits prior
knowledge about the structure of the unknown object, such as sparsity to prevent overfitting and improve
statistical generalization ability. This is, however, not the focal point of this paper, since we are primarily
pursuing solutions to (1) without imposing additional structures.
1.3 Regularization-free procedures?
The regularized gradient descent algorithms, while exhibiting appealing performance, usually introduce more
algorithmic parameters that need to be carefully tuned based on the assumed statistical models. In contrast,
vanilla gradient descent (cf. (4)) — which is perhaps the very first method that comes into mind and
requires minimal tuning parameters — is far less understood (cf. Table 1). Take matrix completion and
blind deconvolution as examples: to the best of our knowledge, there is currently no theoretical guarantee
derived for vanilla gradient descent.
The situation is better for phase retrieval: the local convergence of vanilla gradient descent, also known as Wirtinger flow (WF), has been investigated in [CLS15, WWS15]. Under i.i.d. Gaussian design and with near-optimal sample complexity, WF (combined with spectral initialization) provably achieves ε-accuracy (in a relative sense) within O(n log(1/ε)) iterations. Nevertheless, the computational guarantee is significantly outperformed by the regularized version (called truncated Wirtinger flow [CC17]), which only requires O(log(1/ε)) iterations to converge with similar per-iteration cost. On closer inspection, the high computational cost of WF is largely due to the vanishingly small step size η_t = O(1/(n‖x^♮‖_2^2)) — and hence slow movement — suggested by the theory [CLS15]. While this is already the largest possible step size allowed in the theory published in [CLS15], it is considerably more conservative than the choice η_t = O(1/‖x^♮‖_2^2) theoretically justified for the regularized version [CC17, ZCL16].
The lack of understanding and suboptimal results about vanilla gradient descent raise a very natural question: are regularization-free iterative algorithms inherently suboptimal when solving nonconvex statistical estimation problems of this kind?
1.4 Numerical surprise of unregularized gradient descent
To answer the preceding question, it is perhaps best to first collect some numerical evidence. In what
follows, we test the performance of vanilla gradient descent for phase retrieval, matrix completion, and blind deconvolution, using a constant step size. For all of these experiments, the initial guess is obtained by means of the standard spectral method. Our numerical findings are as follows:
• Phase retrieval. For each n, set m = 10n, take x^♮ ∈ R^n to be a random vector with unit norm, and generate the design vectors a_j ~ N(0, I_n) i.i.d., 1 ≤ j ≤ m. Figure 1(a) illustrates the relative ℓ2 error min{‖x^t − x^♮‖_2, ‖x^t + x^♮‖_2}/‖x^♮‖_2 (modulo the unrecoverable global phase) vs. the iteration count. The results are shown for n = 20, 100, 200, 1000, with the step size taken to be η_t = 0.1 in all settings.
• Matrix completion. Generate a random PSD matrix M^♮ ∈ R^{n×n} with dimension n = 1000, rank r = 10, and all nonzero eigenvalues equal to one. Each entry of M^♮ is observed independently with probability p = 0.1. Figure 1(b) plots the relative error |||X^t X^{t⊤} − M^♮||| / |||M^♮||| vs. the iteration count, where |||·||| can either be the Frobenius norm ‖·‖_F, the spectral norm ‖·‖, or the entrywise ℓ∞ norm ‖·‖_∞. Here, we pick the step size as η_t = 0.2.
• Blind deconvolution. For each K ∈ {20, 100, 200, 1000} and m = 10K, generate the design vectors a_j ~ N(0, ½ I_K) + i N(0, ½ I_K) i.i.d. for 1 ≤ j ≤ m,² and the b_j's are drawn from a partial Discrete Fourier Transform (DFT) matrix (to be described in Section 3.3). The underlying signals h^♮, x^♮ ∈ C^K are produced as random vectors with unit norm. Figure 1(c) plots the relative error ‖h^t x^{t*} − h^♮ x^{♮*}‖_F / ‖h^♮ x^{♮*}‖_F vs. the iteration count, with the step size taken to be η_t = 0.5 in all settings.
Figure 1: (a) Relative ℓ2 error of x^t (modulo the global phase) vs. iteration count for phase retrieval under i.i.d. Gaussian design, where m = 10n and η_t = 0.1. (b) Relative error of X^t X^{t⊤} (measured by ‖·‖_F, ‖·‖, ‖·‖_∞) vs. iteration count for matrix completion, where n = 1000, r = 10, p = 0.1, and η_t = 0.2. (c) Relative error of h^t x^{t*} (measured by ‖·‖_F) vs. iteration count for blind deconvolution, where m = 10K and η_t = 0.5.
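The matrix completion experiment in Figure 1(b) can be sketched in the same spirit. The snippet below is our own illustrative reproduction with reduced problem sizes (not the code used to produce the figure): vanilla gradient descent on the loss (3), seeded by a rank-r spectral initialization.

import numpy as np

rng = np.random.default_rng(1)
n, r, p, eta, T = 200, 5, 0.2, 0.2, 400
X_true, _ = np.linalg.qr(rng.normal(size=(n, r)))   # orthonormal factor, so the eigenvalues of M equal one
M = X_true @ X_true.T
Omega = rng.random((n, n)) < p                      # index set of observed entries
Y = np.where(Omega, M, 0.0)
m = Omega.sum()

def grad(X):
    # gradient of f(X) = (n^2/4m) * sum_{(j,k) in Omega} (Y_jk - (X X^T)_jk)^2
    R = np.where(Omega, X @ X.T - Y, 0.0)
    return (n ** 2 / (2 * m)) * (R + R.T) @ X

# Spectral initialization: top-r eigenpairs of the inverse-probability-weighted observations.
eigvals, eigvecs = np.linalg.eigh(Y / p)
X = eigvecs[:, -r:] * np.sqrt(np.clip(eigvals[-r:], 0.0, None))

for _ in range(T):                                  # vanilla gradient descent, cf. (4)
    X = X - eta * grad(X)

print("relative error:", np.linalg.norm(X @ X.T - M) / np.linalg.norm(M))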
In all of these numerical experiments, vanilla gradient descent enjoys remarkable linear convergence, always
yielding an accuracy of 10−5 (in a relative sense) within around 200 iterations. In particular, for the phase
retrieval problem, the step size is taken to be ηt = 0.1 although we vary the problem size from n = 20 to
n = 1000. The consequence is that the convergence rates experience little changes when the problem sizes
vary. In comparison, the theory published in [CLS15] seems overly pessimistic, as it suggests a diminishing
step size inversely proportional to n and, as a result, an iteration complexity that worsens as the problem
size grows.
In short, the above empirical results are surprisingly positive yet puzzling. Why was the computational
efficiency of vanilla gradient descent unexplained or substantially underestimated in prior theory?
² Here and throughout, i represents the imaginary unit.
1.5 This paper
The main contribution of this paper is towards demystifying the “unreasonable” effectiveness of regularization-free nonconvex iterative methods. As asserted in previous work, regularized gradient descent succeeds by properly enforcing/promoting certain incoherence conditions throughout the execution of the algorithm. In contrast, we discover that
Vanilla gradient descent automatically forces the iterates to stay incoherent with the measurement mechanism, thus implicitly regularizing the search directions.

Table 2: Prior theory vs. our theory for vanilla gradient descent (with spectral initialization)

                      Prior theory                               Our theory
                      sample       iteration     step            sample           iteration          step
                      complexity   complexity    size            complexity       complexity         size
Phase retrieval       n log n      n log(1/ε)    1/n             n log n          log n · log(1/ε)   1/log n
Matrix completion     n/a          n/a           n/a             nr^3 poly log n  log(1/ε)           1
Blind deconvolution   n/a          n/a           n/a             K poly log m     log(1/ε)           1

This “implicit regularization” phenomenon is of fundamental importance, suggesting that vanilla gradient descent proceeds as if it were properly regularized. This explains the remarkably favorable performance of unregularized gradient descent in practice. Focusing on the three representative problems mentioned in Section 1.1, our theory guarantees both statistical and computational efficiency of vanilla gradient descent under random designs and spectral initialization. With near-optimal sample complexity, to attain ε-accuracy,
• Phase retrieval (informal): vanilla gradient descent converges in O(log n · log(1/ε)) iterations;
• Matrix completion (informal): vanilla gradient descent converges in O(log(1/ε)) iterations;
• Blind deconvolution (informal): vanilla gradient descent converges in O(log(1/ε)) iterations.
In words, gradient descent provably achieves (nearly) linear convergence in all of these examples. Throughout this paper, an algorithm is said to converge (nearly) linearly to x^♮ in the noiseless case if the iterates {x^t} obey

    dist(x^{t+1}, x^♮) ≤ (1 − c) dist(x^t, x^♮),  ∀t ≥ 0

for some 0 < c ≤ 1 that is (almost) independent of the problem size. Here, dist(·, ·) can be any appropriate discrepancy measure.
As a byproduct of our theory, gradient descent also provably controls the entrywise empirical risk uniformly across all iterations; for instance, this implies that vanilla gradient descent controls entrywise estimation error for the matrix completion task. Precise statements of these results are deferred to Section 3 and
are briefly summarized in Table 2.
Notably, our study of implicit regularization suggests that the behavior of nonconvex optimization algorithms for statistical estimation needs to be examined in the context of statistical models, which induces an
objective function as a finite sum. Our proof is accomplished via a leave-one-out perturbation argument,
which is inherently tied to statistical models and leverages homogeneity across samples. Altogether, this
allows us to localize benign landscapes for optimization and characterize finer dynamics not accounted for
in generic gradient descent theory.
1.6 Notations
Before continuing, we introduce several notations used throughout the paper. First of all, boldfaced symbols are reserved for vectors and matrices. For any vector v, we use ‖v‖_2 to denote its Euclidean norm. For any matrix A, we use σ_j(A) and λ_j(A) to denote its jth largest singular value and eigenvalue, respectively, and let A_{j,·} and A_{·,j} denote its jth row and jth column, respectively. In addition, ‖A‖, ‖A‖_F, ‖A‖_{2,∞}, and ‖A‖_∞ stand for the spectral norm (i.e. the largest singular value), the Frobenius norm, the ℓ2/ℓ∞ norm (i.e. the largest ℓ2 norm of the rows), and the entrywise ℓ∞ norm (the largest magnitude of all entries) of a matrix A. Also, A^⊤, A^* and A̅ denote the transpose, the conjugate transpose, and the entrywise conjugate of A, respectively. I_n denotes the identity matrix with dimension n × n. The notation O^{n×r} represents the set of all n × r orthonormal matrices. The notation [n] refers to the set {1, · · · , n}. Also, we use Re(x) to denote the real part of a complex number x. Throughout the paper, we use the terms “samples” and “measurements” interchangeably.
Additionally, the standard notation f(n) = O(g(n)) or f(n) ≲ g(n) means that there exists a constant c > 0 such that |f(n)| ≤ c|g(n)|, f(n) ≳ g(n) means that there exists a constant c > 0 such that |f(n)| ≥ c|g(n)|, and f(n) ≍ g(n) means that there exist constants c_1, c_2 > 0 such that c_1|g(n)| ≤ |f(n)| ≤ c_2|g(n)|. Besides, f(n) ≫ g(n) means that there exists some large enough constant c > 0 such that |f(n)| ≥ c|g(n)|. Similarly, f(n) ≪ g(n) means that there exists some sufficiently small constant c > 0 such that |f(n)| ≤ c|g(n)|.
2 Implicit regularization – a case study
To reveal reasons behind the effectiveness of vanilla gradient descent, we first examine existing theory of
gradient descent and identify the geometric properties that enable linear convergence. We then develop an
understanding as to why prior theory is conservative, and describe the phenomenon of implicit regularization
that helps explain the effectiveness of vanilla gradient descent. To facilitate discussion, we will use the
problem of solving random quadratic systems (phase retrieval) and Wirtinger flow as a case study, but our
diagnosis applies more generally, as will be seen in later sections.
2.1 Gradient descent theory revisited
In the convex optimization literature, there are two standard conditions about the objective function — strong convexity and smoothness — that allow for linear convergence of gradient descent.
Definition 1 (Strong convexity). A twice continuously differentiable function f : R^n → R is said to be α-strongly convex for α > 0 if

    ∇²f(x) ⪰ α I_n,  ∀x ∈ R^n.

Definition 2 (Smoothness). A twice continuously differentiable function f : R^n → R is said to be β-smooth for β > 0 if

    ‖∇²f(x)‖ ≤ β,  ∀x ∈ R^n.
It is well known that for an unconstrained optimization problem, if the objective function f is both α-strongly convex and β-smooth, then vanilla gradient descent (4) enjoys ℓ2 error contraction [Bub15, Theorem 3.12], namely,

    ‖x^{t+1} − x^♮‖_2 ≤ (1 − 2/(β/α + 1)) ‖x^t − x^♮‖_2   and   ‖x^t − x^♮‖_2 ≤ (1 − 2/(β/α + 1))^t ‖x^0 − x^♮‖_2,   t ≥ 0,    (8)

as long as the step size is chosen as η_t = 2/(α + β). Here, x^♮ denotes the global minimum. This immediately reveals the iteration complexity for gradient descent: the number of iterations taken to attain ε-accuracy (in a relative sense) is bounded by

    O( (β/α) log(1/ε) ).

In other words, the iteration complexity is dictated by and scales linearly with the condition number — the ratio β/α of smoothness to strong convexity parameters.
Moving beyond convex optimization, one can easily extend the above theory to nonconvex problems with local strong convexity and smoothness. More precisely, suppose the objective function f satisfies

    ∇²f(x) ⪰ α I   and   ‖∇²f(x)‖ ≤ β

over a local ℓ2 ball surrounding the global minimum x^♮:

    B_δ(x^♮) := { x | ‖x − x^♮‖_2 ≤ δ ‖x^♮‖_2 }.    (9)

Then the contraction result (8) continues to hold, as long as the algorithm is seeded with an initial point that falls inside B_δ(x^♮).
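As a quick numerical sanity check of (8) (our own illustrative snippet, not part of the paper), one can run gradient descent with η = 2/(α + β) on a quadratic whose strong convexity and smoothness constants are known exactly, and compare the per-iteration error ratio with the predicted contraction factor 1 − 2/(β/α + 1).

import numpy as np

rng = np.random.default_rng(2)
n, alpha, beta = 50, 1.0, 10.0
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
H = Q @ np.diag(np.linspace(alpha, beta, n)) @ Q.T   # Hessian with spectrum in [alpha, beta]
eta = 2.0 / (alpha + beta)
bound = 1.0 - 2.0 / (beta / alpha + 1.0)             # contraction factor in (8)

x = rng.normal(size=n)                               # f(x) = 0.5 x^T H x, minimized at x = 0
for t in range(5):
    x_new = x - eta * (H @ x)
    print(f"iter {t}: ratio {np.linalg.norm(x_new) / np.linalg.norm(x):.3f} <= bound {bound:.3f}")
    x = x_new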
2.2 Local geometry for solving random quadratic systems
To invoke generic gradient descent theory, it is critical to characterize the local strong convexity and smoothness properties of the loss function. Take the problem of solving random quadratic systems (phase retrieval) as an example. Consider the i.i.d. Gaussian design in which a_j ~ N(0, I_n) i.i.d., 1 ≤ j ≤ m, and suppose without loss of generality that the underlying signal obeys ‖x^♮‖_2 = 1. It is well known that x^♮ is the unique minimizer — up to global phase — of (2) under this statistical model, provided that the ratio m/n of equations to unknowns is sufficiently large. The Hessian of the loss function f(x) is given by

    ∇²f(x) = (1/m) Σ_{j=1}^{m} [ 3 (a_j^⊤ x)^2 − y_j ] a_j a_j^⊤.    (10)

• Population-level analysis. Consider the case with an infinite number of equations or samples, i.e. m → ∞, where ∇²f(x) converges to its expectation. Simple calculation yields that

    E[∇²f(x)] = 3 ( ‖x‖_2^2 I_n + 2 x x^⊤ ) − ( I_n + 2 x^♮ x^♮⊤ ).

It is straightforward to verify that for any sufficiently small constant δ > 0, one has the crude bound

    I_n ⪯ E[∇²f(x)] ⪯ 10 I_n,  ∀x ∈ B_δ(x^♮) : ‖x − x^♮‖_2 ≤ δ ‖x^♮‖_2,

meaning that f is 1-strongly convex and 10-smooth within a local ball around x^♮. As a consequence, when we have infinite samples and an initial guess x^0 such that ‖x^0 − x^♮‖_2 ≤ δ ‖x^♮‖_2, vanilla gradient descent with a constant step size converges to the global minimum within logarithmic iterations.
• Finite-sample regime with m ≍ n log n. Now that f exhibits favorable landscape in the population level, one thus hopes that the fluctuation can be well-controlled so that the nice geometry carries over to the finite-sample regime. In the regime where m ≍ n log n (which is the regime considered in [CLS15]), the local strong convexity is still preserved, in the sense that

    ∇²f(x) ⪰ (1/2) · I_n,  ∀x : ‖x − x^♮‖_2 ≤ δ ‖x^♮‖_2

occurs with high probability, provided that δ > 0 is sufficiently small (see [Sol14, WWS15] and Lemma 1). The smoothness parameter, however, is not well-controlled. In fact, it can be as large as (up to logarithmic factors)³

    ‖∇²f(x)‖ ≲ n

even when we restrict attention to the local ℓ2 ball (9) with δ > 0 being a fixed small constant. This means that the condition number β/α (defined in Section 2.1) may scale as O(n), leading to the step size recommendation

    η_t ≍ 1/n,

and, as a consequence, a high iteration complexity O(n log(1/ε)). This underpins the analysis in [CLS15].
In summary, the geometric properties of the loss function — even in the local ℓ2 ball centering around the global minimum — is not as favorable as one anticipates, in particular in view of its population counterpart. A direct application of generic gradient descent theory leads to an overly conservative step size and a pessimistic convergence rate, unless the number of samples is enormously larger than the number of unknowns.
Remark 2. Notably, due to Gaussian designs, the phase retrieval problem enjoys more favorable geometry compared to other nonconvex problems. In matrix completion and blind deconvolution, the Hessian matrices are rank-deficient even at the population level. In such cases, the above discussions need to be adjusted, e.g. strong convexity is only possible when we restrict attention to certain directions.
³ To demonstrate this, take x = x^♮ + (δ/‖a_1‖_2) · a_1 in (10); one can easily verify that, with high probability, ‖∇²f(x)‖ ≥ ‖[3 (a_1^⊤ x)^2 − y_1] a_1 a_1^⊤‖/m − O(1) ≳ δ² n²/m ≍ δ² n/log n.
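The contrast between the well-conditioned and the poorly-conditioned directions is easy to probe numerically. The snippet below (our own illustration, with δ chosen fairly large so the effect is visible at moderate n) evaluates the extreme eigenvalues of the Hessian (10) at a point aligned with the first design vector, as in footnote 3, and at a generic point near x^♮; the aligned point yields a much larger smoothness estimate.

import numpy as np

rng = np.random.default_rng(3)
n = 500
m = int(n * np.log(n))               # the m ~ n log n regime
delta = 0.5                          # perturbation size (illustrative)
x_true = rng.normal(size=n)
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))
y = (A @ x_true) ** 2

def hessian(x):
    # Hessian (10): (1/m) * sum_j [3 (a_j^T x)^2 - y_j] a_j a_j^T
    w = 3.0 * (A @ x) ** 2 - y
    return (A.T * w) @ A / m

def extremes(x):
    eig = np.linalg.eigvalsh(hessian(x))
    return eig[0], eig[-1]

x_aligned = x_true + (delta / np.linalg.norm(A[0])) * A[0]      # aligned with a_1 (footnote 3)
x_generic = x_true + delta * rng.normal(size=n) / np.sqrt(n)    # incoherent perturbation of similar size

print("aligned with a_1:", extremes(x_aligned))
print("generic point   :", extremes(x_generic))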
2.3 Which region enjoys nicer geometry?
Interestingly, our theory identifies a region surrounding x^♮ with a large diameter that enjoys much nicer geometry. This local region does not mimic an ℓ2 ball, but rather, the intersection of an ℓ2 ball and a polytope. We term it the region of incoherence and contraction (RIC). For phase retrieval, the RIC includes all points x ∈ R^n obeying

    ‖x − x^♮‖_2 ≤ δ ‖x^♮‖_2    (11a)

and

    max_{1≤j≤m} | a_j^⊤ (x − x^♮) | ≲ √(log n) · ‖x^♮‖_2,    (11b)

where δ > 0 is some small numerical constant. As will be formalized in Lemma 1, with high probability the Hessian matrix satisfies

    (1/2) · I_n ⪯ ∇²f(x) ⪯ O(log n) · I_n

simultaneously for all x in the RIC. In words, the Hessian matrix is nearly well-conditioned (with condition number bounded by O(log n)), as long as (i) the iterate is not very far from the global minimizer (cf. (11a)), and (ii) the iterate remains incoherent with respect to the sensing vectors (cf. (11b)). Another way to interpret the incoherence condition (11b) is that the empirical risk needs to be well-controlled uniformly across all samples. See Figure 2(a) for an illustration of the above region.
Figure 2: (a) The shaded region is an illustration of the incoherence region, which satisfies $|a_j^\top (x - x^\natural)| \lesssim \sqrt{\log n}$ for all points $x$ in the region. (b) When $x^0$ resides in the desired region, we know that $x^1$ remains
within the $\ell_2$ ball but might fall out of the incoherence region (the shaded region). Once $x^1$ leaves the
incoherence region, we lose control and may overshoot. (c) Our theory reveals that with high probability,
all iterates will stay within the incoherence region, enabling fast convergence.
The following observation is thus immediate: one can safely adopt a far more aggressive step size (as
large as ηt = O(1/ log n)) to achieve acceleration, as long as the iterates stay within the RIC. This, however,
fails to be guaranteed by generic gradient descent theory. To be more precise, if the current iterate xt falls
within the desired region, then in view of (8), we can ensure `2 error contraction after one iteration, namely,
$\|x^{t+1} - x^\natural\|_2 \le \|x^t - x^\natural\|_2,$

and hence $x^{t+1}$ stays within the local $\ell_2$ ball, thus satisfying (11a). However, it is not immediately
obvious that $x^{t+1}$ would still stay incoherent with the sensing vectors and satisfy (11b). If $x^{t+1}$ leaves the
RIC, it no longer enjoys the benign local geometry of the loss function, and the algorithm has to slow down
in order to avoid overshooting. See Figure 2(b) for a visual illustration. In fact, in almost all regularized
gradient descent algorithms mentioned in Section 1.2, one of the main purposes of the proposed regularization
procedures is to enforce such incoherence constraints.
⁴If $x$ is aligned with (and hence very coherent with) one vector $a_j$, then with high probability one has $|a_j^\top (x - x^\natural)| \gtrsim |a_j^\top x| \asymp \sqrt{n}\,\|x\|_2$, which is significantly larger than $\sqrt{\log n}\,\|x\|_2$.
2.4
Implicit regularization
However, is regularization really necessary for the iterates to stay within the RIC? To answer this question,
we plot in Figure 3(a) (resp. Figure 3(b)) the incoherence measure $\frac{\max_j |a_j^\top (x^t - x^\natural)|}{\sqrt{\log n}\,\|x^\natural\|_2}$ (resp. $\frac{\max_j |a_j^\top x^t|}{\sqrt{\log n}\,\|x^\natural\|_2}$) vs. the
iteration count in a typical Monte Carlo trial, generated in the same way as for Figure 1(a). Interestingly,
the incoherence measure remains bounded by 2 for all iterations t > 1. This important observation suggests
that one may adopt a substantially more aggressive step size throughout the whole algorithm.
Figure 3: The incoherence measure $\frac{\max_{1 \le j \le m} |a_j^\top (x^t - x^\natural)|}{\sqrt{\log n}\,\|x^\natural\|_2}$ (in (a)) and $\frac{\max_{1 \le j \le m} |a_j^\top x^t|}{\sqrt{\log n}\,\|x^\natural\|_2}$ (in (b)) of the gradient
iterates vs. iteration count for the phase retrieval problem. The results are shown for $n \in \{20, 100, 200, 1000\}$
and $m = 10n$, with the step size taken to be $\eta_t = 0.1$. The problem instances are generated in the same way
as in Figure 1(a).
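For concreteness, the following minimal Python sketch runs a few vanilla gradient iterations for the quadratic loss $f(x) = \frac{1}{4m}\sum_j [(a_j^\top x)^2 - y_j]^2$ and records the incoherence measure plotted above at every step. The problem sizes, the crude initialization, and the step size are our own choices for illustration, not the exact settings of the original experiments.

```python
import numpy as np

def incoherence_measure(A, x, x_star):
    """max_j |a_j^T (x - x_star)| / (sqrt(log n) * ||x_star||_2)."""
    n = x_star.size
    return np.max(np.abs(A @ (x - x_star))) / (np.sqrt(np.log(n)) * np.linalg.norm(x_star))

def track_incoherence(n=100, m=1000, eta=0.1, T=30, seed=0):
    rng = np.random.default_rng(seed)
    x_star = rng.standard_normal(n)
    x_star /= np.linalg.norm(x_star)
    A = rng.standard_normal((m, n))                          # rows a_j ~ N(0, I_n)
    y = (A @ x_star) ** 2
    x = x_star + 0.1 * rng.standard_normal(n) / np.sqrt(n)   # crude stand-in for spectral initialization
    measures = []
    for _ in range(T):
        r = (A @ x) ** 2 - y
        grad = A.T @ (r * (A @ x)) / m                       # gradient of (1/4m) sum [(a_j^T x)^2 - y_j]^2
        x = x - eta * grad
        measures.append(incoherence_measure(A, x, x_star))
    return measures
```

Plotting the returned list against the iteration count reproduces the qualitative behavior described above: the measure stays bounded by a small constant throughout.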
The main objective of this paper is thus to provide a theoretical validation of the above empirical observation. As we will demonstrate shortly, with high probability all iterates along the execution of the algorithm
(as well as the spectral initialization) are provably constrained within the RIC, implying fast convergence of
vanilla gradient descent (cf. Figure 2(c)). The fact that the iterates stay incoherent with the measurement
mechanism automatically, without explicit enforcement, is termed “implicit regularization”.
2.5
A glimpse of the analysis: a leave-one-out trick
In order to rigorously establish (11b) for all iterates, the current paper develops a powerful mechanism based
on the leave-one-out perturbation argument, a trick rooted and widely used in probability and random
matrix theory. Note that the iterate xt is statistically dependent with the design vectors {aj }. Under such
circumstances, one often resorts to generic bounds like the Cauchy-Schwarz inequality, which would not yield
a desirable estimate. To address this issue, we introduce a sequence of auxiliary iterates {xt,(l) } for each
1 ≤ l ≤ m (for analytical purposes only), obtained by running vanilla gradient descent using all but the lth
sample. As one can expect, such auxiliary trajectories serve as extremely good surrogates of {xt } in the
sense that
$x^t \approx x^{t,(l)}, \qquad 1 \le l \le m, \quad t \ge 0,$  (12)

since their constructions only differ by a single sample. Most importantly, since $x^{t,(l)}$ is independent of
the $l$th design vector, it is much easier to control its incoherence w.r.t. $a_l$ to the desired level:

$\big| a_l^\top \big( x^{t,(l)} - x^\natural \big) \big| \lesssim \sqrt{\log n}\, \|x^\natural\|_2.$  (13)
Combining (12) and (13) then leads to (11b). See Figure 4 for a graphical illustration of this argument.
Notably, this technique is very general and applicable to many other problems. We invite the readers to
Section 5 for more details.
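As a purely illustrative rendering of this construction, the sketch below builds the auxiliary trajectories by rerunning the same gradient loop on the loss with the $l$th sample removed. The function names are our own, and for brevity all runs reuse one initialization, whereas the paper's auxiliary sequences also use a leave-one-out spectral initialization.

```python
import numpy as np

def grad_f(A, y, x):
    """Gradient of f(x) = 1/(4m) sum_j [(a_j^T x)^2 - y_j]^2."""
    m = y.size
    r = (A @ x) ** 2 - y
    return A.T @ (r * (A @ x)) / m

def run_gd(A, y, x0, eta, T):
    """Vanilla gradient descent, returning the whole trajectory."""
    xs = [x0]
    for _ in range(T):
        xs.append(xs[-1] - eta * grad_f(A, y, xs[-1]))
    return xs

def leave_one_out_trajectories(A, y, x0, eta, T):
    """For each l, rerun the same gradient loop with the l-th sample deleted.

    Used for analysis only: x^{t,(l)} is independent of a_l yet stays close to x^t,
    since the two runs differ by a single sample.
    """
    m = y.size
    return {l: run_gd(A[np.arange(m) != l], y[np.arange(m) != l], x0, eta, T)
            for l in range(m)}
```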
Figure 4: Illustration of the leave-one-out sequence w.r.t. $a_l$. (a) The sequence $\{x^{t,(l)}\}_{t \ge 0}$ is constructed
without using the $l$th sample. (b) Since the auxiliary sequence $\{x^{t,(l)}\}$ is constructed without using $a_l$, the
leave-one-out iterates stay within the incoherence region w.r.t. $a_l$ with high probability. Meanwhile, $\{x^t\}$
and $\{x^{t,(l)}\}$ are expected to remain close, as their constructions differ only by a single sample.
3
Main results
This section formalizes the implicit regularization phenomenon underlying unregularized gradient descent,
and presents its consequences, namely near-optimal statistical and computational guarantees for phase retrieval, matrix completion, and blind deconvolution. Note that the discrepancy measure dist (·, ·) may vary
from problem to problem.
3.1
Phase retrieval
Suppose the m quadratic equations
$y_j = \big( a_j^\top x^\natural \big)^2, \qquad j = 1, 2, \ldots, m$  (14)

are collected using random design vectors, namely, $a_j \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)$, and the nonconvex problem to solve is

$\text{minimize}_{x \in \mathbb{R}^n} \quad f(x) := \frac{1}{4m} \sum_{j=1}^m \big[ (a_j^\top x)^2 - y_j \big]^2.$  (15)
The Wirtinger flow (WF) algorithm, first introduced in [CLS15], is a combination of spectral initialization
and vanilla gradient descent; see Algorithm 1.
Algorithm 1 Wirtinger flow for phase retrieval
Input: $\{a_j\}_{1 \le j \le m}$ and $\{y_j\}_{1 \le j \le m}$.
Spectral initialization: Let $\lambda_1(Y)$ and $\tilde{x}^0$ be the leading eigenvalue and eigenvector of

$Y = \frac{1}{m} \sum_{j=1}^m y_j a_j a_j^\top,$  (16)

respectively, and set $x^0 = \sqrt{\lambda_1(Y)/3}\; \tilde{x}^0$.
Gradient updates: for $t = 0, 1, 2, \ldots, T - 1$ do

$x^{t+1} = x^t - \eta_t \nabla f(x^t).$  (17)
Recognizing that the global phase/sign is unrecoverable from quadratic measurements, we introduce the $\ell_2$ distance modulo the global phase as follows:

$\operatorname{dist}(x, x^\natural) := \min\big\{ \|x - x^\natural\|_2,\ \|x + x^\natural\|_2 \big\}.$  (18)
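The following Python sketch mirrors Algorithm 1 together with the distance (18). It is an illustrative implementation under our own choices of problem size and step size, not the authors' code.

```python
import numpy as np

def dist_mod_sign(x, x_star):
    """l2 distance modulo the global sign, cf. (18)."""
    return min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))

def wirtinger_flow(A, y, T=100, eta=None):
    m, n = A.shape
    if eta is None:
        eta = 0.1 / np.log(n)                      # aggressive step size in the spirit of Theorem 1
    # Spectral initialization: leading eigenpair of Y = (1/m) sum_j y_j a_j a_j^T
    Y = (A.T * y) @ A / m
    eigvals, eigvecs = np.linalg.eigh(Y)
    x = np.sqrt(eigvals[-1] / 3) * eigvecs[:, -1]
    # Gradient updates: x <- x - eta * grad f(x)
    for _ in range(T):
        r = (A @ x) ** 2 - y
        grad = A.T @ (r * (A @ x)) / m
        x = x - eta * grad
    return x

# tiny usage example (sizes are arbitrary)
rng = np.random.default_rng(0)
n, m = 50, 500
x_star = rng.standard_normal(n); x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))
x_hat = wirtinger_flow(A, (A @ x_star) ** 2)
print(dist_mod_sign(x_hat, x_star))
```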
Our finding is summarized in the following theorem.
Theorem 1. Let $x^\natural \in \mathbb{R}^n$ be a fixed vector. Suppose $a_j \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)$ for each $1 \le j \le m$ and $m \ge c_0 n \log n$
for some sufficiently large constant $c_0 > 0$. Assume the step size obeys $\eta_t \equiv \eta = c_1 / (\log n \cdot \|x^0\|_2^2)$ for any
sufficiently small constant $c_1 > 0$. Then there exist some absolute constants $0 < \varepsilon < 1$ and $c_2 > 0$ such that
with probability at least $1 - O(mn^{-5})$, the Wirtinger flow iterates (Algorithm 1) satisfy that for all $t \ge 0$,

$\operatorname{dist}(x^t, x^\natural) \le \varepsilon \big( 1 - \eta \|x^\natural\|_2^2 / 2 \big)^t \|x^\natural\|_2,$  (19a)

$\max_{1 \le j \le m} \big| a_j^\top (x^t - x^\natural) \big| \le c_2 \sqrt{\log n}\, \|x^\natural\|_2.$  (19b)
Theorem 1 reveals a few intriguing properties of WF.
• Implicit regularization: Theorem 1 asserts that the incoherence properties are satisfied throughout
the execution of the algorithm (see (19b)), which formally justifies the implicit regularization feature we
hypothesized.
• Near-constant step size: Consider the case where $\|x^\natural\|_2 = 1$. Theorem 1 establishes near-linear
convergence of WF with a substantially more aggressive step size $\eta \asymp 1/\log n$. Compared with the
choice $\eta \lesssim 1/n$ admissible in [CLS15, Theorem 3.3], Theorem 1 allows WF to attain $\epsilon$-accuracy within
$O(\log n \log(1/\epsilon))$ iterations. The resulting computational complexity of WF is

$O\big( mn \log n \log \tfrac{1}{\epsilon} \big),$

which significantly improves upon the result $O\big( mn^2 \log(1/\epsilon) \big)$ derived in [CLS15]. As a side note, if the
sample size further increases to $m \asymp n \log^2 n$, then a constant step size $\eta \asymp 1$ is also feasible, resulting
in an iteration complexity $O(\log(1/\epsilon))$. This follows since with high probability, the entire trajectory resides
within a more refined incoherence region $\max_j |a_j^\top (x^t - x^\natural)| \lesssim \|x^\natural\|_2$. We omit the details here.
• Incoherence of spectral initialization: We have also demonstrated in Theorem 1 that the initial
guess x0 falls within the RIC and is hence nearly orthogonal to all design vectors. This provides a finer
characterization of spectral initialization, in comparison to prior theory that focuses primarily on the `2
accuracy [NJS13,CLS15]. We expect our leave-one-out analysis to accommodate other variants of spectral
initialization studied in the literature [CC17, CLM+ 16, WGE17, LL17, MM17].
3.2
Low-rank matrix completion
Let $M^\natural \in \mathbb{R}^{n \times n}$ be a positive semidefinite matrix⁵ with rank $r$, and suppose its eigendecomposition is

$M^\natural = U^\natural \Sigma^\natural U^{\natural\top},$  (20)
where U \ ∈ Rn×r consists of orthonormal columns, and Σ\ is an r × r diagonal matrix with eigenvalues in
a descending order, i.e. σmax = σ1 ≥ · · · ≥ σr = σmin > 0. Throughout this paper, we assume the condition
number κ := σmax /σmin is bounded by a fixed constant, independent of the problem size (i.e. n and r).
Denoting X \ = U \ (Σ\ )1/2 allows us to factorize M \ as
M \ = X \ X \> .
(21)
Consider a random sampling model such that each entry of $M^\natural$ is observed independently with probability
$0 < p \le 1$, i.e. for $1 \le j \le k \le n$,

$Y_{j,k} = \begin{cases} M^\natural_{j,k} + E_{j,k}, & \text{with probability } p, \\ 0, & \text{else,} \end{cases}$  (22)
5 Here, we assume M \ to be positive semidefinite to simplify the presentation, but note that our analysis easily extends to
asymmetric low-rank matrices.
where the entries of E = [Ej,k ]1≤j≤k≤n are independent sub-Gaussian noise with sub-Gaussian norm σ
(see [Ver12, Definition 5.7]). We denote by Ω the set of locations being sampled, and PΩ (Y ) represents the
projection of Y onto the set of matrices supported in Ω. We note here that the sampling rate p, if not
known, can be faithfully estimated by the sample proportion |Ω|/n2 .
To fix ideas, we consider the following nonconvex optimization problem
$\text{minimize}_{X \in \mathbb{R}^{n \times r}} \quad f(X) := \frac{1}{4p} \sum_{(j,k) \in \Omega} \big( e_j^\top X X^\top e_k - Y_{j,k} \big)^2.$  (23)
The vanilla gradient descent algorithm (with spectral initialization) is summarized in Algorithm 2.
Algorithm 2 Vanilla gradient descent for matrix completion (with spectral initialization)
Input: $Y = [Y_{j,k}]_{1 \le j,k \le n}$, $r$, $p$.
Spectral initialization: Let $U^0 \Sigma^0 U^{0\top}$ be the rank-$r$ eigendecomposition of

$M^0 := \frac{1}{p} \mathcal{P}_\Omega(Y) = \frac{1}{p} \mathcal{P}_\Omega\big( M^\natural + E \big),$

and set $X^0 = U^0 (\Sigma^0)^{1/2}$.
Gradient updates: for $t = 0, 1, 2, \ldots, T - 1$ do

$X^{t+1} = X^t - \eta_t \nabla f(X^t).$  (24)
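Below is a small NumPy sketch of Algorithm 2 under an i.i.d. Bernoulli sampling model. The problem sizes, noise level, step size, and iteration budget are placeholders of our own choosing, and the gradient constant may differ from (23) by the symmetry convention used for Ω.

```python
import numpy as np

def mc_gradient(X, Y, mask, p):
    """Gradient of the matrix-completion loss, i.e. (1/p) P_Omega(X X^T - Y) X
    (up to the symmetrization convention of Omega), cf. (59)."""
    return (mask * (X @ X.T - Y)) @ X / p

def vanilla_gd_mc(Y, mask, r, p, eta=0.2, T=500):
    """Algorithm 2: spectral initialization followed by plain gradient steps (24)."""
    M0 = (mask * Y) / p
    M0 = (M0 + M0.T) / 2                              # symmetrize before eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(M0)
    idx = np.argsort(eigvals)[-r:]                    # top-r eigenpairs
    X = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
    for _ in range(T):
        X = X - eta * mc_gradient(X, Y, mask, p)
    return X

# usage example with arbitrary sizes; M_star is scaled so that sigma_max = O(1)
rng = np.random.default_rng(1)
n, r, p, sigma = 200, 5, 0.2, 1e-3
X_star = rng.standard_normal((n, r)) / np.sqrt(n)
M_star = X_star @ X_star.T
mask = rng.random((n, n)) < p
Y = mask * (M_star + sigma * rng.standard_normal((n, n)))
X_hat = vanilla_gd_mc(Y, mask, r, p)
print(np.linalg.norm(X_hat @ X_hat.T - M_star) / np.linalg.norm(M_star))
```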
Before proceeding to the main theorem, we first introduce a standard incoherence parameter required for
matrix completion [CR09].
Definition 3 (Incoherence for matrix completion). A rank-$r$ matrix $M^\natural$ with eigendecomposition $M^\natural = U^\natural \Sigma^\natural U^{\natural\top}$ is said to be $\mu$-incoherent if

$\|U^\natural\|_{2,\infty} \le \sqrt{\frac{\mu}{n}}\, \|U^\natural\|_{\mathrm{F}} = \sqrt{\frac{\mu r}{n}}.$  (25)
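For concreteness, the smallest $\mu$ satisfying (25) can be computed directly from $U^\natural$; the helper below is our own illustration, not part of the paper.

```python
import numpy as np

def incoherence_mu(U):
    """Smallest mu with ||U||_{2,inf} <= sqrt(mu * r / n), cf. (25).

    ||U||_{2,inf} is the largest row-wise l2 norm of the n x r matrix U.
    """
    n, r = U.shape
    row_norms = np.linalg.norm(U, axis=1)
    return n * row_norms.max() ** 2 / r
```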
In addition, recognizing that X \ is identifiable only up to orthogonal transformation, we define the
optimal transform from the tth iterate X t to X \ as
$\widehat{H}^t := \operatorname*{arg\,min}_{R \in \mathcal{O}^{r \times r}} \big\| X^t R - X^\natural \big\|_{\mathrm{F}},$  (26)
where Or×r is the set of r × r orthonormal matrices. With these definitions in place, we have the following
theorem.
Theorem 2. Let M \ be a rank r, µ-incoherent PSD matrix, and its condition number κ is a fixed constant.
Suppose the sample size satisfies n2 p ≥ Cµ3 r3 n log3 n for some sufficiently large constant C > 0, and the
noise satisfies
$\sigma \sqrt{\frac{n}{p}} \lesssim \frac{\sigma_{\min}}{\sqrt{\kappa^3 \mu r \log^3 n}}.$  (27)

With probability at least $1 - O(n^{-3})$, the iterates of Algorithm 2 satisfy

$\big\| X^t \widehat{H}^t - X^\natural \big\|_{\mathrm{F}} \le \Big( C_4 \rho^t \mu r \frac{1}{\sqrt{np}} + C_1 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|_{\mathrm{F}},$  (28a)

$\big\| X^t \widehat{H}^t - X^\natural \big\|_{2,\infty} \le \Big( C_5 \rho^t \mu r \sqrt{\frac{\log n}{np}} + C_8 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}} \Big) \|X^\natural\|_{2,\infty},$  (28b)

$\big\| X^t \widehat{H}^t - X^\natural \big\| \le \Big( C_9 \rho^t \mu r \frac{1}{\sqrt{np}} + C_{10} \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|$  (28c)
for all 0 ≤ t ≤ T = O(n5 ), where C1 , C4 , C5 , C8 , C9 and C10 are some absolute positive constants and
1 − (σmin /5) · η ≤ ρ < 1, provided that 0 < ηt ≡ η ≤ 2/ (25κσmax ).
Theorem 2 provides the first theoretical guarantee of unregularized gradient descent for matrix completion, demonstrating near-optimal statistical accuracy and computational complexity.
• Implicit regularization: In Theorem 2, we bound the $\ell_2/\ell_\infty$ error of the iterates in a uniform manner
via (28b). Note that $\|X - X^\natural\|_{2,\infty} = \max_j \|e_j^\top (X - X^\natural)\|_2$, which implies the iterates remain incoherent
with the sensing vectors throughout and have small incoherence parameters (cf. (25)). In comparison, prior
works either include a penalty term on $\{\|e_j^\top X\|_2\}_{1 \le j \le n}$ [KMO10a, SL16] and/or $\|X\|_{\mathrm{F}}$ [SL16] to encourage
an incoherent and/or low-norm solution, or add an extra projection operation to enforce incoherence
[CW15, ZL16]. Our results demonstrate that such explicit regularization is unnecessary.
• Constant step size: Without loss of generality we may assume that σmax = kM \ k = O(1), which can
be done by choosing proper scaling of $M^\natural$. Hence we have a constant step size $\eta_t \asymp 1$. Actually it is more
convenient to consider the scale invariant parameter ρ: Theorem 2 guarantees linear convergence of the
vanilla gradient descent at a constant rate ρ. Remarkably, the convergence occurs with respect to three
different unitarily invariant norms: the Frobenius norm k · kF , the `2 /`∞ norm k · k2,∞ , and the spectral
norm k · k. As far as we know, the latter two are established for the first time. Note that our result even
improves upon that for regularized gradient descent; see Table 1.
• Near-optimal sample complexity: When the rank r = O(1), vanilla gradient descent succeeds under
a near-optimal sample complexity $n^2 p \gtrsim n\, \mathrm{poly}\log n$, which is statistically optimal up to some logarithmic
factor.
• Near-minimal Euclidean error: In view of (28a), as $t$ increases, the Euclidean error of vanilla GD
converges to

$\big\| X^t \widehat{H}^t - X^\natural \big\|_{\mathrm{F}} \lesssim \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}}\, \|X^\natural\|_{\mathrm{F}},$  (29)

which coincides with the theoretical guarantee in [CW15, Corollary 1] and matches the minimax lower
bound established in [NW12, KLT11].
• Near-optimal entrywise error: The $\ell_2/\ell_\infty$ error bound (28b) immediately yields entrywise control of
the empirical risk. Specifically, as soon as $t$ is sufficiently large (so that the first term in (28b) is negligible),
we have

$\big\| X^t X^{t\top} - M^\natural \big\|_\infty \le \big\| X^t \widehat{H}^t \big( X^t \widehat{H}^t - X^\natural \big)^\top \big\|_\infty + \big\| \big( X^t \widehat{H}^t - X^\natural \big) X^{\natural\top} \big\|_\infty$

$\le \big\| X^t \widehat{H}^t \big\|_{2,\infty} \big\| X^t \widehat{H}^t - X^\natural \big\|_{2,\infty} + \big\| X^t \widehat{H}^t - X^\natural \big\|_{2,\infty} \|X^\natural\|_{2,\infty} \lesssim \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}}\, \|M^\natural\|_\infty,$

where the last line follows from (28b) as well as the facts that $\|X^t \widehat{H}^t - X^\natural\|_{2,\infty} \le \|X^\natural\|_{2,\infty}$ and $\|M^\natural\|_\infty = \|X^\natural\|_{2,\infty}^2$. Compared with the Euclidean loss (29), this implies that when $r = O(1)$, the entrywise error of
$X^t X^{t\top}$ is uniformly spread out across all entries. As far as we know, this is the first result that reveals
near-optimal entrywise error control for noisy matrix completion using nonconvex optimization, without
resorting to sample splitting.
Remark 3. Theorem 2 remains valid if the total number T of iterations obeys T = nO(1) . In the noiseless
case where σ = 0, the theory allows arbitrarily large T .
Finally, we report the empirical statistical accuracy of vanilla gradient descent in the presence of noise.
Figure 5 displays the squared relative error of vanilla gradient descent as a function of the signal-to-noise
ratio (SNR), where the SNR is defined to be
$\mathrm{SNR} := \frac{\sum_{(j,k) \in \Omega} \big( M^\natural_{j,k} \big)^2}{\sum_{(j,k) \in \Omega} \mathrm{Var}(E_{j,k})} \approx \frac{\|M^\natural\|_{\mathrm{F}}^2}{n^2 \sigma^2},$  (30)
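A short helper (our own illustration, not the authors' experiment code) for generating noisy observations at a target SNR and reporting quantities on the dB scale used in Figure 5:

```python
import numpy as np

def observe_at_snr(M_star, p, snr_db, rng):
    """Sample entries with prob. p and add Gaussian noise whose variance is
    chosen so that SNR ~ ||M||_F^2 / (n^2 sigma^2), cf. (30)."""
    n = M_star.shape[0]
    snr = 10 ** (snr_db / 10)
    sigma = np.linalg.norm(M_star, 'fro') / (n * np.sqrt(snr))
    mask = rng.random((n, n)) < p
    Y = mask * (M_star + sigma * rng.standard_normal((n, n)))
    return Y, mask, sigma

def to_db(x):
    return 10 * np.log10(x)
```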
Figure 5: Squared relative error of the estimate $\widehat{X}$ (measured by $\|\cdot\|_{\mathrm{F}}$, $\|\cdot\|_{2,\infty}$, $\|\cdot\|$ modulo global transformation) and of $\widehat{M} = \widehat{X}\widehat{X}^\top$ (measured by $\|\cdot\|_\infty$) vs. SNR for noisy matrix completion, where $n = 500$, $r = 10$,
$p = 0.1$, and $\eta_t = 0.2$. Here $\widehat{X}$ denotes the estimate returned by Algorithm 2 after convergence.
and the relative error is measured in terms of the square of the metrics as in (28) as well as the squared
entrywise prediction error. Both the relative error and the SNR are shown on a dB scale (i.e. 10 log10 (SNR)
and 10 log10 (squared relative error) are plotted). As one can see from the plot, the squared relative error
scales inversely proportional to the SNR, which is consistent with our theory.6
3.3
Blind deconvolution
Suppose we have collected m bilinear measurements
$y_j = b_j^* h^\natural x^{\natural *} a_j, \qquad 1 \le j \le m,$  (31)

where $a_j$ follows a complex Gaussian distribution, i.e. $a_j \overset{\text{i.i.d.}}{\sim} \mathcal{N}\big( 0, \tfrac{1}{2} I_K \big) + i\, \mathcal{N}\big( 0, \tfrac{1}{2} I_K \big)$ for $1 \le j \le m$, and
$B := [b_1, \cdots, b_m]^* \in \mathbb{C}^{m \times K}$ is formed by the first $K$ columns of a unitary discrete Fourier transform (DFT)
matrix $F \in \mathbb{C}^{m \times m}$ obeying $F F^* = I_m$ (see Appendix D.3.2 for a brief introduction to DFT matrices). This
setup models blind deconvolution, where the two signals under convolution belong to known low-dimensional
subspaces of dimension $K$ [ARR14]⁷. In particular, the partial DFT matrix $B$ plays an important role in
image blind deblurring. In this subsection, we consider solving the following nonconvex optimization problem

$\text{minimize}_{h, x \in \mathbb{C}^K} \quad f(h, x) = \sum_{j=1}^m \big| b_j^* h x^* a_j - y_j \big|^2.$  (32)
The (Wirtinger) gradient descent algorithm (with spectral initialization) is summarized in Algorithm 3; here,
∇h f (h, x) and ∇x f (h, x) stand for the Wirtinger gradient and are given in (77) and (78), respectively;
see [CLS15, Section 6] for a brief introduction to Wirtinger calculus.
It is self-evident that h\ and x\ are only identifiable up to global scaling, that is, for any nonzero α ∈ C,
$h^\natural x^{\natural *} = \frac{1}{\overline{\alpha}}\, h^\natural \big( \alpha x^\natural \big)^*.$

In light of this, we will measure the discrepancy between

$z := \begin{bmatrix} h \\ x \end{bmatrix} \in \mathbb{C}^{2K}$  and  $z^\natural := \begin{bmatrix} h^\natural \\ x^\natural \end{bmatrix} \in \mathbb{C}^{2K}$  (33)

⁶Note that when $M^\natural$ is well-conditioned and when $r = O(1)$, one can easily check that $\mathrm{SNR} \approx \|M^\natural\|_{\mathrm{F}}^2 / (n^2 \sigma^2) \asymp \sigma_{\min}^2 / (n^2 \sigma^2)$, and our theory says that the squared relative error bound is proportional to $\sigma^2 / \sigma_{\min}^2$.
⁷For simplicity, we have set the dimensions of the two subspaces equal, and it is straightforward to extend our results to the case of unequal subspace dimensions.
via the following function

$\operatorname{dist}\big( z, z^\natural \big) := \min_{\alpha \in \mathbb{C}} \sqrt{ \big\| \tfrac{1}{\alpha} h - h^\natural \big\|_2^2 + \big\| \alpha x - x^\natural \big\|_2^2 }.$  (34)
Algorithm 3 Vanilla gradient descent for blind deconvolution (with spectral initialization)
Input: $\{a_j\}_{1 \le j \le m}$, $\{b_j\}_{1 \le j \le m}$ and $\{y_j\}_{1 \le j \le m}$.
Spectral initialization: Let $\sigma_1(M)$, $\check{h}^0$ and $\check{x}^0$ be the leading singular value, left and right singular vectors of

$M := \sum_{j=1}^m y_j b_j a_j^*,$

respectively. Set $h^0 = \sqrt{\sigma_1(M)}\, \check{h}^0$ and $x^0 = \sqrt{\sigma_1(M)}\, \check{x}^0$.
Gradient updates: for $t = 0, 1, 2, \ldots, T - 1$ do

$\begin{bmatrix} h^{t+1} \\ x^{t+1} \end{bmatrix} = \begin{bmatrix} h^t \\ x^t \end{bmatrix} - \eta \begin{bmatrix} \frac{1}{\|x^t\|_2^2} \nabla_h f(h^t, x^t) \\[2pt] \frac{1}{\|h^t\|_2^2} \nabla_x f(h^t, x^t) \end{bmatrix}.$  (35)
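Since the Wirtinger gradients (77) and (78) are only referenced rather than reproduced here, the sketch below derives them directly from the loss (32) under the standard Wirtinger convention, namely $\nabla_h f = \sum_j c_j (a_j^* x) b_j$ and $\nabla_x f = \sum_j \overline{c_j} (b_j^* h) a_j$ with $c_j = b_j^* h x^* a_j - y_j$; these expressions, the synthetic instance, and the step size are our own assumptions and may differ from the paper's formulas by constant factors.

```python
import numpy as np

def wirtinger_grads(h, x, A, B, y):
    """Wirtinger gradients of f(h, x) = sum_j |b_j^* h x^* a_j - y_j|^2.

    Rows of A are a_j^* and rows of B are b_j^* (both m x K, complex-valued).
    """
    bh = B @ h                          # b_j^* h
    ax = A @ x                          # a_j^* x
    c = bh * np.conj(ax) - y            # c_j = b_j^* h x^* a_j - y_j
    grad_h = B.conj().T @ (c * ax)      # sum_j c_j (a_j^* x) b_j
    grad_x = A.conj().T @ (np.conj(c) * bh)  # sum_j conj(c_j) (b_j^* h) a_j
    return grad_h, grad_x

def bd_gradient_descent(A, B, y, eta=0.1, T=500):
    """Algorithm 3: spectral initialization followed by the scaled updates (35)."""
    M = B.conj().T @ (y[:, None] * A)   # M = sum_j y_j b_j a_j^*
    U, s, Vh = np.linalg.svd(M)
    h = np.sqrt(s[0]) * U[:, 0]
    x = np.sqrt(s[0]) * Vh[0, :].conj()
    for _ in range(T):
        gh, gx = wirtinger_grads(h, x, A, B, y)
        h, x = (h - eta / np.linalg.norm(x) ** 2 * gh,
                x - eta / np.linalg.norm(h) ** 2 * gx)
    return h, x

# tiny synthetic instance (sizes and step size are arbitrary choices)
rng = np.random.default_rng(2)
m, K = 256, 8
F = np.fft.fft(np.eye(m)) / np.sqrt(m)              # unitary DFT, F F^* = I_m
B = F[:, :K]                                         # rows are b_j^*
A = (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))) / np.sqrt(2)
h_star = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x_star = rng.standard_normal(K) + 1j * rng.standard_normal(K)
y = (B @ h_star) * np.conj(A @ x_star)               # y_j = b_j^* h x^* a_j
h_hat, x_hat = bd_gradient_descent(A, B, y)
M_hat, M_true = np.outer(h_hat, x_hat.conj()), np.outer(h_star, x_star.conj())
print(np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```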
Before proceeding, we need to introduce the incoherence parameter [ARR14, LLSW16], which is crucial
for blind deconvolution, whose role is similar to the incoherence parameter (cf. Definition 3) in matrix
completion.
Definition 4 (Incoherence for blind deconvolution). Let the incoherence parameter $\mu$ of $h^\natural$ be the smallest number such that

$\max_{1 \le j \le m} \big| b_j^* h^\natural \big| \le \frac{\mu}{\sqrt{m}}\, \|h^\natural\|_2.$  (36)

The incoherence parameter describes the spectral flatness of the signal $h^\natural$. With this definition in place, we have the following theorem, where for identifiability we assume that $\|h^\natural\|_2 = \|x^\natural\|_2$.
Theorem 3. Suppose the number of measurements obeys m ≥ Cµ2 K log9 m for some sufficiently large
constant C > 0, and suppose the step size η > 0 is taken to be some sufficiently small constant. Then there
exist constants c1 , c2 , C1 , C3 , C4 > 0 such that with probability exceeding 1 − c1 m−5 − c1 me−c2 K , the iterates
in Algorithm 3 satisfy
$\operatorname{dist}\big( z^t, z^\natural \big) \le C_1 \Big( 1 - \frac{\eta}{16 \log^2 m} \Big)^t \|x^\natural\|_2,$  (37a)

$\max_{1 \le l \le m} \big| a_l^* \big( \alpha^t x^t - x^\natural \big) \big| \le C_3 \frac{1}{\log^{1.5} m}\, \|x^\natural\|_2,$  (37b)

$\max_{1 \le l \le m} \Big| b_l^* \frac{h^t}{\alpha^t} \Big| \le C_4 \frac{\mu}{\sqrt{m}} \log^2 m\, \|h^\natural\|_2$  (37c)

for all $t \ge 0$. Here, we denote $\alpha^t$ as the alignment parameter,

$\alpha^t := \operatorname*{arg\,min}_{\alpha \in \mathbb{C}}\ \big\| \tfrac{1}{\alpha} h^t - h^\natural \big\|_2^2 + \big\| \alpha x^t - x^\natural \big\|_2^2.$  (38)
Theorem 3 provides the first theoretical guarantee of unregularized gradient descent for blind deconvolution at a near-optimal statistical and computational complexity. A few remarks are in order.
• Implicit regularization: Theorem 3 reveals that the unregularized gradient descent iterates remain
incoherent with the sampling mechanism (see (37b) and (37c)). Recall that prior works operate upon a
regularized cost function with an additional penalty term that regularizes the global scaling {khk2 , kxk2 }
and the incoherence {|b∗j h|}1≤j≤m [LLSW16, HH17, LS17]. In comparison, our theorem implies that it is
unnecessary to regularize either the incoherence or the scaling ambiguity, which is somewhat surprising.
This justifies the use of regularization-free (Wirtinger) gradient descent for blind deconvolution.
• Constant step size: Compared to the step size $\eta_t \lesssim 1/m$ suggested in [LLSW16] for regularized gradient
descent, our theory admits a substantially more aggressive step size (i.e. $\eta_t \asymp 1$) even without regularization. Similar to phase retrieval, the computational efficiency is boosted by a factor of $m$, attaining
$\epsilon$-accuracy within $O(\log(1/\epsilon))$ iterations (vs. $O(m \log(1/\epsilon))$ iterations in prior theory).
• Near-optimal sample complexity: It is demonstrated that vanilla gradient descent succeeds at a
near-optimal sample complexity up to logarithmic factors, although our requirement is slightly worse
than [LLSW16] which uses explicit regularization. Notably, even under the sample complexity herein, the
iteration complexity given in [LLSW16] is still O (m/poly log(m)).
• Incoherence of spectral initialization: As in phase retrieval, Theorem 3 demonstrates that the estimates returned by the spectral method are incoherent with respect to both {aj } and {bj }. In contrast, [LLSW16] recommends a projection operation (via a linear program) to enforce incoherence of the
initial estimates, which is dispensable according to our theory.
• Contraction in $\|\cdot\|_{\mathrm{F}}$: It is easy to check that the Frobenius norm error satisfies $\|h^t x^{t*} - h^\natural x^{\natural*}\|_{\mathrm{F}} \lesssim \operatorname{dist}(z^t, z^\natural)$, and therefore Theorem 3 corroborates the empirical results shown in Figure 1(c).

4 Related work
Solving nonlinear systems of equations has received much attention in the past decade. Rather than directly
attacking the nonconvex formulation, convex relaxation lifts the object of interest into a higher dimensional
space and then attempts recovery via semidefinite programming (e.g. [RFP10, CSV13, CR09, ARR14]). This
has enjoyed great success in both theory and practice. Despite appealing statistical guarantees, semidefinite
programming is in general prohibitively expensive when processing large-scale datasets.
Nonconvex approaches, on the other end, have been under extensive study in the last few years, due
to their computational advantages. There is a growing list of statistical estimation problems for which
nonconvex approaches are guaranteed to find global optimal solutions, including but not limited to phase
retrieval [NJS13, CLS15, CC17], low-rank matrix sensing and completion [TBS+ 16, BNS16, PKCS16, CW15,
ZL15, GLM16], blind deconvolution and self-calibration [LLSW16, LS17, CJ16, LLB17, LLJB17], dictionary
learning [SQW17], tensor decomposition [GM17], joint alignment [CC16], learning shallow neural networks [SJL17, ZSJ+ 17], robust subspace learning [NNS+ 14, MZL17, LM14, CJN17]. In several problems
[SQW16,SQW17,GM17,GLM16,LWL+ 16,LT16,MBM16,MZL17], it is further suggested that the optimization landscape is benign under sufficiently large sample complexity, in the sense that all local minima are
globally optimal, and hence nonconvex iterative algorithms become promising in solving such problems. Below we review the three problems studied in this paper in more detail. Some state-of-the-art results are
summarized in Table 1.
• Phase retrieval. Candès et al. proposed PhaseLift [CSV13] to solve the quadratic systems of equations
based on convex programming. Specifically, it lifts the decision variable x\ into a rank-one matrix X \ =
x\ x\> and translates the quadratic constraints of x\ in (14) into linear constraints of X \ . By dropping
the rank constraint, the problem becomes convex [CSV13, SESS11, CL14, CCG15, CZ15, Tro15a]. Another
convex program PhaseMax [GS16,BR17,HV16,DTL17] operates in the natural parameter space via linear
programming, provided that an anchor vector is available. On the other hand, alternating minimization
[NJS13] with sample splitting has been shown to enjoy much better computational guarantee. In contrast,
Wirtinger Flow [CLS15] provides the first global convergence result for nonconvex methods without sample
splitting, whose statistical and computational guarantees are later improved by [CC17] via an adaptive
truncation strategy. Several other variants of WF are also proposed [CLM+ 16, KÖ16, Sol17], among
which an amplitude-based loss function has been investigated [WGE17, ZZLC17, WZG+ 16, WGSC17]. In
particular, [ZZLC17] demonstrates that the amplitude-based loss function has a better curvature, and
vanilla gradient descent can indeed converge with a constant step size at the order-wise optimal sample
complexity. A small sample of other nonconvex phase retrieval methods include [SBE14, SR15, CL16,
CFL15, DR17, GX16, Wei15, BEB17, TV17, CLW17, QZEW17], which are beyond the scope of this paper.
• Matrix completion. Nuclear norm minimization was studied in [CR09] as a convex relaxation paradigm to
solve the matrix completion problem. Under certain incoherence conditions imposed upon the ground truth
19
matrix, exact recovery is guaranteed under near-optimal sample complexity [CT10, Gro11, Rec11, Che15,
DR16]. Concurrently, several works [KMO10a, KMO10b, LB10, JNS13, HW14, HMLZ15, ZWL15, JN15,
TW16, JKN16, WCCL16, ZWL15] tackled the matrix completion problem via nonconvex approaches. In
particular, the seminal work by Keshavan et al. [KMO10a, KMO10b] pioneered the two-stage approach
that is widely adopted by later works. Sun and Luo [SL16] demonstrated the convergence of gradient
descent type methods for noiseless matrix completion with a regularized nonconvex loss function. Instead
of penalizing the loss function, [CW15, ZL16] employed projection to enforce the incoherence condition
throughout the execution of the algorithm. To the best of our knowledge, no rigorous guarantees have
been established for matrix completion without explicit regularization. A notable exception is [JKN16],
which uses unregularized stochastic gradient descent for matrix completion in the online setting. However,
the analysis is performed with fresh samples in each iteration. Our work closes the gap and makes the first
contribution towards understanding implicit regularization in gradient descent without sample splitting.
In addition, entrywise eigenvector perturbation has been studied by [JN15] and [AFWZ17] in order to
analyze SVD-based algorithms for matrix completion, which helps us establish theoretical guarantees for
the spectral initialization step.
• Blind deconvolution. In [ARR14], Ahmed et al. first proposed to invoke similar lifting ideas for blind
deconvolution, which translates the bilinear measurements (31) into a system of linear measurements of
a rank-one matrix X \ = h\ x\∗ . Near-optimal performance guarantees have been established for convex
relaxation [ARR14]. Under the same model, Li et al. [LLSW16] proposed a regularized gradient descent
algorithm that directly optimizes the nonconvex loss function (32) with a few regularization terms that
account for scaling ambiguity and incoherence. See [CJ16] for a related formulation. In [HH17], a Riemannian steepest descent method is developed that removes the regularization for scaling ambiguity, although
they still need to regularize for incoherence. In [AAH17], a linear program is proposed but requires exact knowledge of the signs of the signals. Blind deconvolution has also been studied for other models –
interested readers may refer to [Chi16, LS17, LLJB17, LS15, LTR16, ZLK+ 17, WC16].
On the other hand, our analysis framework is based on a leave-one-out perturbation argument. This
technique has been widely used to analyze high-dimensional problems with random designs, including but not
limited to robust M-estimation [EKBB+ 13,EK15], statistical inference for sparse regression [JM15], likelihood
ratio test in logistic regression [SCC17], phase synchronization [ZB17, AFWZ17], ranking from pairwise
comparisons [CFMW17], community recovery [AFWZ17], and recovering structured probability matrices
[Che17]. In particular, this technique results in tight performance guarantees for the generalized power
method [ZB17], the spectral method [AFWZ17, CFMW17], and convex programming approaches [EK15,
ZB17, SCC17, CFMW17]; however, it has not been applied to analyze nonconvex optimization algorithms.
Finally, we note that the notion of implicit regularization — broadly defined — arises in settings far
beyond the models and algorithms considered herein. For instance, it has been conjectured that in matrix
factorization, over-parameterized stochastic gradient descent effectively enforces certain norm constraints,
allowing it to converge to a minimal-norm solution as long as it starts from the origin [GWB+ 17]. The
stochastic gradient methods have also been shown to implicitly enforce Tikhonov regularization in several
statistical learning settings [LCR16]. More broadly, this phenomenon seems crucial in enabling efficient
training of deep neural networks [NTS14, NTSS17, ZBH+ 16, SHS17, KMN+ 16].
5
A general recipe for trajectory analysis
In this section, we sketch a general recipe for establishing performance guarantees of gradient descent, which
conveys the key idea for proving the main results of this paper. The main challenge is to demonstrate
that appropriate incoherence conditions are preserved throughout the trajectory of the algorithm. This
requires exploiting statistical independence of the samples in a careful manner, in conjunction with generic
optimization theory. Central to our approach is a leave-one-out perturbation argument, which allows us to
decouple the statistical dependency while controlling the component-wise incoherence measures.
General Recipe (a leave-one-out analysis)
Step 1: characterize restricted strong convexity and smoothness of f , and identify the region
of incoherence and contraction (RIC).
Step 2: introduce leave-one-out sequences {X t,(l) } and {H t,(l) } for each l, where {X t,(l) }
(resp. {H t,(l) }) is independent of any sample involving φl (resp. ψl );
Step 3: establish the incoherence condition for {X t } and {H t } via induction. Suppose the
iterates satisfy the claimed conditions in the tth iteration:
(a) show, via restricted strong convexity, that the true iterates (X t+1 , H t+1 ) and the
leave-one-out version (X t+1,(l) , H t+1,(l) ) are exceedingly close;
(b) use statistical independence to show that X t+1,(l) − X \ (resp. H t+1,(l) − H \ ) is incoherent w.r.t. φl (resp. ψl ), namely, kφ∗l (X t+1,(l) − X \ )k2 and kψl∗ (H t+1,(l) − H \ )k2
are both well-controlled;
(c) combine the bounds to establish the desired incoherence condition concerning
$\max_l \|\phi_l^* (X^{t+1} - X^\natural)\|_2$ and $\max_l \|\psi_l^* (H^{t+1} - H^\natural)\|_2$.

5.1 General model
Consider the following problem where the samples are collected in a bilinear/quadratic form as
yj = ψj∗ H \ X \∗ φj ,
1 ≤ j ≤ m,
(39)
where the objects of interest H \ , X \ ∈ Cn×r or Rn×r might be vectors or tall matrices taking either real
or complex values. The design vectors {ψj } and {φj } are in either Cn or Rn , and can be either random or
deterministic. This model is quite general and entails all three examples in this paper as special cases:
• Phase retrieval : H \ = X \ = x\ ∈ Rn , and ψj = φj = aj ;
• Matrix completion: H \ = X \ ∈ Rn×r and ψj , φj ∈ {e1 , · · · , en };
• Blind deconvolution: H \ = h\ ∈ CK , X \ = x\ ∈ CK , φj = aj , and ψj = bj .
For this setting, the empirical loss function is given by

$f(Z) := f(H, X) = \frac{1}{m} \sum_{j=1}^m \big| \psi_j^* H X^* \phi_j - y_j \big|^2,$
where we denote Z = (H, X). To minimize f (Z), we proceed with vanilla gradient descent
Z t+1 = Z t − η∇f Z t ,
∀t ≥ 0
following a standard spectral initialization, where η is the step size. As a remark, for complex-valued
problems, the gradient (resp. Hessian) should be understood as the Wirtinger gradient (resp. Hessian).
It is clear from (39) that Z \ = (H \ , X \ ) can only be recovered up to certain global ambiguity. For clarity
of presentation, we assume in this section that such ambiguity has already been taken care of via proper
global transformation.
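To see concretely how the three examples fit the template (39), here is a small illustrative generator; the function name and array conventions are our own assumptions.

```python
import numpy as np

def bilinear_measurements(H, X, Psi, Phi):
    """y_j = psi_j^* H X^* phi_j, cf. (39).

    Psi and Phi are m x n arrays whose j-th rows are psi_j^* and phi_j^*;
    H and X are n x r (pass vectors as (n, 1) arrays).
      - Phase retrieval:     H = X = x,     psi_j = phi_j = a_j
      - Matrix completion:   H = X,         psi_j, phi_j in {e_1, ..., e_n}
      - Blind deconvolution: H = h, X = x,  psi_j = b_j, phi_j = a_j
    """
    return np.einsum('jr,jr->j', Psi @ H, np.conj(Phi @ X))
```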
5.2
Outline of the recipe
We are now positioned to outline the general recipe, which entails the following steps.
• Step 1: characterizing local geometry in the RIC. Our first step is to characterize a region R —
which we term as the region of incoherence and contraction (RIC) — such that the Hessian matrix ∇2 f (Z)
obeys strong convexity and smoothness,
$0 \prec \alpha I \preceq \nabla^2 f(Z) \preceq \beta I, \qquad \forall Z \in \mathcal{R},$  (40)
or at least along certain directions (i.e. restricted strong convexity and smoothness), where β/α scales
slowly (or even remains bounded) with the problem size. As revealed by optimization theory, this geometric
property (40) immediately implies linear convergence with the contraction rate 1 − O(α/β) for a properly
chosen step size η, as long as all iterates stay within the RIC.
A natural question then arises: what does the RIC R look like? As it turns out, the RIC typically contains
all points such that the `2 error kZ − Z \ kF is not too large and
(incoherence)  $\max_j \big\| \phi_j^* \big( X - X^\natural \big) \big\|_2$  and  $\max_j \big\| \psi_j^* \big( H - H^\natural \big) \big\|_2$  are well-controlled.  (41)
In the three examples, the above incoherence condition translates to:
– Phase retrieval: $\max_j |a_j^\top (x - x^\natural)|$ is well-controlled;
– Matrix completion: $\|X - X^\natural\|_{2,\infty}$ is well-controlled;
– Blind deconvolution: $\max_j |a_j^* (x - x^\natural)|$ and $\max_j |b_j^* (h - h^\natural)|$ are well-controlled.
• Step 2: introducing the leave-one-out sequences. To justify that no iterates leave the RIC, we rely
on the construction of auxiliary sequences. Specifically, for each l, produce an auxiliary sequence {Z t,(l) =
(X t,(l) , H t,(l) )} such that X t,(l) (resp. H t,(l) ) is independent of any sample involving φl (resp. ψl ). As an
example, suppose that the φl ’s and the ψl ’s are independently and randomly generated. Then for each l,
one can consider a leave-one-out loss function
$f^{(l)}(Z) := \frac{1}{m} \sum_{j: j \ne l} \big| \psi_j^* H X^* \phi_j - y_j \big|^2$
that discards the lth sample. One further generates {Z t,(l) } by running vanilla gradient descent w.r.t. this
auxiliary loss function, with a spectral initialization that similarly discards the lth sample. Note that this
procedure is only introduced to facilitate analysis and is never implemented in practice.
• Step 3: establishing the incoherence condition. We are now ready to establish the incoherence
condition with the assistance of the auxiliary sequences. Usually the proof proceeds by induction, where
our goal is to show that the next iterate remains within the RIC, given that the current one does.
– Step 3(a): proximity between the original and the leave-one-out iterates. As one can anticipate, {Z t } and {Z t,(l) } remain “glued” to each other along the whole trajectory, since their constructions
differ by only a single sample. In fact, as long as the initial estimates stay sufficiently close, their gaps
will never explode. To intuitively see why, use the fact ∇f (Z t ) ≈ ∇f (l) (Z t ) to discover that
$Z^{t+1} - Z^{t+1,(l)} = Z^t - \eta \nabla f(Z^t) - \big( Z^{t,(l)} - \eta \nabla f^{(l)}(Z^{t,(l)}) \big) \approx \big( I - \eta \nabla^2 f(Z^t) \big) \big( Z^t - Z^{t,(l)} \big),$

which together with the strong convexity condition implies $\ell_2$ contraction

$\big\| Z^{t+1} - Z^{t+1,(l)} \big\|_{\mathrm{F}} \approx \big\| \big( I - \eta \nabla^2 f(Z^t) \big) \big( Z^t - Z^{t,(l)} \big) \big\|_{\mathrm{F}} \le \big\| Z^t - Z^{t,(l)} \big\|_{\mathrm{F}}.$
Indeed, (restricted) strong convexity is crucial in controlling the size of leave-one-out perturbations.
– Step 3(b): incoherence condition of the leave-one-out iterates. The fact that Z t+1 and
Z t+1,(l) are exceedingly close motivates us to control the incoherence of Z t+1,(l) − Z \ instead, for
1 ≤ l ≤ m. By construction, X t+1,(l) (resp. H t+1,(l) ) is statistically independent of any sample involving the design vector
leads to a more friendly analysis for controlling
φl (resp. ψl ), a fact that typically
φ∗l X t+1,(l) − X \ 2 and ψl∗ H t+1,(l) − H \ 2 .
– Step 3(c): combining the bounds. With these results in place, apply the triangle inequality to
obtain
$\big\| \phi_l^* \big( X^{t+1} - X^\natural \big) \big\|_2 \le \|\phi_l\|_2\, \big\| X^{t+1} - X^{t+1,(l)} \big\|_{\mathrm{F}} + \big\| \phi_l^* \big( X^{t+1,(l)} - X^\natural \big) \big\|_2,$
where the first term
is controlled in Step 3(a) and the second term is controlled in Step 3(b). The term
ψl∗ H t+1 − H \ 2 can be bounded similarly. By choosing the bounds properly, this establishes the
incoherence condition for all 1 ≤ l ≤ m as desired.
6
Analysis for phase retrieval
In this section, we instantiate the general recipe presented in Section 5 to phase retrieval and prove Theorem 1.
Similar to Section 7.1 in [CLS15], we are going to use $\eta_t = c_1/(\log n \cdot \|x^\natural\|_2^2)$ instead of $c_1/(\log n \cdot \|x^0\|_2^2)$
as the step size for analysis. This is because with high probability, kx0 k2 and kx\ k2 are rather close in the
relative sense. Without loss of generality, we assume throughout this section that x\ 2 = 1 and
dist(x0 , x\ ) = kx0 − x\ k2 ≤ kx0 + x\ k2 .
(42)
In addition, the gradient and the Hessian of f (·) for this problem (see (15)) are given respectively by
$\nabla f(x) = \frac{1}{m} \sum_{j=1}^m \big[ (a_j^\top x)^2 - y_j \big] (a_j^\top x)\, a_j,$  (43)

$\nabla^2 f(x) = \frac{1}{m} \sum_{j=1}^m \big[ 3 (a_j^\top x)^2 - y_j \big] a_j a_j^\top,$  (44)
which are useful throughout the proof.
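As a quick sanity check on (43) and (44), the following helper (ours, not the authors') evaluates both quantities:

```python
import numpy as np

def pr_grad_and_hessian(A, y, x):
    """Gradient (43) and Hessian (44) of the phase retrieval loss at x.

    A is m x n with rows a_j^T; y holds the m measurements.
    """
    m = y.size
    ax = A @ x
    grad = A.T @ ((ax ** 2 - y) * ax) / m
    hess = (A.T * (3.0 * ax ** 2 - y)) @ A / m
    return grad, hess
```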
6.1 Step 1: characterizing local geometry in the RIC
6.1.1 Local geometry
We start by characterizing the region that enjoys both strong convexity and the desired level of smoothness.
This is supplied in the following lemma, which plays a crucial role in the subsequent analysis.
Lemma 1 (Restricted strong convexity and smoothness for phase retrieval). Fix any sufficiently small
constant $C_1 > 0$ and any sufficiently large constant $C_2 > 0$, and suppose the sample complexity obeys
$m \ge c_0 n \log n$ for some sufficiently large constant $c_0 > 0$. With probability at least $1 - O(mn^{-10})$,

$\nabla^2 f(x) \succeq (1/2) \cdot I_n$

holds simultaneously for all $x \in \mathbb{R}^n$ satisfying $\|x - x^\natural\|_2 \le 2 C_1$; and

$\nabla^2 f(x) \preceq \big( 5 C_2 (10 + C_2) \log n \big) \cdot I_n$

holds simultaneously for all $x \in \mathbb{R}^n$ obeying

$\|x - x^\natural\|_2 \le 2 C_1,$  (45a)

$\max_{1 \le j \le m} \big| a_j^\top (x - x^\natural) \big| \le C_2 \sqrt{\log n}.$  (45b)

Proof. See Appendix A.1.
In words, Lemma 1 reveals that the Hessian matrix is positive definite and (almost) well-conditioned,
if one restricts attention to the set of points that are (i) not far away from the truth (cf. (45a)) and (ii)
incoherent with respect to the measurement vectors {aj }1≤j≤m (cf. (45b)).
6.1.2
Error contraction
As we pointed out before, the nice local geometry enables $\ell_2$ contraction, which we formalize below.
Lemma 2. With probability exceeding $1 - O(mn^{-10})$, one has

$\|x^{t+1} - x^\natural\|_2 \le (1 - \eta/2)\, \|x^t - x^\natural\|_2$  (46)
for any xt obeying the conditions (45), provided that the step size satisfies 0 < η ≤ 1/ [5C2 (10 + C2 ) log n].
Proof. This proof applies the standard argument when establishing the `2 error contraction of gradient
descent for strongly convex and smooth functions. See Appendix A.2.
With the help of Lemma 2, we can turn the proof of Theorem 1 into ensuring that the trajectory
{xt }0≤t≤n lies in the RIC specified by (47).8 This is formally stated in the next lemma.
Lemma 3. Suppose for all 0 ≤ t ≤ T0 := n, the trajectory {xt } falls within the region of incoherence and
contraction (termed the RIC), namely,
$\|x^t - x^\natural\|_2 \le C_1,$  (47a)

$\max_{1 \le l \le m} \big| a_l^\top (x^t - x^\natural) \big| \le C_2 \sqrt{\log n},$  (47b)
then the claims in Theorem 1 hold true. Here and throughout this section, C1 , C2 > 0 are two absolute
constants as specified in Lemma 1.
Proof. See Appendix A.3.
6.2
Step 2: introducing the leave-one-out sequences
In comparison to the `2 error bound (47a) that captures the overall loss, the incoherence hypothesis (47b) —
which concerns sample-wise control of the empirical risk — is more complicated to establish. This is partly
due to the statistical dependence between xt and the sampling vectors {al }. As described in the general
recipe, the key idea is the introduction of a leave-one-out version of the WF iterates, which removes a single
measurement from consideration.
To be precise, for each 1 ≤ l ≤ m, we define the leave-one-out empirical loss function as
$f^{(l)}(x) := \frac{1}{4m} \sum_{j: j \ne l} \big[ (a_j^\top x)^2 - y_j \big]^2,$  (48)

and the auxiliary trajectory $\{x^{t,(l)}\}_{t \ge 0}$ is constructed by running WF w.r.t. $f^{(l)}(x)$. In addition, the spectral
initialization $x^{0,(l)}$ is computed based on the rescaled leading eigenvector of the leave-one-out data matrix

$Y^{(l)} := \frac{1}{m} \sum_{j: j \ne l} y_j a_j a_j^\top.$  (49)
Clearly, the entire sequence xt,(l) t≥0 is independent of the lth sampling vector al . This auxiliary procedure
is formally described in Algorithm 4.
6.3
Step 3: establishing the incoherence condition by induction
As revealed by Lemma 3, it suffices to prove that the iterates $\{x^t\}_{0 \le t \le T_0}$ satisfy (47) with high probability.
Our proof will be inductive in nature. For the sake of clarity, we list all the induction hypotheses:
$\|x^t - x^\natural\|_2 \le C_1,$  (51a)

$\max_{1 \le l \le m} \|x^t - x^{t,(l)}\|_2 \le C_3 \sqrt{\frac{\log n}{n}},$  (51b)

$\max_{1 \le j \le m} \big| a_j^\top (x^t - x^\natural) \big| \le C_2 \sqrt{\log n}.$  (51c)
Here C3 > 0 is some universal constant. The induction on (51a), that is,
$\|x^{t+1} - x^\natural\|_2 \le C_1,$  (52)
has already been established in Lemma 2. This subsection is devoted to establishing (51b) and (51c) for the
(t + 1)th iteration, assuming that (51) holds true up to the tth iteration. We defer the justification of the
base case (i.e. initialization at t = 0) to Section 6.4.
8 Here,
we deliberately change 2C1 in (45a) to C1 in the definition of the RIC (47a) to ensure the correctness of the analysis.
Algorithm 4 The $l$th leave-one-out sequence for phase retrieval
Input: $\{a_j\}_{1 \le j \le m, j \ne l}$ and $\{y_j\}_{1 \le j \le m, j \ne l}$.
Spectral initialization: let $\lambda_1(Y^{(l)})$ and $\tilde{x}^{0,(l)}$ be the leading eigenvalue and eigenvector of

$Y^{(l)} = \frac{1}{m} \sum_{j: j \ne l} y_j a_j a_j^\top,$

respectively, and set

$x^{0,(l)} = \begin{cases} \sqrt{\lambda_1(Y^{(l)})/3}\; \tilde{x}^{0,(l)}, & \text{if } \|\tilde{x}^{0,(l)} - x^\natural\|_2 \le \|\tilde{x}^{0,(l)} + x^\natural\|_2, \\ -\sqrt{\lambda_1(Y^{(l)})/3}\; \tilde{x}^{0,(l)}, & \text{else.} \end{cases}$

Gradient updates: for $t = 0, 1, 2, \ldots, T - 1$ do

$x^{t+1,(l)} = x^{t,(l)} - \eta_t \nabla f^{(l)}\big( x^{t,(l)} \big).$  (50)
• Step 3(a): proximity between the original and the leave-one-out iterates. The leave-one-out
sequence {xt,(l) } behaves similarly to the true WF iterates {xt } while maintaining statistical independence
with al , a key fact that allows us to control the incoherence of lth leave-one-out sequence w.r.t. al . We
will formally quantify the gap between xt+1 and xt+1,(l) in the following lemma, which establishes the
induction in (51b).
Lemma 4. Under the hypotheses (51), with probability at least 1 − O(mn−10 ),
$\max_{1 \le l \le m} \big\| x^{t+1} - x^{t+1,(l)} \big\|_2 \le C_3 \sqrt{\frac{\log n}{n}},$  (53)

as long as the sample size obeys $m \gtrsim n \log n$ and the stepsize $0 < \eta \le 1/[5 C_2 (10 + C_2) \log n]$.
Proof. The proof relies heavily on the restricted strong convexity (see Lemma 1) and is deferred to Appendix A.4.
• Step 3(b): incoherence of the leave-one-out iterates. By construction, xt+1,(l) is statistically
independent of the sampling vector al . One can thus invoke the standard Gaussian
concentration results
and the union bound to derive that with probability at least 1 − O mn−10 ,
$\max_{1 \le l \le m} \big| a_l^\top \big( x^{t+1,(l)} - x^\natural \big) \big| \le 5 \sqrt{\log n}\, \big\| x^{t+1,(l)} - x^\natural \big\|_2 \overset{\text{(i)}}{\le} 5 \sqrt{\log n} \Big( \big\| x^{t+1,(l)} - x^{t+1} \big\|_2 + \big\| x^{t+1} - x^\natural \big\|_2 \Big) \overset{\text{(ii)}}{\le} 5 \sqrt{\log n} \Big( C_3 \sqrt{\frac{\log n}{n}} + C_1 \Big) \le C_4 \sqrt{\log n}$  (54)
holds for some constant C4 ≥ 6C1 > 0 and n sufficiently large. Here, (i) comes from the triangle inequality,
and (ii) arises from the proximity bound (53) and the condition (52).
• Step 3(c): combining the bounds. We are now prepared to establish (51c) for the (t + 1)th iteration.
Specifically,
$\max_{1 \le l \le m} \big| a_l^\top \big( x^{t+1} - x^\natural \big) \big| \le \max_{1 \le l \le m} \big| a_l^\top \big( x^{t+1} - x^{t+1,(l)} \big) \big| + \max_{1 \le l \le m} \big| a_l^\top \big( x^{t+1,(l)} - x^\natural \big) \big|$

$\overset{\text{(i)}}{\le} \max_{1 \le l \le m} \|a_l\|_2\, \big\| x^{t+1} - x^{t+1,(l)} \big\|_2 + C_4 \sqrt{\log n} \overset{\text{(ii)}}{\le} \sqrt{6n} \cdot C_3 \sqrt{\frac{\log n}{n}} + C_4 \sqrt{\log n} \le C_2 \sqrt{\log n},$  (55)

where (i) follows from the Cauchy-Schwarz inequality and (54), the inequality (ii) is a consequence of (53)
and (98), and the last inequality holds as long as $C_2/(C_3 + C_4)$ is sufficiently large.
Using mathematical induction and the union bound, we establish (51) for all t ≤ T0 = n with high probability.
This in turn concludes the proof of Theorem 1, as long as the hypotheses are valid for the base case.
6.4
The base case: spectral initialization
In the end, we return to verify the induction hypotheses for the base case (t = 0), i.e. the spectral initialization
obeys (51). The following lemma justifies (51a) by choosing δ sufficiently small.
Lemma 5. Fix any small constant δ > 0, and suppose m > c0 n log n for some large constant c0 > 0.
Consider the two vectors $x^0$ and $\tilde{x}^0$ as defined in Algorithm 1, and suppose without loss of generality that
(42) holds. Then with probability exceeding $1 - O(n^{-10})$, one has

$\|Y - \mathbb{E}[Y]\| \le \delta,$  (56)

$\|x^0 - x^\natural\|_2 \le 2\delta$  and  $\|\tilde{x}^0 - x^\natural\|_2 \le \sqrt{2}\,\delta.$  (57)
Proof. This result follows directly from the Davis-Kahan sinΘ theorem. See Appendix A.5.
We then move on to justifying (51b), the proximity between the original and leave-one-out iterates for
t = 0.
Lemma 6. Suppose m > c0 n log n for some large constant c0 > 0. Then with probability at least 1 − O(mn−10 ),
one has
$\max_{1 \le l \le m} \big\| x^0 - x^{0,(l)} \big\|_2 \le C_3 \sqrt{\frac{\log n}{n}}.$  (58)
Proof. This is also a consequence of the Davis-Kahan sinΘ theorem. See Appendix A.6.
The final claim (51c) can be proved using the same argument as in deriving (55), and hence is omitted.
7
Analysis for matrix completion
In this section, we instantiate the general recipe presented in Section 5 to matrix completion and prove
Theorem 2. Before continuing, we first gather a few useful facts regarding the loss function in (23). The
gradient of it is given by

$\nabla f(X) = \frac{1}{p} \mathcal{P}_\Omega\big[ X X^\top - \big( M^\natural + E \big) \big] X.$  (59)

We define the expected gradient (with respect to the sampling set $\Omega$) to be

$\nabla F(X) = \big[ X X^\top - \big( M^\natural + E \big) \big] X,$

and also the (expected) gradient without noise to be

$\nabla f_{\mathrm{clean}}(X) = \frac{1}{p} \mathcal{P}_\Omega\big( X X^\top - M^\natural \big) X$  and  $\nabla F_{\mathrm{clean}}(X) = \big( X X^\top - M^\natural \big) X.$  (60)

In addition, we need the Hessian $\nabla^2 f_{\mathrm{clean}}(X)$, which is represented by an $nr \times nr$ matrix. Simple calculations reveal that for any $V \in \mathbb{R}^{n \times r}$,

$\operatorname{vec}(V)^\top \nabla^2 f_{\mathrm{clean}}(X) \operatorname{vec}(V) = \frac{1}{2p} \big\| \mathcal{P}_\Omega\big( V X^\top + X V^\top \big) \big\|_{\mathrm{F}}^2 + \frac{1}{p} \big\langle \mathcal{P}_\Omega\big( X X^\top - M^\natural \big),\, V V^\top \big\rangle,$  (61)
where vec(V ) ∈ Rnr denotes the vectorization of V .
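A direct NumPy transcription of (59) and (60) (our own helper, with a boolean mask standing in for $\Omega$):

```python
import numpy as np

def mc_gradients(X, M_star, E, mask, p):
    """Empirical gradient (59) and its noiseless/expected counterparts (60)."""
    Y = M_star + E
    grad_f = (mask * (X @ X.T - Y)) @ X / p             # (1/p) P_Omega(X X^T - Y) X
    grad_F = (X @ X.T - Y) @ X                          # expected gradient
    grad_f_clean = (mask * (X @ X.T - M_star)) @ X / p  # noiseless empirical gradient
    grad_F_clean = (X @ X.T - M_star) @ X               # noiseless expected gradient
    return grad_f, grad_F, grad_f_clean, grad_F_clean
```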
7.1 Step 1: characterizing local geometry in the RIC
7.1.1 Local geometry
The first step is to characterize the region where the empirical loss function enjoys restricted strong convexity
and smoothness in an appropriate sense. This is formally stated in the following lemma.
Lemma 7 (Restricted strong convexity and smoothness for matrix completion). Suppose that the sample
size obeys n2 p ≥ Cκ2 µrn log n for some sufficiently large constant C > 0. Then with probability at least
1 − O n−10 , the Hessian ∇2 fclean (X) as defined in (61) obeys
$\operatorname{vec}(V)^\top \nabla^2 f_{\mathrm{clean}}(X) \operatorname{vec}(V) \ge \frac{\sigma_{\min}}{2} \|V\|_{\mathrm{F}}^2$  and  $\big\| \nabla^2 f_{\mathrm{clean}}(X) \big\| \le \frac{5}{2} \sigma_{\max}$  (62)

for all $X$ and $V = Y H_Y - Z$, with $H_Y := \operatorname*{arg\,min}_{R \in \mathcal{O}^{r \times r}} \|Y R - Z\|_{\mathrm{F}}$, satisfying:

$\|X - X^\natural\|_{2,\infty} \le \epsilon\, \|X^\natural\|_{2,\infty},$  (63a)

$\|Z - X^\natural\| \le \delta \|X^\natural\|,$  (63b)

where $\epsilon \lesssim 1/\sqrt{\kappa^3 \mu r \log^2 n}$ and $\delta \lesssim 1/\kappa$.
Proof. See Appendix B.1.
Lemma 7 reveals that the Hessian matrix is well-conditioned in a neighborhood close to X \ that remains
incoherent measured in the `2 /`∞ norm (cf. (63a)), and along directions that point towards points which
are not far away from the truth in the spectral norm (cf. (63b)).
Remark 4. The second condition (63b) is characterized using the spectral norm k·k, while in previous works
this is typically presented in the Frobenius norm k · kF . It is also worth noting that the Hessian matrix —
even in the infinite-sample and noiseless case — is rank-deficient and cannot be positive definite. As a result,
we resort to the form of strong convexity by restricting attention to certain directions (see the conditions on
V ).
7.1.2
Error contraction
Our goal is to demonstrate the error bounds (28) measured in three different norms. Notably, as long as
the iterates satisfy (28) at the $t$th iteration, then $\|X^t \widehat{H}^t - X^\natural\|_{2,\infty}$ is sufficiently small. Under our sample
complexity assumption, $X^t \widehat{H}^t$ satisfies the $\ell_2/\ell_\infty$ condition (63a) required in Lemma 7. Consequently, we
can invoke Lemma 7 to arrive at the following error contraction result.
Lemma 8 (Contraction w.r.t. the Frobenius norm). Suppose n2 p ≥ Cκ3 µ3 r3 n log3 n and the noise satisfies
(27). If the iterates satisfy (28a) and (28b) at the tth iteration, then with probability at least 1 − O(n−10 ),
$\big\| X^{t+1} \widehat{H}^{t+1} - X^\natural \big\|_{\mathrm{F}} \le \Big( C_4 \rho^{t+1} \mu r \frac{1}{\sqrt{np}} + C_1 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|_{\mathrm{F}}$
holds as long as 0 < η ≤ 2/(25κσmax ), 1 − (σmin /4) · η ≤ ρ < 1, and C1 is sufficiently large.
Proof. The proof is built upon Lemma 7. See Appendix B.2.
Further, if the current iterate satisfies all three conditions in (28), then we can derive a stronger sense of
error contraction, namely, contraction in terms of the spectral norm.
Lemma 9 (Contraction w.r.t. the spectral norm). Suppose n2 p ≥ Cκ3 µ3 r3 n log3 n and the noise satisfies
(27). If the iterates satisfy (28) at the tth iteration, then
$\big\| X^{t+1} \widehat{H}^{t+1} - X^\natural \big\| \le \Big( C_9 \rho^{t+1} \mu r \frac{1}{\sqrt{np}} + C_{10} \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|$  (64)
holds with probability at least 1 − O(n−10 ), provided that 0 < η ≤ 1/ (2σmax ) and 1 − (σmin /3) · η ≤ ρ < 1.
Proof. The key observation is this: the iterate that proceeds according to the population-level gradient
reduces the error w.r.t. $\|\cdot\|$, namely,

$\big\| X^t \widehat{H}^t - \eta \nabla F_{\mathrm{clean}}\big( X^t \widehat{H}^t \big) - X^\natural \big\| < \big\| X^t \widehat{H}^t - X^\natural \big\|,$

as long as $X^t \widehat{H}^t$ is sufficiently close to the truth. Notably, the orthonormal matrix $\widehat{H}^t$ is still chosen
to be the one that minimizes the $\|\cdot\|_{\mathrm{F}}$ distance (as opposed to $\|\cdot\|$), which yields a symmetry property
$X^{\natural\top} X^t \widehat{H}^t = \big( X^t \widehat{H}^t \big)^\top X^\natural$, crucial for our analysis. See Appendix B.3 for details.
7.2
Step 2: introducing the leave-one-out sequences
In order to establish the incoherence properties (28b) for the entire trajectory, which is difficult to deal with
directly due to the complicated
statistical dependence, we introduce a collection of leave-one-out versions
of $\{X^t\}_{t \ge 0}$, denoted by $\{X^{t,(l)}\}_{t \ge 0}$ for each $1 \le l \le n$. Specifically, $\{X^{t,(l)}\}_{t \ge 0}$ denotes the iterates of gradient
descent operating on the auxiliary loss function
$f^{(l)}(X) := \frac{1}{4p} \big\| \mathcal{P}_{\Omega_{-l}}\big( X X^\top - \big( M^\natural + E \big) \big) \big\|_{\mathrm{F}}^2 + \frac{1}{4} \big\| \mathcal{P}_l\big( X X^\top - M^\natural \big) \big\|_{\mathrm{F}}^2.$  (65)
Here, $\mathcal{P}_{\Omega_l}$ (resp. $\mathcal{P}_{\Omega_{-l}}$ and $\mathcal{P}_l$) represents the orthogonal projection onto the subspace of matrices which
vanish outside of the index set $\Omega_l := \{(i,j) \in \Omega \mid i = l \text{ or } j = l\}$ (resp. $\Omega_{-l} := \{(i,j) \in \Omega \mid i \ne l, j \ne l\}$ and
$\{(i,j) \mid i = l \text{ or } j = l\}$); that is, for any matrix $M$,

$[\mathcal{P}_{\Omega_l}(M)]_{i,j} = \begin{cases} M_{i,j}, & \text{if } (i = l \text{ or } j = l) \text{ and } (i,j) \in \Omega, \\ 0, & \text{else,} \end{cases}$  (66)

$[\mathcal{P}_{\Omega_{-l}}(M)]_{i,j} = \begin{cases} M_{i,j}, & \text{if } i \ne l \text{ and } j \ne l \text{ and } (i,j) \in \Omega, \\ 0, & \text{else,} \end{cases}$  and  $[\mathcal{P}_l(M)]_{i,j} = \begin{cases} 0, & \text{if } i \ne l \text{ and } j \ne l, \\ M_{i,j}, & \text{if } i = l \text{ or } j = l. \end{cases}$  (67)

The gradient of the leave-one-out loss function (65) is given by

$\nabla f^{(l)}(X) = \frac{1}{p} \mathcal{P}_{\Omega_{-l}}\big( X X^\top - \big( M^\natural + E \big) \big) X + \mathcal{P}_l\big( X X^\top - M^\natural \big) X.$  (68)
The full algorithm to obtain the leave-one-out sequence {X t,(l) }t≥0 (including spectral initialization) is
summarized in Algorithm 5.
Algorithm 5 The $l$th leave-one-out sequence for matrix completion
Input: $Y = [Y_{i,j}]_{1 \le i,j \le n}$, $M^\natural_{\cdot,l}$, $M^\natural_{l,\cdot}$, $r$, $p$.
Spectral initialization: Let $U^{0,(l)} \Sigma^{(l)} U^{0,(l)\top}$ be the top-$r$ eigendecomposition of

$M^{(l)} := \frac{1}{p} \mathcal{P}_{\Omega_{-l}}(Y) + \mathcal{P}_l\big( M^\natural \big) = \frac{1}{p} \mathcal{P}_{\Omega_{-l}}\big( M^\natural + E \big) + \mathcal{P}_l\big( M^\natural \big)$

with $\mathcal{P}_{\Omega_{-l}}$ and $\mathcal{P}_l$ defined in (67), and set $X^{0,(l)} = U^{0,(l)} \big( \Sigma^{(l)} \big)^{1/2}$.
Gradient updates: for $t = 0, 1, 2, \ldots, T - 1$ do

$X^{t+1,(l)} = X^{t,(l)} - \eta_t \nabla f^{(l)}\big( X^{t,(l)} \big).$  (69)
Remark 5. Rather than simply dropping all samples in the lth row/column, we replace the lth row/column
with their respective population means. In other words, the leave-one-out gradient forms an unbiased
surrogate for the true gradient, which is particularly important in ensuring high estimation accuracy.
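The projections (66)-(67) and the leave-one-out gradient (68) translate into a few lines of NumPy; the sketch below is our own illustration with our own naming.

```python
import numpy as np

def loo_operators(mask, l):
    """Boolean masks for P_{Omega_l}, P_{Omega_{-l}} and P_l, cf. (66)-(67)."""
    n = mask.shape[0]
    line_l = np.zeros((n, n), dtype=bool)
    line_l[l, :] = True
    line_l[:, l] = True
    return mask & line_l, mask & ~line_l, line_l

def loo_gradient(X, M_star, E, mask, p, l):
    """Gradient (68) of the leave-one-out loss (65)."""
    _, omega_minus_l, P_l = loo_operators(mask, l)
    R_noisy = X @ X.T - (M_star + E)
    R_clean = X @ X.T - M_star
    return (omega_minus_l * R_noisy) @ X / p + (P_l * R_clean) @ X
```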
7.3
Step 3: establishing the incoherence condition by induction
We will continue the proof of Theorem 2 in an inductive manner. As seen in Section 7.1.2, the induction
hypotheses (28a) and (28c) hold for the (t+1)th iteration as long as (28) holds at the tth iteration. Therefore,
we are left with proving the incoherence hypothesis (28b) for all 0 ≤ t ≤ T = O(n5 ). For clarity of analysis, it
is crucial to maintain a list of induction hypotheses, which includes a few more hypotheses that complement
(28), and is given below.
$\big\| X^t \widehat{H}^t - X^\natural \big\|_{\mathrm{F}} \le \Big( C_4 \rho^t \mu r \frac{1}{\sqrt{np}} + C_1 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|_{\mathrm{F}},$  (70a)

$\big\| X^t \widehat{H}^t - X^\natural \big\|_{2,\infty} \le \Big( C_5 \rho^t \mu r \sqrt{\frac{\log n}{np}} + C_8 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}} \Big) \|X^\natural\|_{2,\infty},$  (70b)

$\big\| X^t \widehat{H}^t - X^\natural \big\| \le \Big( C_9 \rho^t \mu r \frac{1}{\sqrt{np}} + C_{10} \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|,$  (70c)

$\max_{1 \le l \le n} \big\| X^t \widehat{H}^t - X^{t,(l)} R^{t,(l)} \big\|_{\mathrm{F}} \le \Big( C_3 \rho^t \mu r \sqrt{\frac{\log n}{np}} + C_7 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}} \Big) \|X^\natural\|_{2,\infty},$  (70d)

$\max_{1 \le l \le n} \big\| \big( X^{t,(l)} \widehat{H}^{t,(l)} - X^\natural \big)_{l,\cdot} \big\|_2 \le \Big( C_2 \rho^t \mu r \frac{1}{\sqrt{np}} + C_6 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}} \Big) \|X^\natural\|_{2,\infty}$  (70e)

hold for some absolute constants $0 < \rho < 1$ and $C_1, \cdots, C_{10} > 0$. Here, $\widehat{H}^{t,(l)}$ and $R^{t,(l)}$ are orthonormal matrices defined by

$\widehat{H}^{t,(l)} := \operatorname*{arg\,min}_{R \in \mathcal{O}^{r \times r}} \big\| X^{t,(l)} R - X^\natural \big\|_{\mathrm{F}},$  (71)

$R^{t,(l)} := \operatorname*{arg\,min}_{R \in \mathcal{O}^{r \times r}} \big\| X^{t,(l)} R - X^t \widehat{H}^t \big\|_{\mathrm{F}}.$  (72)
Clearly, the first three hypotheses (70a)-(70c) constitute the conclusion of Theorem 2, i.e. (28). The last two
hypotheses (70d) and (70e) are auxiliary properties connecting the true iterates and the auxiliary leave-oneout sequences. Moreover, we summarize below several immediate consequences of (70), which will be useful
throughout.
Lemma 10. Suppose $n^2 p \gg \kappa^3 \mu^2 r^2 n \log n$ and the noise satisfies (27). Under the hypotheses (70), one has

$\big\| X^t \widehat{H}^t - X^{t,(l)} \widehat{H}^{t,(l)} \big\|_{\mathrm{F}} \le 5\kappa\, \big\| X^t \widehat{H}^t - X^{t,(l)} R^{t,(l)} \big\|_{\mathrm{F}},$  (73a)

$\big\| X^{t,(l)} \widehat{H}^{t,(l)} - X^\natural \big\|_{\mathrm{F}} \le \big\| X^{t,(l)} R^{t,(l)} - X^\natural \big\|_{\mathrm{F}} \le \Big( 2 C_4 \rho^t \mu r \frac{1}{\sqrt{np}} + 2 C_1 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|_{\mathrm{F}},$  (73b)

$\big\| X^{t,(l)} R^{t,(l)} - X^\natural \big\|_{2,\infty} \le \Big\{ (C_3 + C_5) \rho^t \mu r \sqrt{\frac{\log n}{np}} + (C_8 + C_7) \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}} \Big\} \|X^\natural\|_{2,\infty},$  (73c)

$\big\| X^{t,(l)} \widehat{H}^{t,(l)} - X^\natural \big\| \le \Big( 2 C_9 \rho^t \mu r \frac{1}{\sqrt{np}} + 2 C_{10} \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n}{p}} \Big) \|X^\natural\|.$  (73d)
In particular, (73a) follows from hypotheses (70c) and (70d).
Proof. See Appendix B.4.
In the sequel, we follow the general recipe outlined in Section 5 to establish the induction hypotheses.
We only need to establish (70b), (70d) and (70e) for the (t + 1)th iteration, since (70a) and (70c) have been
established in Section 7.1.2. Specifically, we resort to the leave-one-out iterates by showing that: first, the
true and the auxiliary iterates remain exceedingly close throughout; second, the lth leave-one-out sequence
stays incoherent with el due to statistical independence.
• Step 3(a): proximity between the original and the leave-one-out iterates. We demonstrate
that X t+1 is well approximated by X t+1,(l) , up to proper orthonormal transforms. This is precisely the
induction hypothesis (70d) for the (t + 1)th iteration.
Lemma 11. Suppose the sample complexity satisfies $n^2 p \gg \kappa^4 \mu^3 r^3 n \log^3 n$ and the noise satisfies (27).
Under the hypotheses (70) for the $t$th iteration, we have

$\big\| X^{t+1} \widehat{H}^{t+1} - X^{t+1,(l)} R^{t+1,(l)} \big\|_{\mathrm{F}} \le C_3 \rho^{t+1} \mu r \sqrt{\frac{\log n}{np}}\, \|X^\natural\|_{2,\infty} + C_7 \frac{\sigma}{\sigma_{\min}} \sqrt{\frac{n \log n}{p}}\, \|X^\natural\|_{2,\infty}$  (74)
with probability at least 1 − O(n−10 ), provided that 0 < η ≤ 2/(25κσmax ), 1 − (σmin /5) · η ≤ ρ < 1 and
C7 > 0 is sufficiently large.
Proof. The fact that this difference is well-controlled relies heavily on the benign geometric property of the
Hessian revealed by Lemma 7. Two important remarks are in order: (1) both points $X^t \widehat{H}^t$ and $X^{t,(l)} R^{t,(l)}$
satisfy (63a); (2) the difference $X^t \widehat{H}^t - X^{t,(l)} R^{t,(l)}$ forms a valid direction for restricted strong convexity.
These two properties together allow us to invoke Lemma 7. See Appendix B.5.
• Step 3(b): incoherence of the leave-one-out iterates. Given that X t+1,(l) is sufficiently close to
X t+1 , we turn our attention to establishing the incoherence of this surrogate X t+1,(l) w.r.t. el . This
amounts to proving the induction hypothesis (70e) for the (t + 1)th iteration.
Lemma 12. Suppose the sample complexity meets $n^2p \gg \kappa^3\mu^3 r^3 n\log^3 n$ and the noise satisfies (27). Under the hypotheses (70) for the $t$th iteration, one has
$$\big\|\big(X^{t+1,(l)}\widehat{H}^{t+1,(l)} - X^\natural\big)_{l,\cdot}\big\|_2 \le C_2\,\rho^{t+1}\mu r\,\frac{1}{\sqrt{np}}\,\big\|X^\natural\big\|_{2,\infty} + C_6\frac{\sigma}{\sigma_{\min}}\sqrt{\frac{n\log n}{p}}\,\big\|X^\natural\big\|_{2,\infty} \tag{75}$$
with probability at least $1 - O(n^{-10})$, as long as $0 < \eta \le 1/\sigma_{\max}$, $1 - (\sigma_{\min}/3)\cdot\eta \le \rho < 1$, $C_2 \gg \kappa C_9$ and $C_6 \gg \kappa C_{10}/\sqrt{\log n}$.
Proof. The key observation is that $X^{t+1,(l)}$ is statistically independent of any sample in the $l$th row/column of the matrix. Since there are on the order of $np$ samples in each row/column, we obtain enough information to establish the desired incoherence property. See Appendix B.6.
• Step 3(c): combining the bounds. The inequalities (70d) and (70e) taken collectively allow us to establish the induction hypothesis (70b). Specifically, for every $1 \le l \le n$, write
$$\big(X^{t+1}\widehat{H}^{t+1} - X^\natural\big)_{l,\cdot} = \big(X^{t+1}\widehat{H}^{t+1} - X^{t+1,(l)}\widehat{H}^{t+1,(l)}\big)_{l,\cdot} + \big(X^{t+1,(l)}\widehat{H}^{t+1,(l)} - X^\natural\big)_{l,\cdot},$$
and the triangle inequality gives
$$\big\|\big(X^{t+1}\widehat{H}^{t+1} - X^\natural\big)_{l,\cdot}\big\|_2 \le \big\|X^{t+1}\widehat{H}^{t+1} - X^{t+1,(l)}\widehat{H}^{t+1,(l)}\big\|_{\mathrm{F}} + \big\|\big(X^{t+1,(l)}\widehat{H}^{t+1,(l)} - X^\natural\big)_{l,\cdot}\big\|_2. \tag{76}$$
The second term has already been bounded by (75). Since we have established the induction hypotheses (70c) and (70d) for the $(t+1)$th iteration, the first term can be bounded by (73a) for the $(t+1)$th iteration, i.e.
$$\big\|X^{t+1}\widehat{H}^{t+1} - X^{t+1,(l)}\widehat{H}^{t+1,(l)}\big\|_{\mathrm{F}} \le 5\kappa\,\big\|X^{t+1}\widehat{H}^{t+1} - X^{t+1,(l)}R^{t+1,(l)}\big\|_{\mathrm{F}}.$$
Plugging the above inequality, (74) and (75) into (76), we have
$$\big\|X^{t+1}\widehat{H}^{t+1} - X^\natural\big\|_{2,\infty} \le 5\kappa\Big(C_3\,\rho^{t+1}\mu r\sqrt{\frac{\log n}{np}}\,\big\|X^\natural\big\|_{2,\infty} + \frac{C_7}{\sigma_{\min}}\sigma\sqrt{\frac{n\log n}{p}}\,\big\|X^\natural\big\|_{2,\infty}\Big) + C_2\,\rho^{t+1}\mu r\,\frac{1}{\sqrt{np}}\,\big\|X^\natural\big\|_{2,\infty} + \frac{C_6}{\sigma_{\min}}\sigma\sqrt{\frac{n\log n}{p}}\,\big\|X^\natural\big\|_{2,\infty}$$
$$\le C_5\,\rho^{t+1}\mu r\sqrt{\frac{\log n}{np}}\,\big\|X^\natural\big\|_{2,\infty} + \frac{C_8}{\sigma_{\min}}\sigma\sqrt{\frac{n\log n}{p}}\,\big\|X^\natural\big\|_{2,\infty}$$
as long as $C_5/(\kappa C_3 + C_2)$ and $C_8/(\kappa C_7 + C_6)$ are sufficiently large. This establishes the induction hypothesis (70b) and finishes the proof.
7.4 The base case: spectral initialization
Finally, we return to check the base case; namely, we aim to show that the spectral initialization satisfies the induction hypotheses (70a)-(70e) for $t = 0$. This is accomplished via the following lemma.
Lemma 13. Suppose the sample size obeys $n^2p \gg \mu^2 r^2 n\log n$, the noise satisfies (27), and $\kappa = \sigma_{\max}/\sigma_{\min} \asymp 1$. Then with probability at least $1 - O(n^{-10})$, the claims in (70a)-(70e) hold simultaneously for $t = 0$.
Proof. This follows by invoking the Davis-Kahan $\sin\Theta$ theorem [DK70] as well as the entrywise eigenvector perturbation analysis in [AFWZ17]. We defer the proof to Appendix B.7.
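For readers who wish to experiment with the base case numerically, the sketch below implements a standard rank-$r$ spectral initialization for matrix completion, i.e. the top-$r$ eigendecomposition of the rescaled observation $p^{-1}\mathcal{P}_\Omega(M)$. This mirrors the construction analyzed in Lemma 13, but the exact definition used in the paper appears in an earlier section; the code and its names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def spectral_init_mc(M_obs, mask, p, r):
    """Rank-r spectral initialization for matrix completion (a sketch).

    M_obs : observed symmetric matrix, zero outside the sample set
    mask  : boolean sampling pattern Omega
    p     : sampling rate, r : target rank
    Returns X0 built from the top-r eigenpairs of (1/p) * P_Omega(M).
    """
    Y = (mask * M_obs) / p                      # (1/p) * P_Omega(M)
    vals, vecs = np.linalg.eigh(Y)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:r]            # keep the top-r eigenpairs
    lam, U = vals[idx], vecs[:, idx]
    return U * np.sqrt(np.maximum(lam, 0.0))    # X0 = U0 * (Lambda0)^{1/2}

# toy usage with a random rank-r ground truth and a symmetric Bernoulli mask
rng = np.random.default_rng(1)
n, r, p = 200, 2, 0.3
Xs = rng.standard_normal((n, r))
M = Xs @ Xs.T
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask).T
X0 = spectral_init_mc(M * mask, mask, p, r)
print(np.linalg.norm(X0 @ X0.T - M) / np.linalg.norm(M))  # rough initialization error
```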
8 Analysis for blind deconvolution
In this section, we instantiate the general recipe presented in Section 5 to blind deconvolution and prove Theorem 3. Without loss of generality, we assume throughout that $\|h^\natural\|_2 = \|x^\natural\|_2 = 1$.
Before presenting the analysis, we first gather some simple facts about the empirical loss function in (32). Recall the definition of $z$ in (33), and for notational simplicity, we write $f(z) = f(h, x)$. Since $z$ is complex-valued, we need to resort to Wirtinger calculus; see [CLS15, Section 6] for a brief introduction. The Wirtinger gradients of (32) with respect to $h$ and $x$ are given respectively by
$$\nabla_h f(z) = \nabla_h f(h,x) = \sum_{j=1}^m \big(b_j^* h x^* a_j - y_j\big)\, b_j a_j^* x, \tag{77}$$
$$\nabla_x f(z) = \nabla_x f(h,x) = \sum_{j=1}^m \overline{\big(b_j^* h x^* a_j - y_j\big)}\, a_j b_j^* h. \tag{78}$$
It is worth noting that the formal Wirtinger gradient contains $\nabla_{\bar h} f(h,x)$ and $\nabla_{\bar x} f(h,x)$ as well. Nevertheless, since $f(h,x)$ is a real-valued function, the following identities always hold:
$$\nabla_{\bar h} f(h,x) = \overline{\nabla_h f(h,x)} \qquad\text{and}\qquad \nabla_{\bar x} f(h,x) = \overline{\nabla_x f(h,x)}.$$
In light of these observations, one often omits the gradient with respect to the conjugates; correspondingly, the gradient update rule (35) can be written as
$$h^{t+1} = h^t - \frac{\eta}{\|x^t\|_2^2}\sum_{j=1}^m \big(b_j^* h^t x^{t*} a_j - y_j\big)\, b_j a_j^* x^t, \tag{79a}$$
$$x^{t+1} = x^t - \frac{\eta}{\|h^t\|_2^2}\sum_{j=1}^m \overline{\big(b_j^* h^t x^{t*} a_j - y_j\big)}\, a_j b_j^* h^t. \tag{79b}$$
We can also compute the Wirtinger Hessian of $f(z)$ as follows:
$$\nabla^2 f(z) = \begin{bmatrix} A & B \\ B^* & A \end{bmatrix}, \tag{80}$$
where
$$A = \begin{bmatrix} \sum_{j=1}^m |a_j^* x|^2\, b_j b_j^* & \sum_{j=1}^m \big(b_j^* h x^* a_j - y_j\big)\, b_j a_j^* \\ \sum_{j=1}^m \overline{\big(b_j^* h x^* a_j - y_j\big)}\, a_j b_j^* & \sum_{j=1}^m |b_j^* h|^2\, a_j a_j^* \end{bmatrix} \in \mathbb{C}^{2K\times 2K};$$
$$B = \begin{bmatrix} 0 & \sum_{j=1}^m b_j b_j^* h\, \big(a_j a_j^* x\big)^\top \\ \sum_{j=1}^m a_j a_j^* x\, \big(b_j b_j^* h\big)^\top & 0 \end{bmatrix} \in \mathbb{C}^{2K\times 2K}.$$
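To make the update rule concrete, here is a minimal Python sketch (an assumption-laden illustration, not the paper's code: the design vectors are stored as rows of complex arrays `a` and `b`, and all names are illustrative) that evaluates the Wirtinger gradients (77)-(78) and performs one step of (79a)-(79b).

```python
import numpy as np

def wirtinger_step(h, x, a, b, y, eta):
    """One vanilla gradient step (79a)-(79b) for blind deconvolution.

    a, b : (m, K) complex arrays whose rows are the design vectors a_j, b_j
    y    : (m,) measurements y_j = b_j^* h x^* a_j (possibly noisy)
    Gradients follow (77)-(78); the two blocks use step sizes eta/||x||^2 and
    eta/||h||^2 as in the update rule above.
    """
    bh = b.conj() @ h                    # b_j^* h
    xa = a @ x.conj()                    # x^* a_j
    res = bh * xa - y                    # residuals b_j^* h x^* a_j - y_j
    grad_h = (res * (a.conj() @ x)) @ b  # (77): sum_j res_j (a_j^* x) b_j
    grad_x = (res.conj() * bh) @ a       # (78): sum_j conj(res_j) (b_j^* h) a_j
    h_new = h - eta / np.linalg.norm(x) ** 2 * grad_h
    x_new = x - eta / np.linalg.norm(h) ** 2 * grad_x
    return h_new, x_new
```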
Last but not least, we say $(h_1, x_1)$ is aligned with $(h_2, x_2)$ if the following holds:
$$\|h_1 - h_2\|_2^2 + \|x_1 - x_2\|_2^2 = \min_{\alpha\in\mathbb{C}}\Big\{\Big\|\tfrac{1}{\overline{\alpha}}h_1 - h_2\Big\|_2^2 + \|\alpha x_1 - x_2\|_2^2\Big\}.$$
To simplify notations, define $\widetilde z^t$ as
$$\widetilde z^t = \begin{bmatrix}\widetilde h^t \\ \widetilde x^t\end{bmatrix} := \begin{bmatrix}\tfrac{1}{\overline{\alpha^t}}\,h^t \\ \alpha^t x^t\end{bmatrix} \tag{81}$$
with the alignment parameter $\alpha^t$ given in (38). Then we can see that $\widetilde z^t$ is aligned with $z^\natural$ and
$$\mathrm{dist}\big(z^t, z^\natural\big) = \mathrm{dist}\big(\widetilde z^t, z^\natural\big) = \big\|\widetilde z^t - z^\natural\big\|_2.$$
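The alignment parameter in (81) has no simple closed form, but it is a small two-dimensional (modulus and phase) optimization and is easy to approximate numerically. The sketch below, which assumes SciPy is available and uses illustrative names, computes $\mathrm{dist}(\cdot,\cdot)$ by a short search over $\alpha$.

```python
import numpy as np
from scipy.optimize import minimize

def dist_bd(h1, x1, h2, x2):
    """dist((h1,x1),(h2,x2)) under the alignment used in (81): minimize over
    complex alpha of ||h1/conj(alpha) - h2||^2 + ||alpha*x1 - x2||^2.
    A numerical sketch only, parameterizing alpha = exp(s) * exp(i*theta)."""
    def obj(p):
        alpha = np.exp(p[0]) * np.exp(1j * p[1])
        return (np.linalg.norm(h1 / np.conj(alpha) - h2) ** 2
                + np.linalg.norm(alpha * x1 - x2) ** 2)
    best = min((minimize(obj, np.array([0.0, th]), method="Nelder-Mead")
                for th in np.linspace(-np.pi, np.pi, 8, endpoint=False)),
               key=lambda r: r.fun)
    return np.sqrt(best.fun)
```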
8.1 Step 1: characterizing local geometry in the RIC
8.1.1 Local geometry
The first step is to characterize the region of incoherence and contraction (RIC), where the empirical loss function enjoys restricted strong convexity and smoothness properties. To this end, we have the following lemma.
Lemma 14 (Restricted strong convexity and smoothness for blind deconvolution). Let $c > 0$ be a sufficiently small constant and
$$\delta = c/\log^2 m.$$
Suppose the sample size satisfies $m \ge c_0\mu^2 K\log^9 m$ for some sufficiently large constant $c_0 > 0$. Then with probability at least $1 - O\big(m^{-10} + e^{-K}\log m\big)$, the Wirtinger Hessian $\nabla^2 f(z)$ obeys
$$u^*\big[D\,\nabla^2 f(z) + \nabla^2 f(z)\,D\big]u \ge \tfrac14\,\|u\|_2^2 \qquad\text{and}\qquad \big\|\nabla^2 f(z)\big\| \le 3$$
simultaneously for all
$$z = \begin{bmatrix} h \\ x \end{bmatrix} \qquad\text{and}\qquad u = \begin{bmatrix} h_1 - h_2 \\ x_1 - x_2 \\ \overline{h_1 - h_2} \\ \overline{x_1 - x_2} \end{bmatrix},$$
where $z$ satisfies
$$\max\big\{\|h - h^\natural\|_2,\ \|x - x^\natural\|_2\big\} \le \delta; \tag{82a}$$
$$\max_{1\le j\le m}\big|a_j^*\big(x - x^\natural\big)\big| \le 2C_3\,\frac{1}{\log^{3/2} m}; \tag{82b}$$
$$\max_{1\le j\le m}\big|b_j^* h\big| \le 2C_4\,\frac{\mu}{\sqrt m}\log^2 m; \tag{82c}$$
$(h_1, x_1)$ is aligned with $(h_2, x_2)$, and they satisfy
$$\max\big\{\|h_1 - h^\natural\|_2,\ \|h_2 - h^\natural\|_2,\ \|x_1 - x^\natural\|_2,\ \|x_2 - x^\natural\|_2\big\} \le \delta; \tag{83}$$
and finally, $D = \mathrm{diag}\big(\gamma_1 I_K,\ \gamma_2 I_K,\ \gamma_1 I_K,\ \gamma_2 I_K\big)$ satisfies, for $\gamma_1, \gamma_2 \in \mathbb{R}$,
$$\max\big\{|\gamma_1 - 1|,\ |\gamma_2 - 1|\big\} \le \delta. \tag{84}$$
Here, $C_3, C_4 > 0$ are numerical constants.
Proof. See Appendix C.1.
Lemma 14 characterizes the restricted strong convexity and smoothness of the loss function used in blind
deconvolution. To the best of our knowledge, this provides the first characterization regarding geometric
properties of the Hessian matrix for blind deconvolution. A few interpretations are in order.
• The conditions (82) specify the region of incoherence and contraction (RIC). In particular, (82a) specifies
a neighborhood that is close to the ground truth in `2 norm, and (82b) and (82c) specify the incoherence
region with respect to the sensing vectors {aj } and {bj }, respectively.
• Similar to matrix completion, the Hessian matrix is rank-deficient even at the population level. Consequently, we resort to a restricted form of strong convexity by focusing on certain directions. More
specifically, these directions can be viewed as the difference between two pre-aligned points that are not
far from the truth, which is characterized by (83).
• Finally, the diagonal matrix D accounts for scaling factors that are not too far from 1 (see (84)), which
allows us to account for different step sizes employed for h and x.
8.1.2 Error contraction
The restricted strong convexity and smoothness allow us to establish the contraction of the error measured in terms of $\mathrm{dist}(\cdot, z^\natural)$ as defined in (34), as long as the iterates stay in the RIC.
Lemma 15. Suppose the number of measurements satisfies $m \gg \mu^2 K\log^9 m$, and the step size $\eta > 0$ is some sufficiently small constant. Then
$$\mathrm{dist}\big(z^{t+1}, z^\natural\big) \le (1 - \eta/16)\,\mathrm{dist}\big(z^t, z^\natural\big),$$
provided that
$$\mathrm{dist}\big(z^t, z^\natural\big) \le \xi, \tag{85a}$$
$$\max_{1\le j\le m}\big|a_j^*\big(\widetilde x^t - x^\natural\big)\big| \le C_3\,\frac{1}{\log^{1.5} m}, \tag{85b}$$
$$\max_{1\le j\le m}\big|b_j^*\widetilde h^t\big| \le C_4\,\frac{\mu}{\sqrt m}\log^2 m \tag{85c}$$
for some constants $C_3, C_4 > 0$. Here, $\widetilde h^t$ and $\widetilde x^t$ are defined in (81), and $\xi \asymp 1/\log^2 m$.
Proof. See Appendix C.2.
As a result, if $z^t$ satisfies the condition (85) for all $0 \le t \le T$, then the union bound gives
$$\mathrm{dist}\big(z^t, z^\natural\big) \le \rho\,\mathrm{dist}\big(z^{t-1}, z^\natural\big) \le \rho^t\,\mathrm{dist}\big(z^0, z^\natural\big) \le \rho^t c_1, \qquad 0 < t \le T,$$
where $\rho := 1 - \eta/16$. Furthermore, similar to the case of phase retrieval (i.e. Lemma 3), as soon as we demonstrate that the conditions (85) hold for all $0 \le t \le m$, then Theorem 3 holds true. The proof of this claim is exactly the same as for Lemma 3, and is thus omitted for conciseness. In what follows, we focus on establishing (85) for all $0 \le t \le m$.
Before concluding this subsection, we make note of another important result that concerns the alignment parameter $\alpha^t$, which will be useful in the subsequent analysis. Specifically, the alignment parameter sequence $\{\alpha^t\}$ converges linearly to a constant whose magnitude is fairly close to 1, as long as the two initial vectors $h^0$ and $x^0$ have similar $\ell_2$ norms and are close to the truth. Given that $\alpha^t$ determines the global scaling of the iterates, this reveals rapid convergence of both $\|h^t\|_2$ and $\|x^t\|_2$, which explains why there is no need to impose extra terms to regularize the $\ell_2$ norm as employed in [LLSW16, HH17].
Lemma 16. Suppose that $m \gg 1$. The following two claims hold true.
• If $\big||\alpha^t| - 1\big| \le 1/2$ and $\mathrm{dist}(z^t, z^\natural) \le C_1/\log^2 m$, then
$$\Big|\frac{\alpha^{t+1}}{\alpha^t} - 1\Big| \le c\,\mathrm{dist}\big(z^t, z^\natural\big) \le \frac{cC_1}{\log^2 m}$$
for some absolute constant $c > 0$;
• If $\big||\alpha^0| - 1\big| \le 1/4$ and $\mathrm{dist}(z^s, z^\natural) \le C_1(1 - \eta/16)^s/\log^2 m$ for all $0 \le s \le t$, then one has
$$\big||\alpha^{s+1}| - 1\big| \le 1/2, \qquad 0 \le s \le t.$$
Proof. See Appendix C.2.
The initial condition $\big||\alpha^0| - 1\big| < 1/4$ will be guaranteed to hold with high probability by Lemma 19.
8.2 Step 2: introducing the leave-one-out sequences
As demonstrated by the assumptions in Lemma 15, the key is to show that the whole trajectory lies in the region specified by (85a)-(85c). Once again, the difficulty lies in the statistical dependency between the iterates $\{z^t\}$ and the measurement vectors $\{a_j\}$. We follow the general recipe and introduce the leave-one-out sequences, denoted by $\{h^{t,(l)}, x^{t,(l)}\}_{t\ge 0}$ for each $1 \le l \le m$. Specifically, $\{h^{t,(l)}, x^{t,(l)}\}_{t\ge 0}$ is the gradient sequence operating on the loss function
$$f^{(l)}(h, x) := \sum_{j: j\neq l}\Big|b_j^*\big(hx^* - h^\natural x^{\natural *}\big)a_j\Big|^2. \tag{86}$$
The whole sequence is constructed by running gradient descent with spectral initialization on the leave-one-out loss (86). The precise description is supplied in Algorithm 6.
For notational simplicity, we denote $z^{t,(l)} = \begin{bmatrix} h^{t,(l)} \\ x^{t,(l)} \end{bmatrix}$ and use $f(z^{t,(l)}) = f(h^{t,(l)}, x^{t,(l)})$ interchangeably.
Define similarly the alignment parameters
$$\alpha^{t,(l)} := \arg\min_{\alpha\in\mathbb{C}}\ \Big\|\tfrac{1}{\overline{\alpha}}h^{t,(l)} - h^\natural\Big\|_2^2 + \big\|\alpha x^{t,(l)} - x^\natural\big\|_2^2, \tag{87}$$
and denote $\widetilde z^{t,(l)} = \begin{bmatrix}\widetilde h^{t,(l)} \\ \widetilde x^{t,(l)}\end{bmatrix}$, where
$$\widetilde h^{t,(l)} = \frac{1}{\overline{\alpha^{t,(l)}}}\,h^{t,(l)} \qquad\text{and}\qquad \widetilde x^{t,(l)} = \alpha^{t,(l)}\,x^{t,(l)}. \tag{88}$$
Algorithm 6 The $l$th leave-one-out sequence for blind deconvolution
Input: $\{a_j\}_{1\le j\le m, j\neq l}$, $\{b_j\}_{1\le j\le m, j\neq l}$ and $\{y_j\}_{1\le j\le m, j\neq l}$.
Spectral initialization: Let $\sigma_1(M^{(l)})$, $\check h^{0,(l)}$ and $\check x^{0,(l)}$ be the leading singular value, left and right singular vectors of
$$M^{(l)} := \sum_{j: j\neq l} y_j\, b_j a_j^*,$$
respectively. Set $h^{0,(l)} = \sqrt{\sigma_1(M^{(l)})}\,\check h^{0,(l)}$ and $x^{0,(l)} = \sqrt{\sigma_1(M^{(l)})}\,\check x^{0,(l)}$.
Gradient updates: for $t = 0, 1, 2, \ldots, T-1$ do
$$\begin{bmatrix} h^{t+1,(l)} \\ x^{t+1,(l)} \end{bmatrix} = \begin{bmatrix} h^{t,(l)} \\ x^{t,(l)} \end{bmatrix} - \eta\begin{bmatrix} \frac{1}{\|x^{t,(l)}\|_2^2}\nabla_h f^{(l)}\big(h^{t,(l)}, x^{t,(l)}\big) \\ \frac{1}{\|h^{t,(l)}\|_2^2}\nabla_x f^{(l)}\big(h^{t,(l)}, x^{t,(l)}\big) \end{bmatrix}. \tag{89}$$
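A compact Python rendering of Algorithm 6 is given below (a sketch under the stated model, with illustrative names): it removes the $l$th measurement, forms $M^{(l)}$, takes its leading singular triplet for the initialization, and then runs the gradient updates (89).

```python
import numpy as np

def leave_one_out_sequence(a, b, y, l, eta, T):
    """Algorithm 6 sketch: the l-th leave-one-out sequence for blind deconvolution.

    a, b : (m, K) complex arrays of design vectors (rows), y : (m,) measurements.
    Runs spectral initialization and T gradient steps on the loss f^{(l)} in (86).
    """
    keep = np.arange(len(y)) != l
    a_, b_, y_ = a[keep], b[keep], y[keep]

    # spectral initialization: leading singular triplet of M^{(l)} = sum_{j!=l} y_j b_j a_j^*
    M = (b_ * y_[:, None]).T @ a_.conj()
    U, s, Vh = np.linalg.svd(M)
    h = np.sqrt(s[0]) * U[:, 0]
    x = np.sqrt(s[0]) * Vh[0].conj()

    for _ in range(T):                      # gradient updates (89)
        bh = b_.conj() @ h                  # b_j^* h
        xa = a_ @ x.conj()                  # x^* a_j
        res = bh * xa - y_
        grad_h = (res * (a_.conj() @ x)) @ b_
        grad_x = (res.conj() * bh) @ a_
        h, x = (h - eta / np.linalg.norm(x) ** 2 * grad_h,
                x - eta / np.linalg.norm(h) ** 2 * grad_x)
    return h, x
```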
8.3 Step 3: establishing the incoherence condition by induction
As usual, we continue the proof in an inductive manner. For clarity of presentation, we list below the set of induction hypotheses underlying our analysis:
$$\mathrm{dist}\big(z^t, z^\natural\big) \le C_1\,\frac{1}{\log^2 m}, \tag{90a}$$
$$\max_{1\le l\le m}\ \mathrm{dist}\big(z^{t,(l)}, \widetilde z^t\big) \le C_2\,\frac{\mu}{\sqrt m}\sqrt{\frac{\mu^2 K\log^9 m}{m}}, \tag{90b}$$
$$\max_{1\le l\le m}\big|a_l^*\big(\widetilde x^t - x^\natural\big)\big| \le C_3\,\frac{1}{\log^{1.5} m}, \tag{90c}$$
$$\max_{1\le l\le m}\big|b_l^*\widetilde h^t\big| \le C_4\,\frac{\mu}{\sqrt m}\log^2 m, \tag{90d}$$
where $\widetilde h^t$, $\widetilde x^t$ and $\widetilde z^t$ are defined in (81). Here, $C_1, C_3 > 0$ are some sufficiently small constants, while $C_2, C_4 > 0$ are some sufficiently large constants. We aim to show that if these hypotheses (90) hold up to the $t$th iteration, then the same would hold for the $(t+1)$th iteration with exceedingly high probability (e.g. $1 - O(m^{-10})$). The first hypothesis (90a) has already been established in Lemma 15, and hence the rest of this section focuses on establishing the remaining three. To justify the incoherence hypotheses (90c) and (90d) for the $(t+1)$th iteration, we need to leverage the nice properties of the leave-one-out sequences, and establish (90b) first. In the sequel, we follow the steps suggested in the general recipe.
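When tracking these induction hypotheses empirically, the two incoherence quantities in (90c) and (90d) are straightforward to evaluate; a minimal sketch (illustrative names, assuming the aligned iterates are already available) is given below.

```python
import numpy as np

def incoherence_measures(h_tilde, x_tilde, x_star, a, b):
    """Evaluate the incoherence quantities of (90c)-(90d) for aligned iterates.

    a, b : (m, K) complex arrays with rows a_l, b_l.
    Returns (max_l |a_l^* (x_tilde - x_star)|, max_l |b_l^* h_tilde|), which the
    analysis compares against C3 / log^{1.5} m and C4 * mu * log^2 m / sqrt(m).
    """
    inc_a = np.abs(a.conj() @ (x_tilde - x_star)).max()
    inc_b = np.abs(b.conj() @ h_tilde).max()
    return inc_a, inc_b
```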
• Step 3(a): proximity between the original and the leave-one-out iterates. We first justify the hypothesis (90b) for the $(t+1)$th iteration via the following lemma.
Lemma 17. Suppose the sample complexity obeys $m \gg \mu^2 K\log^9 m$. There exists a constant $c > 0$ such that under the hypotheses (90a)-(90d) for the $t$th iteration, with probability at least $1 - O(m^{-10} + me^{-cK})$ one has
$$\max_{1\le l\le m}\ \mathrm{dist}\big(z^{t+1,(l)}, \widetilde z^{t+1}\big) \le C_2\,\frac{\mu}{\sqrt m}\sqrt{\frac{\mu^2 K\log^9 m}{m}}$$
$$\text{and}\qquad \max_{1\le l\le m}\ \big\|\widetilde z^{t+1,(l)} - \widetilde z^{t+1}\big\|_2 \lesssim C_2\,\frac{\mu}{\sqrt m}\sqrt{\frac{\mu^2 K\log^9 m}{m}},$$
provided that the step size $\eta > 0$ is some sufficiently small constant.
Proof. As usual, this result follows from the restricted strong convexity, which forces the distance between the two sequences of interest to be contractive. See Appendix C.3.
• Step 3(b): incoherence of the leave-one-out iterates. Given that $X^{t+1,(l)}$ is close to $X^{t+1}$ in the matrix completion case, the analogous step here shows that the leave-one-out iterate $\widetilde x^{t+1,(l)}$ (which is independent of $a_l$) is incoherent w.r.t. $a_l$ in the sense that
$$\big|a_l^*\big(\widetilde x^{t+1,(l)} - x^\natural\big)\big| \le 10C_1\,\frac{1}{\log^{3/2} m} \tag{91}$$
with probability exceeding $1 - O\big(m^{-10} + e^{-K}\log m\big)$. To see why, use the statistical independence and the standard Gaussian concentration inequality to show that
$$\max_{1\le l\le m}\big|a_l^*\big(\widetilde x^{t+1,(l)} - x^\natural\big)\big| \le 5\sqrt{\log m}\ \max_{1\le l\le m}\big\|\widetilde x^{t+1,(l)} - x^\natural\big\|_2$$
with probability exceeding $1 - O(m^{-10})$. It then follows from the triangle inequality that
$$\big\|\widetilde x^{t+1,(l)} - x^\natural\big\|_2 \le \big\|\widetilde x^{t+1,(l)} - \widetilde x^{t+1}\big\|_2 + \big\|\widetilde x^{t+1} - x^\natural\big\|_2 \overset{\text{(i)}}{\le} CC_2\,\frac{\mu}{\sqrt m}\sqrt{\frac{\mu^2 K\log^9 m}{m}} + C_1\,\frac{1}{\log^2 m} \overset{\text{(ii)}}{\le} 2C_1\,\frac{1}{\log^2 m},$$
where (i) follows from Lemmas 15 and 17, and (ii) holds as soon as $m \gg \mu^2\sqrt{K}\log^{13/2} m$. Combining the preceding two bounds establishes (91).
• Step 3(c): combining the bounds to show incoherence of $x^{t+1}$ w.r.t. $\{a_l\}$. The above bounds immediately allow us to conclude that
$$\max_{1\le l\le m}\big|a_l^*\big(\widetilde x^{t+1} - x^\natural\big)\big| \le C_3\,\frac{1}{\log^{3/2} m}$$
with probability at least $1 - O\big(m^{-10} + e^{-K}\log m\big)$, which is exactly the hypothesis (90c) for the $(t+1)$th iteration. Specifically, for each $1 \le l \le m$, the triangle inequality yields
$$\big|a_l^*\big(\widetilde x^{t+1} - x^\natural\big)\big| \le \big|a_l^*\big(\widetilde x^{t+1} - \widetilde x^{t+1,(l)}\big)\big| + \big|a_l^*\big(\widetilde x^{t+1,(l)} - x^\natural\big)\big|$$
$$\overset{\text{(i)}}{\le} \|a_l\|_2\,\big\|\widetilde x^{t+1} - \widetilde x^{t+1,(l)}\big\|_2 + \big|a_l^*\big(\widetilde x^{t+1,(l)} - x^\natural\big)\big| \overset{\text{(ii)}}{\le} 3\sqrt K\cdot CC_2\,\frac{\mu}{\sqrt m}\sqrt{\frac{\mu^2 K\log^9 m}{m}} + 10C_1\,\frac{1}{\log^{3/2} m} \overset{\text{(iii)}}{\le} C_3\,\frac{1}{\log^{3/2} m}.$$
Here (i) follows from Cauchy-Schwarz, (ii) is a consequence of (190), Lemma 17 and the bound (91), and the last inequality holds as long as $m \gg \mu^2 K\log^6 m$ and $C_3 \ge 11C_1$.
• Step 3(d): incoherence of $h^{t+1}$ w.r.t. $\{b_l\}$. It remains to justify that $h^{t+1}$ is also incoherent w.r.t. its associated design vectors $\{b_l\}$. The proof of this step, however, is much more involved and challenging, due to the deterministic nature of the $b_l$'s. As a result, we need to "propagate" the randomness brought about by $\{a_l\}$ to $h^{t+1}$ in order to facilitate the analysis. The result is summarized as follows.
Lemma 18. Suppose that the sample complexity obeys $m \gg \mu^2 K\log^9 m$. Under the inductive hypotheses (90a)-(90d) for the $t$th iteration, with probability exceeding $1 - O(m^{-10})$ we have
$$\max_{1\le l\le m}\big|b_l^*\widetilde h^{t+1}\big| \le C_4\,\frac{\mu}{\sqrt m}\log^2 m$$
as long as $C_4$ is sufficiently large, and $\eta > 0$ is taken to be some sufficiently small constant.
Proof. The key idea is to divide $\{1, \cdots, m\}$ into consecutive bins each of size $\mathrm{poly}\log(m)$, and to exploit the randomness (namely, the randomness from $a_l$) within each bin. This binning idea is crucial in ensuring that the incoherence measure of interest does not blow up as $t$ increases. See Appendix C.4.
With these steps in place, we conclude the proof of Theorem 3 via induction and the union bound.
8.4 The base case: spectral initialization
In order to finish the induction steps, we still need to justify the induction hypotheses for the base case; namely, we need to show that the spectral initializations $z^0$ and $\{z^{0,(l)}\}_{1\le l\le m}$ satisfy the induction hypotheses (90) at $t = 0$.
To start with, the initializations are sufficiently close to the truth when measured by the $\ell_2$ norm, as summarized by the following lemma.
Lemma 19. Fix any small constant $\xi > 0$. Suppose the sample size obeys $m \ge C\mu^2 K\log^2 m/\xi^2$ for some sufficiently large constant $C > 0$. Then with probability at least $1 - O(m^{-10})$, we have
$$\min_{\alpha\in\mathbb{C},\,|\alpha|=1}\ \big\{\|\alpha h^0 - h^\natural\|_2 + \|\alpha x^0 - x^\natural\|_2\big\} \le \xi \tag{92}$$
and
$$\min_{\alpha\in\mathbb{C},\,|\alpha|=1}\ \big\{\|\alpha h^{0,(l)} - h^\natural\|_2 + \|\alpha x^{0,(l)} - x^\natural\|_2\big\} \le \xi, \qquad 1 \le l \le m, \tag{93}$$
and $\big||\alpha^0| - 1\big| \le 1/4$.
Proof. This follows from Wedin's $\sin\Theta$ theorem [Wed72] and [LLSW16, Lemma 5.20]. See Appendix C.5.
From the definition of $\mathrm{dist}(\cdot,\cdot)$ (cf. (34)), we immediately have
$$\mathrm{dist}\big(z^0, z^\natural\big) = \min_{\alpha\in\mathbb{C}}\sqrt{\Big\|\tfrac{1}{\overline{\alpha}}h^0 - h^\natural\Big\|_2^2 + \big\|\alpha x^0 - x^\natural\big\|_2^2} \overset{\text{(i)}}{\le} \min_{\alpha\in\mathbb{C}}\Big\{\Big\|\tfrac{1}{\overline{\alpha}}h^0 - h^\natural\Big\|_2 + \big\|\alpha x^0 - x^\natural\big\|_2\Big\} \overset{\text{(ii)}}{\le} \min_{\alpha\in\mathbb{C},\,|\alpha|=1}\big\{\|\alpha h^0 - h^\natural\|_2 + \|\alpha x^0 - x^\natural\|_2\big\} \overset{\text{(iii)}}{\le} C_1\,\frac{1}{\log^2 m}, \tag{94}$$
as long as $m \ge C\mu^2 K\log^6 m$ for some sufficiently large constant $C > 0$. Here (i) follows from the elementary inequality $a^2 + b^2 \le (a + b)^2$ for positive $a$ and $b$, (ii) holds since the feasible set of the latter minimization is strictly smaller, and (iii) follows directly from Lemma 19. This finishes the proof of (90a) for $t = 0$. Similarly, with high probability we have
$$\mathrm{dist}\big(z^{0,(l)}, z^\natural\big) \le \min_{\alpha\in\mathbb{C},\,|\alpha|=1}\big\{\|\alpha h^{0,(l)} - h^\natural\|_2 + \|\alpha x^{0,(l)} - x^\natural\|_2\big\} \lesssim \frac{1}{\log^2 m}, \qquad 1 \le l \le m. \tag{95}$$
Next, when properly aligned, the true initial estimate $z^0$ and the leave-one-out estimate $z^{0,(l)}$ are expected to be sufficiently close, as claimed by the following lemma. Along the way, we show that $h^0$ is incoherent w.r.t. the sampling vectors $\{b_l\}$. This establishes (90b) and (90d) for $t = 0$.
Lemma 20. Suppose that $m \gg \mu^2 K\log^3 m$. Then with probability at least $1 - O(m^{-10})$, one has
$$\max_{1\le l\le m}\ \mathrm{dist}\big(z^{0,(l)}, \widetilde z^0\big) \le C_2\,\frac{\mu}{\sqrt m}\sqrt{\frac{\mu^2 K\log^5 m}{m}} \tag{96}$$
and
$$\max_{1\le l\le m}\big|b_l^*\widetilde h^0\big| \le C_4\,\frac{\mu\log^2 m}{\sqrt m}. \tag{97}$$
Proof. The key is to establish that $\mathrm{dist}(z^{0,(l)}, \widetilde z^0)$ can be upper bounded by some linear scaling of $|b_l^*\widetilde h^0|$, and vice versa. This allows us to derive bounds simultaneously for both quantities. See Appendix C.6.
Finally, we establish (90c) regarding the incoherence of $x^0$ with respect to the design vectors $\{a_l\}$.
Lemma 21. Suppose that $m \gg \mu^2 K\log^6 m$. Then with probability exceeding $1 - O(m^{-10})$, we have
$$\max_{1\le l\le m}\big|a_l^*\big(\widetilde x^0 - x^\natural\big)\big| \le C_3\,\frac{1}{\log^{1.5} m}.$$
Proof. See Appendix C.7.
9 Discussions
This paper showcases an important phenomenon in nonconvex optimization: even without explicit enforcement of regularization, the vanilla form of gradient descent effectively achieves implicit regularization for a
large family of statistical estimation problems. We believe this phenomenon arises in problems far beyond
the three cases studied herein, and our results are initial steps towards understanding this fundamental
phenomenon. That being said, there are numerous avenues open for future investigation, and we conclude
the paper with a few of them.
• Improving sample complexity. In the current paper, the required sample complexity $O(\mu^3 r^3 n\log^3 n)$ for matrix completion is sub-optimal when the rank $r$ of the underlying matrix is large. While this allows us to achieve a dimension-free iteration complexity, it is slightly higher than the sample complexity derived for regularized gradient descent in [CW15]. We expect our results to continue to hold under the lower sample complexity $O(\mu^2 r^2 n\log n)$, but this calls for a more refined analysis (e.g. a generic chaining argument).
• Leave-one-out tricks for more general designs. So far our focus is on independent designs, including
the i.i.d. Gaussian design adopted in phase retrieval and partially in blind deconvolution, as well as the
independent sampling mechanism in matrix completion. Such independence property creates some sort
of “statistical homogeneity”, for which the leave-one-out argument works beautifully. It remains unclear
how to generalize such leave-one-out tricks for more general designs (e.g. more general sampling patterns
in matrix completion and more structured Fourier designs in phase retrieval and blind deconvolution). In
fact, the readers can already get a flavor of this issue in the analysis of blind deconvolution, where the
Fourier design vectors require much more delicate treatments than purely Gaussian designs.
• Uniform stability. The leave-one-out perturbation argument is established upon a basic fact: when we
exclude one sample from consideration, the resulting estimates/predictions do not deviate much from the
original ones. This leave-one-out stability bears similarity to the notion of uniform stability studied in
statistical learning theory [BE02, LLNT17]. We expect our analysis framework to be helpful for analyzing
other learning algorithms that are uniformly stable.
• Constrained optimization. We restrict ourselves to study empirical risk minimization problems in an unconstrained setting. It will be interesting to explore if such implicit regularization still holds for constrained
nonconvex problems.
• Other iterative methods. Iterative methods other than gradient descent have been extensively studied in the
nonconvex optimization literature, including alternating minimization, proximal methods, etc. Identifying
the implicit regularization feature for a broader class of iterative algorithms is another direction worth
exploring.
• Connections to deep learning? We have focused on nonlinear systems that are bilinear or quadratic in this paper. Deep learning formulations/architectures, which are highly nonlinear, are notorious for their daunting nonconvex geometry. However, iterative methods including stochastic gradient descent have enjoyed enormous practical success in learning neural networks (e.g. [ZSJ+ 17, SJL17]), even when the architecture is significantly over-parameterized without explicit regularization. We hope the message conveyed in this paper for several simple statistical models can shed light on why simple forms of gradient descent and their variants work so well in learning complicated neural networks.
Acknowledgements
The work of Y. Chi is supported in part by the grants AFOSR FA9550-15-1-0205, ONR N00014-15-1-2387,
NSF CCF-1527456, ECCS-1650449 and CCF-1704245. Y. Chen would like to thank Yudong Chen for
inspiring discussions about matrix completion.
References
[AAH17]
A. Aghasi, A. Ahmed, and P. Hand. Branchhull: Convex bilinear inversion from the entrywise
product of signals with known signs. arXiv preprint arXiv:1702.04342, 2017.
[AFWZ17] E. Abbe, J. Fan, K. Wang, and Y. Zhong. Entrywise eigenvector analysis of random matrices
with low expected rank. arXiv preprint arXiv:1709.09565, 2017.
[ARR14]
A. Ahmed, B. Recht, and J. Romberg. Blind deconvolution using convex programming. IEEE
Transactions on Information Theory, 60(3):1711–1732, 2014.
[AS08]
N. Alon and J. H. Spencer. The Probabilistic Method (3rd Edition). Wiley, 2008.
[BE02]
O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002.
[BEB17]
T. Bendory, Y. C. Eldar, and N. Boumal. Non-convex phase retrieval from STFT measurements.
IEEE Transactions on Information Theory, 2017.
[BNS16]
S. Bhojanapalli, B. Neyshabur, and N. Srebro. Global optimality of local search for low rank
matrix recovery. In Advances in Neural Information Processing Systems, pages 3873–3881, 2016.
[BR17]
S. Bahmani and J. Romberg. Phase retrieval meets statistical learning theory: A flexible convex
relaxation. In Artificial Intelligence and Statistics, pages 252–260, 2017.
[Bub15]
S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in
Machine Learning, 8(3-4):231–357, 2015.
[CC16]
Y. Chen and E. Candès. The projected power method: An efficient algorithm for joint alignment
from pairwise differences. arXiv preprint arXiv:1609.05820, accepted to Communications on
Pure and Applied Mathematics, 2016.
[CC17]
Y. Chen and E. J. Candès. Solving random quadratic systems of equations is nearly as easy as
solving linear systems. Comm. Pure Appl. Math., 70(5):822–883, 2017.
[CCG15]
Y. Chen, Y. Chi, and A. J. Goldsmith. Exact and stable covariance estimation from quadratic
sampling via convex programming. IEEE Transactions on Information Theory, 61(7):4034–
4059, 2015.
[CESV13]
E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199–225, 2013.
[CFL15]
P. Chen, A. Fannjiang, and G.-R. Liu. Phase retrieval with one or two diffraction patterns by
alternating projections with the null initialization. Journal of Fourier Analysis and Applications,
pages 1–40, 2015.
[CFMW17] Y. Chen, J. Fan, C. Ma, and K. Wang. Spectral method and regularized MLE are both optimal
for top-k ranking. arXiv preprint arXiv:1707.09971, 2017.
[Che15]
Y. Chen. Incoherence-optimal matrix completion. IEEE Transactions on Information Theory,
61(5):2909–2923, 2015.
[Che17]
Y. Chen. Regularized mirror descent: A nonconvex approach for learning mixed probability
distributions. 2017.
[Chi16]
Y. Chi. Guaranteed blind sparse spikes deconvolution via lifting and convex optimization. IEEE
Journal of Selected Topics in Signal Processing, 10(4):782–794, 2016.
[CJ16]
V. Cambareri and L. Jacques. A non-convex blind calibration method for randomised sensing
strategies. arXiv preprint arXiv:1605.02615, 2016.
[CJN17]
Y. Cherapanamjeri, P. Jain, and P. Netrapalli. Thresholding based outlier robust PCA. In
Conference on Learning Theory, pages 593–628, 2017.
[CL14]
E. J. Candès and X. Li. Solving quadratic equations via PhaseLift when there are about as
many equations as unknowns. Foundations of Computational Mathematics, 14(5):1017–1026,
2014.
[CL16]
Y. Chi and Y. M. Lu. Kaczmarz method for solving quadratic equations. IEEE Signal Processing
Letters, 23(9):1183–1187, 2016.
[CLM+ 16]
T. T. Cai, X. Li, Z. Ma, et al. Optimal rates of convergence for noisy sparse phase retrieval via
thresholded Wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016.
[CLMW11] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of
ACM, 58(3):11:1–11:37, Jun 2011.
[CLS15]
E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and
algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, April 2015.
[CLW17]
J.-F. Cai, H. Liu, and Y. Wang. Fast rank one alternating minimization algorithm for phase
retrieval. arXiv preprint arXiv:1708.08751, 2017.
[CR09]
E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of
Computational Mathematics, 9(6):717–772, April 2009.
[CSPW11]
V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence
for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
[CSV13]
E. J. Candès, T. Strohmer, and V. Voroninski. Phaselift: Exact and stable signal recovery
from magnitude measurements via convex programming. Communications on Pure and Applied
Mathematics, 66(8):1017–1026, 2013.
[CT10]
E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE
Transactions on Information Theory, 56(5):2053 –2080, May 2010.
[CW15]
Y. Chen and M. J. Wainwright. Fast low-rank estimation by projected gradient descent: General
statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[CYC14]
Y. Chen, X. Yi, and C. Caramanis. A convex formulation for mixed regression with two
components: Minimax optimal rates. In Conference on Learning Theory, pages 560–604, 2014.
[CZ15]
T. Cai and A. Zhang. ROP: Matrix recovery via rank-one projections. The Annals of Statistics,
43(1):102–138, 2015.
[DK70]
C. Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. iii. SIAM Journal
on Numerical Analysis, 7(1):1–46, 1970.
[Dop00]
F. M. Dopico. A note on sin Θ theorems for singular subspace variations. BIT, 40(2):395–403,
2000.
[DR16]
M. A. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete
observations. IEEE Journal of Selected Topics in Signal Processing, 10(4):608–622, 2016.
[DR17]
J. C. Duchi and F. Ruan. Solving (most) of a set of quadratic equalities: Composite optimization
for robust phase retrieval. arXiv preprint arXiv:1705.02356, 2017.
[DTL17]
O. Dhifallah, C. Thrampoulidis, and Y. M. Lu. Phase retrieval via linear programming: Fundamental limits and algorithmic improvements. arXiv preprint arXiv:1710.05234, 2017.
[EK15]
N. El Karoui. On the impact of predictor geometry on the performance on high-dimensional
ridge-regularized generalized robust regression estimators. Probability Theory and Related
Fields, pages 1–81, 2015.
[EKBB+ 13] N. El Karoui, D. Bean, P. J. Bickel, C. Lim, and B. Yu. On robust regression with high-dimensional predictors. Proceedings of the National Academy of Sciences, 110(36):14557–14562,
2013.
[GLM16]
R. Ge, J. D. Lee, and T. Ma. Matrix completion has no spurious local minimum. In Advances
in Neural Information Processing Systems, pages 2973–2981, 2016.
[GM17]
R. Ge and T. Ma. On the optimization landscape of tensor decompositions. arXiv preprint
arXiv:1706.05598, 2017.
[Gro11]
D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions
on Information Theory, 57(3):1548–1566, March 2011.
[GS16]
T. Goldstein and C. Studer. Phasemax: Convex phase retrieval via basis pursuit. arXiv preprint
arXiv:1610.07531, 2016.
[GWB+ 17] S. Gunasekar, B. Woodworth, S. Bhojanapalli, B. Neyshabur, and N. Srebro. Implicit regularization in matrix factorization. arXiv preprint arXiv:1705.09280, 2017.
[GX16]
B. Gao and Z. Xu.
arXiv:1606.08135, 2016.
Phase retrieval using Gauss-Newton method.
[HH17]
W. Huang and P. Hand. Blind deconvolution by a steepest descent algorithm on a quotient
manifold. arXiv preprint arXiv:1710.03309, 2017.
[Hig92]
N. J. Higham. Estimating the matrix p-norm. Numerische Mathematik, 62(1):539–555, 1992.
[HKZ12]
D. Hsu, S. M. Kakade, and T. Zhang. A tail inequality for quadratic forms of subgaussian
random vectors. Electron. Commun. Probab., 17:no. 52, 6, 2012.
[HMLZ15]
T. Hastie, R. Mazumder, J. D. Lee, and R. Zadeh. Matrix completion and low-rank SVD via
fast alternating least squares. Journal of Machine Learning Research, 16:3367–3402, 2015.
[HV16]
P. Hand and V. Voroninski. An elementary proof of convex phase retrieval in the natural
parameter space via the linear program PhaseMax. arXiv preprint arXiv:1611.03935, 2016.
[HW14]
M. Hardt and M. Wootters. Fast matrix completion without the condition number. Conference
on Learning Theory, pages 638 – 678, 2014.
[JEH15]
K. Jaganathan, Y. C. Eldar, and B. Hassibi. Phase retrieval: An overview of recent developments. arXiv preprint arXiv:1510.07713, 2015.
[JKN16]
C. Jin, S. M. Kakade, and P. Netrapalli. Provable efficient online matrix completion via nonconvex stochastic gradient descent. In Advances in Neural Information Processing Systems,
pages 4520–4528, 2016.
[JM15]
A. Javanmard and A. Montanari. De-biasing the lasso: Optimal sample size for Gaussian
designs. arXiv preprint arXiv:1508.02757, 2015.
[JN15]
P. Jain and P. Netrapalli. Fast exact matrix completion with finite samples. In Conference on
Learning Theory, pages 1007–1034, 2015.
[JNS13]
P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In ACM symposium on Theory of computing, pages 665–674, 2013.
[KD09]
K. Kreutz-Delgado. The complex gradient operator and the CR-calculus. arXiv preprint
arXiv:0906.4835, 2009.
[KLT11]
V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear-norm penalization and optimal rates
for noisy low-rank matrix completion. Ann. Statist., 39(5):2302–2329, 2011.
[KMN+ 16] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836,
2016.
[KMO10a]
R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE
Transactions on Information Theory, 56(6):2980 –2998, June 2010.
[KMO10b]
R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. J. Mach.
Learn. Res., 11:2057–2078, 2010.
[KÖ16]
R. Kolte and A. Özgür. Phase retrieval via incremental truncated Wirtinger flow. arXiv preprint
arXiv:1606.03196, 2016.
[Kol11]
V. Koltchinskii. Oracle inequalities in empirical risk minimization and sparse recovery problems,
volume 2033 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011.
[Lan93]
S. Lang. Real and functional analysis. Springer-Verlag, New York,, 10:11–13, 1993.
[LB10]
K. Lee and Y. Bresler. Admira: Atomic decomposition for minimum rank approximation. IEEE
Transactions on Information Theory, 56(9):4402–4416, 2010.
[LCR16]
J. Lin, R. Camoriano, and L. Rosasco. Generalization properties and implicit regularization
for multiple passes SGM. In International Conference on Machine Learning, pages 2340–2348,
2016.
[LL17]
Y. M. Lu and G. Li. Phase transitions of spectral initialization for high-dimensional nonconvex
estimation. arXiv preprint arXiv:1702.06435, 2017.
[LLB17]
Y. Li, K. Lee, and Y. Bresler. Blind gain and phase calibration for low-dimensional or sparse
signal sensing via power iteration. In Sampling Theory and Applications (SampTA), 2017 International Conference on, pages 119–123. IEEE, 2017.
[LLJB17]
K. Lee, Y. Li, M. Junge, and Y. Bresler. Blind recovery of sparse signals from subsampled
convolution. IEEE Transactions on Information Theory, 63(2):802–821, 2017.
[LLNT17]
T. Liu, G. Lugosi, G. Neu, and D. Tao. Algorithmic stability and hypothesis complexity. arXiv
preprint arXiv:1702.08712, 2017.
[LLSW16]
X. Li, S. Ling, T. Strohmer, and K. Wei. Rapid, robust, and reliable blind deconvolution via
nonconvex optimization. CoRR, abs/1606.04933, 2016.
[LM14]
G. Lerman and T. Maunu. Fast, robust and non-convex subspace recovery. arXiv preprint
arXiv:1406.6145, 2014.
[LS15]
S. Ling and T. Strohmer. Self-calibration and biconvex compressive sensing. Inverse Problems,
31(11):115002, 2015.
[LS17]
S. Ling and T. Strohmer. Regularized gradient descent: A nonconvex recipe for fast joint blind
deconvolution and demixing. arXiv preprint arXiv:1703.08642, 2017.
[LT16]
Q. Li and G. Tang. The nonconvex geometry of low-rank matrix optimizations with general
objective functions. arXiv preprint arXiv:1611.03060, 2016.
[LTR16]
K. Lee, N. Tian, and J. Romberg. Fast and guaranteed blind multichannel deconvolution under
a bilinear system model. arXiv preprint arXiv:1610.06469, 2016.
[LWL+ 16]
X. Li, Z. Wang, J. Lu, R. Arora, J. Haupt, H. Liu, and T. Zhao. Symmetry, saddle points, and
global geometry of nonconvex matrix factorization. arXiv preprint arXiv:1612.09296, 2016.
[Mat90]
R. Mathias. The spectral norm of a nonnegative matrix. Linear Algebra Appl., 139:269–284,
1990.
[Mat93]
R. Mathias. Perturbation bounds for the polar decomposition. SIAM Journal on Matrix Analysis and Applications, 14(2):588–597, 1993.
[MBM16]
S. Mei, Y. Bai, and A. Montanari. The landscape of empirical risk for non-convex losses. arXiv
preprint arXiv:1607.06534, 2016.
[MM17]
M. Mondelli and A. Montanari. Fundamental limits of weak recovery with applications to phase
retrieval. arXiv preprint arXiv:1708.05932, 2017.
[MZL17]
T. Maunu, T. Zhang, and G. Lerman. A well-tempered landscape for non-convex robust subspace recovery. arXiv preprint arXiv:1706.03896, 2017.
[NJS13]
P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. Advances
in Neural Information Processing Systems (NIPS), 2013.
[NNS+ 14]
P. Netrapalli, U. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain. Non-convex robust PCA.
In Advances in Neural Information Processing Systems, pages 1107–1115, 2014.
[NTS14]
B. Neyshabur, R. Tomioka, and N. Srebro. In search of the real inductive bias: On the role of
implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.
[NTSS17]
B. Neyshabur, R. Tomioka, R. Salakhutdinov, and N. Srebro. Geometry of optimization and
implicit regularization in deep learning. arXiv preprint arXiv:1705.03071, 2017.
[NW12]
S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res., 13:1665–1697, 2012.
[PKCS16]
D. Park, A. Kyrillidis, C. Caramanis, and S. Sanghavi. Non-square matrix sensing without
spurious local minima via the Burer-Monteiro approach. arXiv preprint arXiv:1609.03240,
2016.
[QZEW17] Q. Qing, Y. Zhang, Y. Eldar, and J. Wright. Convolutional phase retrieval via gradient descent.
Neural Information Processing Systems, 2017.
[Rec11]
B. Recht. A simpler approach to matrix completion. Journal of Machine Learning Research,
12(Dec):3413–3430, 2011.
[RFP10]
B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix
equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[RV+ 13]
M. Rudelson, R. Vershynin, et al. Hanson-Wright inequality and sub-Gaussian concentration.
Electronic Communications in Probability, 18, 2013.
[SBE14]
Y. Shechtman, A. Beck, and Y. C. Eldar. GESPAR: Efficient phase retrieval of sparse signals.
IEEE Transactions on Signal Processing, 62(4):928–938, 2014.
[SCC17]
P. Sur, Y. Chen, and E. J. Candès. The likelihood ratio test in high-dimensional logistic
regression is asymptotically a rescaled chi-square. arXiv preprint arXiv:1706.01191, 2017.
[Sch92]
B. A. Schmitt. Perturbation bounds for matrix square roots and Pythagorean sums. Linear
Algebra Appl., 174:215–227, 1992.
[SESS11]
Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging
with partially incoherent light via quadratic compressed sensing. Optics express, 19(16), 2011.
[SHS17]
D. Soudry, E. Hoffer, and N. Srebro. The implicit bias of gradient descent on separable data.
arXiv preprint arXiv:1710.10345, 2017.
[SJL17]
M. Soltanolkotabi, A. Javanmard, and J. D. Lee. Theoretical insights into the optimization
landscape of over-parameterized shallow neural networks. arXiv preprint arXiv:1707.04926,
2017.
[SL16]
R. Sun and Z.-Q. Luo. Guaranteed matrix completion via non-convex factorization. IEEE
Transactions on Information Theory, 62(11):6535–6579, 2016.
[Sol14]
M. Soltanolkotabi. Algorithms and Theory for Clustering and Nonconvex Quadratic Programming. PhD thesis, Stanford University, 2014.
[Sol17]
M. Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample
complexity barriers via nonconvex optimization. arXiv preprint arXiv:1702.06175, 2017.
[SQW16]
J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. In Information Theory
(ISIT), 2016 IEEE International Symposium on, pages 2379–2383. IEEE, 2016.
[SQW17]
J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere i: Overview and
the geometric picture. IEEE Transactions on Information Theory, 63(2):853–884, 2017.
[SR15]
P. Schniter and S. Rangan. Compressive phase retrieval via generalized approximate message
passing. IEEE Transactions on Signal Processing, 63(4):1043–1055, 2015.
[SS12]
W. Schudy and M. Sviridenko. Concentration and moment inequalities for polynomials of independent random variables. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium
on Discrete Algorithms, pages 437–446. ACM, New York, 2012.
[Tao12]
T. Tao. Topics in Random Matrix Theory. Graduate Studies in Mathematics. American Mathematical Society, Providence, Rhode Island, 2012.
[tB77]
J. M. F. ten Berge. Orthogonal Procrustes rotation for two or more matrices. Psychometrika,
42(2):267–276, 1977.
[TBS+ 16]
S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear
matrix equations via procrustes flow. In Proceedings of the 33rd International Conference on
International Conference on Machine Learning-Volume 48, pages 964–973. JMLR. org, 2016.
[Tro15a]
J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. In Sampling Theory, a Renaissance, pages 67–101. Springer, 2015.
[Tro15b]
J. A. Tropp. An introduction to matrix concentration inequalities. Found. Trends Mach. Learn.,
8(1-2):1–230, May 2015.
[TV17]
Y. S. Tan and R. Vershynin. Phase retrieval via randomized kaczmarz: Theoretical guarantees.
arXiv preprint arXiv:1706.09993, 2017.
[TW16]
J. Tanner and K. Wei. Low rank matrix completion by alternating steepest descent methods.
Applied and Computational Harmonic Analysis, 40(2):417–429, 2016.
[Ver12]
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Compressed
Sensing, Theory and Applications, pages 210 – 268, 2012.
[WC16]
L. Wang and Y. Chi. Blind deconvolution from multiple sparse inputs. IEEE Signal Processing
Letters, 23(10):1384–1388, 2016.
[WCCL16] K. Wei, J.-F. Cai, T. F. Chan, and S. Leung. Guarantees of riemannian optimization for low
rank matrix recovery. SIAM Journal on Matrix Analysis and Applications, 37(3):1198–1222,
2016.
[Wed72]
P.-Å. Wedin. Perturbation bounds in connection with singular value decomposition. BIT
Numerical Mathematics, 12(1):99–111, 1972.
[Wei15]
K. Wei. Solving systems of phaseless equations via Kaczmarz methods: A proof of concept
study. Inverse Problems, 31(12):125008, 2015.
[WGE17]
G. Wang, G. B. Giannakis, and Y. C. Eldar. Solving systems of random quadratic equations
via truncated amplitude flow. IEEE Transactions on Information Theory, 2017.
[WGSC17] G. Wang, G. B. Giannakis, Y. Saad, and J. Chen. Solving almost all systems of random
quadratic equations. arXiv preprint arXiv:1705.10407, 2017.
[WWS15]
C. D. White, R. Ward, and S. Sanghavi. The local convexity of solving quadratic equations.
arXiv preprint arXiv:1506.07868, 2015.
[WZG+ 16] G. Wang, L. Zhang, G. B. Giannakis, M. Akçakaya, and J. Chen. Sparse phase retrieval via
truncated amplitude flow. arXiv preprint arXiv:1611.07641, 2016.
[YWS15]
Y. Yu, T. Wang, and R. J. Samworth. A useful variant of the Davis-Kahan theorem for statisticians. Biometrika, 102(2):315–323, 2015.
[ZB17]
Y. Zhong and N. Boumal. Near-optimal bounds for phase synchronization. arXiv preprint
arXiv:1703.06605, 2017.
[ZBH+ 16]
C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires
rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
[ZCL16]
H. Zhang, Y. Chi, and Y. Liang. Provable non-convex phase retrieval with outliers: Median
truncated Wirtinger flow. In International conference on machine learning, pages 1022–1031,
2016.
[ZL15]
Q. Zheng and J. Lafferty. A convergent gradient descent algorithm for rank minimization and
semidefinite programming from random linear measurements. In Advances in Neural Information Processing Systems, pages 109–117, 2015.
[ZL16]
Q. Zheng and J. Lafferty. Convergence analysis for rectangular matrix completion using BurerMonteiro factorization and gradient descent. arXiv preprint arXiv:1605.07051, 2016.
[ZLK+ 17]
Y. Zhang, Y. Lau, H.-w. Kuo, S. Cheung, A. Pasupathy, and J. Wright. On the global geometry
of sphere-constrained sparse blind deconvolution. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pages 4894–4902, 2017.
[ZSJ+ 17]
K. Zhong, Z. Song, P. Jain, P. L. Bartlett, and I. S. Dhillon. Recovery guarantees for onehidden-layer neural networks. arXiv preprint arXiv:1706.03175, 2017.
[ZWL15]
T. Zhao, Z. Wang, and H. Liu. A nonconvex optimization framework for low rank matrix
estimation. In Advances in Neural Information Processing Systems, pages 559–567, 2015.
[ZZLC17]
H. Zhang, Y. Zhou, Y. Liang, and Y. Chi. A nonconvex approach for phase retrieval: Reshaped
wirtinger flow and incremental algorithms. Journal of Machine Learning Research, 2017.
A Proofs for phase retrieval
Before proceeding, we gather a few simple facts. The standard concentration inequality for $\chi^2$ random variables together with the union bound reveals that the sampling vectors $\{a_j\}$ obey
$$\max_{1\le j\le m}\|a_j\|_2 \le \sqrt{6n} \tag{98}$$
with probability at least $1 - O(me^{-1.5n})$. In addition, standard Gaussian concentration inequalities give
$$\max_{1\le j\le m}\big|a_j^\top x^\natural\big| \le 5\sqrt{\log n} \tag{99}$$
with probability exceeding $1 - O(mn^{-10})$.
A.1 Proof of Lemma 1
We start with the smoothness bound, namely, $\nabla^2 f(x) \preceq O(\log n)\cdot I_n$. It suffices to prove the upper bound $\|\nabla^2 f(x)\| \lesssim \log n$. To this end, we first decompose the Hessian (cf. (44)) into three components as follows:
$$\nabla^2 f(x) = \underbrace{\frac{3}{m}\sum_{j=1}^m\Big[\big(a_j^\top x\big)^2 - \big(a_j^\top x^\natural\big)^2\Big]a_j a_j^\top}_{:=\Lambda_1} + \underbrace{\frac{2}{m}\sum_{j=1}^m\big(a_j^\top x^\natural\big)^2 a_j a_j^\top - \big(2I_n + 4x^\natural x^{\natural\top}\big)}_{:=\Lambda_2} + \underbrace{2I_n + 4x^\natural x^{\natural\top}}_{:=\Lambda_3},$$
where we have used $y_j = (a_j^\top x^\natural)^2$. In the sequel, we control the three terms $\Lambda_1$, $\Lambda_2$ and $\Lambda_3$ in reverse order.
• The third term $\Lambda_3$ can be easily bounded by
$$\|\Lambda_3\| \le 2\|I_n\| + 4\big\|x^\natural x^{\natural\top}\big\| = 6.$$
• The second term $\Lambda_2$ can be controlled by means of Lemma 32:
$$\|\Lambda_2\| \le 2\delta$$
for an arbitrarily small constant $\delta > 0$, as long as $m \ge c_0 n\log n$ for $c_0$ sufficiently large.
• It thus remains to control $\Lambda_1$. Towards this we discover that
$$\|\Lambda_1\| \le \frac{3}{m}\Big\|\sum_{j=1}^m a_j^\top\big(x - x^\natural\big)\cdot a_j^\top\big(x + x^\natural\big)\cdot a_j a_j^\top\Big\|. \tag{100}$$
Under the assumption $\max_{1\le j\le m}|a_j^\top(x - x^\natural)| \le C_2\sqrt{\log n}$ and the fact (99), we can also obtain
$$\max_{1\le j\le m}\big|a_j^\top\big(x + x^\natural\big)\big| \le 2\max_{1\le j\le m}\big|a_j^\top x^\natural\big| + \max_{1\le j\le m}\big|a_j^\top\big(x - x^\natural\big)\big| \le (10 + C_2)\sqrt{\log n}.$$
Substitution into (100) leads to
$$\|\Lambda_1\| \le 3C_2(10 + C_2)\log n\cdot\Big\|\frac1m\sum_{j=1}^m a_j a_j^\top\Big\| \le 4C_2(10 + C_2)\log n,$$
where the last inequality is a direct consequence of Lemma 31.
Combining the above bounds on $\Lambda_1$, $\Lambda_2$ and $\Lambda_3$ yields
$$\big\|\nabla^2 f(x)\big\| \le \|\Lambda_1\| + \|\Lambda_2\| + \|\Lambda_3\| \le 4C_2(10 + C_2)\log n + 2\delta + 6 \le 5C_2(10 + C_2)\log n,$$
as long as $n$ is sufficiently large. This establishes the claimed smoothness property.
Next we move on to the strong convexity lower bound. Picking a constant $C > 0$ and enforcing proper truncation, we get
$$\nabla^2 f(x) = \frac1m\sum_{j=1}^m\Big[3\big(a_j^\top x\big)^2 - y_j\Big]a_j a_j^\top \succeq \underbrace{\frac{3}{m}\sum_{j=1}^m\big(a_j^\top x\big)^2\mathbf{1}_{\{|a_j^\top x|\le C\}}\,a_j a_j^\top}_{:=\Lambda_4} - \underbrace{\frac1m\sum_{j=1}^m\big(a_j^\top x^\natural\big)^2 a_j a_j^\top}_{:=\Lambda_5}.$$
We begin with the simpler term $\Lambda_5$. Lemma 32 implies that with probability at least $1 - O(n^{-10})$,
$$\big\|\Lambda_5 - \big(I_n + 2x^\natural x^{\natural\top}\big)\big\| \le \delta$$
holds for any small constant $\delta > 0$, as long as $m/(n\log n)$ is sufficiently large. This reveals that
$$\Lambda_5 \preceq (1 + \delta)\cdot I_n + 2x^\natural x^{\natural\top}.$$
To bound $\Lambda_4$, invoke Lemma 33 to conclude that with probability at least $1 - c_3 e^{-c_2 m}$ (for some constants $c_2, c_3 > 0$),
$$\big\|\Lambda_4 - 3\big(\beta_1 xx^\top + \beta_2\|x\|_2^2 I_n\big)\big\| \le \delta\|x\|_2^2$$
for any small constant $\delta > 0$, provided that $m/n$ is sufficiently large. Here,
$$\beta_1 := \mathbb{E}\big[\xi^4\mathbf{1}_{\{|\xi|\le C\}}\big] - \mathbb{E}\big[\xi^2\mathbf{1}_{\{|\xi|\le C\}}\big] \qquad\text{and}\qquad \beta_2 := \mathbb{E}\big[\xi^2\mathbf{1}_{\{|\xi|\le C\}}\big],$$
where the expectation is taken with respect to $\xi\sim\mathcal{N}(0,1)$. By the assumption $\|x - x^\natural\|_2 \le 2C_1$, one has
$$\|x\|_2 \le 1 + 2C_1, \qquad \big|\|x\|_2^2 - \|x^\natural\|_2^2\big| \le 2C_1(4C_1 + 1), \qquad \big\|x^\natural x^{\natural\top} - xx^\top\big\| \le 6C_1(4C_1 + 1),$$
which leads to
$$\big\|\Lambda_4 - 3\big(\beta_1 x^\natural x^{\natural\top} + \beta_2 I_n\big)\big\| \le \big\|\Lambda_4 - 3\big(\beta_1 xx^\top + \beta_2\|x\|_2^2 I_n\big)\big\| + 3\big\|\big(\beta_1 x^\natural x^{\natural\top} + \beta_2 I_n\big) - \big(\beta_1 xx^\top + \beta_2\|x\|_2^2 I_n\big)\big\|$$
$$\le \delta\|x\|_2^2 + 3\beta_1\big\|x^\natural x^{\natural\top} - xx^\top\big\| + 3\beta_2\big\|I_n - \|x\|_2^2 I_n\big\| \le \delta(1 + 2C_1)^2 + 18\beta_1 C_1(4C_1 + 1) + 6\beta_2 C_1(4C_1 + 1).$$
This further implies
$$\Lambda_4 \succeq 3\big(\beta_1 x^\natural x^{\natural\top} + \beta_2 I_n\big) - \big[\delta(1 + 2C_1)^2 + 18\beta_1 C_1(4C_1 + 1) + 6\beta_2 C_1(4C_1 + 1)\big]\,I_n.$$
Recognizing that $\beta_1$ (resp. $\beta_2$) approaches 2 (resp. 1) as $C$ grows, we can thus take $C_1$ small enough and $C$ large enough to guarantee that
$$\Lambda_4 \succeq 5x^\natural x^{\natural\top} + 2I_n.$$
Putting the preceding two bounds on $\Lambda_4$ and $\Lambda_5$ together yields
$$\nabla^2 f(x) \succeq 5x^\natural x^{\natural\top} + 2I_n - \big[(1 + \delta)\cdot I_n + 2x^\natural x^{\natural\top}\big] \succeq (1/2)\cdot I_n$$
as claimed.
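The Hessian used throughout this proof, $\nabla^2 f(x) = m^{-1}\sum_j[3(a_j^\top x)^2 - y_j]a_j a_j^\top$, is easy to form numerically, which gives a quick sanity check of the two-sided bounds of Lemma 1. The snippet below is such a check (a sketch with illustrative names and parameters), not part of the proof.

```python
import numpy as np

def pr_hessian(x, A, y):
    """Hessian (1/m) * sum_j [3 (a_j^T x)^2 - y_j] a_j a_j^T for phase retrieval.

    A : (m, n) array with rows a_j^T, y : (m,) measurements (a_j^T x_star)^2.
    """
    m = A.shape[0]
    w = 3.0 * (A @ x) ** 2 - y                      # per-sample weights
    return (A * w[:, None]).T @ A / m

# numerically confirm the two-sided bounds near the truth
rng = np.random.default_rng(2)
n, m = 100, 20000
x_star = rng.standard_normal(n); x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2
H = pr_hessian(x_star + 0.01 * rng.standard_normal(n) / np.sqrt(n), A, y)
eig = np.linalg.eigvalsh(H)
print(eig.min(), eig.max() / np.log(n))   # min eigenvalue well above 1/2; max is O(log n)
```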
A.2 Proof of Lemma 2
Using the update rule (cf. (17)) as well as the fundamental theorem of calculus [Lan93, Chapter XIII, Theorem 4.2], we get
$$x^{t+1} - x^\natural = x^t - \eta\nabla f\big(x^t\big) - \big[x^\natural - \eta\nabla f\big(x^\natural\big)\big] = \Big[I_n - \eta\int_0^1\nabla^2 f\big(x(\tau)\big)\,\mathrm{d}\tau\Big]\big(x^t - x^\natural\big),$$
where we denote $x(\tau) := x^\natural + \tau(x^t - x^\natural)$, $0 \le \tau \le 1$. Here, the first equality makes use of the fact that $\nabla f(x^\natural) = 0$. Under the condition (45), it is self-evident that for all $0 \le \tau \le 1$,
$$\big\|x(\tau) - x^\natural\big\|_2 = \big\|\tau\big(x^t - x^\natural\big)\big\|_2 \le 2C_1 \qquad\text{and}\qquad \max_{1\le l\le m}\big|a_l^\top\big(x(\tau) - x^\natural\big)\big| \le \max_{1\le l\le m}\big|a_l^\top\tau\big(x^t - x^\natural\big)\big| \le C_2\sqrt{\log n}.$$
This means that for all $0 \le \tau \le 1$,
$$(1/2)\cdot I_n \preceq \nabla^2 f\big(x(\tau)\big) \preceq \big[5C_2(10 + C_2)\log n\big]\cdot I_n$$
in view of Lemma 1. Picking $\eta \le 1/[5C_2(10 + C_2)\log n]$ (and hence $\|\eta\nabla^2 f(x(\tau))\| \le 1$), one sees that
$$0 \preceq I_n - \eta\int_0^1\nabla^2 f\big(x(\tau)\big)\,\mathrm{d}\tau \preceq (1 - \eta/2)\cdot I_n,$$
which immediately yields
$$\big\|x^{t+1} - x^\natural\big\|_2 \le \Big\|I_n - \eta\int_0^1\nabla^2 f\big(x(\tau)\big)\,\mathrm{d}\tau\Big\|\cdot\big\|x^t - x^\natural\big\|_2 \le (1 - \eta/2)\big\|x^t - x^\natural\big\|_2.$$
A.3 Proof of Lemma 3
We start with proving (19a). For all $0 \le t \le T_0$, invoke Lemma 2 recursively with the conditions (47) to reach
$$\big\|x^t - x^\natural\big\|_2 \le (1 - \eta/2)^t\big\|x^0 - x^\natural\big\|_2 \le C_1(1 - \eta/2)^t\big\|x^\natural\big\|_2. \tag{101}$$
This finishes the proof of (19a) for $0 \le t \le T_0$ and also reveals that
$$\big\|x^{T_0} - x^\natural\big\|_2 \le C_1(1 - \eta/2)^{T_0}\big\|x^\natural\big\|_2 \lesssim \frac1n\big\|x^\natural\big\|_2, \tag{102}$$
provided that $\eta \asymp 1/\log n$. Applying the Cauchy-Schwarz inequality and the fact (98) indicates that
$$\max_{1\le l\le m}\big|a_l^\top\big(x^{T_0} - x^\natural\big)\big| \le \max_{1\le l\le m}\|a_l\|_2\,\big\|x^{T_0} - x^\natural\big\|_2 \le \sqrt{6n}\cdot\frac1n\big\|x^\natural\big\|_2 \lesssim C_2\sqrt{\log n},$$
leading to the satisfaction of (45). Therefore, invoking Lemma 2 yields
$$\big\|x^{T_0+1} - x^\natural\big\|_2 \le (1 - \eta/2)\big\|x^{T_0} - x^\natural\big\|_2 \lesssim \frac1n\big\|x^\natural\big\|_2.$$
One can then repeat this argument to arrive at, for all $t > T_0$,
$$\big\|x^t - x^\natural\big\|_2 \le (1 - \eta/2)^t\big\|x^0 - x^\natural\big\|_2 \le C_1(1 - \eta/2)^t\big\|x^\natural\big\|_2 \lesssim \frac1n\big\|x^\natural\big\|_2. \tag{103}$$
We are left with (19b). It is self-evident that the iterates from $0 \le t \le T_0$ satisfy (19b) by assumption. For $t > T_0$, we can use the Cauchy-Schwarz inequality to obtain
$$\max_{1\le j\le m}\big|a_j^\top\big(x^t - x^\natural\big)\big| \le \max_{1\le j\le m}\|a_j\|_2\,\big\|x^t - x^\natural\big\|_2 \lesssim \sqrt n\cdot\frac1n \le C_2\sqrt{\log n},$$
where the penultimate relation uses the conditions (98) and (103).
A.4 Proof of Lemma 4
First, going through the same derivation as in (54) and (55) will result in
$$\max_{1\le l\le m}\big|a_l^\top\big(x^{t,(l)} - x^\natural\big)\big| \le C_4\sqrt{\log n} \tag{104}$$
for some $C_4 < C_2$, which will be helpful for our analysis.
We use the gradient update rules once again to decompose
$$x^{t+1} - x^{t+1,(l)} = \big[x^t - \eta\nabla f\big(x^t\big)\big] - \big[x^{t,(l)} - \eta\nabla f^{(l)}\big(x^{t,(l)}\big)\big]$$
$$= \big[x^t - \eta\nabla f\big(x^t\big)\big] - \big[x^{t,(l)} - \eta\nabla f\big(x^{t,(l)}\big)\big] - \eta\big[\nabla f\big(x^{t,(l)}\big) - \nabla f^{(l)}\big(x^{t,(l)}\big)\big]$$
$$= \underbrace{x^t - x^{t,(l)} - \eta\big[\nabla f\big(x^t\big) - \nabla f\big(x^{t,(l)}\big)\big]}_{:=\nu_1^{(l)}} - \underbrace{\eta\,\frac1m\Big[\big(a_l^\top x^{t,(l)}\big)^2 - \big(a_l^\top x^\natural\big)^2\Big]a_l^\top x^{t,(l)}\,a_l}_{:=\nu_2^{(l)}},$$
where the last line comes from the definition of $\nabla f(\cdot)$ and $\nabla f^{(l)}(\cdot)$.
1. We first control the term $\nu_2^{(l)}$, which is easier to deal with. Specifically,
$$\big\|\nu_2^{(l)}\big\|_2 \le \eta\,\frac{\|a_l\|_2}{m}\Big|\big(a_l^\top x^{t,(l)}\big)^2 - \big(a_l^\top x^\natural\big)^2\Big|\,\big|a_l^\top x^{t,(l)}\big| \overset{\text{(i)}}{\lesssim} C_4(C_4 + 5)(C_4 + 10)\,\eta\,\frac{n\log n}{m}\sqrt{\frac{\log n}{n}} \overset{\text{(ii)}}{\le} c\eta\sqrt{\frac{\log n}{n}}$$
for any small constant $c > 0$. Here (i) follows from (98) and, in view of (99) and (104),
$$\Big|\big(a_l^\top x^{t,(l)}\big)^2 - \big(a_l^\top x^\natural\big)^2\Big| \le \big|a_l^\top\big(x^{t,(l)} - x^\natural\big)\big|\Big(\big|a_l^\top\big(x^{t,(l)} - x^\natural\big)\big| + 2\big|a_l^\top x^\natural\big|\Big) \le C_4(C_4 + 10)\log n$$
$$\text{and}\qquad \big|a_l^\top x^{t,(l)}\big| \le \big|a_l^\top\big(x^{t,(l)} - x^\natural\big)\big| + \big|a_l^\top x^\natural\big| \le (C_4 + 5)\sqrt{\log n}.$$
And (ii) holds as long as $m \gg n\log n$.
2. For the term $\nu_1^{(l)}$, the fundamental theorem of calculus [Lan93, Chapter XIII, Theorem 4.2] tells us that
$$\nu_1^{(l)} = \Big[I_n - \eta\int_0^1\nabla^2 f\big(x(\tau)\big)\,\mathrm{d}\tau\Big]\big(x^t - x^{t,(l)}\big),$$
where we abuse the notation and denote $x(\tau) := x^{t,(l)} + \tau(x^t - x^{t,(l)})$. By the induction hypotheses (51) and the condition (104), one can verify that
$$\big\|x(\tau) - x^\natural\big\|_2 \le \tau\big\|x^t - x^\natural\big\|_2 + (1-\tau)\big\|x^{t,(l)} - x^\natural\big\|_2 \le 2C_1 \tag{105}$$
$$\text{and}\qquad \max_{1\le l\le m}\big|a_l^\top\big(x(\tau) - x^\natural\big)\big| \le \tau\max_{1\le l\le m}\big|a_l^\top\big(x^t - x^\natural\big)\big| + (1-\tau)\max_{1\le l\le m}\big|a_l^\top\big(x^{t,(l)} - x^\natural\big)\big| \le C_2\sqrt{\log n}$$
for all $0 \le \tau \le 1$, as long as $C_4 \le C_2$. The second line follows directly from (104). To see why (105) holds, we note that
$$\big\|x^{t,(l)} - x^\natural\big\|_2 \le \big\|x^{t,(l)} - x^t\big\|_2 + \big\|x^t - x^\natural\big\|_2 \le C_3\sqrt{\frac{\log n}{n}} + C_1,$$
where the second inequality follows from the induction hypotheses (51b) and (51a). This combined with (51a) gives
$$\big\|x(\tau) - x^\natural\big\|_2 \le \tau C_1 + (1-\tau)\Big(C_3\sqrt{\frac{\log n}{n}} + C_1\Big) \le 2C_1$$
as long as $n$ is large enough, thus justifying (105). Hence by Lemma 1, $\nabla^2 f(x(\tau))$ is positive definite and almost well-conditioned. By choosing $0 < \eta \le 1/[5C_2(10 + C_2)\log n]$, we get
$$\big\|\nu_1^{(l)}\big\|_2 \le (1 - \eta/2)\big\|x^t - x^{t,(l)}\big\|_2.$$
3. Combine the preceding bounds on $\nu_1^{(l)}$ and $\nu_2^{(l)}$ as well as the induction bound (51b) to arrive at
$$\big\|x^{t+1} - x^{t+1,(l)}\big\|_2 \le (1 - \eta/2)\big\|x^t - x^{t,(l)}\big\|_2 + c\eta\sqrt{\frac{\log n}{n}} \le C_3\sqrt{\frac{\log n}{n}}. \tag{106}$$
This establishes (53) for the $(t+1)$th iteration.
A.5 Proof of Lemma 5
In view of the assumption (42) that $\|x^0 - x^\natural\|_2 \le \|x^0 + x^\natural\|_2$ and the fact that $x^0 = \sqrt{\lambda_1(Y)/3}\,\widetilde x^0$ for some $\lambda_1(Y) > 0$ (which we will verify below), it is straightforward to see that
$$\big\|\widetilde x^0 - x^\natural\big\|_2 \le \big\|\widetilde x^0 + x^\natural\big\|_2.$$
One can then invoke the Davis-Kahan $\sin\Theta$ theorem [YWS15, Corollary 1] to obtain
$$\big\|\widetilde x^0 - x^\natural\big\|_2 \le 2\sqrt 2\,\frac{\|Y - \mathbb{E}[Y]\|}{\lambda_1(\mathbb{E}[Y]) - \lambda_2(\mathbb{E}[Y])}.$$
Note that (56), namely $\|Y - \mathbb{E}[Y]\| \le \delta$, is a direct consequence of Lemma 32. Additionally, the fact that $\mathbb{E}[Y] = I + 2x^\natural x^{\natural\top}$ gives $\lambda_1(\mathbb{E}[Y]) = 3$, $\lambda_2(\mathbb{E}[Y]) = 1$, and $\lambda_1(\mathbb{E}[Y]) - \lambda_2(\mathbb{E}[Y]) = 2$. Combining this spectral gap and the inequality $\|Y - \mathbb{E}[Y]\| \le \delta$, we arrive at
$$\big\|\widetilde x^0 - x^\natural\big\|_2 \le \sqrt 2\,\delta.$$
To connect this bound with $x^0$, we need to take into account the scaling factor $\sqrt{\lambda_1(Y)/3}$. To this end, it follows from Weyl's inequality and (56) that
$$|\lambda_1(Y) - 3| = |\lambda_1(Y) - \lambda_1(\mathbb{E}[Y])| \le \|Y - \mathbb{E}[Y]\| \le \delta$$
and, as a consequence, $\lambda_1(Y) \ge 3 - \delta > 0$ when $\delta \le 1$. This further implies that
$$\Big|\sqrt{\frac{\lambda_1(Y)}{3}} - 1\Big| = \frac{\big|\frac{\lambda_1(Y)}{3} - 1\big|}{\sqrt{\frac{\lambda_1(Y)}{3}} + 1} \le \frac13\,|\lambda_1(Y) - 3| \le \frac{\delta}{3}, \tag{107}$$
where we have used the elementary identity $\sqrt a - \sqrt b = (a - b)/(\sqrt a + \sqrt b)$. With these bounds in place, we can use the triangle inequality to get
$$\big\|x^0 - x^\natural\big\|_2 = \Big\|\sqrt{\tfrac{\lambda_1(Y)}{3}}\,\widetilde x^0 - x^\natural\Big\|_2 = \Big\|\sqrt{\tfrac{\lambda_1(Y)}{3}}\,\widetilde x^0 - \widetilde x^0 + \widetilde x^0 - x^\natural\Big\|_2 \le \Big|\sqrt{\tfrac{\lambda_1(Y)}{3}} - 1\Big| + \big\|\widetilde x^0 - x^\natural\big\|_2 \le \frac13\delta + \sqrt 2\,\delta \le 2\delta.$$
A.6 Proof of Lemma 6
To begin with, repeating the same argument as in Lemma 5 (which we omit here for conciseness), we see that for any fixed constant $\delta > 0$,
$$\big\|Y^{(l)} - \mathbb{E}\big[Y^{(l)}\big]\big\| \le \delta, \qquad \big\|x^{0,(l)} - x^\natural\big\|_2 \le 2\delta, \qquad \big\|\widetilde x^{0,(l)} - x^\natural\big\|_2 \le \sqrt 2\,\delta, \qquad 1 \le l \le m, \tag{108}$$
holds with probability at least $1 - O(mn^{-10})$ as long as $m \gg n\log n$. The $\ell_2$ bound on $\|x^0 - x^{0,(l)}\|_2$ is derived as follows.
1. We start by controlling $\|\widetilde x^0 - \widetilde x^{0,(l)}\|_2$. Combining (57) and (108) yields
$$\big\|\widetilde x^0 - \widetilde x^{0,(l)}\big\|_2 \le \big\|\widetilde x^0 - x^\natural\big\|_2 + \big\|\widetilde x^{0,(l)} - x^\natural\big\|_2 \le 2\sqrt 2\,\delta.$$
For $\delta$ sufficiently small, this implies that $\|\widetilde x^0 - \widetilde x^{0,(l)}\|_2 \le \|\widetilde x^0 + \widetilde x^{0,(l)}\|_2$, and hence the Davis-Kahan $\sin\Theta$ theorem [DK70] gives
$$\big\|\widetilde x^0 - \widetilde x^{0,(l)}\big\|_2 \le \frac{\big\|\big(Y - Y^{(l)}\big)\widetilde x^{0,(l)}\big\|_2}{\lambda_1(Y) - \lambda_2\big(Y^{(l)}\big)} \le \big\|\big(Y - Y^{(l)}\big)\widetilde x^{0,(l)}\big\|_2. \tag{109}$$
Here, the second inequality uses Weyl's inequality:
$$\lambda_1(Y) - \lambda_2\big(Y^{(l)}\big) \ge \lambda_1(\mathbb{E}[Y]) - \big\|Y - \mathbb{E}[Y]\big\| - \lambda_2\big(\mathbb{E}\big[Y^{(l)}\big]\big) - \big\|Y^{(l)} - \mathbb{E}\big[Y^{(l)}\big]\big\| \ge 3 - \delta - 1 - \delta \ge 1,$$
with the proviso that $\delta \le 1/2$.
2. We now connect $\|x^0 - x^{0,(l)}\|_2$ with $\|\widetilde x^0 - \widetilde x^{0,(l)}\|_2$. Applying Weyl's inequality and (56) yields
$$|\lambda_1(Y) - 3| \le \|Y - \mathbb{E}[Y]\| \le \delta \qquad\Longrightarrow\qquad \lambda_1(Y) \in [3 - \delta, 3 + \delta] \subseteq [2, 4],$$
and, similarly, $\lambda_1(Y^{(l)}), \|Y\|, \|Y^{(l)}\| \in [2, 4]$. Invoke Lemma 34 to arrive at
$$\big\|x^0 - x^{0,(l)}\big\|_2 \le \Big(\frac{4}{\sqrt 3} + \sqrt 2\Big)\big\|\widetilde x^0 - \widetilde x^{0,(l)}\big\|_2 + \frac{1}{2\sqrt 2}\big\|\big(Y - Y^{(l)}\big)\widetilde x^{0,(l)}\big\|_2 \tag{110}$$
$$\le 6\,\big\|\big(Y - Y^{(l)}\big)\widetilde x^{0,(l)}\big\|_2, \tag{111}$$
where the last inequality comes from (109).
3. Everything then boils down to controlling $\max_{1\le l\le m}\|(Y - Y^{(l)})\widetilde x^{0,(l)}\|_2$. Towards this we observe that
$$\max_{1\le l\le m}\big\|\big(Y - Y^{(l)}\big)\widetilde x^{0,(l)}\big\|_2 = \max_{1\le l\le m}\frac1m\big(a_l^\top x^\natural\big)^2\big|a_l^\top\widetilde x^{0,(l)}\big|\,\|a_l\|_2 \overset{\text{(i)}}{\lesssim} \frac{\log n\cdot\sqrt{\log n}\cdot\sqrt n}{m} \asymp \sqrt{\frac{\log n}{n}}\cdot\frac{n\log n}{m}. \tag{112}$$
The inequality (i) makes use of the fact $\max_l|a_l^\top x^\natural| \le 5\sqrt{\log n}$ (cf. (99)), the bound $\max_l\|a_l\|_2 \le \sqrt{6n}$ (cf. (98)), and $\max_l|a_l^\top\widetilde x^{0,(l)}| \le 5\sqrt{\log n}$ (due to statistical independence and standard Gaussian concentration). As long as $m/(n\log n)$ is sufficiently large, substituting the above bound (112) into (111) leads us to conclude that
$$\max_{1\le l\le m}\big\|x^0 - x^{0,(l)}\big\|_2 \le C_3\sqrt{\frac{\log n}{n}} \tag{113}$$
for any constant $C_3 > 0$.
B Proofs for matrix completion
Before proceeding to the proofs, let us record an immediate consequence of the incoherence property (25):
$$\big\|X^\natural\big\|_{2,\infty} \le \sqrt{\frac{\kappa\mu}{n}}\,\big\|X^\natural\big\|_{\mathrm{F}} \le \sqrt{\frac{\kappa\mu r}{n}}\,\big\|X^\natural\big\|, \tag{114}$$
where $\kappa = \sigma_{\max}/\sigma_{\min}$ is the condition number of $M^\natural$. This follows since
$$\big\|X^\natural\big\|_{2,\infty} = \big\|U^\natural\big(\Sigma^\natural\big)^{1/2}\big\|_{2,\infty} \le \big\|U^\natural\big\|_{2,\infty}\big\|\big(\Sigma^\natural\big)^{1/2}\big\| \le \sqrt{\frac{\mu}{n}}\,\big\|U^\natural\big\|_{\mathrm{F}}\sqrt{\kappa\sigma_{\min}} \le \sqrt{\frac{\kappa\mu}{n}}\,\big\|X^\natural\big\|_{\mathrm{F}} \le \sqrt{\frac{\kappa\mu r}{n}}\,\big\|X^\natural\big\|.$$
Unless otherwise specified, we use the indicator variable $\delta_{j,k}$ to denote whether the entry in the location $(j,k)$ is included in $\Omega$. Under our model, $\delta_{j,k}$ is a Bernoulli random variable with mean $p$.
B.1 Proof of Lemma 7
By the expression of the Hessian in (61), one can decompose
$$\mathrm{vec}(V)^\top\nabla^2 f_{\mathrm{clean}}(X)\,\mathrm{vec}(V) = \frac{1}{2p}\big\|\mathcal{P}_\Omega\big(VX^\top + XV^\top\big)\big\|_{\mathrm{F}}^2 + \frac1p\big\langle\mathcal{P}_\Omega\big(XX^\top - M^\natural\big),\ VV^\top\big\rangle$$
$$= \underbrace{\frac{1}{2p}\big\|\mathcal{P}_\Omega\big(VX^\top + XV^\top\big)\big\|_{\mathrm{F}}^2 - \frac{1}{2p}\big\|\mathcal{P}_\Omega\big(VX^{\natural\top} + X^\natural V^\top\big)\big\|_{\mathrm{F}}^2}_{:=\alpha_1} + \underbrace{\frac1p\big\langle\mathcal{P}_\Omega\big(XX^\top - M^\natural\big),\ VV^\top\big\rangle}_{:=\alpha_2}$$
$$\quad + \underbrace{\frac{1}{2p}\big\|\mathcal{P}_\Omega\big(VX^{\natural\top} + X^\natural V^\top\big)\big\|_{\mathrm{F}}^2 - \frac12\big\|VX^{\natural\top} + X^\natural V^\top\big\|_{\mathrm{F}}^2}_{:=\alpha_3} + \underbrace{\frac12\big\|VX^{\natural\top} + X^\natural V^\top\big\|_{\mathrm{F}}^2}_{:=\alpha_4}.$$
The basic idea is to demonstrate that: (1) $\alpha_4$ is bounded both from above and from below, and (2) the first three terms are sufficiently small in size compared to $\alpha_4$.
1. We start by controlling $\alpha_4$. It is immediate to derive the following upper bound:
$$\alpha_4 \le \big\|VX^{\natural\top}\big\|_{\mathrm{F}}^2 + \big\|X^\natural V^\top\big\|_{\mathrm{F}}^2 \le 2\big\|X^\natural\big\|^2\|V\|_{\mathrm{F}}^2 = 2\sigma_{\max}\|V\|_{\mathrm{F}}^2.$$
When it comes to the lower bound, one discovers that
$$\alpha_4 = \frac12\Big\{\big\|VX^{\natural\top}\big\|_{\mathrm{F}}^2 + \big\|X^\natural V^\top\big\|_{\mathrm{F}}^2 + 2\,\mathrm{Tr}\big(X^{\natural\top}VX^{\natural\top}V\big)\Big\}$$
$$\ge \sigma_{\min}\|V\|_{\mathrm{F}}^2 + \mathrm{Tr}\Big[\big(Z + X^\natural - Z\big)^\top V\big(Z + X^\natural - Z\big)^\top V\Big]$$
$$\ge \sigma_{\min}\|V\|_{\mathrm{F}}^2 + \mathrm{Tr}\big(Z^\top VZ^\top V\big) - 2\big\|Z - X^\natural\big\|\,\|Z\|\,\|V\|_{\mathrm{F}}^2 - \big\|Z - X^\natural\big\|^2\|V\|_{\mathrm{F}}^2$$
$$\ge (\sigma_{\min} - 5\delta\sigma_{\max})\|V\|_{\mathrm{F}}^2 + \mathrm{Tr}\big(Z^\top VZ^\top V\big), \tag{115}$$
where the last line comes from the assumptions that
$$\big\|Z - X^\natural\big\| \le \delta\big\|X^\natural\big\| \le \big\|X^\natural\big\| \qquad\text{and}\qquad \|Z\| \le \big\|Z - X^\natural\big\| + \big\|X^\natural\big\| \le 2\big\|X^\natural\big\|.$$
With our assumption $V = YH_Y - Z$ in mind, it comes down to controlling
$$\mathrm{Tr}\big(Z^\top VZ^\top V\big) = \mathrm{Tr}\big(Z^\top(YH_Y - Z)\,Z^\top(YH_Y - Z)\big).$$
From the definition of $H_Y$, we see from Lemma 35 that $Z^\top YH_Y$ (and hence $Z^\top(YH_Y - Z)$) is a symmetric matrix, which implies that
$$\mathrm{Tr}\big(Z^\top(YH_Y - Z)\,Z^\top(YH_Y - Z)\big) \ge 0.$$
Substitution into (115) gives
$$\alpha_4 \ge (\sigma_{\min} - 5\delta\sigma_{\max})\|V\|_{\mathrm{F}}^2 \ge \frac{9}{10}\sigma_{\min}\|V\|_{\mathrm{F}}^2,$$
provided that $\kappa\delta \le 1/50$.
2. For α1 , we consider the following quantity
PΩ V X > + XV >
2
F
= PΩ V X > , PΩ V X > + PΩ V X > , PΩ XV >
+ PΩ XV > , PΩ V X > + PΩ XV > , PΩ XV >
= 2 PΩ V X > , PΩ V X > + 2 PΩ V X > , PΩ XV > .
Similar decomposition can be performed on PΩ V X \> + X \ V >
α1 =
2
F
as well. These identities yield
1
PΩ V X > , PΩ V X > − PΩ V X \> , PΩ V X \>
p
{z
}
|
:=β1
1
+
.
PΩ V X > , PΩ XV > − PΩ V X \> , PΩ X \ V >
p
{z
}
|
:=β2
For β2 , one has
β2 =
>
E
1D
PΩ V X − X \
, PΩ X − X \ V >
p
>
E 1
1D
PΩ V X \> , PΩ
PΩ V X − X \
+
, PΩ X \ V > +
p
p
which together with the inequality |hA, Bi| ≤ kAkF kBkF gives
|β2 | ≤
>
1
PΩ V X − X \
p
2
F
+
>
2
PΩ V X − X \
p
F
X − X\ V >
PΩ X \ V >
This then calls for upper bounds on the following two terms
>
1
√ PΩ V X − X \
p
and
F
1
√ PΩ X \ V >
p
F
F
.
(116)
.
The injectivity of PΩ (cf. [CR09, Section 4.2] or Lemma 38)—when restricted to the tangent space of
M \ —gives: for any fixed constant γ > 0,
1
√ PΩ X \ V > F ≤ (1 + γ) X \ V > F ≤ (1 + γ) X \ kV kF
p
with probability at least 1 − O n−10 , provided that n2 p/(µnr log n) is sufficiently large. In addition,
>
1
PΩ V X − X \
p
2
F
=
=
1
p
X
1≤j,k≤n
X
1≤j≤n
> 2
\
δj,k Vj,· Xk,· − Xk,·
>
X
1
\
\
>
Vj,·
Vj,·
δj,k Xk,· − Xk,·
Xk,· − Xk,·
p
1≤k≤n
>
1 X
2
\
\
δj,k Xk,· − Xk,·
Xk,· − Xk,·
kV kF
1≤j≤n p
1≤k≤n
1
X
2
2
\
≤
max
δj,k
max Xk,· − Xk,·
kV kF
p 1≤j≤n
1≤k≤n
2
≤ max
1≤k≤n
≤ (1 + γ) n X − X \
53
2
2,∞
2
kV kF ,
with probability exceeding 1 − O n−10 , which holds as long as np/ log n is sufficiently large. Taken
collectively, the above bounds yield that for any small constant γ > 0,
q
2
2
2
2
2
2
2
|β2 | ≤ (1 + γ) n X − X \ 2,∞ kV kF + 2 (1 + γ) n kX − X \ k2,∞ kV kF · (1 + γ) kX \ k kV kF
√
2
2
. 2 n X \ 2,∞ + n X \ 2,∞ X \ kV kF ,
where the last inequality makes use of the assumption kX − X \ k2,∞ ≤ kX \ k2,∞ . The same analysis can
be repeated to control β1 . Altogether, we obtain
√
2
2
|α1 | ≤ |β1 | + |β2 | . n2 X \ 2,∞ + n X \ 2,∞ X \ kV kF
r
(i)
(ii) 1
√
κµr
2
2
2 κµr
≤ n
+ n
σmax kV kF ≤
σmin kV kF ,
n
n
10
p
where (i) utilizes the incoherence condition (114) and (ii) holds with the proviso that κ3 µr 1.
3. To bound α2 , apply the Cauchy-Schwarz inequality to get
1
1
2
≤
|α2 | = V , PΩ XX > − M \ V
PΩ XX > − M \ kV kF .
p
p
In view of Lemma 43, with probability at least 1 − O n−10 ,
as soon as
p
√
1
2
PΩ XX > − M \ ≤ 2n2 X \ 2,∞ + 4 n log n X \ 2,∞ X \
p
r
√
1
κµr
2 κµr
+ 4 n log n
σmax ≤
σmin
≤ 2n
n
n
10
κ3 µr log n 1, where we utilize the incoherence condition (114). This in turn implies that
1
2
σmin kV kF .
10
Notably, this bound holds uniformly over all X satisfying the condition in Lemma 7, regardless of the
statistical dependence between X and the sampling set Ω.
|α2 | ≤
4. The last term α3 can also be controlled using the injectivity of PΩ when restricted to the tangent space
of M \ . Specifically, it follows from the bounds in [CR09, Section 4.2] or Lemma 38 that
|α3 | ≤ γ V X \> + X \ V >
2
2
F
≤ 4γσmax kV kF ≤
1
2
σmin kV kF
10
for any γ > 0 such that κγ is a small constant, as soon as n2 p κ2 µrn log n.
5. Taking all the preceding bounds collectively yields
>
vec (V ) ∇2 fclean (X) vec (V ) ≥ α4 − |α1 | − |α2 | − |α3 |
9
3
1
2
2
≥
−
σmin kV kF ≥ σmin kV kF
10 10
2
for all V satisfying our assumptions, and
>
vec (V ) ∇2 fclean (X) vec (V ) ≤ α4 + |α1 | + |α2 | + |α3 |
3
5
2
2
≤ 2σmax + σmin kV kF ≤ σmax kV kF
10
2
for all V . Since this upper bound holds uniformly over all V , we conclude that
∇2 fclean (X) ≤
as claimed.
54
5
σmax
2
B.2
Proof of Lemma 8
ct+1 is chosen to minimize the error in terms of the Frobenius norm (cf. (26)), we have
Given that H
ct+1 − X \
X t+1 H
F
ct − X \
≤ X t+1 H
=
F
t
t
c − X\
X − η∇f X t H
F
ct − η∇f X t H
ct − X \
= X tH
F
1
(ii)
t ct
t ct
t ct
= X H − η ∇fclean X H − PΩ (E) X H − X \
p
F
1
ct − η∇fclean X t H
ct − X \ − η∇fclean X \
ct ,
+η
≤ X tH
PΩ (E) X t H
p
F
F
{z
} |
|
{z
}
(i)
:=α1
(117)
:=α2
where (i) follows from the identity ∇f (X t R) = ∇f (X t ) R for any orthonormal matrix R ∈ Or×r , (ii) arises
from the definitions of ∇f (X) and ∇fclean (X) (see (59) and (60), respectively), and the last inequality (117)
utilizes the triangle inequality and the fact that ∇fclean (X \ ) = 0. It thus suffices to control α1 and α2 .
1. For the second term α2 in (117), it is easy to see that
α2 ≤ η
1
PΩ (E)
p
ct
X tH
F
≤ 2η
1
PΩ (E)
p
X\
F
≤ 2ηCσ
r
n
kX \ kF
p
ct
ct −
for some absolute constant C > 0. Here, the second inequality holds because X t H
≤ X tH
F
\
\
\
X F + X F ≤ 2 X F , following the hypothesis (28a) together with our assumptions on the noise
and the sample complexity. The last inequality makes use of Lemma 40.
2. For the first term α1 in (117), the fundamental theorem of calculus [Lan93, Chapter XIII, Theorem 4.2]
reveals
h
i
ct − η∇fclean X t H
ct − X \ − η∇fclean X \
vec X t H
h
i
h
i
ct − X \ − η · vec ∇fclean X t H
ct − ∇fclean X \
= vec X t H
!
Z 1
2
ct − X \ ,
= Inr − η
∇ fclean (X(τ )) dτ vec X t H
(118)
|0
{z
}
:=A
ct − X \ ). Taking the squared Euclidean norm of both sides of the
where we denote X(τ ) := X \ + τ (X t H
equality (118) leads to
2
2
ct − X \
(Inr − ηA) vec X t H
ct − X \ > Inr − 2ηA + η 2 A2 vec X t H
ct − X \
= vec X t H
ct − X \
(α1 ) = vec X t H
ct − X \
≤ X tH
2
F
>
2
+ η 2 kAk
ct − X \
X tH
2
F
ct − X \
− 2η vec X t H
>
ct − X \ ,
A vec X t H
(119)
where in (119) we have used the fact that
ct − X \
vec X t H
>
ct − X \ ≤ kAk2 vec X t H
ct − X \
A2 vec X t H
2
2
Based on the condition (28b), it is easily seen that ∀τ ∈ [0, 1],
s
s
!
log
n
C
n
log
n
8
+
σ
X\
X (τ ) − X \ 2,∞ ≤ C5 µr
np
σmin
p
55
2
= kAk
2,∞
ct − X \
X tH
.
2
F
.
Taking X = X (τ ) , Y = X t and Z = X \ in Lemma 7, one can easily verify the assumptions therein
given our sample size condition n2 p κ3 µ3 r3 n log3 n and the noise condition (27). As a result,
ct − X \
vec X t H
>
ct − X \
ct − X \ ≥ σmin X t H
A vec X t H
2
Substituting these two inequalities into (119) yields
25
2
2
ct − X \
− σmin η X t H
(α1 ) ≤ 1 + η 2 σmax
4
2
F
2
F
and
kAk ≤
5
σmax .
2
σmin
ct − X \
≤ 1−
η X tH
2
2
as long as 0 < η ≤ (2σmin )/(25σmax
), which further implies that
σmin
ct − X \
α1 ≤ 1 −
η X tH
4
F
2
F
.
3. Combining the preceding bounds on both α1 and α2 and making use of the hypothesis (28a), we have
r
ct+1 − X \ ≤ 1 − σmin η X t H
ct − X \ + 2ηCσ n X \
X t+1 H
F
4
p
F
F
r
r
σmin
σ
n
n
1
≤ 1−
η
X \ F + C1
X \ F + 2ηCσ
X\ F
C4 ρt µr √
4
np
σmin p
p
r
σmin C1
n
σmin
1
\
t
X F+ 1−
X\ F
≤ 1−
η C4 ρ µr √
η
+ 2ηC σ
4
np
4
σmin
p
r
1
σ
n
t+1
\
≤ C4 ρ µr √
X F + C1
X\ F
np
σmin p
2
as long as 0 < η ≤ (2σmin )/(25σmax
), 1 − (σmin /4) · η ≤ ρ < 1 and C1 is sufficiently large. This completes
the proof of the contraction with respect to the Frobenius norm.
B.3
Proof of Lemma 9
To facilitate analysis, we construct an auxiliary matrix defined as follows
ft+1 := X t H
ct − η 1 PΩ X t X t> − M \ + E X \ .
X
p
(120)
With this auxiliary matrix in place, we invoke the triangle inequality to bound
ft+1 − X \ .
ct+1 − X \ ≤ X t+1 H
ct+1 − X
ft+1 + X
X t+1 H
|
{z
} |
{z
}
:=α1
(121)
:=α2
ft+1 is also not far from the truth.
1. We start with the second term α2 and show that the auxiliary matrix X
t+1
f
The definition of X
allows one to express
ct − η 1 PΩ X t X t> − M \ + E X \ − X \
α2 = X t H
p
1
ct − η 1 PΩ X t X t> − X \ X \> X \ − X \
≤η
PΩ (E) X \ + X t H
p
p
1
ct − η X t X t> − X \ X \> X \ − X \
≤η
PΩ (E) X \ + X t H
p
|
{z
}
(122)
:=β1
1
+η
PΩ X t X t> − X \ X
p
|
\>
X \ − X t X t> − X \ X \> X \ ,
{z
}
:=β2
56
(123)
where we have used the triangle inequality to separate the population-level component (i.e. β1 ), the
perturbation (i.e. β2 ), and the noise component. In what follows, we will denote
ct − X \
∆t := X t H
which, by Lemma 35, satisfies the following symmetry property
ct> X t> X \ = X \> X t H
ct
H
∆t> X \ = X \> ∆t .
=⇒
(124)
(a) The population-level component β1 is easier to control. Specifically, we first simplify its expression as
β1 = ∆t − η ∆t ∆t> + ∆t X \> + X \ ∆t> X \
≤ ∆t − η ∆t X \> + X \ ∆t> X \ + η ∆t ∆t> X \ .
{z
} |
{z
}
|
:=γ1
:=γ2
The leading term γ1 can be upper bounded by
γ1 = ∆t − η∆t Σ\ − ηX \ ∆t> X \ = ∆t − η∆t Σ\ − ηX \ X \> ∆t
1
1 t
1
=
∆ Ir − 2ηΣ\ +
Ir − 2ηM \ ∆t ≤
Ir − 2ηΣ\ + Ir − 2ηM \
∆t
2
2
2
where the second identity follows from the symmetry property (124). By choosing η ≤ 1/(2σmax ), one
has 0 Ir − 2ηΣ\ (1 − 2ησmin ) Ir and 0 Ir − 2ηM \ Ir , and further one can ensure
γ1 ≤
1
[(1 − 2ησmin ) + 1] ∆t = (1 − ησmin ) ∆t .
2
(125)
Next, regarding the higher order term γ2 , we can easily obtain
γ2 ≤ η ∆t
2
X\ .
(126)
The bounds (125) and (126) taken collectively give
β1 ≤ (1 − ησmin ) ∆t + η ∆t
2
X\ .
(127)
(b) We now turn to the perturbation part β2 by showing that
1
1
β2 =
PΩ ∆t ∆t> + ∆t X \> + X \ ∆t> X \ − ∆t ∆t> + ∆t X \> + X \ ∆t> X \
η
p
1
1
≤
PΩ ∆t X \> X \ − ∆t X \> X \ + PΩ X \ ∆t> X \ − X \ ∆t> X \
p
p
F
F
{z
} |
{z
}
|
:=θ1
:=θ2
1
+
PΩ ∆t ∆t> X \ − ∆t ∆t> X \ ,
p
F
|
{z
}
(128)
:=θ3
where the last inequality holds due to the triangle inequality as well as the fact that kAk ≤ kAkF . In
the sequel, we shall bound the three terms separately.
• For the first term θ1 in (128), the lth row of p1 PΩ ∆t X \> X \ − ∆t X \> X \ is given by
n
n
X
1X
1
\
\
\>
\>
(δl,j − p) ∆tl,· Xj,·
Xj,·
= ∆tl,·
(δl,j − p) Xj,·
Xj,·
p j=1
p j=1
57
where, as usual, δl,j = 1{(l,j)∈Ω} . Lemma 41 together with the union bound reveals that
q
n
1
1X
2
2
\
\>
\ 2
\
\
(δl,j − p) Xj,· Xj,· .
p kX k2,∞ kX k log n + X 2,∞ log n
p j=1
p
s
kX \ k22,∞ σmax log n kX \ k22,∞ log n
+
p
p
for all 1 ≤ l ≤ n with high probability. This gives
n
X
1X
1
\>
\
\>
\
∆tl,·
≤ ∆tl,· 2
(δl,j − p) Xj,·
Xj,·
(δl,j − p) Xj,·
Xj,·
p j=1
p j
2
s
kX \ k2 σ
\ 2
log
n
kX
k
log
n
max
2,∞
2,∞
. ∆tl,· 2
+
,
p
p
which further reveals that
v
u
uX
n
u
1X
\>
\
(δl,j − p) ∆tl,· Xj,·
Xj,·
θ1 = t
p j
l=1
2
. ∆t
2
F
s
kX \ k2
s
kX \ k2
\ 2
σ
log
n
kX k2,∞ log n
2,∞ max
+
p
p
\ 2
rkX
k
log
n
2,∞
+
. ∆t
p
p
(s
)
(ii)
κµr2 log n κµr3/2 log n
t
. ∆
+
σmax
np
np
(i)
2,∞ rσmax
log n
√
(iii)
≤ γσmin ∆t ,
√
for arbitrarily small γ > 0. Here, (i) follows from k∆t kF ≤ r k∆t k, (ii) holds owing to the incoherence condition (114), and (iii) follows as long as n2 p κ3 µr2 n log n.
• For the second term θ2 in (128), denote
A = PΩ X \ ∆t> X \ − p X \ ∆t> X \ ,
whose lth row is given by
Al,· =
\
Xl,·
n
X
j=1
\
(δl,j − p) ∆t>
j,· Xj,· .
Recalling the induction hypotheses (28b) and (28c), we define
s
s
log
n
σ
n log n
X \ 2,∞ + C8
X\
∆t 2,∞ ≤ C5 ρt µr
np
σmin
p
r
n
σ
1
∆t ≤ C9 ρt µr √
X \ + C10
X \ := ψ.
np
σmin p
(129)
2,∞
:= ξ
(130)
(131)
With these two definitions in place, we now introduce a “truncation level”
ω := 2pξσmax
58
(132)
that allows us to bound θ2 in terms of the following two terms
v
v
v
u n
u n
u n
X
X
u
uX
1u
1
1
2
2
2
θ2 = t
kAl,· k2 ≤ t
kAl,· k2 1{kAl,· k ≤ω} + t
kAl,· k2 1{kAl,· k ≥ω} .
2
2
p
p
p
l=1
l=1
l=1
{z
}
{z
}
|
|
:=φ1
:=φ2
We will apply different strategies when upper bounding the terms φ1 and φ2 , with their bounds given
in the following two lemmas under the induction hypotheses (28b) and (28c).
Lemma 22. Under the conditions in Lemma 9, there exist some constants c, C > 0 such that with
probability exceeding 1 − c exp(−Cnr log n),
q
(133)
φ1 . ξ pσmax kX \ k22,∞ nr log2 n
holds simultaneously for all ∆t obeying (130) and (131). Here, ξ is defined in (130).
Lemma 23. Under the conditions in Lemma 9, with probability at least 1 − O n−10 ,
q
2
φ2 . ξ κµr2 p log2 n X \
(134)
holds simultaneously for all ∆t obeying (130) and (131). Here, ξ is defined in (130).
The bounds (133) and (134) together with the incoherence condition (114) yield
s
q
q
κµr2 log2 n
1
1
2
ξσmax .
θ2 . ξ pσmax kX \ k22,∞ nr log2 n + ξ κµr2 p log2 n X \ .
p
p
p
• Next, we assert that the third term θ3 in (128) has the same upper bound as θ2 . The proof follows
by repeating the same argument used in bounding θ2 , and is hence omitted.
Take the previous three bounds on θ1 , θ2 and θ3 together to arrive at
s
κµr2 log2 n
t
e
β2 ≤ η (|θ1 | + |θ2 | + |θ3 |) ≤ ηγσmin ∆ + Cη
ξσmax
p
e > 0.
for some constant C
(c) Substituting the preceding bounds on β1 and β2 into (123), we reach
(i)
1
α2 ≤ 1 − ησmin + ηγσmin + η ∆t X \
PΩ (E) X \
∆t + η
p
s
s
s
2 log2 n
κµr
log
n
σ
n log n
e
+ Cη
σmax C5 ρt µr
X \ 2,∞ + C8
X\
p
np
σmin
p
(ii)
σmin
1
≤ 1−
η ∆t + η
PΩ (E) X \
2
p
s
s
s
2 log2 n
κµr
log
n
σ
n log n
t
\
e
+ Cη
σmax C5 ρ µr
X 2,∞ + C8
X\
p
np
σmin
p
r
(iii)
σmin
n
≤ 1−
η ∆t + Cησ
X\
2
p
s
r
r
κ2 µ2 r3 log3 n
1
σ
n
t
e
σmax C5 ρ µr
+ C8
X\
+ Cη
np
np
σmin p
2,∞
2,∞
!
!
(135)
for some constant C > 0. Here, (i) uses the definition of ξ (cf. (130)), (ii) holds if γ is small enough and
k∆t k X \ σmin , and (iii) follows from Lemma 40 as well as the incoherence condition (114). An
59
immediate consequence of (135) is that under the sample size condition and the noise condition of this
lemma, one has
ft+1 − X \ X \ ≤ σmin /2
X
(136)
if 0 < η ≤ 1/σmax .
2. We then move on to the first term α1 in (121), which can be rewritten as
ct R1 − X
ft+1 ,
α1 = X t+1 H
with
ct
R1 = H
−1
ft+1 satisfies
(a) First, we claim that X
ct+1 := arg min
H
R∈O r×r
ct R − X \
X t+1 H
ft+1 R − X \
Ir = arg min
X
r×r
R∈O
F
F
.
(137)
(138)
,
ft+1 is already rotated to the direction that is most “aligned” with X \ . This important
meaning that X
ft+1 is
property eases the analysis. In fact, in view of Lemma 35, (138) follows if one can show that X \> X
ct is symmetric
symmetric and positive semidefinite. First of all, it follows from Lemma 35 that X \> X t H
and, hence, by definition,
ft+1 = X \> X t H
ct − η X \> PΩ X t X t> − M \ + E X \
X \> X
p
is also symmetric. Additionally,
ft+1 − M \ ≤ X
ft+1 − X \
X \> X
X \ ≤ σmin /2,
where the second inequality holds according to (136). Weyl’s inequality guarantees that
ft+1
X \> X
thus justifying (138) via Lemma 35.
1
σmin Ir ,
2
(b) With (137) and (138) in place, we resort to Lemma 37 to establish the bound. Specifically, take
ft+1 and X2 = X t+1 H
ct , and it comes from (136) that
X1 = X
X1 − X \
Moreover, we have
in which
ct − X
ft+1
kX1 − X2 k X \ = X t+1 H
X\ ,
1
ct
X t − η PΩ X t X t> − M \ + E X t H
p
ct − η 1 PΩ X t X t> − M \ + E X \
− X tH
p
t t>
t t
1
c − X\ .
X H
= −η PΩ X X − M \ + E
p
ct − X
ft+1 =
X t+1 H
This allows one to derive
X \ ≤ σmin /2.
ct − X \ + η 1 PΩ (E) X t H
ct − X
ft+1 ≤ η 1 PΩ X t X t> − M \ X t H
ct − X \
X t+1 H
p
p
60
≤ η 2n ∆t
2
2,∞
√
+ 4 n log n ∆t
2,∞
X \ + Cσ
r
n
∆t
p
(139)
for some absolute constant C > 0. Here the last inequality follows from Lemma 40 and Lemma 43. As
a consequence,
r
√
n
\
t 2
t
\
kX1 − X2 k X ≤ η 2n ∆ 2,∞ + 4 n log n ∆ 2,∞ X + Cσ
∆t X \ .
p
Under our sample size condition and the noise condition (27) and the induction hypotheses (28), one
can show
kX1 − X2 k X \ ≤ σmin /4.
Apply Lemma 37 and (139) to reach
ct − X
ft+1
α1 ≤ 5κ X t+1 H
√
2
≤ 5κη 2n ∆t 2,∞ + 2 n log n ∆t
2,∞
X \ + Cσ
r
n
∆t .
p
3. Combining the above bounds on α1 and α2 , we arrive at
r
ct+1 − X \ ≤ 1 − σmin η ∆t + ηCσ n X \
X t+1 H
2
p
s
r
r
C8
κ2 µ2 r3 log3 n
1
n
t
e
σmax C5 ρ µr
+
X\
+ Cη
σ
np
np σmin
p
r
√
n
t 2
t
\
+ 5ηκ 2n ∆ 2,∞ + 2 n log n ∆ 2,∞ X + Cσ
∆t
p
r
σ
1
n
\
t+1
X + C10
X\ ,
≤ C9 ρ µr √
np
σmin p
with the proviso that ρ ≥ 1 − (σmin /3) · η, κ is a constant, and n2 p µ3 r3 n log3 n.
B.3.1
Proof of Lemma 22
In what follows, we first assume that the δj,k ’s are independent, and then use the standard decoupling trick
to extend the result to symmetric sampling case (i.e. δj,k = δk,j ).
To begin with, we justify the concentration bound for any ∆t independent of Ω, followed by the standard
covering argument that extends the bound to all ∆t . For any ∆t independent of Ω, one has
2
and
\
\
B := max Xl,·
(δl,j − p) ∆t>
≤ X \ 2,∞ ξ
j,· Xj,·
1≤j≤n
2
n
>
X
2
\
\
\
\
t>
V := E
(δl,j − p) Xl,· ∆t>
j,· Xj,· Xl,· ∆j,· Xj,·
j=1
≤p
\
Xl,·
\
≤ p Xl,·
≤ 2p
2
2
2
n
X
2
X \ 2,∞
t
∆t>
j,· ∆j,·
j=1
X\
2
\ 2
X 2,∞
2
2,∞
ψ2
ξ 2 σmax ,
where ξ and ψ are defined respectively in (130) and (131). Here, the last line makes use of the fact that
√
X \ 2,∞ ψ ξ X \ = ξ σmax ,
(140)
61
as long as n is sufficiently large. Apply the matrix Bernstein inequality [Tro15b, Theorem 6.1.1] to get
!
ct2
P kAl,· k2 ≥ t ≤ 2r exp −
2
2
2pξ 2 σmax kX \ k2,∞ + t · kX \ k2,∞ ξ
!
ct2
≤ 2r exp −
2
4pξ 2 σmax kX \ k2,∞
for some constant c > 0, provided that
t ≤ 2pσmax ξ.
This upper bound on t is exactly the truncation level ω we introduce in (132). With this in mind, we can
easily verify that
kAl,· k2 1{kAl,· k ≤ω}
2
2
is a sub-Gaussian random variable with variance proxy not exceeding O pξ 2 σmax X \ 2,∞ log r . Therefore, invoking the concentration bounds for quadratic functions [HKZ12, Theorem 2.1] yields that for some
constants C0 , C > 0, with probability at least 1 − C0 e−Cnr log n ,
φ21 =
n
X
l=1
2
kAl,· k2 1{kAl,· k
2
≤ω }
. pξ 2 σmax kX \ k22,∞ nr log2 n.
Now that we have established an upper bound on any fixed matrix ∆t (which holds with exponentially
high probability), we can proceed to invoke the standard epsilon-net argument to establish a uniform bound
over all feasible ∆t . This argument is fairly standard, and is thus omitted; see [Tao12, Section 2.3.1] or the
1
proof of Lemma 42. In conclusion, we have that with probability exceeding 1 − C0 e− 2 Cnr log n ,
v
u n
q
uX
2
kAl,· k2 1{kAl,· k ≤ω} . pξ 2 σmax kX \ k22,∞ nr log2 n
φ1 = t
(141)
2
l=1
holds simultaneously for all ∆t ∈ Rn×r obeying the conditions of the lemma.
In the end, we comment on how to extend the bound to the symmetric sampling pattern where δj,k = δk,j .
2
Recall from (129) that the diagonal element δl,l cannot change the `2 norm of Al,· by more than X \ 2,∞ ξ.
As a result, changing all the diagonals {δl,l } cannot change the quantity of interest (i.e. φ1 ) by more than
√
2
n X \ 2,∞ ξ. This is smaller than the right hand side of (141) under our incoherence and sample size
conditions. Hence from now on we ignore the effect of {δl,l } and focus on off-diagonal terms. The proof
then follows from the same argument as in [GLM16, Theorem D.2]. More specifically, we can employ the
construction of Bernoulli random variables introduced therein to demonstrate that the upper bound in (141)
0
0
still holds if the indicator δi,j is replaced by (τi,j + τi,j
)/2, where τi,j and τi,j
are independent copies of
the symmetric Bernoulli random variables. Recognizing that sup∆t φ1 is a norm of the Bernoulli random
variables τi,j , one can repeat the decoupling argument in [GLM16, Claim D.3] to finish the proof. We omit
the details here for brevity.
B.3.2
Proof of Lemma 23
Observe from (129) that
kAl,· k2 ≤ X
\
≤ X\
2,∞
n
X
j=1
2,∞
\
(δl,j − p) ∆t>
j,· Xj,·
n
X
\
t
δl,j ∆t>
j,· Xj,· + p ∆
j=1
62
(142)
X\
\
δl,1 X1,·
..
\
t>
≤ X \ 2,∞ δl,1 ∆t>
+ pψ X
1,· , · · · , δl,n ∆n,·
.
\
δl,n Xn,·
√
≤ X \ 2,∞ Gl ∆t · 1.2 p X \ + pψ X \ ,
(143)
where ψ is as defined in (131) and Gl (·) is as defined in Lemma 41. Here, the last inequality follows from
Lemma 41, namely, for some constant C > 0, the following holds with probability at least 1 − O(n−10 )
\
δl,1 X1,·
12
q
2
..
\ 2
\ 2
2
\
\
+ C pkX k2,∞ kX k log n + CkX k2,∞ log n
≤ p X
.
\
δl,n Xn,·
r
1
κµr
κµr log n 2
√
(144)
≤ p+C p
log n + C
X \ ≤ 1.2 p X \ ,
n
n
where we also use the incoherence condition (114) and the sample complexity condition n2 p κµrn log n.
Hence, the event
kAl,· k2 ≥ ω = 2pσmax ξ
together with (142) and (143) necessarily implies that
n
X
j=1
t
Gl ∆
≥
\
(δl,j − p) ∆t>
j,· Xj,· ≥ 2pσmax
2pσmax ξ
kX \ kkX \ k2,∞
√
1.2 p
− pψ
≥
√
2 pkX \ kξ
kX \ k2,∞
−
ξ
kX \ k2,∞
√
pψ
1.2
and
√
≥ 1.5 p
ξ
X\ ,
kX \ k2,∞
where the last inequality follows from the bound (140). As a result, with probability at least 1 − O(n−10 )
(i.e. when (144) holds for all l’s) we can upper bound φ2 by
v
v
u n
uX
uX
u n
2
2
,
√ √
kAl,· k2 1{kAl,· k ≥ω} ≤ t
kAl,· k2 1
φ2 = t
1.5 pξ σmax
t
2
l=1
kGl (∆ )k≥
l=1
kX \ k2,∞
where the indicator functions are now specified with respect to kGl (∆t )k.
Next, we divide into multiple cases based on the size of kGl (∆t )k. By Lemma 42, for some constants
c1 , c2 > 0, with probability at least 1 − c1 exp (−c2 nr log n),
n
X
l=1
αn
1{kGl (∆t )k≥4√pψ+√2k rξ} ≤ k−3
2
(145)
for any k ≥ 0 and any α & log n. We claim that it suffices to consider the set of sufficiently large k obeying
√
√
2k rξ ≥ 4 pψ
or equivalently k ≥ log
16pψ 2
;
rξ 2
(146)
otherwise we can use (140) to obtain
√
√
√
√
4 pψ + 2k rξ ≤ 8 pψ 1.5 p
ξ
kX \ k2,∞
X\ ,
which contradicts the event kAl,· k2 ≥ ω. Consequently, we divide all indices into the following sets
n
√
√
o
Sk = 1 ≤ l ≤ n : Gl ∆t ∈
2k rξ, 2k+1 rξ
63
(147)
defined for each integer k obeying (146). Under the condition (146), it follows from (145) that
n
X
l=1
1{kGl (∆t )k≥√2k+2 rξ} ≤
n
X
l=1
αn
1{kGl (∆t )k≥4√pψ+√2k rξ} ≤ k−3 ,
2
meaning that the cardinality of Sk satisfies
|Sk+2 | ≤
αn
2k−3
or
|Sk | ≤
αn
2k−5
which decays exponentially fast as k increases. Therefore, when restricting attention to the set of indices
within Sk , we can obtain
r
sX
√
2
(i)
√
2
2
kAl,· k2 ≤ |Sk | · kX \ k2,∞ 1.2 2k+1 rξ p kX \ k + pψ kX \ k
l∈Sk
r
√
αn
√
\
k+1 rξ p X \ + pψ X \
2
X
2
2,∞
2k−5
r
√
(ii)
αn
√
≤ 4
X \ 2,∞ 2k+1 rξ p X \
k−5
2
p
(iii)
2
≤ 32 ακµr2 pξ X \ ,
≤
where (i) follows from the bound (143) and the constraint (147) in Sk , (ii) is a consequence of (146) and (iii)
uses the incoherence condition (114).
Now that we have developed an upper bound with respect to each Sk , we can add them up to yield the
final upper bound. Note that there are in total no more than O (log n) different sets, i.e. Sk = ∅ if k ≥ c1 log n
for c1 sufficiently large. This arises since
√
√
√ √
kGl (∆t )k ≤ k∆t kF ≤ nk∆t k2,∞ ≤ nξ ≤ n rξ
and hence
1{kGl (∆t )k≥4√pψ+√2k rξ} = 0
and
Sk = ∅
if k/ log n is sufficiently large. One can thus conclude that
φ22 ≤
c1X
log n
k=log
16pψ 2
rξ2
p
leading to φ2 . ξ ακµr2 p log n X \
large constant c > 0.
B.4
2
X
l∈Sk
2
kAl,· k2 .
p
ακµr2 pξ X \
2 2
· log n,
. The proof is finished by taking α = c log n for some sufficiently
Proof of Lemma 10
ct and X2 = X t,(l) Rt,(l) , we get
1. To obtain (73a), we invoke Lemma 37. Setting X1 = X t H
s
(i)
(ii) 1
1
C10
n log n
\
\
t
X1 − X
X ≤ C9 ρ µr √ σmax +
σ
σmax ≤ σmin ,
np
σmin
p
2
where (i) follows from (70c) and (ii) holds as long as n2 p κ2 µ2 r2 n and the noise satisfies (27). In
addition,
kX1 − X2 k X \ ≤ kX1 − X2 kF X \
s
(i)
log n
≤ C3 ρt µr
X\
np
64
C7
+
σ
2,∞
σmin
s
n log n
X\
p
2,∞
!
X\
(ii)
t
≤ C3 ρ µr
(iii)
≤
s
log n
C7
σmax +
σ
np
σmin
s
n log n
σmax
p
1
σmin ,
2
where (i) utilizes (70d), (ii) follows since X \ 2,∞ ≤ X \ , and (iii) holds if n2 p κ2 µ2 r2 n log n and the
noise satisfies (27). With these in place, Lemma 37 immediately yields (73a).
ct,(l) . The second inequality is con2. The first inequality in (73b) follows directly from the definition of H
t,(l) t,(l)
cerned with the estimation error of X
R
with respect to the Frobenius norm. Combining (70a),
(70d) and the triangle inequality yields
X t,(l) Rt,(l) − X \
F
ct − X \
≤ X tH
1
≤ C4 ρt µr √
X\
np
+
F
1
≤ C4 ρ µr √
X\
np
C1 σ
+
F
σmin
t
1
X\
≤ 2C4 ρt µr √
np
F
C1 σ
σmin
+
r
r
2C1 σ
σmin
n
p
n
p
r
ct − X t,(l) Rt,(l)
+ X tH
F
s
s
C7 σ n log n
log
n
t
\
\
X F + C3 ρ µr
X 2,∞ +
X \ 2,∞
np
σmin
p
s
s
r
r
log
n
κµ
C
σ
n log n κµ
7
\
t
\
X F + C3 ρ µr
X F+
X\
np
n
σmin
p
n
F
n
X\
p
F
F
(148)
,
where the last step holds true as long as n κµ log n.
3. To obtain (73c), we use (70d) and (70b) to get
ct − X \
ct − X t,(l) Rt,(l)
X t,(l) Rt,(l) − X \
≤ X tH
+ X tH
2,∞
2,∞
F
s
s
s
C8 σ n log n
log n
log n
≤ C5 ρt µr
X \ 2,∞ +
X \ 2,∞ + C3 ρt µr
X\
np
σmin
p
np
s
s
C8 + C7
log n
n log n
t
\
≤ (C3 + C5 ) ρ µr
X 2,∞ +
X \ 2,∞ .
σ
np
σmin
p
C7 σ
+
2,∞
σmin
s
n log n
X\
p
4. Finally, to obtain (73d), one can take the triangle inequality
ct,(l) − X \ ≤ X t,(l) H
ct,(l) − X t H
ct
X t,(l) H
F
ct − X t,(l) Rt,(l)
≤ 5κ X t H
ct − X \
+ X tH
F
ct − X \ ,
+ X tH
where the second line follows from (73a). Combine (70d) and (70c) to yield
ct,(l) − X \
X t,(l) H
s
s
!
r
C
C10
n log n
1
n
7
\
t
\
t
+
σ
X
X
+
σ
X\
+
C
ρ
µr
≤ 5κ C3 ρ µr
√
9
2,∞
2,∞
σmin
p
np
σmin
p
s
s
!
r
r
κµr
log
n
C
n
log
n
1
C10 σ n
7
\
t
t
\
≤ 5κ
X
C3 ρ µr
+
σ
+ C9 ρ µr √
X +
X\
n
np
σmin
p
np
σmin p
r
1
2C10 σ n
≤ 2C9 ρt µr √
X\ +
X\ ,
np
σmin
p
log n
X\
np
where the second inequality uses the incoherence of X \ (cf. (114)) and the last inequality holds as long
as n κ3 µr log n.
65
2,∞
B.5
Proof of Lemma 11
From the definition of Rt+1,(l) (see (72)), we must have
ct+1 − X t+1,(l) Rt+1,(l)
X t+1 H
F
ct − X t+1,(l) Rt,(l)
≤ X t+1 H
F
.
The gradient update rules in (24) and (69) allow one to express
t h t,(l)
i
ct − X t+1,(l) Rt,(l) = X t − η∇f X t H
c − X
X t+1 H
− η∇f (l) X t,(l) Rt,(l)
h
i
ct − η∇f X t H
ct − X t,(l) Rt,(l) − η∇f (l) X t,(l) Rt,(l)
= X tH
h
i
ct − X t,(l) Rt,(l) − η ∇f (X t H
ct ) − ∇f X t,(l) Rt,(l) )
= X tH
h
i
− η ∇f X t,(l) Rt,(l) − ∇f (l) X t,(l) Rt,(l) ,
where we have again used the fact that ∇f (X t ) R = ∇f (X t R) for any orthonormal matrix R ∈ Or×r
(similarly for ∇f (l) X t,(l) ). Relate the right-hand side of the above equation with ∇fclean (X) to reach
h
i
ct − X t+1,(l) Rt,(l) = X t H
ct − X t,(l) Rt,(l) − η ∇fclean X t H
ct − ∇fclean X t,(l) Rt,(l)
X t+1 H
|
{z
}
(l)
:=B1
1
t,(l)
t,(l)>
\
t,(l)
t,(l)>
\
− η PΩl X
X
− M − Pl X
X
−M
X t,(l) Rt,(l)
p
{z
}
|
(l)
:=B2
1
ct − X t,(l) Rt,(l) + η 1 PΩ (E) X t,(l) Rt,(l) ,
+ η PΩ (E) X t H
p
p l
|
{z
} |
{z
}
(l)
(149)
(l)
:=B3
:=B4
where we have used the following relationship between ∇f (l) (X) and ∇f (X):
1
∇f (l) (X) = ∇f (X) − PΩl XX > − M \ + E X + Pl XX > − M \ X
p
(150)
for all X ∈ Rn×r with PΩl and Pl defined respectively in (66) and (67). In the sequel, we control the four
terms in reverse order.
(l)
1. The last term B4 is controlled via the following lemma.
Lemma 24. Suppose that the sample size obeys n2 p > Cµ2 r2 n log2 n for some sufficiently large constant
(l)
C > 0. Then with probability at least 1 − O n−10 , the matrix B4 as defined in (149) satisfies
(l)
B4
F
. ησ
s
n log n
X\
p
2,∞
.
(l)
2. The third term B3 can be bounded as follows
(l)
B3
F
1
≤η
PΩ (E)
p
t ct
X H −X
t,(l)
R
t,(l)
where the second inequality comes from Lemma 40.
(l)
3. For the second term B2 , we have the following lemma.
66
F
. ησ
r
n
ct − X t,(l) Rt,(l)
X tH
p
F
,
Lemma 25. Suppose that the sample size obeys n2 p µ2 r2 n log n. Then with probability exceeding
(l)
1 − O n−10 , the matrix B2 as defined in (149) satisfies
(l)
B2
F
.η
s
κ2 µ2 r2 log n
X t,(l) Rt,(l) − X \
np
2,∞
(151)
σmax .
(l)
4. Regarding the first term B1 , apply the fundamental theorem of calculus [Lan93, Chapter XIII, Theorem
4.2] to get
Z 1
(l)
2
ct − X t,(l) Rt,(l) ,
(152)
vec B1 = Inr − η
∇ fclean (X(τ )) dτ vec X t H
0
ct − X t,(l) Rt,(l) . Going through
where we abuse the notation and denote X(τ ) := X t,(l) Rt,(l) + τ X t H
the same derivations as in the proof of Lemma 8 (see Appendix B.2), we get
σmin
(l)
ct − X t,(l) Rt,(l)
(153)
η X tH
B1 F ≤ 1 −
4
F
2
).
with the proviso that 0 < η ≤ (2σmin )/(25σmax
Applying the triangle inequality to (149) and invoking the preceding four bounds, we arrive at
ct+1 − X t+1,(l) Rt+1,(l)
X t+1 H
F
s
σmin
κ2 µ2 r2 log n
ct − X t,(l) Rt,(l) + Cη
e
≤ 1−
η X tH
X t,(l) Rt,(l) − X \
σmax
4
np
F
2,∞
s
r
n
n log n
t
t
t,(l)
t,(l)
c −X
e
e
X \ 2,∞
X H
R
+ Cησ
+ Cησ
p
p
F
s
r
σmin
n
κ2 µ2 r2 log n
ct − X t,(l) Rt,(l) + Cη
e
e
= 1−
η + Cησ
X tH
X t,(l) Rt,(l) − X \
4
p
np
F
s
n log n
e
X \ 2,∞
+ Cησ
p
s
2σmin
κ2 µ2 r2 log n
ct − X t,(l) Rt,(l) + Cη
e
≤ 1−
η X tH
X t,(l) Rt,(l) − X \
σmax
9
np
F
2,∞
s
n log n
e
X \ 2,∞
+ Cησ
p
2,∞
σmax
p
e > 0. Here the last inequality holds as long as σ n/p σmin , which is satisfied
for some absolute constant C
under our noise condition (27). This taken collectively with the hypotheses (70d) and (73c) leads to
ct+1 − X t+1,(l) Rt+1,(l)
X t+1 H
sF
s
!
log n
n log n
2σmin
σ
t
\
\
η
C3 ρ µr
≤ 1−
X 2,∞ + C7
X 2,∞
9
np
σmin
p
s
s
s
"
#
κ2 µ2 r2 log n
log n
σ
n log n
t
e
+ Cη
(C3 + C5 ) ρ µr
+ (C8 + C7 )
X\
np
np
σmin
p
s
n log n
e
+ Cησ
X \ 2,∞
p
67
2,∞
σmax
σmin
≤ 1−
η C3 ρt µr
5
s
log n
X\
np
2,∞
+ C7
σ
σmin
s
n log n
X\
p
2,∞
as long as C7 > 0 is sufficiently large, where we have used the sample complexity assumption n2 p
κ4 µ2 r2 n log n and the step size 0 < η ≤ 1/(2σmax ) ≤ 1/(2σmin ). This finishes the proof.
B.5.1
Proof of Lemma 24
By the unitary invariance of the Frobenius norm, one has
(l)
B4
F
=
η
PΩl (E) X t,(l)
p
F
,
where all nonzero entries of the matrix PΩl (E) reside in the lth row/column. Decouple the effects of the lth
row and the lth column of PΩl (E) to reach
p
(l)
B4
η
F
≤
n
X
t,(l)
δl,j El,j Xj,·
{z
}
j=1 |
:=uj
+
|
2
X
t,(l)
δl,j El,j Xl,·
j:j6=l
(154)
,
}2
{z
:=α
where δl,j := 1{(l,j)∈Ω} indicates whether the (l, j)-th entry is observed. Since X t,(l) is independent of
{δl,j }1≤j≤n and {El,j }1≤j≤n , we can treat the first term as a sum of independent vectors {uj }. It is easy to
verify that
,
≤ X t,(l)
kδl,j El,j kψ1 . σ X t,(l)
kuj k2
2,∞
2,∞
ψ1
where k · kψ1 denotes the sub-exponential norm [KLT11, Section 6]. Further, one can calculate
n
n
X
X
t,(l)
t,(l)>
t,(l)
t,(l)>
2
Xj,· Xj,· = pσ 2 X t,(l)
V := E
(δl,j El,j ) Xj,· Xj,· . pσ 2 E
j=1
j=1
2
F
.
Invoke the matrix
Bernstein inequality [KLT11, Proposition 2] to discover that with probability at least
1 − O n−10 ,
n
X
j=1
.
uj
2
.
p
V log n + kuj k
q
ψ1
log2 n
2
pσ 2 X t,(l) F log n + σ X t,(l)
log2 n
2,∞
p
+ σ X t,(l)
log2 n
. σ np log n X t,(l)
2,∞
2,∞
p
. σ np log n X t,(l)
,
2,∞
where the third inequality follows from X t,(l)
2
2
F
≤ n X t,(l)
2
,
2,∞
and the last inequality holds as long as
np log n.
Additionally, the remaining term α in (154) can be controlled using the same argument, giving rise to
p
α . σ np log n X t,(l) 2,∞ .
We then complete the proof by observing that
X t,(l)
2,∞
= X t,(l) Rt,(l)
2,∞
≤ X t,(l) Rt,(l) − X \
2,∞
+ X\
2,∞
≤ 2 X\
2,∞
,
(155)
where the last inequality follows by combining (73c), the sample complexity condition n2 p µ2 r2 n log n,
and the noise condition (27).
68
B.5.2
Proof of Lemma 25
For notational simplicity, we denote
C := X t,(l) X t,(l)> − M \ = X t,(l) X t,(l)> − X \ X \> .
(156)
Since the Frobenius norm is unitarily invariant, we have
(l)
B2
F
1
PΩl (C) − Pl (C) X t,(l)
p
|
{z
}
=η
.
F
:=W
Again, all nonzero entries of the matrix W reside in its lth row/column. We can deal with the lth row and
the lth column of W separately as follows
p
(l)
B2
η
≤
F
n
X
(δl,j − p) Cl,j Xj,·
n
X
(δl,j − p) Cl,j Xj,·
j=1
.
j=1
t,(l)
+
+
√
(δl,j − p) kCk∞ Xl,·
t,(l)
np kCk∞ Xl,·
t,(l)
(δl,j − p) Cl,j Xj,·
1≤j≤n
V :=
2
2
,
2
where δl,j := 1{(l,j)∈Ω} and the second line relies on the fact that
L := max
t,(l)
2
j:j6=l
2
t,(l)
sX
2
≤ kCk∞ X t,(l)
P
2
j:j6=l
(δl,j − p) np. It follows that
(i)
2,∞
≤ 2 kCk∞ X \
2,∞
,
n
n
X
X
t,(l)
t,(l)>
t,(l)
t,(l)>
2 2
E (δl,j − p) Cl,j
Xj,· Xj,·
≤ pkCk2∞
Xj,· Xj,·
j=1
j=1
2 (ii)
2
= p kCk∞ X t,(l)
F
2
≤ 4p kCk∞ X \
2
F
.
Here, (i) is a consequence of (155). In addition, (ii) follows from
X t,(l)
F
= X t,(l) Rt,(l)
F
≤ X t,(l) Rt,(l) − X \
F
+ X\
F
≤ 2 X\
F
,
where the last inequality comes from (73b), the sample complexity condition n2 p µ2 r2 n log n, and the
noise condition (27). The matrix Bernstein inequality [Tro15b, Theorem 6.1.1] reveals that
n
X
j=1
t,(l)
.
(δl,j − p) Cl,j Xj,·
2
p
V log n + L log n .
q
2
2
p kCk∞ kX \ kF log n + kCk∞ X \
with probability exceeding 1 − O n−10 , and as a result,
p
(l)
B2
η
F
.
p
p log n kCk∞ X \
F
+
√
np kCk∞ X \
2,∞
log n
(157)
2,∞
as soon as np log n.
To finish up, we make the observation that
>
kCk∞ = X t,(l) Rt,(l) X t,(l) Rt,(l)
− X \ X \>
≤
X
t,(l)
R
t,(l)
>
−X
X t,(l) Rt,(l)
\
69
∞
∞
>
+ X \ X t,(l) Rt,(l) − X \
− X \ X \>
∞
≤ X t,(l) Rt,(l) − X \
≤3 X
t,(l)
R
t,(l)
−X
X t,(l) Rt,(l)
2,∞
\
2,∞
X
\
2,∞
2,∞
+ X\
2,∞
X t,(l) Rt,(l) − X \
2,∞
(158)
,
where the last line arises from (155). This combined with (157) gives
s
r
log n
n
(l)
\
B2
.η
kCk∞ X F + η
kCk∞ X \ 2,∞
p
p
F
s
r
(i)
log n
n
2
X \ 2,∞ X \ F + η
X \ 2,∞
X t,(l) Rt,(l) − X \
X t,(l) Rt,(l) − X \
.η
p
p
2,∞
2,∞
s
r
r
(ii)
κµr
log n
κµr2
n
t,(l) t,(l)
\
. η
X
X t,(l) Rt,(l) − X \
R
−X
σmax + η
σmax
p
n
p
2,∞
2,∞ n
s
κ2 µ2 r2 log n
σmax ,
X t,(l) Rt,(l) − X \
.η
np
2,∞
where (i) comes from (158), and (ii) makes use of the incoherence condition (114).
B.6
Proof of Lemma 12
We first introduce an auxiliary matrix
h
i
ft+1,(l) := X t,(l) H
ct,(l) − η 1 PΩ−l X t,(l) X t,(l)> − M \ + E + Pl X t,(l) X t,(l)> − M \ X \ .
X
p
(159)
With this in place, we can use the triangle inequality to obtain
ct+1,(l) − X \
X t+1,(l) H
l,· 2
≤
ft+1,(l) − X \
ct+1,(l) − X
ft+1,(l)
+ X
.
X t+1,(l) H
l,· 2
l,· 2
{z
} |
{z
}
|
:=α1
(160)
:=α2
In what follows, we bound the two terms α1 and α2 separately.
ft+1,(l) (see (159)) that
1. Regarding the second term α2 of (160), we see from the definition of X
h
i
ft+1,(l) − X \
ct,(l) − η X t,(l) X t,(l)> − X \ X \> X \ − X \ ,
X
= X t,(l) H
l,·
(161)
l,·
where we also utilize the definitions of PΩ−l and Pl in (67). For notational convenience, we denote
ct,(l) − X \ .
∆t,(l) := X t,(l) H
(162)
This allows us to rewrite (161) as
h
i
t,(l)
ft+1,(l) − X \
X
= ∆l,· − η ∆t,(l) X \> + X \ ∆t,(l)> X \
l,·
t,(l)
l,·
t,(l)
h
i
− η ∆t,(l) ∆t,(l)> X \
l,·
t,(l)
\
= ∆l,· − η∆l,· Σ\ − ηXl,·
∆t,(l)> X \ − η∆l,· ∆t,(l)> X \ ,
which further implies that
t,(l)
t,(l)
α2 ≤ ∆l,· − η∆l,· Σ\
t,(l)
≤ ∆l,·
\
+ η Xl,·
∆t,(l)> X \
2
Ir − ηΣ\ + η X \
2
Ir − ηΣ\ + 2η X \
t,(l)
≤ ∆l,·
2
2,∞
2,∞
70
t,(l)
2
∆t,(l)
∆t,(l)
+ η ∆l,· ∆t,(l)> X \
t,(l)
X \ + η ∆l,·
X\ .
2
2
∆t,(l)
X\
t,(l)
Here, the last line follows from the fact that ∆l,·
2
hypothesis (70e) to get
t,(l)
∆l,·
1
≤ C2 ρ µr √
X\
np
t
2
2,∞
+ C6
≤ X\
σ
σmin
2,∞
s
. To see this, one can use the induction
n log n
X\
p
2,∞
X\
2,∞
(163)
p
as long as np µ2 r2 and σ (n log n) /p σmin . By taking 0 < η ≤ 1/σmax , we have 0 Ir − ηΣ\
(1 − ησmin ) Ir , and hence can obtain
t,(l)
α2 ≤ (1 − ησmin ) ∆l,·
2
+ 2η X \
2,∞
X\ .
∆t,(l)
(164)
An immediate consequence of the above two inequalities and (73d) is
α2 ≤ kX \ k2,∞ .
(165)
2. The first term α1 of (160) can be equivalently written as
α1 =
where
ct,(l)
R1 = H
Simple algebra yields
α1 ≤
≤
−1
ct,(l) R1 − X
ft+1,(l)
X t+1,(l) H
ct+1,(l) := arg min
H
R∈O r×r
ct,(l) − X
ft+1,(l)
X t+1,(l) H
ct,(l) − X
ft+1,(l)
X t+1,(l) H
|
{z
:=β1
Here, to bound the the second term we have used
ft+1,(l)
X
l,·
2
ft+1,(l) − X \
≤ X
l,·
l,·
2
l,·
2
ft+1,(l)
+ X
l,·
+2 X \
}
\
+ Xl,·
l,· 2
,
ct,(l) R − X \
X t+1,(l) H
R1
l,· 2
2,∞
2
F
,
kR1 − Ir k
kR1 − Ir k .
| {z }
:=β2
2
\
= α2 + Xl,·
2
≤ 2 X\
2,∞
,
where the last inequality follows from (165). It remains to upper bound β1 and β2 . For both β1 and β2 ,
ct,(l) − X
ft+1,(l) . By the definition of X
ft+1,(l) in (159) and the
a central quantity to control is X t+1,(l) H
gradient update rule for X t+1,(l) (see (69)), one has
ct,(l) − X
ft+1,(l)
X t+1,(l) H
h
i
1
t,(l) ct,(l)
t,(l)
t,(l)>
\
t,(l)
t,(l)>
\
t,(l) ct,(l)
= X
H
− η PΩ−l X
X
− M + E + Pl X
X
−M
X
H
p
h
i
ct,(l) − η 1 PΩ−l X t,(l) X t,(l)> − M \ + E + Pl X t,(l) X t,(l)> − M \ X \
− X t,(l) H
p
1
η
t,(l)
t,(l)>
\
\>
t,(l)
t,(l)>
\
\>
= −η PΩ−l X
X
−X X
+ Pl X
X
−X X
∆t,(l) + PΩ−l (E) ∆t,(l) .
p
p
(166)
It is easy to verify that
(ii)
(i) 1
1
PΩ−l (E) ≤
PΩ (E) . σ
p
p
71
r
n (iii) δ
≤ σmin
p
2
for δ > 0 sufficiently small. Here, (i) uses the elementary fact that the spectral norm of a submatrix is
no more than that of the matrix itself, (ii) arises from Lemma 40 and (iii) is a consequence of the noise
condition (27). Therefore, in order to control (166), we need to upper bound the following quantity
γ :=
1
PΩ−l X t,(l) X t,(l)> − X \ X \> + Pl X t,(l) X t,(l)> − X \ X \> .
p
(167)
To this end, we make the observation that
γ≤
1
PΩ X t,(l) X t,(l)> − X \ X \>
p
|
{z
}
:=γ1
1
+
PΩl X t,(l) X t,(l)> − X \ X \> − Pl X t,(l) X t,(l)> − X \ X \> ,
p
|
{z
}
(168)
:=γ2
where PΩl is defined in (66). An application of Lemma 43 reveals that
γ1 ≤ 2n X t,(l) Rt,(l) − X \
2
2,∞
√
+ 4 n log n X t,(l) Rt,(l) − X \
2,∞
X\ ,
where Rt,(l) ∈ Or×r is defined in (72). Let C = X t,(l) X t,(l)> − X \ X \> as in (156), and one can bound
the other term γ2 by taking advantage of the triangle inequality and the symmetry property:
v
uX
r
r
n
(i)
(ii)
2u
n
n
2 2
t
γ2 ≤
(δl,j − p) Cl,j .
X \ 2,∞ ,
kCk∞ .
X t,(l) Rt,(l) − X \
p j=1
p
p
2,∞
Pn
2
where (i) comes from the standard Chernoff bound j=1 (δl,j − p) np, and in (ii) we utilize the bound
established in (158). The previous two bounds taken collectively give
2
√
γ ≤ 2n X t,(l) Rt,(l) − X \
+ 4 n log n X t,(l) Rt,(l) − X \
2,∞
r
n
δ
e
+C
X t,(l) Rt,(l) − X \
X \ 2,∞ ≤ σmin
p
2
2,∞
2,∞
X\
(169)
e > 0 and δ > 0 sufficiently small. The last inequality follows from (73c), the incoherence
for some constant C
condition (114) and our sample size condition. In summary, we obtain
ct,(l) − X
ft+1,(l) ≤ η γ + 1 PΩ−l (E)
X t+1,(l) H
∆t,(l) ≤ ηδσmin ∆t,(l) ,
(170)
p
for δ > 0 sufficiently small. With the estimate (170) in place, we can continue our derivation on β1 and
β2 .
(a) With regard to β1 , in view of (166) we can obtain
(i)
β1 = η
≤η
(ii)
= η
≤η
X t,(l) X t,(l)> − X \ X \>
X t,(l) X t,(l)> − X \ X \>
t,(l)
∆
t,(l)
∆l,·
2
X
t,(l) ct,(l)
H
>
∆t,(l)
2
l,· 2
\
+X ∆
\
X t,(l) + Xl,·
72
l,·
∆t,(l)
t,(l)>
∆t,(l)
l,· 2
2
∆t,(l)
∆t,(l)
t,(l)
≤ η ∆l,·
2
\
∆t,(l) + η Xl,·
X t,(l)
2
∆t,(l)
2
(171)
,
where (i) follows from the definitions of PΩ−l and Pl (see (67) and note that all entries in the lth row
of PΩ−l (·) are identically zero), and the identity (ii) is due to the definition of ∆t,(l) in (162).
(b) For β2 , we first claim that
ft+1,(l) R − X \
X
Ir := arg min
R∈O r×r
F
(172)
,
whose justification follows similar reasonings as that of (138), and is therefore omitted. In particular,
ft+1,(l) is symmetric and
it gives rise to the facts that X \> X
ft+1,(l)
X
>
X\
1
σmin Ir .
2
(173)
We are now ready to invoke Lemma 36 to bound β2 . We abuse the notation and denote C :=
ft+1,(l) > X \ and E := X t+1,(l) H
ct,(l) − X
ft+1,(l) > X \ . We have
X
kEk ≤
1
σmin ≤ σr (C) .
2
The first inequality arises from (170), namely,
ct,(l) − X
ft+1,(l)
kEk ≤ X t+1,(l) H
(i)
≤ ηδσmin X \
2 (ii)
≤
X\
X \ ≤ ηδσmin ∆t,(l)
1
σmin ,
2
where (i) holds since ∆t,(l) ≤ X \ and (ii) holds true for δ sufficiently small and η ≤ 1/σmax . Invoke
Lemma 36 to obtain
2
kEk
σr−1 (C) + σr (C)
2
ct,(l) − X
ft+1,(l)
≤
X t+1,(l) H
σmin
β2 = kR1 − Ir k ≤
≤ 2δη ∆t,(l)
X\
(174)
X\ ,
(175)
where (174) follows since σr−1 (C) ≥ σr (C) ≥ σmin /2 from (173), and the last line comes from (170).
(c) Putting the previous bounds (171) and (175) together yields
t,(l)
α1 ≤ η ∆l,·
2
\
∆t,(l) + η Xl,·
X t,(l)
2
∆t,(l)
2
+ 4δη X \
2,∞
∆t,(l)
X\ .
3. Combine (160), (164) and (176) to reach
ct+1,(l) − X \
X t+1,(l) H
t,(l)
+ η ∆l,·
2
t,(l)
l,· 2
X t,(l)
≤ 1 − ησmin + η X t,(l)
(i)
≤ (1 − ησmin ) ∆l,·
\
∆t,(l) + η Xl,·
∆t,(l)
t,(l)
2
2
+ 2η X \
∆t,(l)
2
2,∞
+ 4δη X \
+ 4η X \ 2,∞ ∆t,(l)
!
(ii)
1
C6
n log n
σmin
t
η
C2 ρ µr √ +
X \ 2,∞
σ
≤ 1−
2
np σmin
p
r
1
2C10
n
+ 4η X \ X \ 2,∞ 2C9 ρt µr √
X\ +
σ
X\
np
σmin
p
∆l,·
s
73
2
∆t,(l)
2,∞
X\
X\
∆t,(l)
X\
(176)
(iii)
t+1
≤ C2 ρ
1
X\
µr √
np
C6
+
σ
2,∞
σmin
s
n log n
X\
p
2,∞
.
Here, (i) follows since ∆t,(l) ≤ X \ and δ is sufficiently small, (ii) invokes the hypotheses (70e) and
(73d) and recognizes that
s
!
1
σmin
2C
n
log
n
10
2C9 µr √
≤
X t,(l) ∆t,(l) ≤ 2 X \
X\ +
X\
σ
np
σmin
np
2
holds under the sample size√and noise condition, while (iii) is valid as long as 1 − (σmin /3) · η ≤ ρ < 1,
C2 κC9 and C6 κC10 / log n.
B.7
Proof of Lemma 13
For notational convenience, we define the following two orthonormal matrices
Q := arg min
U 0R − U \
r×r
R∈O
and
F
Q(l) := arg min
U 0,(l) R − U \
r×r
R∈O
F
.
ct (see (26)) is called the orthogonal Procrustes problem [tB77]. It is well-known
The problem of finding H
t
c always exists and is given by
that the minimizer H
ct = sgn X t> X \ .
H
Here, the sign matrix sgn(B) is defined as
sgn(B) := U V >
(177)
for any matrix B with singular value decomposition B = U ΣV > , where the columns of U and V are left
and right singular vectors, respectively.
Before proceeding, we make note of the following perturbation bounds on M 0 and M (l) (as defined in
Algorithm 2 and Algorithm 5, respectively):
1
1
PΩ M \ − M \ +
PΩ (E)
p
p
r
r
r
r
(ii)
n
n
n
σ
n√
2
≤ C
σmin
M \ 2,∞ + Cσ
=C
X \ 2,∞ + C √
p
p
p
σmin p
r
r
(iii)
(iv)
1 √
σ
n
≤ C µr
σmax + √
X \ σmin ,
np
σmin p
(i)
M0 − M\ ≤
(178)
for some universal constant C > 0. Here, (i) arises from the triangle inequality, (ii) utilizes Lemma 39 and
Lemma 40, (iii) follows from the incoherence condition (114) and (iv) holds under our sample complexity
assumption that n2 p µ2 r2 n and the noise condition (27). Similarly, we have
r
r
1 √
σ
n
(l)
\
σmax + √
X \ σmin .
(179)
M − M . µr
np
σmin p
Combine Weyl’s inequality, (178) and (179) to obtain
Σ0 − Σ\ ≤ M 0 − M \ σmin
and
Σ(l) − Σ\ ≤ M (l) − M \ σmin ,
(180)
and
1
σmin ≤ σr Σ(l) ≤ σ1 Σ(l) ≤ 2σmax .
2
(181)
which further implies
1
σmin ≤ σr Σ0 ≤ σ1 Σ0 ≤ 2σmax
2
We start by proving (70a), (70b) and (70c). The key decomposition we need is the following
h
i
c0 − X \ = U 0 Σ0 1/2 H
c0 − Q + U 0 Σ0 1/2 Q − Q Σ\ 1/2 + U 0 Q − U \ Σ\ 1/2 .
X 0H
74
(182)
1. For the spectral norm error bound in (70c), the triangle inequality together with (182) yields
c0 − X \ ≤
X 0H
Σ0
1/2
c0 − Q +
H
Σ0
1/2
Q − Q Σ\
1/2
+
√
σmax U 0 Q − U \ ,
where we have also used the fact that kU 0 k = 1. Recognizing that M 0 − M \ σmin (see (178)) and
the assumption σmax /σmin . 1, we can apply Lemma 47, Lemma 46 and Lemma 45 to obtain
Σ0
1/2
c0 − Q .
H
Q − Q Σ\
1
M0 − M\ ,
σmin
1/2
U 0Q − U \ .
.√
1
σmin
(183a)
1
M0 − M\ ,
σmin
(183b)
M0 − M\ .
(183c)
These taken collectively imply the advertised upper bound
√
1
1
1
σmax
M0 − M\ + √
M0 − M\ . √
M0 − M\
σmin
σmin
σmin
r r
r
1
σmax
σ
n
. µr
X\ ,
+
np σmin
σmin p
c0 − X \ .
X 0H
1/2
√
≤ 2σmax (see (181)) and the bounded condition number
Σ0
where we also utilize the fact that
assumption, i.e. σmax /σmin . 1. This finishes the proof of (70c).
2. With regard to the Frobenius norm bound in (70a), one has
c0 − X \
X 0H
F
√
c0 − X \
r X 0H
r
r
r
r
√
(i)
1
σ
n √
1
σ
n √ σmax √
+
r X \ = µr
+
r√
σmin
. µr
np σmin p
np σmin p
σmin
r
r
(ii)
1
σ
n √
. µr
+
r X\ F .
np σmin p
≤
Here (i) arises from (70c) and (ii) holds true since σmax /σmin 1 and
the proof of (70a).
√ √
r σmin ≤ X \
F
, thus completing
3. The proof of (70b) follows from similar arguments as used in proving (70c). Combine (182) and the triangle
inequality to reach
n
o
1/2
c0 − X \
c0 − Q + Σ0 1/2 Q − Q Σ\ 1/2
X 0H
≤ U 0 2,∞
Σ0
H
2,∞
√
+ σmax U 0 Q − U \ 2,∞ .
Plugging in the estimates (178), (181), (183a) and (183b) results in
r
r
√
1
σ
n
0 c0
\
X H −X
+
X \ U 0 2,∞ + σmax U 0 Q − U \
. µr
np σmin p
2,∞
2,∞
.
It remains to study the component-wise error of U 0 . To this end, it has already been shown in [AFWZ17,
Lemma 14] that
r
r
1
σ
n
U 0 2,∞ . U \ 2,∞
(184)
U 0 Q − U \ 2,∞ . µr
+
U \ 2,∞ and
np σmin p
75
under our assumptions. These combined with the previous inequality give
r
r
r
r
1
n √
1
n
σ
σ
0 c0
\
\
. µr
X H −X
σmax U 2,∞ . µr
X\
+
+
np σmin p
np σmin p
2,∞
2,∞
,
where the last relation is due to the observation that
√
σmax U \
.
2,∞
√
σmin U \
2,∞
≤ X\
2,∞
.
4. We now move on to proving (70e). Recall that Q(l) = arg minR∈Or×r U 0,(l) R − U \
inequality,
c0,(l) − X \
X 0,(l) H
\
\
Note that Xl,·
= Ml,·
U \ Σ\
0,(l)
l,· 2
0,(l)
0,(l)
Xl,·
≤
−1/2
Xl,·
≤ Xl,·
c0,(l) − Q(l)
H
c0,(l)
H
2
−Q
2
+
(l)
X 0,(l) Q(l) − X \
+
X
0,(l)
Q
(l)
−X
and, by construction of M (l) ,
(l)
= Ml,· U 0,(l) Σ(l)
−1/2
\
= Ml,·
U 0,(l) Σ(l)
−1/2
F
\
. By the triangle
l,· 2
l,· 2
(185)
.
.
We can thus decompose
n
h
−1/2 (l)
−1/2 i 0,(l) (l)
−1/2 o
\
X 0,(l) Q(l) − X \
= Ml,·
U 0,(l) Σ(l)
Q − Q(l) Σ\
+ U
Q − U \ Σ\
,
l,·
which further implies that
X 0,(l) Q(l) − X \
l,· 2
≤ M\
2,∞
Σ(l)
In order to control this, we first see that
Σ(l)
−1/2
Q(l) − Q(l) Σ\
−1/2
−1/2
=
Σ(l)
.
1
.
Q(l) − Q(l) Σ\
−1/2
+√
1
U 0,(l) Q(l) − U \
.
σmin
(186)
−1/2 h (l)
1/2
1/2 (l) i \ −1/2
Q
Σ\
− Σ(l)
Q
Σ
Q(l) Σ\
σmin
1
1/2
− Σ(l)
M (l) − M \ ,
3/2
σmin
−1/2
Q(l)
where the penultimate inequality uses (181) and the last inequality arises from Lemma 46. Additionally,
Lemma 45 gives
1
U 0,(l) Q(l) − U \ .
M (l) − M \ .
σmin
Plugging the previous two bounds into (186), we reach
X
0,(l)
(l)
Q
−X
\
l,· 2
.
1
M
3/2
σmin
where the last relation follows from M \
Note that this also implies that
(·)l,·
2
0,(l)
Xl,·
(l)
−M
2,∞
2
\
M
\
= X \ X \>
≤2 X
\
2,∞
2,∞
2,∞
.
≤
µr
√
r
1
σ
+
np σmin
σmax X \
2,∞
r
n
X\
p
0,(l)
0,(l)
2
= Xl,· Q(l)
2
≤
. To see this, one has by the unitary invariance of
X 0,(l) Q(l) − X \
76
.
and the estimate (179).
,
Xl,·
2,∞
l,· 2
\
+ Xl,·
2
≤ 2 X\
2,∞
.
Substituting the above bounds back to (185) yields in
X
0,(l) c0,(l)
H
−X
\
r
r
1
σ
n
0,(l)
(l)
c
+ µr
. X 2,∞ H
−Q
X\
+
np σmin p
r
r
1
n
σ
. µr
X \ 2,∞ ,
+
np σmin p
\
l,· 2
2,∞
where the second line relies on Lemma 47, the bound (179), and the condition σmax /σmin 1. This
establishes (70e).
5. Our final step is to justify (70d). Define B := arg minR∈Or×r U 0,(l) R − U 0
R0,(l) (cf. (72)), one has
c0 − X 0,(l) R0,(l) ≤ X 0,(l) B − X 0 .
X 0H
F
. From the definition of
F
F
Recognizing that
X 0,(l) B − X 0 = U 0,(l)
h
Σ(l)
1/2
B − B Σ0
1/2 i
we can use the triangle inequality to bound
1/2
1/2
X 0,(l) B − X 0 ≤ Σ(l)
B − B Σ0
F
F
1/2
,
+ U 0,(l) B − U 0 Σ0
+ U 0,(l) B − U 0
Σ0
F
In view of Lemma 46 and the bounds (178) and (179), one has
Σ(l)
−1/2
B − BΣ1/2
F
.√
1
σmin
From Davis-Kahan’s sinΘ theorem [DK70] we see that
U 0,(l) B − U 0
F
.
1
σmin
These estimates taken together with (181) give
X 0,(l) B − X 0
F
.√
1
σmin
M 0 − M (l) U 0,(l)
M 0 − M (l) U 0,(l)
F
M 0 − M (l) U 0,(l)
F
1/2
.
.
.
F
.
It then boils down to controlling M 0 − M (l) U 0,(l) F . Quantities of this type have showed up multiple
times already, and hence
we omit the proof details for conciseness (see Appendix B.5). With probability
at least 1 − O n−10 ,
s
( s
)
0,(l)
log n
n log n
0
(l)
M −M
U
. µr
σmax + σ
U 0,(l)
.
np
p
2,∞
F
If one further has
U 0,(l)
2,∞
. U\
2,∞
.√
1
X\
σmin
2,∞
(187)
,
then taking the previous bounds collectively establishes the desired bound
s
)
( s
log n
σ
n log n
0 c0
0,(l) 0,(l)
+
X\
X H −X
R
. µr
np
σmin
p
F
2,∞
.
Proof of Claim (187). Denote by M (l),zero the matrix derived by zeroing out the lth row/column of M (l) ,
and U (l),zero ∈ Rn×r containing the leading r eigenvectors of M (l),zero . On the one hand, [AFWZ17,
Lemma 4 and Lemma 14] demonstrate that
max kU (l),zero k2,∞ . kU \ k2,∞ .
1≤l≤n
77
On the other hand, by the Davis-Kahan sin Θ theorem [DK70] we obtain
1
U 0,(l) sgn U 0,(l)> U (l),zero − U (l),zero .
M (l) − M (l),zero U (l),zero
σmin
F
where sgn(A) denotes the sign matrix of A. For any j 6= l, one has
U (l),zero = M (l) − M (l),zero
M (l) − M (l),zero
j,·
(l),zero
since the lth row of Ul,·
(l),zero
j,l
Ul,·
F
(188)
,
= 01×r ,
is identically zero by construction. In addition,
U (l),zero
M (l) − M (l),zero
l,·
As a consequence, one has
M (l) − M (l),zero U (l),zero
F
2
\
U (l),zero
= Ml,·
=
2
M (l) − M (l),zero
≤ M\
l,·
2,∞
≤ σmax U \
U (l),zero
which combined with (188) and the assumption σmax /σmin 1 yields
U 0,(l) sgn U 0,(l)> U (l),zero − U (l),zero . U \
F
2
2,∞
≤ σmax U \
.
2,∞
,
2,∞
The claim (187) then follows by combining the above estimates:
U 0,(l)
= U 0,(l) sgn U 0,(l)> U (l),zero
2,∞
2,∞
(l),zero
0,(l)
≤ kU
k2,∞ + U
sgn U 0,(l)> U (l),zero − U (l),zero
F
. kU \ k2,∞ ,
where we have utilized the unitary invariance of k·k2,∞ .
C
Proofs for blind deconvolution
Before proceeding to the proofs, we make note of the following concentration results. The standard Gaussian
concentration inequality and the union bound give
p
(189)
max a∗l x\ ≤ 5 log m
1≤l≤m
with probability at least 1 − O(m−10 ). In addition, with probability exceeding 1 − Cm exp(−cK) for some
constants c, C > 0,
√
max kal k2 ≤ 3 K.
(190)
1≤l≤m
In addition, the population/expected Wirtinger Hessian at the truth z \ is given by
IK
0
0
h\ x\>
0
IK x\ h\>
0
.
∗
∇2 F z \ =
\
IK
0
0
x h\>
\ \> ∗
hx
0
0
IK
C.1
(191)
Proof of Lemma 14
First, we find it convenient to decompose the Wirtinger Hessian (cf. (80)) into the expected Wirtinger Hessian
at the truth (cf. (191)) and the perturbation part as follows:
∇2 f (z) = ∇2 F z \ + ∇2 f (z) − ∇2 F z \ .
(192)
The proof then proceeds by showing that (i) the population Hessian ∇2 F z \ satisfies the restricted
strong
2
2
\
convexity and smoothness properties as advertised, and (ii) the perturbation ∇ f (z) − ∇ F z is wellcontrolled under our assumptions. We start by controlling the population Hessian in the following lemma.
78
Lemma 26. Instate the notation and the conditions of Lemma 14. We have
2
∇2 F z \ = 2
and
u∗ D∇2 F z \ + ∇2 F z \ D u ≥ kuk2 .
The next step is to bound the perturbation. To this end, we define the set
S := {z : z satisfies (82)} ,
and derive the following lemma.
Lemma 27. Suppose the sample complexity satisfies m µ2 K log9 m, c >0 is a sufficiently small constant,
and δ = c/ log2 m. Then with probability at least 1 − O m−10 + e−K log m , one has
sup ∇2 f (z) − ∇2 F z \
z∈S
≤ 1/4.
Combining the two lemmas, we can easily see that for z ∈ S,
∇2 f (z) ≤ ∇2 F z \ + ∇2 f (z) − ∇2 F z \ ≤ 2 + 1/4 ≤ 3,
which verifies the smoothness upper bound. In addition,
u∗ D∇2 f (z) + ∇2 f (z) D u
= u∗ D∇2 F z \ + ∇2 F z \ D u + u∗ D ∇2 f (z) − ∇2 F z \ u + u∗ ∇2 f (z) − ∇2 F z \ Du
(i)
2
≥ u∗ D∇2 F z \ + ∇2 F z \ D u − 2 kDk ∇2 f (z) − ∇2 F z \ kuk2
(ii)
2
≥ kuk2 − 2 (1 + δ) ·
(iii)
≥
1
2
kuk2
4
1
2
kuk2 ,
4
where (i) uses the triangle inequality, (ii) holds because of Lemma 27 and the fact that kDk ≤ 1 + δ, and
(iii) follows if δ ≤ 1/2. This establishes the claim on the restricted strong convexity.
C.1.1
Proof of Lemma 26
We start by proving the identity ∇2 F z \
h\
1 0
,
u1 = √
2 0
x\
= 2. Let
0
\
1 x
,
u2 = √
2 h\
0
h\
1 0
,
u3 = √
2 0
−x\
0
\
1 x
.
u4 = √
2 −h\
0
Recalling that kh\ k2 = kx\ k2 = 1, we can easily check that these four vectors form an orthonormal set. A
little algebra reveals that
∇2 F z \ = I4K + u1 u∗1 + u2 u∗2 − u3 u∗3 − u4 u∗4 ,
which immediately implies
∇2 F z \
= 2.
We now turn
attention to the restricted strong convexity. Since u∗ D∇2 F z \ u is the complex conjugate
of u∗ ∇2 F z \ Du as both ∇2 F (z \ ) and D are Hermitian, we will focus on the first term u∗ D∇2 F z \ u.
This term can be rewritten as
u∗ D∇2 F z \ u
79
h\ x\>
h1 − h2
x1 − x2
0
h1 − h2
0
x1 − x2
IK
\ \>
h1 − h2 + h x (x1 − x2 )
h
i
(ii)
∗
x1 − x2 + x\ h\> (h1 − h2 )
∗
∗
∗
= γ1 (h1 − h2 ) , γ2 (x1 − x2 ) , γ1 h1 − h2 , γ2 (x1 − x2 )
x\ h\> ∗ (x1 − x2 ) + (h1 − h2 )
∗
h\ x\> (h1 − h2 ) + (x1 − x2 )
∗
h1 − h2 + h\ (x1 − x2 ) x\
∗ \
\
h
i
∗
∗ x1 − x2 + x (h1 − h2 ) h
∗
∗
= γ1 (h1 − h2 ) , γ2 (x1 − x2 ) , γ1 h1 − h2 , γ2 (x1 − x2 )
h1 − h2 + h\ (x1 − x2 )∗ x\
∗
x1 − x2 + x\ (h1 − h2 ) h\
IK
h
i
∗
0
(i)
∗
∗
∗
= (h1 − h2 ) , (x1 − x2 ) , h1 − h2 , (x1 − x2 ) D
0
∗
h\ x\>
2
0
IK
∗
x\ h\>
0
0
x\ h\>
IK
0
2
= 2γ1 kh1 − h2 k2 + 2γ2 kx1 − x2 k2
∗
∗
∗
∗
+ (γ1 + γ2 ) (h1 − h2 ) h\ (x1 − x2 ) x\ + (γ1 + γ2 ) (h1 − h2 ) h\ (x1 − x2 ) x\ ,
|
|
{z
}
{z
}
:=β
(193)
=β
where (i) uses the definitions of u and ∇2 F z \ , and (ii) follows from the definition of D. In view of the
assumption (84), we can obtain
2
2
2
2
2
2γ1 kh1 − h2 k2 + 2γ2 kx1 − x2 k2 ≥ 2 min {γ1 , γ2 } kh1 − h2 k2 + kx1 − x2 k2 ≥ (1 − δ) kuk2 ,
where the last inequality utilizes the identity
2
2
2
(194)
2 kh1 − h2 k2 + 2 kx1 − x2 k2 = kuk2 .
It then boils down to controlling β. Toward this goal, we decompose β into the following four terms
∗
∗
∗
∗
β = (h1 − h2 ) h2 (x1 − x2 ) x2 + (h1 − h2 ) h\ − h2 (x1 − x2 ) x\ − x2
|
{z
} |
{z
}
:=β1
+ (h1 − h2 )
|
∗
\
|β2 | ≤ h\ − h2
2
x\ − x2
2
∗
h − h2 (x1 − x2 ) x2 + (h1 −
{z
} |
:=β3
Since h2 − h\ 2 and x2 − x\
regarding β2 , we discover that
2
:=β2
∗
h2 ) h2 (x1
− x2 )
{z
∗
:=β4
x\ − x2 .
}
are both small by (83), β2 , β3 and β4 are well-bounded. Specifically,
kh1 − h2 k2 kx1 − x2 k2 ≤ δ 2 kh1 − h2 k2 kx1 − x2 k2 ≤ δ kh1 − h2 k2 kx1 − x2 k2 ,
where the second inequality is due to (83) and the last one holds since δ < 1. Similarly, we can obtain
|β3 | ≤ δ kx2 k2 kh1 − h2 k2 kx1 − x2 k2 ≤ 2δ kh1 − h2 k2 kx1 − x2 k2 ,
and
|β4 | ≤ δ kh2 k2 kh1 − h2 k2 kx1 − x2 k2 ≤ 2δ kh1 − h2 k2 kx1 − x2 k2 ,
where both lines make use of the facts that
kx2 k2 ≤ x2 − x\
2
+ x\
2
≤1+δ ≤2
and
kh2 k2 ≤ h2 − h\
2
+ h\
2
≤ 1 + δ ≤ 2.
Combine the previous three bounds to reach
2
|β2 | + |β3 | + |β4 | ≤ 5δ kh1 − h2 k2 kx1 − x2 k2 ≤ 5δ
where we utilize the elementary inequality ab ≤ (a2 + b2 )/2 and the identity (194).
80
2
kh1 − h2 k2 + kx1 − x2 k2
5
2
= δ kuk2 ,
2
4
(195)
The only remaining term is thus β1 . Recalling that (h1 , x1 ) and (h2 , x2 ) are aligned by our assumption,
we can invoke Lemma 56 to obtain
∗
2
2
(h1 − h2 ) h2 = kx1 − x2 k2 + x∗2 (x1 − x2 ) − kh1 − h2 k2 ,
which allows one to rewrite β1 as
o
n
∗
2
2
β1 = kx1 − x2 k2 + x∗2 (x1 − x2 ) − kh1 − h2 k2 · (x1 − x2 ) x2
2
∗
∗
2
2
= (x1 − x2 ) x2 kx1 − x2 k2 − kh1 − h2 k2 + (x1 − x2 ) x2 .
Consequently,
2
2
∗
∗
2
β1 + β1 = 2 (x1 − x2 ) x2 2 + 2Re (x1 − x2 ) x2 kx1 − x2 k2 − kh1 − h2 k2
2
∗
2
≥ 2Re (x1 − x2 ) x2 kx1 − x2 k2 − kh1 − h2 k2
(i)
∗
2
≥ − (x1 − x2 ) x2 kuk2
(ii)
2
≥ −4δ kuk2 .
Here, (i) arises from the triangle inequality that
2
2
2
1
2
kuk2 ,
2
2
kx1 − x2 k2 − kh1 − h2 k2 ≤ kx1 − x2 k2 + kh1 − h2 k2 =
and (ii) occurs since kx1 − x2 k2 ≤ kx1 − x\ k2 + kx2 − x\ k2 ≤ 2δ and kx2 k2 ≤ 2 (see (195)).
To finish up, note that γ1 + γ2 ≤ 2(1 + δ) ≤ 3 for δ < 1/2. Substitute these bounds into (193) to obtain
2
u∗ D∇2 F z \ u ≥ (1 − δ) kuk2 + (γ1 + γ2 ) β + β
2
≥ (1 − δ) kuk2 + (γ1 + γ2 ) β1 + β1 − 2 (γ1 + γ2 ) (|β2 | + |β3 | + |β4 |)
5
2
2
2
≥ (1 − δ) kuk2 − 12δ kuk2 − 6 · δ kuk2
4
2
≥ (1 − 20.5δ) kuk2
1
2
≥ kuk2
2
as long as δ is small enough.
C.1.2
Proof of Lemma 27
In view of the expressions of ∇2 f (z) and ∇2 F z \ (cf. (80) and (191)) and the triangle inequality, we get
∇2 f (z) − ∇2 F z \
(196)
≤ 2α1 + 2α2 + 4α3 + 4α4 ,
where the four terms on the right-hand side are defined as follows
α1 =
m
X
a∗j x bj b∗j − IK ,
m
X
b∗j hx∗ aj − yj bj a∗j ,
j=1
α3 =
j=1
2
α2 =
m
X
j=1
α4 =
2
b∗j h aj a∗j − IK ,
m
X
bj b∗j h aj a∗j x
j=1
In what follows, we shall control supz∈S αj for j = 1, 2, 3, 4 separately.
81
>
− h\ x\> .
1. Regarding the first term α1 , the triangle inequality gives
α1 ≤
|
m
X
m
X
2
a∗j x bj b∗j −
j=1
{z
2
a∗j x\ bj b∗j +
j=1
|
}
:=β1
m
X
j=1
2
a∗j x\ bj b∗j − IK .
{z
}
:=β2
• To control β1 , the key observation is that a∗j x and a∗j x\ are extremely close. We can rewrite β1 as
β1 =
m
X
a∗j x
2
j=1
− a∗j x\
2
m
X
bj b∗j ≤
a∗j x
2
j=1
− a∗j x\
2
bj b∗j ,
(197)
where
a∗j x
2
− a∗j x\
2 (i)
=
a∗j x − x\
∗
+ 2 a∗j x − x\ a∗j x\
(iii)
p
1
1
≤ 4C32 3 + 4C3 3/2 · 5 log m
log m
log m
1
. C3
.
log m
(ii)
≤ a∗j x − x\
∗ ∗ \
∗
a∗j x − x\ + a∗j x − x\
aj x + a∗j x\ a∗j x − x\
2
Here, the first line (i) uses the identity for u, v ∈ C,
|u|2 − |v|2 = u∗ u − v ∗ v = (u − v)∗ (u − v) + (u − v)∗ v + v ∗ (u − v),
the second relation (ii) comes from the triangle inequality, and the third line (iii) follows from (189) and
the assumption (82b). Substitution into (197) gives
β1 ≤ max
1≤j≤m
2
a∗j x
−
2
a∗j x\
m
X
bj b∗j . C3
j=1
where the last inequality comes from the fact that
Pm
j=1
1
,
log m
bj b∗j = IK .
• The other term β2 can be bounded through Lemma 59, which reveals that with probability 1−O m−10 ,
β2 .
r
K
log m.
m
Taken collectively, the preceding two bounds give
r
K
1
sup α1 .
log m + C3
.
m
log
m
z∈S
Hence P(supz∈S α1 ≤ 1/32) = 1 − O(m−10 ).
2. We are going to prove that P(supz∈S α2 ≤ 1/32) = 1 − O(m−10 ). The triangle inequality allows us to
bound α2 as
α2 ≤
|
m
X
j=1
2
2
2
b∗j h aj a∗j − khk2 IK + khk2 IK − IK .
{z
}
:=θ1 (h)
82
|
{z
:=θ2 (h)
}
The second term θ2 (h) is easy to control. To see this, we have
2
θ2 (h) = khk2 − 1 = khk2 − 1 (khk2 + 1) ≤ 3δ < 1/64,
where the penultimate relation uses the assumption that h − h\
khk2 − 1 ≤ δ,
2
≤ δ and hence
khk2 ≤ 1 + δ ≤ 2.
For the first term θ1 (h), we define a new set
H := h ∈ CK : kh − h\ k2 ≤ δ
and
max
1≤j≤m
2C4 µ log2 m
√
≤
m
b∗j h
.
It is easily seen that supz∈S θ1 ≤ suph∈H θ1 . We plan to use the standard covering argument to show that
P sup θ1 (h) ≤ 1/64 = 1 − O(m−10 ).
(198)
h∈H
To this end, we define cj (h) = |b∗j h|2 for every 1 ≤ j ≤ m. It is straightforward to check that
θ1 (h) =
m
X
j=1
m
X
j=1
c2j
=
m
X
j=1
|b∗j h|4
≤
cj (h) aj a∗j − IK
max
1≤j≤m
|b∗j h|2
X
m
j=1
max |cj | ≤
,
|b∗j h|2
=
1≤j≤m
max
1≤j≤m
|b∗j h|2
for h ∈ H. In the above argument, we have used the facts that
m
X
j=1
|b∗j h|2 = h∗
m
X
j=1
Pm
j=1
2C4 µ log2 m
√
m
khk22
≤4
2
(199)
,
2C4 µ log2 m
√
m
2
(200)
bj b∗j = IK and
bj b∗j h = khk22 ≤ (1 + δ)2 ≤ 4,
together with the definition of H. Lemma 57 combined with (199) and (200) readily yields that for any
fixed h ∈ H and any t ≥ 0,
(
)!
2
t
t
e1 K − C
e2 min
, Pm 2
P(θ1 (h) ≥ t) ≤ 2 exp C
max1≤j≤m |cj |
j=1 cj
e1 K − C
e2 mt min {1, t/4} ,
≤ 2 exp C
(201)
4C42 µ2 log4 m
e1 , C
e2 > 0 are some universal constants.
where C
Now we are in a position to strengthen this bound to obtain uniform control of $\theta_1$ over $\mathcal H$. Note that for any $h_1, h_2\in\mathcal H$,
$$|\theta_1(h_1) - \theta_1(h_2)| \le \bigg\|\sum_{j=1}^m\big(|b_j^* h_1|^2 - |b_j^* h_2|^2\big)a_j a_j^*\bigg\| + \big|\|h_1\|_2^2 - \|h_2\|_2^2\big| \le \max_{1\le j\le m}\big||b_j^* h_1|^2 - |b_j^* h_2|^2\big|\,\bigg\|\sum_{j=1}^m a_j a_j^*\bigg\| + \big|\|h_1\|_2^2 - \|h_2\|_2^2\big|,$$
where
$$\big||b_j^* h_2|^2 - |b_j^* h_1|^2\big| = \big|(h_2 - h_1)^* b_j b_j^* h_2 + h_1^* b_j b_j^*(h_2 - h_1)\big| \le 2\max\{\|h_1\|_2, \|h_2\|_2\}\,\|h_2 - h_1\|_2\,\|b_j\|_2^2 \le 4\|h_2 - h_1\|_2\|b_j\|_2^2 \le \frac{4K}{m}\|h_2 - h_1\|_2$$
and
$$\big|\|h_1\|_2^2 - \|h_2\|_2^2\big| = \big|h_1^*(h_1 - h_2) + (h_1 - h_2)^* h_2\big| \le 2\max\{\|h_1\|_2, \|h_2\|_2\}\,\|h_2 - h_1\|_2 \le 4\|h_1 - h_2\|_2.$$
Define the event $E_0 := \big\{\big\|\sum_{j=1}^m a_j a_j^*\big\| \le 2m\big\}$. When $E_0$ happens, the previous estimates give
$$|\theta_1(h_1) - \theta_1(h_2)| \le (8K+4)\|h_1 - h_2\|_2 \le 10K\|h_1 - h_2\|_2, \qquad \forall\, h_1, h_2\in\mathcal H.$$
Let $\varepsilon = 1/(1280K)$, and let $\widetilde{\mathcal H}$ be an $\varepsilon$-net covering $\mathcal H$ (see [Ver12, Definition 5.1]). We have
$$\Big\{\sup_{h\in\widetilde{\mathcal H}}\theta_1(h)\le\frac{1}{128}\Big\}\cap E_0 \;\subseteq\; \Big\{\sup_{h\in\mathcal H}\theta_1\le\frac{1}{64}\Big\},$$
and as a result,
$$\mathbb P\Big(\sup_{h\in\mathcal H}\theta_1(h)\ge\frac{1}{64}\Big) \le \mathbb P\Big(\sup_{h\in\widetilde{\mathcal H}}\theta_1(h)\ge\frac{1}{128}\Big) + \mathbb P(E_0^{\mathrm c}) \le |\widetilde{\mathcal H}|\cdot\max_{h\in\widetilde{\mathcal H}}\mathbb P\Big(\theta_1(h)\ge\frac{1}{128}\Big) + \mathbb P(E_0^{\mathrm c}).$$
Lemma 57 forces that $\mathbb P(E_0^{\mathrm c}) = O(m^{-10})$. Additionally, we have $\log|\widetilde{\mathcal H}| \le \widetilde C_3 K\log K$ for some absolute constant $\widetilde C_3 > 0$ according to [Ver12, Lemma 5.2]. Hence (201) leads to
$$|\widetilde{\mathcal H}|\cdot\max_{h\in\widetilde{\mathcal H}}\mathbb P\Big(\theta_1(h)\ge\frac{1}{128}\Big) \le 2\exp\bigg(\widetilde C_3 K\log K + \widetilde C_1 K - \widetilde C_2\,\frac{m\,(1/128)\min\{1,(1/128)/4\}}{4C_4^2\mu^2\log^4 m}\bigg) \le 2\exp\bigg(2\widetilde C_3 K\log m - \frac{\widetilde C_4\, m}{\mu^2\log^4 m}\bigg)$$
for some constant $\widetilde C_4 > 0$. Under the sample complexity $m\gg\mu^2 K\log^5 m$, the right-hand side of the above display is at most $O(m^{-10})$. Combine the estimates above to establish the desired high-probability bound for $\sup_{z\in\mathcal S}\alpha_2$.
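The uniform bound above follows the standard pattern: control the quantity on a finite net, then use a Lipschitz estimate to extend the control to the whole set. The toy sketch below (ours; the dimension and the test function are chosen purely for illustration) shows the pattern in a setting small enough to enumerate a net explicitly: the supremum over the set is bounded by the maximum over an $\varepsilon$-net plus the Lipschitz constant times $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(1)
K, eps = 2, 0.05                               # toy dimension and net resolution

W = rng.standard_normal((5, K))
def f(h):                                      # a smooth random function on the unit ball
    return np.sum(np.sin(W @ h))
L = np.linalg.norm(W, axis=1).sum()            # valid Lipschitz constant: sum of row norms

# eps-net of the unit ball by gridding (feasible only in tiny dimension)
grid = np.arange(-1.0, 1.0 + eps, eps)
net = [np.array([u, v]) for u in grid for v in grid if u * u + v * v <= 1.0]
net_max = max(f(h) for h in net)

# Monte-Carlo estimate of the true supremum, for comparison
samples = rng.standard_normal((20000, K))
samples = samples / np.maximum(np.linalg.norm(samples, axis=1, keepdims=True), 1.0)
mc_sup = max(f(h) for h in samples)

print(f"net max + L*eps = {net_max + L * eps:.3f}  >=  MC sup estimate = {mc_sup:.3f}")
```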
3. Next, we will demonstrate that
$$\mathbb P\Big(\sup_{z\in\mathcal S}\alpha_3 \le 1/96\Big) = 1 - O\big(m^{-10} + e^{-K}\log m\big).$$
To this end, we let
$$A = \begin{bmatrix} a_1^*\\ \vdots\\ a_m^*\end{bmatrix}\in\mathbb C^{m\times K}, \qquad
B = \begin{bmatrix} b_1^*\\ \vdots\\ b_m^*\end{bmatrix}\in\mathbb C^{m\times K}, \qquad
C = \mathrm{diag}\big(c_1(z), c_2(z), \cdots, c_m(z)\big)\in\mathbb C^{m\times m},$$
where for each $1\le j\le m$,
$$c_j(z) := b_j^* h x^* a_j - y_j = b_j^*\big(h x^* - h^\natural x^{\natural *}\big)a_j.$$
As a consequence, we can write $\alpha_3 = \|B^* C A\|$.
The key observation is that both the $\ell_\infty$ norm and the Frobenius norm of $C$ are well controlled. Specifically, we claim for the moment that with probability at least $1 - O(m^{-10})$,
$$\|C\|_\infty = \max_{1\le j\le m}|c_j| \le C\,\frac{\mu\log^{5/2} m}{\sqrt m}; \qquad (202\mathrm a)$$
$$\|C\|_{\mathrm F}^2 = \sum_{j=1}^m|c_j|^2 \le 12\,\delta^2, \qquad (202\mathrm b)$$
where $C > 0$ is some absolute constant. This motivates us to divide the entries of $C$ into multiple groups based on their magnitudes.
To be precise, introduce $R := 1 + \lceil\log_2(C\mu\log^{7/2} m)\rceil$ sets $\{\mathcal I_r\}_{1\le r\le R}$, where
$$\mathcal I_r = \bigg\{j\in[m] \;:\; \frac{C\mu\log^{5/2} m}{2^r\sqrt m} < |c_j| \le \frac{C\mu\log^{5/2} m}{2^{r-1}\sqrt m}\bigg\}, \qquad 1\le r\le R-1,$$
and $\mathcal I_R = \{1,\cdots,m\}\setminus\bigcup_{r=1}^{R-1}\mathcal I_r$. An immediate consequence of the definition of $\mathcal I_r$ and the norm constraints in (202) is the following cardinality bound
$$|\mathcal I_r| \le \frac{\|C\|_{\mathrm F}^2}{\min_{j\in\mathcal I_r}|c_j|^2} \le \frac{12\delta^2}{\Big(\frac{C\mu\log^{5/2} m}{2^r\sqrt m}\Big)^2} = \underbrace{\frac{12\,\delta^2\, 4^r}{C^2\mu^2\log^5 m}}_{:=\delta_r}\, m \qquad (203)$$
for $1\le r\le R-1$. Since $\{\mathcal I_r\}_{1\le r\le R}$ form a partition of the index set $\{1,\cdots,m\}$, it is easy to see that
$$B^* C A = \sum_{r=1}^R\big(B_{\mathcal I_r,\cdot}\big)^* C_{\mathcal I_r,\mathcal I_r} A_{\mathcal I_r,\cdot},$$
where $D_{\mathcal I,\mathcal J}$ denotes the submatrix of $D$ induced by the rows and columns of $D$ having indices from $\mathcal I$ and $\mathcal J$, respectively, and $D_{\mathcal I,\cdot}$ refers to the submatrix formed by the rows from the index set $\mathcal I$. As a result,
one can invoke the triangle inequality to derive
$$\alpha_3 \le \sum_{r=1}^{R-1}\|B_{\mathcal I_r,\cdot}\|\cdot\|C_{\mathcal I_r,\mathcal I_r}\|\cdot\|A_{\mathcal I_r,\cdot}\| + \|B_{\mathcal I_R,\cdot}\|\cdot\|C_{\mathcal I_R,\mathcal I_R}\|\cdot\|A_{\mathcal I_R,\cdot}\|. \qquad (204)$$
Recognizing that $B^* B = I_K$, we obtain $\|B_{\mathcal I_r,\cdot}\| \le \|B\| = 1$ for every $1\le r\le R$. In addition, by construction of $\mathcal I_r$, we have
$$\|C_{\mathcal I_r,\mathcal I_r}\| = \max_{j\in\mathcal I_r}|c_j| \le \frac{C\mu\log^{5/2} m}{2^{r-1}\sqrt m}$$
for $1\le r\le R$, and specifically for $R$, one has
$$\|C_{\mathcal I_R,\mathcal I_R}\| = \max_{j\in\mathcal I_R}|c_j| \le \frac{C\mu\log^{5/2} m}{2^{R-1}\sqrt m} \le \frac{1}{\sqrt m\,\log m},$$
which follows from the definition of $R$, i.e. $R = 1 + \lceil\log_2(C\mu\log^{7/2} m)\rceil$. Regarding $\|A_{\mathcal I_r,\cdot}\|$, we discover that $\|A_{\mathcal I_R,\cdot}\| \le \|A\|$ and, in view of (203),
$$\|A_{\mathcal I_r,\cdot}\| \le \sup_{\mathcal I:\,|\mathcal I|\le\delta_r m}\|A_{\mathcal I,\cdot}\|, \qquad 1\le r\le R-1.$$
Substitute the above estimates into (204) to get
$$\alpha_3 \le \sum_{r=1}^{R-1}\frac{C\mu\log^{5/2} m}{2^{r-1}\sqrt m}\,\sup_{\mathcal I:\,|\mathcal I|\le\delta_r m}\|A_{\mathcal I,\cdot}\| + \frac{\|A\|}{\sqrt m\,\log m}. \qquad (205)$$
It remains to upper bound $\|A\|$ and $\sup_{\mathcal I:|\mathcal I|\le\delta_r m}\|A_{\mathcal I,\cdot}\|$. Lemma 57 tells us that $\|A\| \le 2\sqrt m$ with probability at least $1 - O(m^{-10})$. Furthermore, we can invoke Lemma 58 to bound $\sup_{\mathcal I:|\mathcal I|\le\delta_r m}\|A_{\mathcal I,\cdot}\|$ for each $1\le r\le R-1$. It is easily seen from our assumptions $m\gg\mu^2 K\log^9 m$ and $\delta = c/\log^2 m$ that $\delta_r \gtrsim K/m$. In addition,
$$\delta_r \le \frac{12\,\delta^2\,4^{R-1}}{C^2\mu^2\log^5 m} \le \frac{12\,\delta^2\,4^{1+\log_2(C\mu\log^{7/2} m)}}{C^2\mu^2\log^5 m} = 48\,\delta^2\log^2 m = \frac{48\,c^2}{\log^2 m} \ll 1.$$
By Lemma 58 we obtain that, for some constants $\widetilde C_2, \widetilde C_3 > 0$,
$$\mathbb P\bigg(\sup_{\mathcal I:\,|\mathcal I|\le\delta_r m}\|A_{\mathcal I,\cdot}\| \ge 4\sqrt{\widetilde C_3\,\delta_r m\log(e/\delta_r)}\bigg) \le 2\exp\Big(-\frac{\widetilde C_2\widetilde C_3}{3}\,\delta_r m\log(e/\delta_r)\Big) \le 2\exp\Big(-\frac{\widetilde C_2\widetilde C_3}{3}\,\delta_r m\Big) \le 2e^{-K}.$$
Taking the union bound and substituting the estimates above into (205), we see that with probability at least $1 - O(m^{-10}) - O\big((R-1)e^{-K}\big)$,
$$\alpha_3 \le \sum_{r=1}^{R-1}\frac{C\mu\log^{5/2} m}{2^{r-1}\sqrt m}\cdot 4\sqrt{\widetilde C_3\,\delta_r m\log(e/\delta_r)} + \frac{2\sqrt m}{\sqrt m\,\log m}
\le \sum_{r=1}^{R-1} 4\,\delta\sqrt{12\widetilde C_3\log(e/\delta_r)} + \frac{2}{\log m}
\lesssim (R-1)\,\delta\sqrt{\log(e/\delta_1)} + \frac{1}{\log m}.$$
Note that $\mu\le\sqrt m$, $R-1 = \lceil\log_2(C\mu\log^{7/2} m)\rceil \lesssim \log m$, and
$$\sqrt{\log\frac{e}{\delta_1}} = \sqrt{\log\frac{e\,C^2\mu^2\log^5 m}{48\,\delta^2}} \lesssim \sqrt{\log m}.$$
Therefore, with probability exceeding $1 - O\big(m^{-10} + e^{-K}\log m\big)$,
$$\sup_{z\in\mathcal S}\alpha_3 \lesssim \delta\log^2 m + \frac{1}{\log m}.$$
By taking $c$ to be small enough in $\delta = c/\log^2 m$, we get
$$\mathbb P\Big(\sup_{z\in\mathcal S}\alpha_3 \ge 1/96\Big) \le O\big(m^{-10} + e^{-K}\log m\big)$$
as claimed.
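The dyadic partition of the residuals $c_j$ by magnitude is the workhorse of this step. The sketch below (ours, with synthetic residuals standing in for $c_j(z)$) shows how the bins $\mathcal I_r$ and the resulting block bound on $\|B^* C A\|$ would be assembled in practice; by the triangle inequality the block-wise sum always dominates the directly computed norm.

```python
import numpy as np

rng = np.random.default_rng(2)
m, K = 2000, 8

kgrid = np.arange(m)[:, None] * np.arange(K)[None, :]
B = np.exp(-2j * np.pi * kgrid / m) / np.sqrt(m)      # rows b_j^*, so ||B|| = 1
A = (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))) / np.sqrt(2)
c = rng.standard_normal(m) * 1e-2                     # stand-in residuals c_j(z)

top, R = np.abs(c).max(), 8                           # R dyadic bins (illustrative)
edges = top / 2.0 ** np.arange(R + 1)                 # bin r collects (top/2^r, top/2^{r-1}]

blockwise = 0.0
for r in range(1, R + 1):
    lo = edges[r] if r < R else 0.0
    idx = np.where((np.abs(c) > lo) & (np.abs(c) <= edges[r - 1]))[0]
    if idx.size == 0:
        continue
    # ||B_{I_r}|| <= 1,  ||C_{I_r,I_r}|| = max_{j in I_r} |c_j|,  ||A_{I_r}|| from the row submatrix
    blockwise += 1.0 * np.abs(c[idx]).max() * np.linalg.norm(A[idx], 2)

direct = np.linalg.norm(B.conj().T @ (c[:, None] * A), 2)   # ||B^* C A|| computed directly
print(f"block-wise bound = {blockwise:.4f}  >=  direct norm = {direct:.4f}")
```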
Finally, it remains to justify (202). For all $z\in\mathcal S$, the triangle inequality tells us that
$$|c_j| \le \big|b_j^* h\,(x - x^\natural)^* a_j\big| + \big|b_j^*(h - h^\natural)\,x^{\natural *} a_j\big|
\le \big|b_j^* h\big|\cdot\big|a_j^*(x - x^\natural)\big| + \big(\big|b_j^* h\big| + \big|b_j^* h^\natural\big|\big)\cdot\big|a_j^* x^\natural\big|$$
$$\le \frac{2C_4\mu\log^2 m}{\sqrt m}\cdot\frac{2C_3}{\log^{3/2} m} + \bigg(\frac{2C_4\mu\log^2 m}{\sqrt m} + \frac{\mu}{\sqrt m}\bigg)\cdot\sqrt{5\log m} \;\le\; C\,\frac{\mu\log^{5/2} m}{\sqrt m}$$
for some large constant $C > 0$, where we have used the definition of $\mathcal S$ and the fact (189).
The claim (202b) follows directly from [LLSW16, Lemma 5.14]. To avoid confusion, we use $\mu_1$ to refer to the parameter $\mu$ therein. Let $L = m$, $N = K$, $d_0 = 1$, $\mu_1 = C_4\mu\log^2 m/2$, and $\varepsilon = 1/15$. Then
$$\mathcal S \subseteq \mathcal N_{d_0}\cap\mathcal N_{\mu_1}\cap\mathcal N_{\varepsilon},$$
and the sample complexity condition $L\gg\mu_1^2(K+N)\log^2 L$ is satisfied because we have assumed $m\gg\mu^2 K\log^6 m$. Therefore with probability exceeding $1 - O\big(m^{-10} + e^{-K}\big)$, we obtain that for all $z\in\mathcal S$,
$$\|C\|_{\mathrm F}^2 \le \frac54\big\|h x^* - h^\natural x^{\natural *}\big\|_{\mathrm F}^2.$$
The claim (202b) can then be justified by observing that
$$\big\|h x^* - h^\natural x^{\natural *}\big\|_{\mathrm F} = \big\|h(x - x^\natural)^* + (h - h^\natural)x^{\natural *}\big\|_{\mathrm F} \le \|h\|_2\big\|x - x^\natural\big\|_2 + \big\|h - h^\natural\big\|_2\big\|x^\natural\big\|_2 \le 3\delta.$$
4. It remains to control $\alpha_4$, for which we make note of the following inequality
$$\alpha_4 \le \underbrace{\bigg\|\sum_{j=1}^m b_j b_j^*\big(h x^\top - h^\natural x^{\natural\top}\big)\overline{a}_j a_j^*\bigg\|}_{\theta_3} + \underbrace{\bigg\|\sum_{j=1}^m b_j b_j^*\, h^\natural x^{\natural\top}\big(\overline{a}_j a_j^* - I_K\big)\bigg\|}_{\theta_4},$$
with $\overline{a}_j$ denoting the entrywise conjugate of $a_j$. Since $\{\overline{a}_j\}$ has the same joint distribution as $\{a_j\}$, by the same argument used for bounding $\alpha_3$ we obtain control of the first term, namely,
$$\mathbb P\Big(\sup_{z\in\mathcal S}\theta_3 \ge 1/96\Big) = O\big(m^{-10} + e^{-K}\log m\big).$$
Note that $m\gg\mu^2 K\log m/\delta^2$ and $\delta\ll 1$. According to [LLSW16, Lemma 5.20],
$$\mathbb P\Big(\sup_{z\in\mathcal S}\theta_4 \ge 1/96\Big) \le \mathbb P\Big(\sup_{z\in\mathcal S}\theta_4 \ge \delta\Big) = O(m^{-10}).$$
Putting together the above bounds, we reach $\mathbb P\big(\sup_{z\in\mathcal S}\alpha_4 \le 1/48\big) = 1 - O\big(m^{-10} + e^{-K}\log m\big)$.
5. Combining all the previous bounds for $\sup_{z\in\mathcal S}\alpha_j$ and (196), we deduce that with probability $1 - O\big(m^{-10} + e^{-K}\log m\big)$,
$$\big\|\nabla^2 f(z) - \nabla^2 F\big(z^\natural\big)\big\| \le 2\cdot\frac{1}{32} + 2\cdot\frac{1}{32} + 4\cdot\frac{1}{96} + 4\cdot\frac{1}{48} = \frac14.$$
C.2  Proofs of Lemma 15 and Lemma 16

Proof of Lemma 15. In view of the definition of $\alpha_{t+1}$ (see (38)), one has
$$\mathrm{dist}^2\big(z^{t+1}, z^\natural\big) = \Big\|\frac{1}{\alpha_{t+1}}h^{t+1} - h^\natural\Big\|_2^2 + \big\|\alpha_{t+1}x^{t+1} - x^\natural\big\|_2^2 \le \Big\|\frac{1}{\alpha_t}h^{t+1} - h^\natural\Big\|_2^2 + \big\|\alpha_t x^{t+1} - x^\natural\big\|_2^2.$$
The gradient update rules (79) imply that
$$\frac{1}{\alpha_t}h^{t+1} = \frac{1}{\alpha_t}\Big(h^t - \frac{\eta}{\|x^t\|_2^2}\nabla_h f\big(z^t\big)\Big) = \widetilde h^t - \frac{\eta}{\|\widetilde x^t\|_2^2}\nabla_h f\big(\widetilde z^t\big),$$
$$\alpha_t x^{t+1} = \alpha_t\Big(x^t - \frac{\eta}{\|h^t\|_2^2}\nabla_x f\big(z^t\big)\Big) = \widetilde x^t - \frac{\eta}{\|\widetilde h^t\|_2^2}\nabla_x f\big(\widetilde z^t\big),$$
where we denote $\widetilde h^t := \frac{1}{\alpha_t}h^t$ and $\widetilde x^t := \alpha_t x^t$ as in (81).
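For concreteness, the scaled gradient step just written can be prototyped as below. This is our own sketch, not code from the paper: the objective is the standard blind-deconvolution least squares $f(h,x)=\sum_j|b_j^* h\, x^* a_j - y_j|^2$, the Wirtinger gradients are derived directly from that expression, and the problem sizes and step size are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
m, K, eta, iters = 3000, 10, 0.05, 200

kgrid = np.arange(m)[:, None] * np.arange(K)[None, :]
B = np.exp(-2j * np.pi * kgrid / m) / np.sqrt(m)        # rows b_j^*
A = (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))) / np.sqrt(2)

h_nat = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x_nat = rng.standard_normal(K) + 1j * rng.standard_normal(K)
h_nat, x_nat = h_nat / np.linalg.norm(h_nat), x_nat / np.linalg.norm(x_nat)
y = (B @ h_nat) * np.conj(A.conj() @ x_nat)             # y_j = (b_j^* h)(x^* a_j)

def grads(h, x):
    """Wirtinger gradients of f(h,x) = sum_j |b_j^* h x^* a_j - y_j|^2 (our derivation)."""
    bh, ax = B @ h, A.conj() @ x                        # b_j^* h and a_j^* x
    r = bh * np.conj(ax) - y                            # residuals
    return B.conj().T @ (r * ax), A.T @ (np.conj(r) * bh)

h = h_nat + 0.05 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
x = x_nat + 0.05 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
for _ in range(iters):
    gh, gx = grads(h, x)
    # scaled steps: eta / ||x||^2 on h and eta / ||h||^2 on x, as in (79)
    h, x = h - eta / np.linalg.norm(x) ** 2 * gh, x - eta / np.linalg.norm(h) ** 2 * gx

err = np.linalg.norm(np.outer(h, x.conj()) - np.outer(h_nat, x_nat.conj()))
print("relative fit error:", err / np.linalg.norm(np.outer(h_nat, x_nat.conj())))
```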
Let $\widehat h^{t+1} := \frac{1}{\alpha_t}h^{t+1}$ and $\widehat x^{t+1} := \alpha_t x^{t+1}$. We further get
$$\begin{bmatrix}\widehat h^{t+1} - h^\natural\\ \widehat x^{t+1} - x^\natural\\ \overline{\widehat h^{t+1} - h^\natural}\\ \overline{\widehat x^{t+1} - x^\natural}\end{bmatrix}
= \begin{bmatrix}\widetilde h^{t} - h^\natural\\ \widetilde x^{t} - x^\natural\\ \overline{\widetilde h^{t} - h^\natural}\\ \overline{\widetilde x^{t} - x^\natural}\end{bmatrix}
- \eta\,\underbrace{\begin{bmatrix}\|\widetilde x^t\|_2^{-2}I_K & & & \\ & \|\widetilde h^t\|_2^{-2}I_K & & \\ & & \|\widetilde x^t\|_2^{-2}I_K & \\ & & & \|\widetilde h^t\|_2^{-2}I_K\end{bmatrix}}_{:=D}
\begin{bmatrix}\nabla_h f(\widetilde z^t)\\ \nabla_x f(\widetilde z^t)\\ \overline{\nabla_h f(\widetilde z^t)}\\ \overline{\nabla_x f(\widetilde z^t)}\end{bmatrix}. \qquad (206)$$
The fundamental theorem of calculus (see Appendix D.3.1), together with the fact that $\nabla f(z^\natural) = 0$, tells us that
$$\begin{bmatrix}\nabla_h f(\widetilde z^t)\\ \nabla_x f(\widetilde z^t)\\ \overline{\nabla_h f(\widetilde z^t)}\\ \overline{\nabla_x f(\widetilde z^t)}\end{bmatrix}
= \begin{bmatrix}\nabla_h f(\widetilde z^t) - \nabla_h f(z^\natural)\\ \nabla_x f(\widetilde z^t) - \nabla_x f(z^\natural)\\ \overline{\nabla_h f(\widetilde z^t) - \nabla_h f(z^\natural)}\\ \overline{\nabla_x f(\widetilde z^t) - \nabla_x f(z^\natural)}\end{bmatrix}
= \underbrace{\bigg(\int_0^1\nabla^2 f\big(z(\tau)\big)\,\mathrm d\tau\bigg)}_{:=A}
\begin{bmatrix}\widetilde h^t - h^\natural\\ \widetilde x^t - x^\natural\\ \overline{\widetilde h^t - h^\natural}\\ \overline{\widetilde x^t - x^\natural}\end{bmatrix}, \qquad (207)$$
where we denote $z(\tau) := z^\natural + \tau\big(\widetilde z^t - z^\natural\big)$ and $\nabla^2 f$ is the Wirtinger Hessian. To further simplify notation, denote $\widehat z^{t+1} = \begin{bmatrix}\widehat h^{t+1}\\ \widehat x^{t+1}\end{bmatrix}$. The identity (207) allows us to rewrite (206) as
$$\begin{bmatrix}\widehat z^{t+1} - z^\natural\\ \overline{\widehat z^{t+1} - z^\natural}\end{bmatrix} = \big(I - \eta D A\big)\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}. \qquad (208)$$
Take the squared Euclidean norm of both sides of (208) to reach
$$\big\|\widehat z^{t+1} - z^\natural\big\|_2^2 = \frac12\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}^*\big(I - \eta DA\big)^*\big(I - \eta DA\big)\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}
= \frac12\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}^*\Big(I + \eta^2 AD^2A - \eta\big(DA + AD\big)\Big)\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}$$
$$\le \big(1 + \eta^2\|A\|^2\|D\|^2\big)\big\|\widetilde z^{t} - z^\natural\big\|_2^2 - \frac{\eta}{2}\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}^*\big(DA + AD\big)\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}. \qquad (209)$$
Since $z(\tau)$ lies between $\widetilde z^t$ and $z^\natural$, we conclude from the assumptions (85) that for all $0\le\tau\le1$,
$$\max\big\{\|h(\tau) - h^\natural\|_2,\ \|x(\tau) - x^\natural\|_2\big\} \le \mathrm{dist}\big(z^t, z^\natural\big) \le \xi \le \delta;$$
$$\max_{1\le j\le m}\big|a_j^*\big(x(\tau) - x^\natural\big)\big| \le C_3\frac{1}{\log^{3/2} m}; \qquad
\max_{1\le j\le m}\big|b_j^* h(\tau)\big| \le C_4\frac{\mu}{\sqrt m}\log^2 m$$
for $\xi > 0$ sufficiently small. Moreover, it is straightforward to see that $\gamma_1 := \|\widetilde x^t\|_2^{-2}$ and $\gamma_2 := \|\widetilde h^t\|_2^{-2}$ satisfy
$$\max\big\{|\gamma_1 - 1|,\ |\gamma_2 - 1|\big\} \lesssim \max\big\{\|\widetilde h^t - h^\natural\|_2,\ \|\widetilde x^t - x^\natural\|_2\big\} \le \delta$$
as long as $\xi > 0$ is sufficiently small. We can now readily invoke Lemma 14 to arrive at
$$\|A\|\,\|D\| \le 3(1+\delta) \le 4 \qquad\text{and}\qquad
\frac{1}{\Big\|\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}\Big\|_2^2}\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}^*\big(DA + AD\big)\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix} \ge \frac14,$$
together with the identity
$$\Big\|\begin{bmatrix}\widetilde z^{t} - z^\natural\\ \overline{\widetilde z^{t} - z^\natural}\end{bmatrix}\Big\|_2^2 = 2\,\big\|\widetilde z^t - z^\natural\big\|_2^2.$$
Substitution into (209) indicates that
$$\big\|\widehat z^{t+1} - z^\natural\big\|_2^2 \le \big(1 + 16\eta^2 - \eta/4\big)\big\|\widetilde z^t - z^\natural\big\|_2^2.$$
When $0 < \eta \le 1/128$, this implies that $\big\|\widehat z^{t+1} - z^\natural\big\|_2^2 \le (1 - \eta/8)\big\|\widetilde z^t - z^\natural\big\|_2^2$, and hence
$$\big\|\widetilde z^{t+1} - z^\natural\big\|_2 \le \big\|\widehat z^{t+1} - z^\natural\big\|_2 \le (1 - \eta/8)^{1/2}\big\|\widetilde z^t - z^\natural\big\|_2 \le (1 - \eta/16)\,\mathrm{dist}\big(z^t, z^\natural\big). \qquad (210)$$
This completes the proof of Lemma 15.
Proof of Lemma 16. Reuse the notation in this subsection, namely, $\widehat z^{t+1} = \begin{bmatrix}\widehat h^{t+1}\\ \widehat x^{t+1}\end{bmatrix}$ with $\widehat h^{t+1} = \frac{1}{\alpha_t}h^{t+1}$ and $\widehat x^{t+1} = \alpha_t x^{t+1}$. From (210), one can tell that
$$\big\|\widetilde z^{t+1} - z^\natural\big\|_2 \le \big\|\widehat z^{t+1} - z^\natural\big\|_2 \le \mathrm{dist}\big(z^t, z^\natural\big).$$
Invoke Lemma 52 with $\beta = \alpha_t$ to get
$$\big|\alpha_{t+1} - \alpha_t\big| \lesssim \big\|\widehat z^{t+1} - z^\natural\big\|_2 \le \mathrm{dist}\big(z^t, z^\natural\big).$$
This combined with the assumption $\big||\alpha_t| - 1\big| \le 1/2$ implies that
$$|\alpha_t| \ge \frac12 \qquad\text{and}\qquad \Big|\frac{\alpha_{t+1}}{\alpha_t} - 1\Big| = \frac{|\alpha_{t+1} - \alpha_t|}{|\alpha_t|} \lesssim \mathrm{dist}\big(z^t, z^\natural\big) \lesssim C_1\frac{1}{\log^2 m}.$$
This finishes the proof of the first claim.
The second claim can be proved by induction. Suppose that $\big||\alpha_s| - 1\big| \le 1/2$ and $\mathrm{dist}(z^s, z^\natural) \le C_1(1 - \eta/16)^s/\log^2 m$ hold for all $0\le s\le\tau\le t$; then using our result in the first part gives
$$\big||\alpha_{\tau+1}| - 1\big| \le \big||\alpha_0| - 1\big| + \sum_{s=0}^{\tau}\big|\alpha_{s+1} - \alpha_s\big| \le \frac14 + c\sum_{s=0}^{\tau}\mathrm{dist}\big(z^s, z^\natural\big) \le \frac14 + \frac{c\,C_1}{(\eta/16)\log^2 m} \le \frac12$$
for $m$ sufficiently large. The proof is then complete by induction.
C.3  Proof of Lemma 17

Define the alignment parameter between $z^{t,(l)}$ and $\widetilde z^t$ as
$$\alpha_{\mathrm{mutual}}^{t,(l)} := \operatorname*{argmin}_{\alpha\in\mathbb C}\ \Big\|\frac{1}{\alpha}h^{t,(l)} - \frac{1}{\alpha_t}h^t\Big\|_2^2 + \big\|\alpha x^{t,(l)} - \alpha_t x^t\big\|_2^2.$$
Further denote, for simplicity of presentation, $\widehat z^{t,(l)} = \begin{bmatrix}\widehat h^{t,(l)}\\ \widehat x^{t,(l)}\end{bmatrix}$ with
$$\widehat h^{t,(l)} := \frac{1}{\alpha_{\mathrm{mutual}}^{t,(l)}}h^{t,(l)} \qquad\text{and}\qquad \widehat x^{t,(l)} := \alpha_{\mathrm{mutual}}^{t,(l)}\, x^{t,(l)}.$$
Clearly, $\widehat z^{t,(l)}$ is aligned with $\widetilde z^t$.
Armed with the above notation, we have
s
1 t+1,(l)
1
dist z t+1,(l) , zet+1 = min
h
−
ht+1
t+1
α
α
α
2
2
2
+ αxt+1,(l) − αt+1 xt+1
2
2
v
u
u
= min t
αt
αt+1
α
v
u
u
u
≤t
≤ max
αt
αt+1
!
!
αt+1
αt
1
1 αt+1 t+1,(l)
h
− ht+1
t
α α
αt
1
ht+1,(l)
t,(l)
!
2
+
2
2
1
− ht+1
αt
+
αt+1
αt
αt+1
αt
αmutual
2
"
1
1 t+1 #
t+1,(l)
t
h
−
h
α
t,(l)
αt
α
mutual
, t+1
t,(l)
α
αmutual xt+1,(l) − αt xt+1
t+1
αt
α t+1 xt+1,(l) − αt xt+1
α
t,(l)
αmutual xt+1,(l) − αt xt+1
2
2
2
(211)
2
(212)
,
2
t,(l)
where (211) follows by taking α = ααt αmutual . The latter bound is more convenient to work with when
controlling the gap between z t,(l) and z t .
We can then apply the gradient update rules (79) and (89) to get
#
"
1
ht+1,(l) − α1t ht+1
t,(l)
αmutual
t,(l)
αmutual xt+1,(l) − αt xt+1
η
1
(l)
t,(l)
t,(l)
t,(l)
h ,x
− α1t ht − kxηt k2 ∇h f (ht , xt )
h
− t,(l) 2 ∇h f
αt,(l)
kx k2
2
mutual
=
t,(l)
η
η
αmutual xt,(l) − t,(l) 2 ∇x f (l) ht,(l) , xt,(l) − αt xt − kht k2 ∇x f (ht , xt )
2
kh k2
η
(l) b t,(l) bt,(l)
e t − ηt 2 ∇h f h
b t,(l) −
e t, x
et
h ,x
− h
h
2∇ f
ke
x k2
kxbt,(l) k2 h
=
.
η
η
t
(l) b t,(l) bt,(l)
t
e t, x
e
bt,(l) − b t,(l)
e
x
−
x
∇
f
h
,
x
−
∇
f
h
x
x
2
et 2
kh
k2
kh k2
By construction, we can write the leave-one-out gradients as
∇h f (l) (h, x) = ∇h f (h, x) − (b∗l hx∗ al − yl ) bl a∗l x
∇x f
(l)
(h, x) = ∇h f (h, x) −
(b∗l hx∗ al
−
and
yl )al b∗l h,
which allow us to continue the derivation and obtain
t
"
η
t,(l)
1 t+1 #
1
t+1,(l)
b t,(l) −
b t,(l) , x
e − ηt 2 ∇h f
b
h
∇
f
h
− h
h
−
h
2
h
ke
x k2
t,(l)
bt,(l) k
x
αt
k
αmutual
2
=
t,(l)
η
t,(l)
t,(l)
t
t,(l)
t+1,(l)
t t+1
b
b
e − eηt 2 ∇x f
b
− x
x
− b t,(l) 2 ∇x f h , x
αmutual x
−α x
kh
k2
kh k2
1
∗ b t,(l) bt,(l)∗
∗ bt,(l)
x
a
−
y
b
h
b
a
x
2
l
l
l
l
l
kxbt,(l) k2
−η
.
1
b t,(l) x
b t,(l)
bt,(l)∗ al − yl al b∗l h
b∗l h
2
t,(l)
b
kh
k2
{z
}
|
e t, x
et
h
e t, x
et
h
:=J3
This further gives
"
1
t,(l)
ht+1,(l) −
αmutual
t,(l)
αmutual xt+1,(l)
1 t+1
h
αt
− αt xt+1
#
η
η
t,(l)
t,(l) bt,(l)
t
t et
b
b
e
e
− t,(l) 2 ∇h f h , x
− h − t,(l) 2 ∇h f h , x
h
kxb k2
kxb k2
=
t,(l)
η
η
t,(l)
t,(l)
t
t
t
b
e
b
b
e − b t,(l) 2 ∇x f h , x
e
x
− b t,(l) 2 ∇x f h , x
− x
kh k2
kh k2
|
{z
+η
|
e t, x
et
− t,(l) 2 ∇h f h
kxb k2
1
1
t
e t, x
e
−
∇
f
h
2
2
x
etk
b t,(l) k
kh
kh
2
{z2
1
ke
xt k22
1
:=ν1
:=ν2
In what follows, we bound the three terms ν1 , ν2 , and ν3 separately.
90
−ην3 .
}
}
(213)
1. Regarding the first term ν1 , one can adopt the same strategy as in Appendix C.2. Specifically, write
t
η
η
t
t,(l)
b t,(l) −
e −
b
h
∇
f
∇
f
(e
z
)
z
−
h
2
2
h
h
kxbt,(l) k2
kxbt,(l) k2
b t,(l) − h
et
h
t
t,(l)
η
η
t,(l)
t
b
e − b t,(l) 2 ∇x f (e
x
− b t,(l) 2 ∇x f zb
z) x
− x
t,(l)
et
−x
b
kh k2
kh k2
=
b t,(l) − h
et
b t,(l)
η
η
h
t,(l)
t
t
e
− kb
∇ f zb
− h − kb
∇ f (e
z)
h
xt,(l) k22 h
xt,(l) k22 h
t,(l) − x
t
b
e
x
η
η
bt,(l) − b t,(l)
et − b t,(l)
x
∇x f zbt,(l) − x
∇x f (e
zt)
kh
k22
kh
k22
−2
bt,(l) 2 IK
x
∇h f zbt,(l) − ∇h f (e
zt)
−2
b t,(l)
∇ f zbt,(l) − ∇ f (e
IK
h
zt)
x
x
2
−η
.
t,(l) − ∇ f (e
t)
t,(l) −2
b
∇
f
z
z
h
h
b
x
I
K
2
−2
∇x f zbt,(l) − ∇x f (e
zt)
b t,(l)
IK
h
2
|
{z
}
:=D
The fundamental theorem of calculus (see Appendix D.3.1) reveals that
b t,(l) − h
et
h
∇h f zbt,(l) − ∇h f (e
zt)
Z 1
∇ f zbt,(l) − ∇ f (e
t,(l)
t
b
et
−x
z)
x
x
x
∇2 f (z (τ )) dτ t,(l)
=
t,(l)
t
b
et
− ∇h f (e
z)
∇h f zb
h
−h
{z
}
|0
t,(l)
t
t,(l)
∇x f zb
− ∇x f (e
z)
b
et
x
−x
:=A
,
where we abuse the notation and denote z (τ ) = zet + τ zbt,(l) − zet . In order to invoke Lemma 14, we
need to verify the conditions required therein. Recall the induction hypothesis (90b) that
s
µ
µ2 K log9 m
dist z t,(l) , zet = zbt,(l) − zet 2 ≤ C2 √
,
m
m
and the fact that z (τ ) lies between zbt,(l) and zet . For all 0 ≤ τ ≤ 1:
√
(a) If m µ2 K log13/2 m, then
o
n
z (τ ) − z \ 2 ≤ max zbt,(l) − z \ 2 , zet − z \ 2 ≤ zet − z \ 2 + zbt,(l) − zet
s
1
µ
µ2 K log9 m
1
≤ C1 2 + C2 √
≤ 2C1 2 ,
m
m
log m
log m
2
where we have used the induction hypotheses (90a) and (90b);
(b) If m µ2 K log6 m, then
max a∗j x (τ ) − x\
1≤j≤m
bt,(l) − x
et + a∗j x
e t − x\
= max τ a∗j x
1≤j≤m
bt,(l) − x
et
≤ max a∗j x
1≤j≤m
e t − x\
+ max a∗j x
≤ max kaj k2 zbt,(l) − zet
1≤j≤m
√
µ
≤ 3 K · C2 √
m
s
1≤j≤m
2
+ C3
1
log
3/2
m
µ2 K log9 m
1
1
+ C3 3/2 ≤ 2C3 3/2 ,
m
log m
log m
which follows from the bound (190) and the induction hypotheses (90b) and (90c);
91
(214)
(c) If m µK log5/2 m, then
b t,(l) − h
e t + b∗ h
et
max b∗j h (τ ) = max τ b∗j h
j
1≤j≤m
1≤j≤m
b t,(l) − h
et
≤ max b∗j h
1≤j≤m
b t,(l)
et
+ max b∗j h
1≤j≤m
et
et
≤ max kbj k2 h
− h 2 + max b∗j h
1≤j≤m
1≤j≤m
s
r
K
µ2 K log9 m
µ
µ
µ
· C2 √
+ C4 √ log2 m ≤ 2C4 √ log2 m,
(215)
≤
m
m
m
m
m
p
which makes use of the fact kbj k2 = K/m as well as the induction hypotheses (90b) and (90d).
These properties satisfy the condition (82) required in Lemma 14. The other two conditions (83) and
(84) are also straightforward to check and hence we omit it. Thus, we can repeat the argument used in
Appendix C.2 to obtain
kν1 k2 ≤ (1 − η/16) · zbt,(l) − zet 2 .
2. In terms of the second term ν2 , it is easily seen that
(
1
1
1
kν2 k2 ≤ max
−
,
2
2
et
et 2
bt,(l) 2
x
x
h
2
2
−
1
2
2
b t,(l)
h
)
∇h f (e
zt)
∇x f (e
zt)
.
2
We first note that the upper bound on k∇2 f (·) k (which essentially provides a Lipschitz constant on the
gradient) in Lemma 14 forces
1
∇h f (e
z t ) − ∇h f z \
∇h f (e
zt)
=
. zet − z \ 2 . C1 2 ,
t
\
∇x f (e
zt)
∇
f
(e
z
)
−
∇
f
z
x
x
log
m
2
2
where the first identity follows since ∇h f z \ = 0, and the last inequality comes from the induction
bt,(l) 2 1, one can easily verify that
hypothesis (90a). Additionally, recognizing that ke
xt k2 x
1
2
ke
xt k2
−
1
bt,(l)
x
2
2
=
bt,(l)
x
2
2
2
− ke
xt k2
2
2
2
bt,(l)
ke
xt k2 · x
.
bt,(l)
x
2
et
− x
2
bt,(l) − x
et
. x
2
.
A similar bound holds for the other term involving h. Combining the estimates above thus yields
kν2 k2 . C1
1
zbt,(l) − zet
log2 m
2
.
3. When it comes to the last term ν3 , one first sees that
b t,(l) x
b t,(l) x
bt,(l)∗ al − yl bl a∗l x
bt,(l) ≤ b∗l h
bt,(l)∗ al − yl kbl k2 a∗l x
bt,(l) .
b∗l h
2
The bounds (189) and (214) taken collectively yield
bt,(l) ≤ a∗l x\ + a∗l x
bt,(l) − x\
a∗l x
.
p
log m + C3
In addition, the same argument as in obtaining (215) tells us that
1
log
3/2
m
p
log m.
b t,(l) − h\ ) . C4 √µ log2 m.
b∗l (h
m
Combine the previous two bounds to obtain
b t,(l) x
b t,(l) (b
b t,(l) − h\ )x\∗ al
bt,(l)∗ al − yl ≤ b∗l h
b∗l h
xt,(l) − x\ )∗ al + b∗l (h
92
(216)
t,(l)
b t,(l) − h\ ) · a∗ x\
b t,(l) · a∗ (b
− x\ ) + b∗l (h
≤ b∗l h
l
l x
∗ b t,(l)
\
∗ \
∗
t,(l)
\
b t,(l) − h\ ) · a∗ x\
≤ bl (h
· al (b
− h ) + bl h
x
− x ) + b∗l (h
l
log2 m
µ
1
log2 m p
log5/2 m
. C4 µ √
+√
· C3 3/2 + C4 µ √
· log m . C4 µ √
.
m
m
m
m
log m
Substitution into (216) gives
b t,(l) x
bt,(l)∗ al
b∗l h
Similarly, we can also derive
− yl
bt,(l)
bl a∗l x
2
log5/2 m
. C4 µ √
·
m
r
K p
· log m.
m
(217)
b t,(l)
b t,(l) x
b t,(l) x
b t,(l) ≤ b∗ h
bt,(l)∗ al − yl kal k2 b∗l h
bt,(l)∗ al − yl al b∗l h
b∗l h
l
. C4 µ
log5/2 m √
µ
√
· K · C4 √ log2 m
m
m
Putting these bounds together indicates that
µ
kν3 k2 . (C4 ) √
m
2
s
µ2 K log9 m
.
m
The above bounds taken together with (212) and (213) ensure the existence of a constant C > 0 such that
s
t+1
t
2 K log9 m
α
α
1
µ
η
µ
2
zbt,(l) − zet 2 + C (C4 ) η √
dist z t+1,(l) , zet+1 ≤ max
1−
, t+1
+ CC1 η 2
αt
α
16
m
m
log m
s
2 K log9 m
(i) 1 − η/21
µ
η t,(l)
µ
2
≤
1−
zb
− zet 2 + C (C4 ) η √
1 − η/20
20
m
m
s
µ
µ2 K log9 m
η t,(l)
2
zb
− zet 2 + 2C (C4 ) η √
≤ 1−
21
m
m
s
η
µ
µ2 K log9 m
2
= 1−
dist z t,(l) , zet + 2C (C4 ) η √
21
m
m
s
(ii)
µ
µ2 K log9 m
≤ C2 √
.
m
m
Here, (i) holds as long as m is sufficiently large such that CC1 1/log2 m 1 and
t+1
α
αt
1 − η/21
max
,
<
,
αt
αt+1
1 − η/20
(218)
which is guaranteed by Lemma 16. The inequality (ii) arises from the induction hypothesis (90b) and taking
C2 > 0 is sufficiently large.
e t+1 , x
et+1 ) and
Finally we establish the second inequality claimed in the lemma. Take (h1 , x1 ) = (h
t+1,(l)
t+1,(l)
b
b
(h2 , x2 ) = (h
,x
) in Lemma 55. Since both (h1 , x1 ) and (h2 , x2 ) are close enough to (h\ , x\ ), we
deduce that
s
µ
µ2 K log9 m
zet+1,(l) − zet+1 2 . zbt+1,(l) − zet+1 2 . C2 √
m
m
as claimed.
93
C.4
Proof of Lemma 18
Before going forward, we make note of the following inequality
max b∗l
1≤l≤m
1
αt+1
ht+1 ≤
αt
αt+1
1 t+1
1
h
≤ (1 + δ) max b∗l ht+1
t
1≤l≤m
α
αt
max b∗l
1≤l≤m
for some small δ log−2 m, where the last relation follows from Lemma 16 that
1
αt+1
−1 .
≤δ
αt
log2 m
for m sufficiently large. In view of the above inequality, the focus of our subsequent analysis will be to
control maxl b∗l α1t ht+1 .
The gradient update rule for ht+1 (cf. (79a)) gives
m
X
1 t+1 e t
e tx
et∗ − h\ x\∗ aj a∗j x
et ,
h
= h − ηξ
bj b∗j h
t
α
j=1
e t = 1 ht and x
et = αt xt . Here and below, we denote ξ = 1/ke
xt k22 for notational convenience. The
where h
αt
above formula can be further decomposed into the following terms
m
X
1 t+1 e t
t
e t a∗ x
h
= h − ηξ
bj b∗j h
je
αt
j=1
= 1 − ηξ x\
− ηξ
where we use the fact that
|
Pm
j=1
m
X
j=1
2
2
e t − ηξ
h
|
e t a∗ x\
bj b∗j h
j
2
+ ηξ
j=1
m
X
j=1
2
m
X
et
bj b∗j h\ x\∗ aj a∗j x
t
e t a∗ x
bj b∗j h
je
{z
2
− a∗j x\
2
}
:=v1
− x\
{z
2
+ ηξ
2
}
:=v2
|
m
X
j=1
et ,
bj b∗j h\ x\∗ aj a∗j x
{z
}
:=v3
bj b∗j = IK . In the sequel, we shall control each term separately.
1. We start with |b∗l v1 | by making the observation that
m
h
X
∗ t ∗
i
1 ∗
t
\
∗ \
∗
t
\ ∗
e t a∗ x
e
e
e
b∗l bj b∗j h
|bl v1 | =
−
x
a
x
+
a
x
a
x
−
x
j
j
j
j
ηξ
j=1
≤
m
X
j=1
|b∗l bj |
et
max b∗j h
1≤j≤m
e t − x\
max a∗j x
1≤j≤m
Combining the induction hypothesis (90c) and the condition (189) yields
et ≤ max a∗j x
e t − x\
max a∗j x
1≤j≤m
1≤j≤m
+ max a∗j x\ ≤ C3
1≤j≤m
as long as m is sufficiently large. This further implies
et − x\
max a∗j x
1≤j≤m
et + a∗j x\
a∗j x
≤ C3
1
log
3/2
m
1
log3/2 m
et + a∗j x\
a∗j x
+5
.
p
p
log m ≤ 6 log m
p
· 11 log m ≤ 11C3
1
.
log m
Substituting it into (219) and taking Lemma 48, we arrive at
1 ∗
e t · C3 1 . C3 max b∗ h
e t ≤ 0.1 max b∗ h
et ,
|bl v1 | . log m · max b∗j h
j
j
1≤j≤m
1≤j≤m
1≤j≤m
ηξ
log m
with the proviso that C3 is sufficiently small.
94
(219)
2. We then move on to |b∗l v3 |, which obeys
m
m
X
X
1 ∗
∗
∗ \ \∗
∗ \
et − x\ .
b∗l bj b∗j h\ x\∗ aj a∗j x
bl bj bj h x aj aj x +
|b v3 | ≤
ηξ l
j=1
j=1
(220)
Regarding the first term, we have the following lemma, whose proof is given in Appendix C.4.1.
2
Lemma 28. Suppose
m ≥ CK log m for some sufficiently large constant C > 0. Then with probability
−10
at least 1 − O m
, one has
m
X
µ
b∗l bj b∗j h\ x\∗ aj a∗j x\ − b∗l h\ . √ .
m
j=1
For the remaining term, we apply the same strategy as in bounding |b∗l v1 | to get
m
X
b∗l bj b∗j h\ x\∗ aj a∗j
t
e −x
x
j=1
\
≤
m
X
j=1
|b∗l bj |
max
1≤j≤m
b∗j h\
max
1≤j≤m
a∗j
p
µ
1
≤ 4 log m · √ · C3 3/2 · 5 log m
m
log m
µ
. C3 √ ,
m
t
e −x
x
\
max
1≤j≤m
a∗j x\
where the second line follows from the incoherence (36), the induction hypothesis (90c), the condition
(189) and Lemma 48. Combining the above three inequalities and the incoherence (36) yields
µ
1 ∗
µ
µ
|b v3 | . b∗l h\ + √ + C3 √ . (1 + C3 ) √ .
ηξ l
m
m
m
3. Finally, we need to control |b∗l v2 |. For convenience of presentation, we will only bound |b∗1 v2 | in the
sequel, but the argument easily extends to all other bl ’s. The idea is to group {bj }1≤j≤m into bins each
containing τ adjacent vectors, and to look at each bin separately. Here, τ poly log(m) is some integer
to be specified later. For notational simplicity, we assume m/τ to be an integer, although all arguments
continue to hold when m/τ is not an integer. For each 0 ≤ l ≤ m − τ , the following summation over τ
adjacent data obeys
b∗1
τ
X
j=1
= b∗1
et
bl+j b∗l+j h
τ
X
j=1
=
a∗l+j x\
et
bl+1 b∗l+1 h
τ
X
a∗l+j x\
j=1
+ b∗1
τ
X
j=1
2
2
− x\
a∗l+j x\
− x\
2
2
2
2
2
− x\
2
2
+ b∗1
τ
X
j=1
e t + b∗
b∗1 bl+1 b∗l+1 h
1
∗ et
bl+1 (bl+j − bl+1 ) h
a∗l+j x\
2
t ∗
e
bl+j b∗l+j − bl+1 b∗l+1 h
al+j x\
τ
X
j=1
− x\
et
(bl+j − bl+1 ) b∗l+j h
2
2
.
a∗l+j x\
2
2
− x\
− x\
2
2
2
2
(221)
We will now bound each term in (221) separately.
Pτ
∗
\ 2
\ 2
• Before bounding the first term in (221), we first bound the pre-factor
j=1 |al+j x | − kx k2 . Notably, the fluctuation of this quantity does not grow fast as it is the sum of i.i.d. random variables
95
2
over a group of relatively large size, i.e. τ . Since 2 a∗j x\ follows the χ22 distribution, by standard
concentration results (e.g. [RV+ 13, Theorem 1.1]), with probability exceeding 1 − O m−10 ,
τ
X
a∗l+j x\
2
j=1
− kx\ k22
.
p
τ log m.
With this result in place, we can bound the first term in (221) as
τ
X
p
2
e t . τ log m |b∗ bl+1 | max b∗ h
et .
b∗1 bl+1 b∗l+1 h
a∗l+j x\ − kx\ k22
l
1
1≤l≤m
j=1
Taking the summation over all bins gives
m
m
τ
τ −1 X
τ −1
X
X
p
2
∗
∗
t
\ 2
∗
\
e
et .
b bkτ +1 bkτ +1 h . τ log m
akτ +j x − kx k2
|b∗1 bkτ +1 | max b∗l h
1
1≤l≤m
k=0
j=1
(222)
k=0
It is straightforward to see from the proof of Lemma 48 that
m
τ −1
X
k=0
|b∗1 bkτ +1 |
=
kb1 k22
+
m
τ −1
X
k=1
|b∗1 bkτ +1 | ≤
K
+O
m
log m
τ
(223)
.
Substituting (223) into the previous inequality (222) gives
s
m
√
τ
τ −1 X
3
X
2
e t . K τ log m + log m max b∗ h
et
a∗kτ +j x\ − kx\ k22
b∗ b
b∗
h
l
1 kτ +1 kτ +1
1≤l≤m
m
τ
k=0
j=1
et ,
≤ 0.1 max b∗l h
1≤l≤m
√
3
as long as m K τ log m and τ log m.
• The second term of (221) obeys
b∗1
τ
X
j=1
et
(bl+j − bl+1 ) b∗l+j h
a∗l+j x\
2
− x\
2
2
v
v
uX
uX
τ
τ
u
2u
et t
|b∗1 (bl+j − bl+1 )| t
a∗l+j x\
≤ max b∗l h
1≤l≤m
j=1
2
j=1
2
− kx\ k2
v
uX
u τ
√
2
et t
. τ max b∗l h
|b∗1 (bl+j − bl+1 )| ,
1≤l≤m
2
j=1
where the first inequality is due to Cauchy-Schwarz, and the second one holds because of the following
lemma, whose proof can be found in Appendix C.4.2.
Lemma 29. Suppose τ ≥ C log4 m for some sufficiently large constant C > 0. Then with probability
exceeding 1 − O m−10 ,
τ
X
2
2 2
a∗j x\ − x\ 2 . τ.
j=1
With the above bound in mind, we can sum over all bins of size τ to obtain
b∗1
m
τ −1
τ
XX
k=0 j=1
et
(bkτ +j − bkτ +1 ) b∗kτ +j h
96
n
a∗l+j x\
2
− x\
2
2
o
v
m
uX
τ −1
√ X
u τ
2
et
t
.
τ
|b∗1 (bkτ +j − bkτ +1 )|
max b∗ h
1≤l≤m l
j=1
k=0
≤ 0.1 max
1≤l≤m
et
b∗l h
.
Here, the last line arises from Lemma 51, which says that for any small constant c > 0, as long as
m τ K log m
v
m
uX
τ −1
X
u τ
1
2
t
|b∗1 (bkτ +j − bkτ +1 )| ≤ c √ .
τ
j=1
k=0
• The third term of (221) obeys
b∗1
τ
X
j=1
∗ et
bl+1 (bl+j − bl+1 ) h
≤ |b∗1 bl+1 |
.τ
τ
X
|b∗1 bl+1 |
2
a∗l+j x\
j=1
max
0≤l≤m−τ, 1≤j≤τ
where the last line relies on the inequality
τ
X
n
a∗l+j x
\ 2
j=1
−
2
x\ 2
2
a∗l+j x\
− x\
2
− x\
2
2
o
∗ et
max
(b
− bl+1 ) h
0≤l≤m−τ, 1≤j≤τ l+j
2
∗ et
(bl+j − bl+1 ) h
,
v
u τ
√ uX
≤ τt
a∗l+j x\
2
j=1
−
2
kx\ k2
2
.τ
owing to Lemma 29 and the Cauchy-Schwarz inequality. Summing over all bins gives
m
τ −1
X
b∗1
k=0
.τ
τ
X
j=1
m
τ −1
X
k=0
. log m
∗ et
bkτ +1 (bkτ +j − bkτ +1 ) h
|b∗1 bkτ +1 |
max
0≤l≤m−τ, 1≤j≤τ
max
0≤l≤m−τ, 1≤j≤τ
n
a∗kτ +j x\
2
− x\
2
2
o
∗ et
(bl+j − bl+1 ) h
∗ et
(bl+j − bl+1 ) h
,
where the last relation makes use of (223) with the proviso that m Kτ . It then boils down to bounding
∗ et
et
. Without loss of generality, it suffices to look at (bj − b1 )∗ h
max0≤l≤m−τ, 1≤j≤τ (bl+j − bl+1 ) h
for all 1 ≤ j ≤ τ . Specifically, we claim for the moment that
µ
∗ et
max (bj − b1 ) h
≤ cC4 √ log m
m
(224)
1≤j≤τ
for some sufficiently small constant c > 0, provided that m τ K log4 m. As a result,
m
τ −1
X
k=0
b∗1
τ
X
j=1
∗ et
bkτ +1 (bkτ +j − bkτ +1 ) h
n
a∗kτ +j x\
2
− x\
2
2
o
µ
. cC4 √ log2 m.
m
• Putting the above results together, we get
m
τ
τ −1
n
X
X
1 ∗
\
e t a∗
|b1 v2 | ≤
b∗1
bkτ +j b∗kτ +j h
kτ +j x
ηξ
j=1
2
k=0
97
−
2
x\ 2
o
≤ 0.2 max
1≤l≤m
et
b∗l h
µ
2
+O cC4 √ log m .
m
4. Combining the preceding bounds guarantees the existence of some constant C8 > 0 such that
e t + C8 (1 + C3 )ηξ √µ + C8 ηξcC4 √µ log2 m
e t + 0.3ηξ max b∗ h
e t+1 ≤ (1 + δ) (1 − ηξ) b∗ h
b∗l h
l
l
1≤l≤m
m
m
(i)
µ
1
µ
µ
2
2
√
√
√
(1 − 0.7ηξ) C4
≤ 1+O
log m + C8 (1 + C3 )ηξ
+ C8 ηξcC4
log m
m
m
m
log2 m
(ii)
µ
≤ C4 √ log2 m.
m
Here, (i) uses the induction hypothesis (90d), and (ii) holds as long as c > 0 is sufficiently small (so that
(1 + δ)C8 ηξc 1) and η > 0 is some sufficiently small constant. In order for the proof to go through, it
suffices to pick
τ = c10 log4 m
for some sufficiently large constant c10 > 0. Accordingly, we need the sample size to exceed
m µ2 τ K log4 m µ2 K log8 m.
Finally, it remains to verify the claim (224), which we accomplish in Appendix C.4.3.
C.4.1  Proof of Lemma 28

Denote
$$w_j := b_l^* b_j\, b_j^* h^\natural\, x^{\natural *} a_j\, a_j^* x^\natural.$$
Recognizing that $\mathbb E[a_j a_j^*] = I_K$ and $\sum_{j=1}^m b_j b_j^* = I_K$, we can write the quantity of interest as the sum of independent random variables, namely,
$$\sum_{j=1}^m b_l^* b_j\, b_j^* h^\natural x^{\natural *} a_j a_j^* x^\natural - b_l^* h^\natural = \sum_{j=1}^m\big(w_j - \mathbb E[w_j]\big).$$
Further, the sub-exponential norm (see the definition in [Ver12]) of $w_j - \mathbb E[w_j]$ obeys
$$\big\|w_j - \mathbb E[w_j]\big\|_{\psi_1} \overset{(\mathrm i)}{\le} 2\|w_j\|_{\psi_1} \overset{(\mathrm{ii})}{\le} 4\,|b_l^* b_j|\,\big|b_j^* h^\natural\big|\,\big\|a_j^* x^\natural\big\|_{\psi_2}^2 \overset{(\mathrm{iii})}{\lesssim} |b_l^* b_j|\,\frac{\mu}{\sqrt m} \overset{(\mathrm{iv})}{\le} \frac{\mu\sqrt K}{m},$$
where (i) arises from the centering property of the sub-exponential norm (see [Ver12, Remark 5.18]), (ii) utilizes the relationship between the sub-exponential norm and the sub-Gaussian norm [Ver12, Lemma 5.14], (iii) is a consequence of the incoherence condition (36) and the fact that $\|a_j^* x^\natural\|_{\psi_2}\lesssim 1$, and (iv) follows from $\|b_j\|_2 = \sqrt{K/m}$. Let $M := \max_{j\in[m]}\|w_j - \mathbb E[w_j]\|_{\psi_1}$ and
$$V^2 := \sum_{j=1}^m\big\|w_j - \mathbb E[w_j]\big\|_{\psi_1}^2 \lesssim \sum_{j=1}^m\Big(|b_l^* b_j|\,\frac{\mu}{\sqrt m}\Big)^2 = \frac{\mu^2}{m}\,\|b_l\|_2^2 = \frac{\mu^2 K}{m^2},$$
which follows since $\sum_{j=1}^m|b_l^* b_j|^2 = b_l^*\big(\sum_{j=1}^m b_j b_j^*\big)b_l = \|b_l\|_2^2 = K/m$. Let $a_j := \|w_j - \mathbb E[w_j]\|_{\psi_1}$ and $X_j := (w_j - \mathbb E[w_j])/a_j$. Since $\|X_j\|_{\psi_1} = 1$, $\sum_{j=1}^m a_j^2 \lesssim V^2$, and $\max_{j\in[m]}|a_j| \le M$, we can invoke [Ver12, Proposition 5.16] to obtain that
$$\mathbb P\bigg(\Big|\sum_{j=1}^m a_j X_j\Big| \ge t\bigg) \le 2\exp\bigg(-c\min\Big\{\frac{t}{M},\ \frac{t^2}{V^2}\Big\}\bigg),$$
where $c > 0$ is some universal constant. By taking $t = \mu/\sqrt m$, we see there exists some constant $c_0$ such that
$$\mathbb P\bigg(\Big|\sum_{j=1}^m b_l^* b_j b_j^* h^\natural x^{\natural *}a_j a_j^* x^\natural - b_l^* h^\natural\Big| \ge \frac{\mu}{\sqrt m}\bigg) \le 2\exp\bigg(-c\min\Big\{\frac{\mu/\sqrt m}{M},\ \frac{\mu^2/m}{V^2}\Big\}\bigg)$$
$$\le 2\exp\bigg(-c_0\min\Big\{\frac{\mu/\sqrt m}{\mu\sqrt K/m},\ \frac{\mu^2/m}{\mu^2 K/m^2}\Big\}\bigg) = 2\exp\Big(-c_0\min\big\{\sqrt{m/K},\ m/K\big\}\Big).$$
We conclude the proof by observing that $m\gg K\log^2 m$ as stated in the assumption.
C.4.2  Proof of Lemma 29

From the elementary inequality $(a - b)^2 \le 2\big(a^2 + b^2\big)$, we see that
$$\sum_{j=1}^{\tau}\Big(\big|a_j^* x^\natural\big|^2 - \big\|x^\natural\big\|_2^2\Big)^2 \le 2\sum_{j=1}^{\tau}\Big(\big|a_j^* x^\natural\big|^4 + \big\|x^\natural\big\|_2^4\Big) = 2\sum_{j=1}^{\tau}\big|a_j^* x^\natural\big|^4 + 2\tau, \qquad (225)$$
where the last identity holds true since $\|x^\natural\|_2 = 1$. It thus suffices to control $\sum_{j=1}^{\tau}|a_j^* x^\natural|^4$. Let $\xi_j := a_j^* x^\natural$, which is a standard complex Gaussian random variable. Since the $\xi_j$'s are statistically independent, one has
$$\mathrm{Var}\bigg(\sum_{j=1}^{\tau}|\xi_j|^4\bigg) \le C_4\,\tau$$
for some constant $C_4 > 0$. It then follows from the hypercontractivity concentration result for Gaussian polynomials [SS12, Theorem 1.9] that
$$\mathbb P\bigg(\Big|\sum_{j=1}^{\tau}|\xi_j|^4 - \mathbb E\Big[\sum_{j=1}^{\tau}|\xi_j|^4\Big]\Big| \ge c\tau\bigg) \le C\exp\Bigg(-c_2\bigg(\frac{c^2\tau^2}{\mathrm{Var}\big(\sum_{j=1}^{\tau}|\xi_j|^4\big)}\bigg)^{1/4}\Bigg) \le C\exp\bigg(-c_2\Big(\frac{c^2}{C_4}\Big)^{1/4}\tau^{1/4}\bigg) \le O\big(m^{-10}\big),$$
for some constants $c, c_2, C > 0$, with the proviso that $\tau\gtrsim\log^4 m$. As a consequence, with probability at least $1 - O(m^{-10})$,
$$\sum_{j=1}^{\tau}\big|a_j^* x^\natural\big|^4 \lesssim \tau + \sum_{j=1}^{\tau}\mathbb E\Big[\big|a_j^* x^\natural\big|^4\Big] \asymp \tau,$$
which together with (225) concludes the proof.
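As a numerical aside (ours), the quantity controlled here is easy to simulate: for a standard complex Gaussian $\xi$ one has $\mathbb E|\xi|^4 = 2$ and $\mathrm{Var}(|\xi|^2) = 1$, so $\sum_{j\le\tau}|\xi_j|^4$ hovers around $2\tau$ and the left-hand side of (225) is indeed of order $\tau$.

```python
import numpy as np

rng = np.random.default_rng(9)
tau, trials = 2000, 200

# xi_j ~ CN(0,1): standard complex Gaussians, one batch of tau per trial
xi = (rng.standard_normal((trials, tau)) + 1j * rng.standard_normal((trials, tau))) / np.sqrt(2)
fourth = np.sum(np.abs(xi) ** 4, axis=1)             # sum_j |xi_j|^4 per trial
lhs = np.sum((np.abs(xi) ** 2 - 1.0) ** 2, axis=1)   # LHS of (225) with ||x_nat|| = 1

print("mean of sum |xi|^4 / tau :", fourth.mean() / tau, "(theory: 2)")
print("max over trials of LHS/tau:", lhs.max() / tau)
```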
C.4.3
Proof of Claim (224)
We will prove the claim by induction. Again, observe that
αt−1
∗ et
∗ 1 t
h =
(bj − b1 ) h
= (bj − b1 )
αt
αt
∗
(bj − b1 )
1
αt−1
ht ≤ (1 + δ) (bj − b1 )
∗
∗
1
for some δ log−2 m, which allows us to look at (bj − b1 ) t−1
ht instead.
α
Use the gradient update rule for ht (cf. (79a)) once again to get
1
αt−1
t
1
t−1
m
X
bl b∗l
t−1
t−1∗
\
\∗
h x
−h x
2
kxt−1 k2 l=1
m
X
e t−1 − ηθ
e t−1 x
et−1∗ − h\ x\∗ al a∗l x
et−1 ,
=h
bl b∗l h
h =
αt−1
h
−
η
l=1
99
al a∗l xt−1
!
1
αt−1
ht
2
.
2
et−1
where we denote θ := 1/ x
∗
(bj − b1 )
1
αt−1
This further gives rise to
∗ e t−1
∗
ht = (bj − b1 ) h
− ηθ (bj − b1 )
m
X
l=1
e t−1 x
et−1
et−1∗ − h\ x\∗ al a∗l x
bl b∗l h
m
X
∗ e t−1
∗
e t−1 x
et−1
et−1∗ − h\ x\∗ x
= (bj − b1 ) h
− ηθ (bj − b1 )
bl b∗l h
l=1
m
X
∗
e t−1 x
et−1
et−1∗ − h\ x\∗ (al a∗l − IK ) x
− ηθ (bj − b1 )
bl b∗l h
= 1−
ηθke
xt−1 k22
∗ e t−1
∗
et−1
(bj − b1 ) h
+ ηθ (bj − b1 ) h\ x\∗ x
|
{z
}
∗
− ηθ (bj − b1 )
|
l=1
:=β1
m
X
l=1
e t−1 x
et−1∗ − h\ x\∗ (al a∗l − IK ) x
et−1 ,
bl b∗l h
{z
:=β2
where the last identity makes use of the fact that
Pm
l=1
bl b∗l = IK . For β1 , one can get
}
µ
1
∗
xt−1 k2 ≤ 4 √ ,
|β1 | ≤ (bj − b1 ) h\ kx\ k2 ke
ηθ
m
et−1 and x\ are extremely close, i.e.
where we utilize the incoherence condition (36) and the fact that x
et−1 − x\ 2 ≤ dist z t−1 , z \ 1
x
=⇒
ke
xt−1 k2 ≤ 2.
Regarding the second term β2 , we have
)
(m
X
1
∗
e t−1 x
et−1∗ − h\ x\∗ (al a∗l − IK ) x
et−1 .
(bj − b1 ) bl
max b∗l h
|β2 | ≤
1≤l≤m
ηθ
l=1
|
{z
}
:=ψ
The term ψ can be bounded as follows
e t−1 x
et−1∗ (al a∗l − I) x
et−1 + max b∗l h\ x\∗ (al a∗l − IK ) x
et−1
ψ ≤ max b∗l h
1≤l≤m
1≤l≤m
e t−1 max x
et−1∗ (al a∗l − IK ) x
et−1 + max b∗l h\ max x\∗ (al a∗l − IK ) x
et−1
≤ max b∗l h
1≤l≤m
1≤l≤m
1≤l≤m
1≤l≤m
µ
∗ e t−1
√
.
. log m max bl h
+
1≤l≤m
m
Here, we have used the incoherence condition (36) and the facts that
et−1 ≤ a∗l x
et−1
(e
xt−1 )∗ (al a∗l − I) x
x
\∗
(al a∗l
t−1
e
− I) x
≤
2
2
et−1 2
a∗l x
et−1
+ x
a∗l x\ 2
2
2
. log m,
et−1
+ x
2
kx\ k2 . log m,
which are immediate consequences of (90c) and (189). Combining this with Lemma 50, we see that for any
small constant c > 0
1
µ
1
∗ e t−1
√
|β2 | ≤ c
max b h
+
ηθ
log m 1≤l≤m l
m
holds as long as m τ K log4 m.
To summarize, we arrive at
∗ et
et−1
(bj − b1 ) h
≤ (1 + δ)
1 − ηθ x
2
2
µ
1
∗ e t−1
(bj − b1 ) h
+ 4ηθ √ + cηθ
log m
m
100
e t−1 + √µ
max b∗l h
1≤l≤m
m
.
et−1
Making use of the induction hypothesis (85c) and the fact that x
2
2
≥ 0.9, we reach
µ
cµηθ
∗ e t−1
t
e
+ cC4 ηθ √ log m + √
(bj − b1 ) h ≤ (1 + δ) (1 − 0.9ηθ) (bj − b1 ) h
.
m
m log m
∗
Recall that δ 1/ log2 m. As a result, if η > 0 is some sufficiently small constant and if
µ
µ
µ
∗ e t−1
≤ 10c C4 √ log m + √
(bj − b1 ) h
≤ 20cC4 √ log m
m
ηθ m log m
m
holds, then one has
µ
∗ et
≤ 20cC4 √ log m.
(bj − b1 ) h
m
Therefore, this concludes the proof of the claim (224) by induction, provided that the base case is true,
i.e. for some c > 0 sufficiently small
µ
∗ e0
(bj − b1 ) h
≤ 20cC4 √ log m.
m
(226)
The claim (226) is proved in Appendix C.6 (see Lemma 30).
C.5  Proof of Lemma 19

Recall that $\check h^0$ and $\check x^0$ are the leading left and right singular vectors of $M$, respectively. Applying a variant of Wedin's $\sin\Theta$ theorem [Dop00, Theorem 2.1], we derive that
$$\min_{\alpha\in\mathbb C,\,|\alpha|=1}\ \big\|\alpha\check h^0 - h^\natural\big\|_2 + \big\|\alpha\check x^0 - x^\natural\big\|_2 \le \frac{c_1\|M - \mathbb E[M]\|}{\sigma_1(\mathbb E[M]) - \sigma_2(M)} \qquad (227)$$
for some universal constant $c_1 > 0$. Regarding the numerator of (227), it has been shown in [LLSW16, Lemma 5.20] that for any $\xi > 0$,
$$\|M - \mathbb E[M]\| \le \xi \qquad (228)$$
with probability exceeding $1 - O(m^{-10})$, provided that
$$m \ge \frac{c_2\,\mu^2 K\log^2 m}{\xi^2}$$
for some universal constant $c_2 > 0$. For the denominator of (227), we can take (228) together with Weyl's inequality to demonstrate that
$$\sigma_1(\mathbb E[M]) - \sigma_2(M) \ge \sigma_1(\mathbb E[M]) - \sigma_2(\mathbb E[M]) - \|M - \mathbb E[M]\| \ge 1 - \xi,$$
where the last inequality utilizes the facts that $\sigma_1(\mathbb E[M]) = 1$ and $\sigma_2(\mathbb E[M]) = 0$. These together with (227) reveal that
$$\min_{\alpha\in\mathbb C,\,|\alpha|=1}\ \big\|\alpha\check h^0 - h^\natural\big\|_2 + \big\|\alpha\check x^0 - x^\natural\big\|_2 \le \frac{c_1\xi}{1-\xi} \le 2c_1\xi \qquad (229)$$
as long as $\xi\le 1/2$.
Now we connect the preceding bound (229) with the scaled singular vectors $h^0 = \sqrt{\sigma_1(M)}\,\check h^0$ and $x^0 = \sqrt{\sigma_1(M)}\,\check x^0$. For any $\alpha\in\mathbb C$ with $|\alpha| = 1$, from the definition of $h^0$ and $x^0$ we have
$$\big\|\alpha h^0 - h^\natural\big\|_2 + \big\|\alpha x^0 - x^\natural\big\|_2 = \big\|\sqrt{\sigma_1(M)}\,\alpha\check h^0 - h^\natural\big\|_2 + \big\|\sqrt{\sigma_1(M)}\,\alpha\check x^0 - x^\natural\big\|_2.$$
Since $\alpha\check h^0, \alpha\check x^0$ are also the leading left and right singular vectors of $M$, we can invoke Lemma 60 to get
$$\big\|\alpha h^0 - h^\natural\big\|_2 + \big\|\alpha x^0 - x^\natural\big\|_2 \le \sqrt{\sigma_1(\mathbb E[M])}\Big(\big\|\alpha\check h^0 - h^\natural\big\|_2 + \big\|\alpha\check x^0 - x^\natural\big\|_2\Big) + \frac{2\,|\sigma_1(M) - \sigma_1(\mathbb E[M])|}{\sqrt{\sigma_1(M)} + \sqrt{\sigma_1(\mathbb E[M])}}$$
$$= \big\|\alpha\check h^0 - h^\natural\big\|_2 + \big\|\alpha\check x^0 - x^\natural\big\|_2 + \frac{2\,|\sigma_1(M) - \sigma_1(\mathbb E[M])|}{\sqrt{\sigma_1(M)} + 1}. \qquad (230)$$
In addition, we can apply Weyl's inequality once again to deduce that
$$\big|\sigma_1(M) - \sigma_1(\mathbb E[M])\big| \le \|M - \mathbb E[M]\| \le \xi, \qquad (231)$$
where the last inequality comes from (228). Substitute (231) into (230) to obtain
$$\big\|\alpha h^0 - h^\natural\big\|_2 + \big\|\alpha x^0 - x^\natural\big\|_2 \le \big\|\alpha\check h^0 - h^\natural\big\|_2 + \big\|\alpha\check x^0 - x^\natural\big\|_2 + 2\xi. \qquad (232)$$
Taking the minimum over $\alpha$, one can thus conclude that
$$\min_{\alpha\in\mathbb C,\,|\alpha|=1}\big\|\alpha h^0 - h^\natural\big\|_2 + \big\|\alpha x^0 - x^\natural\big\|_2 \le \min_{\alpha\in\mathbb C,\,|\alpha|=1}\big\|\alpha\check h^0 - h^\natural\big\|_2 + \big\|\alpha\check x^0 - x^\natural\big\|_2 + 2\xi \le 2c_1\xi + 2\xi,$$
where the last inequality comes from (229). Since $\xi$ is arbitrary, by taking $m/(\mu^2 K\log^2 m)$ to be large enough, we finish the proof for (92). Carrying out similar arguments (which we omit here), we can also establish (93).
The last claim in Lemma 19 that $\big||\alpha_0| - 1\big| \le 1/4$ is a direct corollary of (92) and Lemma 52.
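A minimal sketch (ours) of the spectral initialization analyzed above, under the noiseless model $y_j = b_j^* h^\natural\, x^{\natural *} a_j$: form $M = \sum_j y_j\, b_j a_j^*$ (whose expectation is $h^\natural x^{\natural *}$), take its leading singular pair, and scale both vectors by $\sqrt{\sigma_1(M)}$. The alignment phase below is the standard unit-modulus minimizer, computed in closed form.

```python
import numpy as np

rng = np.random.default_rng(4)
m, K = 6000, 10

kgrid = np.arange(m)[:, None] * np.arange(K)[None, :]
B = np.exp(-2j * np.pi * kgrid / m) / np.sqrt(m)       # rows b_j^*
A = (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))) / np.sqrt(2)

h_nat = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x_nat = rng.standard_normal(K) + 1j * rng.standard_normal(K)
h_nat, x_nat = h_nat / np.linalg.norm(h_nat), x_nat / np.linalg.norm(x_nat)
y = (B @ h_nat) * np.conj(A.conj() @ x_nat)            # y_j = (b_j^* h)(x^* a_j)

M = (B.conj() * y[:, None]).T @ A.conj()               # M = sum_j y_j b_j a_j^*
U, s, Vh = np.linalg.svd(M)
h0 = np.sqrt(s[0]) * U[:, 0]                           # scaled leading singular vectors
x0 = np.sqrt(s[0]) * Vh[0].conj()

c = np.vdot(h_nat, h0) + np.vdot(x_nat, x0)            # best unit-modulus alignment phase
alpha = np.conj(c) / abs(c)
print("initialization error:",
      np.linalg.norm(alpha * h0 - h_nat) + np.linalg.norm(alpha * x0 - x_nat))
```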
C.6
Proof of Lemma 20
The proof is composed of three steps:
• In the first step, we show that the normalized singular vectors of M and M (l) are close enough; see (240).
• We then proceed by passing this proximity result to the scaled singular vectors; see (243).
• Finally, we translate the usual `2 distance metric to the distance function we defined in (34); see (245).
Along the way, we also prove the incoherence of h0 with respect to {bl }.
Here comes the formal proof. Recall that ȟ0 and x̌0 are respectively the leading left and right singular
vectors of M , and ȟ0,(l) and x̌0,(l) are respectively the leading left and right singular vectors of M (l) . Invoke
Wedin’s sinΘ theorem [Dop00, Theorem 2.1] to obtain
n
o
M − M (l) x̌0,(l) 2 + ȟ0,(l)∗ M − M (l) 2
0
0,(l)
0
0,(l)
min
αȟ − ȟ
+ αx̌ − x̌
≤ c1
2
2
α∈C,|α|=1
σ1 M (l) − σ2 (M )
for some universal constant c1 > 0. Using the Weyl’s inequality we get
σ1 M (l) − σ2 (M ) ≥ σ1 E[M (l) ] − kM (l) − E[M (l) ]k − σ2 (E[M ]) − kM − E[M ]k
≥ 3/4 − kM (l) − E[M (l) ]k − kM − E[M ]k ≥ 1/2,
where the penultimate inequality follows from
σ1 E[M (l) ] ≥ 3/4
for m sufficiently large, and the last inequality comes from [LLSW16, Lemma 5.20], provided that m ≥
c2 µ2 K log2 m for some sufficiently large constant c2 > 0. As a result, denoting
n
o
β 0,(l) := argmin
αȟ0 − ȟ0,(l) 2 + αx̌0 − x̌0,(l) 2
(233)
α∈C,|α|=1
allows us to obtain
β 0,(l) ȟ0 − ȟ0,(l)
+ β 0,(l) x̌0 − x̌0,(l)
2
2
≤ 2c1
n
M − M (l) x̌0,(l)
+ ȟ0,(l)∗ M − M (l)
2
2
o
. (234)
It then boils down to controlling the two terms on the right-hand side of (234). By construction,
M − M (l) = bl b∗l h\ x\∗ al a∗l .
102
• To bound the first term, observe that
M − M (l) x̌0,(l)
= bl b∗l h\ x\∗ al a∗l x̌0,(l) = kbl k2 b∗l h\ a∗l x\ · a∗l x̌0,(l)
2
s
2
µ
K log m
≤ 30 √ ·
,
(235)
m
m
p
where we use the fact that kbl k2 = K/m, the
incoherence condition (36), the bound (189) and the fact
that with probability exceeding 1 − O m−10 ,
p
max a∗l x̌0,(l) ≤ 5 log m,
2
1≤l≤m
due to the independence between x̌0,(l) and al .
• To bound the second term, for any α
e obeying |e
α| = 1 one has
ȟ0,(l)∗ M − M (l)
2
= ȟ0,(l)∗ bl b∗l h\ x\∗ al a∗l
2
= kal k2 b∗l h\ a∗l x\ · b∗l ȟ0,(l)
p
√
µ
≤ 3 K · √ · 5 log m · b∗l ȟ0,(l)
m
r
r
2
(ii)
µ K log m
µ2 K log m ∗
∗ 0
≤ 15
α
ebl ȟ + 15
bl α
eȟ0 − ȟ0,(l)
m
m
r
r
r
2
2
(iii)
µ K log m ∗ 0
µ K log m
K
bl ȟ + 15
≤ 15
·
α
eȟ0 − ȟ0,(l)
m
m
m
(i)
2
.
Here, (i) arises from the incoherence condition (36) together with the bounds (189) p
and (190), the inequality
(ii) comes from the triangle inequality, and the last line (iii) holds since kbl k2 = K/m and |e
α| = 1.
Substitution of the above bounds into (234) yields
β 0,(l) ȟ0 − ȟ0,(l) + β 0,(l) x̌0 − x̌0,(l)
2
2
s
r
r
r
2
2
K log m
µ K log m ∗ 0
µ2 K log m
K
µ
+ 15
bl ȟ + 15
·
α
eȟ0 − ȟ0,(l)
≤ 2c1 30 √ ·
m
m
m
m
m
2
.
Since the previous inequality holds for all |e
α| = 1, we can choose α
e = β 0,(l) and rearrange terms to get
r
r !
µ2 K log m K 0,(l) 0
β
ȟ − ȟ0,(l) + β 0,(l) x̌0 − x̌0,(l)
1 − 30c1
m
m
2
2
s
r
µ
K log2 m
µ2 K log m ∗ 0
≤ 60c1 √ ·
+ 30c1
bl ȟ .
m
m
m
p
p
Under the condition that m µK log1/2 m, one has 1 − 30c1 µ2 K log m/m · K/m ≥ 12 , and therefore
s
r
K log2 m
µ2 K log m ∗ 0
µ
0,(l) 0
0,(l)
0,(l) 0
0,(l)
β
ȟ − ȟ
+ 60c1
+ β
x̌ − x̌
≤ 120c1 √ ·
bl ȟ ,
m
m
m
2
2
which immediately implies that
o
n
max
β 0,(l) ȟ0 − ȟ0,(l) + β 0,(l) x̌0 − x̌0,(l)
1≤l≤m
2
2
s
r
µ
K log2 m
µ2 K log m
≤ 120c1 √ ·
+ 60c1
max b∗l ȟ0 .
1≤l≤m
m
m
m
103
(236)
We then move on to b∗l ȟ0 . The aim is to show that max1≤l≤m b∗l ȟ0 can also be upper bounded by
the left-hand side of (236). By construction, we have M x̌0 = σ1 (M ) ȟ0 , which further leads to
b∗l ȟ0 =
1
b∗ M x̌0
σ1 (M ) l
(i)
≤2
m
X
(b∗l bj ) b∗j h\ x\∗ aj a∗j x̌0
j=1
m
X
∗ \
≤ 2
|b∗l bj | max
bj h a∗j x\ a∗j x̌0
j=1
1≤j≤m
n
µ p
≤ 8 log m · √ · 5 log m max
a∗j x̌0,(j) + kaj k2 β 0,(j) x̌0 − x̌0,(j)
1≤j≤m
m
s
2
µ log m
µ2 K log3 m
+ 120
max β 0,(j) x̌0 − x̌0,(j) ,
≤ 200 √
1≤j≤m
m
m
2
(ii)
2
o
(237)
where β 0,(j) is as defined in (233). Here, (i) comes from the lower bound σ1 (M ) ≥ 1/2. The bound (ii) follows
by
the incoherence condition (36), the bound (189), the triangle inequality, as well as the estimate
√
Pmcombining
∗
∗ 0,(j)
≤ 5 log m
j=1 |bl bj | ≤ 4 log m from Lemma 48. The last line uses the upper estimate max1≤j≤m aj x̌
and (190). Our bound (237) further implies
s
2
µ
log
m
µ2 K log3 m
max b∗l ȟ0 ≤ 200 √
(238)
+ 120
max β 0,(j) x̌0 − x̌0,(j) .
1≤j≤m
1≤l≤m
m
m
2
The above bound (238) taken together with (236) gives
n
o
s
K log2 m
+ β 0,(l) x̌0 − x̌0,(l)
1≤l≤m
m
2
2
s
r
µ log2 m
µ2 K log m
µ2 K log3 m
+ 60c1
200 √
+ 120
max β 0,(j) x̌0 − x̌0,(j)
1≤j≤m
m
m
m
max
β 0,(l) ȟ0 − ȟ0,(l)
As long as m µ2 K log2 m we have 60c1
we are left with
max
1≤l≤m
n
β 0,(l) ȟ0 − ȟ0,(l)
µ
≤ 120c1 √ ·
m
2
.
(239)
q
p
µ2 K log m/m · 120 µ2 K log3 m/m ≤ 1/2. Rearranging terms,
2
+ β 0,(l) x̌0 − x̌0,(l)
2
o
µ
≤ c3 √
m
s
µ2 K log5 m
m
(240)
for some constant c3 > 0. Further, this bound combined with (238) yields
s
s
2
3
2
µ log m
µ K log m
µ
µ2 K log5 m
µ log2 m
max b∗l ȟ0 ≤ 200 √
+ 120
· c3 √
≤ c2 √
1≤l≤m
m
m
m
m
m
(241)
for some constant c2 > 0, with the proviso that m µ2 K log2 m.
We now translate the preceding bounds to the scaled version. Recall from the bound (231) that
(242)
1/2 ≤ 1 − ξ ≤ kM k = σ1 (M ) ≤ 1 + ξ ≤ 2,
as long as ξ ≤ 1/2. For any α ∈ C with |α| = 1, αȟ0 , αx̌0 are still the leading left and right singular vectors
of M . Hence, we can use Lemma 60 to derive that
n
o
σ1 M − σ1 M (l) ≤ M − M (l) x̌0,(l) + αȟ0 − ȟ0,(l) + αx̌0 − x̌0,(l)
kM k
2
104
2
2
M − M (l) x̌0,(l)
≤
and
2
+2
+ αx0 − x0,(l)
2
p
q
0
=
σ1 (M ) αȟ − σ1 M (l) ȟ0,(l)
αh0 − h0,(l)
≤
p
σ1 (M )
2
n
αȟ0 − ȟ0,(l)
√ n
≤ 2 αȟ0 − ȟ0,(l)
2
2
2
+ αx0 − x0,(l)
+ αx̌0 − x̌0,(l)
2
≤
√
2
which together with (235) and (240) implies
min
α∈C,|α|=1
n
αh0 − h0,(l)
+
2
+ αx̌0 − x̌0,(l)
Taking the previous two bounds collectively yields
αh0 − h0,(l)
n
2
o
+
αȟ0 − ȟ0,(l)
2
o
q
p
σ1 (M )αx̌0 − σ1 M (l) x̌0,(l)
2
o
√
M − M (l) x̌0,(l)
2
2
+ αx̌0 − x̌0,(l)
+ αx0 − x0,(l)
2
2 σ1 (M ) − σ1 (M (l) )
p
+p
σ1 (M ) + σ1 (M (l) )
2 σ1 (M ) − σ1 (M (l) ) .
2
2
+6
o
n
αȟ0 − ȟ0,(l)
µ
≤ c5 √
m
s
2
+ αx̌0 − x̌0,(l)
µ2 K log5 m
m
2
o
,
(243)
for some constant c5 > 0, as long as ξ is sufficiently small. Moreover, we have
n
1 0
α 0,(l)
h −
h
+ α0 x0 − αα0 x0,(l) ≤ 2 h0 − αh0,(l) + x0 − αx0,(l)
2
2
2
α0
α0
for any |α| = 1, where α0 is defined in (38) and, according to Lemma 19, satisfies
1/2 ≤ |α0 | ≤ 2.
2
o
(244)
Therefore,
r
2
α 0,(l) 2
h0 −
h
+ α0 x0 − αα0 x0,(l)
0
α∈C,|α|=1
2
2
α
α 0,(l)
1 0
+ α0 x0 − αα0 x0,(l)
≤ min
h −
h
α∈C,|α|=1
α0
α0
2
n
o
≤ 2 min
h0 − αh0,(l) + x0 − αx0,(l)
α∈C,|α|=1
2
2
s
µ
µ2 K log5 m
≤ 2c5 √
.
m
m
min
1
α0
2
Furthermore, we have
dist z
0,(l)
0
, ze
r
1 0,(l)
1 0 2
2
h
−
h 2 + αx0,(l) − α0 x0 2
0
α∈C
α
α
r
1 0
α 0,(l) 2
≤ min
h −
h
+ α0 x0 − αα0 x0,(l)
0
α∈C,|α|=1
2
α
α0
s
µ2 K log5 m
µ
,
≤2c5 √
m
m
= min
2
2
(245)
where the second line follows since the latter is minimizing over a smaller feasible set. This completes the
proof for the claim (96).
105
e 0 , one first sees that
Regarding b∗l h
p
b∗l h0 =
σ1 (M )b∗l ȟ0 ≤
√
2c2
µ log2 m
√
,
m
where the last relation holds due to (241) and (242). Hence, using the property (244), we have
e 0 = b∗
b∗l h
l
1
α0
h0 ≤
1
α0
√
µ log2 m
b∗l h0 ≤ 2 2c2 √
,
m
which finishes the proof of the claim (97).
Before concluding this section, we note a byproduct of the proof. Specifically, we can establish the claim
required in (226) using many results derived in this section. This is formally stated in the following lemma.
Lemma 30. Fix any small constant c > 0. Suppose the number of samples obeys m τ K log4 m. Then
with probability at least 1 − O m−10 , we have
µ
∗ e0
≤ c √ log m.
max (bj − b1 ) h
m
1≤j≤τ
Proof. Instate the notation and hypotheses in Appendix C.6. Recognize that
p
∗ 1
∗ 1
∗ e0
h0 = (bj − b1 )
σ1 (M )ȟ0
(bj − b1 ) h
= (bj − b1 )
0
0
α
α
1 p
∗
σ1 (M ) (bj − b1 ) ȟ0
≤
α0
∗
≤ 4 (bj − b1 ) ȟ0 ,
∗
where the√ last inequality comes from (242) and (244). It thus suffices to prove that (bj − b1 ) ȟ0 ≤
cµ log m/ m for some c > 0 small enough. To this end, it can be seen that
∗
1
∗
(bj − b1 ) M x̌0
σ1 (M )
m
X
∗
≤2
(bj − b1 ) bk b∗k h\ x\∗ ak a∗k x̌0
(bj − b1 ) ȟ0 =
≤2
(i)
≤c
(ii)
k=1
m
X
k=1
∗
(bj − b1 ) bk
!
max
1≤k≤m
b∗k h\ a∗k x\ a∗k x̌0
n
1
µ p
√
·
·
5
log
m
max
a∗j x̌0,(j) + kaj k2 α0,(j) x̌0 − x̌0,(j)
1≤j≤m
m
log2 m
µ
1
µ
. c√
≤ c √ log m,
m log m
m
2
o
(246)
where (i) comes from Lemma 50, the incoherence condition (36), and the estimate (189). The last line (ii)
holds since we have already established (see (237) and (240))
n
o p
max
a∗j x̌0,(j) + kaj k2 α0,(j) x̌0 − x̌0,(j)
. log m.
1≤j≤m
2
The proof is then complete.
C.7
Proof of Lemma 21
Recall that α0 and α0,(l) are the alignment parameters between z 0 and z \ , and between z 0,(l) and z \ ,
respectively, that is,
1 0
2
h − h\ 22 + αx0 − x\ 2 ,
α0 := argmin
α
α∈C
106
α0,(l) := argmin
α∈C
Also, we let
0,(l)
αmutual := argmin
α∈C
1 0,(l)
h
− h\
α
2
2
1 0,(l)
1 0
h
h
−
α
α0
+ αx0,(l) − x\
2
2
2
2
.
+ αx0,(l) − α0 x0
2
2
.
The triangle inequality together with (94) and (245) then tells us that
s
2
1
2
0,(l)
h0,(l) − h\ + αmutual x0,(l) − x\ 2
0,(l)
2
αmutual
s
s
2
2
2
1 0
1 0
1
2
0,(l)
≤
h −
h0,(l) + α0 x0 − αmutual x0,(l) +
h − h\ + kα0 x0 − x\ k2
0
0,(l)
2
2
α0
α
2
αmutual
s
µ
µ2 K log5 m
1
≤ 2c5 √
+ C1 2
m
m
log m
1
≤ 2C1 2 ,
log m
√
where the last relation holds as long as m µ2 K log9/2 m.
Let
1
1
0,(l)
x1 = α0 x0 , h1 =
h0
and
x2 = αmutual x0,(l) , h2 =
h0,(l) .
0
0,(l)
α
αmutual
It is easy to see that x1 , h1 , x2 , h2 satisfy the assumptions in Lemma 55, which implies
s
s
2
2
1
1 0 2
1 0
1
2
0,(l)
0,(l)
0,(l)
0,(l)
0
0
h
−
h
h −
h0,(l) + α0 x0 − αmutual x0,(l)
+ α
x
−α x 2 .
0
0
0,(l)
0,(l)
2
2
2
α
α
α
αmutual
s
µ2 K log5 m
µ
.√
,
(247)
m
m
where the last line comes from (245). With this upper estimate at hand, we are now ready to show that
with high probability,
a∗l α0 x0 − x\
(i)
≤ a∗l α0,(l) x0,(l) − x\
+ a∗l α0 x0 − α0,(l) x0,(l)
p
≤ 5 log m α0,(l) x0,(l) − x\
(ii)
+ kal k2 α0 x0 − α0,(l) x0,(l)
s
(iii) p
√
1
µ
µ2 K log5 m
√
.
log m ·
+
K
m
m
log2 m
(iv)
.
1
log
3/2
m
2
2
,
where (i) follows from the triangle inequality, (ii) uses Cauchy-Schwarz and the independence between x0,(l)
and al , (iii) holds because of (95) and (247) under the condition m µ2 K log6 m, and (iv) holds true as
long as m µ2 K log4 m.
107
D  Technical lemmas

D.1  Technical lemmas for phase retrieval

D.1.1  Matrix concentration inequalities

Lemma 31. Suppose that $a_j\overset{\mathrm{i.i.d.}}{\sim}\mathcal N(0, I_n)$ for every $1\le j\le m$. Fix any small constant $\delta > 0$. With probability at least $1 - C_2 e^{-c_2 m}$, one has
$$\bigg\|\frac1m\sum_{j=1}^m a_j a_j^\top - I_n\bigg\| \le \delta,$$
as long as $m\ge c_0 n$ for some sufficiently large constant $c_0 > 0$. Here, $C_2, c_2 > 0$ are some universal constants.

Proof. This is an immediate consequence of [Ver12, Corollary 5.35].
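A quick numerical sanity check (ours) of Lemma 31: for Gaussian designs the empirical second-moment matrix concentrates around the identity, with spectral deviation of order roughly $\sqrt{n/m}$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
for m in (500, 2000, 8000):
    A = rng.standard_normal((m, n))            # rows a_j ~ N(0, I_n)
    dev = np.linalg.norm(A.T @ A / m - np.eye(n), 2)
    print(f"m = {m:5d}: ||(1/m) sum a_j a_j^T - I_n|| = {dev:.3f},  sqrt(n/m) = {np.sqrt(n/m):.3f}")
```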
Lemma 32. Suppose that $a_j\overset{\mathrm{i.i.d.}}{\sim}\mathcal N(0, I_n)$ for every $1\le j\le m$. Fix any small constant $\delta > 0$. With probability at least $1 - O(n^{-10})$, we have
$$\bigg\|\frac1m\sum_{j=1}^m\big(a_j^\top x^\natural\big)^2 a_j a_j^\top - \Big(\big\|x^\natural\big\|_2^2 I_n + 2x^\natural x^{\natural\top}\Big)\bigg\| \le \delta\big\|x^\natural\big\|_2^2,$$
provided that $m\ge c_0\, n\log n$ for some sufficiently large constant $c_0 > 0$.

Proof. This is adapted from [CLS15, Lemma 7.4].

Lemma 33. Suppose that $a_j\overset{\mathrm{i.i.d.}}{\sim}\mathcal N(0, I_n)$ for every $1\le j\le m$. Fix any small constant $\delta > 0$ and any constant $C > 0$. Suppose $m\ge c_0 n$ for some sufficiently large constant $c_0 > 0$. Then with probability at least $1 - C_2 e^{-c_2 m}$,
$$\bigg\|\frac1m\sum_{j=1}^m\big(a_j^\top x\big)^2\mathbb 1_{\{|a_j^\top x|\le C\}}\, a_j a_j^\top - \Big(\beta_1 x x^\top + \beta_2\|x\|_2^2 I_n\Big)\bigg\| \le \delta\|x\|_2^2, \qquad \forall\, x\in\mathbb R^n$$
holds for some absolute constants $c_2, C_2 > 0$, where
$$\beta_1 := \mathbb E\big[\xi^4\mathbb 1_{\{|\xi|\le C\}}\big] - \mathbb E\big[\xi^2\mathbb 1_{\{|\xi|\le C\}}\big] \qquad\text{and}\qquad \beta_2 := \mathbb E\big[\xi^2\mathbb 1_{\{|\xi|\le C\}}\big],$$
with $\xi$ being a standard Gaussian random variable.

Proof. This is supplied in [CC17, supplementary material].
D.1.2
Matrix perturbation bounds
Lemma 34. Let λ1 (A), u be the leading eigenvalue and eigenvector of a symmetric matrix A, respectively,
e u
e respectively. Suppose that
e be the leading eigenvalue and eigenvector of a symmetric matrix A,
and λ1 (A),
e
e
λ1 (A), λ1 (A), kAk, kAk ∈ [C1 , C2 ] for some C1 , C2 > 0. Then,
p
λ1 (A) u −
Proof. Observe that
q
e u
e
λ1 (A)
q
p
e u
e
λ1 (A) u − λ1 (A)
2
≤
2
p
≤
e u
A−A
√
2 C1
λ1 (A) u −
q
108
2
p
C2
e k2 .
+
ku − u
C2 + √
C1
e u
λ1 (A)
+
2
q
q
e u − λ1 (A)
e u
e
λ1 (A)
2
p
≤
λ1 (A) −
q
e +
λ1 (A)
q
e ku − u
e k2 ,
λ1 (A)
where the last inequality follows since kuk2 = 1. Using the identity
p
λ1 (A) −
q
√
a−
√
(248)
√
√
b = (a − b)/( a + b), we have
e
e
λ1 A − λ1 (A)
λ1 A − λ1 (A)
e =
√
≤
λ1 (A)
,
q
p
2 C1
e
λ1 (A) + λ1 (A)
e This combined with (248) yields
where the last inequality comes from our assumptions on λ1 (A) and λ1 (A).
q
p
e u
e
λ1 (A) u − λ1 (A)
2
e
λ1 A − λ1 (A)
p
√
e k2 .
≤
+ C2 ku − u
2 C1
(249)
e , use the relationship between the eigenvalue and the eigenvector to obtain
To control λ1 A − λ1 (A)
e = u> Au − u
eu
e > Ae
λ1 (A) − λ1 (A)
e u + u> Au
e −u
e + u
e −u
eu
e > Au
e > Au
e > Ae
≤ u> A − A
e ,
e u + 2 ku − u
e k2 A
≤ A−A
2
which together with (249) gives
$$\Big\|\sqrt{\lambda_1(A)}\,u - \sqrt{\lambda_1(\widetilde A)}\,\widetilde u\Big\|_2 \le \frac{\big\|(A - \widetilde A)\,u\big\|_2 + 2\|u - \widetilde u\|_2\,\|\widetilde A\|}{2\sqrt{C_1}} + \sqrt{C_2}\,\|u - \widetilde u\|_2 \le \frac{\|A - \widetilde A\|}{2\sqrt{C_1}} + \Big(\sqrt{C_2} + \frac{C_2}{\sqrt{C_1}}\Big)\|u - \widetilde u\|_2,$$
as claimed.

D.2  Technical lemmas for matrix completion

D.2.1  Orthogonal Procrustes problem

The orthogonal Procrustes problem is a matrix approximation problem which seeks an orthogonal matrix $R$ to best "align" two matrices $A$ and $B$. Specifically, for $A, B\in\mathbb R^{n\times r}$, define $\widehat R$ to be the minimizer of
$$\operatorname*{minimize}_{R\in\mathcal O^{r\times r}}\ \|AR - B\|_{\mathrm F}. \qquad (250)$$
The first lemma is concerned with the characterization of the minimizer $\widehat R$ of (250).

Lemma 35. For $A, B\in\mathbb R^{n\times r}$, $\widehat R$ is the minimizer of (250) if and only if $\widehat R^\top A^\top B$ is symmetric and positive semidefinite.

Proof. This is an immediate consequence of [tB77, Theorem 2].

Let $A^\top B = U\Sigma V^\top$ be the singular value decomposition of $A^\top B\in\mathbb R^{r\times r}$. It is easy to check that $\widehat R := UV^\top$ satisfies the conditions that $\widehat R^\top A^\top B$ is both symmetric and positive semidefinite. In view of Lemma 35, $\widehat R = UV^\top$ is the minimizer of (250). In the special case when $C := A^\top B$ is invertible, $\widehat R$ enjoys the following equivalent form:
$$\widehat R = \widehat{\mathcal H}(C) := C\big(C^\top C\big)^{-1/2}, \qquad (251)$$
where $\widehat{\mathcal H}(\cdot)$ is an $\mathbb R^{r\times r}$-valued function on $\mathbb R^{r\times r}$. This motivates us to look at the perturbation bounds for the matrix-valued function $\widehat{\mathcal H}(\cdot)$, which is formulated in the following lemma.
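A small sketch (ours) of the closed-form solution just described: the best orthogonal aligner is $UV^\top$ from the SVD of $A^\top B$, and when $A^\top B$ is invertible it coincides with $\widehat{\mathcal H}(A^\top B) = C(C^\top C)^{-1/2}$. The test matrices below are synthetic and chosen only to exercise the formulas.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 40, 5
A = rng.standard_normal((n, r))
R_true, _ = np.linalg.qr(rng.standard_normal((r, r)))   # a ground-truth rotation
B = A @ R_true + 0.01 * rng.standard_normal((n, r))     # rotated copy of A plus noise

C = A.T @ B
U, _, Vt = np.linalg.svd(C)
R_hat = U @ Vt                                          # minimizer of ||A R - B||_F over orthogonal R

# Equivalent form H(C) = C (C^T C)^{-1/2}, valid when C is invertible
w, Q = np.linalg.eigh(C.T @ C)
R_alt = C @ (Q @ np.diag(w ** -0.5) @ Q.T)

print("||UV^T - C(C^T C)^{-1/2}|| =", np.linalg.norm(R_hat - R_alt))
print("residual with R_hat :", np.linalg.norm(A @ R_hat - B, "fro"))
print("residual with R_true:", np.linalg.norm(A @ R_true - B, "fro"))
```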
Lemma 36. Let C ∈ Rr×r be a nonsingular matrix. Then for any matrix E ∈ Rr×r with kEk ≤ σmin (C)
and any unitarily invariant norm |||·|||, one has
c (·) is defined above.
where H
c (C + E) − H
c (C)
H
2
|||E|||,
σr−1 (C) + σr (C)
≤
Proof. This is an immediate consequence of [Mat93, Theorem 2.3].
With Lemma 36 in place, we are ready to present the following bounds on two matrices after “aligning”
them with X \ .
Lemma 37. Instate the notation in Section 3.2. Suppose X1 , X2 ∈ Rn×r are two matrices such that
X1 − X \
X \ ≤ σmin /2,
kX1 − X2 k X
\
(252a)
(252b)
≤ σmin /4.
Denote
R1 := argmin X1 R − X \
R2 := argmin X2 R − X \
and
F
R∈O r×r
R∈O r×r
F
.
Then the following two inequalities hold true:
kX1 R1 − X2 R2 k ≤ 5κ kX1 − X2 k
kX1 R1 − X2 R2 kF ≤ 5κ kX1 − X2 kF .
and
Proof. Before proving the claims, we first gather some immediate consequences of the assumptions (252).
>
Denote C = X1> X \ and E = (X2 − X1 ) X \ . It is easily seen that C is invertible since
C − X \> X \ ≤ X1 − X \
(i)
X \ ≤ σmin /2
(ii)
=⇒
σr (C) ≥ σmin /2,
(253)
where (i) follows from the assumption (252a) and (ii) is a direct application of Weyl’s inequality. In addition,
C + E = X2> X \ is also invertible since
(i)
(ii)
kEk ≤ kX1 − X2 k X \ ≤ σmin /4 < σr (C) ,
where (i) arises from the assumption (252b) and (ii) holds because of (253). When both C and C + E are
invertible, the orthonormal matrices R1 and R2 admit closed-form expressions as follows
R1 = C C > C
−1/2
and
Moreover, we have the following bound on kX1 k:
(i)
kX1 k ≤ X1 − X \ + X \
(ii)
≤
h
i−1/2
>
R2 = (C + E) (C + E) (C + E)
.
σmin
σmax
+ X\ ≤
+ X\
2 kX \ k
2 kX \ k
(iii)
≤ 2 X\ ,
(254)
where (i) is the triangle inequality, (ii) uses the assumption (252a) and (iii) arises from the fact that X \ =
√
σmax .
With these in place, we turn to establishing the claimed bounds. We will focus on the upper bound
on kX1 R1 − X2 R2 kF , as the bound on kX1 R1 − X2 R2 k can be easily obtained using the same argument.
Simple algebra reveals that
kX1 R1 − X2 R2 kF = k(X1 − X2 ) R2 + X1 (R1 − R2 )kF
≤ kX1 − X2 kF + kX1 k kR1 − R2 kF
≤ kX1 − X2 kF + 2 X \ kR1 − R2 kF ,
110
(255)
where the first inequality uses the fact that kR2 k = 1 and the last inequality comes from (254). An
application of Lemma 36 leads us to conclude that
2
kEkF
σr (C) + σr−1 (C)
2
>
(X2 − X1 ) X \
≤
σmin
F
2
\
≤
kX2 − X1 kF X ,
σmin
kR1 − R2 kF ≤
(256)
(257)
where (256) utilizes (253). Combine (255) and (257) to reach
kX1 R1 − X2 R2 kF ≤ kX1 − X2 kF +
4
kX2 − X1 kF X \
2
σmin
≤ (1 + 4κ) kX1 − X2 kF ,
which finishes the proof by noting that κ ≥ 1.
D.2.2  Matrix concentration inequalities

This section collects various measure concentration results regarding the Bernoulli random variables $\{\delta_{j,k}\}_{1\le j,k\le n}$, which are ubiquitous in the analysis for matrix completion.

Lemma 38. Fix any small constant $\delta > 0$, and suppose that $n^2 p\gtrsim\delta^{-2}\mu n r\log n$. Then with probability exceeding $1 - O(n^{-10})$,
$$(1 - \delta)\|B\|_{\mathrm F} \le \frac{1}{\sqrt p}\big\|\mathcal P_\Omega(B)\big\|_{\mathrm F} \le (1 + \delta)\|B\|_{\mathrm F}$$
holds simultaneously for all $B\in\mathbb R^{n\times n}$ lying within the tangent space of $M^\natural$.

Proof. This result has been established in [CR09, Section 4.2] for asymmetric sampling patterns (where each $(i,j)$, $i\ne j$, is included in $\Omega$ independently). It is straightforward to extend the proof and the result to symmetric sampling patterns (where each $(i,j)$, $i\ge j$, is included in $\Omega$ independently). We omit the proof for conciseness.

Lemma 39. Fix a matrix $M\in\mathbb R^{n\times n}$. Suppose $n^2 p\ge c_0\, n\log n$ for some sufficiently large constant $c_0 > 0$. With probability at least $1 - O(n^{-10})$, one has
$$\Big\|\frac1p\mathcal P_\Omega(M) - M\Big\| \le C\sqrt{\frac np}\,\|M\|_\infty,$$
where $C > 0$ is some absolute constant.

Proof. See [KMO10a, Lemma 3.2]. Similar to Lemma 38, the result therein was provided for the asymmetric sampling patterns but can be easily extended to the symmetric case.
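A numerical illustration (ours) of the estimate in Lemma 39: the inverse-probability-weighted masking $p^{-1}\mathcal P_\Omega(M)$ is an unbiased surrogate for $M$, and its spectral deviation is of order $\sqrt{n/p}\,\|M\|_\infty$ up to constants.

```python
import numpy as np

rng = np.random.default_rng(7)
n, r, p = 400, 3, 0.1

U = rng.standard_normal((n, r)) / np.sqrt(n)
M = U @ U.T                                         # a low-rank symmetric matrix
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask, 1).T            # symmetric Bernoulli(p) sampling pattern

dev = np.linalg.norm(mask * M / p - M, 2)
print(f"||p^-1 P_Omega(M) - M||   = {dev:.4f}")
print(f"sqrt(n/p) * ||M||_inf     = {np.sqrt(n / p) * np.abs(M).max():.4f}")
```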
Lemma 40. Recall from Section 3.2 that $E\in\mathbb R^{n\times n}$ is the symmetric noise matrix. Suppose the sample size obeys $n^2 p\ge c_0\, n\log^2 n$ for some sufficiently large constant $c_0 > 0$. With probability at least $1 - O(n^{-10})$, one has
$$\Big\|\frac1p\mathcal P_\Omega(E)\Big\| \le C\sigma\sqrt{\frac np},$$
where $C > 0$ is some universal constant.

Proof. See [CW15, Lemma 11].
Lemma 41. Fix some matrix A ∈ Rn×r with n ≥ 2r and some 1 ≤ l ≤ n. Suppose {δl,j }1≤j≤n are
independent Bernoulli random variables with means {pj }1≤j≤n no more than p. Define
>
>
r×n
Gl (A) := δl,1 A>
.
1,· , δl,2 A2,· , · · · , δl,n An,· ∈ R
Then one has
Median [kGl (A)k] ≤
s
q
2
p kAk +
2
2
2
2p kAk2,∞ kAk log (4r) +
2 kAk2,∞
3
log (4r)
and for any constant C ≥ 3, with probability exceeding 1 − n−(1.5C−1)
n
X
j=1
(δl,j − p)A>
j,· Aj,· ≤ C
and
kGl (A)k ≤
s
2
p kAk + C
q
2
2
2
p kAk2,∞ kAk log n + kAk2,∞ log n ,
q
2
2
2
p kAk2,∞ kAk log n + kAk2,∞ log n .
Proof. By the definition of Gl (A) and the triangle inequality, one has
>
2
kGl (A)k = Gl (A) Gl (A)
=
n
X
j=1
δl,j A>
j,· Aj,· ≤
n
X
j=1
Therefore, it suffices to control the first term. It can be seen that
zero-mean random matrices. Letting
(δl,j − pj ) A>
j,· Aj,·
1≤j≤n
are i.i.d.
2
(δl,j − pj ) A>
j,· Aj,· ≤ kAk2,∞
L := max
1≤j≤n
and V :=
2
(δl,j − pj ) A>
j,· Aj,· + p kAk .
n
n
h
i
h
i
X
X
2
2
2
2
2
>
E (δl,j − pj ) A>
A
A
A
≤
E
(δ
−
p
)
kAk
A>
l,j
j
j,· j,· j,· j,·
j,· Aj,· ≤ p kAk2,∞ kAk
2,∞
j=1
j=1
and invoking matrix Bernstein’s inequality [Tro15b, Theorem 6.1.1], one has for all t ≥ 0,
!
n
X
−t2 /2
>
(δl,j − pj ) Aj,· Aj,· ≥ t ≤ 2r · exp
P
.
2
2
2
p kAk2,∞ kAk + kAk2,∞ · t/3
(258)
j=1
We can thus find an upper bound on Median
h P
n
j=1
(δl,j − pj ) A>
j,· Aj,·
i
by finding a value t that ensures
the right-hand side of (258) is smaller than 1/2. Using this strategy and some simple calculations, we get
2
n
q
X
2 kAk2,∞
2
2
Median
(δl,j − pj ) A>
A
≤
kAk
2p
kAk
log
(4r)
+
log (4r)
j,·
j,·
2,∞
3
j=1
and for any C ≥ 3,
n
X
j=1
(δl,j −
p j ) A>
j,· Aj,·
q
2
2
2
p kAk2,∞ kAk log n + kAk2,∞ log n
≤C
holds with probability at least 1 − n−(1.5C−1) . As a consequence, we have
s
2
q
2 kAk2,∞
2
2
2
Median [kGl (A)k] ≤ p kAk + 2p kAk2,∞ kAk log (4r) +
log (4r),
3
112
and with probability exceeding 1 − n−(1.5C−1) ,
q
2
2
2
2
2
kGl (A)k ≤ p kAk + C
p kAk2,∞ kAk log n + kAk2,∞ log n .
This completes the proof.
Lemma 42. Let {δl,j }1≤l≤j≤n be i.i.d. Bernoulli random variables with mean p and δl,j = δj,l . For any
∆ ∈ Rn×r , define
>
>
r×n
Gl (∆) := δl,1 ∆>
.
1,· , δl,2 ∆2,· , · · · , δl,n ∆n,· ∈ R
Suppose the sample size obeys n2 p κµrn log2 n. Then for any k > 0 and α > 0 large enough, with
probability at least 1 − c1 e−αCnr log n/2 ,
n
X
l=1
1{kGl (∆)k≥4√pψ+2√krξ} ≤
holds simultaneously for all ∆ ∈ Rn×r obeying
s
log n
X\
k∆k2,∞ ≤ C5 ρt µr
np
and
2,∞
2αn log n
k
+ C8 σ
1
k∆k ≤ C9 ρt µr √
X \ + C10 σ
np
r
s
n log n
X\
p
2,∞
:= ξ
n
X \ := ψ,
p
where c1 , C5 , C8 , C9 , C10 > 0 are some absolute constants.
Proof. For simplicity of presentation, we will prove the claim for the asymmetric case where {δl,j }1≤l,j≤n
are independent. The results immediately carry over to the symmetric case as claimed in this lemma. To
see this, note that we can always divide Gl (∆) into
Gl (∆) = Gupper
(∆) + Glower
(∆),
l
l
where all nonzero components of Gupper
(∆) come from the upper triangular part (those blocks with l ≤ j
l
), while all nonzero components of Glower
(∆) are from the lower triangular part (those blocks with l > j).
l
We can then look at {Gupper
(∆)
|
1
≤
l
≤
n} and {Gupper
(∆) | 1 ≤ l ≤ n} separately using the argument
l
l
we develop for the asymmetric case. From now on, we assume that {δl,j }1≤l,j≤n are independent.
e ∈ Rn×r ,
Suppose for the moment that ∆ is statistically independent of {δl,j }. Clearly, for any ∆, ∆
e
Gl (∆) − Gl (∆)
e ≤ Gl (∆) − Gl ∆
e
≤ Gl (∆) − Gl ∆
v
uX
2
u n
e j,·
≤t
∆j,· − ∆
F
2
j=1
e ,
:= d ∆, ∆
which implies that kGl (∆)k is 1-Lipschitz with respect to the metric d (·, ·). Moreover,
max kδl,j ∆j,· k2 ≤ k∆k2,∞ ≤ ξ
1≤j≤n
according to our assumption. Hence, Talagrand’s inequality [CC16, Proposition 1] reveals the existence of
some absolute constants C, c > 0 such that for all λ > 0
P {kGl (∆)k − Median [kGl (∆)k] ≥ λξ} ≤ C exp −cλ2 .
(259)
We then proceed to control Median [kGl (∆)k]. A direct application of Lemma 41 yields
r
p
2ξ 2
√
Median [kGl (∆)k] ≤ 2pψ 2 + p log (4r)ξψ +
log (4r) ≤ 2 pψ,
3
113
where the last relation holds since pψ 2 ξ 2 log r, which follows by combining the definitions of ψ and ξ, the
sample size condition
np κµr log2 n, and the incoherence condition (114). Thus, substitution into (259)
√
and taking λ = kr give
n
o
√
√
P kGl (∆)k ≥ 2 pψ + krξ ≤ C exp (−ckr)
(260)
for any k ≥ 0. Furthermore, invoking [AS08, Corollary A.1.14] and using the bound (260), one has
!
n
X
t log t
P
1{kGl (∆)k≥2√pψ+√krξ} ≥ tnC exp (−ckr) ≤ 2 exp −
nC exp (−ckr)
2
l=1
for any t ≥ 6. Choose t = α log n/ [kC exp (−ckr)] ≥ 6 to obtain
!
n
X
αC
αn
log
n
√
√
≤ 2 exp −
P
nr log n .
1{kGl (∆)k≥2 pψ+ krξ} ≥
k
2
(261)
l=1
Pn
So far we have demonstrated that for any fixed ∆ obeying our assumptions, l=1 1{kGl (∆)k≥2√pψ+√krξ}
is well controlled with exponentially high probability. In order to extend the results to all feasible ∆, we
resort to the standard -net argument. Clearly, due to the homogeneity property of kGl (∆)k, it suffices to
restrict attention to the following set:
where ψ/ξ . kX k/kX k2,∞ .
\
\
√
n. We then proceed with the following steps.
1. Introduce the auxiliary function
1,
χl (∆) =
(262)
S = {∆ | min {ξ, ψ} ≤ k∆k ≤ ψ} ,
√
kGl (∆)k−2 pψ−
√
√
2
pψ+
krξ
√
0,
√
√
if kGl (∆)k ≥ 4 pψ + 2 krξ,
√
√
√
√
krξ
, if kGl (∆)k ∈ [2 pψ + krξ, 4 pψ + 2 krξ],
else.
Clearly, this function is sandwiched between two indicator functions
1{kGl (∆)k≥4√pψ+2√krξ} ≤ χl (∆) ≤ 1{kGl (∆)k≥2√pψ+√krξ} .
Note that χl is more convenient to work with due to continuity.
2. Consider an -net N [Tao12, Section 2.3.1] of the set S as defined in (262). For any = 1/nO(1) , one can
find such a net with cardinality log |N | . nr log n. Apply the union bound and (261) to yield
!
!
n
n
X
X
αn log n
αn
log
n
P
χl (∆) ≥
, ∀∆ ∈ N ≤ P
1{kGl (∆)k≥2√pψ+√krξ} ≥
, ∀∆ ∈ N
k
k
l=1
l=1
αC
αC
≤ 2|N | exp −
nr log n ≤ 2 exp −
nr log n ,
2
4
as long as α is chosen to be sufficiently large.
3. One can then use the continuity argument to extend the bound to all ∆ outside the -net, i.e. with
exponentially high probability,
n
X
2αn log n
χl (∆) ≤
, ∀∆ ∈ S
k
l=1
=⇒
n
X
l=1
1{kGl (∆)k≥4√pψ+2√krξ} ≤
n
X
l=1
χl (∆) ≤
2αn log n
,
k
This is fairly standard (see, e.g. [Tao12, Section 2.3.1]) and is thus omitted here.
114
∀∆ ∈ S
We have thus concluded the proof.
2
Lemma 43. Suppose the sample size obeys
n p ≥ Cκµrn log n for some sufficiently large constant C > 0.
−10
Then with probability at least 1 − O n
,
1
PΩ XX > − X \ X \> ≤ 2n2 X \
p
2
2,∞
√
+ 4 n log n X \
2,∞
X\
holds simultaneously for all X ∈ Rn×r satisfying
X − X\
2,∞
≤ X\
2,∞
,
(263)
where > 0 is any fixed constant.
Proof. To simplify the notations hereafter, we denote ∆ := X − X \ . With this notation in place, one can
decompose
XX > − X \ X \> = ∆X \> + X \ ∆> + ∆∆> ,
which together with the triangle inequality implies that
1
1
1
1
PΩ XX > − X \ X \> ≤
PΩ ∆X \> +
PΩ X \ ∆> +
PΩ ∆∆>
p
p
p
p
1
1
=
PΩ ∆∆> +2 PΩ ∆X \> .
p
p
|
{z
}
|
{z
}
:=α1
(264)
:=α2
In the sequel, we bound α1 and α2 separately.
1. Recall from [Mat90, Theorem 2.5] the elementary inequality that
(265)
kCk ≤ |C| ,
where |C| := [|ci,j |]1≤i,j≤n for any matrix C = [ci,j ]1≤i,j≤n . In addition, for any matrix D := [di,j ]1≤i,j≤n
such that |di,j | ≥ |ci,j | for all i and j, one has |C| ≤ |D| . Therefore
α1 ≤
1
PΩ
p
∆∆>
2
≤ k∆k2,∞
1
PΩ 11> .
p
Lemma 39 then tells us that with probability at least 1 − O(n−10 ),
r
1
n
PΩ 11> − 11> ≤ C
p
p
(266)
for some universal constant C > 0, as long as p log n/n. This together with the triangle inequality
yields
r
1
1
n
PΩ 11> ≤
PΩ 11> − 11> + 11> ≤ C
+ n ≤ 2n,
(267)
p
p
p
provided that p 1/n. Putting together the previous bounds, we arrive at
2
α1 ≤ 2n k∆k2,∞ .
(268)
2. Regarding the second term α2 , apply the elementary inequality (265) once again to get
PΩ ∆X \> ≤ PΩ ∆X \>
,
which motivates us to look at PΩ ∆X\>
instead. A key step of this part is to take advantage of
the `2,∞ norm constraint of PΩ ∆X \> . Specifically, we claim for the moment that with probability
exceeding 1 − O(n−10 ),
2
2
(269)
PΩ ∆X \> 2,∞ ≤ 2pσmax k∆k2,∞ := θ
115
holds under our sample size condition. In addition, we also have the following trivial `∞ norm bound
PΩ ∆X \> ∞ ≤ k∆k2,∞ X \ 2,∞ := γ.
(270)
In what follows, for simplicity of presentation, we will denote
A := PΩ ∆X \> .
(271)
(a) To facilitate the analysis of kAk, we first introduce k0 + 1 = 12 log (κµr) auxiliary matrices9 Bs ∈ Rn×n
that satisfy
kX
0 −1
kAk ≤ kBk0 k +
kBs k .
(272)
s=0
To be precise, each Bs is defined such that
(
1
1
if Aj,k ∈ ( 2s+1
γ, 21s γ],
s γ,
[Bs ]j,k = 2
0,
else,
(
1
if Aj,k ≤ 2k10 γ,
k γ,
[Bk0 ]j,k = 2 0
0,
else,
for 0 ≤ s ≤ k0 − 1
and
which clearly satisfy (272); in words, Bs is constructed by rounding up those entries of A within a
prescribed magnitude interval. Thus, it suffices to bound kBs k for every s. To this end, we start with
s = k0 and use the definition of Bk0 to get
q
(ii)
(iii) √
(i)
1
2
k∆k2,∞ X \ 2,∞ ≤ 4 np k∆k2,∞ X \ ,
kBk0 k ≤ kBk0 k∞ (2np) ≤ 4np √
κµr
where (i) arises from Lemma 44, with 2np being a crude upper bound on the number of nonzero entries
in each row and each column. This can be derived by applying the standard Chernoff bound on Ω. The
second inequality (ii) relies on the definitions of γ and k0 . The last one (iii) follows from the incoherence
condition (114). Besides, for any 0 ≤ s ≤ k0 − 1, by construction one has
2
2
kBs k2,∞ ≤ 4θ = 8pσmax k∆k2,∞
and
kBs k∞ =
1
γ,
2s
where θ is as defined in (269). Here, we have used the fact that the magnitude of each entry of Bs is at
most 2 times that of A. An immediate implication is that there are at most
2
kBs k2,∞
2
kBs k∞
2
≤
nonzero entries in each row of Bs and at most
8pσmax k∆k2,∞
:= kr
2
1
2s γ
kc = 2np
nonzero entries in each column of Bs , where kc is derived from the standard Chernoff bound on Ω.
Utilizing Lemma 44 once more, we discover that
q
p
√
1 p
2
kBs k ≤ kBs k∞ kr kc = s γ kr kc = 16np2 σmax k∆k2,∞ = 4 np k∆k2,∞ X \
2
for each 0 ≤ s ≤ k0 − 1. Combining all, we arrive at
kAk ≤
9 For
simplicity, we assume
is not an integer.
1
2
kX
0 −1
s=0
√
kBs k + kBk0 k ≤ (k0 + 1) 4 np k∆k2,∞ X \
log (κµr) is an integer. The argument here can be easily adapted to the case when
116
1
2
log (κµr)
√
≤ 2 np log (κµr) k∆k2,∞ X \
√
≤ 2 np log n k∆k2,∞ X \ ,
where the last relation holds under the condition n ≥ κµr. This further gives
√
1
α2 ≤ kAk ≤ 2 n log n k∆k2,∞ X \ .
p
(b) In order to finish the proof of this part, we need to justify the claim (269). Observe that
2
2 Xn
\>
∆l,· Xj,·
δl,j
PΩ ∆X \> l,· =
j=1
2
Xn
\>
\
= ∆l,·
δl,j Xj,·
Xj,·
∆>
l,·
j=1
Xn
2
\>
\
δl,j Xj,· Xj,·
≤ k∆k2,∞
j=1
(273)
(274)
for every 1 ≤ l ≤ n, where δl,j indicates whether the entry with the index (l, j) is observed or not.
Invoke Lemma 41 to yield
h
i 2
Xn
\>
\
\>
\>
\>
δl,j Xj,·
Xj,·
= δl,1 X1,·
, δl,2 X2,·
, · · · , δl,n Xn,·
j=1
q
2
2
2
≤ pσmax + C
p kX \ k2,∞ kX \ k log n + X \ 2,∞ log n
!
r
pκµr log n
κµr log n
≤ p+C
+C
σmax
n
n
(275)
≤ 2pσmax ,
with high probability, as soon as np κµr log n. Combining (274) and (275) yields
as claimed in (269).
PΩ
∆X \>
2
l,· 2
2
≤ 2pσmax k∆k2,∞ ,
1≤l≤n
3. Taken together, the preceding bounds (264), (268) and (273) yield
√
1
2
PΩ XX > − X \ X \> ≤ α1 + 2α2 ≤ 2n k∆k2,∞ + 4 n log n k∆k2,∞ X \ .
p
The proof is completed by substituting the assumption k∆k2,∞ ≤ X \
2,∞
.
In the end of this subsection, we record a useful lemma to bound the spectral norm of a sparse Bernoulli
matrix.
n ×n
Lemma 44. Let A ∈ {0, 1} 1 2 be a binary matrix, and suppose that there
√ are at most kr and kc nonzero
entries in each row and column of A, respectively. Then one has kAk ≤ kc kr .
Proof. This immediately follows from the elementary inequality kAk2 ≤ kAk1→1 kAk∞→∞ (see [Hig92,
equation (1.11)]), where kAk1→1 and kAk∞→∞ are the induced 1-norm (or maximum absolute column sum
norm) and the induced ∞-norm (or maximum absolute row sum norm), respectively.
D.2.3
Matrix perturbation bounds
Lemma 45. Let M ∈ Rn×n be a symmetric matrix with the top-r eigendecomposition U ΣU > . Assume
M − M \ ≤ σmin /2 and denote
b := argmin U R − U \ .
Q
F
R∈O r×r
Then there is some numerical constant c3 > 0 such that
b − U \ ≤ c3 M − M \ .
UQ
σmin
117
Proof. Define Q = U > U \ . The triangle inequality gives
b − Q + U U >U \ − U \ .
b − Q + UQ − U\ ≤ Q
b − U\ ≤ U Q
UQ
(276)
[AFWZ17, Lemma 3] asserts that
b−Q ≤4
Q
M − M \ /σmin
2
as long as M − M \ ≤ σmin /2. For the remaining term in (276), one can use U \> U \ = Ir to obtain
U U > U \ − U \ = U U > U \ − U \ U \> U \ ≤ U U > − U \ U \> ,
which together with the Davis-Kahan sinΘ theorem [DK70] reveals that
U U >U \ − U \ ≤
c2
M − M\
σmin
b − Q , U U > U \ − U \ and (276) to reach
for some constant c2 > 0. Combine the estimates on Q
b − U\ ≤
UQ
4
σmin
M −M
\
2
+
c2
c3
M − M\ ≤
M − M\
σmin
σmin
for some numerical constant c3 > 0, where we have utilized the fact that M − M \ /σmin ≤ 1/2.
f ∈ Rn×n be two symmetric matrices with top-r eigendecompositions U ΣU > and
Lemma 46. Let M , M
>
e
e
e
f − M \ ≤ σmin /4, and suppose σmax /σmin is
U ΣU , respectively. Assume M − M \ ≤ σmin /4 and M
bounded by some constant c1 > 0, with σmax and σmin the largest and the smallest singular values of M \ ,
respectively. If we denote
e ,
Q := argmin U R − U
F
R∈O r×r
then there exists some numerical constant c3 > 0 such that
e 1/2 ≤ √ c3
f−M
Σ1/2 Q − QΣ
M
σmin
e 1/2
Σ1/2 Q − QΣ
and
F
≤√
c3
σmin
f−M U
M
F
.
Proof. Here, we focus on the Frobenius norm; the bound on the operator norm follows from the same
argument, and hence we omit the proof. Since k·kF is unitarily invariant, we have
e 1/2
Σ1/2 Q − QΣ
F
e 1/2
= Q> Σ1/2 Q − Σ
F
,
e 1/2 are the matrix square roots of Q> ΣQ and Σ,
e respectively. In view of the matrix
where Q> Σ1/2 Q and Σ
square root perturbation bound [Sch92, Lemma 2.1],
e 1/2
Σ1/2 Q − QΣ
F
≤
1/2
σmin (Σ)
1
>
e
Q ΣQ − Σ
e 1/2
+ σmin (Σ)
F
≤√
where the last inequality follows from the lower estimates
σmin (Σ) ≥ σmin Σ\ − kM − M \ k ≥ σmin /4
1
e
Q> ΣQ − Σ
σmin
e ≥ σmin /4. Recognizing that Σ = U > M U and Σ
e =U
e >M
fU
e , one gets
and, similarly, σmin (Σ)
>
e
e >M
fU
e
Q> ΣQ − Σ
= UQ M UQ − U
F
F
>
>
>
f UQ
f UQ − U
e >M
f UQ
≤ UQ M UQ − UQ M
+ UQ M
F
>f
>fe
e
e
+ U M UQ − U MU
F
118
F
F
,
(277)
≤
f−M U
M
F
e
+ 2 UQ − U
f−M U
M
f ≤
M
F
where the last relation holds due to the upper estimate
F
e
+ 4σmax U Q − U
F
(278)
,
f ≤ M\ + M
f − M \ ≤ σmax + σmin /4 ≤ 2σmax .
M
Invoke the Davis-Kahan sinΘ theorem [DK70] to obtain
e
UQ − U
F
≤
f−M U
M
c2
f)
σr (M ) − σr+1 (M
F
≤
2c2
σmin
for some constant c2 > 0, where the last inequality follows from the bounds
σr (M ) ≥ σr M \ − kM − M \ k ≥ 3σmin /4,
f ) ≤ σr+1 M \ + kM
f − M \ k ≤ σmin /4.
σr+1 (M
f−M U
M
F
(279)
,
Combine (277), (278), (279) and the fact σmax /σmin ≤ c1 to reach
for some constant c3 > 0.
e 1/2
Σ1/2 Q − QΣ
F
≤√
c3
σmin
f−M U
M
F
Lemma 47. Let M ∈ Rn×n be a symmetric matrix with the top-r eigendecomposition U ΣU > . Denote
X = U Σ1/2 and X \ = U \ (Σ\ )1/2 , and define
b := argmin U R − U \
Q
R∈O r×r
c := argmin XR − X \
H
and
F
R∈O r×r
F
.
Assume M − M \ ≤ σmin /2, and suppose σmax /σmin is bounded by some constant c1 > 0. Then there
exists a numerical constant c3 > 0 such that
b−H
c ≤
Q
c3
M − M\ .
σmin
Proof. We first collect several useful facts about the spectrum of Σ. Weyl’s inequality tells us that Σ − Σ\ ≤
M − M \ ≤ σmin /2, which further implies that
σr (Σ) ≥ σr Σ\ − Σ − Σ\ ≥ σmin /2
Denote
kΣk ≤ Σ\ + Σ − Σ\ ≤ 2σmax .
and
Q = U >U \
H = X >X \.
and
Simple algebra yields
H = Σ1/2 Q Σ\
1/2
b
= Σ1/2 Q − Q
|
Σ\
1/2
b − QΣ
b 1/2
+ Σ1/2 Q
{z
:=E
Σ\
It can be easily seen that σr−1 (A) ≥ σr (A) ≥ σmin /2, and
b ·
kEk ≤ Σ1/2 · Q − Q
Σ\
1/2
b − QΣ
b 1/2 ·
+ Σ1/2 Q
b − QΣ
b 1/2 ,
b +√σmax Σ1/2 Q
≤ 2σmax Q − Q
| {z }
|
{z
}
:=α
:=β
which can be controlled as follows.
119
1/2
b ΣΣ\
+Q
} |
{z
Σ\
:=A
1/2
1/2
.
}
• Regarding α, use [AFWZ17, Lemma 3] to reach
b ≤ 4 M − M\
α= Q−Q
• For β, one has
(ii)
(i)
b > Σ1/2 Q
b − Σ1/2
β= Q
≤
2σr
1
b > ΣQ
b−Σ
Q
Σ1/2
2
2
/σmin
.
(iii)
=
2σr
1
b − QΣ
b
ΣQ
,
Σ1/2
where (i) and (iii) come from the unitary invariance of k·k, and (ii) follows from the matrix square root
perturbation bound [Sch92, Lemma 2.1]. We can further take the triangle inequality to obtain
b − Q) − (Q
b − Q)Σ
b − QΣ
b
= ΣQ − QΣ + Σ(Q
ΣQ
b
≤ kΣQ − QΣk + 2 kΣk Q − Q
b
= U M − M \ U \> + Q Σ\ − Σ + 2 kΣk Q − Q
b
≤ U M − M \ U \> + Q Σ\ − Σ + 2 kΣk Q − Q
≤ 2 M − M \ + 4σmax α,
where the last inequality uses the Weyl’s inequality kΣ\ −Σk ≤ kM −M \ k and the fact that kΣk ≤ 2σmax .
• Rearrange the previous bounds to arrive at
kEk ≤ 2σmax α +
√
σmax √
1
2 M − M \ + 4σmax α ≤ c2 M − M \
σmin
for some numerical constant c2 > 0, where we have used the assumption that σmax /σmin is bounded.
b = sgn (A) (see definition in (177)), we are ready to invoke Lemma 36 to deduce that
Recognizing that Q
for some constant c3 > 0.
D.3
D.3.1
b−H
c ≤
Q
2
c3
kEk ≤
M − M\
σr−1 (A) + σr (A)
σmin
Technical lemmas for blind deconvolution
Wirtinger calculus
In this section, we formally prove the fundamental theorem of calculus and the mean-value form of Taylor’s
theorem under the Wirtinger calculus; see (283) and (284), respectively.
Let f : Cn → R be a real-valued function. Denote z = x + iy ∈ Cn , then f (·) can alternatively be
viewed as a function R2n → R. There is a one-to-one mapping connecting the Wirtinger derivatives and the
conventional derivatives [KD09]:
x
z
= J −1
,
(280a)
z
y
x
z
∇R f
= J ∗ ∇C f
,
(280b)
y
z
x
z
J,
(280c)
∇2R f
= J ∗ ∇2C f
y
z
where the subscripts R and C represent calculus in the real (conventional) sense and in the complex
(Wirtinger) sense, respectively, and
In iIn
J=
.
In −iIn
120
With these relationships in place, we are ready to verify the fundamental theorem of calculus using the
Wirtinger derivatives. Recall from [Lan93, Chapter XIII, Theorem 4.2] that
∇R f
x1
y1
x2
y2
x (τ )
y (τ )
− ∇R f
where
:=
=
Z
1
∇2R f
0
x2
y2
+τ
Substitute the identities (280) into (281) to arrive at
J ∗ ∇C f
z1
z1
− J ∗ ∇C f
z2
z2
x (τ )
y (τ )
x1
y1
−
dτ
x2
y2
x1
y1
−
x2
y2
,
(281)
.
z1
z2
dτ J J −1
−
z1
z2
0
Z 1
z (τ )
z2
z1
∇2C f
−
,
(282)
= J∗
dτ
z
z2
z
(τ
)
1
0
= J∗
Z
1
∇2C f
z (τ )
z (τ )
where z1 = x1 + iy1 , z2 = x2 + iy2 and
z (τ )
z2
z1
z2
:=
+τ
−
.
z2
z1
z2
z (τ )
Simplification of (282) gives
∇C f
z1
z1
− ∇C f
z2
z2
=
Z
0
1
∇2C f
z (τ )
z (τ )
dτ
z1
z1
−
z2
z2
.
Repeating the above arguments, one can also show that
∗
1 z1 − z2
z1 − z2
z1 − z2
∗
2
f (z1 ) − f (z2 ) = ∇C f (z2 )
+
∇C f (e
z)
,
z1 − z2
z1 − z2
2 z1 − z2
(283)
(284)
where ze is some point lying on the vector connecting z1 and z2 . This is the mean-value form of Taylor’s
theorem under the Wirtinger calculus.
D.3.2
Discrete Fourier transform matrices
Let B ∈ Cm×K be the first K columns of a discrete Fourier transform (DFT) matrix F ∈ Cm×m , and denote
by bl the lth column of the matrix B ∗ . By definition,
∗
1
bl = √
1, ω (l−1) , ω 2(l−1) , · · · , ω (K−1)(l−1) ,
m
2π
where ω := e−i m with i representing the imaginary unit. It is seen that for any j 6= l,
b∗l bj
K−1
K−1
K−1
1 X k(l−1) k(j−1) (i) 1 X k(l−1) k(1−j)
1 X l−j k (ii) 1 1 − ω K(l−j)
=
ω
·ω
ω
·ω
=
ω
=
.
=
m
m
m
m 1 − ω l−j
k=0
k=0
(285)
k=0
Here, (i) uses ω α = ω −α for all α ∈ R, while the last identity (ii) follows from the formula for the sum of a
finite geometric series when ω l−j 6= 1. This leads to the following lemma.
Lemma 48. For any m ≥ 3 and any 1 ≤ l ≤ m, we have
m
X
j=1
|b∗l bj | ≤ 4 log m.
121
Proof. We first make use of the identity (285) to obtain
m
X
j=1
|b∗l bj |
=
2
kbl k2
m
m
π
K
1 X sin K (l − j) m
1 X 1 − ω K(l−j)
,
=
+
+
π
m
1 − ω l−j
m m
sin (l − j) m
j:j6=l
j:j6=l
2
where the last identity follows since kbl k2 = K/m and, for all α ∈ R,
2π
π
π
π
|1 − ω α | = 1 − e−i m α = e−i m α ei m α − e−i m α
π
.
= 2 sin α
m
(286)
Without loss of generality, we focus on the case when l = 1 in the sequel. Recall that for c > 0, we denote
by bcc the largest integer that does not exceed c. We can continue the derivation to get
m
X
j=1
|b∗1 bj |
m
m
π
(i) 1 X
K
1
1 X sin K (1 − j) m
K
+
≤
=
+
π
π
m m j=2 sin (1 − j) m
m j=2 sin (j − 1) m
m
b m2 c+1
m
X
1
K
1
1 X
+
=
+
π
π
m
m
sin (j − 1) m
sin (j − 1) m
j=2
j=b m
2 c+2
bm
m
2 c+1
X
X
1
1
K
(ii) 1
+
=
+ ,
π
π
m
m
sin
(j
−
1)
sin
(m
+
1
−
j)
m
m
j=2
j=b m
2 c+2
π
where (i) follows from sin K (1 − j) m
≤ 1 and |sin (x)| = |sin (−x)|, and (ii) relies on the fact that
sin (x) = sin (π − x). The property that sin (x) ≥ x/2 for any x ∈ [0, π/2] allows one to further derive
m+1
m
m
b
b
b
m
m
2 c+1
2 c
2 c−1
X
X
X 1 K
2 X 1
2m
1 X
2m
K
+
=
+
|b∗1 bj | ≤
+
+
m
(j − 1) π
(m + 1 − j) π
m
π
k
k
m
m
j=1
j=2
k=1
k=1
j=b 2 c+2
m
(i) 4 X
(iii)
1 K (ii) 4
≤
+
≤ (1 + log m) + 1 ≤ 4 log m,
π
k
m
π
k=1
where in (i) we extend the range of the summation, (ii) uses the elementary inequality
and (iii) holds true as long as m ≥ 3.
∗
Pm
k=1
k −1 ≤ 1 + log m
The next lemma considers the difference of two inner products, namely, (bl − b1 ) bj .
m
, we have
Lemma 49. For all 0 ≤ l − 1 ≤ τ ≤ 10
∗
(bl − b1 ) bj ≤
(
8τ /π
4τ K
(j−l) m + (j−l)2
8τ /π
K
4τ
m−(j−l) m + [m−(j−1)]2
for
for
l+τ ≤j ≤ m
2 + 1,
m
2 + l ≤ j ≤ m − τ.
In addition, for any j and l, the following uniform upper bound holds
∗
(bl − b1 ) bj ≤ 2
K
.
m
Proof. Given (285), we can obtain for j 6= l and j 6= 1,
∗
(bl − b1 ) bj =
=
1 1 − ω K(l−j)
1 − ω K(1−j)
−
m 1 − ω l−j
1 − ω 1−j
1 1 − ω K(l−j)
1 − ω K(1−j)
1 − ω K(1−j)
1 − ω K(1−j)
−
+
−
l−j
l−j
l−j
m 1−ω
1−ω
1−ω
1 − ω 1−j
122
1 − ω K(1−j)
1 ω K(1−j) − ω K(l−j)
+ ω l−j − ω 1−j
l−j
m
1−ω
(1 − ω l−j ) (1 − ω 1−j )
1 1 − ω K(l−1)
2
1
≤
+
1 − ω 1−l
,
m 1 − ω l−j
m
(1 − ω l−j ) (1 − ω 1−j )
=
where the last line is due to the triangle inequality and |ω α | = 1 for all α ∈ R. The identity (286) allows us
to rewrite this bound as
(
)
π
h
sin (1 − l) m
πi
1
1
∗
.
sin K (l − 1)
+
(bl − b1 ) bj ≤
(287)
π
π
m sin (l − j) m
m
sin (1 − j) m
Combined with the fact that |sin x| ≤ 2 |x| for all x ∈ R, we can upper bound (287) as
(
)
π
2τ m
1
1
π
∗
2Kτ +
,
(bl − b1 ) bj ≤
π
π
m sin (l − j) m
m
sin (1 − j) m
where we also utilize the assumption 0 ≤ l − 1 ≤ τ . Then for l + τ ≤ j ≤ bm/2c + 1, one has
(l − j)
π
π
≤
m
2
and
(1 − j)
π
π
≤ .
m
2
Therefore, utilizing the property sin (x) ≥ x/2 for any x ∈ [0, π/2], we arrive at
2
π
4τ
4τ K
8τ /π
∗
(bl − b1 ) bj ≤
2Kτ +
≤
+
,
(j − l) π
m j−1
(j − l) m (j − l)2
where the last inequality holds since j − 1 > j − l. Similarly we can obtain the upper bound for bm/2c + l ≤
j ≤ m − τ using nearly identical argument (which is omitted for brevity).
The uniform upper bound can be justified as follows
∗
(bl − b1 ) bj ≤ (kbl k2 + kb1 k2 ) kbj k2 ≤ 2K/m.
2
The last relation holds since kbl k2 = K/m for all 1 ≤ l ≤ m.
Next, we list two consequences of the above estimates in Lemma 50 and Lemma 51.
Lemma 50. Fix any constant c > 0 that is independent of m and K. Suppose m ≥ Cτ K log4 m for some
sufficiently large constant C > 0, which solely depends on c. If 0 ≤ l − 1 ≤ τ , then one has
m
X
j=1
∗
(bl − b1 ) bj ≤
c
.
log2 m
Proof. For some constant c0 > 0, we can split the index set [m] into the following three disjoint sets
n
j m ko
A1 = j : l + c0 τ log2 m ≤ j ≤
,
2
n jmk
o
A2 = j :
+ l ≤ j ≤ m − c0 τ log2 m ,
2
and
A3 = [m] \ (A1 ∪ A2 ) .
With this decomposition in place, we can write
m
X
j=1
∗
(bl − b1 ) bj =
X
j∈A1
∗
(bl − b1 ) bj +
X
j∈A2
∗
(bl − b1 ) bj +
We first look at A1 . By Lemma 49, one has for any j ∈ A1 ,
∗
(bl − b1 ) bj ≤
8τ /π
4τ K
+
,
j − l m (j − l)2
123
X
j∈A3
∗
(bl − b1 ) bj .
and hence
X
j∈A1
bm
2 c+1
X
∗
(bl − b1 ) bj ≤
8τ /π
4τ K
+
j − l m (j − l)2
j=l+c0 τ log2 m
!
≤
m
4τ K X 1 8τ
+
m
k
π
k=1
m
X
k=c0 τ log2 m
1
k2
K
16τ
1
,
log m +
m
π c0 τ log2 m
Pm
Pm
where the last inequality arises from k=1 k −1 ≤ 1 + log m ≤ 2 log m and k=c k −2 ≤ 2/c.
Similarly, for j ∈ A2 , we have
≤ 8τ
∗
(bl − b1 ) bj ≤
4τ
K
8τ /π
+
,
m − (j − l) m [m − (j − 1)]2
which in turn implies
X
j∈A2
∗
(bl − b1 ) bj ≤ 8τ
K
16τ
1
.
log m +
m
π c0 τ log2 m
Regarding j ∈ A3 , we observe that
|A3 | ≤ 2 c0 τ log2 m + l ≤ 2 c0 τ log2 m + τ + 1 ≤ 4c0 τ log2 m.
∗
This together with the simple bound (bl − b1 ) bj ≤ 2K/m gives
X
∗
(bl − b1 ) bj ≤ 2
j∈A3
K
8c0 τ K log2 m
|A3 | ≤
.
m
m
The previous three estimates taken collectively yield
m
X
j=1
∗
(bl − b1 ) bj ≤
1
1
16τ K log m 32τ
8c0 τ K log2 m
+
≤c 2
+
2
m
π c0 τ log m
m
log m
as long as c0 ≥ (32/π) · (1/c) and m ≥ 8c0 τ K log4 m/c.
Lemma 51. Fix any constant c > 0 that is independent of m and K. Consider an integer τ > 0, and
suppose that m ≥ Cτ K log m for some large constant C > 0, which depends solely on c. Then we have
v
bm/τ c u τ
X uX
c
2
t
|b∗1 (bkτ +j − bkτ +1 )| ≤ √ .
τ
j=1
k=0
Proof. The proof strategy is similar to the one used in Lemma 50. First notice that
∗
|b∗1 (bkτ +j − bkτ +1 )| = (bm − bm+1−j ) bkτ .
As before, for some c1 > 0, we can split the index set {1, · · · , bm/τ c} into three disjoint sets
ko
n
jj m k
+ 1 − j /τ ,
B1 = k : c1 ≤ k ≤
2 k
n jj m k
o
B2 = k :
+ 1 − j /τ + 1 ≤ k ≤ b(m + 1 − j) /τ c − c1 ,
n
j2m ko
and
B3 = 1, · · · ,
\ (B1 ∪ B2 ) ,
τ
where 1 ≤ j ≤ τ .
124
By Lemma 49, one has
∗
(bm − bm+1−j ) bkτ ≤
Hence for any k ∈ B1 ,
v
uX
u τ
√
2
t
|b∗1 (bkτ +j − bkτ +1 )| ≤ τ
j=1
4τ K
8τ /π
,
+
kτ m (kτ )2
4τ K
8τ /π
+
kτ m (kτ )2
!
k ∈ B1 .
=
√
τ
4K
8/π
+ 2
km
k τ
,
which further implies that
v
τ
m
Xu
uX
√ X
√ K log m 16 1 1
8/π
4K
2
t
√
|b∗1 (bkτ +j − bkτ +1 )| ≤ τ
+ 2
≤8 τ
+
,
km
k τ
m
π τ c1
j=1
k∈B1
k=c1
Pm
Pm
where the last inequality follows since k=1 k −1 ≤ 2 log m and k=c1 k −2 ≤ 2/c1 . A similar bound can be
obtained for k ∈ B2 .
For the remaining set B3 , observe that
|B3 | ≤ 2c1 .
∗
This together with the crude upper bound (bl − b1 ) bj ≤ 2K/m gives
v
√
r
τ
Xu
uX
√ 2K
4c1 τ K
2
2
∗
∗
t
|b1 (bkτ +j − bkτ +1 )| ≤ |B3 | τ max |b1 (bkτ +j − bkτ +1 )| ≤ |B3 | τ ·
≤
.
j
m
m
j=1
k∈B3
The previous estimates taken collectively yield
v
√
bm/τ c u τ
X uX
√ K log m 16 1 1
1
4c1 τ K
2
∗
t
√
+
≤ c√ ,
|b1 (bkτ +j − bkτ +1 )| ≤ 2 8 τ
+
m
π
c
m
τ
τ
1
j=1
k=0
as long as c1 1/c and m/(c1 τ K log m) 1/c.
D.3.3
Complex-valued alignment
Let gh,x (·) : C → R be a real-valued function defined as
gh,x (α) :=
1
h − h\
α
2
2
+ αx − x\
2
2
,
which is the key function in the definition (34). Therefore, the alignment parameter of (h, x) to (h\ , x\ ) is
the minimizer of gh,x (α). This section is devoted to studying various properties of gh,x (·). To begin with,
the Wirtinger gradient and Hessian of gh,x (·) can be calculated as
"
#
∂gh,x (α,α)
2
−2
2
−2
α kxk2 − x∗ x\ − α−1 (α) khk2 + (α) h\∗ h
∂α
∇gh,x (α) = ∂gh,x (α,α) =
;
(288)
2
−1
2
α kxk2 − x\∗ x − (α) α−2 khk2 + α−2 h∗ h\
∂α
2
−4
2
−3
2
−3
kxk2 + |α| khk2
2α−1 (α) khk2 − 2 (α) h\∗ h
∇ gh,x (α) =
.
(289)
−1
2
2
−4
2
2 (α) α−3 khk2 − 2α−3 h∗ h\
kxk2 + |α| khk2
The first lemma reveals that, as long as β1 h, βx is sufficiently close to (h\ , x\ ), the minimizer of gh,x (α)
cannot be far away from β.
2
125
Lemma 52. Assume theres exists β ∈ C with 1/2 ≤ |β| ≤ 3/2 such that max
δ ≤ 1/4. Denote by α
b the minimizer of gh,x (α), then we necessarily have
n
1
h
β
− h\
2
, βx − x\
2
o
≤
α − β| ≤ 18δ.
|b
α| − |β| ≤ |b
Proof. The first inequality is a direct consequence of the triangle inequality. Hence we concentrate on the
second one. Notice that by assumption,
gh,x (β) =
1
h − h\
β
2
2
+ βx − x\
2
2
≤ 2δ 2 ,
(290)
which immediately implies that gh,x (b
α) ≤ 2δ 2 . It thus suffices to show that for any α obeying |α − β| > 18δ,
2
one has gh,x (α) > 2δ , and hence it cannot be the minimizer. To this end, we lower bound gh,x (α) as follows:
2
= (α − β) x + βx − x\ 2
h
∗ i
2
2
2
= |α − β| kxk2 + βx − x\ 2 + 2Re (α − β) βx − x\ x
∗
2
2
≥ |α − β| kxk2 − 2 |α − β| βx − x\ x .
gh,x (α) ≥ αx − x\
Given that βx − x\
2
2
2
≤ δ ≤ 1/4 and x\
2
= 1, we have
kβxk2 ≥ kx\ k2 − βx − x\
2
≥ 1 − δ ≥ 3/4,
which together with the fact that 1/2 ≤ |β| ≤ 3/2 implies
kxk2 ≥ 1/2
and
βx − x\
∗
and
kxk2 ≤ 2
x ≤ βx − x\
Taking the previous estimates collectively yields
gh,x (α) ≥
2
kxk2 ≤ 2δ.
1
2
|α − β| − 4δ |α − β| .
4
It is self-evident that once |α − β| > 18δ, one gets gh,x (α) > 2δ 2 , and hence α cannot be the minimizer as
gh,x (α) > gh,x (β) according to (290). This concludes the proof.
The next lemma reveals the local strong convexity of gh,x (α) when α is close to one.
Lemma 53. Assume that max h − h\ 2 , x − x\ 2 ≤ δ for some sufficiently small constant δ > 0.
Then, for any α satisfying |α − 1| ≤ 18δ and any u, v ∈ C, one has
1 2
u
2
∗ ∗
2
[u , v ] ∇ gh,x (α)
≥
|u| + |v| ,
v
2
where ∇2 gh,x (·) stands for the Wirtinger Hessian of gh,x (·).
Proof. For simplicity of presentation, we use gh,x (α, α) and gh,x (α) interchangeably. By (289), for any
u, v ∈ C , one has
h
i
u
2
−4
2
2
2
−3
2
−3
∗ ∗
2
[u , v ] ∇ gh,x (α)
= kxk2 + |α| khk2 |u| + |v| +2 Re u∗ v 2α−1 (α) khk2 − 2 (α) h\∗ h .
v
|
{z
}
|
{z
}
:=β1
:=β2
2
2
We would like to demonstrate that this is
at least on the order of |u| + |v| . We first develop a lower
bound on β1 . Given the assumption that max h − h\ 2 , x − x\ 2 ≤ δ, one necessarily has
1 − δ ≤ kxk2 ≤ 1 + δ
and
126
1 − δ ≤ khk2 ≤ 1 + δ.
Thus, for any α obeying |α − 1| ≤ 18δ, one has
−4
2
−4
2
β1 ≥ 1 + |α|
(1 − δ) ≥ 1 + (1 + 18δ)
(1 − δ) ≥ 1
as long as δ > 0 is sufficiently small. Regarding the second term β2 , we utilizes the conditions |α − 1| ≤ 18δ,
kxk2 ≤ 1 + δ and khk2 ≤ 1 + δ to get
−3
2
α−1 khk2 − h\∗ h
−3
2
= 2 |u| |v| |α|
α−1 − 1 khk2 − (h\ − h)∗ h
−3
2
≤ 2 |u| |v| |α|
α−1 − 1 khk2 + h − h\ 2 khk2
18δ
−3
2
≤ 2 |u| |v| (1 − 18δ)
(1 + δ) + δ (1 + δ)
1 − 18δ
2
2
. δ |u| + |v| ,
|β2 | ≤ 2 |u| |v| |α|
2
2
where the last relation holds since 2 |u| |v| ≤ |u| + |v| and δ > 0 is sufficiently small. Combining the
previous bounds on β1 and β2 , we arrive at
1
u
2
2
2
2
[u∗ , v ∗ ] ∇2 gh,x (α)
≥ (1 − O(δ)) |u| + |v| ≥
|u| + |v|
v
2
as long as δ is sufficiently small. This completes the proof.
Additionally, in a local region surrounding the optimizer, the alignment parameter is Lipschitz continuous,
namely, the difference of the alignment parameters associated with two distinct vector pairs is at most
proportional to the `2 distance between the two vector pairs involved, as demonstrated below.
Lemma 54. Suppose that the vectors x1 , x2 , h1 , h2 ∈ CK satisfy
max x1 − x\ 2 , h1 − h\ 2 , x2 − x\ 2 , h2 − h\
2
(291)
≤ δ ≤ 1/4
for some sufficiently small constant δ > 0. Denote by α1 and α2 the minimizers of gh1 ,x1 (α) and gh2 ,x2 (α),
respectively. Then we have
|α1 − α2 | . kx1 − x2 k2 + kh1 − h2 k2 .
Proof. Since α1 minimizes gh1 ,x1 (α), the mean-value form of Taylor’s theorem (see Appendix D.3.1) gives
gh1 ,x1 (α2 ) ≥ gh1 ,x1 (α1 )
= gh1 ,x1 (α2 ) + ∇gh1 ,x1 (α2 )
∗
α1 − α2
α1 − α2
+
1
(α1 − α2 , α1 − α2 ) ∇2 gh1 ,x1 (e
α)
2
α1 − α2
α1 − α2
,
where α
e is some complex number lying between α1 and α2 , and ∇gh1 ,x1 and ∇2 gh1 ,x1 are the Wirtinger
gradient and Hessian of gh1 ,x1 (·), respectively. Rearrange the previous inequality to obtain
|α1 − α2 | .
k∇gh1 ,x1 (α2 )k2
λmin (∇2 gh1 ,x1 (e
α))
(292)
as long as λmin ∇2 gh1 ,x1 (e
α) > 0. This calls for evaluation of the Wirtinger gradient and Hessian of
gh1 ,x1 (·).
Regarding the Wirtinger Hessian, by the assumption (291), we can invoke Lemma 52 with β = 1 to reach
max {|α1 − 1| , |α2 − 1|} ≤ 18δ. This together with Lemma 53 implies
λmin ∇2 gh1 ,x1 (e
α) ≥ 1/2,
since α
e lies between α1 and α2 .
127
For the Wirtinger gradient, since α2 is the minimizer of gh2 ,x2 (α), the first-order optimality condition
[KD09, equation (38)] requires ∇gh2 ,x2 (α2 ) = 0 , which gives
k∇gh1 ,x1 (α2 )k2 = k∇gh1 ,x1 (α2 ) − ∇gh2 ,x2 (α2 )k2 .
Plug in the gradient expression (288) to reach
k∇gh1 ,x1 (α2 ) − ∇gh2 ,x2 (α2 )k2
i
√ h
2
−2
2
−2
= 2 α2 kx1 k2 − x∗1 x\ − α2−1 (α2 ) kh1 k2 + (α2 ) h\∗ h1
h
i
2
−2
2
−2
− α2 kx2 k2 − x∗2 x\ − α2−1 (α2 ) kh2 k2 + (α2 ) h\∗ h2
2
. |α2 |
2
kx1 k2
1
2
. |α2 | kx1 k2 − kx2 k2 + x∗1 x\ − x∗2 x\ +
−
2
kx2 k2
+ kx1 − x2 k2 +
1
2
3
|α2 |
|α2 |
1
2
kh1 k2 − kh2 k2 +
2
3 kh1 k2
−
2
kh2 k2
+
2
1
|α2 |
2
|α2 |
h\∗ h1 − h\∗ h2
kh1 − h2 k2 ,
where the last line follows from the triangle inequality. It is straightforward to see that
1/2 ≤ |α2 | ≤ 2,
2
2
kx1 k2 − kx2 k2 . kx1 − x2 k2 ,
2
2
kh1 k2 − kh2 k2 . kh1 − h2 k2
under the condition (291) and the assumption kx\ k2 = kh\ k2 = 1, where the first inequality follows from
Lemma 52. Taking these estimates together reveals that
k∇gh1 ,x1 (α2 ) − ∇gh2 ,x2 (α2 )k2 . kx1 − x2 k2 + kh1 − h2 k2 .
The proof is accomplished by substituting the two bounds on the gradient and the Hessian into (292).
Further, if two vector pairs are both close to the optimizer, then their distance after alignement (w.r.t. the
optimizer) cannot be much larger than their distance without alignment, as revealed by the following lemma.
Lemma 55. Suppose that the vectors x1 , x2 , h1 , h2 ∈ CK satisfy
max x1 − x\ 2 , h1 − h\ 2 , x2 − x\ 2 , h2 − h\
2
≤ δ ≤ 1/4
(293)
for some sufficiently small constant δ > 0. Denote by α1 and α2 the minimizers of gh1 ,x1 (α) and gh2 ,x2 (α),
respectively. Then we have
2
kα1 x1 − α2 x2 k2 +
1
1
h1 −
h2
α1
α2
2
2
2
2
. kx1 − x2 k2 + kh1 − h2 k2 .
Proof. To start with, we control the magnitudes of α1 and α2 . Lemma 52 together with the assumption
(293) guarantees that
1/2 ≤ |α1 | ≤ 2
and
1/2 ≤ |α2 | ≤ 2.
Now we can prove the lemma. The triangle inequality gives
kα1 x1 − α2 x2 k2 = kα1 (x1 − x2 ) + (α1 − α2 ) x2 k2
≤ |α1 | kx1 − x2 k2 + |α1 − α2 | kx2 k2
(i)
≤ 2 kx1 − x2 k2 + 2 |α1 − α2 |
(ii)
. kx1 − x2 k2 + kh1 − h2 k2 ,
where (i) holds since |α1 | ≤ 2 and kx2 k2 ≤ 1 + δ ≤ 2, and (ii) arises from Lemma 54 that |α1 − α2 | .
kx1 − x2 k2 + kh1 − h2 k2 . Similarly,
1
1
1
1
1
h1 −
h2 =
(h1 − h2 ) +
−
h2
α1
α2
α1
α1
α2
2
2
128
1
1
1
kh1 − h2 k2 +
kh2 k2
−
α1
α1
α2
|α1 − α2 |
≤ 2 kh1 − h2 k2 + 2
|α1 α2 |
. kx1 − x2 k2 + kh1 − h2 k2 ,
≤
where the last inequality comes from Lemma 54 as well as the facts that |α1 | ≥ 1/2 and |α2 | ≥ 1/2
as
q shown above. Combining all of the above bounds and recognizing that kx1 − x2 k2 + kh1 − h2 k2 ≤
2
2
2 kx1 − x2 k2 + 2 kh1 − h2 k2 , we conclude the proof.
Finally, there is a useful identity associated with the minimizer of ge(α) as defined below.
Lemma 56. For any h1 , h2 , x1 , x2 ∈ CK , denote
α] := arg min ge(α),
α
e1 =
e1 = α] x1 and h
Let x
1
h1 ,
α]
where
then we have
e 1 − x2
x
2
2
1
h1 − h2
α
ge (α) :=
e 1 − h2
+ x∗2 (e
x1 − x2 ) = h
Proof. We can rewrite the function ge (α) as
2
2
2
2
2
+ kαx1 − x2 k2 .
e 1 − h2 ∗ h2 .
+ h
∗
1
1
h1 h2 − h∗2
h1
α
α
1
1
1
2
2
2
2
= αα kx1 k2 + kx2 k2 − αx∗1 x2 − αx∗2 x1 +
kh1 k2 + kh2 k2 − h∗1 h2 − h∗2 h1 .
αα
α
α
2
2
2
∗
ge (α) = |α| kx1 k2 + kx2 k2 − (αx1 ) x2 − x∗2 (αx1 ) +
1
α
2
2
2
kh1 k2 + kh2 k2 −
The first-order optimality condition [KD09, equation (38)] requires
∂e
g
1
1
1
2
2
= α] kx1 k2 − x∗1 x2 + ] − 2 kh1 k2 − − 2 h∗2 h1 = 0,
∂α α=α]
α
α]
α]
which further simplifies to
2
e1
e∗1 x2 = h
ke
x1 k2 − x
2
2
e1
− h∗2 h
e 1 = 1 h1 , and α] 6= 0 (otherwise ge(α] ) = ∞ and cannot be the minimizer). Furthermore,
e1 = α] x1 , h
since x
α]
this condition is equivalent to
e 1 − h2 ∗ h
e 1.
e∗1 (e
x
x1 − x2 ) = h
Recognizing that
∗
e∗1 (e
e1 − x2 (e
x
x1 − x2 ) = x∗2 (e
x1 − x2 ) + x
x1 − x2 ) = x∗2 (e
x1 − x2 ) + ke
x1 − x2 k22 ,
∗
e∗ h
e 1 − h2 = h∗ h
e 1 − h2 + h
e 1 − h2 h
e 1 − h2 = h∗ h
e 1 − h2 + kh
e 1 − h2 k2 ,
h
1
2
2
2
we arrive at the desired identity.
D.3.4
Matrix concentration inequalities
The proof for
blind deconvolution is largely built upon the concentration of random matrices that are
functions of aj a∗j . In this subsection, we collect the measure concentration results for various forms of
random matrices that we encounter in the analysis.
129
i.i.d.
Lemma 57. Suppose aj ∼ N 0, 12 IK + iN 0, 21 IK for every 1 ≤ j ≤ m, and {cj }1≤j≤m are a set of
e1 , C
e2 > 0 such that for all t ≥ 0
fixed numbers. Then there exist some universal constants C
(
)!
m
X
t
t2
∗
e
e
cj (aj aj − IK ) ≥ t ≤ 2 exp C1 K − C2 min
.
P
, Pm 2
maxj |cj |
j=1 cj
j=1
Proof. This is a simple variant of [Ver12, Theorem 5.39], which uses the Bernstein inequality and the standard
covering argument. Hence we omit its proof.
i.i.d.
Lemma 58. Suppose aj ∼ N 0, 21 IK +iN 0, 21 IK for every 1 ≤ j ≤ m. Then there exist some absolute
e1 , C
e2 , C
e3 > 0 such that for all max{1, 3C
e1 K/C
e2 }/m ≤ ε ≤ 1, one has
constants C
!
X
e2 C
e3
e
C
e
∗
e
aj aj ≥ 4C3 εm log
≤ 2 exp −
P
sup
εm log
,
ε
3
ε
|J|≤εm
j∈J
where J ⊆ [m] and |J| denotes its cardinality.
Proof. The proof relies on Lemma 57 and the union bound. First, invoke Lemma 57 to see that for any fixed
J ⊆ [m] and for all t ≥ 0, we have
X
e1 K − C
e2 |J| min t, t2 ,
P
(aj a∗j − IK ) ≥ |J| t ≤ 2 exp C
(294)
j∈J
e1 , C
e2 > 0, and as a result,
for some constants C
X
(i)
P sup
aj a∗j ≥ dεme(1 + t) ≤ P sup
|J|≤εm
j∈J
|J|=dεme
≤ P sup
(ii)
≤
|J|=dεme
X
j∈J
X
j∈J
aj a∗j ≥ dεme(1 + t)
(aj a∗j − IK ) ≥ dεmet
m
e1 K − C
e2 dεme min t, t2 ,
· 2 exp C
dεme
where dce denotes the smallest integer that is no smaller than c. Here, (i) holds since we take the supremum
over a larger set and (ii) results from (294) and the union bound. Apply the elementary inequality nk ≤
(en/k)k for any 0 ≤ k ≤ n to obtain
dεme
X
em
e1 K − C
e2 dεme min t, t2
exp C
P sup
aj a∗j ≥ dεme(1 + t) ≤ 2
dεme
|J|≤εm
j∈J
e1 K − C
e2 εm min t, t2
exp C
ε h
i
e1 K − εm C
e2 min t, t2 − 2 log(e/ε) .
= 2 exp C
≤2
e 2εm
(295)
where the second inequality uses εm ≤ dεme ≤ 2εm whenever 1/m ≤ ε ≤ 1.
e3 ≥ max{1, 6/C
e2 } and t = C
e3 log(e/ε). To see this, it is
The proof is then completed by taking C
2
e
e2 εm/3 ≤ C
e2 εmt/3, and
easy to check that min{t, t } = t since t ≥ 1. In addition, one has C1 K ≤ C
e
2 log(e/ε) ≤ C2 t/3. Combine the estimates above with (295) to arrive at
X
X
(i)
e3 εm log(e/ε) ≤ P sup
P sup
aj a∗j ≥ 4C
aj a∗j ≥ dεme(1 + t)
|J|≤εm
j∈J
|J|≤εm
130
j∈J
h
i
e1 K − εm C
e2 min t, t2 − 2 log(e/ε)
≤ 2 exp C
!
e2 C
e3
C
e2 t/3 = 2 exp −
εm log(e/ε)
≤ 2 exp −εmC
3
(ii)
e3 log(e/ε). The inequality
as claimed. Here (i) holds due to the facts that dεme ≤ 2εm and 1 + t ≤ 2t ≤ 2C
(ii) arises from the estimates listed above.
Lemma 59. Suppose m K log3 m. With probability exceeding 1 − O m−10 , we have
m
X
2
a∗j x\
bj b∗j
j=1
Proof. The identity
Pm
j=1
− IK .
r
K
log m.
m
bj b∗j = IK allows us to rewrite the quantity on the left-hand side as
m
X
j=1
m
X
2
a∗j x\ bj b∗j − IK =
j=1 |
− 1 bj b∗j ,
{z
}
2
a∗j x\
:=Zj
where the Zj ’s are independent zero-mean random matrices. To control the above spectral norm, we resort
to the matrix Bernstein inequality [Kol11, Theorem 2.7]. To this end, we first need to upper bound the
sub-exponential norm k · kψ1 (see definition in [Ver12]) of each summand Zj , i.e.
kZj k
2
ψ1
= kbj k2
a∗j x\
2
−1
2
ψ1
. kbj k2
a∗j x\
2
ψ1
.
K
,
m
where we make use of the facts that
2
kbj k2 = K/m
a∗j x\
and
We further need to bound the variance parameter, that is,
" m
m
X
X
2
∗
σ0 := E
Zj Zj = E
a∗j x\
j=1
.
m
X
j=1
bj b∗j bj b∗j =
j=1
2
2
ψ1
. 1.
2
− 1 bj b∗j bj b∗j
#
m
K
K X
bj b∗j = ,
m j=1
m
2
Pm
where the second line arises since E |a∗j x\ |2 − 1
1, kbj k22 = K/m, and j=1 bj b∗j = IK . A direct
application of the matrix
Bernstein inequality [Kol11, Theorem 2.7] leads us to conclude that with probability
exceeding 1 − O m−10 ,
m
X
Zj . max
j=1
(r
) r
K
K
K
2
log m, log m
log m,
m
m
m
where the last relation holds under the assumption that m K log3 m.
D.3.5
Matrix perturbation bounds
We also need the following perturbation bound on the top singular vectors of a given matrix. The following
lemma is parallel to Lemma 34.
131
Lemma 60. Let σ1 (A), u and v be the leading singular value, left and right singular vectors of A, respece u
e respectively.
e and v
e be the leading singular value, left and right singular vectors of A,
tively, and let σ1 (A),
e
Suppose σ1 (A) and σ1 (A) are not identically zero, then one has
e v
A−A
e ≤
σ1 (A) − σ1 (A)
q
p
e u
e
σ1 (A) u − σ1 (A)
+
2
q
p
e v
e
σ1 (A) v − σ1 (A)
2
2
e ;
e k2 + kv − v
ek2 ) A
+ (ku − u
≤
Proof. The first claim follows since
p
e
2 σ1 (A) − σ1 (A)
q
e k2 + kv − v
ek2 ) + p
σ1 (A) (ku − u
.
e
σ1 (A) + σ1 (A)
ev
e = u∗ Av − u
e ∗ Ae
σ1 (A) − σ1 (A)
e −u
e + u
e −u
ev
e v + u∗ Av
e ∗ Av
e ∗ Av
e ∗ Ae
≤ u∗ A − A
e v + ku − u
e + A
e kv − v
e k2 A
ek2 .
≤ A−A
2
With regards to the second claim, we see that
q
p
p
p
e u
e ≤
e
σ1 (A) u − σ1 (A)
σ1 (A) u − σ1 (A) u
2
=
=
Similarly, one can obtain
p
σ1 (A) v −
q
p
p
e k2 +
σ1 (A) ku − u
p
e k2 + p
σ1 (A) ku − u
e v
e
σ1 (A)
2
≤
p
Add these two inequalities to complete the proof.
2
+
σ1 (A) −
e−
σ1 (A) u
q
e
σ1 (A)
q
e u
e
σ1 (A)
e
σ1 (A) − σ1 (A)
q
.
e
σ1 (A) + σ1 (A)
ek2 + p
σ1 (A) kv − v
132
p
e
σ1 (A) − σ1 (A)
q
.
e
σ1 (A) + σ1 (A)
2
| 9 |
Expansion of polynomial Lie group integrals in terms of
certain maps on surfaces, and factorizations of permutations
Marcel Novaes
arXiv:1601.08206v2 [math-ph] 7 Feb 2016
Instituto de Fı́sica, Universidade Federal de Uberlândia, Uberlândia, MG, 38408-100, Brazil
Abstract
Using the diagrammatic approach to integrals over Gaussian random matrices, we
find a representation for polynomial Lie group integrals as infinite sums over certain
maps on surfaces. The maps involved satisfy a specific condition: they have some
marked vertices, and no closed walks that avoid these vertices. We also formulate our
results in terms of permutations, arriving at new kinds of factorization problems.
1
1.1
Introduction
Background
We are interested in integrals over the orthogonal group O(N ) of N -dimensional real matrices O satisfying OOT = 1, where T means transpose, and over the unitary group U (N ) of
N -dimensional complex matrices U satisfying U U † = 1, where † means transpose conjugate.
These groups have a unique invariant probability measure, known as Haar measure, and
integrals over them may be seen as averages over ensembles of random matrices.
We will considerQaverages of functions that are polynomial Q
in the matrix elements,
†
n
i.e. quantities like k=1 Uik ,jk Upk ,qk for the unitary group and nk=1 Oik ,jk Oikb ,jkb for the
orthogonal one (results for the unitary symplectic group Sp(N ) are very close to those
for O(N ), so we do not consider this group in detail). From the statistical point of view,
these are joint moments of the matrix elements, considered as correlated random variables.
Their study started in physics [1, 2], with applications to quantum chaos [3, 4, 5], and
found its way into mathematics, initially for the unitary group [6] and soon after for the
orthogonal and symplectic ones [7, 8]. Since then, they have been extensively explored
[9, 10], related to Jucys-Murphy elements [11, 12] and generalized to symmetric spaces [13].
Unsurprisingly, after these developments some new applications have been found in physics
[14, 15, 16, 17, 18, 19].
For the unitary group average, the result is different from zero only if the q-labels are a
permutation of the i-labels, and the p-labels are a permutation of the j-labels. The basic
building blocks of the calculation, usually called Weingarten functions, are of the kind
WgU
N (π) =
Z
dU
U (N )
n
Y
†
,
Uk,k Uk,π(k)
(1)
k=1
where π is an element of the permutation group, π ∈ Sn . In general, if there is more than
one permutation relating the sets of labels (due to repeated indices, e.g. h|U1,2 |4 i), the result
is a sum of Weingarten functions. The cycletype of a permutation in Sn is a partition of n,
α = (α1 , α2 , ...) ⊢ n = |α|, whose parts are the lengths of its cycles; the function WgU
N (π)
depends only on the cycletype of π [6, 7].
1
The result of the orthogonal group average is different from zero only if the i-labels
satisfy some matching (see Section 2.1 for matchings) and the j-labels also satisfy some
matching [7, 9, 8]. For concreteness, we may choose the i’s to satisfy only the trivial
matching, and the j’s to satisfy only some matching m of cosettype β. The basic building
blocks of polynomial integrals over the orthogonal group are
Z
O
dOO1,j1 O1,jb1 O2,j2 O2,jb2 · · · On,jn On,jnb .
(2)
WgN (β) =
Ø(N )
As examples, let us mention
Z
U
WgN ((2)) =
U (N )
†
†
dU U1,1 U1,2
U2,2 U2,1
=
−1
,
(N − 1)N (N + 1)
corresponding to the permutation π = (12), which has cycletype (2), and
Z
−1
O
dOO1,1 O1,2 O2,2 O2,1 =
WgN ((2)) =
,
(N − 1)N (N + 2)
Ø(N )
(3)
(4)
corresponding to the matching {{1, b
2}, {2, b
1}} for the j’s, which has cosettype (2).
Our subject is the combinatorics associated with the large N asymptotics of these integrals. For the unitary case, this has been addressed in previous works [4, 6, 11, 20, 21],
where connections with maps and factorizations of permutations have already appeared.
However, our approach is different, and the maps/factorizations we encounter are new. Our
treatment of the orthogonal case is also new. We proceed by first considering Weingarten
functions in the context of ensembles of random truncated matrices; then relating those to
Gaussian ensembles, and finally using the rich combinatorial structure of the latter.
In what follows we briefly present our results. A review of some basic concepts can
be found in Section 2. Results for the unitary group are obtained in Section 3, while the
orthogonal group is considered in Section 4. In an Appendix we discuss several different
factorization problems that are related to 1/N expansions of Weingarten functions.
1.2
1.2.1
Results and discussion for the unitary group
Maps
For the unitary group, we represent the Weingarten function as an infinite sum over orientable maps.
Theorem 1 If α is a partition with ℓ(α) parts, then
WgU
N (α) =
(−1)ℓ(α) X χ
N
N 2|α|+ℓ(α) χ
X
(−1)V (w) ,
(5)
w∈B(α,χ)
where the first sum is over Euler characteristic, and V (w) is the number of vertices in the
map w.
As we will discuss with more detail in Section 3.2 the (finite) set B(α, χ) contains all
maps, not necessarily connected, with the following properties: i) they are orientable with
Euler characteristic χ; ii) they have ℓ(α) marked vertices with valencies (2α1 , 2α2 , ...); iii)
all other vertices have even valence larger than 2; iv) all closed walks along the boundaries
of the edges (which we see as ribbons) visit the marked vertices in exactly one corner; v)
they are face-bicolored and have |α| faces of each color.
As an example, the 1/N expansion for the function in (3), for which α = (2), starts
−
1
1
1
− 5 − 7 − ··· ,
3
N
N
N
2
(6)
Figure 1: A few of the maps that contribute to the Weingarten function (3), according to
the expansion (5). They are face-bicolored, with different colors being indicated by solid
and dashed lines, respectively. All boundary walks visit the marked vertex, which is the
black disk. Their contributions are discussed in the text.
The first two orders come from maps like the ones shown in Figure 1. The map in panel
a) is the only leading order contribution. It has χ = 2 and V = 2, so its value is indeed
−1/N 3 . The next order comes from elements of B((2), 0). There are in total 21 different
maps in that set having four vertices of valence 4; one of them is shown in panel b). There
are 28 different maps in that set having two vertices of valence 4 and one vertex of valence
6; one of them is shown in panel c). Finally, there are 8 different maps in that set having
one vertex of valence 4 and one vertex of valence 8; one of them is shown in panel d). They
all have χ = 0, and their combined contribution is (−21 + 28 − 8)/N 5 = −1/N 5 .
1.2.2
Factorizations
By a factorization of Π we mean an ordered pair, f ≡ (τ1 , τ2 ), such that Π = τ1 τ2 . We call
Π the ‘target’ of the factorization f . If Π ∈ Sk , then the Euler characteristic of f is given
by χ = ℓ(Π) − k + ℓ(τ1 ) + ℓ(τ2 ).
The number of factorizations of a permutation depends only on its cycletype, so it makes
sense to restrict attention to specific representatives. Call a permutation ‘standard’ if each
of its cycles contains only adjacent numbers, and the cycles have weakly decreasing length
when ordered with respect to least element. For example, (123)(45) is standard. Since
WgU
N (π) depends only on the cycletype of π, we may take π to be standard.
As we will discuss with more detail in Section 3.3, the relevant factorizations for WgU
N (π)
are those whose target is of the kind Π = πρ, where the ‘complement’ ρ is a standard
permutation acting on the set {n + 1, ..., n + m}, for some m ≥ 0. They satisfy the following
properties: i) they have Euler characteristic χ; ii) the complement ρ has no fixed points;
iii) every cycle of the factors τ1 , τ2 must have exactly one element in {1, ..., n}. Notice that
the last condition implies ℓ(τ1 ) = ℓ(τ2 ) = n. Let the (finite) set of all such factorizations be
denoted F(π, χ). Then, we have
3
Theorem 2 Let π ∈ Sn be a standard permutation, then
WgU
N (π) =
Q
(−1)ℓ(π) X χ
N
N 2n+ℓ(π) χ
X
f ∈F (π,χ)
(−1)ℓ(Π)
,
zρ
(7)
where zρ = j j vj vj !, with vj the number of times part j occurs in the cycletype of the
complement ρ.
Theorem 2 follows from Theorem 1 by a simple procedure for associating factorizations
to maps in B(α, χ), discussed in Section 3.3. Associations of this kind are well known
[22, 23, 24].
For example, the leading order in Eq.(6), for which π = (12), has a contribution from
the factorization (12)(34) = (14)(23) · (13)(24), which has ρ = (34) and is one of two
factorizations that can be associated with the map in Figure 1a. Several factorizations can
be associated with the other maps in Figure 1. We mention one possibility for each of them:
(12)(34)(56)(78) = (148)(25763) · (13)(285746) for the map in Figure 1b; (12)(345)(67) =
(17)(23546) · (16)(27435) for the map in Figure 1c; (12)(3456) = (164)(253) · (1365)(24) for
the map in Figure 1d. Notice how they have different complements.
Other factorization problems are also related to the coefficients in the 1/N expansion
for WgU
N (π) (see the Appendix). Collins initially showed [6] that they can be expressed in
terms of the number of ‘Proper’ factorizations of π. Matsumoto and Novak later showed
[11] that the coefficients count Monotone factorizations. On the other hand, Berkolaiko and
Irving recently defined [25] Inequivalent-Cycle factorizations and showed that
X
X
(−1)r Iα,χ (r) =
(−1)r Pα,χ (r) = (−1)n+ℓ(α) Mα,χ ,
(8)
r≥0
r≥0
where Iα,χ (r) and Pα,χ (r) are, respectively, the numbers of Inequivalent-Cycle and Proper
factorizations of π, with cycletype α ⊢ n, into r factors having Euler characteristic χ,
while Mα,χ is the number of Monotone factorizations of π with Euler characteristic χ.
Interestingly, our factorizations satisfy a very similar sum rule, namely
X
f ∈F (π;χ)
(−1)ℓ(Π)
= (−1)n+ℓ(α) Mα,χ .
zρ
(9)
An important difference between (9) and (8) is that the factorizations in (9) take place in
Sn+m for some m ≥ 0, while all those in (8) take place in Sn .
Notice that our factorizations must satisfy condition iii), which is related to the distribution of the elements from the set {1, ..., n} among the cycles of the factors τ1 , τ2 . This is
close in spirit to the kind of questions studied by Bóna, Stanley and others [26, 27], which
count factorizations satisfying some placement conditions on the elements of the target.
1.3
1.3.1
Results and discussion for the orthogonal group
Maps
For the orthogonal group, we represent the Weingarten function as an infinite sum over
maps (orientable and non-orientable).
Theorem 3 Let β be a partition with ℓ(β) parts, then
WgO
N +1 (β)
(−2)ℓ(β) X χ
N
= 2|β|+ℓ(β)
N
χ
4
X
w∈N B(β,χ)
1
−
2
V (w)
,
(10)
Figure 2: Two of the maps that contribute to the Weingarten function (4), according to
the expansion (10). They are not orientable: some of the ribbons have ‘twists’.
where the first sum is over Euler characteristic, and V (w) is the number of vertices in the
map w.
As we will discuss with more detail in Section 4.2 the (finite) set N B(β, χ) contains all
maps, not necessarily connected or orientable, with the following properties: i) they have
Euler characteristic χ; ii) they have ℓ(β) marked vertices with valencies (2β1 , 2β2 , ...); iii)
all other vertices have even valence larger than 2; iv) all closed walks along the boundaries
of the edges visit the marked vertices in exactly one corner; v) they are face-bicolored and
have |β| faces of each color.
Notice that the expansion is in powers of N −1 for the group O(N + 1) and not for O(N ).
For example, the first terms in the expansion of the function in Eq.(4) at dimension N + 1
are
1
4
13
−1
= − 3 + 4 − 5 ...
(11)
N (N + 1)(N + 3)
N
N
N
The leading order comes from the map shown in Figure 1a, which is orientable. The next
order comes from non-orientable maps in the set N B((2), 1). There are in total 8 different
maps having two vertices of valence 2; one of them is shown in Figure 2a. There are in total
4 different maps having one vertex of valence 2 and one vertex of valence 3; one of them is
shown in Figure 2b. Their combined contribution is (8 − 4)/N 4 = 4/N 4 .
Remark It is known [13] that the Weingarten function of the unitary symplectic group
Sp(N ) is proportional to WgO
−2N . Therefore the appropriate dimension for map expansion
in Sp(N ) is also different from N : it has to be Sp(N − 1/2) (a non-integer dimension is to
be understood by keeping in mind that Weingarten functions are rational functions of N ).
Interestingly, if we assign a parameter α = 2, 1, 1/2 for orthogonal, unitary and symplectic
groups, respectively (sometimes called Jack parameter), then the appropriate dimensions
for the map expansion in powers of N −1 can be written as N + α − 1.
1.3.2
Factorizations
c = {b
c Define the
Let [n] = {1, ..., n} and [n]
1, ..., n
b}. We consider the action of S2n on [n]∪ [n].
‘hat’ involution on permutations as π
b(a) = π −1 (b
a) (assuming b
b
a = a). We call permutations
that are invariant under this transformation ‘palindromic’, e.g. (12b
2b
1) and (12)(b
2b
1) are
palindromic.
Given a partition β, define π ∈ Sn to be the standard permutation that has cycletype β
and define Π = πρ, where the ‘complement’ ρ is a standard permutation acting on the set
{n + 1, ..., n + m} for some m ≥ 0. Define the fixed-point free involutions p1 , whose cycles
are (a b
a) for 1 ≤ a ≤ n + m, and p2 whose cycles are of the type (b
a a + 1), but with the
additions computed modulo the cycle lengths of Π, i.e.
\
p2 = (b
1 2)(b
2 3) · · · (βb1 1)(β\
1 + 1 β1 + 2) · · · (β1 + β2 β1 + 1) · · · .
5
(12)
b
They provide a factorization of the palindromic version of the target, p2 p1 = Π Π.
b
The problem we need to solve is to find all factorizations Π Π = f2 f1 , that satisfy the
following properties: i) their Euler characteristic, defined as ℓ(Π) − m − n + ℓ(f1 ) + ℓ(f2 ),
is χ; ii) the complement ρ has no fixed points; iii) the factors may be written as f1 = θp1
and f2 = p2 θ for some fixed-point free involution θ; iv) f1 is palindromic; v) every cycle
c Clearly, the crucial quantity
of the factors f1 , f2 contains exactly one element in [n] ∪ [n].
is actually θ. Let the (finite) set of all pairs (Π, θ) satisfying these conditions be denoted
N F(β, χ). Then, we have
Theorem 4 For a given partition β of length ℓ(β),
WgO
N +1 (β)
Q
(−2)ℓ(β) X χ
N
= 2|β|+ℓ(β)
N
χ
X
(Π,θ)∈N F (β,χ)
1
zρ
1
−
2
ℓ(Π)
,
(13)
where zρ = j j vj vj !, with vj the number of times part j occurs in the cycletype of the
complement ρ.
Theorem 4 follows from Theorem 3 by a simple procedure for describing combinatorially
the maps in N B(β, χ), discussed in Section 4.3. Such kind of descriptions are well known
[28, 29, 30].
For example, the leading order in Eq.(11) has a contribution from the pair Π = (12)(34),
θ = (1b
3)(2b
4)(4b
2)(3b
1). The next order comes from factorizations associated with the maps in
Figure 2. We mention one for each of them: Π = (12)(34)(56), θ = (1b
3)(b
13)(25)(b
24)(b
46)(b
5b
6)
b
b
b
b
b
for a) and Π = (12)(345), θ = (13)(13)(25)(24)(45) for b).
Other factorizations are related to the coefficients in the 1/N expansion of orthogonal
Weingarten functions, as we discuss in the Appendix. Matsumoto has shown [31] that
these coefficients count certain factorizations that are matching analogues of the Monotone family. Following the steps in [6] we show that the coefficients can be expressed in
terms of analogues of Proper factorizations. These relations hold for O(N ), however, not
O(N + 1). Therefore the relation to our results is less direct. On the other hand, Berkolaiko and Kuipers have shown [32] that certain ‘Palindromic Monotone’ factorizations are
related to the 1/N expansion for O(N + 1). The appropriate analogue of Inequivalent-Cycle
factorizations is currently missing.
1.4
Connection with physics
This work was originally motivated by applications in physics, in the semiclassical approximation to the quantum mechanics of chaotic systems. Without going into too much
detail, the problem in question was to construct correlated sets of chaotic trajectories
that are responsible for the relevant quantum effects in the semiclassical limit [33]. Connections between this topic and factorizations of permutations had already been noted
[34, 32, 35, 36, 37]. In [18] we suggested that such sets of trajectories could be obtained
from the diagrammatic expansion of a certain matrix integral, with the proviso that the
dimension of the matrices had to be set to zero after the calculation. When we realized the
connection to truncated random unitary matrices, this became our Theorem 1. The suggestion from [18], initially restricted to systems without time-reversal symmetry, was later
extended to systems that have this symmetry [19]; the connection with truncated random
orthogonal matrices then gave rise to our Theorem 3.
1.5
Acknowledgments
The connection between our previous work [18] and truncated unitary matrices was first
suggested by Yan Fyodorov.
6
Figure 3: Three examples of graphs associated with matchings. Edges coming from the
trivial matching, t, are drawn in dashed line. In a) we have m = {{1, b
2}, {2, b
1}, {3, b
3}},
which can be produced as (12)(t) or (b
1b
2)(t), among others ways; permutations (12) and
(b
1b
2) thus belong to the same coset in S2n /Hn . In b) m = {{2, b
3}, {3, b
2}}, which can be
produced as (23)(t) or (b
2b
3)(t), among others. The matchings in both a) and b) have the
same cosettype, namely (2, 1); the permutations producing them therefore belong to the
same double coset in Hn \S2n /Hn . In c) m = {{1, 2}, {b
1, b
2}, {3, b
4}, {4, b
5}, {5, b
3}}, and its
cosettype is (3, 2).
This work had financial support from CNPq (PQ-303318/2012-0) and FAPEMIG (APQ00393-14).
2
2.1
Basic concepts
Partitions, Permutations and Matchings
By α ⊢ n we mean
Pα is a partition of n, i.e. a weakly decreasing sequence of positive
integers such that i αi = |α| = n. The number of non-zero parts is called the length of
the partition and denoted by ℓ(α).
The group of permutations of n elements is Sn . The cycletype of π ∈ Sn is the partition
α ⊢ n whose parts are the lengths of the cycles of π. The length of a permutation is the
number of cycles it has, ℓ(π) = ℓ(α), while its rank is r(π) = n − ℓ(π) = r(α). We multiply
permutations from right to left, e.g. (13)(12) = (123). The conjugacyQ
class Cλ is the set of
all permutations with cycletype λ. Its size is |Cλ | = n!/zλ , with zλ = j j vj vj !, where vj is
the number of times part j occurs in λ.
c = {b
c be a set with 2n elements. A matching
Let [n] = {1, ..., n}, [n]
1, ..., n
b} and let [n]∪ [n]
on this set is a collection of n disjoint subsets with two elements each. The set of all such
matchings is Mn . The trivial matching is defined as t = {{1, b
1}, {2, b
2}, ..., {n, n
b}}. A
permutation π acts on matchings by replacing blocks like {a, b} by {π(a), π(b)}. If π(t) = m
we say that π produces the matching m.
c two
Given a matching m, let Gm be a graph with 2n vertices having labels in [n] ∪ [n],
vertices being connected by an edge if they belong to the same block in either m or t.
Since each vertex belongs to two edges, all connected components of Gm are cycles of even
length. The cosettype of m is the partition of n whose parts are half the number of edges in
the connected components of Gm . See examples in Figure 3. A permutation has the same
cosettype as the matching it produces.
c It
We realize the group S2n as the group of all permutations acting on the set [n] ∪ [n].
n
has a subgroup called the hyperoctahedral, Hn , with |Hn | = 2 n! elements, which leaves
invariant the trivial matching and is generated by permutations of the form (a b
a) or of
b
the form (a b)(b
a b). The cosets S2n /Hn ∼ Mn can therefore be represented by matchings.
The trivial matching identifies the coset of the identity permutation. We may inject Mn
into S2n by using fixed-point free involutions. This is done by the simple identification
m = {{a1 , a2 }, {a3 , a4 }, ...} 7→ σm = (a1 a2 )(a3 a4 ) · · · .
Figure 4: Wick's rule for complex Gaussian random matrices. Starting from $\langle\mathrm{Tr}(ZZ^\dagger)^3\rangle = \sum_{a_1,a_2,a_3}\sum_{b_1,b_2,b_3} Z_{a_1 b_1} Z^\dagger_{b_1 a_2} Z_{a_2 b_2} Z^\dagger_{b_2 a_3} Z_{a_3 b_3} Z^\dagger_{b_3 a_1}$, we arrange the elements around a vertex (left panel). Then, we produce all possible connections between the marked ends of the arrows, respecting orientation. Two of them are shown. The map in the middle panel has only one face of each color, and $\chi = 0$. The map in the right panel has three faces of one color and a single face of the other, with $\chi = 2$.
The double cosets $H_n\backslash S_{2n}/H_n$, on the other hand, are indexed by partitions of $n$: two permutations belong to the same double coset if and only if they have the same cosettype [38] (hence this terminology). We denote by $K_\lambda$ the double coset of all permutations with cosettype $\lambda$. Its size is $|K_\lambda| = |H_n|\,|C_\lambda|\,2^{r(\lambda)}$.
Given a sequence of $2n$ numbers, $(i_1, ..., i_{2n})$, we say that it satisfies the matching $m$ if the elements coincide when paired according to $m$. This is quantified by the function $\Delta_m(i) = \prod_{b\in m}\delta_{i_{b_1}, i_{b_2}}$, where the product runs over the blocks of $m$ and $b_1, b_2$ are the elements of block $b$.
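As a quick consistency check of the formula for $|K_\lambda|$ above, for $n = 1$ the only partition is $\lambda = (1)$, and $|K_{(1)}| = |H_1|\,|C_{(1)}|\,2^{r((1))} = 2\cdot 1\cdot 2^0 = 2 = |S_2|$, as it must be, since in that case there is a single double coset.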
2.2
Maps
Maps are graphs drawn on a surface, with a well defined sense of rotation around each
vertex. We represent the edges of a map by ribbons. These ribbons meet at the vertices,
which are represented by disks, and the region where two ribbons meet is called a corner.
It is possible to go from one vertex to another by walking along a boundary of a ribbon.
Upon arriving at a vertex, a walker may move around the boundary of the disk to a boundary
of the next ribbon, and then depart again. Such a walk we call a boundary walk. A boundary
walk that eventually retraces itself is called closed and delimits a face of the map.
Let V , E and F be respectively the numbers of vertices, edges and faces of a map.
The Euler characteristic of the map is χ = V − E + F , and it is additive in the connected
components (we do not require maps to be connected, and each connected component is
understood to be drawn on a different surface).
In all of the maps used in this work, ribbons have boundaries of two different colors. For
convenience, we represent those colors by simply drawing these boundaries with two types
of lines: dashed lines and solid lines. Ribbons are attached to vertices in such a way that all
corners and all faces have a well defined color, i.e. our maps are face-bicolored. Examples
are shown in Figures 1 and 2.
2.3
Non-hermitian Gaussian random matrices
We consider N-dimensional matrices for which the elements are independent and identically distributed Gaussian random variables, the so-called Ginibre ensembles [39]. For real matrices we will use the notation M; for complex ones we use Z.
Figure 5: Wick's rule for real Gaussian random matrices. Starting from $\langle\mathrm{Tr}(MM^T)^3\rangle$, we arrange the elements around a vertex (left panel). Then, we produce all possible connections between the marked ends of the arrows. One of them is shown, which has two faces of one color and a single face of the other, with $\chi = 1$. Notice that the boundaries of the edges are not oriented, and that two of them have twists.
Normalization constants for these ensembles are defined as
$$Z_R = \int dM\, e^{-\frac{\Omega}{2}\mathrm{Tr}(MM^T)}, \qquad Z_C = \int dZ\, e^{-\Omega\,\mathrm{Tr}(ZZ^\dagger)}. \qquad (14)$$
They can be computed (as we do later) using singular value decomposition. Average values are denoted by
$$\langle f(M)\rangle = \frac{1}{Z_R}\int dM\, e^{-\frac{\Omega}{2}\mathrm{Tr}(MM^T)} f(M), \qquad (15)$$
and
$$\langle f(Z)\rangle = \frac{1}{Z_C}\int dZ\, e^{-\Omega\,\mathrm{Tr}(ZZ^\dagger)} f(Z), \qquad (16)$$
the meaning of $\langle\cdot\rangle$ being clear from context. We have the simple covariances
$$\langle M_{ab} M_{cd}\rangle = \frac{1}{\Omega}\,\delta_{ac}\delta_{bd}, \qquad (17)$$
and
$$\langle Z_{ab} Z^*_{cd}\rangle = \frac{1}{\Omega}\,\delta_{ac}\delta_{bd}, \qquad \langle Z_{ab} Z_{cd}\rangle = \langle Z^*_{ab} Z^*_{cd}\rangle = 0. \qquad (18)$$
Polynomial integrals may be computed using Wick's rule, which is a combinatorial prescription for combining covariances. It simply states that, since the elements are independent, the average of a product can be decomposed in terms of products of covariances. In the complex case, we may consider the elements of $Z$ fixed and then permute the elements of $Z^\dagger$ in all possible ways,
$$\left\langle \prod_{k=1}^{n} Z_{a_k b_k} Z^\dagger_{c_k d_k} \right\rangle = \frac{1}{\Omega^n} \sum_{\pi\in S_n} \prod_{k=1}^{n} \delta_{a_k, d_{\pi(k)}}\,\delta_{b_k, c_{\pi(k)}}. \qquad (19)$$
For example,
$$\langle Z_{a_1 b_1} Z^*_{d_1 c_1} Z_{a_2 b_2} Z^*_{d_2 c_2}\rangle = \frac{1}{\Omega^2}\,\delta_{a_1,d_1}\delta_{b_1,c_1}\delta_{a_2,d_2}\delta_{b_2,c_2} + \frac{1}{\Omega^2}\,\delta_{a_1,d_2}\delta_{b_1,c_2}\delta_{a_2,d_1}\delta_{b_2,c_1}. \qquad (20)$$
In the real case, we must consider all possible matchings among the elements,
$$\left\langle \prod_{k=1}^{2n} M_{a_k b_k} \right\rangle = \frac{1}{\Omega^n} \sum_{m\in M_n} \Delta_m(a)\,\Delta_m(b). \qquad (21)$$
For example,
$$\langle M_{a_1 b_1} M_{a_2 b_2} M_{a_3 b_3} M_{a_4 b_4}\rangle = \frac{1}{\Omega^2}\,\delta_{a_1,a_2}\delta_{a_3,a_4}\delta_{b_1,b_2}\delta_{b_3,b_4} + \frac{1}{\Omega^2}\,\delta_{a_1,a_3}\delta_{a_2,a_4}\delta_{b_1,b_3}\delta_{b_2,b_4} + \frac{1}{\Omega^2}\,\delta_{a_1,a_4}\delta_{a_2,a_3}\delta_{b_1,b_4}\delta_{b_2,b_3}. \qquad (22)$$
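The covariances and the Wick examples above are easy to test numerically. The following Python/NumPy sketch is illustrative code added by the editor (not from the original paper); it assumes the convention that the density is proportional to $e^{-\Omega\,\mathrm{Tr}(ZZ^\dagger)}$, so that $\langle|Z_{ab}|^2\rangle = 1/\Omega$, and compares sample averages with the predictions $1/\Omega$ and $1/\Omega^2$ from Eqs. (18) and (20).

import numpy as np

rng = np.random.default_rng(0)
omega, n_dim, n_samples = 4.0, 3, 100_000

# complex Ginibre samples with <|Z_ab|^2> = 1/omega
z = (rng.standard_normal((n_samples, n_dim, n_dim))
     + 1j * rng.standard_normal((n_samples, n_dim, n_dim))) / np.sqrt(2.0 * omega)

est2 = np.mean(np.abs(z[:, 0, 0]) ** 2)                             # <Z_11 Z*_11>
est4 = np.mean(np.abs(z[:, 0, 0]) ** 2 * np.abs(z[:, 1, 1]) ** 2)   # <Z_11 Z*_11 Z_22 Z*_22>

print(est2, 1.0 / omega)        # Eq. (18) predicts 1/omega
print(est4, 1.0 / omega ** 2)   # Eq. (20): only the identity permutation survives here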
These Wick’s rules have a well known diagrammatic interpretation (see, e.g., [40, 41, 42,
43, 44]). In the complex case, matrix elements are represented by ribbons having borders
oriented in the same direction but with different colors. Ribbons from elements of Z have
a marked head, while ribbons from elements of Z † have a marked tail. Ribbons coming
from traces are arranged around vertices, so that all marked ends are on the outside and
all corners have a well defined color. Wick’s rule consists in making all possible connections
between ribbons, using marked ends, respecting orientation. This produces a map (not
necessarily connected). According to Eq. (18), each edge leads to a factor $\Omega^{-1}$. In the real case, the boundaries of the ribbons are not oriented and the maps need not be orientable:
the edges may contain a ‘twist’. We show an example for the complex case in Figure 4, and
an example for the real case in Figure 5.
3
Unitary Group
3.1
Truncations
Let U be a random matrix uniformly distributed in U (N ) with the appropriate normalized
Haar measure. Let A be the M1 × M2 upper left corner of U , with N ≥ M1 + M2 and
M1 ≤ M2 . It is known [45, 46, 47, 48, 49] that A, which satisfies AA† < 1M1 , becomes
distributed with probability density given by
$$P(A) = \frac{1}{Y_1}\,\det(1 - AA^\dagger)^{N_0}, \qquad (23)$$
where
$$N_0 = N - M_1 - M_2 \qquad (24)$$
and $Y_1$ is a normalization constant.
The value of Y1 can be computed using the singular value decomposition A = W DV ,
where W and V are matrices from U (M1 ) and U (M2 ), respectively. Matrix D is real,
diagonal and non-negative. Let T = D 2 = diag(t1 , t2 , ...). Then [40, 50, 51],
$$Y_1 = \int_{U(M_1)} dW \int_{U(M_2)} dV \int_0^1 \prod_{i=1}^{M_1} dt_i\, (1 - t_i)^{N_0}\, t_i^{M_2 - M_1}\, |\Delta(t)|^2, \qquad (25)$$
where
$$\Delta(t) = \prod_{i=1}^{M_1}\prod_{j=i+1}^{M_1} (t_j - t_i). \qquad (26)$$
If we denote the angular integrals by
$$\int_{U(M_1)} dW \int_{U(M_2)} dV = V_1, \qquad (27)$$
then Selberg's integral tells us that [52, 53]
$$Y_1 = V_1 \prod_{j=1}^{M_1} \frac{\Gamma(j+1)\,\Gamma(M_2+1-j)\,\Gamma(N-M_2-j+1)}{\Gamma(N-j+1)}. \qquad (28)$$
Consider now an even smaller subblock of $U$, which is contained in $A$. Namely, let $\widetilde{U}$ be the $N_1 \times N_2$ upper left corner of $U$, with $N_1 \le M_1$ and $N_2 \le M_2$. We shall make use of the obvious fact that integrals involving matrix elements of $\widetilde{U}$ can be computed either by integrating over $U$ or over $A$. In particular, the quantity
$$\mathrm{Wg}^U_N(\pi) = \int_{U(N)} dU \prod_{k=1}^{n} \widetilde{U}_{k,k}\, \widetilde{U}^\dagger_{k,\pi(k)}, \qquad (29)$$
with $n \le N_1, N_2$ and $\pi \in S_n$, can also be written as
$$\mathrm{Wg}^U_N(\pi) = \frac{1}{Y_1} \int_{AA^\dagger < 1_{M_1}} dA\, \det(1 - AA^\dagger)^{N_0} \prod_{k=1}^{n} A_{k,k}\, A^\dagger_{k,\pi(k)}. \qquad (30)$$
Notice that, although this may not be evident at first sight, the right-hand-side of
equation (30) is actually independent of M1 and M2 .
3.2
Sum over maps
The key to the diagrammatic formulation of our integral is the identity
$$\det(1 - AA^\dagger)^{N_0} = e^{N_0\,\mathrm{Tr}\log(1 - AA^\dagger)} = e^{-N_0 \sum_{q=1}^{\infty}\frac{1}{q}\mathrm{Tr}(AA^\dagger)^q}. \qquad (31)$$
We shall consider the first term in the series separately from the rest, and incorporate it into the measure, i.e. we will write
$$dA\, e^{-N_0 \sum_{q=1}^{\infty}\frac{1}{q}\mathrm{Tr}(AA^\dagger)^q} = d_G(A)\, e^{-N_0 \sum_{q=2}^{\infty}\frac{1}{q}\mathrm{Tr}(AA^\dagger)^q}, \qquad (32)$$
where $d_G(A)$ is a Gaussian measure,
$$d_G(A) = dA\, e^{-N_0\,\mathrm{Tr}(AA^\dagger)}. \qquad (33)$$
We have
$$\mathrm{Wg}^U_N(\pi) = \frac{1}{Y_1} \int_{AA^\dagger < 1_{M_1}} d_G(A)\, e^{-N_0 \sum_{q=2}^{\infty}\frac{1}{q}\mathrm{Tr}(AA^\dagger)^q} \prod_{k=1}^{n} A_{k,k}\, A^\dagger_{k,\pi(k)}. \qquad (34)$$
Taking into account that the series in the exponent diverges for AA† ≥ 1M1 and that
e−∞ = 0, we extend the integration to general matrices A,
$$\mathrm{Wg}^U_N(\pi) = \frac{1}{Y_1} \int d_G(A)\, e^{-N_0 \sum_{q=2}^{\infty}\frac{1}{q}\mathrm{Tr}(AA^\dagger)^q} \prod_{k=1}^{n} A_{k,k}\, A^\dagger_{k,\pi(k)}. \qquad (35)$$
Now we are in the realm of Gaussian integrals, and may apply Wick’s rule. For each
cycle of π, the elements of A and A† in the last product can be arranged in counterclockwise
order around vertices. This produces what we call ‘marked’ vertices, in number of ℓ(π).
Formally expanding the exponential in Eq.(35) as a Taylor series in N0 will produce other
vertices, let us call them ‘internal’, all of them of even valence larger than 2. This leads
to an infinite sum over maps with arbitrary numbers of internal vertices and edges. The
contribution of a map will be proportional to (−1)v N0v−E , if it has v internal vertices and
E edges, which can be written as
(−1)v N0v−E =
(−1)ℓ(π)
F +ℓ(π)
N0
11
N0χ (−1)V .
(36)
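To spell out the bookkeeping behind (36): writing $V = v + \ell(\pi)$ for the total number of vertices, the Euler characteristic $\chi = V - E + F$ gives $v - E = \chi - F - \ell(\pi)$, while $(-1)^v = (-1)^{V}(-1)^{\ell(\pi)}$; substituting these two identities yields the right-hand side.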
However, the application of Wick's rule may lead to closed boundary walks that visit only the internal vertices and avoid the marked ones. If a map has $r_1$ closed boundary walks of one color and $r_2$ closed boundary walks of the other color that avoid the marked vertices, its contribution will be proportional to $M_1^{r_1} M_2^{r_2}$. Crucially, since we know that $\mathrm{Wg}^U_N(\pi)$ is actually independent of $M_1, M_2$, we are free to consider only the cases $r_1 = r_2 = 0$, or equivalently to take the simplifying limits $M_1 \to 0$ and $M_2 \to 0$. We will therefore be left only with maps in which all closed boundary walks visit the marked vertices.
Another point we must address is that the normalization constant $Y_1$ is not equal to the corresponding Gaussian one, $Z_C = \int d_G(A)$, so that the diagrammatic expression for $\mathrm{Wg}^U_N(\pi)$ will be multiplied by $Z_C/Y_1$. Using again the singular value decomposition of $A$, and
$$\int_0^{\infty} \prod_{i=1}^{M_1} dt_i\, e^{-N_0 t_i}\, t_i^{M_2 - M_1}\, |\Delta(t)|^2 = \left(\frac{1}{N_0}\right)^{M_1 M_2} \prod_{j=1}^{M_1} \Gamma(j+1)\,\Gamma(M_2+1-j), \qquad (37)$$
which can be obtained as a limiting case of the Selberg integral, we arrive at
$$Z_C = V_1\, \frac{1}{N_0^{M_1 M_2}} \prod_{j=1}^{M_1} j!\,(M_2 - j)!, \qquad (38)$$
and, therefore,
$$\frac{Z_C}{Y_1} = \frac{1}{N_0^{M_1 M_2}} \prod_{j=1}^{M_1} \frac{(N-j)!}{(N-M_2-j)!}. \qquad (39)$$
In the end, we have the simplification
$$\lim_{M_2 \to 0} \frac{Z_C}{Y_1} = 1. \qquad (40)$$
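Indeed, in (39) each factor $(N-j)!/(N-M_2-j)!$ (read via the Gamma function for non-integer argument) tends to 1 as $M_2 \to 0$, and so does the prefactor $N_0^{-M_1 M_2}$, which gives (40).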
The limit M1 → 0 is trivial. Notice that the limit of N0 is simply N .
Because of the way we arranged the elements around the marked vertices, our maps will
always have 2n faces, being n of each color. We have therefore produced the maps in B(α, χ)
and proved our Theorem 1.
3.3
Sum over factorizations
Consider a standard permutation π ∈ Sn with cycletype α ⊢ n. We shall associate factorizations of π with the maps in B(α, χ). In order to do that, we must label the corners of
the maps. We proceed as follows.
Consider first the marked vertex with largest valency, and pick a corner delimited by
solid line. Label this corner and the corner following it in counterclockwise order with
number 1. Then label the next two corners with the number 2, and so on until the label
α1 is used twice. Proceed to another marked vertex in weakly decreasing order of valency,
and repeat the above with integers from α1 + 1 to α1 + α2 . Repeat until all marked vertices
have been labelled, producing thus the cycles of π. The same procedure is then applied to
the internal vertices, producing the cycles of another standard permutation ρ, acting on the
set {n + 1, ..., E}, where E = n + m is the number of edges in the map. Notice that ρ has
no fixed points, since all internal vertices of maps in B(α, χ) have even valence larger than
2.
Let Π = πρ. See Figure 6, where Π = (12)(34) for panel a), Π = (12)(34)(56)(78) for
panel b), Π = (12)(345)(67) for panel c), and Π = (12)(3456) for panel d).
Define the permutation ω1 to be such that its cycles have the integers in the order
they are visited by the arrows in solid line. In the example of Figure 6, this would be
Figure 6: Labeling the maps from Figure 1, we can associate with them certain factorizations
of permutations.
ω1 = (14)(23) for panel a), ω1 = (184)(23675) for panel b), ω1 = (17)(26453) for panel c)
and ω1 = (146)(235) for panel d). The cycles of ω1 correspond to the closed boundary walks
in solid line.
Permutation τ2 is defined analogously in terms of the arrows in dashed lines. In Figure
6, this would be τ2 = (13)(24) for panel a), τ2 = (13)(285746) for panel b), τ2 = (16)(27435)
for panel c) and τ2 = (1365)(24) for panel d). The cycles of τ2 correspond to the closed
boundary walks in dashed line.
Consider an initial integer i. The arrow in dashed line which departs from the corner
labelled by i arrives at the corner labelled by τ2 (i). On the other hand, the image of i
under the permutation Π corresponds to the label of an outgoing arrow in solid line which,
following ω1 , also arrives at τ2 (i). Therefore, we have, by construction, ω1 Π = τ2 or,
equivalently, writing $\tau_1 = \omega_1^{-1}$, we have the factorization
$$\Pi = \tau_1\tau_2. \qquad (41)$$
For the maps in B(α, χ) all boundary walks visit the marked vertices, which means that
all cycles of τ1 and of τ2 must have exactly one element in the set {1, ..., n}. Therefore, the
permutations satisfy the conditions we listed in Section 1.2.2.
When we label the vertices of the map to produce a factorization, there are two kinds of
ambiguities. First, for a vertex of valency 2j there are j possible choices for the first corner
to be labelled. Second, if there are mj vertices of valency j, there are mj ! ways
Q to order
them. Hence, to a map for which the complement is ρ there correspond zρ = j j mj mj !
factorizations, where mj is the multiplicity of part j in the cycletype of ρ. The sum in (5)
can indeed be written as
$$\mathrm{Wg}^U_N(\pi) = \frac{1}{N^{2n+\ell(\pi)}} \sum_{\chi} N^{\chi} \sum_{f\in\mathcal{F}(\pi,\chi)} \frac{(-1)^{\ell(\rho)}}{z_\rho}, \qquad (42)$$
where F(π, χ) is the set of factorizations of the kind we have described for given α and χ.
This proves our Theorem 2.
4
Orthogonal group
4.1
Truncations
Let O be a random matrix uniformly distributed in O(N + 1) with the appropriate normalized Haar measure. Let A be the M1 × M2 upper left corner of O, with N ≥ M1 + M2 and
M1 ≤ M2 . It is known [48, 54] that A, which satisfies AAT < 1M1 , becomes distributed
with probability density given by
$$P(A) = \frac{1}{Y_2}\,\det(1 - AA^T)^{N_0/2}, \qquad (43)$$
where
$$N_0 = N - M_1 - M_2 \qquad (44)$$
and Y2 is a normalization constant. Notice that we start with O(N + 1) and not O(N ).
The value of Y2 can be computed using the singular value decomposition A = W DV ,
where W and V are matrices from O(M1 ) and O(M2 ), respectively. Matrix D is real,
diagonal and non-negative. Let T = D 2 = diag(t1 , t2 , ...). Then [50, 51],
$$Y_2 = \int_{O(M_1)} dW \int_{O(M_2)} dV \int_0^1 \prod_{i=1}^{M_1} dt_i\, (1 - t_i)^{N_0/2}\, t_i^{(M_2 - M_1 - 1)/2}\, |\Delta(t)|. \qquad (45)$$
If we denote the angular integrals by $\int_{O(M_1)} dW \int_{O(M_2)} dV = V_2$, then we have again from Selberg's integral that
$$Y_2 = V_2 \prod_{j=1}^{M_1} \frac{\Gamma(j/2+1)\,\Gamma((M_2+1-j)/2)\,\Gamma((N-M_2-j)/2+1)}{\Gamma((N-j)/2+1)\,\Gamma(3/2)}. \qquad (46)$$
Consider now an even smaller subblock of $O$, which is contained in $A$. Namely, let $\widetilde{O}$ be the $N_1 \times N_2$ upper left corner of $O$, with $N_1 \le M_1$ and $N_2 \le M_2$. The average value of any function of matrix elements of $\widetilde{O}$ can be computed either by integrating over $O$ or over $A$. In particular, the quantity
$$\mathrm{Wg}^O_{N+1}(\beta) = \int_{O(N+1)} dO \prod_{k=1}^{n} \widetilde{O}_{k,j_k}\, \widetilde{O}_{k,j_{\hat{k}}}, \qquad (47)$$
where $k \le N_1, N_2$ and the $j$'s only satisfy some matching of cosettype $\beta$, can also be written as
$$\mathrm{Wg}^O_{N+1}(\beta) = \frac{1}{Y_2} \int_{AA^T < 1_{M_1}} dA\, \det(1 - AA^T)^{N_0/2} \prod_{k=1}^{n} A_{k,j_k}\, A_{k,j_{\hat{k}}}. \qquad (48)$$
Notice that the right-hand-side of equation (48) is actually independent of M1 and M2 .
4.2
Sum over maps
Analogously to the unitary case, we have
$$\mathrm{Wg}^O_{N+1}(\beta) = \frac{1}{Y_2} \int d_G(A)\, e^{-\frac{N_0}{2}\sum_{q=2}^{\infty}\frac{1}{q}\mathrm{Tr}(AA^T)^q} \prod_{k=1}^{n} A_{k,j_k}\, A_{k,j_{\hat{k}}}, \qquad (49)$$
where now $d_G(A) = e^{-\frac{N_0}{2}\mathrm{Tr}(AA^T)}$. The diagrammatical considerations proceed as previously, except that we use the Wick's rule of the real case and the resulting maps need not be
orientable. Also, a map now contributes $(-N_0/2)$ for each internal vertex and $1/N_0$ for each edge. This gives a total contribution which is proportional to
$$\left(-\frac{1}{2}\right)^{v} N_0^{\,v-E} = \frac{N_0^{\chi}}{N_0^{F+\ell(\beta)}}\left(-\frac{1}{2}\right)^{V}(-2)^{\ell(\beta)}, \qquad (50)$$
where v is the number of internal vertices, V = v + ℓ is the total number of vertices, E is
the number of edges and χ = F − E + V is the Euler characteristic, where F is the number
of faces. When we take M2 → 0, and then M1 → 0, we arrive at maps with no closed
boundary walks that avoid the marked vertices, having 2n faces, n of each color. We thus
arrive at the maps in the set N B(β, χ).
The Gaussian normalization constant is
$$Z_R = \int d_G(A) = V_2 \int_0^{\infty} \prod_{i=1}^{M_1} dt_i\, e^{-\frac{N_0}{2}t_i}\, t_i^{(M_2-M_1-1)/2} \prod_{i<j} |t_j - t_i| \qquad (51)$$
$$= V_2 \left(\frac{2}{N_0}\right)^{M_1 M_2/2} \prod_{j=1}^{M_1} \frac{\Gamma(1+j/2)\,\Gamma((M_2+1-j)/2)}{\Gamma(3/2)}, \qquad (52)$$
and we have
$$\lim_{M_2\to 0} \frac{Z_R}{Y_2} = \lim_{M_2\to 0}\left(\frac{2}{N_0}\right)^{M_1 M_2/2} \prod_{j=1}^{M_1} \frac{\Gamma((N+2-j)/2)}{\Gamma((N-M_2+2-j)/2)} = 1. \qquad (53)$$
Once again, the limit M1 → 0 is trivial. This reduces N0 to N . Taking into account the
contribution of the maps, already discussed, we arrive at our Theorem 3.
4.3
Sum over factorizations
We now label the maps in N B(β, χ) in order to relate them to permutations. We only need to
change slightly the labelling procedure we used for the maps in B(α, χ) in Section 3.3. First,
we replace the labels of the corners in dashed line by ‘hatted’ versions. Second, instead of
labelling corners, we now label half-edges, by rotating the previous labels counterclockwise.
This is shown in Figure 7 (where the hatted labels are enclosed by boxes, while the normal
ones are enclosed by circles).
The unhatted labels, read in anti-clockwise order around vertices, produce a permutation
Π which is standard. This can be written as Π = πρ, where π ∈ Sn has cycletype β and
the complement ρ acts on the set {n + 1, ..., E} where E = n + m is the number of edges.
As before, ρ has no fixed points, since all internal vertices of maps in N B(β, χ) have even
valence larger than 2.
A fixed-point free involution $\theta$ can be constructed from the labels that appear at the ends of each edge. Namely, in the examples shown in Figure 7 it is given by $\theta = (1\,\hat{3})(\hat{1}\,3)(2\,\hat{4})(\hat{2}\,4)$ for a), $\theta = (1\,\hat{3})(\hat{1}\,3)(2\,5)(\hat{2}\,4)(\hat{4}\,6)(\hat{5}\,\hat{6})$ for b) and $\theta = (1\,\hat{3})(\hat{1}\,3)(2\,\hat{5})(\hat{2}\,4)(\hat{4}\,5)$ for c).
We also define the hatted version of any permutation $\pi$ by the equation $\hat{\pi}(a) = \widehat{\pi^{-1}(\hat{a})}$, assuming $\hat{\hat{a}} = a$. This is clearly an involution. Permutations that are invariant under this operation are called 'palindromic', such as $(1\,2\,\hat{2}\,\hat{1})$ or $(1\,2)(\hat{2}\,\hat{1})$. Any permutation that can be written as $\pi\hat{\pi}$ where $\pi$ is another permutation is automatically palindromic.
Define two special fixed-point free involutions,
$$p_1 = (1\,\hat{1})(2\,\hat{2})\cdots, \qquad (54)$$
and
$$p_2 = (\hat{1}\,2)(\hat{2}\,3)\cdots(\hat{\beta}_1\,1)(\widehat{\beta_1+1}\;\,\beta_1+2)\cdots(\widehat{\beta_1+\beta_2}\;\,\beta_1+1)\cdots. \qquad (55)$$
Figure 7: Labelling some of the maps from Figures 1 and 2, in the way appropriate for the
orthogonal case.
Notice that the cycles of $p_1$ contain labels which delimit corners of dashed line, while the cycles of $p_2$ contain labels which delimit corners of solid line. Notice also that they factor the palindromic version of the vertex permutation, $p_2 p_1 = \Pi\widehat{\Pi}$.
By construction, the permutation $f_1 = \theta p_1$ contains every other label encountered along boundary walks around the faces delimited by boundaries in dashed line. For example, in Figure 7 it would be $f_1 = (13)(24)(\hat{4}\hat{2})(\hat{3}\hat{1})$ for a), $f_1 = (13)(2\,4\,6\,\hat{5})(5\,\hat{6}\,\hat{4}\,\hat{2})(\hat{3}\hat{1})$ for b) and $f_1 = (13)(245)(\hat{5}\hat{4}\hat{2})(\hat{3}\hat{1})$ for c). In particular, $f_1$ is always palindromic.
Conversely, permutation $f_2 = p_2\theta$ contains every other label encountered along boundary walks around the faces delimited by boundaries in solid line. In Figure 7 it would be $f_2 = (14)(23)(\hat{3}\hat{2})(\hat{4}\hat{1})$ for a), $f_2 = (14)(2\,\hat{6}\,6\,3)(\hat{3}\hat{2})(\hat{4}\,\hat{5}\,5\,\hat{1})$ for b) and $f_2 = (14)(2\,\hat{4}\,3)(\hat{3}\hat{2})(\hat{1}\,\hat{5}\,5)$ for c). Permutation $f_2$ need not be palindromic. Notice that $f_1$ and $f_2$ are factors for the palindromic version of the vertex permutation, $\Pi\widehat{\Pi} = f_2 f_1$.
For the maps in N B(β, χ) all boundary walks visit the marked vertices, which means that all cycles of $f_1$ and of $f_2$ must have exactly one element in the set $\{1, ..., n, \hat{1}, ..., \hat{n}\}$. Therefore, the permutations satisfy the conditions we listed in Section 1.3.2. When we label
Therefore, the permutations satisfy the conditions we listed in Section 1.3.2. When we label
the vertices of the map to produce a factorization, the same ambiguities arise as for the
unitary group, which are accounted for by division by the factor zρ . We have therefore
arrived at factorizations in N F(β, χ) and proved our Theorem 4.
Appendix - Other factorizations
4.4
Unitary case
A monotone factorization of π is a sequence (τ1 , ..., τk ) of transpositions τi = (si ti ), ti > si
and ti ≥ ti−1 , such that π = τ1 · · · τk . The number of transpositions, k, is the length of
the factorization. Let Mαk be the number of length k monotone factorizations of π, with
cycletype α. Using the theory of Jucys-Murphy elements, Matsumoto and Novak showed
that
∞
X
n+ℓ(α)
WgU
(α)
=
(−1)
Mαk N −n−k .
(56)
N
k=0
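For small $n$ the series (56) can be checked by brute force. The Python sketch below is illustrative code added by the editor (not from the original): it enumerates monotone factorizations of a target permutation by depth-first search up to a maximum length and sums the truncated series. The signs are taken as $(-1)^k$, which agrees with the prefactor $(-1)^{n+\ell(\alpha)}$ in (56) because $M_\alpha^k$ vanishes unless $k$ has the parity of $n - \ell(\alpha)$. For $\pi = (1\,2)$ in $S_2$ the truncation should approach the known value $\mathrm{Wg}^U_N((2)) = -1/(N(N^2-1))$.

def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations stored as tuples mapping i -> p[i]
    return tuple(p[q[i]] for i in range(len(p)))

def transposition(n, s, t):
    p = list(range(n))
    p[s], p[t] = p[t], p[s]
    return tuple(p)

def count_monotone(target, max_len):
    """Number of monotone factorizations target = tau_1 ... tau_k, k <= max_len,
    with tau_i = (s_i t_i), t_i > s_i and weakly increasing t_i; returns {k: count}."""
    n = len(target)
    counts = {}

    def dfs(partial, last_max, length):
        if partial == target:
            counts[length] = counts.get(length, 0) + 1
        if length == max_len:
            return
        for t in range(last_max, n):      # keep the maxima weakly increasing
            for s in range(t):
                dfs(compose(partial, transposition(n, s, t)), t, length + 1)

    dfs(tuple(range(n)), 1, 0)
    return counts

# Truncated series (56) for pi = (1 2) in S_2 (indices 0 and 1 swapped)
N = 5.0
counts = count_monotone((1, 0), max_len=9)
approx = sum((-1) ** k * m * N ** (-2 - k) for k, m in counts.items())
print(approx, -1.0 / (N * (N * N - 1)))   # truncation vs. exact value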
A proper factorization of $\pi$ is a sequence of permutations $(\tau_1, ..., \tau_k)$, in which no one is the identity, such that $\pi = \tau_1\cdots\tau_k$. The depth of a proper factorization is the sum of the ranks of the factors, $\sum_{j=1}^{k} r(\tau_j)$. Let $P_\alpha^{k,d}$ be the number of proper factorizations of $\pi$, with cycletype $\alpha$, having length $k$
and depth d. It is known that [55]
$$P_\alpha^{k,d} = \frac{1}{n!} \sum_{\lambda\vdash n} \chi_\lambda(1^n)\,\chi_\lambda(\alpha) \sum_{\mu_1\cdots\mu_k} \prod_{j=1}^{k} |C_{\mu_j}|\,\frac{\chi_\lambda(\mu_j)}{\chi_\lambda(1^n)}\; \delta_{\sum_j r(\mu_j),\, d}, \qquad (57)$$
where all partitions $\mu$ are different from $1^n$. Starting from the character expansion of the Weingarten function,
$$\mathrm{Wg}^U_N(\alpha) = \frac{1}{n!^2} \sum_{\lambda\vdash n} \frac{\chi_\lambda(1^n)^2}{s_\lambda(1^N)}\,\chi_\lambda(\alpha), \qquad (58)$$
Collins used [6] the Schur function expansion
$$s_\lambda(1^N) = \frac{1}{n!}\sum_{\mu\vdash n} |C_\mu|\,\chi_\lambda(\mu)\,N^{\ell(\mu)} = \frac{\chi_\lambda(1^n)N^n}{n!}\left(1 + \sum_{\mu\vdash n,\,\mu\neq 1^n} |C_\mu|\,\frac{\chi_\lambda(\mu)}{\chi_\lambda(1^n)}\, N^{-r(\mu)}\right) \qquad (59)$$
to arrive at the expression
$$\mathrm{Wg}^U_N(\alpha) = \frac{1}{n!\,N^n}\sum_{\lambda\vdash n}\chi_\lambda(1^n)\,\chi_\lambda(\alpha)\sum_{k=1}^{\infty}(-1)^k \sum_{\mu_1\cdots\mu_k}\prod_{j=1}^{k}|C_{\mu_j}|\,\frac{\chi_\lambda(\mu_j)}{\chi_\lambda(1^n)}\,N^{-r(\mu_j)}. \qquad (60)$$
Comparing with (57) one concludes that
$$\mathrm{Wg}^U_N(\alpha) = \sum_{d=0}^{\infty}\left(\sum_{k=1}^{d}(-1)^k P_\alpha^{k,d}\right) N^{-n-d}. \qquad (61)$$
A cycle factorization of $\pi$ is a sequence of permutations $(\tau_1, ..., \tau_k)$, in which all factors have only one cycle besides singletons, i.e. their cycletypes are hook partitions. Inequivalent cycle factorizations are equivalence classes of cycle factorizations, two factorizations being equivalent if they differ by the swapping of adjacent commuting factors. Berkolaiko and Irving show [25] that the numbers of such factorizations of $\pi$, with cycletype $\alpha$, having length $k$ and depth $d$, denoted by us $I_\alpha^{k,d}$, satisfy
$$\sum_k (-1)^k I_\alpha^{k,d} = \sum_k (-1)^k P_\alpha^{k,d} = (-1)^{n+\ell(\alpha)} M_\alpha^d. \qquad (62)$$
These results are indexed by depth, but one can use the Euler characteristic instead, by resorting to the equality $\chi = n + \ell(\alpha) - d$.
4.5
Orthogonal case
Consider again permutations acting on the set $[n]\cup\widehat{[n]}$. Let $h$ be the operation of 'forgetting' the hat, i.e. $h(a) = h(\hat{a}) = a$ for all $a \in [n]$.
Matsumoto defined the following analogue of monotone factorizations [31]. Let $m$ be a matching and let $(\tau_1, ..., \tau_k)$ be a sequence of transpositions $\tau_i = (s_i\,t_i)$, in which all $t_i \in [n]$ with $t_i \ge t_{i-1}$ and $t_i > h(s_i)$, such that $m = \tau_1\cdots\tau_k(t)$, where $t$ is the trivial matching. Let $\widetilde{M}_\beta^k$ be the number of length $k$ such factorizations of some $m$ with cosettype $\beta$. Then,
$$\mathrm{Wg}^O_N(\beta) = \sum_{k=0}^{\infty} (-1)^k\, \widetilde{M}_\beta^k\, N^{-n-k}. \qquad (63)$$
The analogue of proper factorizations in this context is, for a permutation $\sigma$ with cosettype $\beta$, a sequence of permutations $(\tau_1, ..., \tau_k)$, no one having the same cosettype as the identity, such that $\sigma = \tau_1\cdots\tau_k$. Let $\widetilde{P}_\beta^{k,d}$ be the number of such factorizations having length $k$ and depth $d$. We know from [28, 29, 30] that (actually these works only consider $k = 2$, but the extension to higher values of $k$ is straightforward)
$$\widetilde{P}_\beta^{k,d} = \frac{1}{(2n)!} \sum_{\lambda\vdash n} \chi_{2\lambda}(1^{2n})\,\omega_\lambda(\beta) \sum_{\mu_1\cdots\mu_k} \prod_{j=1}^{k} |K_{\mu_j}|\,\omega_\lambda(\mu_j)\; \delta_{\sum_j r(\mu_j),\, d}, \qquad (64)$$
where
$$\omega_\lambda(\tau) = \frac{1}{2^n n!} \sum_{\xi\in H_n} \chi_{2\lambda}(\tau\xi) \qquad (65)$$
are the zonal spherical functions of the Gelfand pair $(S_{2n}, H_n)$ (they depend only on the
cosettype of τ ; see [38]).
The relation between the above factorizations and orthogonal Weingarten functions comes as follows. The character-theoretic expression for the orthogonal Weingarten function is [8]
$$\mathrm{Wg}^O_N(\tau) = \frac{2^n n!}{(2n)!} \sum_{\lambda\vdash n} \frac{\chi_{2\lambda}(1^{2n})}{Z_\lambda(1^N)}\,\omega_\lambda(\tau), \qquad (66)$$
where $Z_\lambda$ are zonal polynomials. Following the same procedure used for the unitary group, we expand $Z_\lambda(1^N) = \frac{1}{2^n n!}\sum_{\mu\vdash n}|K_\mu|\,\omega_\lambda(\mu)\,N^{\ell(\mu)}$ to arrive at
$$\mathrm{Wg}^O_N(\tau) = \frac{2^n n!}{(2n)!} \sum_{\lambda\vdash n} \frac{\chi_{2\lambda}(1^{2n})\,\omega_\lambda(\tau)}{N^n} \sum_{k=1}^{\infty} (-1)^k \sum_{\mu_1\cdots\mu_k} \prod_{j=1}^{k} \frac{|K_{\mu_j}|}{2^n n!}\,\omega_\lambda(\mu_j)\, N^{-r(\mu_j)}. \qquad (67)$$
Comparing with (64), we see that
$$\mathrm{Wg}^O_N(\tau) = \sum_{d=0}^{\infty}\left(\sum_{k=1}^{d} \frac{(-1)^k}{(2^n n!)^{k-1}}\,\widetilde{P}_\beta^{k,d}\right) N^{-n-d}. \qquad (68)$$
Berkolaiko and Kuipers have provided a combinatorial description of the coefficients in the $1/N$ expansion of the function $\mathrm{Wg}^O_{N+1}$ [32] (they actually worked with the so-called Circular Orthogonal Ensemble of unitary symmetric matrices, but the Weingarten function of that ensemble coincides [13] with $\mathrm{Wg}^O_{N+1}$). A palindromic monotone factorization is a sequence $(\tau_1, ..., \tau_k)$ of transpositions $\tau_i = (s_i\,t_i)$, with $t_i > s_i$ and $t_i \ge t_{i-1}$, such that $\pi\hat{\pi} = \tau_1\cdots\tau_k\hat{\tau}_k\cdots\hat{\tau}_1$. Let $\widehat{M}_\beta^k$ be the number of length $k$ palindromic monotone factorizations of $\pi\hat{\pi}$, with $\pi$ a permutation of cycletype $\beta$. Then,
$$\mathrm{Wg}^O_{N+1}(\beta) = \sum_{k=0}^{\infty} (-1)^k\, \widehat{M}_\beta^k\, N^{-n-k}. \qquad (69)$$
An appropriate analogue of inequivalent cycle factorizations is currently missing, but
we conjecture that, whatever they are, their counting function will be related to coefficients
in 1/N expansions of orthogonal Weingarten functions.
References
[1] D. Weingarten, Asymptotic behavior of group integrals in the limit of infinite rank. J.
Math. Phys. 19, 999 (1978).
[2] S. Samuel, U (N ) integrals, 1/N , and de Wit–’t Hooft anomalies. J. Math. Phys. 21,
2695 (1980).
[3] P. A. Mello, Averages on the unitary group and applications to the problem of disordered conductors. J. Phys. A: Math. Gen. 23 4061 (1990).
[4] P. W. Brouwer and C.W.J. Beenakker, Diagrammatic method of integration over the
unitary group, with applications to quantum transport in mesoscopic systems. J. Math.
Phys. 37, 4904 (1996).
[5] M. Degli Esposti and A. Knauf, On the form factor for the unitary group. J. Math.
Phys. 45, 4957 (2004).
[6] B. Collins, Moments and cumulants of polynomial random variables on unitary groups,
the Itzykson-Zuber integral, and free probability. Int. Math. Res. Not. 17, 953 (2003).
[7] B. Collins and P. Śniady, Integration with respect to the Haar measure on unitary,
orthogonal and symplectic group. Commun. Math. Phys. 264, 773 (2006).
[8] B. Collins and S. Matsumoto, On some properties of orthogonal Weingarten functions.
J. Math. Phys. 50, 113516 (2009).
[9] J.-B. Zuber, The large-N limit of matrix integrals over the orthogonal group, J. Phys.
A: Math. Theor. 41, 382001 (2008).
[10] T. Banica, B. Collins and J.-M. Schlenker, On polynomial integrals over the orthogonal
group. J. Combinat. Theory A 118, 78 (2011).
[11] S. Matsumoto and J. Novak, Jucys-Murphy elements and unitary matrix integrals. Int.
Math. Research Not. 2, 362 (2013).
[12] P. Zinn-Justin, Jucys-Murphy elements and Weingarten matrices. Lett. Math. Phys.
91, 119 (2010).
[13] S. Matsumoto, Weingarten calculus for matrix ensembles associated with compact
symmetric spaces. Random Matrices: Theory and Applications 2, 1350001 (2013).
[14] A.J. Scott, Optimizing quantum process tomography with unitary 2-designs. J. Phys.
A: Math. Theor. 41, 055308 (2008).
[15] M. Žnidarič, C. Pineda and I. Garcı́a-Mata, Non-Markovian behavior of small and large
complex quantum systems. Phys. Rev. Lett. 107, 080404 (2011).
[16] M. Cramer, Thermalization under randomized local Hamiltonians. New J. Phys. 14,
053051 (2012).
[17] Vinayak and M. Žnidarič, Subsystem’s dynamics under random Hamiltonian evolution.
J. Phys. A: Math. Theor. 45, 125204 (2012).
[18] M. Novaes, A semiclassical matrix model for quantum chaotic transport. J. Phys. A:
Math. Theor. 46, 502002 (2013).
[19] M. Novaes, Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry. Annals of Physics 361, 51 (2015).
[20] P. Zinn-Justin and J.-B. Zuber, On some integrals over the U (N ) unitary group and
their large N limit, J. Phys. A: Math. Gen. 36, 3173 (2003).
[21] B. Collins, A. Guionnet and E. Maurel-Segala, Asymptotics of unitary and orthogonal
matrix integrals, Advances in Mathematics 222, 172 (2009).
[22] M. Bousquet-Mélou, G. Schaeffer, Enumeration of planar constellations. Adv. in Appl.
Math. 24, 337 (2000).
[23] J. Irving, Minimal transitive factorizations of permutations into cycles. Canad. J. Math.
61, 1092 (2009).
[24] J. Bouttier, Matrix integrals and enumeration of maps. Chapter 26 in The Oxford
Handbook of Random Matrix Theory (Oxford, 2011), G. Akemann, J. Baik and P. Di
Francesco (Editors).
[25] G. Berkolaiko and J. Irving, Inequivalent factorizations of permutations. arXiv:1405.5255v2.
[26] M. Bóna and B. Pittel, On the cycle structure of the product of random maximal
cycles, arXiv:1601.00319v1.
[27] O. Bernardi, A. Morales, R. Stanley and R. Du, Separation probabilities for products
of permutations, Combinatorics, Probability and Computing 23, 201 (2014).
[28] A.H. Morales and E.A. Vassilieva, Bijective evaluation of the connection coefficients of
the double coset algebra, arXiv:1011.5001v1.
[29] P.J. Hanlon, R.P. Stanley and J.R. Stembridge, Some combinatorial aspects of the
spectra of normally distributed random matrices. Contemporary Mathematics 138,
151 (1992).
[30] I.P. Goulden and D.M. Jackson, Maps in locally orientable surfaces, the double coset
algebra, and zonal polynomials. Can. J. Math. 48, 569 (1996).
[31] S. Matsumoto, Jucys–Murphy elements, orthogonal matrix integrals, and Jack measures. Ramanujan J. 26, 69 (2011).
[32] G. Berkolaiko and J. Kuipers, Combinatorial theory of the semiclassical evaluation of
transport moments I: Equivalence with the random matrix approach. J. Math. Phys.
54, 112103 (2013).
[33] S. Müller, S. Heusler, P. Braun and F. Haake, Semiclassical approach to chaotic quantum transport. New Journal of Physics 9, 12 (2007).
[34] S. Müller, S. Heusler, P. Braun, F. Haake and A. Altland, Periodic-orbit theory of
universality in quantum chaos. Phys. Rev. E 72, 046207 (2005).
[35] G. Berkolaiko and J. Kuipers, Combinatorial theory of the semiclassical evaluation
of transport moments II: Algorithmic approach for moment generating functions. J.
Math. Phys. 54, 123505 (2013).
[36] M. Novaes, Semiclassical approach to universality in quantum chaotic transport. Europhys. Lett. 98, 20006 (2012).
[37] M. Novaes, Combinatorial problems in the semiclassical approach to quantum chaotic
transport. J. Phys. A: Math. Theor. 46, 095101 (2013).
[38] I.G. Macdonald, Symmetric Functions and Hall Polynomials, 2nd edn. (Oxford University Press, Oxford, 1995).
[39] J. Ginibre, Statistical Ensembles of Complex, Quaternion, and Real Matrices. J. Math.
Phys. 6, 440 (1965).
[40] T.R. Morris, Chequered surfaces and complex matrices. Nuclear Physics B 356 (1991)
703.
[41] G. ’t Hooft, A planar diagram theory for strong interactions. Nucl. Phys. B 72, 461
(1974).
[42] D. Bessis, C. Itzykson and J.B. Zuber, Quantum field theory techniques in graphical
enumeration. Adv. Appl. Math. 1, 109 (1980).
[43] P. Di Francesco, Rectangular matrix models and combinatorics of colored graphs. Nuclear Physics B 648, 461 (2003).
[44] M. Novaes, Statistics of time delay and scattering correlation functions in chaotic
systems. II. Semiclassical Approximation. J. Math. Phys. 56, 062109 (2015).
[45] W.A. Friedman and P.A. Mello, Marginal distribution of an arbitrary square submatrix
of the S-matrix for Dyson’s measure. J. Phys. A: Math. Gen. 18, 425 (1985).
[46] K. Życzkowski and H.-J. Sommers, Truncations of random unitary matrices. J. Phys.
A: Math. Gen. 33, 2045 (2000).
[47] Y.A. Neretin, Hua-type integrals over unitary groups and over projective limits of
unitary groups. Duke Math. J. 114, 239 (2002).
[48] P.J. Forrester, Quantum conductance problems and the Jacobi ensemble. J. Phys. A:
Math. Gen. 39, 6861 (2006).
[49] Y.V. Fyodorov and B.A. Khoruzhenko, A few remarks on colour-flavour transformations, truncations of random unitary matrices, Berezin reproducing kernels and Selberg-type integrals. J. Phys. A: Math. Theor. 40, 669 (2007).
[50] J. Shen, On the singular values of Gaussian random matrices. Linear Algebra Appl.
326, 1 (2001).
[51] A. Edelman and N.R. Rao, Random matrix theory. Acta Numerica 14, 233 (2005).
[52] M.L. Mehta, Random Matrices, Chapter 17 (Academic Press, 2004).
[53] P.J. Forrester and S.O. Warnaar, The importance of the Selberg integral. Bull. Amer.
Math. Soc. 45, 489 (2008).
[54] B.A. Khoruzhenko, H.-J. Sommers and K. Życzkowski, Truncations of random orthogonal matrices. Phys. Rev. E 82, 040106(R) (2010).
[55] R.P. Stanley, Enumerative Combinatorics, vol. 2, Cambridge Univ. Press, Cambridge,
1999.
| 4 |
A REMARK ON TORSION GROWTH IN HOMOLOGY
AND VOLUME OF 3-MANIFOLDS
arXiv:1802.09244v1 [math.GR] 26 Feb 2018
HOLGER KAMMEYER
Abstract. We show that Lück’s conjecture on torsion growth in homology implies that two 3-manifolds have equal volume if the fundamental
groups have the same set of finite quotients.
The purpose of this note is to relate two well-known open problems which
both deal with a residually finite fundamental group Γ of an odd-dimensional
aspherical manifold. The first one [11, Conjecture 1.12(2)] predicts that the
ℓ2-torsion ρ(2)(Γ) determines the exponential rate at which torsion in middle-degree homology grows along a chain of finite index normal subgroups.
Conjecture A. Let M be an aspherical closed manifold of dimension 2d + 1. Suppose that Γ = π1 M is residually finite and let Γ = Γ0 ≥ Γ1 ≥ · · · be any chain of finite index normal subgroups of Γ with $\bigcap_{n=0}^{\infty}\Gamma_n = \{1\}$. Then
$$\lim_{n\to\infty} \frac{\log|H_d(\Gamma_n)_{\mathrm{tors}}|}{[\Gamma : \Gamma_n]} = (-1)^d\,\rho^{(2)}(\Gamma).$$
The term |Hd (Γn )tors | denotes the order of the torsion subgroup of Hd (Γn ).
The ℓ2 -torsion ρ(2) (Γ) is the ℓ2 -counterpart to Reidemeister torsion as surveyed in [12] and [7]. The second conjecture says that volume of 3-manifolds
can be recovered from the finite quotients of the fundamental group.
Conjecture B. Let Γ and Λ be infinite fundamental groups of connected, closed, orientable, irreducible 3-manifolds and suppose that $\widehat{\Gamma} \cong \widehat{\Lambda}$. Then vol(Γ) = vol(Λ).
Here the profinite completion $\widehat{\Gamma}$ of Γ is the projective limit over all finite
quotients of Γ. Two groups have isomorphic profinite completions if and only
if they have the same set of finite quotients [18, Corollary 3.2.8]. If Γ = π1 M
for a 3-manifold M with the stated properties, then Thurston geometrization
applies to M : there is a minimal choice of finitely many disjointly embedded
incompressible tori in M , unique up to isotopy, which cut M into pieces such
that each piece carries one out of eight geometries. The sum of the volumes
of the hyperbolic pieces gives the well-defined quantity vol(Γ). Conjecture B
is often stated as a question [2, Question 3.18]. But we dare to promote it
to a conjecture in view of the following result.
Theorem 1. Conjecture A implies Conjecture B.
The theorem seems to be folklore among the experts in the field but I
could not find a proof in the literature so that this note is meant as a service
to the community.
2010 Mathematics Subject Classification. 20E18, 57M27.
The contrapositive of Theorem 1 says that constructing two profinitely isomorphic 3-manifold groups with differing covolume would disprove Conjecture A. Funar [4] and Hempel [6] constructed examples of closed 3-manifolds
with non-isomorphic but profinitely isomorphic fundamental groups. These
examples carry Sol and H2 ×R geometry, respectively, and thus all have zero
volume by definition. Wilkes [19] showed that Hempel’s examples are the
only ones among Seifert-fiber spaces. It seems to be open whether there exist
such examples with H3 -geometry. As a first step in the negative direction,
Bridson and Reid [3] showed that the figure eight knot group is determined
among 3-manifold groups by the profinite completion.
The paper at hand is divided into two sections. Section 1 presents the
proof of Theorem 1. As a complement, Section 2 discusses how the related
asymptotic volume conjecture and the Bergeron–Venkatesh conjecture fit into
the picture.
I wish to thank S. Kionke and J. Raimbault for helpful discussions during
the junior trimester program “Topology” at the Hausdorff Research Institute
for Mathematics in Bonn.
1. Proof of Theorem 1
For the moment, let Γ and Λ be any two finitely generated, residually
finite groups. To prepare the proof of Theorem 1, we collect a couple of
propositions from the survey article [17] and include more detailed proofs
for the sake of a self-contained treatment. We first recall that the open
b are precisely the subgroups of finite index. One direction is
subgroups of Γ
b
easy: Γ is compact and the cosets of an open subgroup form a disjoint open
cover. The converse is a deep theorem due to Nikolov and Segal [15] that
crucially relies on the assumption that Γ is finitely generated. The proof
moreover invokes the classification of finite simple groups.
The assumption that Γ is residually finite says precisely that the canonical
map $\Gamma \to \widehat{\Gamma}$ is an embedding. If Q is a finite group, the universal property of $\widehat{\Gamma}$ says that the restriction map $\mathrm{Hom}(\widehat{\Gamma}, Q) \to \mathrm{Hom}(\Gamma, Q)$ is a surjection. By the above, the kernel of any homomorphism $\varphi\colon \widehat{\Gamma} \to Q$ is open, which implies that $\varphi$ is continuous and is thus determined by the values on the dense subset $\Gamma \subset \widehat{\Gamma}$. Thus $\mathrm{Hom}(\widehat{\Gamma}, Q) \to \mathrm{Hom}(\Gamma, Q)$ is in fact a bijection which clearly restricts to a bijection $\mathrm{Epi}(\widehat{\Gamma}, Q) \to \mathrm{Epi}(\Gamma, Q)$ of surjective homomorphisms.
This has the following consequence.
Proposition 2. If Λ embeds densely into $\widehat{\Gamma}$, then there is an epimorphism $H_1(\Lambda) \to H_1(\Gamma)$.
Proof. Let p be a prime number which does not divide the group order
|H1 (Λ)tors | and let us set r = dimQ H1 (Γ; Q). It is apparent that we have
an epimorphism $\Gamma \to (\mathbb{Z}/p\mathbb{Z})^r \oplus H_1(\Gamma)_{\mathrm{tors}}$. By the above remarks, this epimorphism extends uniquely to an epimorphism $\widehat{\Gamma} \to (\mathbb{Z}/p\mathbb{Z})^r \oplus H_1(\Gamma)_{\mathrm{tors}}$. Since Λ embeds densely into $\widehat{\Gamma}$, the latter map restricts to an epimorphism $\Lambda \to (\mathbb{Z}/p\mathbb{Z})^r \oplus H_1(\Gamma)_{\mathrm{tors}}$. This epimorphism must lift to an epimorphism $\Lambda \to \mathbb{Z}^r \oplus H_1(\Gamma)_{\mathrm{tors}} \cong H_1(\Gamma)$ because p is coprime to $|H_1(\Lambda)_{\mathrm{tors}}|$. Of course this last epimorphism factors through the abelianization $H_1(\Lambda)$.
Corollary 3. The abelianization is a profinite invariant: if $\widehat{\Gamma} \cong \widehat{\Lambda}$, then $H_1(\Gamma) \cong H_1(\Lambda)$.
Proof. Since we have surjections in both directions the groups H1 (Γ) and
H1 (Λ) have the same free abelian rank. Thus either surjection restricts to
an isomorphism of the free parts and thus induces a surjection of the finite
torsion quotients—which then must be a bijection.
Let us now endow Γ with the subspace topology of $\widehat{\Gamma}$, called the profinite topology of Γ. For the open subgroups of Γ we have the same situation as we observed for $\widehat{\Gamma}$.
Proposition 4. A subgroup H ≤ Γ is open in the profinite topology if and
only if H has finite index in Γ.
Proof. Recall that $\widehat{\Gamma}$ carries the coarsest topology under which the projections $\widehat{\Gamma} \to \Gamma/\Gamma_i$ for finite index normal subgroups $\Gamma_i \trianglelefteq \Gamma$ are continuous. Since the compositions $\Gamma \to \widehat{\Gamma} \to \Gamma/\Gamma_i$ are the canonical projections, it follows that a subbase for the subspace topology of $\Gamma \subset \widehat{\Gamma}$ is given by the cosets of finite index normal subgroups of Γ.
If H has finite index in Γ, then so does the normal core $N = \bigcap_{g\in\Gamma} gHg^{-1}$ because N is precisely the kernel of the permutation representation of Γ on the homogeneous set Γ/H defined by left translation. Thus $H = \bigcup_{h\in H} hN$ is open. Conversely, let H ≤ Γ be open. Then H is a union of finite intersections of finite index normal subgroups of Γ. In particular H contains a finite index subgroup, whence has finite index itself.
Proposition 5. Taking closure $H \mapsto \overline{H}$ in $\widehat{\Gamma}$ defines a 1-1–correspondence from the open (or finite index) subgroups of Γ to the open (or finite index) subgroups of $\widehat{\Gamma}$. The inverse is given by intersection $\overline{H} \mapsto \overline{H} \cap \Gamma$ with Γ. This correspondence preserves the index, sends a normal subgroup $N \trianglelefteq \Gamma$ to a normal subgroup $\overline{N} \trianglelefteq \widehat{\Gamma}$, and in the latter case we have $\widehat{\Gamma}/\overline{N} \cong \Gamma/N$.
The proof is given in [18, Prop. 3.2.2, p. 84]. Here is an easy consequence.
Corollary 6. For $H_1, H_2 \le \Gamma$ of finite index we have $\overline{H_1 \cap H_2} = \overline{H_1} \cap \overline{H_2}$.
Proof. By the proposition $\overline{H_1} \cap \overline{H_2}$ has finite index in $\widehat{\Gamma}$ and we get
$$(\overline{H_1} \cap \overline{H_2}) \cap \Gamma = (\overline{H_1} \cap \Gamma) \cap (\overline{H_2} \cap \Gamma) = H_1 \cap H_2.$$
Applying the proposition again yields $\overline{H_1} \cap \overline{H_2} = \overline{H_1 \cap H_2}$.
Note that for a finitely generated, residually finite group Γ there is a
canonical choice of a chain
Γ = M1 ≥ M2 ≥ M3 ≥ · · ·
of finite index normal subgroups $M_n \trianglelefteq \Gamma$ satisfying $\bigcap_{n=1}^{\infty} M_n = \{1\}$. Simply define $M_n$ to be the intersection of the (finitely many!) normal subgroups of index at most n. By the last two results, $\overline{M_n}$ is the intersection of all normal subgroups of $\widehat{\Gamma}$ with index at most n.
Proposition 7. The intersection $\bigcap_{n=1}^{\infty} \overline{M_n}$ is trivial.
Proof. Let $g \in \bigcap_{n=1}^{\infty} \overline{M_n} \subset \widehat{\Gamma}$. Since Γ is finitely generated, it has only countably many subgroups of finite index. Therefore the description of the topology of $\widehat{\Gamma}$ given above shows that $\widehat{\Gamma}$ is second and thus first countable. Hence we can pick a sequence $(g_i)$ from the dense subset $\Gamma \subset \widehat{\Gamma}$ with $\lim_{i\to\infty} g_i = g$. Let $p_n\colon \Gamma \to \Gamma/M_n$ and $\widehat{p}_n\colon \widehat{\Gamma} \to \widehat{\Gamma}/\overline{M_n}$ denote the canonical projections. Since $\widehat{p}_n$ is continuous, we have
$$\overline{M_n} = \widehat{p}_n(g) = \lim_{i\to\infty} \widehat{p}_n(g_i)$$
and hence $\lim_{i\to\infty} p_n(g_i) = M_n \in \Gamma/M_n$ because $\widehat{\Gamma}/\overline{M_n} \cong \Gamma/M_n$ by Proposition 5. As $\Gamma/M_n$ is discrete, the sequence $p_n(g_i)$ is eventually constant. This means that for all n ≥ 1 there is N ≥ 1 such that for all i ≥ N we have $p_n(g_i) = M_n$, or equivalently $g_i \in M_n$. But the open sets $M_n$ form a neighborhood basis of $1 \in \Gamma$ as follows from the description of the profinite topology of Γ given in the proof of Proposition 4. So the last statement gives $\lim_{i\to\infty} g_i = 1$. Since $\widehat{\Gamma}$ (and hence Γ) is Hausdorff, we conclude g = 1.
It follows that $\widehat{\Gamma}$ is residually finite as an abstract group. Before we give
the proof of Theorem 1, we put down one more observation. If H ≤ Γ is any
subgroup, then the closure $\overline{H}$ in $\widehat{\Gamma}$ is a profinite group so that the universal property of $\widehat{H}$ gives a canonical homomorphism $\eta\colon \widehat{H} \to \overline{H}$ which restricts
to the identity on H. This is always an epimorphism because the image is
dense, as it contains H, and closed because it is compact and H is Hausdorff.
However, in general we cannot expect that η is injective, not even if H is
finitely generated. Nevertheless:
Proposition 8. If H ≤ Γ has finite index, then the canonical map $\eta\colon \widehat{H} \to \overline{H}$ is an isomorphism.
Proof. Let h ∈ ker η. The group H is finitely generated because it is a finite
index subgroup of Γ. As above we conclude that $\widehat{H}$ is second and hence first countable. Since H lies densely in $\widehat{H}$, we can thus pick a sequence of elements $h_i \in H$ such that $\lim_{i\to\infty} h_i = h$. By continuity of η, we obtain $\lim_{i\to\infty} \eta(h_i) = \eta(h) = 1$ and thus $\lim_{i\to\infty} h_i = 1$ in the topology of $\overline{H}$. A neighborhood basis of $1 \in \overline{H}$ is given by the sets $\overline{M_n} \cap \overline{H}$ where $\overline{M_n}$ are the finite index normal subgroups of $\widehat{\Gamma}$ from above. It follows that for all n ≥ 1
there exists N ≥ 1 such that for all i ≥ N we have hi ∈ Mn ∩H. Since H has
finite index in Γ, it follows that any finite index normal subgroup K E H
has also finite index as a subgroup of Γ. Thus there exists n ≥ 1 such that
Mn lies in the normal core of K as a subgroup of Γ. Hence for all K E H
of finite index there exists N ≥ 1 such that for all i ≥ N we have hi ∈ K.
But the finite index normal subgroups K E H form a neighborhood basis
of 1 ∈ H in the profinite topology of H. Hence we have limi→∞ hi = 1 in
the topology of $\widehat{H}$. Since $\widehat{H}$ is Hausdorff, we conclude h = 1.
Proof of Theorem 1. Note that Γ and Λ are finitely generated and residually
finite, as a consequence of geometrization [5]. We fix an isomorphism $\widehat{\Gamma} \cong \widehat{\Lambda}$.
= Λ.
Again, let Mn ≤ Γ be the intersection of all normal subgroups of Γ of
index at most n. By Proposition 5 it follows that Ln = Λ ∩ Mn is the
intersection of all normal subgroups of Λ of index at most n and [Γ : Mn ] =
$[\Lambda : L_n]$. By Proposition 7 we have $\bigcap_n \overline{M_n} = \{1\}$ so that $\bigcap_n L_n = \{1\}$. From Proposition 8 we get $\widehat{M_n} \cong \widehat{L_n}$ so that Corollary 3 implies $|H_1(M_n)_{\mathrm{tors}}| = |H_1(L_n)_{\mathrm{tors}}|$. A theorem of Lück and Schick [13, Theorem 0.7] conjectured in Lott and Lück [9, Conjecture 7.7] shows that $\rho^{(2)}(\Gamma) = -\operatorname{vol}(\Gamma)/6\pi$ and similarly for Λ, see also [12, Theorem 4.3, p. 216]. If Conjecture A holds true, this implies
$$\operatorname{vol}(\Gamma) = 6\pi \lim_{n\to\infty} \frac{\log|H_1(M_n)_{\mathrm{tors}}|}{[\Gamma : M_n]} = 6\pi \lim_{n\to\infty} \frac{\log|H_1(L_n)_{\mathrm{tors}}|}{[\Lambda : L_n]} = \operatorname{vol}(\Lambda).$$
2. Related conjectures
One can find companion conjectures to Conjecture A in the literature
which likewise predict an exponential rate of torsion growth in homology
proportional to volume. However, these conjectures restrict the aspherical
manifolds under consideration in one way or another. Specifically dealing
with 3-manifolds is Lê’s asymptotic volume conjecture.
Conjecture C. Let Γ be the fundamental group of a connected, orientable,
irreducible, compact 3-manifold whose boundary is either empty or a collection of tori. Then
$$\limsup_{\Gamma_n \to \{1\}} \frac{\log|H_1(\Gamma_n)_{\mathrm{tors}}|}{[\Gamma : \Gamma_n]} = \frac{\operatorname{vol}(\Gamma)}{6\pi}.$$
The conjecture appears in [8, Conjecture 1 (a)]. The volume vol(Γ) is defined by a geometric decomposition as before which also exists for toroidal
boundary. The lim sup on the left hand side is defined as the lowest upper
bound of all lim sups along sequences (Γn ) of (not necessarily nested!) finite index normal subgroups of Γ with lim supn Γn = {1}. Recall that by
definition
$$\limsup\nolimits_n \Gamma_n = \bigcap_{N \ge 0}\bigcup_{n \ge N} \Gamma_n$$
so that the condition lim supn Γn = {1} is actually equivalent to requiring
$$\lim_{n\to\infty} \operatorname{tr}_{\mathbb{C}[\Gamma/\Gamma_n]}(g\Gamma_n) = \operatorname{tr}_{\mathbb{C}[\Gamma]}(g) = \begin{cases} 1 & \text{if } g = e, \\ 0 & \text{otherwise,} \end{cases}$$
for all g ∈ Γ where the traces are the usual traces of group algebras given
by the unit coefficient.
Question 9. Does Conjecture C imply Conjecture B?
The proof of Theorem 1 does not immediately carry over to Question 9 as
lim supn Γn = {1} for some sequence (Γn ) does not imply lim sup Λn = {1}
for the groups Λn = Λ ∩ Γn . Here is an example.
Example 10. Let $\Gamma = \mathbb{Z}\times\mathbb{Z}$ with (nested) chain of subgroups $\Gamma_n = 2^n\mathbb{Z}\times 3^n\mathbb{Z}$. Clearly, we have $\widehat{\Gamma} = \widehat{\mathbb{Z}}\times\widehat{\mathbb{Z}}$. From the description $\widehat{\mathbb{Z}} \cong \prod_p \mathbb{Z}_p$ it is apparent that $[\widehat{\mathbb{Z}} : N\widehat{\mathbb{Z}}] = N$ for any $N \ge 1$. Since $N\mathbb{Z}$ is the only subgroup of index $N$ in $\mathbb{Z}$, Proposition 5 implies that $\overline{N\mathbb{Z}} = N\widehat{\mathbb{Z}}$. Thus we have $\overline{\Gamma_n} = 2^n\widehat{\mathbb{Z}} \times 3^n\widehat{\mathbb{Z}}$. It follows that
$$\bigcap_{n=1}^{\infty} \overline{\Gamma_n} \cong \{0\}\times\prod_{p>2}\mathbb{Z}_p \,\times\, \mathbb{Z}_2\times\{0\}\times\prod_{q>3}\mathbb{Z}_q \;\le\; \widehat{\mathbb{Z}}\times\widehat{\mathbb{Z}}.$$
So if we let $\Lambda \le \widehat{\Gamma}$ be the subgroup generated by the two elements
$$((0, 1, 1, \ldots), (1, 0, 0, 0, \ldots)) \quad\text{and}\quad ((1, 0, 0, 0, \ldots), (0, 1, 1, 1, \ldots))$$
in $\prod_p \mathbb{Z}_p \times \prod_p \mathbb{Z}_p \cong \widehat{\mathbb{Z}}\times\widehat{\mathbb{Z}}$, then clearly $\Lambda \cong \mathbb{Z}\times\mathbb{Z}$ is dense in $\widehat{\Gamma}$ so that the canonical map $\widehat{\Lambda} \to \overline{\Lambda} = \widehat{\Gamma}$ is a surjective homomorphism of isomorphic finitely generated profinite groups. Hence it must be an isomorphism [18, Proposition 2.5.2, p. 46]. However, we have $\bigcap_{n=1}^{\infty}\Lambda_n \neq \{0\}$ even though $\bigcap_{n=1}^{\infty}\Gamma_n = \{0\}$.
We remark that Lê has proven the inequality “≤” of Conjecture C, even
if the subgroups are not required to be normal. Another conjecture, which
leaves both the realm of 3-manifolds and of normal subgroups, is due to
Bergeron and Venkatesh [1, Conjecture 1.3]. It does however assume a
somewhat rigorous arithmetic setting. This is what we want to present
next.
Let G be a semisimple algebraic group, defined and anisotropic over Q.
Let Γ ≤ G(Q) be a congruence subgroup. This means that for some (and
then for any) Q-embedding ρ : G → GLn there is k ≥ 1 such that the group
ρ(Γ) contains the kernel of ρ(G) ∩ GLn (Z) → GLn (Z/kZ) as a subgroup of
finite index. Fix an algebraic representation of G on a finite-dimensional
Q-vector space W and let M ⊂ W be a Γ-invariant Z-lattice, which always
exists according to [16, Remark, p. 173]. Let Γ = Γ0 ≥ Γ1 ≥ · · · be a chain of congruence subgroups with $\bigcap_n \Gamma_n = \{1\}$. For a maximal compact
subgroup K of G = G(R), we denote by X = G/K the symmetric space
associated with G. Let g and k be the Lie algebras of G and K and let
δ(G) = rankC g ⊗ C − rankC k ⊗ C be the deficiency of G, sometimes also
known as the fundamental rank δ(X) of X.
Conjecture D. For each d ≥ 1 there is a constant cG,M,d ≥ 0 such that
$$\lim_{n\to\infty} \frac{\log|H_d(\Gamma_n; M)_{\mathrm{tors}}|}{[\Gamma : \Gamma_n]} = c_{G,M,d}\operatorname{vol}(\Gamma)$$
and cG,M,d > 0 if and only if δ(G) = 1 and dim X = 2d + 1.
In this case the volume vol(Γ) is the volume of the closed locally symmetric
space Γ\X which is defined by means of a Haar measure on G and as such
only unique up to scaling. But any rescaling of this measure would also
rescale the constant cG,M,d by the reciprocal value so that the product is
well-defined. To make sure that cG,M,d really only depends on G, M , and
d, we agree upon the following normalization of the Haar measure. The
Killing form on g restricts to a positive definite form on the subspace p
in the orthogonal Cartan decomposition g = k ⊕ p. Identifying p with the
tangent space TK X, we obtain a G-invariant metric on X by translation.
We require that the volume of Γ\X determined by Haar measure be equal
to the volume of Γ\X as Riemannian manifold.
To relate Conjecture D to Conjecture B, we need to restrict our attention to arithmetic hyperbolic 3-manifolds. These are quotients of hyperbolic
3-space H3 by arithmetic Kleinian groups. A Kleinian group is a discrete
subgroup $\Gamma \le \mathrm{PSL}(2,\mathbb{C}) \cong \mathrm{Isom}^+(\mathbb{H}^3)$ such that $\operatorname{vol}(\Gamma) = \operatorname{vol}(\Gamma\backslash\mathbb{H}^3) < \infty$.
A Kleinian group Γ ≤ PSL(2, C) is called arithmetic if there exists a semisimple linear algebraic Q-group H ≤ GLn and an epimorphism of Lie groups
φ : H(R)0 → PSL(2, C) with compact kernel such that Γ is commensurable
with φ(H(Z) ∩ H(R)0 ). Here H(R)0 denotes the unit component and two
subgroups of a third group are called commensurable if their intersection has
finite index in both subgroups. Note that we consider PSL(2, C) as a real
Lie group so that the complexified Lie algebra is sl(2, C) ⊕ sl(2, C) and hence
δ(PSL(2, C)) = 1. There is an alternative and equivalent approach to the
definition of arithmetic Kleinian groups via orders in quaternion algebras
over number fields [14].
Question 11. Let Γ and Λ be arithmetic Kleinian groups such that $\widehat{\Gamma} = \widehat{\Lambda}$. Suppose Conjecture D holds true. Can we conclude that vol(Γ) = vol(Λ)?
Again, various problems arise when trying to adapt the proof of Theorem 1
to settle this question in the affirmative. To be more concrete, a direct translation fails for the following reason. Let Mn be the intersection of all normal
subgroups of index at most n in the arithmetic group H(Z) corresponding
to Γ as above. Then Mn will not consist of congruence subgroups. In fact,
H(Z) has the congruence subgroup property if and only if all the groups Mn
are congruence subgroups. But the congruence subgroup property is well
known to fail for all arithmetic Kleinian groups [10]. Instead, one could try
to start with a chain of congruence subgroups Γn of Γ but then it seems
unclear if or under what circumstances the chain Λn = Γn ∩ Λ consists of
congruence subgroups in Λ.
We remark that for the trivial coefficient system Z ⊂ Q, Conjecture D
is wide open. However, in our relevant case of δ(G) = 1, Bergeron and
Venkatesh construct strongly acyclic coefficient modules M with the property that the spectrum of the Laplacian acting on M ⊗Z C-valued p-forms
on Γn \X is bounded away from zero for all p and n. In the special case
G = SL(2, C), they show that Conjecture D holds true for any strongly
acyclic M .
References
[1] N. Bergeron and A. Venkatesh, The asymptotic growth of torsion homology for arithmetic groups, J. Inst. Math. Jussieu 12 (2013), no. 2, 391–447. MR 3028790 ↑6
[2] M. Boileau and S. Friedl, Profinite completions and 3-manifold groups, RIMS
Kokyuroku 1991 (2016), 54–68. http://hdl.handle.net/2433/224629. ↑1
[3] M. Bridson and A. Reid, Profinite rigidity, fibering, and the figure-eight knot (2015).
e-print. arXiv:1505.07886 ↑2
[4] L. Funar, Torus bundles not distinguished by TQFT invariants, Geom. Topol. 17
(2013), no. 4, 2289–2344. With an appendix by Funar and Andrei Rapinchuk.
MR 3109869
↑2
[5] J. Hempel, Residual finiteness for 3-manifolds, Combinatorial group theory and topology (Alta, Utah, 1984), Ann. of Math. Stud., vol. 111, Princeton Univ. Press, Princeton, NJ, 1987, pp. 379–396. MR 895623 ↑4
[6] J. Hempel, Some 3-manifold groups with the same finite quotients (2014). e-print. arXiv:1409.3509 ↑2
[7] H. Kammeyer, Introduction to ℓ2 -invariants, lecture notes, 2018. Available for download at http://topology.math.kit.edu/21_679.php. ↑1
[8] T. T. Q. Lê, Growth of homology torsion in finite coverings and hyperbolic volume
(2014). e-print. arXiv:1412.7758 ↑5
[9] J. Lott and W. Lück, L2 -topological invariants of 3-manifolds, Invent. Math. 120
(1995), no. 1, 15–60. MR 1323981 ↑5
[10] A. Lubotzky, Group presentation, p-adic analytic groups and lattices in SL2 (C), Ann.
of Math. (2) 118 (1983), no. 1, 115–130. MR 707163 ↑7
[11] W. Lück, Approximating L2 -invariants and homology growth, Geom. Funct. Anal. 23
(2013), no. 2, 622–663. MR 3053758 ↑1
[12] W. Lück, L2 -invariants: theory and applications to geometry and K-theory, Ergebnisse
der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in
Mathematics, vol. 44, Springer-Verlag, Berlin, 2002. MR 1926649 ↑1, 5
[13] W. Lück and T. Schick, L2 -torsion of hyperbolic manifolds of finite volume, Geom.
Funct. Anal. 9 (1999), no. 3, 518–567. MR 1708444 ↑5
[14] C. Maclachlan and A. W. Reid, The arithmetic of hyperbolic 3-manifolds, Graduate
Texts in Mathematics, vol. 219, Springer-Verlag, New York, 2003. MR 1937957 ↑7
[15] N. Nikolov and D. Segal, On finitely generated profinite groups. I. Strong completeness
and uniform bounds, Ann. of Math. (2) 165 (2007), no. 1, 171–238. MR 2276769 ↑2
[16] V. Platonov and A. Rapinchuk, Algebraic groups and number theory, Pure and Applied Mathematics, vol. 139, Academic Press, Inc., Boston, MA, 1994. Translated
from the 1991 Russian original by Rachel Rowen. MR 1278263 ↑6
[17] A. W. Reid, Profinite properties of discrete groups, Groups St Andrews 2013, London
Math. Soc. Lecture Note Ser., vol. 422, Cambridge Univ. Press, Cambridge, 2015,
pp. 73–104. MR 3445488 ↑2
[18] L. Ribes and P. Zalesskii, Profinite groups, Ergebnisse der Mathematik und ihrer
Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 40, SpringerVerlag, Berlin, 2000. MR 1775104 ↑1, 3, 6
[19] G. Wilkes, Profinite rigidity for Seifert fibre spaces, Geom. Dedicata 188 (2017), 141–
163. MR 3639628 ↑2
Karlsruhe Institute of Technology, Institute for Algebra and Geometry,
Germany
E-mail address: [email protected]
URL: www.math.kit.edu/iag7/~kammeyer/
| 4 |
On automorphisms of finite p-groups
Hemant Kalra and Deepak Gumber
School of Mathematics
arXiv:1803.07853v1 [math.GR] 21 Mar 2018
Thapar Institute of Engineering and Technology, Patiala - 147 004, India
emails: [email protected], [email protected]
It is proved in [J. Group Theory, 10 (2007), 859-866] that if G is a finite p-group
such that (G, Z(G)) is a Camina pair, then |G| divides | Aut(G)|. We give a very
short and elementary proof of this result.
2010 Mathematics Subject Classification: 20D15, 20D45.
Keywords: Camina pair, class-preserving automorphism.
1 Introduction Let G be a finite non-abelian p-group. The problem “Does the order,
if it is greater than p2 , of a finite non-cyclic p-group divide the order of its automorphism
group?” is a well-known problem [6, Problem 12.77] in finite group theory. Gaschütz
[4] proved that any finite p-group of order at least p2 admits a non-inner automorphism
of order a power of p. It follows that the problem has an affirmative answer for finite
p-groups with center of order p. This immediately answers the problem positively for
finite p-groups of maximal class. Otto [7] also gave an independent proof of this result.
Fouladi et al. [3] gave a supportive answer to the problem for finite p-groups of co-class
2. For more details on this problem, one can see the introduction in the paper of Yadav
[8]. In [8, Theorem A], Yadav proved that if G is a finite p-group such that (G, Z(G))
is a Camina pair, then |G| divides | Aut(G)|. He also proved the important result [8,
Corollary 4.4] that the group of all class-preserving outer automorphisms is non-trivial
for finite p-groups G with (G, Z(G)) a Camina pair.
In this paper, we give different and very short proofs of these results of Yadav using
elementary arguments.
Let G be a finite p-group. Then (G, Z(G)) is called a Camina pair if xZ(G) ⊆ xG
for all x ∈ G − Z(G), where xG denotes the conjugacy class of x in G. In particular, if
(G, G′ ) is a Camina pair, then G is called a Camina p-group.
2 Proofs We shall need the following lemma which is a simple modification of a
lemma of Alperin [1, Lemma 3].
Lemma 2.1. Let G be any group and B be a central subgroup of G contained in a normal
subgroup A of G. Then the group AutB
A (G)of all automorphisms of G that induce the
identity on both A and G/B is isomorphic onto Hom(G/A, B).
Theorem 2.2. Let G be a finite p-group such that (G, Z(G)) is a Camina pair. Then
|G| divides | Aut(G)|.
Proof. Observe that Z(G) ≤ G′ ≤ Φ(G) and, therefore, Z(G) ≤ Z(M ) for every maximal
subgroup M of G. Suppose that Z(G) < Z(M1 ) for some maximal subgroup M1 of G.
Let $G = M_1\langle g_1\rangle$, where $g_1 \in G - M_1$ and $g_1^p \in M_1$. Let $g \in Z(M_1) - Z(G)$. Then
$$|Z(G)| \le |[g, G]| = |[g, M_1\langle g_1\rangle]| = |[g, \langle g_1\rangle]| \le p$$
implies that $|Z(G)| = p$. The result therefore follows by Gaschütz [4]. We therefore suppose that $Z(G) = Z(M)$ for every maximal subgroup M of G. We prove that $C_G(M) \le M$. Assume that there exists $g_0 \in C_G(M_0) - M_0$ for some maximal subgroup $M_0$ of G. Then $G = M_0\langle g_0\rangle$ and thus $g_0 \in Z(G)$, because $g_0$ commutes with $M_0$. This is a contradiction because $Z(G) \le \Phi(G)$. Therefore $C_G(M) \le M$ for every maximal subgroup M of G. Consider the group $\mathrm{Aut}_M^{Z(G)}(G)$, which is isomorphic to $\mathrm{Hom}(G/M, Z(G))$ by Lemma 2.1. It follows that $\mathrm{Aut}_M^{Z(G)}(G)$ is non-trivial. Let $\alpha \in \mathrm{Aut}_M^{Z(G)}(G) \cap \mathrm{Inn}(G)$. Then α is an inner automorphism induced by some $g \in C_G(M) = Z(M)$. Since $Z(G) = Z(M)$, α is trivial. It follows that
$$|(\mathrm{Aut}_M^{Z(G)}(G))(\mathrm{Inn}(G))| = |\mathrm{Aut}_M^{Z(G)}(G)|\,|\mathrm{Inn}(G)| = |Z(G)|\,|G/Z(G)| = |G|,$$
because Z(G) is elementary abelian by Theorem 2.2 of [5]. This completes the proof.
Corollary 2.3. Let G be a finite Camina p-group. Then |G| divides | Aut(G)|.
Proof. It is a well-known result [2] that the nilpotence class of G is at most 3. Also, it follows
from [5, Lemma 2.1, Theorem 5.2, Corollary 5.3] that (G, Z(G)) is a Camina pair. The
result therefore follows from Theorem 2.2.
An automorphism α of G is called a class-preserving automorphism of G if α(x) ∈ xG
for each x ∈ G. The group of all class-preserving automorphisms of G is denoted by
Autc (G). An automorphism β of G is called a central automorphism if x^{−1}β(x) ∈ Z(G)
for each x ∈ G. It is easy to see that if (G, Z(G)) is a Camina pair, then the group of all
central automorphisms fixing Z(G) element-wise is contained in Autc (G).
Remark 2.4. It follows from the proof of Theorem 2.2 that if G is a finite p-group such
that (G, Z(G)) is a Camina pair and |Z(G)| ≥ p^2, then

| Aut_c(G)| ≥ |(Aut_M^{Z(G)}(G)) (Inn(G))| = |G|.
Thus, in particular, we obtain the following result of Yadav [8].
Corollary 2.5 ([8, Corollary 4.4]). Let G be a finite p-group such that (G, Z(G)) is a
Camina pair and |Z(G)| ≥ p^2. Then Autc (G)/ Inn(G) is non-trivial.
The following example shows that Remark 2.4 is not true if |Z(G)| = p.
Example 2.6. Consider a finite p-group G of nilpotence class 2 such that (G, Z(G)) is a Camina pair and |Z(G)| = p. Since cl(G) = 2, exp(G/Z(G)) = exp(G′) and hence G′ = Z(G) = Φ(G). Let |G| = p^n, where n ≥ 3, and let {x_1, x_2, . . . , x_{n−1}} be a minimal generating set of G. Then

| Aut_c(G)| ≤ ∏_{i=1}^{n−1} |x_i^G| = p^{n−1} = |G/Z(G)|.
Acknowledgment: Research of the first author is supported by Thapar Institute of Engineering and Technology and also by SERB, DST grant no. MTR/2017/000581. Research of the second author is supported by SERB, DST grant no. EMR/2016/000019.
References
[1] J. L. Alperin, Groups with finitely many automorphisms, Pacific J. Math., 12 (1962),
1-5.
[2] R. Dark and C. M. Scoppola, On Camina groups of prime power order, J. Algebra,
181 (1996), 787-802.
[3] S. Fouladi, A. R. Jamali and R. Orfi, Automorphism groups of finite p-groups of
co-class 2, J. Group Theory, 10 (2007), 437-440.
[4] W. Gaschütz, Nichtabelsche p-Gruppen besitzen äussere p-Automorphismen, J. Algebra, 4 (1966), 1-2.
[5] I. D. Macdonald, Some p-groups of Frobenius and extra-special type, Israel J. Math.,
40 (1981), 350-364.
[6] V. D. Mazurov and E. I. Khukhro, The Kourovka notebook, Unsolved problems in
group theory, 18th augmented edn. (Russian Academy of Sciences Siberian Division,
Institute of Mathematics, 2014). Also available at ArXiv.
[7] A. D. Otto, Central automorphisms of a finite p-group, Trans. Amer. Math. Soc.,
125 (1966), 280-287.
[8] M. K. Yadav, On automorphisms of finite p-groups, J. Group Theory, 10 (2007),
859-866.
Faster Information Gathering in Ad-Hoc
Radio Tree Networks
Marek Chrobak∗
Kevin Costello†
arXiv:1512.02179v1 [cs.DS] 7 Dec 2015
December 8, 2015
Abstract
We study information gathering in ad-hoc radio networks. Initially, each node of the network
has a piece of information called a rumor, and the overall objective is to gather all these rumors
in the designated target node. The ad-hoc property refers to the fact that the topology of the
network is unknown when the computation starts. Aggregation of rumors is not allowed, which
means that each node may transmit at most one rumor in one step.
We focus on networks with tree topologies, that is we assume that the network is a tree with
all edges directed towards the root, but, being ad-hoc, its actual topology is not known. We
provide two deterministic algorithms for this problem. For the model that does not assume any
collision detection nor acknowledgement mechanisms, we give an O(n log log n)-time algorithm,
improving the previous upper bound of O(n log n). We also show that this running time can be
further reduced to O(n) if the model allows for acknowledgements of successful transmissions.
1 Introduction
We study the problem of information gathering in ad-hoc radio networks. Initially, each node of the
network has a piece of information called a rumor, and the objective is to gather all these rumors,
as quickly as possible, in the designated target node. The nodes communicate by sending messages
via radio transmissions. At any time step, several nodes in the network may transmit. When a
node transmits a message, this message is sent immediately to all nodes within its range. When two
nodes send their messages to the same node at the same time, a collision occurs and neither message
is received. Aggregation of rumors is not allowed, which means that each node may transmit at
most one rumor in one step.
The network can be naturally modeled by a directed graph, where an edge (u, v) indicates that v
is in the range of u. The ad-hoc property refers to the fact that the actual topology of the network is
unknown when the computation starts. We assume that nodes are labeled by integers 0, 1, ..., n − 1.
An information gathering protocol determines a sequence of transmissions of a node, based on its
label and on the previously received messages.
Our results. In this paper, we focus on ad-hoc networks with tree topologies, that is the underlying ad-hoc network is assumed to be a tree with all edges directed towards the root, although the
actual topology of this tree is unknown.
∗ Department of Computer Science, University of California at Riverside, USA. Research supported by NSF grants CCF-1217314 and CCF-1536026.
† Department of Mathematics, University of California at Riverside, USA. Research supported by NSA grant H98230-13-1-0228.
We consider two variants of the problem. In the first one, we do not assume any collision
detection or acknowledgment mechanisms, so none of the nodes (in particular neither the sender
nor the intended recipient) are notified about a collision after it occurred. In this model, we give
a deterministic algorithm that completes information gathering in time O(n log log n). Our result
significantly improves the previous upper bound of O(n log n) from [5]. To our knowledge, no lower
bound for this problem is known, apart from the trivial bound of Ω(n) (since each rumor must be
received by the root in a different time step).
In the second part of the paper, we also consider a variant where acknowledgments of successful
transmissions are provided to the sender. All the remaining nodes, though, including the intended
recipient, cannot distinguish between collisions and absence of transmissions. Under this assumption, we show that the running time can be improved to O(n), which is again optimal for trivial
reasons, up to the implicit constant.
While we assume that all nodes are labelled 0, 1, ..., n − 1 (where n is the number of vertices),
our algorithms’ asymptotic running times remain the same if the labels are chosen from a larger
range 0, 1, ..., N − 1, as long as N = O(n).
Related work. The problem of information gathering for trees was introduced in [5], where the
model without any collision detection was studied. In addition to the O(n log n)-time algorithm
without aggregation – that we improve in this paper – [5] develops an O(n)-time algorithm for the
model with aggregation, where a message can include any number of rumors. Another model studied
in [5], called fire-and-forward, requires that a node cannot store any rumors; a rumor received by a
node has to be either discarded or immediately forwarded. For fire-and-forward protocols, a tight
bound of Θ(n^{1.5}) is given in [5].
The information gathering problem is closely related to two other information dissemination
primitives that have been well studied in the literature on ad-hoc radio networks: broadcasting and
gossiping. All the work discussed below is for ad-hoc radio networks modeled by arbitrary directed
graphs, and without any collision detection capability.
In broadcasting, a single rumor from a specified source node has to be delivered to all other nodes
in the network. The naïve RoundRobin algorithm (see the next section) completes broadcasting in
time O(n2 ). Following a sequence of papers [6, 18, 2, 3, 21, 11] where this naïve bound was gradually
improved, it is now known that broadcasting can be solved in time O(n log D log log(D∆/n)) [10],
where D is the diameter of G and ∆ is its maximum in-degree. This nearly matches the lower
bound of Ω(n log D) from [9]. Randomized algorithms for broadcasting have also been well studied
[1, 19, 11].
The gossiping problem is an extension of broadcasting, where each node starts with its own
rumor, and all rumors need to be delivered to all nodes in the network. The time complexity
of deterministic algorithms for gossiping is a major open problem in the theory of ad-hoc radio
networks. Obviously, the lower bound of Ω(n log D) for broadcasting [9] applies to gossiping as
well, but no better lower bound is known. It is also not known whether gossiping can be solved in
time O(n polylog(n)) with a deterministic algorithm, even if message aggregation is allowed. The
best currently known upper bound is O(n4/3 log4 n) [16] (see [6, 25] for some earlier work). The
case when no aggregation is allowed (or with limited aggregation) was studied in [4]. Randomized
algorithms for gossiping have also been well studied [11, 20, 7]. Interested readers can find more
information about gossiping in the survey paper [15].
Connections to other problems. This research, as well as the earlier work in [5], was motivated by the connections between information gathering in trees and other problems in distributed
computing involving shared channels, including gossiping in radio networks and MAC contention
resolution.
For arbitrary graphs, assuming aggregation, one can solve the gossiping problem by running an
algorithm for information gathering and then broadcasting all rumors (as one message) to all nodes
in the network. Thus an O(n polylog(n))-time algorithm for information gathering would resolve
in positive the earlier-discussed open question about the complexity of gossiping. Due to this
connection, developing an O(n polylog(n))-time algorithm for information gathering on arbitrary
graphs is likely to be very difficult – if possible at all. We hope that developing efficient algorithms
for trees, or for some other natural special cases, will ultimately lead to some insights helpful in
resolving the complexity of the gossiping problem in arbitrary graphs.
Some algorithms for ad-hoc radio networks (see [4, 17], for example) involve constructing a
spanning subtree of the network and disseminating information along this subtree. Better algorithms
for information gathering on trees may thus be useful in addressing problems for arbitrary graphs.
The problem of contention resolution for multiple-access channels (MAC) has been widely studied in the literature. (See, for example, [12, 22, 14] and the references therein.) There are in fact
myriad of variants of this problem, depending on the characteristics of the communication model.
Generally, the instance of the MAC contention resolution problem involves a collection of transmitters connected to a shared channel (e.g. ethernet). Some of these transmitters need to send
their messages across the channel, and the objective is to design a distributed protocol that will
allow them to do that. The information gathering problem for trees is in essence an extension of
MAC contention resolution to multi-level hierarchies of channels, where transmitters have unique
identifiers, and the structure of this hierarchy is not known.
2 Preliminaries
We now provide a formal definition of our model and introduce notation, terminology, and some
basic properties used throughout the paper.
Radio networks with tree topology. In the paper we focus exclusively on radio networks with
tree topologies. Such a network will be represented by a tree T with root r and with n = |T |
nodes. The edges in T are directed towards the root, representing the direction of information flow:
a node can send messages to its parent, but not to its children. We assume that each node v ∈ T
is assigned a unique label from [n] = {0, 1, ..., n − 1}, and we denote this label by label(v).
For a node v, by deg(v) we denote the degree of v, which is the number of v’s children. For any
subtree X of T and a node v ∈ X, we denote by Xv the subtree of X rooted at v that consists of
all descendants of v in X.
For any integer γ = 1, 2, ..., n − 1 and any node v of T define the γ-height of v as follows. If v is
a leaf then the γ-height of v is 0. If v is an internal node then let g be the maximum γ-height of a
child of v. If v has fewer than γ children of γ-height equal g then the γ-height of v is g. Otherwise,
the γ-height of v is g + 1. The γ-height of v will be denoted by heightγ (v). In case when more than
one tree are under consideration, to resolve potential ambiguity we will write heightγ (v, T ) for the
γ-height of v in T . The γ-height of a tree T , denoted heightγ (T ), is defined as heightγ (r), that is
the γ-height of its root.
Its name notwithstanding, the definition of γ-height is meant to capture the “bushiness” of a
tree. For example, if T is just a path then its γ-height is equal 0 for each γ ≥ 2. The concept
of γ-height generalizes Strahler numbers [23, 24], introduced in hydrology to measure the size of
Figure 1: An example showing a tree and the values of 3-heights for all its nodes.
streams in terms of the complexity of their tributaries. Figure 1 gives an example of a tree and
values of 3-heights for all its nodes.
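To make the definition concrete, here is a small Python sketch (ours, not part of the paper) that computes γ-heights bottom-up directly from the recursive definition; the tree is assumed to be given as a map from each node to the list of its children.

def gamma_height(children, root, gamma):
    """Compute the gamma-height of every node of a rooted tree.

    children: dict mapping each node to the list of its children.
    A leaf has gamma-height 0; an internal node whose maximal child
    gamma-height is g has gamma-height g if fewer than gamma children
    attain g, and g + 1 otherwise."""
    height = {}

    def visit(v):
        kids = children.get(v, [])
        if not kids:
            height[v] = 0
            return 0
        child_heights = [visit(u) for u in kids]
        g = max(child_heights)
        height[v] = g if child_heights.count(g) < gamma else g + 1
        return height[v]

    visit(root)
    return height

# A path of 4 nodes has gamma-height 0 for every gamma >= 2.
path = {0: [1], 1: [2], 2: [3]}
assert gamma_height(path, 0, 2)[0] == 0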
The lemma below is a slight refinement of an analogous lemma in [5], and it will play a critical
role in our algorithms.
Lemma 1. Suppose that T has q leaves, and let 2 ≤ γ ≤ q. Then heightγ (T ) ≤ logγ q.
Equivalently, any tree of γ-height j must have at least γ^j leaves. This can be seen by induction on j: if v is a vertex that is furthest from the root among all vertices of γ-height j, then v by definition has at least γ children of γ-height j − 1, each of which has at least γ^{j−1} leaf descendants by the inductive hypothesis.
Information gathering protocols. Each node v of T has a label (or an identifier) associated
with it, and denoted label(v). When the computation is about to start, each node v has also a piece
of information, ρv , that we call a rumor. The computation proceeds in discrete, synchronized time
steps, numbered 0, 1, 2, .... At any step, v can either be in the receiving state, when it listens to radio
transmissions from other nodes, or in the transmitting state, when it is allowed to transmit. When v
transmits at a time t, the message from v is sent immediately to its parent in T . As we do not allow
rumor aggregation, this message may contain at most one rumor, plus possibly O(log n) bits of other
information. If w is v’s parent, w will receive v’s message if and only if w is in the receiving state
and no collision occurred, that is if no other child of w transmitted at time t. In Sections 3 and 4 we
do not assume any collision detection nor acknowledgement mechanisms, so if v’s message collides
with a message from one of its siblings, neither v nor w receive any notification. (In other words, w
cannot distinguish collisions between its children’s transmissions from background noise.) We relax
this requirement in Section 5, by assuming that v (and only v) will obtain an acknowledgment from
w after each successful transmission.
The objective of an information gathering protocol is to deliver all rumors from T to its root r,
as quickly as possible. Such a protocol needs to achieve its goal even without the knowledge of the
topology of T . More formally, a gathering protocol A can be defined as a function that, at each
time t, and for each given node v, determines the action of v at time t based only on v’s label and
the information received by v up to time t. The action of v at each time step t involves choosing its
state (either receiving or transmitting) and, if it is in the transmitting state, choosing which rumor
to transmit.
We will say that A runs in time T (n) if, for any tree T and any assignment of labels to its nodes,
after at most T (n) steps all rumors are delivered to r.
A simple example of an information gathering protocol is called RoundRobin. In RoundRobin
nodes transmit one at a time, in n rounds, where in each round they transmit in the order 0, 1, ..., n−1
of their labels. For any node v, when it is its turn to transmit, v transmits any rumor from the set
of rumors that have been received so far (including its own rumor) but not yet transmitted. In each
round, each rumor that is still not in r will get closer to r, so after n^2 steps all rumors will reach r.
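As a sanity check of the model, the following Python sketch (ours; it assumes full knowledge of the tree, which a real ad-hoc protocol does not have) simulates RoundRobin and the n^2 bound just stated.

def round_robin_gather(parent, root):
    """Simulate RoundRobin on a tree given by a parent map.

    parent: dict mapping every non-root node label to its parent's label;
    labels are assumed to be 0..n-1.  Since exactly one node transmits per
    step, collisions never occur.  Returns the number of steps until the
    root has received every rumor."""
    n = len(parent) + 1
    held = {v: {v} for v in range(n)}       # rumors received but not yet passed on
    received_by_root = {root}
    steps = 0
    while len(received_by_root) < n:
        v = steps % n                        # node whose turn it is in this round
        if v != root and held[v]:
            rumor = held[v].pop()            # transmit any pending rumor to the parent
            held[parent[v]].add(rumor)
            if parent[v] == root:
                received_by_root.add(rumor)
        steps += 1
    return steps

# A path 2 -> 1 -> 0 rooted at 0: all rumors reach the root within n^2 = 9 steps.
assert round_robin_gather({1: 0, 2: 1}, 0) <= 9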
Strong k-selectors. Let S̄ = (S1 , S2 , ..., Sm ) be a family of subsets of {0, 1, ..., n − 1}. S̄ is
called a strong k-selector if, for each k-element set A ⊆ {0, 1, ..., n − 1} and each a ∈ A, there is
a set Si such that Si ∩ A = {a}. As shown in [13, 9], for each k there exists a strong k-selector
S̄ = (S0 , S1 , ..., Sm−1 ) with m = O(k 2 log n). We will make extensive use of strong k-selectors in
our algorithm. At a certain time in the computation our protocols will “run” S̄, for an appropriate
choice of k, by which we mean that it will execute a sequence of m consecutive steps, such that
in the jth step the nodes from Sj will transmit, while those not in Sj will stay quiet. This will
guarantee that, for any node v with at most k − 1 siblings, there will be at least one step in the
execution of S̄ where v will transmit but none of its siblings will. Therefore at least one of v’s
transmissions will be successful.
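The strong-selector property itself is easy to state in code; the brute-force sketch below (ours, exponential in k and meant only for tiny instances) checks it directly.

from itertools import combinations

def is_strong_k_selector(sets, n, k):
    """Check whether `sets` (a list of subsets of range(n)) is a strong
    k-selector: for every A of size at most k and every a in A, some
    member S of the family satisfies S & A == {a}."""
    family = [frozenset(s) for s in sets]
    for size in range(1, k + 1):
        for A in combinations(range(n), size):
            A = frozenset(A)
            for a in A:
                if not any(S & A == {a} for S in family):
                    return False
    return True

# The singletons {0}, ..., {n-1} trivially form a strong k-selector of size n.
n, k = 5, 3
assert is_strong_k_selector([{i} for i in range(n)], n, k)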
3 An O(n√log n)-Time Protocol

We first give a gathering protocol SimpleGather for trees with running time O(n√log n). Our faster protocol will be presented in the next section. We fix three parameters:

K = 2^⌊√(log n)⌋,  D = ⌈log_K n⌉ = O(√(log n)),  D′ = ⌈log K^3⌉ = O(√(log n)).

We also fix a strong K-selector S̄ = (S_0, S_1, ..., S_{m−1}), where m ≤ CK^2 log n, for some integer constant C.
By Lemma 1, we have that height_K(T) ≤ D. We call a node v of T light if |T_v| ≤ n/K^3; otherwise we say that v is heavy. Let T′ be the subtree of T induced by the heavy nodes. By the definition of heavy nodes, T′ has at most K^3 leaves, so height_2(T′) ≤ D′. Also, obviously, r ∈ T′.
To streamline the description of our algorithm we will allow each node to receive and transmit
messages at the same time. We will also assume a preprocessing step allowing each v to know both
the size of its subtree Tv (in particular, whether it is in T 0 or not), its K-height, and, if it is in
T 0 , its 2-height in the subtree T 0 . We later explain both the preprocessing and how to modify the
algorithm to remove the receive/transmit assumption.
The algorithm consists of two epochs. Epoch 1 consists of D + 1 stages, each lasting O(n) steps.
In this epoch only light vertices participate. The purpose of this epoch is to gather all rumors
from T in T 0 . Epoch 2 has D0 + 1 stages and only heavy vertices participate in the computation
during this epoch. The purpose of epoch 2 is to deliver all rumors from T 0 to r. We describe the
computation in the two epochs separately.
A detailed description of Algorithm SimpleGather is given in Pseudocode 1. To distinguish
between computation steps (which do not consume time) and communication steps, we use command
“at time t”. When the algorithm reaches this command it waits until time step t to continue
processing. Each message transmission takes one time step. For each node v we maintain a set Bv
of rumors received by v, including its own rumor ρv .
Epoch 1: light vertices. Let 0 ≤ h ≤ D, and let v be a light vertex whose K-height equals h. Then
v will be active only during stage h which starts at time αh = (C + 1)hn. This stage is divided into
two parts.
In the first part of stage h, v will transmit according to the strong K-selector S̄. Specifically,
this part has n/K 3 iterations, where each iteration corresponds to a complete execution of S̄. At
5
any time, some of the rumors in Bv may be marked; the marking on a rumor indicates that the
algorithm has already attempted to transmit it using S̄. At the beginning of each iteration, v
chooses any rumor ρz ∈ Bv it has not yet marked, then transmits ρz in the steps that use sets Si
containing the label of v. This ρz is then marked. If the parent u of v has degree at most K, the
definition of strong K-selectors guarantees that ρz will be received by u, but if u’s degree is larger
it may not have received ρz . Note that the total number of steps required for this part of stage h
is (n/K 3 ) · m ≤ Cn, so these steps will be completed before the second part of stage h starts.
In the second part, that starts at time αh + Cn, we simply run a variant of the RoundRobin
protocol, but cycling through rumors instead of nodes: in the l-th step of this part, all nodes holding
the rumor of the node with label l transmit that rumor (note that due to the tree topology it is
impossible for two siblings to both be holding rumor l).
We claim that the following invariant holds for all h = 0, 1, ..., D:
(Ih ) Let w ∈ T and let u be a light child of w with heightK (u) ≤ h − 1. Then at time αh node w
has received all rumors from Tu .
To prove this invariant we proceed by induction on h. If h = 0 the invariant (I0 ) holds vacuously.
So suppose that invariant (Ih ) holds for some value of h. We want to prove that (Ih+1 ) is true
when stage h + 1 starts. We thus need to prove the following claim: if u is a light child of w with
heightK (u) ≤ h then at time αh+1 all rumors from Tu will arrive in w.
If heightK (u) ≤ h − 1 then the claim holds, immediately from the inductive assumption (Ih ). So
assume that heightK (u) = h. Consider the subtree H rooted at u and containing all descendants
of u whose K-height is equal to h. By the inductive assumption, at time αh any w0 ∈ H has all
rumors from the subtrees rooted at its descendants of K-height smaller than h, in addition to its
own rumor ρw0 . Therefore all rumors from Tu are already in H and each of them has exactly one
copy in H, because all nodes in H were idle before time αh .
When the algorithm executes the first part of stage h on H, then each node v in H whose parent is also in H will successfully transmit an unmarked rumor during each pass through the strong K-selector – indeed, our definition of H guarantees that v has at most K − 1 siblings in H, so by the
definition of strong selector it must succeed at least once. We make the following additional claim:
Claim 1. At all times during stage h, the collection of nodes in H still holding unmarked rumors
forms an induced tree of H
The claim follows from induction: At the beginning of the stage the nodes in H still hold their
own original rumor, and it is unmarked since those nodes were idle so far. As the stage progresses,
each parent of a transmitting child will receive a new (and therefore not yet marked) rumor during
each run through the strong selector, so no holes can ever form.
In particular, node u will receive a new rumor during every run through the strong selector
until it has received all rumors from its subtree. Since the subtree originally held at most |T_u| ≤ n/K^3 rumors, u must have received all rumors from its subtree after at most n/K^3 runs through the selector.
Note that, as heightK (u) = h, u will also attempt to transmit its rumors to w during this part,
but, since we are not making any assumptions about the degree of w, there is no guarantee that
w will receive them. This is where the second part of this stage is needed. Since in the second
part each rumor is transmitted without collisions, all rumors from u will reach w before time αh+1 ,
completing the inductive step and the proof that (Ih+1 ) holds.
In particular, using Invariant (Ih ) for h = D, we obtain that after epoch 1 each heavy node w
will have received rumors from the subtrees rooted at all its light children. Therefore at that time
all rumors from T will be already in T 0 , with each rumor having exactly one copy in T 0 .
Pseudocode 1 SimpleGather(v)
 1: K = 2^⌊√(log n)⌋, D = ⌈log_K n⌉
 2: B_v ← {ρ_v}                                        ▷ Initially v has only ρ_v
 3: Throughout: all rumors received by v are automatically added to B_v
 4: if |T_v| ≤ n/K^3 then                              ▷ v is light (epoch 1)
 5:   h ← height_K(v, T); α_h ← (C + 1)nh              ▷ v participates in stage h
 6:   for i = 0, 1, ..., n/K^3 − 1 do                  ▷ iteration i
 7:     at time α_h + im
 8:     if B_v contains an unmarked rumor then         ▷ Part 1: strong K-selector
 9:       choose any unmarked ρ_z ∈ B_v and mark it
10:       for j = 0, 1, ..., m − 1 do
11:         at time α_h + im + j
12:         if label(v) ∈ S_j then Transmit(ρ_z)
13:   for l = 0, 1, ..., n − 1 do                      ▷ Part 2: RoundRobin
14:     at time α_h + Cn + l
15:     z ← node with label(z) = l
16:     if ρ_z ∈ B_v then Transmit(ρ_z)
17: else                                               ▷ v is heavy (epoch 2)
18:   g ← height_2(v, T′); α′_g ← α_{D+1} + 2ng        ▷ v participates in stage g
19:   for i = 0, 1, ..., n − 1 do                      ▷ Part 1: all nodes transmit
20:     at time α′_g + i
21:     if B_v contains an unmarked rumor then
22:       choose any unmarked ρ_z ∈ B_v and mark it
23:       Transmit(ρ_z)
24:   for l = 0, 1, ..., n − 1 do                      ▷ Part 2: RoundRobin
25:     at time α′_g + 2n + l
26:     z ← node with label(z) = l
27:     if ρ_z ∈ B_v then Transmit(ρ_z)
Figure 2: Proving Invariant (Ih ). Dark-shaded subtrees of Tu consist of light nodes with K-height
at most h − 1. H consists of the descendants of u with K-height equal h.
Epoch 2: heavy vertices. In this epoch we have at most D0 + 1 stages, and only heavy nodes in
T 0 participate in the computation. When the epoch starts, all rumors are already in T 0 . In stage
D + 1 + g the nodes in T 0 whose 2-height is equal g will participate. Similar to the stages of epoch 1,
this stage has two parts and the second part executes RoundRobin, as before. The difference is
that now, in the first part, instead of using the strong K-selector, each heavy node will transmit at
each of the n steps.
We need to show that r will receive all rumors at the end. The argument is similar as for light
vertices, but with a twist, since we do not use selectors now; instead we have steps when all nodes
transmit. In essence, we show that each stage reduces by at least one the 2-depth of the minimum
subtree of T 0 that contains all rumors.
Specifically, we show that the following invariant holds for all g = 0, 1, ..., D0 :
(Jg ) Let w ∈ T 0 and let u ∈ T 0 be a child of w with height2 (u, T 0 ) ≤ g − 1. Then at time αg0 node
w has received all rumors from Tu .
We prove invariant (Jg ) by induction on g. For g = 0, (J0 ) holds vacuously. Assume that (Jg ) holds
for some g. We claim that (Jg+1 ) holds right after stage g.
Choose any child u of w with height2 (u, T 0 ) ≤ g. If height2 (u, T 0 ) ≤ g − 1, we are done, by
the inductive assumption. So we can assume that height2 (u, T 0 ) = g. Let P be the subtree of T 0
rooted at u and consisting of all descendants of u whose 2-height in T 0 is equal g. Then P is simply
a path. By the inductive assumption, for each w0 ∈ P , all rumors from the subtrees of w0 rooted
at its children of 2-height at most g − 1 are in w0 . Thus all rumors from Tu are already in P . All
nodes in P participate in stage g, but their children outside P do not transmit. Therefore each
transmission from any node x ∈ P − {u} during stage g will be successful. Due to pipelining, all
rumors from P will reach u after the first part of stage g. (This conclusion can also be derived from
treating this computation on P as an instance of the token collection game in Section 2, with each
step of the transmissions being one step of the game.) In the second part, all rumors from u will be
successfully sent to w. So after stage g all rumors from Tu will be in w, completing the proof that
(Jg+1 ) holds.
Removing simplifying assumptions. At the beginning of this section we made some simplifying
assumptions. It still remains to explain how to modify our algorithm so that it works even if these
8
assumptions do not hold. These modification are similar to those described in [5], but we include
them here for the sake of completeness.
First, we assumed a preprocessing step whereby each v knows certain parameters of its subtree
Tv , including the size, its K-height, etc. The justification for this lies in the algorithm from [5] for
information gathering in trees with aggregation. Such an algorithm can be modified to compute in
linear time any function f such that f (v) is uniquely determined by the values of f on the children
of v. The modification is that each node u, when its sends its message (which, in the algorithm
from [5] contains all rumors from Tu ), it will instead send the value of f (u). A node v, after it
receives all values of f (u) from each child u, will then compute f (v)1 .
We also assumed that each node can receive and transmit messages at the same time. We now
need to modify the algorithm so that it receives messages only in the receiving state and transmits
only in the transmitting state. For the RoundRobin steps this is trivial: a node v is in the
transmitting state only if it is scheduled to transmit, otherwise it is in the receiving state. For other
steps, we will explain the modification for light and heavy nodes separately.
Consider the computation of the light nodes during the steps when they transmit according to
the strong selector. Instead of the strong K-selector, we can use the strong (K + 1)-selector, which
will not affect the asymptotic running time. When a node v is scheduled to transmit, it enters the
transmitting state, otherwise it is in the receiving state. In the proof, where we argue that the
message from v will reach its parent, instead of applying the selector argument to v and its siblings,
we apply it to the set of nodes consisting of v, its siblings, and its parent, arguing that there will
be a step when v is the only node transmitting among its siblings and its parent is in the receiving
state.
Finally, consider the computation of the heavy nodes, at steps when all of them transmit. We
modify the algorithm so that, in any stage g, the iteration (in Line 18) of these steps is preceded
by O(n)-time preprocessing. Recall that the nodes whose 2-height in T 0 is equal g form disjoint
paths. We can run one round of RoundRobin where each node transmits an arbitrary message.
This way, each node will know whether it is the first node on one of these paths or not. If a node
x is first on some path, say P , x sends a message along this path, so that each node y ∈ P can
compute its distance from x. Then, in the part where all nodes transmit, we replace each step by
two consecutive steps (even and odd), and we use parity to synchronize the computation along these
paths: the nodes at even positions are in the receiving state at even steps and in the transmitting
state at odd steps, and the nodes at odd positions do the opposite.
Summarizing this section, we have presented Algorithm SimpleGather that completes information gathering in ad-hoc radio networks with tree topologies in time O(n√log n). In the next
section, we will show how to improve this bound to O(n log log n).
4 A Protocol with Running Time O(n log log n)
In this section we consider the same basic model of information gathering in trees as in Section 3,
that is, the model does not provide any collision detection and rumor aggregation is not allowed. We show how to refine our O(n√log n) protocol SimpleGather to improve the running time
to O(n log log n). This is the first main result of our paper, as summarized in the theorem below.
Theorem 1. The problem of information gathering on trees, without rumor aggregation, can be
solved in time O(n log log n).
1 It needs to be emphasized here that in our model only communication steps contribute to the running time; all calculations are assumed to be instantaneous.
Our protocol that achieves running time O(n log log n) will be called FastGather. This protocol can be thought of as an iterative application of the idea behind Algorithm SimpleGather
from Section 3. We assume that the reader is familiar with Algorithm SimpleGather and its analysis, and in our presentation we will focus on the high level ideas behind Algorithm FastGather,
referring the reader to Section 3 for the implementation of some details.
As before, we use notation T for the input tree and n = |T | denotes the number of vertices in
T . We assume that n is sufficiently large, and we will establish some needed lower bounds for n
as we work through the proof. We fix some arbitrary integer constant β ≥ 2. For ℓ = 1, 2, ..., let K_ℓ = ⌈n^{β^{−ℓ}}⌉. So K_1 = ⌈n^{1/β}⌉, the sequence (K_ℓ)_ℓ is non-increasing, and lim_{ℓ→∞} K_ℓ = 2. Let L be the largest value of ℓ for which n^{β^{−ℓ}} ≥ log n. (Note that L is well defined for sufficiently large n, since β is fixed). We thus have the following exact and asymptotic bounds:

L ≤ log_β(log n / log log n),   L = Θ(log log n),   K_L ≥ log n,   K_L = Θ(log n).

For ℓ = 1, 2, ..., L, by S̄^ℓ = (S^ℓ_1, S^ℓ_2, ..., S^ℓ_{m_ℓ}) we denote a strong K_ℓ-selector of size m_ℓ ≤ C K_ℓ^2 log n, for some integer constant C. As discussed in Section 2, such selectors S̄^ℓ exist.
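For concreteness, a small Python sketch (ours, using the definitions above; the choice of base-2 logarithm is ours, and the paper assumes n large enough for L to be well defined) that computes the parameters K_ℓ and L for given n and β:

import math

def fastgather_params(n, beta=2):
    """Return ([K_1, ..., K_L], L), where K_l = ceil(n**(beta**-l)) and
    L is the largest l with n**(beta**-l) >= log2(n)."""
    K, l = [], 1
    while n ** (beta ** -l) >= math.log2(n):
        K.append(math.ceil(n ** (beta ** -l)))
        l += 1
    return K, len(K)

K, L = fastgather_params(2 ** 20)      # n = 2^20, so log2 n = 20
# Every K_l is at least log2 n, K_1 is about n^(1/2), and L = Theta(log log n).
assert all(k >= 20 for k in K) and L >= 2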
Let T^{(0)} = T, and for each ℓ = 1, 2, ..., L, let T^{(ℓ)} be the subtree of T induced by the nodes v with |T_v| ≥ n/K_ℓ^3. Each tree T^{(ℓ)} is rooted at r, and T^{(ℓ)} ⊆ T^{(ℓ−1)} for ℓ ≥ 1. For ℓ ≠ 0, the definition of T^{(ℓ)} implies also that it has at most K_ℓ^3 leaves, so, by Lemma 1, its K_{ℓ+1}-height is at most log_{K_{ℓ+1}}(K_ℓ^3). Since K_ℓ ≤ 2n^{β^{−ℓ}} and K_{ℓ+1} ≥ n^{β^{−(ℓ+1)}}, we have

log_{K_{ℓ+1}}(K_ℓ^3) = 3 log_{K_{ℓ+1}}(K_ℓ)
                     ≤ 3 log_{n^{β^{−(ℓ+1)}}}(2 n^{β^{−ℓ}})
                     ≤ 3 log_{n^{β^{−(ℓ+1)}}} n^{β^{−ℓ}} + 3 log_{n^{β^{−(ℓ+1)}}} 2
                     = 3β + 3β^{ℓ+1}/log n
                     ≤ 3β + 3β^{L+1}/log n ≤ 3β + 1,

where the last inequality holds as long as 3β^{L+1} ≤ log n, which is true if n is large enough (since by the first of the "exact and asymptotic bounds" above we have β^L ≤ log n/ log log n). We thus obtain that the K_{ℓ+1}-height of T^{(ℓ)} is at most D = 3β + 1 = O(1).
As in the previous section, we will make some simplifying assumptions. First, we will assume
that all nodes can receive and transmit messages at the same time. Second, we will also assume that
each node v knows the size of its subtree |Tv | and its K` -heights, for each ` ≤ L. After completing
the description of the algorithm we will explain how to modify it so that these assumptions will not
be needed.
Algorithm FastGather consists of L + 1 epochs, numbered 1, 2, ..., L + 1. In each epoch ` ≤ L
only the nodes in T (`−1) − T (`) participate. At the beginning of epoch ` all rumors will be already
collected in T (`−1) , and the purpose of epoch ` is to move all these rumors into T (`) . Each of these
L epochs will run in time O(n), so their total running time will be O(nL) = O(n log log n). In
epoch L + 1, only the nodes in T (L) participate. At the beginning of this epoch all rumors will be
already in T (L) , and when this epoch is complete, all rumors will be collected in r. This epoch will
take time O(n log log n). Thus the total running time will be also O(n log log n). We now provide
the details.
Epochs ` = 1, 2, ..., L. In epoch `, only the nodes in T (`−1) − T (`) are active, all other nodes will
be quiet. The computation in this epoch is very similar to the computation of light nodes (in
epoch 1) in Algorithm SimpleGather. Epoch ` starts at time γ` = (D + 1)(C + 1)(` − 1)n and
lasts (D + 1)(C + 1)n steps.
Let v ∈ T (`−1) − T (`) . The computation of v in epoch ` consists of D + 1 identical stages. Each
stage h = 0, 1, ..., D starts at time step α`,h = γ` + (C + 1)hn and lasts (C + 1)n steps.
Stage h has two parts. The first part starts at time α`,h and lasts time Cn. During this part we
execute dn/K`3 e iterations, each iteration consisting of running the strong K`+1 -selector S̄ ` . The
time needed to execute these iterations is at most
⌈n/K_ℓ^3⌉ · C K_{ℓ+1}^2 log n ≤ Cn · (2 K_{ℓ+1}^2 log n) / K_ℓ^3
                              ≤ Cn · (8 n^{2β^{−(ℓ+1)}} log n) / n^{3β^{−ℓ}}
                              = Cn · (8 log n) / n^{(3−2/β)β^{−ℓ}}
                              ≤ Cn · (8 log n) / n^{(3−2/β)β^{−L}} ≤ Cn,

where the last inequality holds as long as n^{(3−2/β)β^{−L}} ≥ 8 log n, which is again true for sufficiently
large n (recall that β ≥ 2 is constant, and L = Θ(log log n)).
Thus all iterations executing the strong selector will complete before time α`,h + Cn. Then
v stays idle until time α`,h + Cn, which is when the second part starts. In the second part we
run the RoundRobin protocol, which takes n steps. So stage h will complete right before step
α`,h + (C + 1)n = α`,h+1 . Note that the whole epoch will last (D + 1)(C + 1)n steps, as needed.
We claim that the algorithm preserves the following invariant for ` = 1, 2, ..., L:
(I` ) Let w ∈ T and let u ∈ T − T (`−1) be a child of w. Then w will receive all rumors from Tu
before time γ` , that is before epoch ` starts.
For ` = 1, invariant (I1 ) holds vacuously, because T (0) = T . In the inductive step, assume that (I` )
holds for some epoch `. We want to show that (I`+1 ) holds right after epoch ` ends. In other words,
we will show that if w has a child u ∈ T − T (`) then w will receive all rumors from Tu before time
γ`+1 .
So let u ∈ T −T (`) . If u 6∈ T (`−1) then (I`+1 ) holds for u, directly from the inductive assumption.
We can thus assume that u ∈ T (`−1) − T (`) .
The argument now is very similar to that for Algorithm SimpleGather in Section 3, when we
analyzed epoch 1. For each h = 0, 1, ..., D we prove a refined version of condition (I`+1 ):
(I_h^{ℓ+1}) Let w ∈ T and let u ∈ T^{(ℓ−1)} − T^{(ℓ)} be a child of w with height_{K_ℓ}(u, T^{(ℓ−1)}) ≤ h − 1. Then w will receive all rumors from T_u before time α_{ℓ,h}, that is, before stage h.
The proof is the same as for Invariant (Ih ) in Section 3, proceeding by induction on h. For each
fixed h we consider a subtree H rooted at u and consisting of all descendants of u in T (`−1) whose
K` -height is at least h. By the inductive assumption, at the beginning of stage h all rumors from
Tu are already in H. Then, the executions of S̄ ` , followed by the execution of RoundRobin, will
move all rumors from H to w. We omit the details here.
By applying condition (I_h^{ℓ+1}) with h = D, we obtain that after all stages of epoch ℓ are complete, that is, right before time γ_{ℓ+1}, w will receive all rumors from T_u. Thus invariant (I_{ℓ+1}) will hold.
Epoch L + 1. Due to the definition of L, we have that T (L) contains at most KL3 = O(log3 n) leaves,
so its 2-depth is at most D0 = log(KL3 ) = O(log log n), by Lemma 1. The computation in this epoch
is similar to epoch 2 from Algorithm SimpleGather. As before, this epoch consists of D0 + 1
stages, where each stage g = 0, 1, ..., D0 has two parts. In the first part, we have n steps in which
each node transmits. In the second part, also of length n, we run one iteration of RoundRobin.
Let αg0 = γL + 2gn. To prove correctness, we show that the following invariant holds for all
stages g = 0, 1, ..., D0 :
(Jg ) Let w ∈ T (L) and let u ∈ T (L) be a child of w with height2 (u, T (L) ) ≤ g − 1. Then at time αg0
node w has received all rumors from Tu .
The proof is identical to the proof of the analogous Invariant (Jg ) in Section 3, so we omit it here.
Applying Invariant (Jg ) with g = D0 , we conclude that after stage D0 , the root r will receive all
rumors.
As for the running time, we recall that L = O(log log n). Each epoch ` = 1, 2, ..., L has D + 1 =
O(1) stages, where each stage takes time O(n), so the execution of the first L epochs will take time
O(n log log n). Epoch L + 1 has D0 + 1 = O(log log n) stages, each stage consisting of O(n) steps,
so this epoch will complete in time O(n log log n). We thus have that the overall running time of
our protocol is O(n log log n).
It remains to explain that the simplifying assumptions we made at the beginning of this section
are not needed. Computing all subtree sizes and all K` -heights can be done recursively bottom-up,
using the linear-time information gathering algorithm from [5] that uses aggregation. This was
explained in Section 3. The difference now is that each node has to compute L + 1 = O(log log n)
values K` , and, since we limit bookkeeping information in each message up to O(log n) bits, these
values need to be computed separately. Nevertheless, the total pre-computation time will still be
O(n log log n).
Removing the assumption that nodes can receive and transmit at the same time can be done
in the same way as in Section 3. Roughly, in each epoch ` = 1, 2, ..., L, any node v ∈ T (`−1) − T (`)
uses a strong (K` + 1)-selector (instead of a strong K` -selector) to determine whether to be in the
receiving or transmitting state. In epoch L the computation (in the steps when all nodes transmit)
is synchronized by transmitting a control message along induced paths, and then choosing the
receiving or transmitting state according to node parity.
Summarizing, we have proved that the running time of Algorithm FastGather is O(n log log n),
thus completing the proof of Theorem 1.
5 An O(n)-time Protocol with Acknowledgments
In this section we consider a slightly different communication model from that in Sections 3 and 4.
We now assume that acknowledgments of successful transmissions are provided to the sender. All
the remaining nodes, including the intended recipient, cannot distinguish between collisions and
absence of transmissions. The main result of this section, as summarized in the theorem below, is
that with this feature it is possible to achieve the optimal time O(n).
Theorem 2. The problem of information gathering on trees without rumor aggregation can be solved
in time O(n) if acknowledgments are provided.
The overall structure of our O(n) time protocol, called Algorithm LinGather, is similar to
Algorithm SimpleGather. It consists of two epochs. The first epoch does not use the acknowledgement feature, and it is in fact identical to Epoch 1 in Algorithm SimpleGather, except for a
different choice of the parameters. After this epoch, lasting time O(n), all rumors will be collected
in the subtree T 0 consisting of the heavy nodes.
In the second epoch, only the heavy nodes in T 0 will participate in the computation, and the
objective of this epoch is to move all rumors already collected in T 0 to the root r. The key obstacle
to be overcome in this epoch is congestion stemming from the fact that, although T 0 may be small,
its nodes have many rumors to transmit. This congestion means that simply repeatedly applying
k-selectors is no longer enough. For example, if the root has ` children, each with n/` rumors, then
repeating an `-selector n/` times would get all the rumors to the root. However, we know from the
selector size bounds in [8] that this would take total time Ω(n` log n/ log `), which is far too slow.
While in this particular scenario RoundRobin will collect all rumors in the root in time O(n), this
example can be enhanced to show that simple combinations of k-selectors and RoundRobin do
not seem to be sufficient to gather all rumors from T 0 in the root in linear time.
To overcome this obstacle, we introduce two novel tools that will play a critical role in our
algorithm. The first tool is a so-called amortizing selector family. Since a parent, say with ` children,
receives at most one rumor per round, it clearly cannot simultaneously be receiving rumors at an
average rate greater than 1/ℓ from each child individually. With the amortizing family, we will be
able to achieve this bound within a constant fraction over long time intervals, so long as each child
knows (approximately) how many siblings it is competing with.
Of course, such a family will not be useful unless a node can obtain an accurate estimate of its
parent’s degree, which will be the focus of our second tool, k-distinguishers. Using a k-distinguisher
a node can determine whether its number of active siblings ` is at least k or at most 2k. While
this information is not sufficient to determine the exact relation between ` and k, we show how to
combine different k-distinguishers to obtain another structure, called a cardinality estimator, that
will determine the value of ` within a factor of 4. Using this estimate, and the earlier described
amortizing selector, a node can quickly transmit its rumors to its parent. This will allow us to
gather all rumors from T 0 in the root in time O(n).
This section is divided into three parts. In Sections 5.1 and 5.2 we give precise definitions and
constructions of our combinatorial structures. Our O(n)-time protocol LinGather is described
and analyzed in Section 5.3.
5.1 Construction of Amortizing Selectors
We now define the concept of an amortizing selector family S̄. Similarly to a strong selector, this
amortizing family will be a collection of subsets of the underlying label set [n], though now it will
be doubly indexed. Specifically, S̄ = {Sij }, where 1 ≤ i ≤ s and each j ∈ {1, 2, 4, 8, . . . , k}, for some
parameters s and k. We say that S̄ succeeds at cumulative rate q if the following statement is true:
For each j ∈ {1, 2, 4, . . . , k/2}, each subset A ⊆ {1, . . . , n} satisfying j/2 ≤ |A| ≤ 2j, and each element v ∈ A there are at least (q/|A|) · s distinct i for which

v ∈ S_{ij} and A ∩ (S_{i(j/2)} ∪ S_{ij} ∪ S_{i(2j)}) = {v}.

In the case j = 1 the set S_{i(j/2)} is defined to be empty. Here s can be thought of as the total running time of the selector, j as a node's estimate of its parent's degree, and k as some bound on the maximum degree handled by the selector. A node fires at time step i if and only if its index is contained in the set S_{ij}. What the above statement is then saying is that for any subset A of siblings, if |A| is at most k/2 and each child estimates |A| within a factor of 2, then each child will transmit at rate at least q/|A|.
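The cumulative-rate condition can also be phrased as a brute-force check; the Python sketch below (ours, exponential in n and intended only for tiny instances) follows the definition above literally.

from itertools import combinations

def succeeds_at_rate(family, n, k, s, q):
    """Check the 'cumulative rate q' property of a doubly indexed family:
    family[(i, j)] is a set of labels, with 1 <= i <= s and j a power of
    two up to k.  S_{i(j/2)} is treated as empty when j = 1."""
    def sets(i, j):
        return family.get((i, j), set())

    j = 1
    while 2 * j <= k:                      # j ranges over 1, 2, 4, ..., k/2
        lo, hi = max(1, j // 2), 2 * j
        for size in range(lo, hi + 1):
            for A in map(set, combinations(range(n), size)):
                for v in A:
                    good = sum(
                        1 for i in range(1, s + 1)
                        if v in sets(i, j)
                        and A & (sets(i, j // 2) | sets(i, j) | sets(i, 2 * j)) == {v}
                    )
                    if good < q * s / len(A):
                        return False
        j *= 2
    return True

# Tiny hand-made family over n = 2 labels with s = 2 and k = 2.
fam = {(1, 1): {0}, (2, 1): {1}, (1, 2): {0}, (2, 2): {1}}
assert succeeds_at_rate(fam, n=2, k=2, s=2, q=0.5)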
Theorem 3. There are fixed constants c, C > 0 such that the following is true: For any k and
n and any s ≥ Ck 2 log n, there is an amortizing selector with parameters n, k, s succeeding with
cumulative rate c.
Let k, n, s be given such that s ≥ Ck^2 log n, where C is a constant to be determined later. We
form our selector probabilistically: For each v, i, and j, we independently include v in Sij with
probability 2−j .
Observe that by monotonicity it suffices to check the selector property for the case |A| = 2j: if we replace A with a larger set A′ containing A and satisfying |A′| ≤ 4|A|, then for any v ∈ A and any i satisfying

A′ ∩ (S_{i(j/2)} ∪ S_{ij} ∪ S_{i(2j)}) = {v},

we also have

A ∩ (S_{i(j/2)} ∪ S_{ij} ∪ S_{i(2j)}) = {v}.

So if there are at least cs/|A′| distinct i satisfying the first equality, there are at least (c/4)s/|A| satisfying the second equality.
Now fix j ∈ {1, 2, 4, . . . , k}, a set A ⊆ [n] with |A| = 2j and some v ∈ A, and let the random variable X be defined by

X = |{ i : v ∈ S_{ij} and A ∩ (S_{i(j/2)} ∪ S_{ij} ∪ S_{i(2j)}) = {v} }|.

The expected value of X is

µ_X = s · (1/j) · (1 − 2/j)^{2j−1} · (1 − 1/j)^{2j−1} · (1 − 1/(2j))^{2j−1}.
Utilizing the bound

(1 − 2/j)^{2j−1} (1 − 1/j)^{2j−1} (1 − 1/(2j))^{2j−1} ≥ 1/e^7,

we have µ_X ≥ e^{−7} s/j. Now let c = 1/(4e^7).
Applying the Chernoff bound, we get
Pr[X ≤ c·s/|A|] ≤ Pr[X ≤ µ_X/2] ≤ e^{−µ_X/8} ≤ e^{−s/(8e^7 j)}.
We now use the union bound over all choices of j, v, and A. We have at most n choices of v, at most log n choices for j, and at most \binom{n}{2j−1} ≤ n^{2k−1} choices of A given j and v. Thus the probability that our family is not an amortizing selector is at most

n log n · n^{2k−1} · e^{−s/(8e^7 j)} ≤ 2 n^{2k} log n · e^{−Ck log n/(8e^7)},

which is smaller than 1 for sufficiently large C. This implies the existence of the amortizing selector family.
5.2 Construction of k-Distinguishers and Cardinality Estimators
In this section, we define k-distinguishers and show how to construct them. Let S̄ = (S1 , S2 , ..., Sm ),
where Sj ⊆ [n] for each j. For A ⊆ [n] and a ∈ A, define Hitsa,A (S̄) = {j : Sj ∩ A = {a}}, that is
Hitsa,A (S̄) is the collection of indices j for which Sj intersects A exactly on a. Note that, using this
terminology, S̄ is a strong k-selector if and only if Hitsa,A (S̄) 6= ∅ for all sets A ⊆ [n] of cardinality
at most k and all a ∈ A.
We say that S̄ is a k-distinguisher if there is a threshold value ξ (which may depend on k) such
that, for any A ⊆ [n] and a ∈ A, the following conditions hold:
if |A| ≤ k then |Hitsa,A (S̄)| > ξ, and if |A| ≥ 2k then |Hitsa,A (S̄)| < ξ.
We make no assumptions on what happens for |A| ∈ {k + 1, k + 2, ..., 2k − 1}.
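Both Hits_{a,A}(S̄) and the distinguisher property translate directly into code; the Python sketch below (ours, usable only on tiny instances) makes the definition concrete.

from itertools import combinations

def hits(family, A, a):
    """|Hits_{a,A}(family)|: number of sets whose intersection with A is exactly {a}."""
    return sum(1 for S in family if set(S) & set(A) == {a})

def is_k_distinguisher(family, n, k, xi):
    """Brute-force check of the k-distinguisher property with threshold xi:
    hits > xi whenever |A| <= k, and hits < xi whenever |A| >= 2k."""
    for size in list(range(1, k + 1)) + list(range(2 * k, n + 1)):
        for A in combinations(range(n), size):
            for a in A:
                h = hits(family, A, a)
                if size <= k and h <= xi:
                    return False
                if size >= 2 * k and h >= xi:
                    return False
    return True

# Tiny example: with n = 2 and k = 1, the family below gives 2 hits when
# |A| = 1 and only 1 hit when |A| = 2, so xi = 1.5 works as a threshold.
assert is_k_distinguisher([{0}, {1}, {0, 1}], n=2, k=1, xi=1.5)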
The idea is this: consider a fixed a, and imagine that we have some set A that contains a, but
its other elements are not known. Suppose that we also have an access to a hit oracle that for any
set S will tell us whether S ∩ A = {a} or not. With this oracle, we can then use a k-distinguisher S̄
to extract some information about the cardinality of A by calculating the cardinality of Hitsa,A (S̄).
If |Hitsa,A (S̄)| ≤ ξ then we know that |A| > k, and if |Hitsa,A (S̄)| ≥ ξ then we know that |A| < 2k.
What we will soon show, again by a probabilistic construction, is that not-too-large k-distinguishers
exist:
Theorem 4. For any n ≥ 2 and 1 ≤ k ≤ n/2 there exists a k-distinguisher of size m = O(k 2 log n).
In our framework, the acknowledgement of a message received from a parent corresponds exactly
to such a hit oracle. So if all nodes fire according to such a k-distinguisher, each node can determine
in time O(k 2 log n) either that its parent has at least k children or that it has at most 2k children.
Now let λ be a fixed parameter between 0 and 1. For each i = 0, 1, ..., ⌈λ log n⌉, let S̄^i be a 2^i-distinguisher of size O(2^{2i} log n) and with threshold value ξ_i. We can then concatenate these k-distinguishers to obtain a sequence S̃ of size ∑_{i=0}^{⌈λ log n⌉} O(2^{2i} log n) = O(n^{2λ} log n).
We will refer to S̃ as a cardinality estimator, because applying our hit oracle to S̃ we can estimate the cardinality of an unknown set within a factor of 4, making O(n^{2λ} log n) hit queries. More specifically, consider again a scenario where we have a fixed a and some unknown set A containing a, where |A| ≤ n^λ. Using the hit oracle, compute the values h_i = |Hits_{a,A}(S̄^i)|, for all i. If i_0 is the smallest i for which h_i > ξ_i, then by the definition of our distinguishers we must have 2^{i_0 − 1} < |A| < 2 · 2^{i_0}. In our gathering framework, this corresponds to each node in the tree being able to determine in time O(n^{2λ} log n) a value of j (specifically, i_0 − 1) such that the number of children of its parent is between 2^j and 2^{j+2}, which is exactly what we need to be able to run the amortizing selector.
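The way the concatenated distinguishers are queried can be sketched as follows (ours; hit_oracle is a stand-in for the abstract hit oracle described above, which in the protocol is realized through acknowledgements): probe the levels in increasing order and stop at the first level whose hit count exceeds its threshold.

def estimate_cardinality(levels, hit_oracle):
    """levels: list of (family_i, xi_i) pairs, where family_i is a
    2^i-distinguisher with threshold xi_i.  hit_oracle(S) returns True iff
    S intersects the unknown set A exactly in {a}.  Returns j = i0 - 1 for
    the smallest i0 with h_{i0} > xi_{i0}, so that 2^j < |A| < 2^(j+2)."""
    for i, (family, xi) in enumerate(levels):
        h = sum(1 for S in family if hit_oracle(S))   # h_i = |Hits_{a,A}(S^i)|
        if h > xi:
            return i - 1
    raise ValueError("no level exceeded its threshold; |A| is too large")

Each query to hit_oracle corresponds to one selector step plus the acknowledgement (or its absence) observed by the transmitting node.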
It remains to show the existence of k-distinguishers, i.e. to prove Theorem 4. Let m = Ck 2 log n,
where C is some sufficiently large constant whose value we will determine later. We choose the
collection of random sets S̄ = (S1 , S2 , ..., Sm ), by letting each Sj be formed by independently
including each x ∈ [n] in Sj with probability 1/2k. Thus, for any set A and a ∈ A, the probability
that S_j ∩ A = {a} is (1/(2k))(1 − 1/(2k))^{|A|−1}, and the expected value of |Hits_{a,A}(S̄)| is

E[|Hits_{a,A}(S̄)|] = m · (1/(2k)) · (1 − 1/(2k))^{|A|−1}.    (1)
Recall that to be a k-distinguisher our set needs to satisfy (for suitable ξ) the following two
properties:
(d1) if |A| ≤ k then |Hitsa,A (S̄)| > ξ
(d2) if |A| ≥ 2k then |Hitsa,A (S̄)| < ξ.
We claim that, for a suitable value of ξ, the probability that there exists a set A ⊆ [n] and some
a ∈ A for which S̄ does not satisfy both conditions is smaller than 1 (and in fact tends to 0). This will be sufficient to show the existence of a k-distinguisher with threshold value ξ.
Observe that in order to be a k-distinguisher it is sufficient that S̄ satisfies (d1) for sets A with
|A| = k and satisfies (d2) for sets A with |A| = 2k. This is true because the value of |Hitsa,A (S̄)| is
monotone with respect to the inclusion: if a ∈ A ⊆ B then Hitsa,A (S̄) ⊇ Hitsa,B (S̄).
Now consider some fixed a ∈ [n] and two sets A1 , A2 ⊆ [n] such that |A1 | = k, |A2 | = 2k and
a ∈ A1 ∩ A2 . For i = 1, 2, we consider two corresponding random variables Xi = |Hitsa,Ai (S̄)| and
their expected values µi = Exp[Xi ]. For any integer k ≥ 1 we have
1/e^{1/2} ≤ (1 − 1/(2k))^{k−1}   and   1/e ≤ (1 − 1/(2k))^{2k−1} ≤ 1/2.

From (1), substituting m = Ck^2 log n, this gives us the corresponding estimates for µ_1 and µ_2:

(1/(2e^{1/2})) Ck log n ≤ µ_1   and   (1/(2e)) Ck log n ≤ µ_2 ≤ (1/4) Ck log n.
Since e^{−1/2} > 1/2, we can choose an ε ∈ (0, 1) and ξ for which

(1 + ε)µ_2 < ξ < (1 − ε)µ_1.
Thus the probability that S̄ violates (d1) for A = A1 is
Pr[X_1 ≤ ξ] ≤ Pr[X_1 ≤ (1 − ε)µ_1] ≤ e^{−ε^2 µ_1/2} ≤ e^{−ε^2 e^{−1/2} Ck log n/4},
where in the second inequality we use the Chernoff bound for deviations below the mean. Similarly,
using the Chernoff bound for deviations above the mean, we can bound the probability of S̄ violating
(d2) for A = A2 as follows:
Pr[X_2 ≥ ξ] ≤ Pr[X_2 ≥ (1 + ε)µ_2] ≤ e^{−ε^2 µ_2/3} ≤ e^{−ε^2 e^{−1} Ck log n/6}.
To finish off the proof, we apply the union bound. We have at most n choices for a, at most \binom{n}{k−1} ≤ n^{k−1} choices of A_1, and at most \binom{n}{2k−1} ≤ n^{2k−1} choices of A_2. Note also that e^{−1/2}/4 > e^{−1}/6. Putting it all together, the probability that S̄ is not a k-distinguisher is at most

n · \binom{n}{k−1} · Pr[X_1 ≤ ξ] + n · \binom{n}{2k−1} · Pr[X_2 ≥ ξ] ≤ n^{2k} · (Pr[X_1 ≤ ξ] + Pr[X_2 ≥ ξ])
    ≤ 2 n^{2k} · e^{−ε^2 e^{−1} Ck log n/6}
    = 2 n^{k(2 − ε^2 e^{−1} C/6)} < 1,

for C large enough.
5.3 Linear-time Protocol
As before, T is the input tree with n nodes. We will recycle the notions of light and heavy nodes
from Section 3, although now we will use slightly different parameters. Let δ > 0 be a small constant,
and let K = dnδ e. We say that v ∈ T is light if |Tv | ≤ n/K 3 and we call v heavy otherwise. By T 0
we denote the subtree of T induced by the heavy nodes.
Algorithm LinGather. Our algorithm will consist of two epochs. The first epoch is essentially
identical to Epoch 1 in Algorithm SimpleGather, except for a different choice of the parameters.
The objective of this epoch is to collect all rumors in T 0 in time O(n). In the second epoch, only the
heavy nodes in T 0 will participate in the computation, and the objective of this epoch is to gather
all rumors from T 0 in the root r. This epoch is quite different from our earlier algorithms and it
will use the combinatorial structures obtained in the previous sections to move all rumors from T 0
to r in time O(n).
Epoch 1: In this epoch only light nodes will participate, and the objective of Epoch 1 is to move all
rumors into T 0 . In this epoch we will not be taking advantage of the acknowledgement mechanism.
As mentioned earlier, except for different choices of parameters, this epoch is essentially identical to
Epoch 1 of Algorithm SimpleGather, so we only give a very brief overview here. We use a strong
K-selector S̄ of size m ≤ CK 2 log n.
Let D = dlogK ne ≤ 1/δ = O(1). By Lemma 1, the K-depth of T is at most D. Epoch
1 consists of D + 1 stages, where in each stage h = 0, 1, ..., D, nodes of K-depth h participate.
Stage h consists of n/K 3 executions of S̄, followed by an execution of RoundRobin, taking total
time n/K 3 · m + n = O(n). So the entire epoch takes time (D + 1) · O(n) = O(n) as well.
The proof of correctness (namely that after this epoch all rumors are in T 0 ) is identical as for
Algorithm SimpleGather.
Epoch 2: When this epoch starts, all rumors are already gathered in T 0 , and the objective is to
push them further to the root.
For the second epoch, we restrict our attention to the tree T 0 of heavy nodes. As before, no
parent in this tree can have more than K 3 = n3δ children, since each child is itself the ancestor of
a subtree of size n/K 3 . We will further assume the existence of a fixed amortizing selector family
with parameters k = 2K^3 and s = K^8, as well as a fixed cardinality estimator with parameter λ = 3δ running in time D_1 = O(n^{6δ} log n) = O(K^6 log n).
Our protocol will be divided into stages, each consisting of 2(D1 + K⁸) steps. A node will be
active in a given stage if at the beginning of the stage it has already received all of its rumors, but
still has at least one rumor left to transmit (it is possible for a node to never become active, if it
receives its last rumor and then finishes transmitting before the beginning of the next stage).

During each odd-numbered time step of a stage, all nodes (active or not) holding at least one
rumor they have not yet successfully passed on transmit such a rumor. The even-numbered time
steps are themselves divided into two parts. In the first D1 even steps, all active nodes participate
in the aforementioned cardinality estimator. At the conclusion of the estimator, each node knows a
j such that its parent has between 2^j and 2^{j+2} active children. Note that active siblings do not
necessarily have the same estimate for their family size. For the remainder of the even steps, each
active node fires using the corresponding 2^{j+1}-selector from the amortizing family.
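To make the bookkeeping of a stage easier to follow, here is a small illustrative Python sketch (the helper below is ours, not part of the protocol's formal description) that labels how the 2(D1 + K⁸) time steps of one Epoch-2 stage are split between transmissions, the cardinality estimator, and the amortizing-selector executions:

```python
def stage_schedule(D1, K):
    """Label each of the 2*(D1 + K**8) time steps of one Epoch-2 stage.

    Odd steps: every node still holding an unacknowledged rumor transmits.
    Even steps: the first D1 of them run the cardinality estimator,
    the remaining K**8 run the amortizing selector.
    """
    length = 2 * (D1 + K**8)
    schedule = []
    even_seen = 0
    for t in range(1, length + 1):
        if t % 2 == 1:
            schedule.append((t, "transmit"))
        else:
            even_seen += 1
            kind = "estimator" if even_seen <= D1 else "selector"
            schedule.append((t, kind))
    return schedule

# Tiny illustrative parameters (real runs use D1 = O(K^6 log n), K = ceil(n^delta)).
for t, kind in stage_schedule(D1=3, K=1)[:10]:
    print(t, kind)
```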
The stages repeat until all rumors have reached the root. Our key claim, stated below, is that
the rumors aggregate at least at a steady rate over time – each node with subtree size m in the
original tree T will have received all m rumors within O(m) steps of the start of the epoch.
(I) For any heavy node v such that v has subtree size m in T, and any 0 ≤ j ≤ m, that node
has received at least j rumors within time C(2m + j) of the beginning of Epoch 2, where C
is some sufficiently large absolute constant.

In particular, the root has received all of the rumors by time 3Cn.
We will show this invariant holds by induction on the node's height within T′. If the node is a
leaf of T′, the statement follows from our analysis of Epoch 1 (the node has received all rumors from its
subtree by the beginning of Epoch 2).

Now assume that a node u with subtree size m + 1 has k children within T′, and that those
children have subtree sizes a1 ≥ a2 ≥ ··· ≥ ak ≥ K³. Node u may also have received some number
a0 of messages from non-heavy children (these children, if any, will have already completed their
transmissions during the previous epoch).

Let v be a child having subtree size a1 (chosen arbitrarily, if there are two children with maximal
subtree size). Let t2 be defined by

t2 = 3Ca2 + (3/c)(a2 + ··· + ak) + K¹²   if k > 1,
t2 = 0                                    if k = 1.
We make the following additional claims.
Claim 2: By time t2 , all children except possibly v will have completed all of their transmissions.
Proof. By inductive hypothesis, all children except v will have received all of their rumors by time
3Ca2 . During each stage from that point until all non-v nodes complete, one of the following things
must be true.
• Some active node transmits the final rumor it is holding, and stops transmitting. This can
happen during at most K³ stages, since there are at most K³ children and each child only
becomes active once it already has received all rumors from its subtree.
• All active nodes have rumors to transmit throughout the entire stage. If there were j active
nodes total during the stage, then by the definition of our amortizing selector family, the
parent received at least c · 2K⁸/j rumors from each child during the stage. In particular, it must
have received at least c(j − 1) · 2K⁸/j ≥ cK⁸ new rumors from children other than v.

Combining the two types of stages, the non-v children will have all finished in at most

K³ + (1/(cK⁸)) (a2 + ··· + ak)

complete stages after time 3Ca2. Since each stage takes time 2(D1 + K⁸) = (2 + o(1))K⁸, the bound
follows.
Claim 3: Let k, m, and v be as above. By time 2Cm, all children except possibly v have completed
their transmissions.

Proof. This is trivial for k = 1. For larger k it follows from the previous claim, together with the
estimate that (for sufficiently large C)

2Cm ≥ 2C(a1 + ··· + ak) ≥ 4Ca2 + 2C(a3 + ··· + ak) ≥ (4/3)(t2 − K¹²) ≥ (5/4) t2 .

Here the middle inequality holds for any C > 2/c, while the latter inequality holds since t2 ≥ a2 ≥
n/K³ ≫ K¹².
By the above claim, node v is the only node that could still be transmitting at time 2Cm.
In particular, if it has a rumor during an odd-numbered time step after this point, it successfully
transmits. By the inductive hypothesis, v will have received at least j rumors by time C(2m + j) for each j.
This implies it will successfully broadcast j rumors by time C(2m + j) + 2 for each 0 ≤ j ≤ a1.
By time C[2(m + 1) + j], the parent has either received all rumors from all of its children (if
j > a1), or at least j rumors from v alone (if j ≤ a1). Either way, it has received at least j rumors
total, and the induction is complete.
| 7 |
Gayaz Khakimzyanov (Institute of Computational Technologies, Novosibirsk, Russia)
Denys Dutykh∗ (CNRS–LAMA, Université Savoie Mont Blanc, France)
Oleg Gusev (Institute of Computational Technologies, Novosibirsk, Russia)
Nina Yu. Shokina (Institute of Computational Technologies, Novosibirsk, Russia)

arXiv:1707.01301v1 [physics.flu-dyn] 5 Jul 2017
arXiv.org / hal
Last modified: July 7, 2017

Dispersive shallow water wave modelling. Part II: Numerical simulation on a globally flat space

Abstract. In this paper we describe a numerical method to solve the weakly dispersive, fully nonlinear Serre–Green–Naghdi (SGN) model. Namely, our scheme is based on reliable finite volume methods, proven to be very efficient for the hyperbolic part of the equations. The particularity of our study is that we develop an adaptive numerical model using moving grids. Moreover, we use a special form of the SGN equations in which the non-hydrostatic part of the pressure is found by solving a nonlinear elliptic equation. This form of the governing equations also allows us to determine the natural form of boundary conditions needed to obtain a well-posed (numerical) problem.

Key words and phrases: nonlinear dispersive waves; non-hydrostatic pressure; moving adaptive grids; finite volumes; conservative finite differences

MSC: [2010] 76B15 (primary), 76M12, 65N08, 65N06 (secondary)
PACS: [2010] 47.35.Bb (primary), 47.35.Fg (secondary)

∗ Corresponding author.
Contents

1 Introduction
2 Mathematical model
  2.1 Well-posedness conditions
      Linear waves
  2.2 Conservative form of equations
      Intermediate conclusions
  2.3 One-dimensional case
      Boundary conditions on the elliptic part
      Unicity of the elliptic equation solution
  2.4 Vector short-hand notation
      Flat bottom
  2.5 Linear dispersion relation
3 Numerical method
  3.1 Adaptive mesh construction
      Initial grid generation
      Grid motion
  3.2 The SGN equations on a moving grid
  3.3 Predictor–corrector scheme on moving grids
      Predictor step
      Corrector step
      Well-balanced property
  3.4 Numerical scheme for linearized equations
      Linear stability of the scheme
      Discrete dispersion relation
4 Numerical results
  4.1 Solitary wave propagation over the flat bottom
      Uniform grid
      Adaptive grid
  4.2 Solitary wave/wall interaction
      Wave action on the wall
  4.3 Solitary wave/bottom step interaction
  4.4 Wave generation by an underwater landslide
5 Discussion
  5.1 Conclusions
  5.2 Perspectives
Acknowledgments
A Derivation of the non-hydrostatic pressure equation
B Acronyms
References
1. Introduction
In 1967 D. Peregrine derived the first two-dimensional Boussinesq-type system of
equations [117]. This model described the propagation of long weakly nonlinear waves
over a general non-flat bottom. From this landmark study the modern era of long wave
modelling started. On one hand, researchers focused on the development of new models; in
parallel, the corresponding numerical algorithms were developed. We refer to [20] for a recent
'reasoned' review of this topic.

The present manuscript is the continuation of our series of papers devoted to long
wave modelling. In the first part of this series we derived the so-called base model [92],
which encompasses a number of previously known models (but, of course, not all nonlinear dispersive systems). The governing equations of the base model are
Ht + ∇ · [ H U ] = 0 ,    (1.1)

ūt + (ū · ∇)ū + ∇P/H = (p̌/H) ∇h − (1/H) [ (H𝒰)t + (ū · ∇)(H𝒰) + H (𝒰 · ∇)ū + H𝒰 (∇ · ū) ] ,    (1.2)
where U := ū + 𝒰 is the modified horizontal velocity and 𝒰 = 𝒰(H, ū) is the
closure relation to be specified later. Depending on the choice of this variable, various
models can be obtained (see [92, Section §2.4]). The variables P and p̌ are related to the
fluid pressure; their physical meaning is recalled below in Section 2. In
the present paper we propose an adaptive numerical discretization for a particular, but
nowadays very popular, model which can be obtained from the base model (1.1), (1.2).
Namely, if we choose 𝒰 ≡ 0 (so that U becomes the depth-averaged velocity ū) then we
obtain equations equivalent to the celebrated Serre–Green–Naghdi (SGN) equations
[72, 126, 127] (rediscovered later independently by many other researchers). This system
will be the main topic of our numerical study. Most often, adaptive techniques for dispersive
wave equations involve the so-called Adaptive Mesh Refinement (AMR) [121] (see also [15]
for nonlinear shallow water equations). The particularity of our study is that we conserve
the total number of grid points and the adaptivity is achieved by judiciously redistributing
them in space [83, 84]. The idea of redistributing grid nodes stems from the works
of Bakhvalov [7], Il'in [85] and others [1, 134].
The base model (1.1), (1.2) admits an elegant conservative form [92]:

Ht + ∇ · [ H U ] = 0 ,    (1.3)

(H U)t + ∇ · [ H ū ⊗ U + P(H, ū) · I + H 𝒰 ⊗ ū ] = p̌ ∇h ,    (1.4)

where I ∈ Mat 2×2 (R) is the identity matrix and the operator ⊗ denotes the tensorial
product. We note that the pressure function P(H, ū) incorporates the familiar hydrostatic
pressure part g H²/2, well-known from the Nonlinear Shallow Water Equations (NSWE)
[11, 43]. By setting 𝒰 ≡ 0 we readily obtain from (1.3), (1.4) the conservative form of
the SGN equations (one can notice that the mass conservation equation (1.1) was already
in conservative form).
Nonlinear dispersive wave equations present certain numerical difficulties since they
involve mixed derivatives in space and time (usually of the horizontal velocity variable, but sometimes of the
total water depth as well). These derivatives have to be approximated
numerically, which leaves a lot of room for creativity. Most often the so-called Method
Of Lines (MOL) is employed [97, 120, 123, 128], where the spatial derivatives are discretized
first and the resulting system of coupled Ordinary Differential Equations (ODEs) is then
treated with more or less standard ODE techniques, see e.g. [76, 77]. The MOL
separates the choice of time discretization from the procedure of discretization in space,
even if the interplay between the two schemes might be important. For example, it would be
natural to choose the same order of accuracy for both schemes.
Let us review the available spatial discretization techniques employed in recent numerical studies. We focus essentially on fully nonlinear weakly dispersive models, even if
some interesting works devoted to Boussinesq-type and unidirectional equations will be
mentioned. First of all, dispersive wave equations with the dispersion relation given by a
rational function (à la BBM [14, 116]) usually involve the inversion of an elliptic operator. This gives the first idea of employing the splitting technique between the hyperbolic
and elliptic operators. This idea was successfully realized in e.g. [8, 9, 18, 82]. Historically, perhaps the finite difference techniques were applied first to dispersive (and more
general non-hydrostatic) wave equations [24, 35–37, 108, 109, 143, 147]. Then, naturally
we arrive at the development of continuous Galerkin/Finite Element type discretizations
[2, 17, 45, 47, 114, 131, 139]. See also a recent review [46] and references therein. Pseudospectral Fourier-type methods can also be successfully applied to the SGN equations [52].
See [62] for a pedagogical review of pseudo-spectral and radial basis function methods for
some shallow water equations. More recently, the finite volume type methods were applied
to dispersive equations [30, 52, 57, 58, 89, 100]. In the present study we also employ a
predictor–corrector finite volume type scheme [129], which is described in detail below.
The present article is organized as follows. In Section 2 we present the governing equations in 2D and 1D spatial dimensions. The numerical method is described in Section 3.
Several numerical illustrations are shown in Section 4 including the solitary wave/wall
or bottom interactions and even a realistic underwater landslide simulation. Finally, in
Section 5 we outline the main conclusions and perspectives of the present study. In Appendix A we provide some details on the analytical derivations used in this manuscript.
2. Mathematical model
In this study we consider the following system of the Serre–Green–Naghdi (SGN)
equations, which describes the incompressible homogeneous fluid flow in a layer bounded
from below by the impermeable bottom y = −h(x, t) and above by the free surface
Figure 1. Sketch of the fluid domain in 2D (coordinate axes x and y with origin O; free surface η(x, t); bottom h(x, t); total water depth H(x, t); horizontal velocity u(x, t); wave amplitude α; still-water depth d; domain length l).

y = η(x, t), x = (x1, x2) ∈ R²:
Ht + ∇ · [ H u ] = 0 ,    (2.1)

ut + (u · ∇)u + ∇P/H = (p̌/H) ∇h ,    (2.2)

where for simplicity we drop the bars over the horizontal velocity variable u(x, t) =
(u1(x, t), u2(x, t)). The function H(x, t) := h(x, t) + η(x, t) is the total water depth.
The sketch of the fluid domain is schematically depicted in Figure 1. For the derivation
of equations (2.1), (2.2) we refer to the first part of the present series of papers [92]. The
depth-integrated pressure P(u, H) is defined as

P(u, H) := g H²/2 − ℘(x, t) ,

where ℘(x, t) is the non-hydrostatic part of the pressure:

℘(x, t) := (H³/3) R1 + (H²/2) R2 ,    (2.3)

with

R1 := D(∇ · u) − (∇ · u)² ,    R2 := D²h ,    D := ∂t + u · ∇ .

Above, D is the total or material derivative operator. On the right-hand side of equation
(2.2) we have the pressure trace at the bottom p̌ := p|y = −h, which can be written as

p̌(x, t) = g H − ̺(x, t) ,

where ̺(x, t) is again the non-hydrostatic pressure contribution:

̺(x, t) := (H²/2) R1 + H R2 .    (2.4)
Equations above are much more complex comparing to the classical NSWE (or SaintVenant equations) [43], since they contain mixed derivatives up to the third order. From
the numerical perspective these derivatives have to be approximated. However, the problem can be simplified if we ‘extract’ a second order sub-problem for the non-hydrostatic
component of the pressure. Indeed, it can be shown (see Appendix A) that function
℘(x, t) satisfies the following second order linear elliptic equation with variable coefficients (by analogy with incompressible Navier–Stokes equations, where the pressure is
found numerically by solving a Poisson-type problem [33, 79]):
∇h
2 Υ − 3
(∇℘ · ∇h) ∇h
∇℘
℘ = F , (2.5)
− 6
−
·
+ ∇·
∇·
H
HΥ
H3
Υ
H2 Υ
{z
}
|
(⋆)
def
where Υ := 4 + | ∇h |2 and F, R are defined as
R ∇h
6R
def
u
u
F := ∇ · g ∇η +
−
+ 2 (∇ · u) 2 − 2 1 x1 1 x2 ,
Υ
HΥ
u 2 x1 u 2 x2
def
R := −g ∇η · ∇h + (u · ∇)∇h · u + htt + 2 u · ∇ht .
(2.6)
(2.7)
Symbol | · | in (2.6) denotes the determinant of a 2 × 2 matrix.
Equation (2.5) is uniformly elliptic and it does not contain time derivatives of the fluid
velocity u. If the coefficient (⋆) is positive (for instance, it is the case for a flat bottom
h(x, t) ≡ const), we deal with a positive operator and stable robust discretizations can
be proposed. Taking into account the fact that equation (2.5) is linear with respect to
the variable ℘(x, t), its discrete counterpart can be solved by direct or iterative methods∗.
Well-posedness of this equation is discussed below (see Section 2.1). The boundary conditions for equation (2.5) will be discussed below in Section 2.3.1 (in 1D case only, the
generalization to 2D is done by projecting on the normal direction to the boundary).
Introduction of the variable ℘(x, t) allows us to rewrite equation (2.2) in the following
equivalent form:

ut + (u · ∇)u + g ∇H = g ∇h + ( ∇℘ − ̺ ∇h ) / H .    (2.8)

The non-hydrostatic pressure at the bottom ̺(x, t) can be expressed through ℘ in the
following way:

̺(x, t) = (1/Υ) [ 6℘/H + H R + ℘ ∇ · ∇h ] .    (2.9)

The derivation of equation (2.9) is also given in Appendix A. So, thanks to relation (2.9),
the usage of equation (2.4) is not necessary anymore: once we have found the
function ℘(x, t), we can compute the bottom contribution from (2.9).
∗ In our implementation we use the direct Thomas algorithm, since in 1D the resulting linear system of equations is tridiagonal with a dominant diagonal.
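Since each of the discrete elliptic problems encountered below reduces to a tridiagonal linear system with a dominant diagonal, it can be solved by the Thomas algorithm in O(N) operations. The following is a minimal generic sketch of such a solver in Python (our illustration, not the implementation used for the results of this paper):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (length n-1), diagonal b
    (length n), super-diagonal c (length n-1) and right-hand side d (length n)."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check on a diagonally dominant system of the kind produced here.
n = 6
a = -np.ones(n - 1); c = -np.ones(n - 1); b = 4.0 * np.ones(n)
d = np.arange(1.0, n + 1)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(A @ x, d))   # expected: True
```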
Remark 1. It can be easily seen that taking formally the limit ℘ → 0 and ̺ → 0 of
vanishing non-hydrostatic pressures, allows us to recover the classical NSWE (or SaintVenant equations) [43]. Thus, the governing equations verify the Bohr correspondence
principle [16].
2.1. Well-posedness conditions
In order to obtain a well-posed elliptic problem (2.5), one has to ensure that coefficient
(⋆) is positive. This coefficient involves the bathymetry function h(x, t) and the total
water depth H(x, t). In other words, the answer depends on local depth and wave elevation.
It is not excluded that for some wave conditions the coefficient (⋆) may become negative.
In the most general case the positivity condition is trivial and, thus, not very helpful, i.e.
∇h
2 Υ − 3
(⋆) ≡
> 0.
(2.10)
·
+ ∇·
H3
Υ
H2 Υ
On the flat bottom h(x, t) → d = const we know that the above condition is satisfied,
since Υ → 4 and (⋆) → 1/(2H³) > 0. Consequently, by continuity of the coefficient (⋆)
we conclude that the same property will hold for some (sufficiently small) variations of the
depth h(x, t), i.e. |∇h| ≪ 1. In practice it can be verified that bathymetry variations
can even be finite so that condition (2.10) still holds.
Remark 2. It may appear that restrictions on the bathymetry variations are inherent to
our formulation only. However, it is the case of all long wave models, even if this assumption does not appear explicitly in the derivation. For instance, bottom irregularities will
inevitably generate short waves ( i.e. higher frequencies) during the wave propagation process. A priori, this part of the spectrum is not modeled correctly by approximate equations,
unless some special care is taken.
2.1.1 Linear waves
Let us take the limit of linear waves η → 0 in expression (⋆). It will become then
∇h
2 Υ − 3
def
⋆ ) :=
.
·
+
∇
·
(⋆) → (⋆
h3
Υ
h2 Υ
⋆ ) then takes the following form:
The positivity∗ condition of (⋆
2 Υ + h hx1 x1 1 − h2x1 + h2x2 + hx2 x2 1 + h2x1 − h2x2 − 4 hx1 hx2 hx1 x2 > 0 .
If we restrict our attention to one-dimensional bathymetries (i.e. hx2 → 0), then we
obtain an even simpler condition:

hxx > −(2/h) · (1 + hx²)/(1 − hx²) ,
∗ Non-negativity, to be more precise.
where by x we denote x1 for simplicity. The last condition can be easily checked at the
problem outset. A further simplification is possible if we additionally assume that

|∇h| ≡ |hx| < 1 ,    (2.11)

then we have the following elegant condition:

hxx > −2/h .    (2.12)
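As a small illustration of how conditions (2.11) and (2.12) can be verified for a sampled bathymetry before a computation is launched, one may evaluate the discrete slope and curvature numerically (a rough Python sketch with made-up example profiles):

```python
import numpy as np

def check_wellposedness(h, dx):
    """Check the simplified condition (2.12), h_xx > -2/h, together with the
    slope assumption (2.11), |h_x| < 1, for a sampled bathymetry h(x)."""
    hx = np.gradient(h, dx)
    hxx = np.gradient(hx, dx)
    return np.all(np.abs(hx) < 1.0) and np.all(hxx > -2.0 / h)

x = np.linspace(0.0, 10.0, 401)
dx = x[1] - x[0]
smooth_step = 1.0 - 0.3 * np.tanh(x - 5.0)         # gently varying depth
sharp_step = 1.0 - 0.3 * np.tanh(10.0 * (x - 5.0)) # steep depth variation
print(check_wellposedness(smooth_step, dx))  # expected: True
print(check_wellposedness(sharp_step, dx))   # expected: False (slope/curvature too large)
```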
2.2. Conservative form of equations
Equations (2.1), (2.2) admit an elegant conservative form, which is suitable for numerical
simulations:

Ht + ∇ · [ H u ] = 0 ,    (2.13)

(H u)t + ∇ · F = g H ∇h + ∇℘ − ̺ ∇h ,    (2.14)

where the flux matrix F(H, u) is the same as in the NSWE (or Saint-Venant equations):

F(H, u) := [[ H u1² + g H²/2 ,  H u1 u2 ],
            [ H u1 u2 ,  H u2² + g H²/2 ]] .

Notice that it is slightly different from the (fully-)conservative form given in Part I [92].
The conservative equations∗ (2.13), (2.14) can be supplemented by the energy conservation
equation, which can be used to check the accuracy of a simulation (in the conservative case, i.e.
ht ≡ 0) and/or to estimate the energy of generated waves [54]:

(H E)t + ∇ · [ H u ( E + P/H ) ] = −p̌ ht ,    (2.15)

where the total energy E is defined as

E := ½ |u|² + ⅙ H² (∇ · u)² + ½ H (Dh)(∇ · u) + ½ (Dh)² + (g/2)(H − 2h) .

Notice that equation (2.15) is not independent. It is a differential consequence of the mass
and momentum conservation equations (2.13), (2.14) (as is the case for incompressible flows in
general).
2.2.1 Intermediate conclusions
As a result, the system of nonlinear dispersive equations (2.1), (2.2) was split in two
main parts:
(1) Governing equations (2.13), (2.14) in the form of (hyperbolic) balance laws with
source terms
∗It
is not difficult to see that the mass conservation equation (2.1) is already in a conservative form in
the SGN model. Thus, equations (2.1) and (2.13) are obviously identical.
(2) A scalar nonlinear elliptic equation to determine the non-hydrostatic part of the
pressure ℘(x, t) (and consequently ̺(x, t) as well)
This splitting idea will be exploited below in the numerical algorithm in order to apply the
most suitable and robust algorithm for each part of the solution process [9].
2.3. One-dimensional case
In this study, for the sake of simplicity, we focus on the two-dimensional physical problem,
i.e. one horizontal and one vertical dimension. The vertical flow structure is resolved
using the asymptotic expansion (see Part I [92] of this series of papers), so we deal with
PDEs involving one spatial (horizontal) dimension (x := x1) and one temporal variable
t ∈ R⁺. The horizontal velocity variable u(x, t) becomes a scalar function in this case.
Below we provide the full set of governing equations (which follow directly from (2.13),
(2.14) and (2.5)):

Ht + [ H u ]x = 0 ,    (2.16)

(H u)t + [ H u² + g H²/2 ]x = g H hx + ℘x − ̺ hx ,    (2.17)

[ 4 ℘x/(H Υ) ]x − 6 [ 2(Υ − 3)/(H³ Υ) + ( hx/(H² Υ) )x ] ℘ = F ,    (2.18)

where Υ := 4 + hx² and

F := [ g ηx + R hx/Υ ]x − 6R/(H Υ) + 2 ux² ,

R := −g ηx hx + u² hxx + htt + 2 u hxt .

The last equations can be obtained directly from the corresponding two-dimensional versions
given in (2.6), (2.7). This set of equations will be solved numerically below (see Section 3).
2.3.1 Boundary conditions on the elliptic part
First, we rewrite elliptic equation (2.18) in the following equivalent form:
K ℘x x − K0 ℘ = F ,
where
(2.19)
h 2 Υ − 3
h i
4
def
x
.
,
K0 := 6
+
3
2
HΥ
H
Υ
H Υ x
We assume that we have to solve an initial-boundary value problem for the system (2.16)–
(2.18). If we have a closed numerical wave tank∗ (as it is always the case in laboratory
def
K :=
∗Other
possibilities have to be discussed separately.
12 / 66
G. Khakimzyanov, D. Dutykh, et al.
experiments), then on vertical walls the horizontal velocity satisfies:
∀t ∈ R+ .
u(x, t) |x = 0 = u(x, t) |x = ℓ ≡ 0,
For the situation where the same boundary condition holds on both boundaries, we introduce a short-hand notation:
=ℓ
u(x, t) |xx =
0 ≡ 0,
∀t ∈ R+ .
Assuming that equation (2.8) is valid up to the boundaries, we obtain the following boundary conditions for the elliptic equation (2.18):
x=ℓ
℘ − ̺h
x
x
− g ηx
= 0,
∀t ∈ R+ .
H
x=0
Or in terms of equation (2.19) we equivalently have:
x=ℓ
R hx
6 hx ℘
=
g
η
+
K℘ x −
x
H 2Υ
Υ
x=0
x=ℓ
,
x=0
∀t ∈ R+ .
(2.20)
The boundary conditions for the non-hydrostatic pressure component ℘ are of the 3rd
kind (sometimes they are referred to as of Robin-type). For the case where locally at the
=ℓ
boundaries the bottom is flat (to the first order), i.e. hx |xx =
0 ≡ 0, then we have the
nd
(non-homogeneous) Neumann boundary condition of the 2 kind:
=ℓ
K℘x |x = 0 = g ηx |xx =
0 ,
x=ℓ
∀t ∈ R+ .
For a classical Poisson-type equation this condition would not be enough to have a wellposed problem. However, we deal rather with a Helmholtz-type equation (if K0 > 0).
So, the flat bottom does not represent any additional difficulty for us and the unicity of
the solution can be shown in this case as well.
2.3.2 Unicity of the elliptic equation solution
The mathematical structure of equation (2.19) is very advantageous since it allows to
show the following
Theorem 1. Suppose that the Boundary Value Problem (BVP) (2.20) for equation (2.19)
admits a solution and the following conditions are satisfied:
K0 > 0,
then this solution is unique.
hx |x = 0 > 0,
hx |x = ℓ 6 0 ,
(2.21)
Proof. Assume that there are two such solutions ℘1 and ℘2 . Then, their difference
def
℘ :=
℘1 − ℘2 satisfies the following homogeneous BVP:
K ℘x x − K0 ℘ = 0 ,
(2.22)
x=ℓ
6 hx ℘
℘
= 0.
(2.23)
K x −
H 2Υ
x=0
13 / 66
Dispersive shallow water wave modelling. Part II
Let us multiply the first equation (2.22) by ℘ and integrate over the computational domain:
ˆ ℓ
ˆ ℓ
2
K ℘x x ℘ dx −
K0 ℘ dx = 0 .
0
|0
{z
}
()
Integration by parts of the first integral () yields:
ˆ ℓ
ˆ ℓ
x=ℓ
2
2
K ℘x ℘|
− K ℘x ℘|x = 0 −
K ℘x dx −
K0 ℘ dx = 0 .
0
0
And using boundary conditions (2.23) we finally obtain:
ˆ ℓ
ˆ ℓ
x=ℓ
6 hx
6 hx
2
2
2
2
℘
℘
−
−
K ℘x dx −
K0 ℘ dx = 0 .
H 2Υ
H 2Υ
0
0
x=0
Taking into account this Theorem assumptions (2.21) and the fact that K > 0, the last
identity leads to a contradiction, since the left hand side is strictly negative. Consequently,
the solution to equation (2.19) with boundary condition (2.20) is unique.
Remark 3. Conditions in Theorem 1 are quite natural. The non-negativity of coefficient
K0 has already been discussed in Section 2.1. Two other conditions mean that the water
depth is increasing in the offshore direction (hx |x = 0 > 0) and again it is decreasing
( hx |x = ℓ 6 0) when we approach the opposite shore.
2.4. Vector short-hand notation
For the sake of convenience we shall rewrite the governing equations (2.16), (2.17) in the
following vectorial form:

v_t + [ F(v) ]_x = G(v, ℘, ̺, h) ,    (2.24)

where we introduced the following vector-valued functions:

v := ( H, H u )ᵀ ,    F(v) := ( H u, H u² + g H²/2 )ᵀ ,

and the source term is defined as

G(v, ℘, ̺, h) := ( 0, g H hx + ℘x − ̺ hx )ᵀ .

The point of view that we adopt in this study is to regard the SGN equations as a system of
hyperbolic equations (2.24) with source terms G(v, ℘x, ̺, h). Obviously, one also has to solve
the elliptic equation (2.18) in order to compute the source term G.

The Jacobian matrix of the advection operator coincides with that of the classical NSWE:

A(v) := dF(v)/dv = [[ 0 , 1 ], [ −u² + g H , 2u ]] .

The eigenvalues of the Jacobian matrix A(v) can be readily computed:

λ− = u − s ,    λ+ = u + s ,    s := √(g H) .    (2.25)

The Jacobian matrix appears naturally in the non-divergent form of equations (2.24):

v_t + A(v) · v_x = G ,    (2.26)

By multiplying both sides of the last equation by A(v) we obtain the equation for the
advection flux function F(v):

F_t + A(v) · F_x = A · G .    (2.27)

In order to study the characteristic form of the equations one also needs the matrices
of left and right eigenvectors, respectively:

L := (1/2) [[ −λ+ , 1 ], [ −λ− , 1 ]] ,    R := (1/s) [[ −1 , 1 ], [ −λ− , λ+ ]] .    (2.28)

If we introduce also the diagonal matrix of eigenvalues

Λ := [[ λ− , 0 ], [ 0 , λ+ ]] ,

the following relations can be easily checked:

R · Λ · L ≡ A ,    R · L = L · R ≡ I ,

where I is the identity 2 × 2 matrix.
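The spectral decomposition (2.25)-(2.28) is easy to verify numerically; the short Python check below (with an arbitrary sample state H and u, chosen only for illustration) confirms the identities R · Λ · L = A and R · L = I for the matrices written above:

```python
import numpy as np

g, H, u = 9.81, 2.0, 0.5              # sample state (illustrative values)
s = np.sqrt(g * H)
lm, lp = u - s, u + s                 # eigenvalues (2.25)

A = np.array([[0.0, 1.0],
              [-u**2 + g * H, 2.0 * u]])          # Jacobian of the NSWE flux
L = 0.5 * np.array([[-lp, 1.0],
                    [-lm, 1.0]])                   # left eigenvectors, cf. (2.28)
R = (1.0 / s) * np.array([[-1.0, 1.0],
                          [-lm, lp]])              # right eigenvectors, cf. (2.28)
Lam = np.diag([lm, lp])

print(np.allclose(R @ Lam @ L, A))     # R·Λ·L == A
print(np.allclose(R @ L, np.eye(2)))   # R·L == I
```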
2.4.1 Flat bottom

Equations above become particularly simple on the flat bottom. In this case the bathymetry
function is constant, i.e.

h(x, t) ≡ d = const > 0 .

Substituting it into the governing equations above, we straightforwardly obtain:

Ht + [ H u ]x = 0 ,

(H u)t + [ H u² + g H²/2 ]x = ℘x ,

( ℘x/H )x − 3 ℘/H³ = g ηxx + 2 ux² .

Solitary wave solution. Equations above admit an elegant analytical solution known as the
solitary wave∗. It is given by the following expressions:

η(x, t) = α · sech²( (√(3αg)/(2dυ)) (x − x0 − υt) ) ,    u(x, t) = υ η(x, t) / ( d + η(x, t) ) ,    (2.29)

where α is the wave amplitude, x0 ∈ R is the initial wave position and υ is the velocity
defined as

υ := √( g (d + α) ) .

The non-hydrostatic pressure under the solitary wave can be readily computed as well:

℘(x, t) = (g/2) ( H²(x, t) − d² ) − d υ u(x, t) .

One can derive also periodic travelling waves known as cnoidal waves. For their expressions we refer to e.g. [52, 55].
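For reference, the exact solitary wave (2.29) and the associated non-hydrostatic pressure are straightforward to evaluate; a small Python sketch (with illustrative parameter values) is:

```python
import numpy as np

g = 9.81        # gravity
d = 1.0         # still-water depth   (illustrative values)
alpha = 0.2     # wave amplitude
x0 = 0.0
ups = np.sqrt(g * (d + alpha))                   # wave speed
kappa = np.sqrt(3.0 * alpha * g) / (2.0 * d * ups)

def eta(x, t):
    return alpha / np.cosh(kappa * (x - x0 - ups * t))**2

def u(x, t):
    e = eta(x, t)
    return ups * e / (d + e)

def pressure(x, t):
    H = d + eta(x, t)
    return 0.5 * g * (H**2 - d**2) - d * ups * u(x, t)

x = np.linspace(-20.0, 20.0, 5)
print(eta(x, 0.0))        # free surface samples
print(pressure(x, 0.0))   # non-hydrostatic pressure samples
```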
2.5. Linear dispersion relation
The governing equations (2.16)–(2.18), after linearization, take the following form:

ηt + d ux = 0 ,

ut + g ηx = ℘x/d ,

℘xx − 3℘/d² = c² ηxx ,

where c := √(g d) is the linear gravity wave speed. We look for plane wave solutions of the form

η(x, t) = α e^{i(kx − ωt)} ,    u(x, t) = υ e^{i(kx − ωt)} ,    ℘(x, t) = ρ e^{i(kx − ωt)} ,

where k is the wave number, ω(k) is the wave frequency and α, υ, ρ ∈ R are some (constant) real amplitudes. The necessary condition for the existence of such plane wave solutions
reads

ω(k) = ± c k / √( 1 + (kd)²/3 ) .    (2.30)

By substituting the definition of k = 2π/λ into the last formula and dividing both sides by
k we obtain the relation between the phase speed cp and the wavelength λ:

cp(λ) := ω(k(λ)) / k(λ) = c / √( 1 + 4π²d²/(3λ²) ) .

∗ Solitary waves are to be distinguished from the so-called solitons, which interact elastically [48]. Since the SGN equations are not integrable (for the notion of integrability we refer to e.g. [145]), the interaction of solitary waves is inelastic [114].
This dispersion relation is accurate to 2nd order in the limit of long waves kd → 0. There
are many other nonlinear dispersive wave models which share the same linear dispersion
relation, see e.g. [61, 64, 117, 149]. However, their nonlinear properties might be very
different.
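The dispersion relation (2.30) and the phase speed cp(λ) can be tabulated directly; a short Python sketch (illustrative depth and wavelengths) is:

```python
import numpy as np

g, d = 9.81, 1.0
c = np.sqrt(g * d)                  # linear long-wave speed

def omega(k):
    return c * k / np.sqrt(1.0 + (k * d)**2 / 3.0)       # (2.30), "+" branch

def phase_speed(lam):
    return c / np.sqrt(1.0 + 4.0 * np.pi**2 * d**2 / (3.0 * lam**2))

for lam in (2.0, 5.0, 10.0, 50.0):
    print(f"lambda/d = {lam:5.1f}   c_p/c = {phase_speed(lam) / c:.4f}")
# c_p -> c as lambda/d -> infinity (long-wave limit) and decreases for shorter waves.
```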
3. Numerical method
The construction of numerical schemes for hyperbolic conservation laws on moving grids
was described in our previous work [94]. In the present manuscript we extend this technology to dispersive PDEs, illustrated by the example of the SGN equations
(2.16)–(2.18). The main difficulty which arises in the dispersive case is the handling of high
order (possibly mixed) derivatives. The SGN system is an archetype of such systems
with sufficient degree of nonlinearity and practically important applications in Coastal
Engineering [96].
3.1. Adaptive mesh construction
In the present work we employ the method of moving grids initially proposed in the early
1960s by Tikhonov & Samarskii [135, 136] and developed later by Bakhvalov (1969) [7]
and Il'in (1969) [85]. This technology was recently applied by the authors to steady [90]
and unsteady [94] problems. For more details we refer to our recent publications [90, 94].
An alternative recent approach can be found in e.g. [3, 4]. In the present Section we just
recall briefly the main steps of the method.
The main idea consists in assuming that there exists a (time-dependent) diffeomorphism
from the reference domain Q := [0, 1] to the computational domain I = [0, ℓ]:

x(q, t) : Q ↦ I .

It is natural to assume that the boundaries of the domains correspond to each other, i.e.

x(0, t) = 0 ,    x(1, t) = ℓ ,    ∀t ≥ 0 .

We shall additionally assume that the Jacobian of this map is bounded from below and above,

0 < m ≤ J(q, t) := ∂x/∂q ≤ M < +∞ ,    (3.1)

by some real constants m and M.

The construction of this diffeomorphism x(q, t) is the heart of the matter in the moving
grid method. We employ the so-called equidistribution method. The required non-uniform
grid Ih of the computational domain I is then obtained as the image of the uniformly
distributed nodes Qh under the mapping x(q, t):

xj = x(qj, t) ,    qj = j Δq ,    Δq = 1/N ,
where N is the total number of grid points. Notice that, strictly speaking, we do not even
need to know the mapping x(q, t) at points other than {qj}_{j=0}^N. Under condition (3.1)
it easily follows that the maximal discretization step in the physical space vanishes when
we refine the mesh in the reference domain Qh:

max_{j=0,...,N−1} | xj+1 − xj | ≤ M Δq → 0 ,    as Δq → 0 .
3.1.1 Initial grid generation
Initially, the desired mapping x(q, 0) is obtained as a solution to the following nonlinear
elliptic problem

d/dq ( ̟(x) dx/dq ) = 0 ,    x(0) = 0 ,  x(1) = ℓ ,    (3.2)

where we drop in this Section the second (constant) argument 0. The function ̟(x) is the
so-called monitor function. Its choice will be specified below, but we can already say that this
function has to be positive and bounded away from zero, i.e.

̟(x) ≥ C > 0 ,    ∀x ∈ R .

In practice the lower bound C is taken for simplicity to be equal to 1. A popular choice of
the monitor function is, for example,

̟[η](x) = 1 + ϑ0 |η| ,    ϑ0 ∈ R⁺ ,

where η is the free surface elevation. Another possibility consists in taking into account
the free surface gradient:

̟[η](x) = 1 + ϑ1 |ηx| ,    ϑ1 ∈ R⁺ ,

or even both effects:

̟[η](x) = 1 + ϑ0 |η| + ϑ1 |ηx| ,    ϑ0,1 ∈ R⁺ .

In some simple cases equation (3.2) can be solved analytically (see e.g. [90]). However,
in most cases we have to solve the nonlinear elliptic problem (3.2) numerically. For this
purpose we use an iterative scheme, where at every stage we have a linear tridiagonal
problem to solve:

(1/Δq) [ ̟(x^{(n)}_{j+1/2}) (x^{(n+1)}_{j+1} − x^{(n+1)}_j)/Δq − ̟(x^{(n)}_{j−1/2}) (x^{(n+1)}_j − x^{(n+1)}_{j−1})/Δq ] = 0 ,    n ∈ N0 .    (3.3)

The iterations are continued until convergence is achieved to the prescribed tolerance
parameter (typically ∝ 10⁻¹⁰).
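A compact illustration of the iterative equidistribution procedure (3.3) is given below (a Python sketch; the bump-like free surface and the monitor parameters are assumed for illustration only, and a dense solver is used instead of a tridiagonal one for brevity):

```python
import numpy as np

def equidistribute(monitor, ell=1.0, N=50, tol=1e-10, max_iter=200):
    """Iteratively solve the discrete equidistribution problem (3.3): at each
    iteration the monitor is frozen at the previous grid and a linear
    tridiagonal system is solved for the new interior nodes."""
    x = np.linspace(0.0, ell, N + 1)           # initial guess: uniform grid
    for _ in range(max_iter):
        xm = 0.5 * (x[:-1] + x[1:])            # cell centres x_{j+1/2}
        w = monitor(xm)                        # frozen monitor values
        A = np.zeros((N - 1, N - 1))
        rhs = np.zeros(N - 1)
        for j in range(1, N):
            A[j - 1, j - 1] = -(w[j] + w[j - 1])
            if j > 1:
                A[j - 1, j - 2] = w[j - 1]
            # j == 1: the boundary value x_0 = 0 contributes nothing to the rhs
            if j < N - 1:
                A[j - 1, j] = w[j]
            else:
                rhs[j - 1] -= w[j] * ell       # boundary value x_N = ell
        x_new = np.linalg.solve(A, rhs)
        converged = np.max(np.abs(x_new - x[1:-1])) < tol
        x[1:-1] = x_new
        if converged:
            break
    return x

# Example: monitor w(x) = 1 + theta0*|eta(x)| for a bump-like free surface.
eta = lambda x: 0.3 * np.exp(-40.0 * (x - 0.5)**2)
grid = equidistribute(lambda x: 1.0 + 10.0 * np.abs(eta(x)))
print(np.round(np.diff(grid), 4))   # cells cluster where |eta| is large
```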
3.1.2 Grid motion

In unsteady computations the grid motion is governed by the following nonlinear parabolic
equation:

∂/∂q ( ̟(x, t) ∂x/∂q ) = β ∂x/∂t ,    β ∈ R⁺ .    (3.4)

The parameter β plays the rôle of a diffusion coefficient here. It is used to control the
smoothness of the node trajectories. Equation (3.4) is discretized using an implicit scheme:

(1/Δq) [ ̟ⁿ_{j+1/2} (x^{n+1}_{j+1} − x^{n+1}_j)/Δq − ̟ⁿ_{j−1/2} (x^{n+1}_j − x^{n+1}_{j−1})/Δq ] = β (x^{n+1}_j − xⁿ_j)/τ ,    (3.5)

with boundary conditions x^{n+1}_0 = 0, x^{n+1}_N = ℓ as above. We would like to reiterate
that at every time step we solve only one additional (tridiagonal) linear system. Nonlinear
iterative computations are performed only once, when we project the initial condition onto
the ad hoc non-uniform grid. So the additional overhead due to the mesh motion is linear
in complexity, i.e. O(N).

Similarly to the elliptic case (3.2), equation (3.4) admits smooth solutions provided that
the monitor function ̟(x, t) is bounded from below by a positive constant. In the numerical
examples shown below we always take monitor functions which satisfy the condition
̟(x, t) ≥ 1, ∀x ∈ I, ∀t ≥ 0. Thus, for any t ≥ 0 equation (3.4) provides us with the
required diffeomorphism between the reference domain Q and the computational domain
I.
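One implicit time step of the grid-motion scheme (3.5) can be sketched as follows (again a Python illustration with an assumed monitor function and parameters; the linear system is tridiagonal and could equally be solved with the Thomas algorithm):

```python
import numpy as np

def move_grid(x_old, monitor, tau, beta=1.0, ell=1.0):
    """One implicit step of the parabolic grid-motion equation (3.5);
    the monitor values are frozen at the previous time level."""
    N = len(x_old) - 1
    dq = 1.0 / N
    w = monitor(0.5 * (x_old[:-1] + x_old[1:]))   # w^n_{j+1/2}, frozen at t^n
    A = np.zeros((N - 1, N - 1))
    rhs = np.zeros(N - 1)
    for j in range(1, N):
        A[j - 1, j - 1] = beta / tau + (w[j] + w[j - 1]) / dq**2
        rhs[j - 1] = beta * x_old[j] / tau
        if j > 1:
            A[j - 1, j - 2] = -w[j - 1] / dq**2
        if j < N - 1:
            A[j - 1, j] = -w[j] / dq**2
        else:
            rhs[j - 1] += w[j] * ell / dq**2      # boundary value x^{n+1}_N = ell
    x_new = np.empty_like(x_old)
    x_new[0], x_new[-1] = 0.0, ell
    x_new[1:-1] = np.linalg.solve(A, rhs)
    return x_new

# One step, starting from a uniform grid, with a monitor peaked near x = 0.7.
x0 = np.linspace(0.0, 1.0, 41)
x1 = move_grid(x0, lambda x: 1.0 + 20.0 * np.exp(-200.0 * (x - 0.7)**2), tau=0.01)
print(np.round(x1 - x0, 4))   # nodes drift towards the peak of the monitor
```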
3.2. The SGN equations on a moving grid
Before discretizing the SGN equations (2.16)–(2.18), we have to pose them on the reference domain Q. The composed functions will be denoted as:
def
ů(q, t) := (u ◦ x) (q, t) ≡ u x(q, t), t .
def
And we introduce similar notations for all other variables, e.g. H̊(q, t) := H x(q, t), t .
The conservative (2.24) and non-conservative (2.26), (2.27) forms of hyperbolic equations
read:
˚ − xt v̊
(Jv̊)t + F
= G˚,
(3.6)
q
1 ˚
1 ˚
Fq − xt v̊ q =
G,
v̊ t +
J
J
˚t + 1 A˚· F
˚q − xt v̊ q = 1 A˚· G˚,
F
J
J
where the terms on the right-hand sides are defined similarly as above:
!
0
def
G˚ :=
.
˚ − ˚
g H̊ h̊q + ℘
̺ h̊q
q
(3.7)
(3.8)
The non-hydrostatic pressure on the bottom is computed in Q space as:
˚
˚ h̊
℘
h̊q2
6℘
def 1
def
q q
˚
̺ :=
+ H̊ R̊ +
,
Υ̊
:=
4
+
J2
J2
Υ̊
H̊
(3.9)
Finally, we just have to specify the expression for R̊:
i
η̊q h̊q
ů 2 h h̊q i
xt
2 ů − xt h
xt
def
R̊ := −g 2 +
+ h̊t −
h̊q
h̊q .
· h̊t −
+
J
J
J q
J
J
J
t
q
We have to specify also the equations which allow us to find the non-hydrostatic part of
˚ . Equation (2.19) posed on the reference domain Q reads:
the pressure field ℘
˚
˚ = F̊ ,
K̊ ℘q q − K̊0 ℘
(3.10)
where the coefficients and the right-hand side are defined as
h̊
2 J Υ̊ − 3
4
def
def
q
·
+
,
,
K̊0 := 6
K̊ :=
Υ̊
H̊ 3
J H̊ Υ̊
J H̊ 2 Υ̊ q
h η̊
ůq2
R̊ h̊q i
6 R̊ J
def
q
F̊ := g
−
+
+ 2
.
J
J
J Υ̊ q
H̊ Υ̊
Finally, the boundary conditions are specified now at q = 0 and q = 1. For the hyperbolic
part of the equations they are
ů(0, t) = 0
ů(1, t) = 0
∀t > 0 .
For the elliptic part we have the following mixed-type boundary conditions:
q=1
q=1
R̊
1
4 ℘
˚
˚ − 6 h̊q ℘
.
gη̊q +
h̊q
=
q
J
Υ̊
J H̊ Υ̊
J H̊ 2 Υ̊
q=0
q=0
(3.11)
3.3. Predictor–corrector scheme on moving grids
In this Section we describe the numerical finite volume discretization of the SGN equations on a moving grid. We assume that the reference domain Q is discretized with a
def
N
uniform grid Qh := qj = j ∆q j = 0 , with the uniform spacing ∆q = N1 . Then, the
grid Inh in the physical domain I at every time instance t = t n > 0 is given by the image of
the uniform grid Qh under the mapping x(q, t) , i.e. xnj = x(qj , t n ) , j = 0, 1, . . . , N or
def
N
simply Inh = x(Qh , t n ). We assume that we know the discrete solution∗ v̊ ♯n := v̊ jn j = 0 ,
def ˚ n N
˚ n :=
℘
℘j j = 0 at the current time t = tn and we already constructed the non-uniform
♯
def
N
grid xn+1
:= xn+1
at the following time layer t n+1 using the equidistribution method
j
♯
j =0
described above. We remind that the non-uniform grid at the following layer is constructed
based only on the knowledge of v̊ ♯n .
∗With
symbol ♯ we denote the set of solution values at discrete spatial grid nodes.
3.3.1 Predictor step
In the nonlinear case, during the predictor step the hyperbolic part of equations is solved
two times:
def
N −1
• First, using equation (3.7) we compute the discrete solution values v̊ ∗♯, c := v̊ ∗j+1/2 j = 0
def
N −1
in the cell centers Qh, c := qj+1/2 = qj + ∆q
.
2 j =0
• Then, using equation (3.8) we compute the values of the flux vector equally in the
def
N −1
˚∗
˚∗ :=
.
F
cell centers F
♯, c
j+1/2 j = 0
We rewrite equations (3.7), (3.8) in the characteristic form by multiplying them on the left
by the matrix L̊ (of left eigenvectors of the Jacobian A˚):
1
˚q − xt v̊ q = 1 L̊ · G˚,
L̊ · F
J
J
1
˚t +
˚q − xt v̊ q = 1 Λ̊ · L̊ · G˚,
L̊ · F
Λ̊ · L̊ · F
J
J
L̊ · v̊ t +
The discretization of last equations reads:
1
n
1
n
v̊ ∗j+1/2 − v̊ nj+1/2
˚q − xt v̊ q
,
=
L̊ · F
L̊ · G˚
+
τ /2
J
J
j+1/2
j+1/2
(3.12)
˚∗
˚n
n
1
1
F
n
n
j+1/2 − Fj+1/2
−1
˚
˚
Λ̊ · L̊ · Fq − xt v̊ q
Λ̊ · L̊ · G
+
,
=
D · L̊ j+1/2 ·
τ /2
J
J
j+1/2
j+1/2
(3.13)
D−1 · L̊
n
·
j+1/2
where τ is the time step, L̊nj+1/2 is an approximation of matrix L̊ in the cell centers Qh, c (it
will be specified below). The matrix D is composed of cell parameters for each equation:
!
!
1, n
−, n
1
+
θ
0
1
+
λ
0
def
def
j+1/2
j+1/2
n
n
Dj+1/2
:=
,
Λ̊j+1/2
:=
,
2, n
n
0
1 + θj+1/2
0
1 + λ+,
j+1/2
n
with λ±,
j+1/2 being the approximations of eigenvalues (2.25) in the cell centers Qh, c (it will
be specified below). On the right-hand side the source term is
!
0
def
n
n
G˚j+1/2
:=
,
˚ − ˚
g H̊ h̊q + ℘
̺
h̊
q
q
j+1/2
where derivatives with respect to q are computed using central differences:
n
˚
℘
q, j+1/2
def
:=
˚n
℘
j+1
˚n
− ℘
j
,
∆q
def
h̊nq, j+1/2 :=
h̊nj+1 − h̊nj
.
∆q
n
The value of the non-hydrostatic pressure trace at the bottom ˚
̺j+1/2
is computed according
˚n in cell centers are computed as:
to formula (3.9). Solution vector v̊ n and the fluxes F
♯, c
♯, c
˚n
˚n
+
def Fj+1 + Fj
˚n
:=
,
F
.
j+1/2
2
2
The derivatives of these quantities are estimated using simple finite differences:
n
n
˚n
˚n
def Fj+1 − Fj
def v̊ j+1 − v̊ j
˚n
:=
,
F
.
v̊ nq, j+1/2 :=
q, j+1/2
∆q
∆q
Finally, we have to specify the computation of some mesh-related quantities:
def
v̊ nj+1/2 :=
xnt, j
xn+1
− xnj
j
,
:=
τ
def
v̊ nj+1
v̊ nj
def
xnt, j+1/2 :=
xnt, j + xnt, j
,
2
def
Jnj+1/2 ≡ xnq, j+1/2 :=
xnj+1 − xnj
.
∆q
n
The approximation of the matrix of left eigenvectors L̊nj+1/2 and eigenvalues λ±,
j+1/2 depends
on the specification of the Jacobian matrix A˚n . Our approach consists in choosing the
j+1/2
discrete approximation in order to have at discrete level
n
˚n
F
A˚· v̊ q j+1/2 ,
q, j+1/2 ≡
(3.14)
which is the discrete analogue of the continuous identity F̊q ≡ A˚ · v̊ q . Basically, our
philosophy consists in preserving as many as possible continuous properties at the discrete
level. For example, the following matrix satisfies the condition (3.14):
!
n
0
1
n
A˚j+1/2
=
= R̊ · Λ̊ · L̊ j+1/2
n n
n
n
−uj uj+1 + g Hj+1/2 2 uj+1/2
The matrices Lnj+1/2 and Rnj+1/2 = (Lnj+1/2 )−1 are computed by formulas (2.28). The
n
Jacobian matrix A˚j+1/2
eigenvalues can be readily computed:
q
q
def
def
n
n
n
n
n
n
2 − un un
:=
λ±,
:=
(u
±
s)
,
s
(u
)
+
g
H
>
g Hj+1/2
> 0.
j+1/2
j+1/2
j j+1
j+1/2
j+1/2
j+1/2
Thanks to the discrete differentiation rule (3.14), we can derive elegant formulas for the
˚∗ by drastically simplifying the scheme (3.12), (3.13):
predicted values v̊ ∗♯, c , F
♯, c
h
in
τ
¯
v̊ ∗j+1/2 = v̊ −
R̊ · D · Λ̊ · P̊ − L̊ · G˚
,
(3.15)
2J
j+1/2
h
in
τ
¯
∗
˚
˚
˚
Fj+1/2 = F −
,
(3.16)
R̊ · D · Λ̊ · Λ̊ · P̊ − L̊ · G
2J
j+1/2
where we introduced two matrices:
n
def
def
¯
n
:= L̊ · v̊ q j+1/2 .
P̊j+1/2
Λ̊nj+1/2 := Λ̊nj+1/2 − xt,nj+1/2 · I ,
1,2
Finally, the scheme parameters θj+1/2
are chosen as it was explained in our previous works
[94, 129] for the case of Nonlinear Shallow Water Equations. This choice guarantees the
TVD property of the resulting scheme.
˚∗ ,
Non-hydrostatic pressure computation. Once we determined the predicted values v̊ ∗♯, c , F
♯, c
we have to determine also the predicted value for the non-hydrostatic pressure components
˚ ∗ located in cell centers Q . In order to discretize the elliptic equation (3.10) we apply
℘
h, c
♯, c
the same finite volume philosophy. Namely, we integrate equation (3.10) over one cell
[qj , qj+1 ]. Right now for simplicity we consider an interior element. The approximation
near boundaries will be discussed below. The integral form of equation (3.10) reads
ˆ
qj+1
qj
˚ ∗ dq
K̊ ℘
q q
−
ˆ
qj+1
˚ ∗ dq
K̊0 ℘
ˆ
=
qj
qj+1
(3.17)
F̊ dq .
qj
The coefficients K̊, K̊0 are evaluated using the predicted value of the total water depth
n
H̊♯,∗ c . If the scheme parameter θj+1/2
≡ 0 , ∀j = 0, . . . , N − 1, then the predictor value
would lie completely on the middle layer t = t n + τ2 . However, this simple choice of
n
{θj+1/2
}jN=−1
0 does not ensure the desired TVD property [10, 129].
The solution of this integral equation will give us the predictor value for the non˚ ∗ . The finite difference scheme for equation (3.10) is obtained
hydrostatic pressure ℘
♯, c
by applying the following quadrature formulas to all the terms in integral equation (3.17):
ˆ
qj+1
qj
ˆ
qj+1
qj
ˆ
qj+1
qj
˚∗
˚∗
℘
℘
−
K̊
+
K̊
j+3/2
j+1/2
j+3/2
j+1/2
˚
·
K̊ ℘
q q dq ≃
2
∆q
∗
K̊j+1/2 + K̊j−1/2
−
·
2
˚ ∗ dq ≃
K̊0 ℘
∆q ·
h 12 Jn
(H̊ ∗ )3
h
+
·
Υ̊ − 3 i
Υ̊
˚∗
℘
j+1/2
˚∗
− ℘
j−1/2
,
∆q
j+1/2
3 h̊nq
Υ̊ Jn (H̊ ∗ )2
i
j+3/2
−
h
3 h̊nq
Υ̊ Jn (H̊ ∗ )2
i
j−1/2
˚∗
℘
j+1/2 ,
η̊ ∗
(ů∗ )2
η̊ ∗
R̊ h̊nq
R̊ h̊nq
6 R̊ Jn
q
q
q
− g n +
.
F̊ dq ≃ ∆q · 2 n −
+ g n +
J
J
J
Υ̊ Jn j+1
Υ̊ Jn j
Υ̊ H̊ ∗ j+1/2
In approximation formulas above we introduced the following notations:
def
K̊j+1/2 :=
h
4
Υ̊ Jn H̊ ∗
i
j+1/2
,
def
Υ̊j+1/2 := 4 +
h̊n 2
q
n
J
j+1/2
,
def
Jnj :=
Jnj+1/2 + Jnj−1/2
2
.
In this way we obtain a three-point finite difference approximation of the elliptic equation
(3.10) in interior of the domain, i.e. j = 1, . . . , N − 2 :
˚∗
˚∗
˚∗
˚∗
K̊j+1/2 + K̊j−1/2 ℘j+1/2 − ℘j−1/2
K̊j+3/2 + K̊j+1/2 ℘j+3/2 − ℘j+1/2
·
−
·
2
∆q
2
∆q
n
h 12 Jn Υ̊ − 3 i
i
i
h
h
3 h̊nq
3 h̊q
− ∆q ·
·
=
−
+
j+1/2
Υ̊
(H̊ ∗ )3
Υ̊ Jn (H̊ ∗ )2 j+3/2
Υ̊ Jn (H̊ ∗ )2 j−1/2
(ů∗ )2
η̊ ∗
η̊ ∗
R̊ h̊nq
R̊ h̊nq
6 R̊ Jn
q
q
q
∆q · 2 n −
+ g n +
− g n +
. (3.18)
J
J
J
Υ̊ Jn j+1
Υ̊ Jn j
Υ̊ H̊ ∗ j+1/2
Two missing equations are obtained by approximating the integral equation (3.17) in intervals adjacent to the boundaries. As a result, we obtain a linear system of equations
N −1
˚∗
where unknowns are {℘
j+1/2 }j = 0 . The approximation in boundary cells will be illustrated
on the left boundary [q0 ≡ 0, q1 ]. The right-most cell [qN −1 , qN ≡ 1] can be treated
similarly. Let us write down one-sided quadrature formulas for the first cell:
K̊3/2 + K̊1/2
·
2
+
˚∗
℘
3/2
h
˚∗
˚∗
− ℘
4℘
1/2
q
−
∗
∆q
J H̊ Υ̊
{z
|
1
3 h̊nq
Υ̊ Jn (H̊ ∗ )2
i
3/2
+
h
q=0
}
h 12 Jn Υ̊ − 3 i
∗
˚
℘
− 1/2 ∆q ·
·
1/2
Υ̊
(H̊ ∗ )3
3 h̊nq
Υ̊ Jn (H̊ ∗ )2
(ů∗ )2
η̊ ∗
6 R̊ Jn
q
q
= ∆q · 2 n −
+ g n
∗
J
J
Υ̊ H̊ 1/2
i
+
˚∗
6 h̊nq ℘
J (H̊ ∗ )2 Υ̊
q=0
{z
}
|
2
η̊ ∗
R̊ h̊nq
R̊ h̊nq
q
+
− g n +
J
Υ̊ Jn 1
Υ̊ Jn
{z
|
3
1/2
.
q =0
}
It can be readily noticed that terms 1 + 2 + 3 vanish thanks to the boundary
condition (3.11) (the part at q = 0). The same trick applies to the right-most cell
[qN −1 , qN ≡ 1]. We reiterate on the fact that in our scheme the boundary conditions
are taken into account exactly. Consequently, in two boundary cells we obtain a two-point
finite difference approximation to equation (3.10). The resulting linear system of equations
can be solved using e.g. the direct Thomas algorithm with linear complexity O(N). Under
6 0 the numerical solution exists, it is
> 0 , h̊q
the conditions K̊0 > 0 , h̊q
q =0
unique and stable [122].
q=1
3.3.2 Corrector step
During the corrector step we solve again separately the hyperbolic and elliptic parts of
the SGN equations. In order to determine the vector of conservative variables v̊ n+1
we use
♯
an explicit finite volume scheme based on the conservative equation (3.6):
˚∗ − xt · v̊ ∗
F
(Jv̊)n+1
− (Jv̊)nj
j
+
τ
j+1/2
˚∗ − xt · v̊ ∗
F
−
∆q
j−1/2
= G˚j∗ ,
(3.19)
where
def
G˚j∗ :=
0
,
˚∗
− ˚
̺∗ h̊nq + ♭ + ℘
q
g H̊ n + ♭
∗
˚
℘
q, j
def
:=
˚∗
℘
j+1/2
j
˚∗
− ℘
j−1/2
,
∆q
and
n+1
n+1
n
n
H̊j+1
+ H̊j−1
+ 2 H̊jn+1 + 2 H̊jn + H̊j+1
+ H̊j−1
,
8
n+1
n+1
n
n
def h̊j+1 − h̊j−1 + h̊j+1 − h̊j−1
:=
.
4 ∆q
def
H̊jn + ♭ :=
h̊nq + ♭
(3.20)
(3.21)
The algorithm of the corrector scheme can be summarized as follows:
(1) From the mass conservation equations (the first component in (3.19)) we find the
total water depth H̊♯n+1 in interior nodes of the grid
(2) Using the method of characteristics and the boundary conditions ůn+1
= ůn+1
≡ 0
0
N
n+1
n+1
we determine the total water depth H̊0 , H̊N in boundary points q0 ≡ 0 and
qN ≡ 1
(3) Then, using the momentum conservation equation (the second component in (3.19))
we find the momentum values (H̊ ů)n+1
on the next time layer.
♯
In this way, we obtain an explicit scheme despite the fact that the right hand side G˚♯∗
depends on the water depth H̊♯n+1 at the new time layer t = tn+1 .
˚ n+1 is comNon-hydrostatic pressure correction. The non-hydrostatic pressure correction ℘
♯
puted by integrating locally the elliptic equation (3.10) around each grid point:
ˆ
qj+1/2
qj−1/2
˚ n+1 dq −
K̊ ℘
q
q
ˆ
qj+1/2
qj−1/2
˚ n+1 dq =
K̊0 ℘
ˆ
qj+1/2
qj−1/2
F̊ n+1 dq ,
j = 1, . . . , N −1 ,
The details of integrals approximations are similar to the predictor step described above.
Consequently, we provide directly the difference scheme in interior nodes:
˚ n+1
℘
j+1
˚ n+1
˚ n+1 − ℘
˚ n+1
℘
− ℘
j
j
j−1
Kj+1/2
− Kj−1/2
=
∆q
∆q
n+1
n+1
n+1
(
Υ̊
−
3)
J
h̊
(
Υ̊
−
3)
J
h̊
q
q
˚
− 6 ℘j
∆q
=
+ ∆q
−
+
Υ̊ H̊ 3
Υ̊ J H̊ 2 j−1/2
Υ̊ H̊ 3
Υ̊ J H̊ 2 j+1/2
η̊
ů2
η̊
6 R̊ J n+1
R̊ h̊q n+1
R̊ h̊q n+1
q
q
q
−
+
+
− g
, (3.22)
∆q 2
+ g
J
J
J
Υ̊ J j+1/2
Υ̊ J j−1/2
Υ̊ H̊ j
where
4
def
Kj+1/2 :=
Υ̊ J H̊
n+1 ,
def
Υ̊n+1
j+1/2 :=
4+
j+1/2
h h̊n+1 − h̊n+1 i2
j+1
j
n+1
xn+1
j+1 − xj
,
def
Jn+1
j+1/2 :=
n+1
xn+1
j+1 − xj
.
∆q
In order to complete the scheme description, we have to specify the discretization of the
elliptic equation (3.10) in boundary cells. To be specific we take again the left-most cell
[q0 ≡ 0, q1/2 ]. The integral equation in this cell reads:
ˆ q1/2
ˆ q1/2
ˆ q1/2
˚ n+1
n+1
˚
℘
℘
K̊ q
dq −
K̊0
dq =
F̊ n+1 dq .
q
q0
q0
q0
And the corresponding difference equation is
K1/2
˚ n+1
℘
1
n+1
˚ n+1
˚ n+1
˚ n+1
℘
n+1
4℘
− ℘
(
Υ̊
−
3)
J
h̊
6
h̊
q
q
q
0
˚
− 6 ℘0
=
−
∆q
+
+
∆q
Υ̊ H̊ 3
J H̊ Υ̊ q = 0
J H̊ 2 Υ̊ 1/2
J H̊ 2 Υ̊ q = 0
|
| {z }
{z
}
31
32
n+1
n+1
ů2
η̊
R̊ h̊q
R̊ h̊q
3 R̊ J
η̊q
q
q
.
+
−
+
+ ∆q
g
− g
J
J
J
Υ̊ J
Υ̊ J q = 0
Υ̊ H̊
1/2
|
{z
}
33
By taking into account the boundary condition (3.11) we obtain that three under-braced
terms vanish:
31 + 32 + 33 ≡ 0 .
A similar two-point approximation can be obtained by integrating over the right-most cell
h
∆q i
qN −1/2 , qN ≡ 1 −
, 1 . In this way we obtain again a three-diagonal system of
2
linear equations which can be efficiently solved with the Thomas algorithm [81].
26 / 66
G. Khakimzyanov, D. Dutykh, et al.
Stability of the scheme. In order to ensure the stability of (nonlinear) computations, we
impose a slightly stricter restriction on the time step τ than the linear analysis given below
predicts (see Section 3.4.1). Namely, at every time layer we apply the same restriction as
for hyperbolic (non-dispersive) Nonlinear Shallow Water Equations [94]:
n, ±
max{ Cj+1/2
} 6 1,
j
n, ±
where Cj+1/2
are local Courant numbers [40] which are defined as follows
τ h | λ± − xt | in
def
n, ±
:=
Cj+1/2
.
∆q
J
j+1/2
3.3.3 Well-balanced property
It can be easily established that the predictor–corrector scheme presented above preserves
exactly the so-called states ‘lake-at-rest’:
Lemma 1. Assume that the bottom is stationary ( i.e. ht ≡ 0 , but not necessary flat)
and initially the fluid is at the ‘lake-at-rest’ state, i.e.
η̊j0 ≡ 0 ,
ůj0 ≡ 0
j = 0, 1, 2, . . . , N .
(3.23)
Then, the predictor–corrector scheme will preserve this state at all time layers.
Proof. In order to prove this Lemma, we employ the mathematical induction [80]. First,
we have to discuss the generation of the initial grid and how it will be transformed to the
next time layer along with the discrete numerical solution:
x0♯ ֒→ x1♯ ,
v̊ 0♯ ֒→ v̊ ∗c, ♯ ֒→ v̊ 1♯ .
Then, by assuming that our statement is true at the nth time layer, we will have to show
that it is true on the upcoming (n + 1)th layer. This will complete the proof [80].
If the monitoring function ̟(x, t) depends only on the free surface elevation η(x, t) and
fluid velocity u(x, t), then the monitoring function ̟(x, t) ≡ 1 thanks to Lemma assumption (3.23). And the equidistribution principle (3.2) will give us the uniform mesh. However, in most general situations one can envisage the grid adaptation upon the bathymetry
profile∗ h(x, t). Consequently, in general we can expect that the mesh will be non-uniform
even under condition (3.23), since hx 6= 0 . However, we know that the initial grid satisfies
the fully converged discrete equidistribution principle (3.3). From now on we assume that
the initial grid is generated and it is not necessarily uniform. In order to construct the grid
at the next layer, we solve just one linear equation (3.5). Since, system (3.5) is diagonally
dominant, its solution exists and it is unique [122]. It is not difficult to check that the set
of values {xj1 ≡ xj0 }N
j = 0 solves the system (3.5). It follows from two observations:
• The right-hand side of (3.5) vanishes when x1j ≡ x0j , ∀j = 0, . . . , N .
0
• The monitor function {̟j+1/2
}jN=−10 is evaluated on the previous time layer t = 0 .
∗In
the present study we do not consider such example. However, the idea of grid adaptation upon
the bathymetry function certainly deserves to be studied more carefully.
27 / 66
Dispersive shallow water wave modelling. Part II
Thus, we obtain that x♯1 ≡ x♯0 . Consequently, we have xt,0 j
∀j = 0, . . . , N .
≡ 0 and Jj1 = Jj0 ,
˚ 0 and ˚
In order to complete the predictor step we need to determine the quantities ℘
̺♯0
♯
0
on which depends the source term G˚j+1/2
. These quantities are uniquely determined by
˚ 0 are obtained by solving linear equations
prescribed initial conditions. For instance, ℘
♯
(3.22). We showed above also that the solution to this equation is unique. We notice
also that the right-hand side in equation (3.22) vanishes under conditions of this Lemma.
˚ 0 ≡ 0 . By applying a finite difference analogue of equation
Consequently, we obtain ℘
♯
(3.9) we obtain also that ̺♯0 ≡ 0 . As the result, at the ‘lake-at-rest’ state the right-hand
side of predictor equations (3.12), (3.13) reads
!
0
0
G˚j+1/2
=
.
0
(g h̊ h̊q )j+1/2
Taking into account the fact that the mesh does not evolve x0♯ ֒→ x1♯ ≡ x0♯ , we obtain
q
¯0
0
0
g h̊j+1/2 ,
xt,0 j ≡ 0 and thus Λ̊j+1/2
≡ Λ̊j+1/2
, sj+1/2
≡
!
!
h̊q, j+1/2
h̊q, j+1/2
0
0
¯
.
,
L̊ · G˚ j+1/2 ≡
Λ̊ · P̊ j+1/2 ≡
h̊q, j+1/2
h̊q, j+1/2
Consequently, the predictor step (3.15), (3.16) gives us the following values:
0
v̊ ∗j+1/2 ≡ v̊ j+1/2
,
˚∗
˚0
F
j+1/2 ≡ Fj+1/2 .
For the sake of clarity, we rewrite the last predictions in component-wise form:
!
!
0
h̊
j+1/2
˚∗
2
v̊ ∗j+1/2 ≡
,
F
.
g h̊j+1/2
j+1/2 ≡
0
2
∗
Thus, H̊j+1/2
≡ h̊j+1/2 . As an intermediate conclusion of the predictor step we have:
∗
ηj+1/2
≡ 0,
ů∗j+1/2 ≡ 0 ,
˚ ∗, ˚
∗
and all dispersive corrections ℘
♯ ̺♯ vanish as well by applying similar arguments to
equation (3.18).
The corrector step (3.19), written component-wise reads:
(J ů H̊)j1 − (J ů H̊)j0
τ
(J H̊)j1 − (J H̊)j0
= 0,
τ
2
2
g h̊j+1/2
− g h̊j−1/2
♭
+
= g H̊ h̊q j
2 ∆q
From the first equation above taking into account that Jj1 ≡ Jj0 and H̊j0 = h̊j we obtain
H̊j1 = h̊j . And thus, by the definition of the total water depth we obtain η̊j1 ≡ 0 . In
28 / 66
G. Khakimzyanov, D. Dutykh, et al.
the second equation above by condition (3.23) we have that ůj0 ≡ 0 . Moreover, in the
left-hand side:
2
2
g h̊j+1/2
− g h̊j−1/2
2 ∆q
= g
(h̊j+1 − h̊j−1 ) · (h̊j+1 + 2 h̊j + h̊j−1 )
.
8 ∆q
(3.24)
The right-hand side of the same corrector equation can be rewritten using definitions (3.20),
(3.21) as
♭
2 h̊j+1 + 4 h̊j + 2 h̊j−1 2 h̊j+1 − 2 h̊j−1
g H̊ h̊q j = g
·
.
(3.25)
8
4 ∆q
Comparing equation (3.24) with (3.25) yields the desired well-balanced property of the
predictor–corrector scheme and thus ůj1 ≡ 0 .
By assuming that (3.23) is verified at the time layer t = t n and repeating precisely
the same reasoning as above (by substituting superscripts 0 ← n and 1 ← n + 1) we
obtain that (3.23) is verified at the next time layer t = t n+1 . It completes the proof of
this Lemma.
We would like to mention that the well-balanced property of the proposed scheme was
checked also in numerical experiments on various configurations of general uneven bottoms
(not reported here for the sake of manuscript compactness) — in all cases we witnessed
the preservation of the ‘lake-at-rest’ state up to the machine precision. This validates our
numerical implementation of the proposed algorithm.
3.4. Numerical scheme for linearized equations
In order to study the numerical scheme stability and its dispersive properties, we consider the discretization of the linearized SGN equations on a uniform unbounded grid (for
simplicity we consider an IVP without boundary conditions). The governing equations
after linearization can be written as (we already gave these equations in Section 2.5)
ηt + d ux = 0 ,
ut + g ηx =
℘xx
−
3
d2
℘
1
d
℘x ,
= c2 ηxx ,
√
where c =
g d is the speed of linear gravity waves. We shall apply to these PDEs
precisely the same scheme as described above. Since the grid is uniform, we can return
to the original notation, i.e. v̊ ≡ v, etc. Let ∆x be the discretization step in the
computational domain Ih and τ is the local time step. We introduce the following finite
difference operators (illustrated on the free surface elevation η ♯n ):
ηt,nj
ηjn+1 − ηjn
:=
,
τ
def
ηx,n j
n
ηj+1
− ηjn
:=
,
∆x
def
n
η(x),
j
n
n
ηj+1
− ηj−1
:=
,
2 ∆x
def
29 / 66
Dispersive shallow water wave modelling. Part II
n
n
n
n
n
n
ηj+1
− 2 ηjn + ηj−1
def ηxx, j + ηxx, j+1
def ηxx, j+1 − ηxx, j
n
n
,
η
,
η
.
:=
:=
xx, j+1/2
xxx, j
∆x2
2
∆x
+∞
∗
∗
Then, at the predictor step we compute auxiliary quantities {ηj+1/2
}+∞
j = −∞ , {uj+1/2 }j = −∞
∗
and {℘j+1/2 }+∞
j = −∞ . First, we solve the hyperbolic part of the linearized SGN equations:
∗
n
ηj+1/2
− 21 ηj+1
+ ηjn
+ d unx, j = 0 ,
∗
τj+1/2
u∗j+1/2 − 12 unj+1 + unj
1 ℘n
+ g ηx,n j =
,
∗
τj+1/2
d x, j
def
n
ηxx,
j :=
and then we solve the elliptic equation to find {℘j+1/2 }+∞
j = −∞ :
∗
℘∗j+3/2
∗
∗
∗
∗
η∗
− 2 ηj+1/2
+ ηj−1/2
− 2 ℘j+1/2 + ℘j−1/2
3 ℘∗
2 j+3/2
− 2 j+1/2 = c
,
∆x2
d
∆x2
def τ
∗
n
n
where τj+1/2
:=
(1 + θj+1/2
) and θj+1/2
is the numerical scheme parameter [94],
2
whose choice guarantees the TVD property (strictly speaking the proof was done for scalar
hyperbolic equations only).
Then, the predicted values are used on the second — corrector step, to compute all
n+1
n+1 +∞
physical quantities {ηjn+1 }+∞
}j = −∞ and {℘j }+∞
j = −∞ , {uj
j = −∞ on the next time layer
n+1
t = t :
u∗j+1/2 − u∗j−1/2
= 0,
(3.26)
ηt,nj + d
∆x
∗
∗
∗
∗
ηj+1/2
− ηj−1/2
1 ℘j+1/2 − ℘j−1/2
n
ut, j + g
=
,
(3.27)
∆x
d
∆x
3 ℘n+1
n+1
℘n+1
= c2 ηxx,
(3.28)
xx, j −
j .
d2 j
It can be easily checked that the scheme presented above has the first order accuracy if
n
n
θj+1/2 = const , ∀j and the second order if θj+1/2
≡ 0 , ∀j . However, the last condition
can be somehow relaxed. There is an interesting case of quasi-constant values of the scheme
parameter:
n
θj+1/2
= O (τ + ∆x) .
In this case the scheme is second order accurate as well. In the present Section we perform
n
a theoretical analysis of the scheme and we shall assume for simplicity that θj+1/2
≡ const.
Consequently, from now on we shall drop the index j + 1/2 in the intermediate time step
∗
τj+1/2
.
3.4.1 Linear stability of the scheme
In this Section we apply the so-called von Neumann stability analysis to the predictor–
corrector scheme described above [28]. In order to study the scheme stability, first we
30 / 66
G. Khakimzyanov, D. Dutykh, et al.
+∞
∗
∗
exclude algebraically the predicted values {ηj+1/2
}+∞
j = −∞ , {uj+1/2 }j = −∞ from difference
equations. The resulting system reads:
n
∗℘
ηt,nj + d un(x), j = τ ∗ c2 ηxx,
j − τ
xx, j ,
n
unt, j
℘∗j+3/2 − 2 ℘∗j+1/2 + ℘∗j−1/2
+ g
n
η(x),
j
∗ 2
= τ c
unxx, j
− ℘j−1/2
, (3.30)
∆x
− τ ∗ d unxxx, j .
(3.31)
1
+
d
℘∗j+1/2
(3.29)
∗
3 ℘∗
n
= c2 ηxx,
j+1/2
d2 j+1/2
We substitute in all difference relations above the following elementary harmonics
∆x 2
ηjn = Λ0 ρn e i j ξ ,
−
unj = Ψ0 ρn e i j ξ ,
℘nj
= Φ0 ρn ei j ξ ,
℘∗j+1/2
= Φ∗0 (ρ) e i (j+1/2) ξ ,
(3.32)
def
where ξ := k · ∆x ∈ [0, π] is the scaled wavenumber and ρ is the transmission factor
between the time layers tn and tn+1 . As a result, from equations (3.28) and (3.31) we
obtain the following expressions for Φ0 and Φ∗0 :
h
i
2 2
4 c2 d 2 2
∗
n 2c d
∗ 4d
2
Φ0 =
ג
Λ
,
Φ
(ρ)
=
ρ
ג
Λ
sin(ξ)
−
i
τ
ג
Ψ
,
0
0
0
0
3 ℏ ∆x 2
3 ℏ ∆x 2
∆x
where we introduced some short-hand notations:
ξ
4 d2 2
τ
def
def
def
def
def
, k := 4 c2 ℵ 2 ג2 , i := c ℵ sin(ξ) , ℏ := 1 +
, ג:= sin
ג.
ℵ :=
∆x
2
3 ∆x 2
By substituting just obtained expressions for Φ0 and Φ∗0 into equations (3.29), (3.30) we
obtain two linear equations with respect to amplitudes Λ0 and Ψ0 :
h
2 c2 ℵ2 (1 + θ) 2 i
ρ − 1 +
גΛ0 + i ℵ d sin(ξ) Ψ0 = 0 ,
ℏ
h
2 c2 ℵ2 (1 + θ) 2 i
g ℵ sin(ξ)
Λ0 + ρ − 1 +
גΨ0 = 0 .
i
ℏ
ℏ
The necessary condition to have non-trivial solutions gives us an algebraic equation for the
transmission factor ρ:
k (1 + θ)
k2 (1 + θ)2
i2
(ρ − 1) +
+
= 0.
ℏ
4 ℏ2
ℏ
This quadratic equation admits two distinct roots:
(ρ − 1)2 +
ρ± = 1 −
k (1 + θ)
i
± i√ .
2ℏ
ℏ
(3.33)
The necessary stability condition | ρ | 6 1 is equivalent to the following condition on
quadratic equation coefficients:
h
4 ς2 i
d
def
2 2
2
c ℵ (1 + θ) ζ − 1 +
ζ (ζ + θ) 6 0 ,
ς :=
,
(3.34)
3
∆x
def
which has to be fulfilled for all ζ := ג2 ∈ [0, 1]. The parameter ς characterizes the grid
resolution relative to the mean water depth. This parameter appears in stability condition
Dispersive shallow water wave modelling. Part II
31 / 66
along with the Courant ratio ℵ. It is one of the differences with non-dispersive equations
whose discretization stability depends only on ℵ.
Further thoughts about stability. When long waves travel towards the shoreline, their shoal-
ing process is often accompanied with the formation of undular bores [75, 116]. Undular
bores have dispersive nature and cannot be correctly described by dispersionless models.
In [75] it was shown that satisfactory description of dispersive effects in shallow water
d
def
= 2 ∼ 4 . In another study [70] it was shown
environments is obtained for ς :=
∆x
that for satisfactory modeling of trans-oceanic wave propagation it is sufficient to choose
ς ≈ 4 in deep ocean and ς ≈ 1 in shallow coastal areas. In other words, it is sufficient to
choose the grid size equal to water depth in shallow waters and in deep areas — four times
smaller than the water depth. On coarser grids the numerical dispersion may dominate
√
3
over the physical one [70]. In the present study we shall assume that parameter ς >
.
2
Substituting into equation (3.34) the value ζ ≡ 0 we obtain that for stability reasons
necessarily the scheme parameter θ > 0 . Since the predictor layer should be in between
time layers t = tn and t = tn+1 we have θ 6 1 . Then, for fixed values of parameters ς
and θ the stability condition (3.34) takes the following form:
r
4
1 + ς2 θ
3
.
cℵ 6
1 + θ
For θ ≡ 0 the last condition simply becomes:
cℵ 6 1,
and it does not depend on parameter ς. However, when θ > 0, then the scheme stability
depends on the mesh refinement ς relative to the mean water depth. Surprisingly, more we
refine the grid, less stringent becomes the stability barrier. In the asymptotic limit ς ≫ 1
we obtain the following restriction on the time step τ :
√
2 θ
1
τ 6 √
τ0 < √ τ0 ≈ 0.58 τ0 ,
3 (1 + θ)
3
def
where τ0 := dc ≡ √dgd is the characteristic time scale of gravity waves. Above we used
the following obvious inequality:
√
1 + θ > 2 θ,
∀ θ ∈ R+ .
So, in practice for sufficiently refined grids the stability condition de facto does not
involve the grid spacing ∆x anymore. This property is very desirable for numerical simulations. For the sake of comparison we give here (without underlying computations) the
stability restriction of the same predictor–corrector scheme for NSWE equations:
cℵ 6 √
1
.
1 + θ
32 / 66
G. Khakimzyanov, D. Dutykh, et al.
So, another surprising conclusion obtained from this linear stability analysis is that the
SGN equations require in fine a less stringent condition on the time step than corresponding
dispersionless NSWE. Most probably, this conclusion can be explained by the regularization
effect of the dispersion. Indeed, the NSWE bores are replaced by smooth undular bores
whose regularity is certainly higher. The smoothness of solutions allows to use a larger time
step τ to propagate the numerical solution. This conclusion was checked in (fully nonlinear)
numerical experiments (not reported here) where the time step τ was artificially pushed
towards the stability limits. In general, the omission of dispersive effects yields a stricter
stability condition. The authors of [71] came experimentally to similar conclusions about
the time step limit in dispersive and hydrostatic simulations. Our theoretical analysis
reported above may serve as a basis of rational explanation of this empirical fact.
This result is to be compared with a numerical scheme proposed in [41] for a weakly
nonlinear weakly dispersive water wave model. They used splitting technique and solved an
elliptic equation to determine the non-hydrostatic pressure correction. The main drawback
of the scheme proposed in [41] is the stability condition:
∆x > 1.5 d .
One can easily see that a numerical computation with a sufficiently refined grid is simply
impossible with that scheme. Our method is free of such drawbacks.
3.4.2 Discrete dispersion relation
The dispersion relation properties are crucial to understand and explain the behaviour
of the numerical solution [101]. In this Section we perform the dispersion relation analysis
of the proposed above predictor–corrector scheme. This analysis is based on the study
of elementary plane-wave solutions (3.32). The continuous case was already analyzed in
Section 2.5. Dispersive properties of the scheme can be completely characterized by the
def
phase error ∆ϕ := φ − ϕ committed during solution transfer from time layer t = tn
to t = tn+1 = tn + τ . Here we denote by φ the phase shift due to the SGN equations
dynamics and ϕ is its discrete counterpart. From equations (2.30) and (3.33) we obtain
correspondingly:
φ = arg(e−i ω τ ) ≡ −ω τ = ± r
cℵξ
, ξ ∈ [0, π] ,
ς 2 ξ2
1 +
3
i
h
k (1 + θ)
/| ρ | ,
ϕ = arg ρ = ± arccos 1 −
2ℏ
(3.35)
(3.36)
In other words, the phase change φ is predicted by the ‘exact’ SGN equations properties,
while ϕ comes from the approximate dynamics as predicted by the predictor–corrector
Dispersive shallow water wave modelling. Part II
33 / 66
scheme. Since we are interested in long wave modelling, we can consider Taylor expansions of the phase shifts in the limit ξ → 0 (assuming that ς and ℵ are kept constant):
h
i
cℵ 2 3
φ = ± cℵξ −
ς ξ + O(ξ 4 ) ,
6
i
h
cℵ
(c ℵ)2 (3 θ + 1) − 1 − ς 2 ξ 3 + O(ξ 4 ) .
ϕ = ± cℵξ +
6
The asymptotic expression for the phase error is obtained by subtracting above expressions:
∆ϕ = ∓
cℵ
(c ℵ)2 (3 θ + 1) − 1 ξ 3 + O(ξ 4 ) .
6
From the last relation one can see that the leading part of the phase error has the same
asymptotic order as the ‘physical’ dispersion of the SGN equations. In general, this result is
not satisfactory. However, this situation can be improved if for the given scheme parameter
θ > 0, the Courant ratio ℵ is chosen according to the following formula:
cℵ = √
1
.
1 + 3θ
In this case the numerical phase error will be one order lower than the physical dispersion
of the SGN system.
In Figure 2 we represent graphically phase shifts predicted by various models. The
dashed line (1) is the phase shift of the predictor–corrector scheme given by equation (3.36)
(taken with + sign) for the parameters values θ = 0 , c ℵ = 1 , ς = 2 . The continuous
dispersion relation are shown with the dotted line (3) (the SGN equations, formula (3.35))
and the solid line (4) (full Euler equations):
s
tanh(ς ξ)
φEuler = ± c ℵ ξ
.
ςξ
It can be seen that our predictor–corrector scheme provides a better approximation to the
dispersion relation than the scheme proposed by Peregrine [116] (dash-dotted line (2) in
Figure 2). The analysis of the discrete dispersion relation of Peregrine’s scheme is not
given here, but we provide only the final result for the phase change:
i2
.
φPeregrine = ± arccos 1 −
2ℏ
In Figure 2 one can see that the predictor–corrector scheme (curve (1)) approximates well
the dispersion relation of the SGN equations (curve (3)) up to ξ = k · ∆x > π4 . In terms
of the wave length λ we obtain that λ ? 8 ∆x and for ς = 2 we obtain the inequality
λ ? 4 d. So, as the main result of the present analysis we conclude that our scheme is
able to propagate accurately water waves whose length is four times longer than the mean
water depth d.
34 / 66
G. Khakimzyanov, D. Dutykh, et al.
Φ
1.2
4
3
0.8
1
0.4
2
0.0
0
1
2
ξ
3
Figure 2. Phase shifts in different models: (1) predictor–corrector scheme; (2)
Peregrine’s numerical scheme [116]; (3) the SGN equations; (4) full Euler
equations.
4. Numerical results
Below we present a certain number of test cases which aim to validate and illustrate the
performance of the numerical scheme described above along with our implementation of
this method.
4.1. Solitary wave propagation over the flat bottom
As we saw above in Section 2.4.1, in a special case of constant water depth h(x, t) =
d the SGN equations admit solitary wave solutions (given by explicit simple analytical
formulas) which propagate with constant speed without changing their shapes.
4.1.1 Uniform grid
These analytical solutions can be used to estimate the accuracy of the fully discrete
numerical scheme. Consequently, we take a sufficiently large domain [0, ℓ] with ℓ p
= 80.
In this Section all lengths are relative to the water depth d, and time is scaled with g/d.
35 / 66
Dispersive shallow water wave modelling. Part II
0.4
0.4
η /d
η /d
1
2
3
4
0.3
0.2
0.2
0.1
0.1
0
0
55
60
65
70
x/d
1
2
3
4
0.3
75
10
(a)
15
20
t(g/d)
1/2
25
(b)
Figure 3. Propagation of a solitary wave over the flat bottom: (a) free surface
profile at t = 20; (b) wave gauge data at x = 60. Various lines denote: (1) —
N = 80, (2) — N = 160, (3) — N = 320, (4) — the exact analytical
solution given by formula (2.29).
For instance, if the solitary wave amplitude α = 0.7, then α d = 0.7 d in dimensional
variables. So, the solitary wave is initially located at x0 = 40. In computations below we
take a solitary wave of amplitude α = 0.4. In general, the SGN travelling wave solutions
approximate fairly well those of the full Euler model up to amplitudes α > 21 (see [49]
for comparisons).
In Figure 3 we show a zoom on free surface profile (a) at t = 20 and wave gauge data
(b) in a fixed location x = 60 for various spatial (and uniform) resolutions. By this time,
the solitary wave propagated the horizontal distance of 20 mean water depths. It can be
seen that the numerical solution converges to the analytical one.
In order to quantify the accuracy of the numerical solution we measure the relative l∞
discrete error:
def
k εh k∞ := α−1 k ηh − η k∞ ,
where ηh stands for the numerical and η – for the exact free surface profiles. The factor
α−1 is used to obtain the dimensionless error. Then, the order of convergence k can be
estimated as
k ε h k∞
k ≃ log2
.
k εh/2 k∞
The numerical results in Table 1 indicate that k → 2, when N → +∞. This validates
the proposed scheme and the numerical solver.
36 / 66
G. Khakimzyanov, D. Dutykh, et al.
N
ς
80
160
320
640
1280
2560
1
2
4
8
16
32
k ε h k∞
0.2442
0.1277
0.3344 × 10−1
0.8639 × 10−2
0.2208 × 10−2
0.5547 × 10−3
k
—
0.94
1.93
1.95
1.97
1.99
Table 1. Numerical estimation of the convergence order for the analytical
d
solitary wave propagation test case. The parameter ς = ∆x
characterizes
the mesh resolution relative to the mean water depth d.
4.1.2 Adaptive grid
In order to show the performance of the adaptive algorithm, we adopt two monitor
functions in our computations:
̟0 [ η ] (x, t) = 1 + ϑ0 | η(x, t) | ,
̟1 [ η ](x, t) = 1 + ϑ0 | η(x, t) | + ϑ1 | ηx (x, t) | ,
(4.1)
(4.2)
where ϑ 0, 1 > 0 are some positive constants. In numerical simulations we use ϑ 0 =
ϑ 1 = 10 and only N = 80 grid points. Above we showed that numerical results are
rather catastrophic when these 80 grid points are distributed uniformly (see Figure 3).
Numerical results on adaptive moving grids obtained with monitor functions ̟ 0, 1 (x, t)
are shown in Figure 4. The monitor function ̟ 0 (x, t) ensures that points concentrate
around the wave crest, leaving the areas in front and behind relatively rarefied. The visual
comparison of panels 4(b) and 4(c) shows that the inclusion of the spatial derivative ηx
into the monitor function ̟1 (x, t) yields the increase of dense zones around the wave crest.
With an adaptive grid involving only N = 80 points we obtain a numerical solution of
quality similar to the uniform grid with N = 320 points.
4.2. Solitary wave/wall interaction
For numerous practical purposes in Coastal Engineering it is important to model correctly wave/structure interaction processes [118]. In this Section we apply the above
proposed numerical algorithm to the simulation of a simple solitary wave/vertical wall
interaction. The reason is two-fold:
(1) Many coastal structures involve vertical walls as building elements,
(2) This problem is well studied by previous investigators and, consequently, there is
enough available data/results for comparisons
37 / 66
Dispersive shallow water wave modelling. Part II
20
t(g/d)
η /d
t(g/d)
1/2
1/2
20
0.4
1
2
3
4
0.3
0.2
0.1
15
15
10
10
5
5
0
55
60
65
(a)
70
x/d
75
0
30
40
50
(b)
60
x/d
70
0
30
40
50
60
x/d
70
(c)
Figure 4. Propagation of a solitary wave over the flat bottom simulated with
moving adapted grids: (a) free surface profile at t = 20; (b) trajectory of some
grid points predicted with monitor function ̟0 (x, t) ; (c) the same but with
monitor function ̟1 (x, t) . On panel (a) the lines are defined as: (1) —
numerical solution on a uniform fixed grid; (2) — numerical solution predicted
with monitor function ̟0 (x, t) ; (3) — the same with ̟1 (x, t) ; (4) — exact
analytical solution.
We would like to underline that this problem is equivalent to the head-on collision of two
equal solitary waves due to simple symmetry considerations. This ‘generalized’ problem
was studied in the past using experimental [111, 125], numerical [26, 67] and analytical techniques [21, 113, 132]. More recently this problem gained again some interest of researchers
[22, 25, 39, 52, 57, 58, 107, 138]. Despite the simple form of the obstacle, the interaction
process of sufficiently large solitary waves with it takes a highly non-trivial character as it
will be highlighted below.
Figure 5(a) shows the free surface dynamics as it is predicted by the SGN equations
solved numerically using the moving grid with N = 320 nodes. The initial condition
consists of an exact analytical solitary wave (2.29) of amplitude α = 0.4 moving rightwards
to the vertical wall (where the wall boundary condition u = 0 is imposed∗ on the velocity,
for the pressure see Section 2.3.1). The computational domain is chosen to be sufficiently
large [0, ℓ] = [0, 80], so there is no interaction with the boundaries at t = 0. Initially the
solitary wave is located at x0 = 40 (right in the middle). The bottom is flat h(x, t) =
d = const in this test case. From Figure 5(a) it can be clearly seen that the reflection
process generates a train of weakly nonlinear waves which propagate with different speeds
in agreement with the dispersion relation. The moving grid was constructed using the
monitor function ̟1 (x, t) from the previous Section (see the definition in equation (4.2)).
with ϑ0 = ϑ1 = 10. The resulting trajectories of mesh nodes are shown in Figure 5(b).
The grid is clearly refined around the solitary wave and nodes follow it. Moreover, we
∗The
same condition is imposed on the left boundary as well, even if during our simulation time there
are no visible interactions with the left wall boundary.
38 / 66
G. Khakimzyanov, D. Dutykh, et al.
(a)
(b)
Figure 5. Solitary wave (amplitude α = 0.4)/vertical wall interaction in the
framework of the SGN equations: (a) space-time plot of the free surface
elevation; (b) nodes trajectories. For the sake of clarity every 5th node is shown
only, the total number of nodes N = 320.
would like to note also a slight mesh refinement even in the dispersive tail behind the
reflected wave (it is not clearly seen in Figure 5(b) since we show only every 5th node).
One of the main interesting characteristics that we can compute from these numerical
experiments is the maximal wave run-up R on the vertical wall:
def
R :=
sup
0 6 t 6 T
{η(ℓ, t)} .
The sup is taken in some time window when the wave/wall interaction takes place. For the
class of incident solitary wave solutions it is clear that maximal run-up R will depend on
the (dimensionless) solitary wave amplitude α. In [132] the following asymptotic formula
was derived in the limit α → 0:
R (α) = 2 α 1 + 14 α + 83 α2 + O(α4 ) .
(4.3)
The last approximation was already checked against full the Euler simulations [39, 67]
and even laboratory experiments [111]. Figure 6 shows the dependence of the maximal
run-up R on the incident solitary wave amplitude α as it is predicted by our numerical
model, by formula (4.3) and several other experimental [42, 110, 111, 144] and numerical
[26, 39, 67] studies. In particular, one can see that almost all models agree fairly well up to
the amplitudes α > 0.4. Then, there is an apparent ‘separation’ of data in two branches.
Again, our numerical model gives a very good agreement with experimental data from
[110, 111, 144] up to the amplitudes α > 0.7.
39 / 66
Dispersive shallow water wave modelling. Part II
2.4
R/d
1
2
3
4
5
6
7
8
9
2.0
1.6
1.2
0.8
0.4
0.0
0
0.1
0.2
0.3
0.4
0.5
0.6
α /d
0.7
Figure 6. Dependence of the maximal run-up R on the amplitude α of the
incident solitary wave. Experimental data: (1) — [144], (2) — [111], (3) — [42],
(4) — [110]. Numerical data: (5) — [67], (6) — [26], (7) — [39]. The solid line
(8) — our numerical results, the dashed line (9) — the analytical prediction (4.3).
4.2.1 Wave action on the wall
The nonlinear dispersive SGN model can be used to estimate also the wave force exerted
on the vertical wall. Moreover, we shall show below that this model is able to capture the
non-monotonic behaviour of the force when the incident wave amplitude is increased. This
effect was first observed experimentally [144] and then numerically [148].
For the 2D case with flat bottom the fluid pressure p(x, y, t) can be expressed:
h H2
(y + d)2 i
p(x, y, t)
R1 ,
= g H − (y + d) −
−
ρ
2
2
def
− d 6 y 6 η(x, t) , (4.4)
with R1 := uxt + u uxx − u2x . The horizontal wave loading exerted on the vertical wall
located at x = ℓ is given by the following integral:
ˆ η(ℓ,t)
F0 (t)
H3
g H2
=
−
R̄1 ,
p(ℓ, y, t) dy =
ρ
2
3
−d
40 / 66
η /d
F/ρgd
2
G. Khakimzyanov, D. Dutykh, et al.
1.5
1.5
1
2
3
4
1.0
1.0
0.5
0.5
0.0
0.0
25
30
35
40
45
1
2
3
4
t(g/d)
1/2
25
(a)
30
35
40
45
t(g/d)
1/2
(b)
Figure 7. Solitary wave/vertical wall interaction: (a) time series of wave
run-up on the wall; (b) dynamic wave loading on the wall. Different lines
correspond to different incident solitary wave amplitudes: (1) — α = 0.1, (2) —
α = 0.3, (3) — α = 0.5, (4) — α = 0.7.
where due to boundary conditions R̄1 = uxt − u2x . After removing the hydrostatic force,
we obtain the dynamic wave loading computed in our simulations:
h H2
F(t)
H3
d2 i
−
= g
−
R̄1 .
ρ
2
2
3
The expression for corresponding tilting moment can be found in [52, Remark 3]. Figure 7
shows the wave elevation (a) and the dynamic wave loading (b) on the vertical wall. From
Figure 7(b) it can be seen that the force has one maximum for small amplitude solitary
waves. However, when we gradually increase the amplitude (i.e. α ? 0.4), the second
(local) maximum appears. For such large solitary waves a slight run-down phenomenon
can be noticed in Figure 7(a). We reiterate that this behaviour is qualitatively and quantitatively correct comparing to the full Euler equations [25, 39]. However, the complexity
of the nonlinear dispersive SGN model and, consequently, the numerical algorithm to solve
it, is much lower.
4.3. Solitary wave/bottom step interaction
Water waves undergo continuous changes while propagating over general uneven bottoms.
Namely, the wave length and wave amplitude are modified while propagating over bottom
irregularities. Such transformations have been studied in the literature [44, 98]. In the
present Section we focus on the process of a Solitary Wave (SW) transformation over a
Dispersive shallow water wave modelling. Part II
41 / 66
bottom step. In the early work by Madsen & Mei (1969) [106] it was shown using long
wave approximation that a solitary wave can be disintegrated into a finite number of SWs
with decreasing amplitudes while passing over an underwater step. This conclusion was
supported in [106] by laboratory data as well. This test case was used later in many works,
see e.g. [29, 53, 99].
We illustrate the behaviour of the adaptive numerical algorithm as well as the SGN
model on the solitary wave/bottom interaction problem. The bottom bathymetry is given
by the following discontinuous function:
(
−h0 , 0 6 x 6 xs ,
y = −h(x) =
−hs , xs < x 6 ℓ ,
where ℓ is the numerical wave tank length, h0 (respectively hs ) are the still water depths
on the left (right) of the step located at x = xs . We assume also that 0 < hs < h0 .
The initial condition is a solitary wave located at x = x0 and propagating rightwards.
For the experiment cleanliness we assume that initially the solitary wave does not ‘feel’ the
step. In other words it is located sufficiently far from the abrupt change in bathymetry. In
our experiment we choose x0 so that η(xs ) > 0.01 α, where α is the SW amplitude. The
main parameters in this problem are the incident wave amplitude α and the bottom step
jump ∆bs = h0 − hs . Various theoretical and experimental studies show that a solitary
wave undergoes a splitting into a reflected wave and a finite number of solitary waves after
passing over an underwater step. See [98] for a recent review on this topic. Amplitudes
and the number of solitary waves over the step were determined in [88] in the framework of
the shallow water theory. These expressions were reported later in [124] and this result was
improved recently in [115]. However, in the vicinity of the step, one may expect important
vertical accelerations of fluid particles, which are simplified (or even neglected) in shallow
water type theories. Nevertheless, in [115] a good agreement of this theory with numerical
and experimental data was reported.
There is also another difficulty inherent to the bottom step modelling. In various derivations of shallow water models there is an implicit assumption that the bathymetry gradient ∇h is bounded (or even small | ∇h | ≪ 1, e.g. in the Boussinesq-type equations
[19]). On the other hand, numerical tests and comparisons with the full (Euler and even
Navier–Stokes) equations for finite values of | ∇h | ∼ O(1) show that resulting approximate models have a larger applicability domain than it was supposed at the outset [19].
In the case of a bottom step, the bathymetry function is even discontinuous which is an
extreme case we study in this Section.
There are two main approaches to cope with this problem. One consists in running
the approximate model directly on discontinuous bathymetry, and the resulting eventual
numerical instabilities are damped out by ad-hoc dissipative terms (see e.g. references
in [115]). The magnitude of these terms allows to increase the scheme dissipation, and
overall computation appears to be stable. The difficulty of this approach consists in the
fine tuning of dissipation, since
• Insufficient dissipation will make the computation unstable,
G. Khakimzyanov, D. Dutykh, et al.
42 / 66
• Excessive dissipation will yield unphysical damping of the solution.
An alternative approach consists in replacing the discontinuous bathymetry by a smoothed
ℓs
ℓs
version over certain length xs −
, where ℓs is the smoothing length on
, xs +
2
2
which the jump from h0 to hs is replaced by a smooth variation. For instance, in all
numerical computations reported in [124] the smoothing length was chosen to be ℓs = 60
cm independently of the water depths before h0 and after hs the step. In another work [66]
the smoothing length was arbitrarily set to ℓs = 20 cm independently of other parameters.
Unfortunately, in a recent work [146] the smoothing procedure was not described at all. Of
course, this method is not perfect since the bathymetry is slightly modified. However, one
can expect that sufficiently long waves will not ‘notice’ this modification. This assumption
was confirmed by the numerical simulations reported in [35, 66, 124].
In the present work we also employ the bottom smoothing procedure. However, the
smoothing length ℓs is chosen in order to have a well-posed problem for the elliptic operator
(2.5). For simplicity, we use the sufficient condition (2.12) (obtained under restriction
(2.11)), which is not necessarily optimal, but it allows us to invert stably the nonlinear
elliptic operator (2.18). Namely, the smoothed step has the following analytical expression:
ℓs
,
−h0 ,
0 6 x 6 xs −
2
ℓs
ℓs
∆bs
y = −h(x) = −h0 +
(4.5)
· 1 + sin ζ , xs −
6 x 6 xs +
,
2
2
2
ℓs
−h ,
xs +
6 x 6 ℓ,
s
2
def π(x − xs )
where ζ :=
. For this bottom profile, the inequalities (2.11), (2.12) take the
ℓs
form:
π π
π ∆bs
,
cos ζ < 1 , ∀ ζ ∈ − ,
2 ℓs
2 2
2
π 2 ∆bs
sin ζ > −
.
2
h
+
h
∆bs
2 ℓs
0
s
−
sin ζ
2
2
These inequalities have corresponding solutions:
π p
π ∆bs
ℓs >
,
ℓs >
h0 ∆bs .
2
2
The last inequalities are verified simultaneously if the second inequality is true. If we
assume that the bottom step height ∆bs is equal to the half of the water depth before it,
then we obtain the following condition:
π
ℓs > √ h0 ≈ 1.11 h0 .
2 2
We underline that the last condition is only sufficient and stable numerical computations
can most probably be performed even for shorter smoothing lengths ℓs . For instance, we
tested the value ℓs = h0 and everything went smoothly.
43 / 66
Dispersive shallow water wave modelling. Part II
Parameter
Value
Wave tank length, ℓ
35 m
3.65 cm
Solitary wave amplitude, α
11 m
Solitary wave initial position, x0
20 cm
Water depth before the step, h0
Water depth after the step, hs
10 cm
h0
Water depth used in scaling, d
10 cm
Bottom step jump, ∆bs
Bottom step location, xs
14 m
350
Number of grid points, N
17.6 s
Simulation time, T
Table 2. Values of various numerical parameters used in the solitary
wave/bottom step interaction test case.
In [124] the results of 80 experiments are reported for various values of α and h0 (for fixed
values of the bottom jump ∆bs = 10 cm). In our work we repeated all experiments from
[124] using the SGN equations solved numerically with the adaptive predictor–corrector algorithm described above. In general, we obtained a very good agreement with experimental
data from [124] in terms of the following control parameters:
• number of solitary waves moving over the step,
• amplitudes of solitary waves over the step,
• amplitude of the (main) reflected wave.
We notice that the amplitude of the largest solitary wave over the step corresponds perfectly
to the measurements. However, the variation in the amplitude of subsequent solitary waves
over the step could reach in certain cases 20%.
Remark 4. The conduction of laboratory experiments on the solitary wave/bottom step
interaction encounters a certain number of technical difficulties [27, 124] that we would like
to mention. First of all, the wave maker generates a solitary wave with some dispersive
components. Moreover, one has to take the step sufficiently long so that the transmitted
wave has enough time to develop into a finite number of visible well-separated solitary waves.
Finally, the reflections of the opposite wave flume’s wall are to be avoided as well in order
not to pollute the measurements. Consequently, the successful conduction of experiments
and accurate measurement of wave characteristics requires a certain level of technique. We
would like to mention the exemplary experimental work [78] on the head-on collision of
solitary waves.
Below we focus on one particular case of α = 3.65 cm. All other parameters are
given in Table 2. It corresponds to the experiment N◦ 24 from [124]. The free surface
dynamics is depicted in Figure 8(a) and the trajectories of every second grid node are
44 / 66
G. Khakimzyanov, D. Dutykh, et al.
(a)
(b)
Figure 8. Interaction of a solitary wave with an underwater step: (a)
space-time plot of the free surface elevation y = η(x, t) in the dimensional time
interval [0 s, 17.6 s]; (b) trajectories of every second grid node. Numerical
parameters are provided in Table 2.
shown in Figure 8(b). For the mesh adaptation we use the monitor function (4.2) with
ϑ1 = ϑ2 = 10 . In particular, one can see that three solitary waves are generated over
the step. This fact agrees well with the theoretical predictions [88, 115]. Moreover, one
can see that the distribution of grid points follows perfectly all generated waves (over the
step and the reflected wave). Figure 9(a) shows the free surface dynamics in the vicinity
of the bottom step. In particular, one can see that the wave becomes notoriously steep
by the time instance t = 3 s and during later times it splits into one reflected and three
transmitted waves. The free surface profile at the final simulation time y = η(x, T ) is
depicted in Figure 9(b). On the same panel the experimental measurements are shown
with empty circles ◦ , which show a very good agreement with our numerical simulations.
In our numerical experiments we go even further since a nonlinear dispersive wave model
(such as the SGN equations employed in this study) can provide also information about
the internal structure of the flow (i.e. beneath the free surface). For instance, the nonhydrostatic component of the pressure field can be easily reconstructed∗:
h η + y + 2h
i
pd (x, y, t)
= −(η − y)
· R1 + R2 ,
−h(x) 6 y 6 η(x, t) . (4.6)
ρ
2
where the quantities R1,2 are defined in (2.3) as (see also the complete derivation in [92]):
R1 = uxt + u uxx − ux2 ,
R2 = ut hx + u [ u hx ]x .
∗Please,
notice that formula (4.4) is not applicable here, since the bottom is not flat anymore.
45 / 66
Dispersive shallow water wave modelling. Part II
η /d
η /d
0.20
0.25
1
2
3
4
5
6
0.15
0.10
0.20
1
2
0.15
0.05
0.10
0.00
0.05
0.00
-0.05
60
70
(a)
80
x/d
140
150
160
170
x/d
(b)
Figure 9. Free surface profiles y = η(x, t) during the interaction process of a
solitary wave with an underwater step: (a) initial condition (1), t = 1.5 s (2),
t = 2.0 s (3), t = 2.5 s (4), t = 3.0 s (5), smoothed bottom profile given by
formula (4.5) (6); (b) free surface profile y = η(x, T ) (1) at the final simulation
time t = T . The experimental points (2) are taken from [124], experiment
N◦ 24. Numerical parameters are provided in Table 2.
We do not consider the hydrostatic pressure component since its variation is linear with
water depth y:
ph = ρ g (η − y) .
Even if the dispersive pressure component pd might be negligible comparing to the hydrostatic one ph , its presence is crucial to balance the effects of nonlinearity, which results
in the existence of solitary waves, as one of the most widely known effects in dispersive
wave propagation [48]. The dynamic pressure field and several other physical quantities
under a solitary wave were computed and represented graphically in the framework of the
full Euler equations in [51]. A good qualitative agreement with our results can be reported. The balance of dispersive and nonlinear effects results also in the symmetry of
the non-hydrostatic pressure distribution with respect to the wave crest. It can be seen in
Figure 10(a,d ) before and after the interaction process. On the other hand, during the interaction process the symmetry is momentaneously broken (see Figure 10(b,c)). However,
with the time going on, the system relaxes again to a symmetric∗ pressure distribution
shown in Figure 10(d ).
Knowledge of the solution
to the SGN equations allows to reconstruct also the velocity
†
field ũ(x, y, t), ṽ(x, y, t) in the fluid bulk. Under the additional assumption that the
∗The
†This
symmetry here is understood with respect to the vertical axis passing by the wave crest.
information can be used later to compute fluid particle trajectories [69], for example.
46 / 66
G. Khakimzyanov, D. Dutykh, et al.
pd /ρ gd
y/d
0.005
0.002
-0.000
-0.003
-0.005
-0.008
-0.010
-0.013
-0.015
-0.018
0.2
0.0
0.0
-0.2
-0.4
-0.4
56
0.005
0.003
0.001
-0.001
-0.003
-0.006
-0.008
-0.010
-0.012
-0.014
0.2
-0.2
54
pd /ρgd
y/d
x/d
58
68
70
(a)
(b)
pd /ρ gd
y/d
0.007
0.004
0.001
-0.002
-0.005
-0.008
-0.011
-0.014
-0.017
-0.021
0.2
0.0
0.0
-0.4
-0.4
(c)
80
x/d
0.011
0.004
-0.004
-0.011
-0.018
-0.025
-0.032
-0.040
-0.047
-0.054
0.2
-0.2
78
pd /ρgd
y/d
-0.2
76
x/d
72
164
166
168
x/d
(d)
Figure 10. Non-hydrostatic pressure distribution during a solitary
wave/underwater step interaction process at different instances of time: (a)
t = 0.1 s, (b) t = 2.0 s, (c) t = 3.0 s, (d) t = 17.5 s. Numerical parameters
are provided in Table 2.
flow is potential, one can derive the following asymptotic (neglecting the terms of the order
4
2
O(µ4 ) ≡ O λd4 in the horizontal velocity ũ(x, y, t) and of the order O(µ2 ) ≡ O λd2 for
the vertical one ṽ(x, y, t)) representation formula [64] (see also the derivation in [92] for
47 / 66
Dispersive shallow water wave modelling. Part II
y/d
y/d
0.2
0.2
0.0
0.0
-0.2
-0.2
-0.4
-0.4
-0.6
65
70
x/d
75
(a)
-0.6
60
70
x/d
80
(b)
Figure 11. Reconstructed velocity field in the fluid during the solitary wave
interaction process with an underwater step: (a) t = 2.0 s , (b) t = 3.0 s . Solid
blue lines show a few streamlines. Numerical parameters are provided in Table 2.
the 3D case with moving bottom):
ũ(x, y, t) = u +
H2
(y + h)2
uxx ,
− y − h · [ u hx ]x + ux hx +
−
2
6
2
(4.7)
H
ṽ(x, y, t) = −u hx − (y + h) ux .
(4.8)
The formulas above allow to compute the velocity vector field in the fluid domain at any
time (when the solution H(x, t), u(x, t) is available) and in any point (x, y) above the
bottom y = −h(x) and beneath the free surface y = η(x, t) . Figure 11 shows a numerical
application of this reconstruction technique at two different moments of time t = 2 and 3
s during the interaction process with the bathymetry change. In particular, in Figure 11(a)
one can see that important vertical particle velocities emerge during the interaction with
the bottom step. In subsequent time moments one can see the division of the flow in two
structures (see Figure 11(b)): the left one corresponds to the reflected wave, while the
right structure corresponds to the transmitted wave motion. The reconstructed velocity
fields via the SGN model compare fairly well with the 2D Navier–Stokes predictions
[115]. However, the computational complexity of our approach is significantly lower than
the simulation of the full Navier–Stokes equations. This is probably the main advantage
of the proposed modelling methodology.
G. Khakimzyanov, D. Dutykh, et al.
48 / 66
4.4. Wave generation by an underwater landslide
As the last illustration of the proposed above numerical scheme, we model wave generation by the motion of an underwater landslide over uneven bottom. This test-case is
very challenging since it involves rapid bottom motion (at least of its part). We recall
that all previous tests were performed on a static bottom (i.e. ht ≡ 0 ). The numerical simulation of underwater landslides is an important application where the inclusion
of non-hydrostatic effects is absolutely crucial [105]. Moreover, the accurate prediction of
generated waves allows to assess more accurately the natural hazard induced by unstable
sliding masses (rockfalls, debris, ground movements) [140].
Usually, the precise location of unstable underwater masses is unknown and the numerical simulation is a preferred tool to study these processes. The landslide can be modelled
as a solid undeformable body moving down the slope [34, 60, 74, 142]. Another possibility
consists in representing the landslide as another fluid layer of higher density (possibly also
viscosity) located near the bottom [23, 68]. In some works the landslide motion was not
simulated (e.g. [86]) and the initial wave field generated by the landslide motion was determined using empirical formulas [73]. Then, this initial condition was propagated using
an appropriate water wave model [86]. However, strictly speaking the employed empirical
models are valid only for an absolutely rigid landslide sliding down a constant slope. Subsequent numerical simulations showed that the bottom shape influences quite substantially
the generated wave field [13]. Consequently, for realistic modelling of real world cases one
needs to take into account the actual bathymetry [103] and even possible deformations of
the landslide during its motion [102]. In a recent experimental work [102] the deformability of the landslide was achieved by composing it with four solid parts interconnected
by springs. The idea to represent a landslide as a finite number of blocks was used in
numerical [119] and theoretical [137] investigations. In the present study we use the quasideformable∗ landslide model [12, 56, 59]. In this model the landslide deforms according to
encountered bathymetry changes, however, at every time instance, all components of the
velocity vector are the same in all particles which constitute the landslide (as in a solid
rigid body). We shall use two long wave models:
• The SGN equations (fully nonlinear non-hydrostatic weakly dispersive model)
• NSWE equations† (standard hydrostatic dispersionless model)
The advantage of the SGN equations over other frequently used long wave models [86, 105,
141] are:
• The Galilean invariance
• The energy balance equation (consistent with the full Euler [65])
NSWE were employed in [5] to model the real world 16th October 1979 Nice event. It looks
like the consensus on the importance of dispersive effects in landslide modeling is far from
∗This
†The
[94].
model can be visualized if you imagine a landslide composed of infinitely many solid blocks.
numerical algorithm to solve NSW equations on a moving grid was presented and validated in
Dispersive shallow water wave modelling. Part II
49 / 66
being achieved. For example, in [86] the authors affirm that the inclusion of dispersion
gives results very similar to NSWE. In other works [71, 104, 133] the authors state that
dispersive effects significantly influence the resulting wave field, especially during long
time propagation. Consequently, in the present study we use both the SGN and NSWE
equations to shed some light on the rôle of dispersive effects.
Consider a 1D fluid domain bounded from below by the solid (static) impermeable
bottom given by the following function:
h+ + h−
h+ − h−
h0 (x) =
+
tanh ̥(x − ξ̥ ) ,
(4.9)
2
2
where h+ and h− are water depths at ± ∞ correspondingly (the domain we take is finite,
of course). We assume for definiteness that
h+ < h− < 0 .
We have also by definition
hh − h i
1
2 tan θ0
def
def
0
+
> 0,
ξ̥ :=
> 0,
ln
̥ :=
h− − h+
2̥
h− − h+
where h0 ≡ h0 (0) is water depth in x = 0 and θ0 is the maximal slope angle, which is
reached at the inflection point ξ̥ . It can be easily checked that
h+ + h−
< h0 < h− .
2
Equation (4.9) gives us the static part of the bottom shape. The following equation prescribes the shape of the bathymetry including the unsteady component:
y = −h(x, t) = h0 (x) + ζ(x, t) ,
where function ζ(x, t) prescribes the landslide shape. In the present study we assume that
the landslide initial shape is given by the following analytical formula:
h 2 π x − x (0) i
~
ν
c
1 + cos
, | x − xc (0) | 6
,
2
ν
2
ζ(x, 0) =
ν
0 ,
,
| x − xc (0) | >
2
where xc (0), ~ and ν are initial landslide position, height and width (along the axis Ox)
correspondingly. Initially we put the landslide at the unique∗ point where the water depth
is equal to h0 = 100 m, i.e.
hh − h i
1
0
+
≈ 8 323.5 m .
ln
xc (0) = ξ̥ −
2̥
h− − h+
For t > 0 the landslide position xc (t) and its velocity v(t) are determined by solving
a second order ordinary differential equation which describes the balance of all the forces
acting on the sliding mass [12]. This model is well described in the literature [56, 59] and
we do not reproduce the details here.
∗This
point is unique since the static bathymetry h0 (x) is a monotonically increasing function of its
argument x .
50 / 66
G. Khakimzyanov, D. Dutykh, et al.
Parameter
Value
Fluid domain length, ℓ
80 000 m
−5.1 m
Water depth, h0 (0)
−500 m
Rightmost water depth, h+
−5 m
Leftmost water depth, h−
Maximal bottom slope, θ0
6◦
20 m
Landslide height, ~
5000 m
Landslide length, ν
Initial landslide position, xc (0)
8 323.5 m
1.0
Added mass coefficient, Cw
1.0
Hydrodynamic resistance coefficient, Cd
1.5
Landslide density, ρsl /ρw
∗
Friction angle, θ
1◦
1000 s
Final simulation time, T
400
Number of grid points, N
Monitor function parameter, ϑ0
200
Table 3. Numerical and physical parameters used in landslide simulation.
In Figure 12(a) we show the dynamics of the moving bottom from the initial condition
at t = 0 to the final simulation time t = T . All parameters are given in Table 3. It
can be clearly seen that landslide’s motion significantly depends on the underlying static
bottom shape. In Figure 12(b) we show landslide’s barycenter trajectory x = xc (t) (line
1), its velocity v = v(t) (line 2) and finally the static bottom profile y = h0 (x) (line
3). From the landslide speed plot in Figure 12(b) (line 2), one can see that the mass is
accelerating during the first 284.2 s and slows down during 613.4 s. The distances traveled
by the landslide during these periods have approximatively the same ratio ≈ 2 . It is
also interesting to notice that the landslide stops abruptly its motion with a negative (i.e.
nonzero) acceleration.
In order to simulate water waves generated by the landslide, we take the fluid domain
I = [0, ℓ]. For simplicity, we prescribe wall boundary conditions∗ at x = 0 and x = ℓ.
Undisturbed water depth at both ends is h0 and ≈ h+ respectively. The computational
domain length ℓ is chosen to be sufficiently large to avoid any kind of reflections from the
right boundary. Initially the fluid is at rest with undisturbed free surface, i.e.
η(x, 0) ≡ 0 ,
∗It
u(x, 0) ≡ 0 .
would be better to prescribe transparent boundary conditions here, but this question is totally open
for the SGN equations.
51 / 66
Dispersive shallow water wave modelling. Part II
0
900
10
v, m/s
20
30
t, s
3
0
y, m
1
600
-200
300
2
-400
0
(a)
0
10000
20000
x, m
(b)
Figure 12. Generation of surface waves by an underwater landslide motion: (a)
dynamics of the moving bottom; (b) graphics of functions (1) x = xc (t) , (2)
v = v(t) , (3) y = h0 (x) . Two outer red circles denote landslide initial t = 0
and terminal t = 897.6 s positions. Middle red circle denotes landslide position
at the moment of time t = 284.2 s where landslide’s speed is maximal
vmax ≈ 26.3 m/s . The black square shows the inflection point ξ̥ position. The
maximal speed is achieved well below the inflection point ξ̥ . Numerical
parameters are given in Table 3.
Segment I is discretized using N = 400 points. In order to redistribute optimally mesh
nodes, we employ the monitor function defined in equation (4.1), which refines the grid
where the waves are large (regardless whether they are of elevation or depression type).
In Figure 13 we show the surface y = η(x, t) in space-time, which shows the main
features of the generated wave field. The left panel (a) is the dispersive SGN prediction,
while (b) is the computation with NSWE that we include into this study for the sake
of comparison. For instance, one can see that the dispersive wave system is much more
complex even if NSWE seem to reproduce the principal wave components. The dispersive
components follow the main wave travelling rightwards. There is also at least one depression wave moving towards the shore. The motion of grid points is shown in Figure 14. The
initial grid was chosen to be uniform, since the free surface was initially flat. However,
during the wave generation process the grid adapts to the solution. The numerical method
redistributes the nodes according to the chosen monitor function ̟0 [ η ] (x, t) , i.e. where
the waves are large (regardless whether they are of elevation or depression type). We would
like to underline the fact that in order to achieve a similar accuracy on a uniform grid, one
would need about 4 N points.
In Figure 15 we show two snapshots of the free surface elevation at two moments of time
(a) and wave gauge records collected at two different spatial locations (b). In particular,
52 / 66
G. Khakimzyanov, D. Dutykh, et al.
(a)
(b)
Figure 13. Generation of surface waves y = η(x, t) by an underwater
landslide motion: (a) the SGN model (dispersive); (b) NSWE equations
(dispersionless). Numerical parameters are given in Table 3.
we observe that there is a better agreement between NSWE and the SGN model in shallow
regions (i.e. towards x = 0), while a certain divergence between two models becomes
more apparent in deeper regions (towards the right end x = ℓ).
In the previous Section 4.3 we showed the internal flow structure during nonlinear transformations of a solitary wave over a static step. In this Section we show that SGN equations
can be used to reconstruct and to study the physical fields in situations where the bottom
moves abruptly. In order to reconstruct the non-hydrostatic field between moving bottom
and the free surface, one can use formula (4.6), but the quantity R2 has some extra terms
due to the bottom motion:
R2 = ut hx + u [ u hx ]x + htt + 2 u hxt .
In Figure 16 we show the non-hydrostatic pressure field at two different moments of time.
More precisely, we show a zoom on the area of interest around the landslide only. In panel
(a) t = t1 = 150 s and the landslide barycenter is located at xc (t1 ) = 9456 m . Landslide
moves downhill with the speed v(t1 ) = 15.72 m/s and it continues to accelerate. In particular, one can see that there is a zone of positive pressure in front of the landslide and a zone
of negative pressure just behind. This fact has implications on the fluid particle trajectories
around the landslide. In right panel (b) we show the moment of time t = t2 = 400 s .
At this moment xc (t2 ) = 15 264 m and v(t2 ) = 21.4 m/s . The non-hydrostatic pressure
distribution qualitatively changed. Zones of positive and negative pressure switched their
respective positions. Moreover, in Figure 15 we showed that dispersive effects start to be
noticeable at the free surface only after t > 400 s and by t = 800 s they are flagrant.
In Figure 17 we show the velocity fields in the fluid bulk at corresponding moments of
53 / 66
t, s
Dispersive shallow water wave modelling. Part II
800
600
400
200
0
0
20000
40000
60000
x, m
Figure 14. Trajectories of every second grid node during the underwater
landslide simulation in the framework of the SGN equations. Numerical
parameters are given in Table 3.
time t1 and t2 . We notice some similarities between the fluid flow around a landslide and
the air flow around an airfoil. To our knowledge, the internal hydrodynamics of landslide-generated waves on a general non-constant sloping bottom, in the framework of the SGN
equations, has not been shown before.
We recall that in the presence of a moving bottom one should use the following reconstruction formulas for the velocity field (which are slightly different from (4.7), (4.8)):

ũ(x, y, t) = u + ( H²/6 − (y + h)²/2 ) uxx + ( H/2 − y − h ) ( [ ht + u hx ]x + ux hx ) ,

ṽ(x, y, t) = −ht − u hx − (y + h) ux .
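Purely as an illustration (and relying on the reconstruction of the formulas above, whose exact grouping of terms involves some guesswork), a Python sketch of this velocity reconstruction could look as follows; the argument names and the centred-difference discretisation are assumptions, not the authors' code.

import numpy as np

def reconstruct_velocities(u, eta, h, h_t, y, dx):
    """Evaluate the approximate horizontal and vertical velocities (u~, v~) at
    height y from the depth-averaged velocity u, the free surface eta, the
    bathymetry h and its time derivative h_t (moving-bottom case).
    Illustrative finite-difference sketch of the formulas quoted above."""
    H    = h + eta                                       # total water depth
    u_x  = np.gradient(u, dx)
    u_xx = np.gradient(u_x, dx)
    h_x  = np.gradient(h, dx)
    brk  = np.gradient(h_t + u * h_x, dx) + u_x * h_x    # [h_t + u h_x]_x + u_x h_x
    u_y  = u + (H**2 / 6.0 - (y + h)**2 / 2.0) * u_xx + (H / 2.0 - y - h) * brk
    v_y  = -h_t - u * h_x - (y + h) * u_x
    return u_y, v_y

Setting h_t = 0 in this sketch recovers the static-bottom reconstruction, in agreement with the remark below that the formulas reduce to (4.7), (4.8) when ht ≡ 0.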
Figure 15. Generation of surface waves by an underwater landslide: (a) free
surface elevation profiles y = η(x, t1, 2 ) at t1 = 300 s (1,3) and t2 = 800 s
(2,4); (b) free surface elevation y = η(x1, 2 , t) as a function of time in two
spatial locations x1 = 20000 m (1,3) and x2 = 40000 m (2,4). The SGN
predictions are represented with solid lines (1,2) and NSWE with dashed lines
(3,4). Numerical parameters are given in Table 3.
[Figure 16 colour scale: isolines of the non-hydrostatic pressure pd range from about −10.6 kPa to 41.8 kPa; horizontal axis x in m, vertical axis y in m.]
Figure 16. Generation of surface waves by an underwater landslide. Isolines of
the non-hydrostatic pressure at two moments of time: t = 150 s (a); t = 400 s
(b). Numerical parameters are given in Table 3.
(b)
Figure 17. Generation of surface waves by an underwater landslide. The
reconstructed velocity field at two moments of time: t = 150 s (a); t = 400 s
(b). Numerical parameters are given in Table 3.
These formulas naturally become (4.7), (4.8) if the bottom is static, i.e. ht ≡ 0 .
5. Discussion
Above we presented a detailed description of the numerical algorithm and a number of
numerical tests which illustrate its performance. The main conclusions and perspectives
of this study are outlined below.
5.1. Conclusions
In the second part of our series of papers we focused on the development of numerical
algorithms for shallow water propagation over globally flat spaces (i.e. we allow some
variations of the bathymetry in the limits discussed in Section 2.1). The main distinction
of our work is that the proposed algorithm allows for local mesh adaptivity by moving
the grid points where they are needed. The performance of our method was illustrated
on several test cases ranging from purely academic ones (e.g. the propagation of a solitary
wave, which allowed us to estimate the overall accuracy of the scheme) to more realistic
applications with landslide-generated waves [12]. The mathematical model chosen in this
study allows us to have a look into the distribution of various physical fields in the fluid
bulk. In particular, in some interesting cases we reconstructed the velocity field and the
hydrostatic pressure distribution beneath the free surface.
We studied the linear stability of the proposed finite volume discretization. It was shown
that the resulting scheme possesses an interesting and possibly counter-intuitive property:
the smaller we take the spatial discretization step ∆x, the less restrictive the CFL-type
stability condition on the time step τ becomes. This result was obtained using the classical von
Neumann analysis [28]. However, we showed (and computed) that there exists an upper
limit on the allowed time steps. Numerical schemes with such properties seem to be new.
We also considered in great detail the question of wall boundary conditions∗ for the SGN
system. It seems that this issue was not properly addressed before. The wall boundary
condition for the elliptic part of the equations follows naturally from the form of the momentum
equation we chose in this study.
Finally, in numerical experiments we showed how depth-integrated SGN equations can
be used to study nonlinear transformations of water waves over some bathymetric features
(such as an underwater step or a sliding mass). Moreover, we illustrated clearly that SGN
equations (and several other approximate dispersive wave models) can be successfully used
to reconstruct the flow field under the wave. The accuracy of this reconstruction will be
studied in future works by direct comparisons with the full Euler equations where these
quantities are resolved.
5.2. Perspectives
The main focus of our study was set on the adaptive spatial discretization. The first
natural continuation of our study is the generalization to 3D physical problems (i.e. involving two horizontal dimensions). The main difficulty is to generalize the mesh motion
algorithm to this case, even if some ideas have been proposed in the literature [6].
In the present computations the time step was chosen to satisfy the linear CFL condition,
i.e. to ensure the stability of the numerical solution. In future
works we would like to incorporate an adaptive time stepping procedure along the lines of
e.g. [130], aimed at meeting a prescribed error tolerance. Of course, the extension of the
numerical method presented in this study to three-dimensional flows (i.e. two horizontal
dimensions) remains the most important generalization of our work. Further improvement
of the numerical algorithm can be expected if we also include some bathymetric features
(such as ∇h) into the monitor function ̟[η, h](x, t) . Physically this improvement is fully
justified since water waves undergo constant transformations over bottom irregularities (as
illustrated in Sections 4.3 & 4.4). A priori, everything is ready to perform these further
numerical experiments.
Ideally, we would like to generalize the algorithm presented in this study for the Serre–
Green–Naghdi (SGN) equations to the base model in its most general form (1.3), (1.4).
In this way we would be able to incorporate several fully nonlinear shallow water models
(discussed in Part I [91]) in the same numerical framework. This would allow great
∗ The wall boundary condition for the velocity component u(x, t) is straightforward, i.e. u(0, t) = u(ℓ, t) = 0. However, there was an open question of how to prescribe the boundary conditions for the elliptic part of the equations.
flexibility in applications to choose and to assess the performance of various approximate
models.
Moreover, in the present study we raised the question of boundary conditions for the SGN
equations. However, non-reflecting (or transparent) boundary conditions would allow one to
use much smaller computational domains in many applications. Unfortunately, to our knowledge this question is completely
open for the SGN equations (while it is well understood for NSWE).
In future works we plan to fill in this gap as well.
Finally, the SGN equations possess a number of variational structures. The Hamiltonian formulation can be found e.g. in [87]. Various Lagrangians can be found in
[38, 63, 95, 112]. Recently, a multi-symplectic formulation for the SGN equations has been
proposed [32]. All these available variational structures raise an important question: can
they be preserved at the discrete level after the discretization? This opens beautiful
perspectives for the development of structure-preserving numerical methods, as was done
for the classical Korteweg–de Vries [50] and nonlinear Schrödinger [31] equations.
In the following parts of this series of papers we shall discuss the derivation of the SGN
equations on a sphere [91] and their numerical simulation using the finite volume method
[93].
Acknowledgments
This research was supported by RSCF project No 14–17–00219. The authors would
like to thank Prof. Emmanuel Audusse (Université Paris 13, France) who brought our
attention to the problem of boundary conditions for the SGN equations.
A. Derivation of the non-hydrostatic pressure equation
In this Appendix we give some hints for the derivation of the non-hydrostatic pressure
equation (2.5) and relation (2.9). Let us start with the latter. For this purpose we rewrite
equation (2.8) in a more compact form using the total derivative operator:
Du = −g ∇η + ( ∇℘ − ̺ ∇h ) / H .        (A.1)
By definition of the non-hydrostatic quantities ℘ and ̺ (see equations (2.3) and (2.4) correspondingly) we obtain:

̺ = 3℘ / (2H) + (H/4) R2 .
We have to substitute into the last relation the expression for R2 :
R2 = (Du) · ∇h + u · (u · ∇)∇h + htt + 2 u · ∇ht ,
along with the expression (A.1) for the horizontal acceleration Du of fluid particles. After
simple algebraic computations one obtains (2.9).
The derivation of equation (2.5) is somewhat similar. First, from definitions (2.3), (2.4)
we obtain another relation between the non-hydrostatic pressures:

℘ = (H³/12) R1 + (H/2) ̺ ,        (A.2)

with R1 rewritten in the following form:

R1 = ∇ · (Du) − 2 (∇ · u)² + 2 det [ u1x1  u1x2 ; u2x1  u2x2 ] .

Substituting into equation (A.2) the relation (2.9) shown above, together with the last expression for R1, yields the required equation (2.5).
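As a hedged illustration of the algebraic part only (the differential operators are hidden inside R1 and R2, so this is not the full derivation of equations (2.5) or (2.9)), the elimination between the two pressure relations above can be checked symbolically, for instance with sympy.

import sympy as sp

H, R1, R2, p, rho = sp.symbols('H R1 R2 p rho')   # p ~ non-hydrostatic pressure, rho ~ bottom quantity

eq_rho = sp.Eq(rho, 3*p/(2*H) + H*R2/4)            # relation obtained from (2.3), (2.4)
eq_p   = sp.Eq(p, H**3*R1/12 + H*rho/2)            # relation (A.2)

sol = sp.solve([eq_rho, eq_p], [p, rho], dict=True)[0]
print(sp.simplify(sol[p]))     # -> H**3*R1/3 + H**2*R2/2
print(sp.simplify(sol[rho]))   # -> H**2*R1/2 + H*R2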
B. Acronyms
In the text above the reader could encounter the following acronyms:
SW: Solitary Wave
AMR: Adaptive Mesh Refinement
BBM: Benjamin–Bona–Mahony
BVP: Boundary Value Problem
CFL: Courant–Friedrichs–Lewy
IVP: Initial Value Problem
MOL: Method Of Lines
ODE: Ordinary Differential Equation
PDE: Partial Differential Equation
SGN: Serre–Green–Naghdi
TVD: Total Variation Diminishing
NSWE: Nonlinear Shallow Water Equations
References
[1] G. B. Alalykin, S. K. Godunov, L. L. Kireyeva, and L. A. Pliner. Solution of One-Dimensional Problems in Gas Dynamics on Moving Grids. Nauka, Moscow, 1970. 5
[2] J. S. Antunes Do Carmo, F. J. Seabra Santos, and E. Barthélemy. Surface waves propagation
in shallow water: A finite element model. Int. J. Num. Meth. Fluids, 16(6):447–459, mar
1993. 6
[3] C. Arvanitis and A. I. Delis. Behavior of Finite Volume Schemes for Hyperbolic Conservation
Laws on Adaptive Redistributed Spatial Grids. SIAM J. Sci. Comput., 28(5):1927–1956,
jan 2006. 16
[4] C. Arvanitis, T. Katsaounis, and C. Makridakis. Adaptive Finite Element Relaxation
Schemes for Hyperbolic Conservation Laws. ESAIM: Mathematical Modelling and Numerical Analysis, 35(1):17–33, 2010. 16
[5] S. Assier-Rzadkieaicz, P. Heinrich, P. C. Sabatier, B. Savoye, and J. F. Bourillet. Numerical
Modelling of a Landslide-generated Tsunami: The 1979 Nice Event. Pure Appl. Geophys.,
157(10):1707–1727, oct 2000. 48
[6] B. N. Azarenok, S. A. Ivanenko, and T. Tang. Adaptive Mesh Redistibution Method Based
on Godunov’s Scheme. Commun. Math. Sci., 1(1):152–179, 2003. 56
[7] N. S. Bakhvalov. The optimization of methods of solving boundary value problems with
a boundary layer. USSR Computational Mathematics and Mathematical Physics, 9(4):139–
166, jan 1969. 5, 16
[8] V. B. Barakhnin and G. S. Khakimzyanov. On the algorithm for one nonlinear dispersive
shallow-water model. Russ. J. Numer. Anal. Math. Modelling, 12(4):293–317, 1997. 6
[9] V. B. Barakhnin and G. S. Khakimzyanov. The splitting technique as applied to the solution
of the nonlinear dispersive shallow-water equations. Doklady Mathematics, 59(1):70–72,
1999. 6, 11
[10] T. J. Barth and M. Ohlberger. Finite Volume Methods: Foundation and Analysis. In
E. Stein, R. de Borst, and T. J. R. Hughes, editors, Encyclopedia of Computational Mechanics. John Wiley & Sons, Ltd, Chichester, UK, nov 2004. 22
[11] E. Barthélémy. Nonlinear shallow water theories for coastal waves. Surveys in Geophysics,
25:315–337, 2004. 6
[12] S. A. Beisel, L. B. Chubarov, D. Dutykh, G. S. Khakimzyanov, and N. Y. Shokina. Simulation of surface waves generated by an underwater landslide in a bounded reservoir. Russ.
J. Numer. Anal. Math. Modelling, 27(6):539–558, 2012. 48, 49, 55
[13] S. A. Beisel, L. B. Chubarov, and G. S. Khakimzyanov. Simulation of surface waves generated by an underwater landslide moving over an uneven slope. Russ. J. Numer. Anal.
Math. Modelling, 26(1):17–38, 2011. 48
[14] T. B. Benjamin, J. L. Bona, and J. J. Mahony. Model equations for long waves in nonlinear
dispersive systems. Philos. Trans. Royal Soc. London Ser. A, 272:47–78, 1972. 6
[15] M. J. Berger, D. L. George, R. J. LeVeque, and K. T. Mandli. The GeoClaw software for
depth-averaged flows with adaptive refinement. Advances in Water Resources, 34(9):1195–
1206, sep 2011. 5
[16] N. Bohr. Über die Serienspektra der Element. Zeitschrift für Physik, 2(5):423–469, oct
1920. 9
[17] J. L. Bona, V. A. Dougalis, and O. A. Karakashian. Fully discrete Galerkin methods for the
Korteweg-de Vries equation. Computers & Mathematics with Applications, 12(7):859–884,
jul 1986. 6
[18] P. Bonneton, F. Chazel, D. Lannes, F. Marche, and M. Tissier. A splitting approach for the
fully nonlinear and weakly dispersive Green-Naghdi model. J. Comput. Phys., 230:1479–
1498, 2011. 6
[19] M.-O. Bristeau, N. Goutal, and J. Sainte-Marie. Numerical simulations of a non-hydrostatic
shallow water model. Comput. & Fluids, 47(1):51–64, aug 2011. 41
[20] M. Brocchini. A reasoned overview on Boussinesq-type models: the interplay between
physics, mathematics and numerics. Proc. R. Soc. A, 469(2160):20130496, oct 2013. 5
[21] J. G. B. Byatt-Smith. The reflection of a solitary wave by a vertical wall. J. Fluid Mech.,
197:503–521, 1988. 37
[22] F. Carbone, D. Dutykh, J. M. Dudley, and F. Dias. Extreme wave run-up on a vertical cliff.
Geophys. Res. Lett., 40(12):3138–3143, 2013. 37
[23] M. J. Castro, M. de la Asuncion, J. Macias, C. Parés, E. D. Fernandez-Nieto, J. M. Gonzalez-Vida, and T. Morales de Luna. IFCP Riemann solver: Application to tsunami modelling
using CPUs. In M. E. Vazquez-Cendon, A. Hidalgo, P. Garcia-Navarro, and L. Cea, editors, Numerical Methods for Hyperbolic Equations: Theory and Applications, pages 237–244.
CRC Press, Boca Raton, London, New York, Leiden, 2013. 48
[24] V. Casulli. A semi-implicit finite difference method for non-hydrostatic, free-surface flows.
Int. J. Num. Meth. Fluids, 30(4):425–440, jun 1999. 6
[25] J. Chambarel, C. Kharif, and J. Touboul. Head-on collision of two solitary waves and
residual falling jet formation. Nonlin. Processes Geophys., 16:111–122, 2009. 37, 40
[26] R. K.-C. Chan and R. L. Street. A computer study of finite-amplitude water waves. J.
Comp. Phys., 6(1):68–94, aug 1970. 37, 38, 39
[27] K.-A. Chang, T.-J. Hsu, and P. L.-F. Liu. Vortex generation and evolution in water waves
propagating over a submerged rectangular obstacle. Coastal Engineering, 44(1):13–36, sep
2001. 43
[28] J. G. Charney, R. Fjörtoft, and J. Neumann. Numerical Integration of the Barotropic
Vorticity Equation. Tellus, 2(4):237–254, nov 1950. 29, 56
[29] F. Chazel. Influence of bottom topography on long water waves. M2AN, 41:771–799, 2007.
41
[30] F. Chazel, D. Lannes, and F. Marche. Numerical simulation of strongly nonlinear and
dispersive waves using a Green-Naghdi model. J. Sci. Comput., 48:105–116, 2011. 6
[31] J.-B. Chen, M.-Z. Qin, and Y.-F. Tang. Symplectic and multi-symplectic methods for the
nonlinear Schrödinger equation. Computers & Mathematics with Applications, 43(8-9):1095–
1106, apr 2002. 57
[32] M. Chhay, D. Dutykh, and D. Clamond. On the multi-symplectic structure of the Serre-Green-Naghdi equations. J. Phys. A: Math. Gen, 49(3):03LT01, jan 2016. 57
[33] A. Chorin. Numerical solution of the Navier-Stokes equations. Math. Comp., 22:745–762,
1968. 8
[34] L. B. Chubarov, S. V. Eletsky, Z. I. Fedotova, and G. S. Khakimzyanov. Simulation of surface waves by an underwater landslide. Russ. J. Numer. Anal. Math. Modelling, 20(5):425–
437, 2005. 48
[35] L. B. Chubarov, Z. I. Fedotova, Y. I. Shokin, and B. G. Einarsson. Comparative Analysis
of Nonlinear Dispersive Shallow Water Models. Int. J. Comp. Fluid Dyn., 14(1):55–73, jan
2000. 6, 42
[36] L. B. Chubarov and Y. I. Shokin. The numerical modelling of long wave propagation in the
framework of non-linear dispersion models. Comput. & Fluids, 15(3):229–249, jan 1987.
[37] R. Cienfuegos, E. Barthélemy, and P. Bonneton. A fourth-order compact finite volume
scheme for fully nonlinear and weakly dispersive Boussinesq-type equations. Part II: boundary conditions and validation. Int. J. Num. Meth. Fluids, 53(9):1423–1455, mar 2007. 6
[38] D. Clamond and D. Dutykh. Practical use of variational principles for modeling water
waves. Phys. D, 241(1):25–36, 2012. 57
[39] M. J. Cooker, P. D. Weidman, and D. S. Bale. Reflection of a high-amplitude solitary wave
at a vertical wall. J. Fluid Mech., 342:141–158, 1997. 37, 38, 39, 40
[40] R. Courant, K. Friedrichs, and H. Lewy. Über die partiellen Differenzengleichungen der
mathematischen Physik. Mathematische Annalen, 100(1):32–74, 1928. 26
[41] M. H. Dao and P. Tkalich. Tsunami propagation modelling - a sensitivity study. Nat.
Hazards Earth Syst. Sci., 7:741–754, 2007. 32
[42] V. H. Davletshin. Force action of solitary waves on vertical structures. In Tsunami meeting,
pages 41–43, Gorky, 1984. Institute of Applied Physics. 38, 39
[43] A. J. C. de Saint-Venant. Théorie du mouvement non-permanent des eaux, avec application
aux crues des rivières et à l’introduction des marées dans leur lit. C. R. Acad. Sc. Paris,
73:147–154, 1871. 6, 8, 9
[44] M. W. Dingemans. Water wave propagation over uneven bottom. World Scientific, Singapore,
1997. 40
[45] V. A. Dougalis and O. A. Karakashian. On Some High-Order Accurate Fully Discrete
Galerkin Methods for the Korteweg-de Vries Equation. Mathematics of Computation,
45(172):329, oct 1985. 6
[46] V. A. Dougalis and D. E. Mitsotakis. Theory and numerical analysis of Boussinesq systems:
A review. In N. A. Kampanis, V. A. Dougalis, and J. A. Ekaterinaris, editors, Effective
Computational Methods in Wave Propagation, pages 63–110. CRC Press, 2008. 6
[47] V. A. Dougalis, D. E. Mitsotakis, and J.-C. Saut. On some Boussinesq systems in two space
dimensions: Theory and numerical analysis. Math. Model. Num. Anal., 41(5):254–825, 2007.
6
[48] P. G. Drazin and R. S. Johnson. Solitons: An introduction. Cambridge University Press,
Cambridge, 1989. 15, 45
[49] A. Duran, D. Dutykh, and D. Mitsotakis. On the Galilean Invariance of Some Nonlinear
Dispersive Wave Equations. Stud. Appl. Math., 131(4):359–388, nov 2013. 35
[50] D. Dutykh, M. Chhay, and F. Fedele. Geometric numerical schemes for the KdV equation.
Comp. Math. Math. Phys., 53(2):221–236, 2013. 57
[51] D. Dutykh and D. Clamond. Efficient computation of steady solitary gravity waves. Wave
Motion, 51(1):86–99, jan 2014. 45
[52] D. Dutykh, D. Clamond, P. Milewski, and D. Mitsotakis. Finite volume and pseudo-spectral
schemes for the fully nonlinear 1D Serre equations. Eur. J. Appl. Math., 24(05):761–787,
2013. 6, 15, 37, 40
[53] D. Dutykh and F. Dias. Dissipative Boussinesq equations. C. R. Mecanique, 335:559–583,
2007. 41
[54] D. Dutykh and F. Dias. Energy of tsunami waves generated by bottom motion. Proc. R.
Soc. A, 465:725–744, 2009. 10
[55] D. Dutykh and D. Ionescu-Kruse. Travelling wave solutions for some two-component shallow
water models. J. Diff. Eqns., 261(2):1099–1114, jul 2016. 15
[56] D. Dutykh and H. Kalisch. Boussinesq modeling of surface waves due to underwater landslides. Nonlin. Processes Geophys., 20(3):267–285, may 2013. 48, 49
[57] D. Dutykh, T. Katsaounis, and D. Mitsotakis. Finite volume schemes for dispersive wave
propagation and runup. J. Comput. Phys., 230(8):3035–3061, apr 2011. 6, 37
[58] D. Dutykh, T. Katsaounis, and D. Mitsotakis. Finite volume methods for unidirectional
dispersive wave models. Int. J. Num. Meth. Fluids, 71:717–736, 2013. 6, 37
[59] D. Dutykh, D. Mitsotakis, S. A. Beisel, and N. Y. Shokina. Dispersive waves generated
by an underwater landslide. In E. Vazquez-Cendon, A. Hidalgo, P. Garcia-Navarro, and
L. Cea, editors, Numerical Methods for Hyperbolic Equations: Theory and Applications,
pages 245–250. CRC Press, Boca Raton, London, New York, Leiden, 2013. 48, 49
[60] F. Enet and S. T. Grilli. Experimental study of tsunami generation by three-dimensional
rigid underwater landslides. J. Waterway, Port, Coastal and Ocean Engineering, 133(6):442–
454, 2007. 48
[61] R. C. Ertekin, W. C. Webster, and J. V. Wehausen. Waves caused by a moving disturbance
in a shallow channel of finite width. J. Fluid Mech., 169:275–292, aug 1986. 16
[62] M. S. Fabien. Spectral Methods for Partial Differential Equations that Model Shallow Water
Wave Phenomena. Master, University of Washington, 2014. 6
[63] Z. I. Fedotova and E. D. Karepova. Variational principle for approximate models of wave
hydrodynamics. Russ. J. Numer. Anal. Math. Modelling, 11(3):183–204, 1996. 57
[64] Z. I. Fedotova and G. S. Khakimzyanov. Shallow water equations on a movable bottom.
Russ. J. Numer. Anal. Math. Modelling, 24(1):31–42, 2009. 16, 46
[65] Z. I. Fedotova, G. S. Khakimzyanov, and D. Dutykh. Energy equation for certain approximate models of long-wave hydrodynamics. Russ. J. Numer. Anal. Math. Modelling,
29(3):167–178, jan 2014. 48
[66] Z. I. Fedotova and V. Y. Pashkova. Methods of construction and the analysis of difference
schemes for nonlinear dispersive models of wave hydrodynamics. Russ. J. Numer. Anal.
Math. Modelling, 12(2), 1997. 42
[67] J. D. Fenton and M. M. Rienecker. A Fourier method for solving nonlinear water-wave
problems: application to solitary-wave interactions. J. Fluid Mech., 118:411–443, apr 1982.
37, 38, 39
[68] E. D. Fernandez-Nieto, F. Bouchut, D. Bresch, M. J. Castro-Diaz, and A. Mangeney. A new
Savage-Hutter type models for submarine avalanches and generated tsunami. J. Comput.
Phys., 227(16):7720–7754, 2008. 48
[69] G. R. Flierl. Particle motions in large-amplitude wave fields. Geophysical & Astrophysical
Fluid Dynamics, 18(1-2):39–74, aug 1981. 45
[70] S. Glimsdal, G. K. Pedersen, K. Atakan, C. B. Harbitz, H. P. Langtangen, and F. Lovholt.
Propagation of the Dec. 26, 2004, Indian Ocean Tsunami: Effects of Dispersion and Source
Characteristics. Int. J. Fluid Mech. Res., 33(1):15–43, 2006. 31
[71] S. Glimsdal, G. K. Pedersen, C. B. Harbitz, and F. Løvholt. Dispersion of tsunamis: does
it really matter? Natural Hazards and Earth System Science, 13(6):1507–1526, jun 2013.
32, 49
[72] A. E. Green, N. Laws, and P. M. Naghdi. On the theory of water waves. Proc. R. Soc.
Lond. A, 338:43–55, 1974. 5
[73] S. Grilli, S. Vogelmann, and P. Watts. Development of a 3D numerical wave tank for
modeling tsunami generation by underwater landslides. Engng Anal. Bound. Elem., 26:301–
313, 2002. 48
[74] S. T. Grilli and P. Watts. Tsunami Generation by Submarine Mass Failure. I: Modeling,
Experimental Validation, and Sensitivity Analyses. Journal of Waterway Port Coastal and
Ocean Engineering, 131(6):283, 2005. 48
[75] J. Grue, E. N. Pelinovsky, D. Fructus, T. Talipova, and C. Kharif. Formation of undular
bores and solitary waves in the Strait of Malacca caused by the 26 December 2004 Indian
Ocean tsunami. J. Geophys. Res., 113(C5):C05008, may 2008. 31
[76] E. Hairer, S. P. Nørsett, and G. Wanner. Solving ordinary differential equations: Nonstiff
problems. Springer, 2009. 6
[77] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. Springer Series in Computational Mathematics, Vol. 14, 1996. 6
[78] J. Hammack, D. Henderson, P. Guyenne, and M. Yi. Solitary wave collisions. In Proc. 23rd
International Conference on Offshore Mechanics and Arctic Engineering, 2004. 43
[79] F. H. Harlow and J. E. Welch. Numerical Calculation of Time-Dependent Viscous Incompressible Flow of Fluid with Free Surface. Phys. Fluids, 8:2182, 1965. 8
[80] H. Hermes. Introduction to Mathematical Logic. Universitext. Springer Berlin Heidelberg,
Berlin, Heidelberg, 1973. 26
[81] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM Philadelphia, 2nd
ed. edition, 2002. 25
[82] J. Horrillo, Z. Kowalik, and Y. Shigihara. Wave Dispersion Study in the Indian Ocean Tsunami of December 26, 2004. Marine Geodesy, 29(3):149–166, dec 2006. 6
[83] W. Huang. Practical Aspects of Formulation and Solution of Moving Mesh Partial Differential Equations. J. Comp. Phys., 171(2):753–775, aug 2001. 5
[84] W. Huang and R. D. Russell. Adaptive mesh movement - the MMPDE approach and its
applications. J. Comp. Appl. Math., 128(1-2):383–398, mar 2001. 5
[85] A. M. Il’in. Differencing scheme for a differential equation with a small parameter affecting the highest derivative. Mathematical Notes of the Academy of Sciences of the USSR,
6(2):596–602, aug 1969. 5, 16
[86] M. Ioualalen, S. Migeon, and O. Sardoux. Landslide tsunami vulnerability in the Ligurian
Sea: case study of the 1979 October 16 Nice international airport submarine landslide and
of identified geological mass failures. Geophys. J. Int., 181(2):724–740, mar 2010. 48, 49
[87] R. S. Johnson. Camassa-Holm, Korteweg-de Vries and related models for water waves. J.
Fluid Mech., 455:63–82, 2002. 57
[88] A. Kabbaj. Contribution à l’étude du passage des ondes de gravité et de la génération
des ondes internes sur un talus, dans le cadre de la théorie de l’eau peu profonde. Thèse,
Université Scientifique et Médicale de Grenoble, 1985. 41, 44
[89] M. Kazolea and A. I. Delis. A well-balanced shock-capturing hybrid finite volume-finite
difference numerical scheme for extended 1D Boussinesq models. Appl. Numer. Math.,
67:167–186, 2013. 6
[90] G. Khakimzyanov and D. Dutykh. On supraconvergence phenomenon for second order
centered finite differences on non-uniform grids. J. Comp. Appl. Math., 326:1–14, dec 2017.
16, 17
[91] G. S. Khakimzyanov, D. Dutykh, and Z. I. Fedotova. Dispersive shallow water wave modelling. Part III: Model derivation on a globally spherical geometry. Submitted, pages 1–40,
2017. 56, 57
[92] G. S. Khakimzyanov, D. Dutykh, Z. I. Fedotova, and D. E. Mitsotakis. Dispersive shallow
water wave modelling. Part I: Model derivation on a globally flat space. Submitted, pages
1–40, 2017. 5, 7, 10, 11, 44, 46
[93] G. S. Khakimzyanov, D. Dutykh, and O. Gusev. Dispersive shallow water wave modelling.
Part IV: Numerical simulation on a globally spherical geometry. Submitted, pages 1–40,
2017. 57
[94] G. S. Khakimzyanov, D. Dutykh, D. E. Mitsotakis, and N. Y. Shokina. Numerical solution of conservation laws on moving grids. Submitted, pages 1–28, 2017. 16, 21, 26, 29, 48
[95] J. W. Kim, K. J. Bai, R. C. Ertekin, and W. C. Webster. A derivation of the Green-Naghdi equations for irrotational flows. J. Eng. Math., 40(1):17–42, 2001. 57
[96] J. W. Kim and R. C. Ertekin. A numerical study of nonlinear wave interaction in regular and irregular seas: irrotational Green-Naghdi model. Marine Structures, 13(4-5):331–347, jul 2000. 16
[97] H. O. Kreiss and G. Scherer. Method of lines for hyperbolic equations. SIAM Journal on Numerical Analysis, 29:640–646, 1992. 6
[98] A. A. Kurkin, S. V. Semin, and Y. A. Stepanyants. Transformation of surface waves over a bottom step. Izvestiya, Atmospheric and Oceanic Physics, 51(2):214–223, mar 2015. 40, 41
[99] Z. Lai, C. Chen, G. W. Cowles, and R. C. Beardsley. A nonhydrostatic version of FVCOM: 1. Validation experiments. J. Geophys. Res., 115(C11):C11010, nov 2010. 41
[100] O. Le Métayer, S. Gavrilyuk, and S. Hank. A numerical scheme for the Green-Naghdi model. J. Comp. Phys., 229(6):2034–2045, 2010. 6
[101] D. Y. Le Roux. Spurious inertial oscillations in shallow-water models. J. Comp. Phys., 231(24):7959–7987, oct 2012. 32
[102] E. K. Lindstrøm, G. K. Pedersen, A. Jensen, and S. Glimsdal. Experiments on slide generated waves in a 1:500 scale fjord model. Coastal Engineering, 92:12–23, oct 2014. 48
[103] P. L.-F. Liu, T.-R. Wu, F. Raichlen, C. E. Synolakis, and J. C. Borrero. Runup and rundown generated by three-dimensional sliding masses. J. Fluid Mech., 536(1):107–144, jul 2005. 48
[104] F. Løvholt, G. Pedersen, and G. Gisler. Oceanic propagation of a potential tsunami from the La Palma Island. J. Geophys. Res., 113(C9):C09026, sep 2008. 49
[105] P. Lynett and P. L. F. Liu. A numerical study of submarine-landslide-generated waves and run-up. Proc. R. Soc. A, 458(2028):2885–2910, dec 2002. 48
[106] O. S. Madsen and C. C. Mei. The transformation of a solitary wave over an uneven bottom. J. Fluid Mech., 39(04):781–791, dec 1969. 41
[107] P. A. Madsen, H. B. Bingham, and H. Liu. A new Boussinesq method for fully nonlinear waves from shallow to deep water. J. Fluid Mech., 462:1–30, 2002. 37
[108] P. A. Madsen, R. Murray, and O. R. Sorensen. A new form of the Boussinesq equations with improved linear dispersion characteristics. Coastal Engineering, 15:371–388, 1991. 6
[109] P. A. Madsen and O. R. Sorensen. A new form of the Boussinesq equations with improved linear dispersion characteristics. Part 2. A slowly-varying bathymetry. Coastal Engineering, 18:183–204, 1992. 6
[110] S. V. Manoylin. Some experimental and theoretical methods of estimation of tsunami wave action on hydro-technical structures and seaports. Technical report, Siberian Branch of Computing Center, Krasnoyarsk, 1989. 38, 39
[111] T. Maxworthy. Experiments on collisions between solitary waves. J. Fluid Mech, 76:177–185, 1976. 37, 38, 39
[112] J. W. Miles and R. Salmon. Weakly dispersive nonlinear gravity waves. J. Fluid Mech., 157:519–531, 1985. 57
[113] S. M. Mirie and C. H. Su. Collision between two solitary waves. Part 2. A numerical study. J. Fluid Mech., 115:475–492, 1982. 37
[114] D. Mitsotakis, B. Ilan, and D. Dutykh. On the Galerkin/Finite-Element Method for the Serre Equations. J. Sci. Comput., 61(1):166–195, feb 2014. 6, 15
[115] E. Pelinovsky, B. H. Choi, T. Talipova, S. B. Wood, and D. C. Kim. Solitary wave transformation on the underwater step: Asymptotic theory and numerical experiments. Applied
Mathematics and Computation, 217(4):1704–1718, oct 2010. 41, 44, 47
[116] D. H. Peregrine. Calculations of the development of an undular bore. J. Fluid Mech.,
25(02):321–330, mar 1966. 6, 31, 33, 34
[117] D. H. Peregrine. Long waves on a beach. J. Fluid Mech., 27:815–827, 1967. 5, 16
[118] D. H. Peregrine. Water-Wave Impact on Walls. Annu. Rev. Fluid Mech., 35:23–43, 2003.
36
[119] B. Ranguelov, S. Tinti, G. Pagnoni, R. Tonini, F. Zaniboni, and A. Armigliato. The
nonseismic tsunami observed in the Bulgarian Black Sea on 7 May 2007: Was it due to a
submarine landslide? Geophys. Res. Lett., 35(18):L18613, sep 2008. 48
[120] S. C. Reddy and L. N. Trefethen. Stability of the method of lines. Numerische Mathematik,
62(1):235–267, 1992. 6
[121] G. Sadaka. Solution of 2D Boussinesq systems with FreeFem++: the flat bottom case.
Journal of Numerical Mathematics, 20(3-4):303–324, jan 2012. 5
[122] A. A. Samarskii. The Theory of Difference Schemes. CRC Press, New York, 2001. 23, 26
[123] W. E. Schiesser. Method of lines solution of the Korteweg-de vries equation. Computers
Mathematics with Applications, 28(10-12):147–154, 1994. 6
[124] F. J. Seabra-Santos, D. P. Renouard, and A. M. Temperville. Numerical and Experimental
study of the transformation of a Solitary Wave over a Shelf or Isolated Obstacle. J. Fluid
Mech, 176:117–134, 1987. 41, 42, 43, 45
[125] F. J. Seabra-Santos, A. M. Temperville, and D. P. Renouard. On the weak interaction of
two solitary waves. Eur. J. Mech. B/Fluids, 8(2):103–115, 1989. 37
[126] F. Serre. Contribution à l’étude des écoulements permanents et variables dans les canaux.
La Houille blanche, 8:830–872, 1953. 5
[127] F. Serre. Contribution to the study of long irrotational waves. La Houille blanche, 3:374–388,
1956. 5
[128] L. F. Shampine. ODE solvers and the method of lines. Numerical Methods for Partial
Differential Equations, 10(6):739–755, 1994. 6
[129] Y. I. Shokin, Y. V. Sergeeva, and G. S. Khakimzyanov. Predictor-corrector scheme for the
solution of shallow water equations. Russ. J. Numer. Anal. Math. Modelling, 21(5):459–479,
jan 2006. 6, 21, 22
[130] G. Söderlind and L. Wang. Adaptive time-stepping and computational stability. J. Comp.
Appl. Math., 185(2):225–243, 2006. 56
[131] O. R. Sørensen, H. A. Schäffer, and L. S. Sørensen. Boussinesq-type modelling using an
unstructured finite element technique. Coastal Engineering, 50(4):181–198, feb 2004. 6
[132] C. H. Su and R. M. Mirie. On head-on collisions between two solitary waves. J. Fluid
Mech., 98:509–525, 1980. 37, 38
[133] D. R. Tappin, P. Watts, and S. T. Grilli. The Papua New Guinea tsunami of 17 July 1998:
anatomy of a catastrophic event. Nat. Hazards Earth Syst. Sci., 8:243–266, 2008. 49
[134] P. D. Thomas and C. K. Lombart. Geometric conservation law and its application to flow
computations on moving grid. AIAA Journal, 17(10):1030–1037, 1979. 5
[135] A. N. Tikhonov and A. A. Samarskii. Homogeneous difference schemes. Zh. vych. mat.,
1(1):5–63, 1961. 16
[136] A. N. Tikhonov and A. A. Samarskii. Homogeneous difference schemes on non-uniform nets.
Zh. vych. mat., 2(5):812–832, 1962. 16
[137] S. Tinti, E. Bortolucci, and C. Vannini. A Block-Based Theoretical Model Suited to Gravitational Sliding. Natural Hazards, 16(1):1–28, 1997. 48
[138] J. Touboul and E. Pelinovsky. Bottom pressure distribution under a solitonic wave reflecting
on a vertical wall. Eur. J. Mech. B/Fluids, 48:13–18, nov 2014. 37
[139] M. Walkley and M. Berzins. A finite element method for the two-dimensional extended
Boussinesq equations. Int. J. Num. Meth. Fluids, 39(10):865–885, aug 2002. 6
[140] S. N. Ward. Landslide tsunami. J. Geophysical Res., 106:11201–11215, 2001. 48
[141] P. Watts, S. T. Grilli, J. T. Kirby, G. J. Fryer, and D. R. Tappin. Landslide tsunami case
studies using a Boussinesq model and a fully nonlinear tsunami generation model. Natural
Hazards And Earth System Science, 3(5):391–402, 2003. 48
[142] P. Watts, F. Imamura, and S. T. Grilli. Comparing model simulations of three benchmark
tsunami generation cases. Science of Tsunami Hazards, 18(2):107–123, 2000. 48
[143] G. Wei and J. T. Kirby. Time-Dependent Numerical Code for Extended Boussinesq Equations. J. Waterway, Port, Coastal and Ocean Engineering, 121(5):251–261, sep 1995. 6
[144] N. N. Zagryadskaya, S. V. Ivanova, L. S. Nudner, and A. I. Shoshin. Action of long waves
on a vertical obstacle. Bulletin of VNIIG, 138:94–101, 1980. 38, 39
[145] V. E. Zakharov. What Is Integrability? Springer Series in Nonlinear Dynamics, 1991. 15
[146] Y. Zhang, A. B. Kennedy, N. Panda, C. Dawson, and J. J. Westerink. Boussinesq-Green-Naghdi rotational water wave theory. Coastal Engineering, 73:13–27, mar 2013. 42
[147] B. B. Zhao, R. C. Ertekin, and W. Y. Duan. A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations. J. Comput. Phys., 283:129–147, feb 2015.
6
[148] M. I. Zheleznyak. Influence of long waves on vertical obstacles. In E. N. Pelinovsky, editor,
Tsunami Climbing a Beach, pages 122–139. Applied Physics Institute Press, Gorky, 1985.
39
[149] M. I. Zheleznyak and E. N. Pelinovsky. Physical and mathematical models of the tsunami
climbing a beach. In E. N. Pelinovsky, editor, Tsunami Climbing a Beach, pages 8–34.
Applied Physics Institute Press, Gorky, 1985. 16
G. Khakimzyanov: Institute of Computational Technologies, Siberian Branch of the
Russian Academy of Sciences, Novosibirsk 630090, Russia
E-mail address: [email protected]
D. Dutykh: LAMA, UMR 5127 CNRS, Université Savoie Mont Blanc, Campus Scientifique, F-73376 Le Bourget-du-Lac Cedex, France
E-mail address: [email protected]
URL: http://www.denys-dutykh.com/
O. Gusev: Institute of Computational Technologies, Siberian Branch of the Russian
Academy of Sciences, Novosibirsk 630090, Russia
E-mail address: [email protected]
N. Yu. Shokina: Institute of Computational Technologies, Siberian Branch of the
Russian Academy of Sciences, Novosibirsk 630090, Russia
E-mail address: [email protected]
URL: https://www.researchgate.net/profile/Nina_Shokina/
| 5 |
Beat by Beat:
Classifying Cardiac Arrhythmias with Recurrent Neural Networks
arXiv:1710.06319v2 [cs.LG] 24 Oct 2017
Patrick Schwab, Gaetano C Scebba, Jia Zhang, Marco Delai, Walter Karlen
Mobile Health Systems Lab, Department of Health Sciences and Technology
ETH Zurich, Switzerland
Abstract
With tens of thousands of electrocardiogram (ECG)
records processed by mobile cardiac event recorders every
day, heart rhythm classification algorithms are an important tool for the continuous monitoring of patients at risk.
We utilise an annotated dataset of 12,186 single-lead ECG
recordings to build a diverse ensemble of recurrent neural networks (RNNs) that is able to distinguish between
normal sinus rhythms, atrial fibrillation, other types of arrhythmia and signals that are too noisy to interpret. In
order to ease learning over the temporal dimension, we introduce a novel task formulation that harnesses the natural
segmentation of ECG signals into heartbeats to drastically
reduce the number of time steps per sequence. Additionally, we extend our RNNs with an attention mechanism that
enables us to reason about which heartbeats our RNNs focus on to make their decisions. Through the use of attention, our model maintains a high degree of interpretability,
while also achieving state-of-the-art classification performance with an average F1 score of 0.79 on an unseen test
set (n=3,658).
1. Introduction
Cardiac arrhythmias are a heterogenous group of conditions that is characterised by heart rhythms that do not
follow a normal sinus pattern. One of the most common arrhythmias is atrial fibrillation (AF) with an age-dependent population prevalence of 2.3 - 3.4% [1]. Due
to the increased mortality associated with arrhythmias, receiving a timely diagnosis is of paramount importance for
patients [1, 2]. To diagnose cardiac arrhythmias, medical
professionals typically consider a patient’s electrocardiogram (ECG) as one of the primary factors [2]. In the past,
clinicians recorded these ECGs mainly using multi-lead
clinical monitors or Holter devices. However, the recent
advent of mobile cardiac event recorders has given patients
the ability to remotely record short ECGs using devices
with a single lead.
We propose a machine-learning approach based on recurrent neural networks (RNNs) to differentiate between
various types of heart rhythms in this more challenging setting with just a single lead and short ECG record lengths.
To ease learning of dependencies over the temporal dimension, we introduce a novel task formulation that harnesses
the natural beat-wise segmentation of ECG signals. In addition to utilising several heartbeat features that have been
shown to be highly discriminative in previous works, we
also use stacked denoising autoencoders (SDAE) [3] to
capture differences in morphological structure. Furthermore, we extend our RNNs with a soft attention mechanism [4–7] that enables us to reason about which ECG
segments the RNNs prioritise for their decision making.
2. Methodology
Our cardiac rhythm classification pipeline consists of
multiple stages (figure 1). The core idea of our setup is
to extract a diverse set of features from the sequence of
heartbeats in an ECG record to be used as input features
to an ensemble of RNNs. We blend the individual models’ predictions into a per-class classification score using a
multilayer perceptron (MLP) with a softmax output layer.
The following paragraphs explain the stages shown in figure 1 in more detail.
ECG Dataset. We use the dataset of the PhysioNet Computing in Cardiology (CinC) 2017 challenge [8]
which contains 12,186 unique single-lead ECG records of
varying length. Experts annotated each of these ECGs
as being either a normal sinus rhythm, AF, an other arrhythmia or too noisy to classify.

[Figure 1. An overview of our ECG classification pipeline: ECG → Preprocessing (Normalise, Segment) → Feature Extraction → Level 1 Models (Model 1, ..., Model n) → Level 2 Blender → Classification (Normal, AF, Other, Noise).]

The challenge organisers
keep 3,658 (30%) of these ECG records private as a test
set. Additionally, we hold out a non-stratified random subset of 20% of the public dataset as a validation set. For
some RNN configurations, we further augment the training data with labelled samples extracted from other PhysioNet databases [9–12] in order to even out misbalanced
class sizes in the training set. As an additional measure
against the imbalanced class distribution of the dataset, we
weight each training sample’s contribution to the loss function to be inversely proportional to its class’ prevalence in
the overall dataset.
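As an illustrative sketch (ours, not taken from the paper's code), such inverse-prevalence sample weights can be computed as follows; the label alphabet and the normalisation constant are assumptions.

import numpy as np

def inverse_prevalence_weights(labels):
    """Per-sample loss weights inversely proportional to the prevalence of
    each class in the training set (illustrative sketch only)."""
    classes, counts = np.unique(labels, return_counts=True)
    class_weight = {c: len(labels) / (len(classes) * n) for c, n in zip(classes, counts)}
    return np.array([class_weight[y] for y in labels])

# usage: weight each record's contribution to the training loss
labels = np.array(['N', 'N', 'N', 'A', 'O', 'N', '~'])
sample_weights = inverse_prevalence_weights(labels)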
Normalisation. Prior to segmentation, we normalise the
ECG recording to have a mean value of zero and a standard
deviation of one. We do not apply any additional filters as
all ECGs were bandpass-filtered by the recording device.
Segmentation. Following normalisation, we segment
the ECG into a sequence of heartbeats. We decide to reformulate the given task of classifying arrhythmias as a sequence classification task over heartbeats rather than over
raw ECG readings. The motivation behind the reformulation is that it significantly reduces the number of time steps
through which the error signal of our RNNs has to propagate. On the training set, the reformulation reduces the
mean number of time steps per ECG from 9000 to just 33.
To perform the segmentation, we use a customised QRS
detector based on Pan–Tompkins [13] that identifies R-peaks in the ECG recording. We extend their algorithm by
adapting the threshold with a moving average of the ECG
signal to be more resilient against the commonly encountered short bursts of noise. For the purpose of this work,
we define heartbeats using a symmetric fixed size window
with a total length of 0.66 seconds around R-peaks. We
pass the extracted heartbeat sequence in its original order
to the feature extraction stage.
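For illustration only, a heavily simplified stand-in for this segmentation stage is sketched below; the actual detector in the paper is a customised Pan-Tompkins algorithm with filtering, differentiation and integration stages, so the moving-average threshold, the refractory period and the sampling-rate handling here are assumptions, not a reimplementation.

import numpy as np

def segment_heartbeats(ecg, fs, window_s=0.66):
    """Detect R-peak candidates with a moving-average-based adaptive threshold
    and cut a fixed symmetric window around each peak (illustrative sketch)."""
    kernel = np.ones(fs) / fs
    moving_avg = np.convolve(np.abs(ecg), kernel, mode='same')
    threshold = moving_avg + 1.5 * np.std(ecg)          # crude adaptive threshold

    half = int(window_s * fs / 2)                        # half of the 0.66 s window
    refractory = int(0.25 * fs)                          # minimum distance between beats
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold[i] and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1] \
                and i - last >= refractory:
            peaks.append(i)
            last = i
    beats = [ecg[p - half:p + half] for p in peaks if p - half >= 0 and p + half <= len(ecg)]
    return np.array(peaks), np.array(beats)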
Feature Extraction. We extract a diverse set of features
from each heartbeat in an ECG recording. Specifically, we
extract the time since the last heartbeat (δRR), the relative
wavelet energy (RWE) over five frequency bands, the total wavelet energy (TWE) over those frequency bands, the
R amplitude, the Q amplitude, QRS duration and wavelet
entropy (WE). Previous works demonstrated the efficacy
of all of these features in discriminating cardiac arrhythmias from normal heart rhythms [14–18]. In addition to
the aforementioned features, we also train two SDAEs on
the heartbeats in an unsupervised manner with the goal of
learning more nuanced differences in morphology of individual heartbeats. We train one SDAE on the extracted
heartbeats of the training set and the other on their wavelet
coefficients. We then use the encoding side of the SDAEs
to extract low-dimensional embeddings of each heartbeat
and each heartbeat’s wavelet coefficients to be used as additional input features. Finally, we concatenate all ex-
tracted features into a single feature vector per heartbeat
and pass them to the level 1 models in original heartbeat
sequence order.
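One per-beat feature vector could be assembled roughly as follows (an illustrative sketch; the wavelet family, the decomposition depth, the Q-amplitude and QRS-duration proxies and the optional encoder argument are assumptions of this example, not details confirmed by the paper):

import numpy as np
import pywt

def beat_features(beat, prev_r, this_r, fs, encoder=None):
    """Concatenate delta-RR, total/relative wavelet energies, wavelet entropy,
    R and Q amplitudes, a crude QRS-duration proxy and (optionally) an
    autoencoder embedding into one feature vector. Illustrative sketch only."""
    d_rr = (this_r - prev_r) / fs                              # time since last beat
    level = min(5, pywt.dwt_max_level(len(beat), pywt.Wavelet('db4').dec_len))
    coeffs = pywt.wavedec(beat, 'db4', level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail-band energies
    twe = energies.sum()                                       # total wavelet energy
    rwe = energies / max(twe, 1e-12)                           # relative wavelet energy
    we = -np.sum(rwe * np.log(rwe + 1e-12))                    # wavelet entropy
    r_idx = len(beat) // 2                                     # R-peak at window centre
    r_amp = beat[r_idx]
    q_amp = beat[max(0, r_idx - int(0.05 * fs)):r_idx].min()   # minimum just before R
    qrs = 2 * np.argmax(np.abs(beat[r_idx:]) < 0.1 * abs(r_amp)) / fs  # crude proxy
    feats = np.concatenate(([d_rr, twe, we, r_amp, q_amp, qrs], rwe))
    if encoder is not None:                                    # optional SDAE embedding
        feats = np.concatenate((feats, encoder(beat)))
    return feats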
Level 1 Models. We build an ensemble of level 1 models to classify the sequence of per-beat feature vectors. To
increase the diversity within our ensemble, we train RNNs
in various binary classification settings and with different
hyperparameters. We use RNNs with 1 - 5 recurrent layers
that consist of either Gated Recurrent Units (GRU) [19] or
Bidirectional Long Short-Term Memory (BLSTM) units
[20], followed by an optional attention layer, 1 - 2 forward
layers and a softmax output layer. Additionally, we infer a
nonparametric Hidden Semi-Markov Model (HSMM) [21]
with 64 initial states for each class in an unsupervised setting. In total, our ensemble of level 1 models consists of
15 RNNs and 4 HSMMs. We concatenate the ECG’s normalised log-likelihoods under the per-class HSMMs and
the RNNs’ softmax outputs into a single prediction vector.
We pass the prediction vector of the level 1 models to the
level 2 blender model.
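One level-1 sequence classifier with roughly the reported hyperparameters could be sketched, for example, with tf.keras as below; the masking, the output pooling and the optional attention layer of the paper's actual models may differ, and the HSMM components of the ensemble are omitted entirely.

from tensorflow.keras import layers, models

def build_level1_rnn(n_features, n_classes=2, n_recurrent=5, units=80,
                     dropout=0.35, recurrent_dropout=0.65, bidirectional=True):
    """Stack of (bidirectional) LSTM layers over per-beat feature vectors,
    followed by a forward layer and a softmax output (illustrative sketch)."""
    inputs = layers.Input(shape=(None, n_features))       # variable-length beat sequences
    x = layers.Masking()(inputs)                           # ignore zero-padded beats
    for i in range(n_recurrent):
        last = (i == n_recurrent - 1)
        rnn = layers.LSTM(units, return_sequences=not last,
                          dropout=dropout, recurrent_dropout=recurrent_dropout)
        x = layers.Bidirectional(rnn)(x) if bidirectional else rnn(x)
    x = layers.Dense(units, activation='relu')(x)
    outputs = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)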
Level 2 Blender. We use blending [22] to combine the
predictions of our level 1 models and a set of ECG-wide
features into a final per-class classification score. The additional features are the RWE and WE over the whole ECG
and the absolute average deviation (AAD) of the WE and
δRR of all beats. We employ a MLP with a softmax output layer as our level 2 blender model. In order to avoid
overfitting to the training set, we train the MLP on the validation set.
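A minimal sketch of such a blender in tf.keras could look as follows; the hidden-layer size and dropout rate are placeholders, since the paper selects them by Bayesian optimisation (see below).

from tensorflow.keras import layers, models

def build_blender(n_level1_outputs, n_extra_features, n_classes=4):
    """MLP with a softmax output over the concatenated level-1 prediction
    vector and the ECG-wide features (illustrative sketch only)."""
    inputs = layers.Input(shape=(n_level1_outputs + n_extra_features,))
    x = layers.Dense(64, activation='relu')(inputs)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)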
Hyperparameter Selection. To select the hyperparameters of our level 1 RNNs, we performed a grid search on
the range of 0 - 75% for the dropout and recurrent dropout
percentages, 60 - 512 for the number of units per hidden
layer and 1 - 5 for the number of recurrent layers. We
found that RNNs trained with 35% dropout, 65% recurrent dropout, 80 units per hidden layer and 5 recurrent layers (plus an additional attention layer) achieve consistently
strong results across multiple binary classification settings.
For our level 2 blender model, we utilise Bayesian optimisation [23] to select the number of layers, number of hidden units per layer, dropout and number of training epochs.
We perform a 5-fold cross validation on the validation set
to select the blender model’s hyperparameters.
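For illustration, a Bayesian optimisation loop with hyperopt [23] might look as follows; the search ranges mirror the quantities named above, while the objective cross_validated_loss is a hypothetical placeholder for the 5-fold cross-validation on the validation set described in the text.

from hyperopt import fmin, tpe, hp, Trials

space = {
    'n_layers': hp.choice('n_layers', [1, 2, 3]),
    'n_units': hp.quniform('n_units', 32, 256, 16),
    'dropout': hp.uniform('dropout', 0.0, 0.75),
    'epochs': hp.quniform('epochs', 10, 200, 10),
}

def objective(params):
    # hypothetical placeholder: trains the blender with `params` and returns
    # the mean 5-fold cross-validation loss on the held-out validation set
    return cross_validated_loss(params)

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=Trials())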
2.1. Attention over Heartbeats
Attention mechanisms have been shown to allow for
greater interpretability of neural networks in a variety of
tasks in computer vision and natural language processing [4–7]. In this work, we apply soft attention over the
heartbeats contained in ECG signals in order to gain a
deeper understanding of the decision-making process of
our RNNs. Consider the case of an RNN that is processing
a sequence of T heartbeats. The topmost recurrent layer
[Figure 2 panels: left, model trained "Normal vs. all", classification confidence Normal: 94 %; right, model trained "Other vs. all", classification confidence Other: 67 %. Each panel plots the attention values at above the ECG trace, with time in seconds.]
Figure 2. A visualisation of the attention values at (top) of two different RNNs over two sample ECG recordings (bottom).
The graphs on top of the ECG recordings show the attention values at associated with each identified heartbeat (dashed
line). The labels in the left and right corners of the attention value graphs show the settings the model was trained for and
their classification confidence, respectively. The recording on the left (A02149) represents a normal sinus rhythm. Due to
the regular heart rhythm in the ECG, a distinctive pattern of approximately equally weighted attention on each heartbeat
emerges from our RNN that was trained to distinguish between normal sinus rhythms and all other types of rhythms. The
recording on the right (A04661) is labelled as an other arrhythmia. The RNN trained to identify other arrhythmias focuses
primarily on a sudden, elongated pause in the heart rhythm to decide that the record is most likely an other arrhythmia.
outputs a hidden state ht at every time step t ∈ [1, T ] of
the sequence. We extend some of our RNNs with additive
soft attention over the hidden states ht to obtain a context
vector c that attenuates the most informative hidden states
ht of a heartbeat sequence. Based on the definition in [6],
we use the following set of equations:
ut = tanh(Wbeat ht + bbeat)        (1)
at = softmax(utᵀ ubeat)        (2)
c = Σt at ht        (3)
Where equation (1) is a single-layer MLP with a weight
matrix Wbeat and bias bbeat to obtain ut as a hidden representation of ht [6]. In equation (2), we calculate the attention factors at for each heartbeat by computing a softmax over the dot-product similarities of every heartbeat’s
ut to the heartbeat context vector ubeat . ubeat corresponds
to a hidden representation of the most informative heartbeat [6]. We jointly optimise Wbeat , bbeat and ubeat with
the other RNN parameters during training. In figure 2, we
showcase two examples of how qualitative analysis of the
attention factors at of equation (2) provides a deeper understanding of our RNNs’ decision making.
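A compact numpy sketch of equations (1)-(3) (illustrative only; the shapes of the parameters and their initialisation are assumptions) is:

import numpy as np

def soft_attention(h, W_beat, b_beat, u_beat):
    """Additive soft attention over hidden states h of shape (T, d):
    (1) project each state, (2) score it against the context vector u_beat
    with a softmax, (3) return the attention-weighted context vector."""
    u = np.tanh(h @ W_beat + b_beat)          # (1): hidden representation, shape (T, k)
    scores = u @ u_beat                        # dot-product similarity, shape (T,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                               # (2): attention factors a_t
    c = a @ h                                  # (3): weighted sum of hidden states
    return c, a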
3. Related Work
Our work builds on a long history of research in detecting cardiac arrhythmias from ECG records by making use of features that have been shown to be highly discriminative in distinguishing certain arrhythmias from normal heart rhythms [14–18]. Recently, Rajpurkar et al. proposed a 34-layer convolutional neural network (CNN) to reach cardiologist-level performance in classifying a large set of arrhythmias from mobile cardiac event recorder data [24]. In contrast, we achieve state-of-the-art performance with significantly fewer trainable parameters by harnessing the natural heartbeat segmentation of ECGs and discriminative features from previous works. Additionally, we pay consideration to the fact that interpretability remains a challenge in applying machine learning to the medical domain [25] by extending our models with an attention mechanism that enables medical professionals to reason about which heartbeats contributed most to the decision-making process of our RNNs.
4. Results and Conclusion
We present a machine-learning approach to distinguishing between multiple types of heart rhythms. Our approach
utilises an ensemble of RNNs to jointly identify temporal
and morphological patterns in segmented ECG recordings
of any length. In detail, our approach reaches an average
F1 score of 0.79 on the private test set of the PhysioNet
CinC Challenge 2017 (n = 3, 658) with class-wise F1
scores of 0.90, 0.79 and 0.68 for normal rhythms, AF and
other arrhythmias, respectively. On top of its state-of-theart performance, our approach maintains a high degree of
interpretability through the use of a soft attention mechanism over heartbeats. In the spirit of open research, we
make an implementation of our cardiac rhythm classification system available through the PhysioNet 2017 Open
Source Challenge.
Future Work. Based on our discussions with a cardiologist, we hypothesise that the accuracy of our models could be further improved by incorporating contextual
information, such as demographic information, data from
other clinical assessments and behavioral aspects.
Acknowledgements
This work was partially funded by the Swiss National
Science Foundation (SNSF) project No. 167302 within
the National Research Program (NRP) 75 “Big Data” and
SNSF project No. 150640. We thank Prof. Dr. med. Firat Duru for providing valuable insights into the decision-making process of cardiologists.
References
[1]
Ball J, Carrington MJ, McMurray JJ, Stewart S. Atrial fibrillation: Profile and burden of an evolving epidemic in
the 21st century. International Journal of Cardiology 2013;
167(5):1807–1824.
[2] Camm AJ, Kirchhof P, Lip GY, Schotten U, Savelieva I,
Ernst S, Van Gelder IC, Al-Attar N, Hindricks G, Prendergast B, et al. Guidelines for the management of atrial fibrillation. European Heart Journal 2010;31:2369–2429.
[3] Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol
PA. Stacked denoising autoencoders: Learning useful
representations in a deep network with a local denoising
criterion. Journal of Machine Learning Research 2010;
11(Dec):3371–3408.
[4] Bahdanau D, Cho K, Bengio Y. Neural machine translation
by jointly learning to align and translate. In International
Conference on Learning Representations, 2015.
[5] Xu K, Ba J, Kiros R, Cho K, Courville A, Salakhudinov R,
Zemel R, Bengio Y. Show, attend and tell: Neural image
caption generation with visual attention. In International
Conference on Machine Learning. 2015; 2048–2057.
[6] Yang Z, Yang D, Dyer C, He X, Smola AJ, Hovy EH. Hierarchical attention networks for document classification.
In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language
Technologies. 2016; 1480–1489.
[7] Zhang Z, Xie Y, Xing F, McGough M, Yang L. MDNet: A Semantically and Visually Interpretable Medical
Image Diagnosis Network. In International Conference on
Computer Vision and Pattern Recognition, arXiv preprint
arXiv:1707.02485, 2017.
[8] Clifford GD, Liu CY, Moody B, Lehman L, Silva I, Li Q,
Johnson AEW, Mark RG. AF classification from a short
single lead ECG recording: The Physionet Computing in
Cardiology Challenge 2017. In Computing in Cardiology,
2017.
[9] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM,
Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng CK,
Stanley HE. PhysioBank, PhysioToolkit, and PhysioNet:
Components of a New Research Resource for Complex
Physiologic Signals. Circulation 2000;101(23):e215–e220.
[10] Moody GB, Mark RG. The impact of the MIT-BIH arrhythmia database. IEEE Engineering in Medicine and Biology
Magazine 2001;20(3):45–50.
[11] Moody G. A new method for detecting atrial fibrillation using RR intervals. In Computers in Cardiology. IEEE, 1983;
227–230.
[12] Greenwald SD, Patil RS, Mark RG. Improved detection and
classification of arrhythmias in noise-corrupted electrocardiograms using contextual information. In Computers in
Cardiology. IEEE, 1990; 461–464.
[13] Pan J, Tompkins WJ. A real-time QRS detection algorithm. IEEE Transactions on Biomedical Engineering 1985;
3:230–236.
[14] Sarkar S, Ritscher D, Mehra R. A detector for a chronic
implantable atrial tachyarrhythmia monitor. IEEE Transactions on Biomedical Engineering 2008;55(3):1219–1224.
[15] Tateno K, Glass L. Automatic detection of atrial fibrillation
using the coefficient of variation and density histograms of
RR and δRR intervals. Medical and Biological Engineering
and Computing 2001;39(6):664–671.
[16] Garcı́a M, Ródenas J, Alcaraz R, Rieta JJ. Application of
the relative wavelet energy to heart rate independent detection of atrial fibrillation. computer methods and programs
in biomedicine 2016;131:157–168.
[17] Ródenas J, Garcı́a M, Alcaraz R, Rieta JJ. Wavelet entropy
automatically detects episodes of atrial fibrillation from
single-lead electrocardiograms. Entropy 2015;17(9):6179–
6199.
[18] Alcaraz R, Vayá C, Cervigón R, Sánchez C, Rieta J.
Wavelet sample entropy: A new approach to predict termination of atrial fibrillation. In Computers in Cardiology.
IEEE, 2006; 597–600.
[19] Chung J, Gulcehre C, Cho K, Bengio Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Neural Information Processing Systems, Workshop
on Deep Learning, arXiv preprint arXiv:1412.3555, 2014.
[20] Graves A, Jaitly N, Mohamed Ar. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech
Recognition and Understanding, IEEE Workshop on. IEEE,
2013; 273–278.
[21] Johnson MJ, Willsky AS. Bayesian nonparametric hidden
semi-markov models. Journal of Machine Learning Research 2013;14(Feb):673–701.
[22] Wolpert DH. Stacked generalization. Neural networks
1992;5(2):241–259.
[23] Bergstra J, Yamins D, Cox DD. Hyperopt: A python library
for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science
Conference. 2013; 13–20.
[24] Rajpurkar P, Hannun AY, Haghpanahi M, Bourn C, Ng AY.
Cardiologist-level arrhythmia detection with convolutional
neural networks. arXiv preprint, arXiv:1707.01836, 2017.
[25] Cabitza F, Rasoini R, Gensini G. Unintended consequences
of machine learning in medicine. Journal of the American
Medical Association 2017;318(6):517–518.
Address for correspondence:
Patrick Schwab, ETH Zurich
Balgrist Campus, BAA D, Lengghalde 5, 8092 Zurich
[email protected]
| 9 |
arXiv:cs/0512057v1 [cs.PL] 14 Dec 2005
Resource Control for Synchronous
Cooperative Threads∗
Roberto M. Amadio
Université Paris 7 †
Silvano Dal Zilio
CNRS Marseille‡
7th May 2017
Abstract
We develop new methods to statically bound the resources needed for the execution of systems of concurrent, interactive threads. Our study is concerned with
a synchronous model of interaction based on cooperative threads whose execution
proceeds in synchronous rounds called instants. Our contribution is a system of compositional static analyses to guarantee that each instant terminates and to bound the
size of the values computed by the system as a function of the size of its parameters
at the beginning of the instant.
Our method generalises an approach designed for first-order functional languages
that relies on a combination of standard termination techniques for term rewriting
systems and an analysis of the size of the computed values based on the notion of
quasi-interpretation. We show that these two methods can be combined to obtain an
explicit polynomial bound on the resources needed for the execution of the system
during an instant.
As a second contribution, we introduce a virtual machine and a related bytecode, thus
producing a precise description of the resources needed for the execution of a system.
In this context, we present a suitable control flow analysis that allows us to formulate
the static analyses for resource control at the bytecode level.
1
Introduction
The problem of bounding the usage made by programs of their resources has already
attracted considerable attention. Automatic extraction of resource bounds has mainly focused on (first-order) functional languages starting from Cobham’s characterisation [18]
of polynomial time functions by bounded recursion on notation. Following work, see e.g.
∗ Work partially supported by ACI Sécurité Informatique CRISS.
† Laboratoire Preuves, Programmes et Systèmes, UMR-CNRS 7126.
‡ Laboratoire d’Informatique Fondamentale de Marseille, UMR-CNRS 6166
[8, 19, 21, 23], has developed various inference techniques that allow for efficient analyses
while capturing a sufficiently large range of practical algorithms.
Previous work [10, 24] has shown that polynomial time or space bounds can be obtained by
combining traditional termination techniques for term rewriting systems with an analysis
of the size of computed values based on the notion of quasi-interpretation. Thus, in a
nutshell, resource control relies on termination and bounds on data size.
This approach to resource control should be contrasted with traditional worst case execution time technology (see, e.g., [30]): the bounds are less precise but they apply to a
larger class of algorithms and are functional in the size of the input, which seems more
appropriate in the context of the applications we have in mind (see below). In another
direction, one may compare the approach with the one based on linear logic (see, e.g., [7]):
while in principle the linear logic approach supports higher-order functions, it does not
yet offer a user-friendly programming language.
In [3, 4], we have considered the problem of automatically inferring quasi-interpretations
in the space of multi-variate max-plus polynomials. In [1], we have presented a virtual
machine and a corresponding bytecode for a first-order functional language and shown
how size and termination annotations can be formulated and verified at the level of the
bytecode. In particular, we can derive from the verification an explicit polynomial bound
on the space required to execute a given bytecode.
In this work, we aim at extending and adapting these results to a concurrent framework.
As a starting point, we choose a basic model of parallel threads interacting on shared
variables. The kind of concurrency we consider is a cooperative one. This means that by
default a running thread cannot be preempted unless it explicitly decides to return the control to the scheduler. In preemptive threads, the opposite hypothesis is made: by default
a running thread can be preempted at any point unless it explicitly requires that a series
of actions is atomic. We refer to, e.g., [28] for an extended comparison of the cooperative
and preemptive models. Our viewpoint is pragmatic: the cooperative model is closer to
the sequential one and many applications are easier to program in the cooperative model
than in the preemptive one. Thus, as a first step, it makes sense to develop a resource
control analysis for the cooperative model.
The second major design choice is to assume that the computation is regulated by a notion
of instant. An instant lasts as long as a thread can make some progress in the current
instant. In other terms, an instant ends when the scheduler realizes that all threads are
either stopped, or waiting for the next instant, or waiting for a value that no thread can
produce in the current instant. Because of this notion of instant, we regard our model
as synchronous. Because the model includes a logical notion of time, it is possible for a
thread to react to the absence of an event.
The reaction to the absence of an event is typical of synchronous languages such as Esterel [9]. Boussinot et al. have proposed a weaker version of this feature where the
reaction to the absence happens in the following instant [13] and they have implemented
it in various programming environments based on C, Java, and Scheme [31]. Applications suited to this programming style include: event-driven applications, graphical user
interfaces, simulations (e.g. N-bodies problem, cellular automata, ad hoc networks), web
services, multiplayer online games, . . . Boussinot et al. have also advocated the relevance
of this concept for the programming of mobile code and demonstrated that the possibility
for a ‘synchronous’ mobile agent to react to the absence of an event is an added factor
of flexibility for programs designed for open distributed systems, whose behaviours are
inherently difficult to predict. These applications rely on data structure such as lists and
trees whose size needs to be controlled.
Recently, Boudol [12] has proposed a formalisation of this programming model. Our analysis will essentially focus on a small fragment of this model without higher-order functions,
and where the creation of fresh memory cells (registers) and the spawning of new threads
is only allowed at the very beginning of an instant. We believe that what is left is still expressive and challenging enough as far as resource control is concerned. Our analysis goes
in three main steps. A first step is to guarantee that each instant terminates (Section 3.1).
A second step is to bound the size of the computed values as a function of the size of the
parameters at the beginning of the instant (Section 3.2). A third step is to combine the
termination and size analyses. Here we show how to obtain polynomial bounds on the
space and time needed for the execution of the system during an instant as a function of
the size of the parameters at the beginning of the instant (Section 3.3).
A characteristic of our static analyses is that to a great extent they make abstraction of
the memory and the scheduler. This means that each thread can be analysed separately,
that the complexity of the analyses grows linearly in the number of threads, and that an
incremental analysis of a dynamically changing system of threads is possible. Preliminary
to these analyses, is a control flow analysis (Section 2.1) that guarantees that each thread
performs each read instruction (in its body code) at most once in an instant. This condition is instrumental to resource control. In particular, it allows to regard behaviours as
functions of their initial parameters and the registers they may read in the instant. Taking
this functional viewpoint, we are able to adapt the main techniques developed for proving
termination and size bounds in the first-order functional setting.
We point out that our static size analyses are not intended to predict the size of the system after arbitrarily many instants. This is a harder problem which in general requires
an understanding of the global behaviour of the system and/or stronger restrictions on the
programs we can write. For the language studied in this paper, we advocate a combination
of our static analyses with a dynamic controller that at the end of each instant checks the
size of the parameters of the system and may decide to stop some threads taking too much
space.
Along the way and in appendix A, we provide a number of programming examples illustrating how certain synchronous and/or concurrent programming paradigms can be
represented in our model. These examples suggest that the constraints imposed by the
static analyses are not too severe and that their verification can be automated.
As a second contribution, we describe a virtual machine and the related bytecode for our
programming model (Section 4). This provides a more precise description of the resources
needed for the execution of the systems we consider and opens the way to the verification
of resource bounds at the bytecode level, following the ‘typed assembly language’ approach
adopted in [1] for the purely functional fragment of the language. More precisely, we describe
a control flow analysis that allows us to recover the conditions for termination and size
bounds at bytecode level and we show that the control flow analysis is sufficiently liberal
to accept the code generated by a rather standard compilation function.
Proofs are available in appendix B.
2
A Model of Synchronous Cooperative Threads
A system of synchronous cooperative threads is described by (1) a list of mutually recursive
type and constructor definitions and (2) a list of mutually recursive function and behaviour
definitions relying on pattern matching. In this respect, the resulting programming language is reminiscent of Erlang [5], which is a practical language to develop concurrent
applications. The set of instructions a behaviour can execute is rather minimal. Indeed,
our language can be regarded as an intermediate code where, for instance, general pattern matching has been compiled into a nesting of if then else constructs and complex control
structures have been compiled into a simple tail-recursive form.
Types We denote type names with t, t′ , . . . and constructors with c, c′ , . . . We will also
denote with r, r′ , . . . constructors of arity 0 and of ‘reference’ type (see equation of kind (2)
below) and we will refer to them as registers (thus registers are constructors). The values
v, v ′, . . . computed by programs are first order terms built out of constructors. Types and
constructors are declared via recursive equations that may be of two kinds:
(1)  t = . . . | c of t1 , . . . , tn | . . .
(2)  t = Ref (t′ ) with . . . | r = v | . . .
In (1) we declare a type t with a constructor c of functional type (t1 , . . . , tn ) → t. In (2)
we declare a type t of registers referencing values of type t′ and a register r with initial
value v. As usual, type definitions can be mutually recursive (functional and reference
types can be intermingled) and it is assumed that all types and constructors are declared
exactly once. This means that we can associate a unique type with every constructor
and that with respect to this association we can say when a value is well-typed. For
instance, we may define the type nat of natural numbers in unary format by the equation
nat = z | s of nat and the type llist of linked lists of natural numbers by the equations
nlist = nil | cons of (nat, llist) and llist = Ref (nlist) with r = cons(z, r). The last definition
declares a register r of type llist with initial value the infinite (cyclic) list containing only
z’s.
Finally, we have a special behaviour type, beh. Elements of type beh do not return a value
but produce side effects. We denote with β either a regular type or beh.
Expressions We let x, y, . . . denote variables ranging over values. The size |v| of a value
v is defined by |c| = 0 and |c(v1 , . . . , vn )| = 1 + |v1 | + · · · + |vn |. In the following, we
will use the vectorial notation a to denote either a vector a1 , . . . , an or a sequence a1 · · · an
of elements. We use σ, σ ′ , . . . to denote a substitution [v/x], where v and x have the
same length. A pattern p is a well-typed term built out of constructors and variables. In
particular, a shallow linear pattern p is a pattern c(x1 , . . . , xn ), where c is a constructor of
arity n and the variables x1 , . . . , xn are all distinct. Expressions, e, and expression bodies,
eb, are defined as:
e ::= x | c(e1 , . . . , ek ) | f (e1 , . . . , en )
eb ::= e | match x with p then eb else eb
where f is a functional symbol of type (t1 , . . . , tn ) → t, specified by an equation of the
kind f (x1 , . . . , xn ) = eb, and where p is a shallow linear pattern.
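
As an aside, the size measure just defined is easy to transcribe into executable form. The following OCaml fragment is a minimal sketch under an ad hoc representation of constructor terms (the type value is ours, not part of the language definition):

type value = V of string * value list

(* |c| = 0 for a constant, |c(v1, ..., vn)| = 1 + |v1| + ... + |vn| otherwise *)
let rec size (V (_, args)) =
  if args = [] then 0
  else 1 + List.fold_left (fun s v -> s + size v) 0 args

let () =
  assert (size (V ("s", [V ("s", [V ("z", [])])])) = 2);        (* |s(s(z))| = 2      *)
  assert (size (V ("cons", [V ("z", []); V ("nil", [])])) = 1)  (* |cons(z, nil)| = 1 *)
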
A closed expression body eb evaluates to a value v according to the following standard
rules:
(e1)  r ⇓ r

(e2)  if e ⇓ v then c(e) ⇓ c(v)

(e3)  if e ⇓ v, f (x) = eb, and [v/x]eb ⇓ v′ then f (e) ⇓ v′

(e4)  if [v/x]eb1 ⇓ v′ then (match c(v) with c(x) then eb1 else eb2 ) ⇓ v′

(e5)  if eb2 ⇓ v′ and c ≠ d then (match c(v) with d(x) then eb1 else eb2 ) ⇓ v′
Since registers are constructors, rule (e1 ) is a special case of rule (e2 ); we keep the rule for
clarity.
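
To make these rules concrete, here is a small self-contained OCaml interpreter for the expression fragment. The encoding of terms, bodies and definitions below is our own illustration rather than the paper's concrete syntax; registers, being nullary constructors, need no dedicated case, in line with the remark on rule (e1):

type value = V of string * value list                  (* first-order constructor terms *)

type eb =                                               (* expression bodies *)
  | Var of string
  | Cons of string * eb list
  | App of string * eb list
  | Match of string * string * string list * eb * eb   (* match x with c(ys) then _ else _ *)

let rec eval defs env = function
  | Var x -> List.assoc x env
  | Cons (c, es) -> V (c, List.map (eval defs env) es)                        (* rule (e2) *)
  | App (f, es) ->                                                            (* rule (e3) *)
      let vs = List.map (eval defs env) es in
      let xs, body = List.assoc f defs in
      eval defs (List.combine xs vs) body
  | Match (x, c, ys, eb1, eb2) ->
      (match List.assoc x env with
       | V (d, vs) when d = c -> eval defs (List.combine ys vs @ env) eb1     (* rule (e4) *)
       | _ -> eval defs env eb2)                                              (* rule (e5) *)

let () =
  (* dble(n) = match n with s(n') then s(s(dble(n'))) else z, as in Example 6 below *)
  let defs =
    [ "dble", (["n"], Match ("n", "s", ["n'"],
                             Cons ("s", [Cons ("s", [App ("dble", [Var "n'"])])]),
                             Cons ("z", []))) ] in
  let two  = V ("s", [V ("s", [V ("z", [])])]) in
  let four = V ("s", [V ("s", [two])]) in
  assert (eval defs ["n", two] (App ("dble", [Var "n"])) = four)
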
Behaviours Some function symbols may return a thread behaviour b, b′ , . . . rather than
a value. In contrast to ‘pure’ expressions, a behaviour does not return a result but produces
side-effects by reading and writing registers. A behaviour may also affect the scheduling
status of the thread executing it. We denote with b, b′ , . . . behaviours defined as follows:
b ::= stop | f (e) | yield .b | next.f (e) | ̺ := e.b |
read ̺ with p1 ⇒ b1 | · · · | pn ⇒ bn | [ ] ⇒ f (e) |
match x with c(x) then b1 else b2
where: (i) f is a functional symbol of type t1 , . . . , tn → beh, defined by an equation
f (x) = b, (ii) ̺, ̺′ , . . . range over variables and registers, and (iii) p1 , . . . , pn are either
shallow linear patterns or variables. We also denote with [ ] a special symbol that will be
used in the default case of read expressions (see the paragraph Scheduler below). Note
that if the pattern pi is a variable then the following branches including the default one
can never be executed.
The effect of the various instructions is informally described as follows: stop, terminates
the executing thread for ever; yield .b, halts the execution and hands over the control to
the scheduler — the control should return to the thread later in the same instant and
execution resumes with b; f (e) and next.f (e) switch to another behaviour immediately
or at the beginning of the following instant; r := e.b, evaluates the expression e, assigns
its value to r and proceeds with the evaluation of b; read r with p1 ⇒ b1 | · · · | pn ⇒ bn |
[ ] ⇒ b, waits until the value of r matches one of the patterns p1 , . . . , pn (there could be no
delay) and yields the control otherwise; if at the end of the instant the thread is still
stuck waiting for a matching value then it starts the behaviour b in the following instant;
match v with p then b1 else b2 filters the value v according to the pattern p, it never blocks
the execution. Note that if p is a pattern and v is a value there is at most one matching
substitution σ such that v = σp.
Behaviour reduction is described by the 9 rules below. A reduction (b, s) →X (b′ , s′ ) means
that the behaviour b with store s runs an atomic sequence of actions till b′ , producing a
store s′ , and returning the control to the scheduler with status X. A status is a value in
{N, R, S, W } that represents one of the four possible states of a thread — N stands for next
(the thread will resume at the beginning of the next instant), R for run, S for stopped,
and W for wait (the thread is blocked on a read statement).
(b1)  (stop, s) →S (stop, s)

(b2)  (yield .b, s) →R (b, s)

(b3)  (next .f (e), s) →N (f (e), s)

(b4)  if ([v/x]b1 , s) →X (b′ , s′ ) then (match c(v) with c(x) then b1 else b2 , s) →X (b′ , s′ )

(b5)  if (b2 , s) →X (b′ , s′ ) and c ≠ d then (match c(v) with d(x) then b1 else b2 , s) →X (b′ , s′ )

(b6)  if no pattern matches s(r) then (read r . . . , s) →W (read r . . . , s)

(b7)  if s(r) = σp and (σb, s) →X (b′ , s′ ) then (read r with · · · | p ⇒ b | . . . , s) →X (b′ , s′ )

(b8)  if e ⇓ v, f (x) = b, and ([v/x]b, s) →X (b′ , s′ ) then (f (e), s) →X (b′ , s′ )

(b9)  if e ⇓ v and (b, s[v/r]) →X (b′ , s′ ) then (r := e.b, s) →X (b′ , s′ )
We denote with be either an expression body or a behaviour. All expressions and behaviours are supposed to be well-typed. As usual, all formal parameters are supposed to be
distinct. In the match x with c(y) then be 1 else be 2 instruction, be 1 may depend on y but
not on x while be 2 may depend on x but not on y.
Systems We suppose that the execution environment consists of n threads and we associate with every thread a distinct identity that is an index in Zn = {0, 1, . . . , n − 1}. We
let B, B ′ , . . . denote systems of synchronous threads, that is finite mappings from thread
indexes to pairs (behaviour, status). Each register has a type and a default value — its
value at the beginning of an instant — and we use s, s′ , . . . to denote a store, an association
between registers and their values. We suppose that at the beginning of each instant the
store is so , such that each register is assigned its default value. If B is a system and i ∈ Zn
is a valid thread index then we denote with B1 (i) the behaviour executed by the thread i
and with B2 (i) its current status. Initially, all threads have status R, the current thread
index is 0, and B1 (i) is a behaviour expression of the shape f (v) for all i ∈ Zn . System
reduction is described by a relation (B, s, i) → (B ′ , s′ , i′ ): the system B with store s and
current thread (index) i runs an atomic sequence of actions and becomes (B ′ , s′ , i′ ).
(s1)  if (B1 (i), s) →X (b′ , s′ ), B2 (i) = R, B′ = B[(b′ , X)/i], and N (B′ , s′ , i) = k
      then (B, s, i) → (B′ [(B′1 (k), R)/k], s′ , k)

(s2)  if (B1 (i), s) →X (b′ , s′ ), B2 (i) = R, B′ = B[(b′ , X)/i], N (B′ , s′ , i) ↑,
      B′′ = U (B′ , s′ ), and N (B′′ , so , 0) = k
      then (B, s, i) → (B′′ , so , k)
Scheduler The scheduler is determined by the functions N and U. To ensure progress
of the scheduling, we assume that if N returns an index then it must be possible to run the
corresponding thread in the current instant and that if N is undefined (denoted N (. . . ) ↑)
then no thread can be run in the current instant.
If N (B, s, i) = k then B2 (k) = R or ( B2 (k) = W and
B1 (k) = read r with · · · | p ⇒ b | . . . and some pattern
matches s(r) i.e., ∃σ σp = s(r) )
If N (B, s, i) ↑ then
∀k ∈ Zn , B2 (k) ∈ {N, S} or ( B2 (k) = W,
B1 (k) = read r with . . . , and no pattern matches s(r) )
When no more thread can run, the instant ends and the function U performs the
following status transitions: N → R, W → R. We assume here that every thread in status
W takes the [ ] ⇒ . . . branch at the beginning of the next instant. Note that the function
N is undefined on the updated system if and only if all threads are stopped.
U (B, s)(i) =  (b, S)        if B(i) = (b, S)
               (b, R)        if B(i) = (b, N )
               (f (e), R)    if B(i) = (read r with · · · | [ ] ⇒ f (e), W )
Example 1 (channels and signals) The read instruction allows a thread to read a register subject to certain filter conditions. This is a powerful mechanism, reminiscent of, e.g., Linda
communication [15], that allows us to encode various forms of channel and signal communication.
(1) We want to represent a one place channel c carrying values of type t. We introduce
a new type ch(t) = empty | full of t and a register c of type Ref (ch(t)) with default value
empty. A thread should send a message on c only if c is empty and it should receive a
message only if c is not empty (a received message is discarded). These operations can be
modelled using the following two derived operators:
send (c, e).b
=def read c with empty ⇒ c := full(e).b
receive(c, x).b =def read c with full(x) ⇒ c := empty.b
(2) We want to represent a fifo channel c carrying values of type t such that a thread
can always emit a value on c but may receive only if there is at least one message in the
channel. We introduce a new type fch(t) = nil | cons of t, fch(t) and a register c of type
Ref (fch(t)) with default value nil. Hence a fifo channel is modelled by a register holding a
list of values. We consider two read operations — freceive to fetch the first message on the
channel and freceiveall to fetch the whole queue of messages — and we use the auxiliary
function insert to queue messages at the end of the list:
fsend (c, e).b        =def read c with l ⇒ c := insert(e, l).b
freceive(c, x).b      =def read c with cons(x, l) ⇒ c := l.b
freceiveall (c, x).b  =def read c with cons(y, l) ⇒ c := nil.[cons(y, l)/x]b
insert(x, l)          =    match l with cons(y, l′ ) then cons(y, insert(x, l′ )) else cons(x, nil)
(3) We want to represent a signal s with the typical associated primitives: emitting a signal
and blocking until a signal is present. We define a type sig = abst | prst and a register s of
type Ref (sig) with default value abst, meaning that a signal is originally absent:
emit(s).b =def s := prst.b
wait (s).b =def read s with prst ⇒ b
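
In a sequential setting the derived operators of parts (1) and (3) can be mimicked directly; the OCaml sketch below is our own illustration, with the blocking behaviour of read approximated by an option result (None standing for a suspended thread):

type 'a ch = Empty | Full of 'a        (* one place channel, part (1) *)
type sig_t = Abst | Prst               (* signals, part (3)           *)

let send c v  = match !c with Empty  -> c := Full v; Some () | Full _ -> None
let receive c = match !c with Full v -> c := Empty;  Some v  | Empty  -> None

let emit s = s := Prst
let wait s = match !s with Prst -> Some () | Abst -> None

let () =
  let c = ref Empty and s = ref Abst in
  assert (send c 42 = Some ());     (* sending on an empty channel succeeds         *)
  assert (send c 0  = None);        (* the channel is full: the sender would block  *)
  assert (receive c = Some 42);
  emit s;
  assert (wait s = Some ())
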
Example 2 (cooperative fragment) The cooperative fragment of the model with no
synchrony is obtained by removing the next instruction and assuming that for all read
instructions the branch [ ] ⇒ f (e) is such that f (. . . ) = stop. Then all the interesting
computation happens in the first instant; threads still running in the second instant can
only stop. By using the representation of fifo channels presented in Example 1(2) above,
the cooperative fragment is already powerful enough to simulate, e.g., Kahn networks [20].
Next, to make possible a compositional and functional analysis for resource control, we
propose to restrict the admissible behaviours and we define a simple preliminary control
flow analysis that guarantees that this restriction is met. We then rely on this analysis to
define a symbolic representation of the states reachable by a behaviour. Finally, we extract
from this symbolic control points suitable order constraints which are instrumental to our
analyses for termination and value size limitation within an instant.
2.1
Read Once Condition
We require and statically check on the call graph of the program (see below) that threads
can perform any given read instruction at most once in an instant.
1. We assign to every read instruction in a system a distinct fresh label, y, and we
collect all these labels in an ordered sequence, y1 , . . . , ym . In the following, we will
sometimes use the notation read⟨y⟩ ̺ with . . . in the code of a behaviour to make
visible the label of a read instruction.
2. With every function symbol f defined by an equation f (x) = b we associate the set
L(f ) of labels of read instructions occurring in b.
3. We define a directed call graph G = (N, E) as follows: N is the set of function symbols
in the program defined by an equation f (x) = b and (f, g) ∈ E if g ∈ Call(b) where
Call (b) is the collection of function symbols in N that may be called in the current
instant and which is formally defined as follows:
Call (stop) = Call (next .g(e)) = ∅     Call (f (e)) = {f }
Call (yield .b) = Call (̺ := e.b) = Call (b)
Call (match x with p then b1 else b2 ) = Call (b1 ) ∪ Call (b2 )
Call (read ̺ with p1 ⇒ b1 | · · · | pn ⇒ bn | [ ] ⇒ b) = ∪i=1,...,n Call (bi )
We write f E ∗ g if the node g is reachable from the node f in the graph G. We denote
with R(f ) the set of labels ∪{L(g) | f E ∗ g} and with yf the ordered sequence of
labels in R(f ).
The definition of Call is such that for every sequence of calls in the execution of a
thread within an instant we can find a corresponding path in the call graph.
Definition 3 (read once condition) A system satisfies the read once condition if in the
call graph there are no loops that go through a node f such that L(f ) ≠ ∅.
Example 4 (alarm) We consider the representation of signals as in Example 1(3). We
assume two signals sig and ring. The behaviour alarm(n, m) will emit a signal on ring if
it detects that no signal is emitted on sig for m consecutive instants. The alarm delay is
reset to n if the signal sig is present.
alarm(x, y) = match y with s(y ′ )
then read⟨u⟩ sig with prst ⇒ next.alarm(x, x) | [ ] ⇒ alarm(x, y′ )
else ring := prst.stop
Hence u is the label associated with the read instruction and L(alarm) = {u}. Since the
call graph has just one node, alarm, and no edges, the read once condition is satisfied.
To summarise, the read once condition is a checkable syntactic condition that safely
approximates the semantic property we are aiming at.
Proposition 5 If a system satisfies the read once condition then in every instant every
thread runs every read instruction at most once (but the same read instruction can be run
by several threads).
The following simple example shows that without the read once restriction, a thread
can use a register as an accumulator and produce an exponential growth of the size of the
data within an instant.
Example 6 (exponentiation) We recall that nat = z | s of nat is the type of tally natural numbers. The function dble defined below doubles the value of its parameter so that
|dble(n)| = 2|n|. We assume r is a register of type nat with initial value s(z). Now consider
the following recursive behaviour:
dble(n) = match n with s(n′ ) then s(s(dble(n′ ))) else z
exp(n) =
match n with s(n′ )
then read r with m ⇒ r := dble(m).exp(n′ )
else stop
The function exp does not satisfy the read once condition since the call graph has a loop
on the exp node. The evaluation of exp(n) involves |n| reads to the register r and, after
each read operation, the size of the value stored in r doubles. Hence, at the end of the instant,
the register contains a value of size 2^{|n|}.
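
The read once condition is also easy to check mechanically once the call graph and the label sets L(f) are available. The OCaml sketch below (our own encoding of the graph and label sets as association lists, with arbitrary label names) rejects any cycle through a node with a non-empty label set; it accepts the call graph of Example 4 and rejects that of Example 6:

(* nodes reachable from [start], following the call-graph edges *)
let reachable graph start =
  let rec go seen = function
    | [] -> seen
    | f :: rest when List.mem f seen -> go seen rest
    | f :: rest ->
        let succs = try List.assoc f graph with Not_found -> [] in
        go (f :: seen) (succs @ rest)
  in
  go [] [start]

(* Definition 3: no loop goes through a node f with L(f) <> [] *)
let read_once graph labels =
  List.for_all
    (fun (f, ls) ->
       ls = [] ||
       (let succs = try List.assoc f graph with Not_found -> [] in
        not (List.exists (fun g -> List.mem f (reachable graph g)) succs)))
    labels

let () =
  (* Example 4: one node, no edges, one read labelled u *)
  assert (read_once [ "alarm", [] ] [ "alarm", ["u"] ]);
  (* Example 6: exp reads r and calls itself within the instant *)
  assert (not (read_once [ "exp", ["exp"; "dble"]; "dble", ["dble"] ]
                         [ "exp", ["y1"]; "dble", [] ]))
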
The read once condition does not appear to be a severe limitation on the expressiveness
of a synchronous programming language. Intuitively, in most synchronous algorithms every
thread reads some bounded number of variables before performing some action. Note that
while the number of variables is bounded by a constant, the amount of information that
can be read in each variable is not. Thus, for instance, a ‘server’ thread can just read
one variable in which is stored the list of requests produced so far and then it can go on
scanning the list and replying to all the requests within the same instant.
2.2
Control Points
From a technical point of view, an important consequence of the read once condition is
that a behaviour can be described as a function of its parameters and the registers it may
read during an instant. This fact is used to associate with a system satisfying the read
once condition a finite number of control points.
A control point is a triple (f (p), be, i) where, intuitively, f is the currently called function,
p represents the patterns crossed so far in the function definition plus possibly the labels
of the read instructions that still have to be executed, be is the continuation, and i is an
integer flag in {0, 1, 2} that will be used to associate with the control point various kinds
of conditions.
If the function f returns a value and is defined by the equation f (x) = eb, then we associate with f the set C(f, x, eb) defined as follows:
C(f, p, eb) = case eb of
  e : {(f (p), eb, 0)}
  match x with c(y) then eb1 else eb2 : {(f (p), eb, 2)} ∪ C(f, [c(y)/x]p, eb1 ) ∪ C(f, p, eb2 )
On the other hand, suppose the function f is a behaviour defined by the equation f (x) = b.
Then we generate a fresh function symbol f + whose arity is that of f plus the size of R(f ),
thus regarding the labels yf (the ordered sequence of labels in R(f )) as part of the formal
parameters of f + . The set of control points associated with f + is the set C(f + , (x · yf ), b)
defined as follows:
C(f + , p, b) = case b of
  (C1) stop : {(f + (p), b, 2)}
  (C2) g(e) : {(f + (p), b, 0)}
  (C3) yield .b′ : {(f + (p), b, 2)} ∪ C(f + , p, b′ )
  (C4) next.g(e) : {(f + (p), b, 2), (f + (p), g(e), 2)}
  (C5) ̺ := e.b′ : {(f + (p), b, 2), (f + (p), e, 1)} ∪ C(f + , p, b′ )
  (C6) match x with c(y) then b1 else b2 : {(f + (p), b, 2)} ∪ C(f + , ([c(y)/x]p), b1 ) ∪ C(f + , p, b2 )
  (C7) read⟨y⟩ ̺ with p1 ⇒ b1 | · · · | pn ⇒ bn | [ ] ⇒ g(e) :
       {(f + (p), b, 2), (f + (p), g(e), 2)} ∪ C(f + , ([p1 /y]p), b1 ) ∪ . . . ∪ C(f + , ([pn /y]p), bn )
By inspecting the definitions, we can check that a control point (f (p), be, i) has the
property that Var(be) ⊆ Var (p).
Definition 7 An instance of a control point (f (p), be, i) is an expression body or a behaviour be ′ = σ(be), where σ is a substitution mapping the free variables in be to values.
The property of being an instance of a control point is preserved by expression body
evaluation, behaviour reduction and system reduction. Thus the control points associated
with a system do provide a representation of all reachable configurations. Indeed, in
Appendix B we show that it is possible to define the evaluation and the reduction on pairs
of control points and substitutions.
Proposition 8 Suppose (B, s, i) → (B ′ , s′ , i′ ) and that for all thread indexes j ∈ Zn , B1 (j)
is an instance of a control point. Then for all j ∈ Zn , we have that B1′ (j) is an instance
of a control point.
In order to prove the termination of the instant and to obtain a bound on the size of
the computed values, we associate order constraints with control points:
Control point          Associated constraint
(f (p), e, 0)          f (p) ≻0 e
(f + (p), g(e), 0)     f + (p) ≻0 g + (e, yg )
(f + (p), e, 1)        f + (p) ≻1 e
(f + (p), be, 2)       no constraints
A program will be deemed correct if the set of constraints obtained from all the function
definitions can be satisfied in suitable structures. We say that a constraint e ≻i e′ has index
i. We rely on the constraints of index 0 to enforce termination of the instant and on those
of index 0 or 1 to enforce a bound on the size of the computed values. Note that the
constraints are on pure first order terms, a property that allows us to reuse techniques
developed in the standard term rewriting framework (cf. Section 3).
Example 9 With reference to Example 4, we obtain the following control points:
(alarm + (x, y, u), match . . . , 2)
(alarm + (x, y, u), prst, 1)
(alarm + (x, s(y ′ ), u), read . . . , 2)
(alarm + (x, s(y ′ ), prst), next.alarm(x, x), 2)
(alarm + (x, y, u), ring := prst.stop, 2)
(alarm + (x, z, u), stop, 2)
(alarm + (x, s(y ′ ), u), alarm (x, y ′ ), 2)
(alarm + (x, s(y ′ ), prst), alarm(x, x), 2)
The triple (alarm + (x, y, u), prst, 1) is the only control point with a flag different from 2. It
corresponds to the constraint alarm + (x, y, u) ≻1 prst, where u is the label associated with
the only read instruction in the body of alarm. We note that no constraints of index 0 are
generated and so, in this simple case, the control flow analysis can already establish the
termination of the thread and all is left to do is to check that the size of the data is under
control, which is also easily verified.
In Example 2, we have discussed a possible representation of Kahn networks in the
cooperative fragment of our model. In general Kahn networks there is no bound on the
number of messages that can be written in a fifo channel nor on the size of the messages.
Much effort has been put into the static scheduling of Kahn networks (see, e.g., [22, 16,
17]). This analysis can be regarded as a form of resource control since it guarantees that
the number of messages in fifo channels is bounded (but says nothing about their size).
The static scheduling of Kahn network is also motivated by performance issues, since it
eliminates the need to schedule threads at run time. Let us look in some detail at the
programming language Lustre, that can be regarded as a language for programming
Kahn networks that can be executed synchronously.
Example 10 (read once vs. Lustre) A Lustre network is composed of four types of
nodes: the combinatorial node, the delay node, the when node, and the merge node. Each
node may have several input streams and one output stream. The functional behaviour of
each type of node is defined by a set of recursive definitions. For instance, the node When
has one boolean input stream b — with values of type bool = false | true — and one input
stream s of values. A When node is used to output values from s whenever b is true. This
behaviour may be described by the following recursive definitions: When(false · b, x · s) =
When(b, s), When(true · b, x · s) = x · When(b, s), and When(b, s) = ǫ otherwise. Here is a
possible representation of the When node in our model, where the input streams correspond
to one place channels b, c (cf. Example 1(1)), the output stream to a one place channel c′
and at most one element in each input stream is processed per instant.
When() = read⟨u⟩ b with
full(true) ⇒ read⟨v⟩ c with full(x) ⇒ c′ := x.next.When() | [ ] ⇒ When()
| full(false) ⇒ next.When()
| [ ] ⇒ When()
While the function When has no formal parameters, we consider the function When + with
two parameters u and v in our size and termination analyses.
3
Resource Control
Our analysis goes in three main steps: first, we guarantee that each instant terminates
(Section 3.1), second we bound the size of the computed values as a function of the size
of the parameters at the beginning of the instant (Section 3.2), and third we combine the
termination and size analyses to obtain polynomial bounds on space and time (Section 3.3).
As we progress in our analysis, we refine the techniques we employ. Termination is reduced
to the general problem of finding a suitable well-founded order over first-order terms.
Bounding the size of the computed values is reduced to the problem of synthesizing a
quasi-interpretation. Finally, the problem of obtaining polynomial bounds is attacked
by combining recursive path ordering termination arguments with quasi-interpretations.
We selected these techniques because they are well established and they can handle a
significant spectrum of the programs we are interested in. It is to be expected that other
characterisations of complexity classes available in the literature may lead to similar results.
3.1
Termination of the Instant
We recall that a reduction order > over first-order terms is a well-founded order that is
closed under context and substitution: t > s implies C[t] > C[s] and σt > σs, where C is
any one hole context and σ is any substitution (see, e.g, [6]).
Definition 11 (termination condition) We say that a system satisfies the termination
condition if there is a reduction order > such that all constraints of index 0 associated with
the system hold in the reduction order.
In this section, we assume that the system satisfies the termination condition. As
expected this entails that the evaluation of closed expressions succeeds.
Proposition 12 Let e be a closed expression. Then there is a value v such that e ⇓ v and
e ≥ v with respect to the reduction order.
Moreover, the following proposition states that a behaviour will always return the
control to the scheduler.
Proposition 13 (progress) Let b be an instance of a control point. Then for all stores
s, there exist X, b′ and s′ such that (b, s) →X (b′ , s′ ).
Finally, we can guarantee that at each instant the system will reach a configuration in
which the scheduler detects the end of the instant and proceeds to the reinitialisation of
the store and the status (as specified by rule (s2 )).
Theorem 14 (termination of the instant) All sequences of system reductions involving only rule (s1 ) are finite.
Proposition 13 and Theorem 14 are proven by exhibiting a suitable well-founded measure which is based both on the reduction order and the fact that the number of reads a
thread may perform in an instant is finite.
Example 15 (monitor max value) We consider a recursive behaviour monitoring the
register i (acting as a fifo channel) and parameterised on a number x representing the
largest value read so far. At each instant, the behaviour reads the list l of natural numbers
received on i and assigns to o the greatest number in x and l.
f (x)   = yield .read⟨i⟩ i with l ⇒ f1 (maxl (l, x))
f1 (x)  = o := x.next.f (x)
max (x, y) = match x with s(x′ )
then match y with s(y ′ ) then s(max (x′ , y ′ )) else s(x′ )
else y
maxl (l, x) = match l with cons(y, l′ ) then maxl (l′ , max (x, y)) else x
It is easy to prove the termination of the thread by recursive path ordering, where the
function symbols are ordered as f + > f1+ > maxl > max , the arguments of maxl are
compared lexicographically from left to right, and the constructor symbols are incomparable
and smaller than any function symbol.
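
For reference, the functions of Example 15 translate verbatim into OCaml over an explicit type of tally naturals; the transcription below only serves to make the recursion structure behind the path-ordering argument easy to inspect:

type nat = Z | S of nat

let rec max_nat x y =               (* max of Example 15 *)
  match x, y with
  | S x', S y' -> S (max_nat x' y')
  | S x', Z    -> S x'
  | Z   , _    -> y

let rec maxl l x =                  (* maxl(l, x), recursion on the list l *)
  match l with
  | y :: l' -> maxl l' (max_nat x y)
  | []      -> x

let () =
  let three = S (S (S Z)) and two = S (S Z) in
  assert (maxl [two; three; Z] Z = three)
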
3.2
Quasi-interpretations
Our next task is to control the size of the values computed by the threads. To this end,
we propose a suitable notion of quasi-interpretation (cf. [10, 3, 4]).
Definition 16 (assignment) Given a program, an assignment q associates with constructors and function symbols, functions over the non-negative reals R+ such that:
(1) If c is a constant then qc is the constant 0.
(2) If c is a constructor with arity n ≥ 1 then qc is a function in (R+ )n → R+ such that
qc (x1 , . . . , xn ) = d + Σi∈1..n xi , for some d ≥ 1.
(3) If f is a function (name) with arity n then qf : (R+ )n → R+ is monotonic and for all
i ∈ 1..n we have qf (x1 , . . . , xn ) ≥ xi .
An assignment q is extended to all expressions e as follows, giving a function expression
qe with variables in Var (e):
qx = x , qc(e1 ,...,en) = qc (qe1 , . . . , qen ) , qf (e1 ,...,en) = qf (qe1 , . . . , qen ) .
Here qx is the identity function and, e.g., qc (qe1 , . . . , qen ) is the functional composition of
the function qc with the functions qe1 , . . . , qen . It is easy to check that there exists a constant
δq depending on the assignment q such that for all values v we have |v| ≤ qv ≤ δq · |v|.
Thus the quasi-interpretation of a value is always proportional to its size.
Definition 17 (quasi-interpretation) An assignment is a quasi-interpretation, if for
all constraints associated with the system of the shape f (p) ≻i e, with i ∈ {0, 1}, the
inequality qf (p) ≥ qe holds over the non-negative reals.
Quasi-interpretations are designed so as to provide a bound on the size of the computed
values as a function of the size of the input data. In the following, we assume given a
suitable quasi-interpretation, q, for the system under investigation.
Example 18 With reference to Examples 6 and 15, the following assignment is a quasi-interpretation (the parameter i corresponds to the label of the read instruction in the body
of f ). We give no quasi-interpretation for the function exp because it fails the read once
condition:
qnil = qz = 0 ,  qs (x) = x + 1 ,  qcons (x, l) = x + l + 1 ,  qdble (x) = 2 · x ,
qf + (x, i) = x + i ,  qf1+ (x) = x ,  qmaxl (x, y) = qmax (x, y) = max (x, y) .
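
The inequalities required by Definition 17 can be sanity-checked numerically. The sketch below samples the assignment of Example 18 on a small grid of non-negative values; the constraint instances listed in the comments are our reconstruction of the index-0/1 constraints generated for Example 15, so this is quick evidence rather than a proof:

let q_s x = x +. 1.
let q_cons x l = x +. l +. 1.
let q_max x y = max x y
let q_maxl l x = max l x
let q_fplus x i = x +. i            (* q_{f+}(x, i) = x + i *)
let q_f1plus x = x                  (* q_{f1+}(x) = x       *)

(* each entry states that the quasi-interpretation of a left-hand side
   dominates that of the corresponding right-hand side                *)
let constraints x y l i =
  [ q_max (q_s x) (q_s y)  >= q_s (q_max x y);        (* max(s(x'), s(y')) vs s(max(x', y'))          *)
    q_maxl (q_cons y l) x  >= q_maxl l (q_max x y);   (* maxl(cons(y, l'), x) vs maxl(l', max(x, y))  *)
    q_fplus x i            >= q_f1plus (q_maxl i x);  (* f+(x, i) vs f1+(maxl(i, x))                  *)
    q_f1plus x             >= x ]                     (* f1+(x) vs the value written to o             *)

let () =
  let grid = [0.; 1.; 2.; 5.; 10.] in
  let check x y l i = List.iter (fun ok -> assert ok) (constraints x y l i) in
  List.iter (fun x ->
    List.iter (fun y ->
      List.iter (fun l ->
        List.iter (check x y l) grid) grid) grid) grid
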
One can show [3, 4] that in the purely functional fragment of our language every value
v computed during the evaluation of an expression f (v1 , . . . , vn ) satisfies the following
condition:
|v| ≤ qv ≤ qf (v1 ,...,vn ) = qf (qv1 , . . . , qvn ) ≤ qf (δq · |v1 |, . . . , δq · |vn |) .
(1)
We generalise this result to threads as follows.
Theorem 19 (bound on the size of the values) Given a system of synchronous threads
B, suppose that at the beginning of the instant B1 (i) = f (v) for some thread index i. Then
the size of the values computed by the thread i during an instant is bounded by qf + (v,u)
where u are the values contained in the registers at the time they are read by the thread (or
some constant value, if they are not read at all).
Theorem 19 is proven by showing that quasi-interpretations satisfy a suitable invariant.
In the following corollary, we note that it is possible to express a bound on the size of the
computed values which depends only on the size of the parameters at the beginning of the
instant. This is possible because the number of reads a system may perform in an instant
is bounded by a constant.
Corollary 20 Let B be a system with m distinct read instructions and n threads. Suppose
B1 (i) = fi (vi ) for i ∈ Zn . Let c be a bound on the size of the largest parameter of the
functions fi and the largest default value of the registers. Suppose h is a function bounding
all the quasi-interpretations, that is, for all the functions fi+ we have h(x) ≥ qfi+ (x, . . . , x)
over the non-negative reals. Then the size of the values computed by the system B during
an instant is bounded by h^{n·m+1} (c).
Example 21 The n·m iterations of the function h predicted by Corollary 20 correspond to
a tight bound, as shown by the following example. We assume n threads and one register,
r, of type nat with default value z. The control of each thread is described as follows:
f (x0 ) = read r with x1 ⇒ r := dble(max (x1 , x0 )).
read r with x2 ⇒ r := dble(x2 ).
......
read r with xm ⇒ r := dble(xm ).next .f (dble(xm )) .
For this system we have c ≥ |x0 | and h(x) = qdble (x) = 2 · x. It is easy to show that,
at the end of an instant, there have been n · m assignments to the register r (m for every
thread in the system) and that the value stored in r is dble^{n·m} (x0 ) of size 2^{n·m} · |x0 |.
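
To see the numbers at work, the bound of Corollary 20 can be compared with the growth actually reached in Example 21. The sketch below abstracts register contents by their sizes, takes h(x) = 2·x, and uses arbitrary sample values for n, m and c = |x0|:

let rec iterate f k x = if k = 0 then x else iterate f (k - 1) (f x)

let () =
  let h x = 2 * x in                         (* h bounds every quasi-interpretation     *)
  let n, m, c = 3, 2, 4 in                   (* 3 threads, 2 reads per thread, |x0| = 4 *)
  let bound   = iterate h (n * m + 1) c in   (* Corollary 20: h^{n·m+1}(c)              *)
  let reached = (1 lsl (n * m)) * c in       (* Example 21:  2^{n·m} · |x0|             *)
  assert (reached <= bound)
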
3.3
Combining Termination and Quasi-interpretations
To bound the space needed for the execution of a system during an instant we also need to
bound the number of nested recursive calls, i.e. the number of frames that can be found on
the stack (a precise definition of frame is given in the following Section 4). Unfortunately,
quasi-interpretations provide a bound on the size of the frames but not on their number
(at least not in a direct implementation that does not rely on memoization). One way
to cope with this problem is to combine quasi-interpretations with various families of
reduction orders [24, 10]. In the following, we provide an example of this approach based
on recursive path orders which is a widely used and fully mechanizable technique to prove
termination [6].
Definition 22 We say that a system terminates by LPO, if the reduction order associated
with the system is a recursive path order where: (1) symbols are ordered so that function
symbols are always bigger than constructor symbols and two distinct constructor symbols
are incomparable; (2) the arguments of function symbols are compared with respect to the
lexicographic order and those of constructor symbols with respect to the product order.
Note that because of the hypotheses on constructors, this is actually a special case of
the lexicographic path order. For the sake of brevity, we still refer to it as LPO.
Definition 23 We say that a system admits a polynomial quasi-interpretation if it has a
quasi-interpretation where all functions are bounded by a polynomial.
The following property is a central result of this paper.
Theorem 24 If a system B terminates by LPO and admits a polynomial quasi-interpretation
then the computation of the system in an instant runs in space polynomial in the size of
the parameters of the threads at the beginning of the instant.
The proof of Theorem 24 is based on Corollary 20 that provides a polynomial bound
on the size of the computed values and on an analysis of nested calls in the LPO order
that can be found in [10]. The point is that the depth of such nested calls is polynomial
in the size of the values and that this allows to effectively compute a polynomial bounding
the space necessary for the execution of the system.
Example 25 We can check that the order used in Example 15 for the functions f + , f1+ , max
and maxl is indeed a LPO. Moreover, from the quasi-interpretation given in Example 18,
we can deduce that the function h(x) has the shape a · x + b (it is affine). In practice,
many useful functions admit quasi-interpretations bound by an affine function such as the
max-plus polynomials considered in [3, 4].
The combination of LPO and polynomial quasi-interpretation actually provides a characterisation of PSPACE. In order to get to PTIME a further restriction has to be imposed.
Among several possibilities, we select one proposed in [11]. We say that the system terminates by linear LPO if it terminates by LPO as in Definition 22 and moreover if in all the
constraints f (p) ≻0 e or f + (p) ≻0 g + (e) of index 0 there is at most one function symbol
on the right hand side which has the same priority as the (unique) function symbol on the
left-hand side. For instance, Example 15 falls in this case. In op. cit., it is shown by a
simple counting argument that the number of calls a function may generate is polynomial
in the size of its arguments. One can then restate Theorem 24 by replacing LPO with linear
LPO and PSPACE with PTIME.
We stress that these results are of a constructive nature, thus beyond proving that a system
‘runs in PSPACE (or PTIME)’, we can extract a definite polynomial that bounds the size
needed to run a system during an instant. In general, the bounds are rather rough and
should be regarded as providing a qualitative rather than quantitative information.
In the purely functional framework, M. Hofmann [19] has explored the situation where
a program is non-size increasing which means that the size of all intermediate results is
bounded by the size of the input. Transferring this concept to a system of threads is
attractive because it would allow to predict the behaviour of the system for arbitrarily
many instants. However, this is problematic. For instance, consider again Example 25.
By Theorem 24, we can prove that the computation of a system running the behaviour
f (x0 ) in an instant requires a space polynomial in the size of x0 . Note that the parameter
of f is the largest value received so far in the register i. Clearly, bounding the value of
this parameter for arbitrarily many instants requires a global analysis of the system which
goes against our wish to produce a compositional analysis in the sense explained in the
Introduction. An alternative approach which remains to be explored could be to develop
linguistic tools and a programming discipline that allow each thread to control locally the
size of its parameters.
4
A Virtual Machine
We describe a simple virtual machine for our language thus providing a concrete intuition
for the data structures required for the execution of the programs and the scheduler.
Our motivations for introducing a low-level model of execution for synchronous threads are
twofold: (i) it offers a simple formal definition for the space needed for the execution of an
instant (just take the maximal size of a machine configuration), and (ii) it explains some
of the elaborate mechanisms occurring during the execution, like the synchronisation with
the read instruction and the detection of the end of an instant. A further motivation which
is elaborated in Section 4.5 is the possibility to carry on the static analyses for resource
control at bytecode level. The interest of bytecode verification is now well understood, and
we refer the reader to [25, 26].
4.1
Data Structures
We suppose given the code for all the threads running in a system together with a set
of types and constructor names and a disjoint set of function names. A function name f
will also denote the sequence of instructions of the associated code: f [i] stands for the ith
instruction in the (compiled) code of f and |f | stands for the number of instructions.
The configuration of the machine is composed of a store s, that maps registers to their
current values, a sequence of records describing the state of each thread in the system, and
three local registers owned by the scheduler whose role will become clear in Section 4.3.
A thread identifier, t, is simply an index in Zn . The state of a thread t is a pair (st t , Mt )
where st t is a status and Mt is the memory of the thread. A memory M is a sequence
of frames, and a frame is a triple (f, pc, ℓ) composed of a function name, the value of the
program counter (a natural number in 1..|f |), and a stack of values ℓ = v1 · · · vk . We
denote with |ℓ| the number of values in the stack. The status of a thread is defined as
in the source language, except for the status W which is refined into W (j, n) where: j is
the index where to jump at the next instant if the thread does not resume in the current
instant, and n is the (logical) time at which the thread is suspended (cf. Section 4.3).
4.2
Instructions
The set of instructions of the virtual machine together with their operational meaning is
described in Table 1. All instructions operate on the frame of the current thread t and the
memory Mt — the only instructions that depend on or affect the store are read and write.
For every segment of bytecode, we require that the last instruction is either return, stop
or tcall and that the jump index j in the instructions branch c j and wait j is within
the segment.
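
To give a concrete feel for the machine, the following OCaml sketch executes three of the instructions of Table 1 (load, build and the two behaviours of branch) on a single frame. The representation, with the top of the stack at the head of a list, and the restriction to this instruction subset are our own simplifications, not the full machine:

type value = C of string * value list

type instr =
  | Load of int              (* load k: push a copy of the k-th value from the bottom *)
  | Build of string * int    (* build c n: pop n values, push c(v1, ..., vn)          *)
  | Branch of string * int   (* branch c j: unfold on a match, jump to j otherwise    *)

let step code (pc, stack) =
  match List.nth code (pc - 1) with
  | Load k -> (pc + 1, List.nth (List.rev stack) (k - 1) :: stack)
  | Build (c, n) ->
      let rec split n acc st =
        if n = 0 then (acc, st)
        else match st with v :: st' -> split (n - 1) (v :: acc) st' | [] -> assert false
      in
      let args, rest = split n [] stack in
      (pc + 1, C (c, args) :: rest)
  | Branch (c, j) ->
      (match stack with
       | C (d, args) :: rest when d = c -> (pc + 1, List.rev args @ rest)
       | _ -> (j, stack))

let () =
  (* build s(z), then branch on s: the argument z is exposed on the stack *)
  let code = [ Build ("z", 0); Build ("s", 1); Branch ("s", 99) ] in
  let final = List.fold_left (fun st _ -> step code st) (1, []) [1; 2; 3] in
  assert (final = (4, [C ("z", [])]))
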
4.3
Scheduler
In Table 2 we describe a simple implementation of the scheduler. The scheduler owns three
registers: (1) tid that stores the identity of the current thread, (2) time for the current time,
and (3) wtime for the last time the store was modified. The notion of time here is of a
logical nature: time passes whenever the scheduler transfers control to a new thread. Like
in the source language, so denotes the store at the beginning of each instant.
The scheduler triggers the execution of the current instruction of the current thread, whose
index is stored in tid, with a call to run(tid). The call returns the label X associated with
the instruction in Table 1. By convention, take X = ǫ when no label is displayed. If X 6= ǫ
then the scheduler must take some action. Assume tid stores the thread index t. We denote
Table 1: Bytecode instructions (instruction f [pc], transition from the current to the following memory, and label where given)

load k      M · (f, pc, ℓ · v · ℓ′ ) → M · (f, pc + 1, ℓ · v · ℓ′ · v),  |ℓ| = k − 1
branch c j  M · (f, pc, ℓ · c(v1 , . . . , vn )) → M · (f, pc + 1, ℓ · v1 · · · vn )
branch c j  M · (f, pc, ℓ · d(. . .)) → M · (f, j, ℓ · d(. . .)),  c ≠ d
build c n   M · (f, pc, ℓ · v1 · · · vn ) → M · (f, pc + 1, ℓ · c(v1 , . . . , vn ))
call g n    M · (f, pc, ℓ · v1 · · · vn ) → M · (f, pc, ℓ · v1 · · · vn ) · (g, 1, v1 · · · vn )
tcall g n   M · (f, pc, ℓ · v1 · · · vn ) → M · (g, 1, v1 · · · vn )
return      M · (g, pc ′ , ℓ′ · v′ ) · (f, pc, ℓ · v) → M · (g, pc ′ + 1, ℓ′ · v),  ar (f ) = |v′ |
read r      (M · (f, pc, ℓ), s) → (M · (f, pc + 1, ℓ · s(r)), s)
read k      (M · (f, pc, ℓ · r · ℓ′ ), s) → (M · (f, pc + 1, ℓ · r · ℓ′ · s(r)), s),  |ℓ| = k − 1
write r     (M · (f, pc, ℓ · v), s) → (M · (f, pc + 1, ℓ), s[v/r])
write k     (M · (f, pc, ℓ · r · ℓ′ · v), s) → (M · (f, pc + 1, ℓ · r · ℓ′ ), s[v/r]),  |ℓ| = k − 1
stop        M · (f, pc, ℓ) → ǫ   (label S)
yield       M · (f, pc, ℓ) → M · (f, pc + 1, ℓ)   (label R)
next        M · (f, pc, ℓ) → M · (f, pc + 1, ℓ)   (label N)
wait j      M · (f, pc, ℓ · v) → M · (f, j, ℓ)   (label W)
pc tid the program counter of the top frame (f, pc t , ℓ) in Mt , if any, Itid the instruction f [pc t ]
(the current instruction in the thread) and st tid the state st t of the thread. Let us explain
the role of the status W (j, n) and of the registers time and wtime. We assume that a thread
waiting for a condition to hold can check the condition without modifying the store. Then
a thread waiting since time m may pass the condition only if the store has been modified
at a time n with m < n. Otherwise, there is no point in passing the control to it.1 With
this data structure we also have a simple method to detect the end of an instant: it arises
when no thread is in the running status and all waiting threads were interrupted after the
last store modification occurred.
In models based on preemptive threads, it is difficult to foresee the behaviour of the
scheduler which might depend on timing information not available in the model. For
this reason and in spite of the fact that most schedulers are deterministic, the scheduler
is often modelled as a non-deterministic process. In cooperative threads, as illustrated
here, the interrupt points are explicit in the program and it is possible to think of the
scheduler as a deterministic process. Then the resulting model is deterministic and this
fact considerably simplifies its programming, debugging, and analysis.
1 Of course, this condition can be refined by recording the register on which the thread is waiting, the shape of the expected value, . . .
Table 2: An implementation of the scheduler

for t in Zn do { st t := R; }                              (initialisation)
s := so ; tid := time := wtime := 0;                       (the initial thread is of index 0)
while (tid ∈ Zn ) {                                        (loop until all threads are blocked)
  if Itid = (write _) then wtime := time;                  (record store modified)
  if Itid = (wait j) then st tid := W (pc tid + 1, time);  (save continuation for next instant)
  X := run(tid);                                           (run current thread)
  if X ≠ ǫ then {
    if X ≠ W then st tid := X;                             (update thread status)
    tid := N (tid, st);                                    (compute index of next active thread)
    if tid ∈ Zn                                            (test whether all threads are blocked)
    then { st tid := R; time := time + 1; }                (if not, prepare next thread to run)
    else { s := so ; wtime := time;                        (else, initialisation of the new instant)
           tid := N (0, st);                               (select thread to run, starting from 0)
           forall i in Zn do {
             if st i = W (j, _) then pc i := j;
             if st i ≠ S then st i := R; } } } }

Conditions on N :
If N (tid, st) = k ∈ Zn  then st k = R or (st k = W (j, n) and n < wtime)
If N (tid, st) ∉ Zn      then ∀k ∈ Zn (st k ≠ R and (st k = W (j, n) implies n ≥ wtime))
Table 3: Compilation of source code to bytecode

Compilation of expression bodies:

C(e, η) = C ′ (e, η) · return
C(match x with c(y) then eb1 else eb2 , η) =
    (branch c j) · C(eb1 , η′ · y) · (j : C(eb2 , η))                              if η = η′ · x
    (load i(x, η)) · (branch c j) · C(eb1 , η · y) · (j : C(eb2 , η · x))          otherwise

Auxiliary compilation of expressions:

C ′ (x, η) = (load i(x, η))
C ′ (c(e1 , . . . , en ), η) = C ′ (e1 , η) · . . . · C ′ (en , η) · (build c n)
C ′ (f (e1 , . . . , en ), η) = C ′ (e1 , η) · . . . · C ′ (en , η) · (call f n)

Compilation of behaviours:

C(stop, η) = stop
C(f (e1 , . . . , en ), η) = C ′ (e1 , η) · · · C ′ (en , η) · (tcall f n)
C(yield .b, η) = yield · C(b, η)
C(next.f (e), η) = next · C(f (e), η)
C(̺ := e.b, η) = C ′ (e, η) · (write i(̺, η)) · C(b, η)
C(match x with c(y) then b1 else b2 , η) =
    (branch c j) · C(b1 , η′ · y) · (j : C(b2 , η))                                if η = η′ · x
    (load i(x, η)) · (branch c j) · C(b1 , η · y) · (j : C(b2 , η · x))            otherwise
C(read ̺ with · · · | cℓ (yℓ ) ⇒ bℓ | · · · | yk ⇒ bk · · · , η) =
    j0 : (read i(̺, η)) · . . . · jℓ : (branch cℓ jℓ+1 ) · C(bℓ , η · yℓ ) · jℓ+1 : · · · jk : C(bk , η · yk )
C(read ̺ with · · · | cℓ (yℓ ) ⇒ bℓ | · · · | [ ] ⇒ g(e), η) =
    j0 : (read i(̺, η)) · . . . · jℓ : (branch cℓ jℓ+1 ) · C(bℓ , η · yℓ ) · jℓ+1 : · · · jn : (wait j0 ) · C(g(e), η)
4.4
Compilation
In Table 3, we describe a possible compilation of the intermediate language into bytecode.
We denote with η a sequence of variables. If x is a variable and η a sequence then i(x, η)
is the index of the rightmost occurrence of x in η. For instance, i(x, x · y · x) = 3. By
convention, i(r, η) = r if r is a register constant. We also use the notation j : C(be, η) to
indicate that j is the position of the first instruction of C(be, η). This is just a convenient
notation since, in practice, the position can be computed explicitly. With every function
definition f (x1 , . . . , xn ) = be we associate the bytecode C(be, x1 · · · xn ).
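
Before looking at the example, here is a small executable rendition of the auxiliary compilation C′ of Table 3 for pure expressions. The datatypes are our own encoding, and the function index implements the convention i(x, η) described above (rightmost occurrence, counting from 1); register constants are not handled in this sketch:

type expr = Var of string | Cons of string * expr list | App of string * expr list

type instr = Load of int | Build of string * int | Call of string * int

let index x eta =
  let rec go i last = function
    | [] -> last
    | y :: rest -> go (i + 1) (if y = x then i else last) rest
  in
  go 1 0 eta

let rec compile_expr e eta =
  match e with
  | Var x -> [ Load (index x eta) ]
  | Cons (c, args) ->
      List.concat (List.map (fun a -> compile_expr a eta) args) @ [ Build (c, List.length args) ]
  | App (f, args) ->
      List.concat (List.map (fun a -> compile_expr a eta) args) @ [ Call (f, List.length args) ]

let () =
  (* C'(max(x, s(y)), x · y) = load 1 · load 2 · build s 1 · call max 2 *)
  assert (compile_expr (App ("max", [Var "x"; Cons ("s", [Var "y"])])) ["x"; "y"]
          = [ Load 1; Load 2; Build ("s", 1); Call ("max", 2) ])
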
Example 26 (compiled code) We show below the result of the compilation of the function alarm in Example 4:

1 : branch s 12       6 : load 1             11 : tcall alarm 2
2 : read sig          7 : tcall alarm 2      12 : build prst 0
3 : branch prst 8     8 : wait 2             13 : write ring
4 : next              9 : load 1             14 : stop
5 : load 1            10 : load 2

4.5
Control Flow Analysis Revisited
As a first step towards control flow analysis, we analyse the flow graph of the bytecode
generated.
Definition 27 (flow graph) The flow graph of a system is a directed graph whose nodes
are pairs (f, i) where f is a function name in the program and i is an instruction index,
1 ≤ i ≤ |f |, and whose edges are classified as follows:
Successor: An edge ((f, i), (f, i+ 1)) if f [i] is a load, branch, build, call, read, write,
or yield instruction.
Branch: An edge ((f, i), (f, j)) if f [i] = branch c j.
Wait: An edge ((f, i), (f, j)) if f [i] = wait j.
Next: An edge ((f, i), (f, i + 1)) if f [i] is a wait or next instruction.
Call: An edge ((f, i), (g, 1)) if f [i] = call g n or f [i] = tcall g n.
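
Definition 27 amounts to a short edge-generation pass over the code of one function. The sketch below uses our own instruction datatype, mirroring Table 1, and checks the result on the alarm bytecode of Example 26:

type instr =
  | Load of int | Build of string * int | Branch of string * int
  | Call of string * int | Tcall of string * int | Return
  | Read of string | Write of string | Stop | Yield | Next | Wait of int

type edge =
  | Successor of int * int | Branch_edge of int * int
  | Wait_edge of int * int | Next_edge of int * int | Call_edge of int * string

(* edges of the flow graph restricted to one function; the last instruction is
   return, stop or tcall, so no successor edge falls outside the segment      *)
let flow_edges code =
  let edges = ref [] in
  for i = 1 to Array.length code do
    (match code.(i - 1) with
     | Load _ | Build _ | Call _ | Read _ | Write _ | Yield ->
         edges := Successor (i, i + 1) :: !edges
     | Branch (_, j) -> edges := Branch_edge (i, j) :: Successor (i, i + 1) :: !edges
     | Wait j -> edges := Wait_edge (i, j) :: Next_edge (i, i + 1) :: !edges
     | Next -> edges := Next_edge (i, i + 1) :: !edges
     | Tcall (g, _) -> edges := Call_edge (i, g) :: !edges
     | Return | Stop -> ());
    (match code.(i - 1) with
     | Call (g, _) -> edges := Call_edge (i, g) :: !edges
     | _ -> ())
  done;
  List.rev !edges

let () =
  let alarm = [| Branch ("s", 12); Read "sig"; Branch ("prst", 8); Next;
                 Load 1; Load 1; Tcall ("alarm", 2); Wait 2; Load 1; Load 2;
                 Tcall ("alarm", 2); Build ("prst", 0); Write "ring"; Stop |] in
  assert (List.mem (Wait_edge (8, 2)) (flow_edges alarm));
  assert (List.mem (Call_edge (7, "alarm")) (flow_edges alarm))
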
The following is easily checked by inspecting the compilation function. Properties Tree
and Read-Wait entail that the only cycles in the flow graph of a function correspond to
the compilation of a read instruction. Property Next follows from the fact that, in a
behaviour, an instruction next is always followed by a function call f (e). Property Read-Once is a transposition of the read once condition (Section 2.1) at the level of the bytecode.
Proposition 28 The flow graph associated with the compilation of a well-formed system
satisfies the following properties:
Tree: Let G′ be the flow graph without wait and call edges. Let G′f be the full subgraph of
G′ whose nodes have the shape (f, i). Then G′f is a tree with root (f, 1).
Read-Wait: If f [i] = wait j then f [j] = read r and there is a unique path from (f, j) to
(f, i) and in this path, every node corresponds to a branch instruction.
Next: Let G′ be the flow graph without call edges. If ((f, i), (f, i + 1)) is a next edge then
for all nodes (f, j) accessible from (f, i + 1), f [j] is not a read instruction.
Read-Once: Let G′ be the flow graph without wait edges and next edges. If the source
code satisfies the read once condition then there is no loop in G′ that goes through a
node (f, i) such that f [i] is a read instruction.
In [1], we have presented a method to perform resource control verifications at bytecode
level. This work is just concerned with the functional fragment of our model. Here,
we outline its generalisation to the full model. The main problem is to reconstruct a
symbolic representation of the values allocated on the stack. Once this is done, it is rather
straightforward to formulate the constraints for the resource control. We give first an
informal description of the method.
1. For every segment f of bytecode instructions with, say, formal parameters x1 , . . . , xn
and for every instruction i in the segment, we compute a sequence of expressions
e1 · · · em and a substitution σ.
2. The expressions (ei )i∈1..m are related to the formal parameters via the substitution
σ. More precisely, the variables in the expressions are contained in σx1 , . . . , σxn and
the latter forms a linear pattern.
3. Next, let us look at the intended usage of the formal expressions. Suppose at run
time the function f is called with actual parameters u1 , . . . , un and suppose that
following this call, the control reaches instruction i with a stack ℓ. Then we would
like that:
• The values u1 , . . . , un match the pattern σx1 , . . . , σxn via some substitution ρ.
• The stack ℓ contains exactly m values v1 , . . . , vm whose types are the ones of
e1 , . . . , em , respectively.
• Moreover ρ(ei ) is an over-approximation (w.r.t. size and/or termination) of
the value vi , for i = 1, . . . , m. In particular, if ei is a pattern, we want that
ρ(ei ) = vi .
We now describe precisely the generation of the expressions and the substitutions. This
computation is called shape analysis in [1]. For every function f and index i such that f [i]
is a read instruction we assume a fresh variable xf,i . Given a total order on the function
symbols, such variables can be totally ordered with respect to the index (f, i). Moreover,
for every index i in the code of f , we assume a countable set xi,j of distinct variables.
We assume that the bytecode comes with annotations assigning a suitable type to every
constructor, register, and function symbol. With every function symbol f of type t → beh,
comes a fresh function symbol f + of type t, t′ → beh so that |t′| is the number of read
instructions accessible from f within an instant. Then, as in the definition of control points
(Section 2.2), the extra arguments in f + correspond to the values read in the registers
within an instant. The order is chosen according to the order of the variables associated
with the read instructions.
In the shape analysis, we will consider well-typed expressions obtained by composition of
such fresh variables with function symbols, constructors, and registers. In order to make
explicit the type of a variable x we will write x^t .
For every function f , the shape analysis computes a vector σ = σ1 , . . . , σ|f | of substitutions
and a vector E = E1 , . . . , E|f | of sequences of well-typed expressions; Ei and σi denote the
ith elements of these vectors, and Ei [k] denotes the kth element of Ei . We also let hi = |Ei |
be the length of the ith sequence. We assume σ1 = id and E1 = x^{t1}_{1,1} · · · x^{tn}_{1,n} , if
f : t1 , . . . , tn → β is a function of arity n.
The main case is the branch instruction:

f [i] = branch c j.  Conditions: c : t → t, Ei = E · e, e : t, and either
  • e = c(e), σi+1 = σi , Ei+1 = E · e,
  • or e = d(e), c ≠ d, σj = σi , Ej = Ei ,
  • or e = x^t , σj = σi , Ej = Ei , σ′ = [c(x^{t1}_{i+1,hi} , . . . , x^{tn}_{i+1,hi+1} )/x],
    σi+1 = σ′ ◦ σi , Ei+1 = σ′ (E) · x_{i+1,hi} · · · x_{i+1,hi+1} .
The constraints for the remaining instructions are given in Table 4, where it is assumed
that σi+1 = σi except for the instructions tcall and return (that have no direct successors
in the code of the function).
Example 29 We give the shape of the values on the stack (a side result of the shape analysis) for the bytecode obtained from the compilation of the function f defined in Example 15:
Instruction         Shape
1 : yield           x
2 : read i          x
3 : load 1          x · l
4 : call maxl 2     x · l · x
5 : call f1 1       x · maxl (l, x)
6 : return          x · f1 (maxl (l, x))
Note that the code has no branch instruction, hence the substitution σ is always the identity.
Once the shapes are generated it is rather straightforward to determine a set of constraints
that entails the termination of the code and a bound on the size of the computed values.
For instance, assuming the reduction order is a simplification order, it is enough to require
that f + (x, l) > f1 (maxl (l, x)), i.e. the shape of the returned value, f1 (maxl(l, x)), is less
than the shape of the call, f + (x, l).
Table 4: Shape analysis at bytecode level

f [i] =        Conditions
load k         k ∈ 1..hi , Ei+1 = Ei · Ei [k]
build c n      c : t → t, Ei = E · e, |e| = n, e : t, Ei+1 = E · c(e)
call g n       g : t → t, Ei = E · e, |e| = n, e : t, Ei+1 = E · g(e)
tcall g n      g : t → β, Ei = E · e, |e| = n, e : t
return         f : t → t, Ei = E · e, e : t
read r         r : Ref (t), Ei+1 = Ei · x^t_{f,i}
read k         k ∈ 1..hi , Ei [k] : Ref (t), Ei+1 = Ei · x^t_{f,i}
write r        r : Ref (t), Ei = E · e, e : t, Ei+1 = E
write k        k ∈ 1..hi , Ei [k] : Ref (t), Ei = E · e, e : t, Ei+1 = E
yield          Ei+1 = Ei
next           Ei+1 = Ei
wait j         Ei = Ej · x^t_{f,j} , Ei+1 = Ej , σi = σj
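To make the constraints of Table 4 concrete, the following sketch implements the shape transfer step for the instructions whose effect is purely on the stack of expressions (branch and wait, which also act on the substitutions, are omitted); the tuple-based representation of expressions and the absence of type checks are simplifications made only for this example.

def shape_step(f, i, ins, E):
    # Given the shape E_i (a list of symbolic expressions) at instruction i of
    # function f, return E_{i+1} following Table 4.
    op = ins[0]
    if op == "load":                       # load k : E_{i+1} = E_i . E_i[k]
        return E + [E[ins[1] - 1]]
    if op in ("build", "call"):            # pop n arguments, push c(args) / g(args)
        name, n = ins[1], ins[2]
        return E[:len(E) - n] + [(name, tuple(E[len(E) - n:]))]
    if op == "read":                       # read r : push a fresh variable x_{f,i}
        return E + [("var", f"x_{f}_{i}")]
    if op == "write":                      # write r : E_i = E . e, E_{i+1} = E
        return E[:-1]
    if op in ("yield", "next"):            # shape unchanged
        return list(E)
    raise ValueError(f"{op} is not handled in this sketch")

# Replaying the first shapes of Example 29: the fresh variable produced by the
# read plays the role of l there.
E = [("var", "x")]
for i, ins in enumerate([("yield",), ("read", "i"), ("load", 1)], start=1):
    E = shape_step("f", i, ins, E)
print(E)   # [('var', 'x'), ('var', 'x_f_2'), ('var', 'x')], i.e. x . l . x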
If one can find a reduction order and an assignment satisfying the constraints generated
from the shape analysis then one can show the termination of the instant and provide
bounds on the size of the computed values. We refrain from developing this part which is
essentially an adaptation of Section 3 at bytecode level. Moreover, a detailed treatment
of the functional fragment is available in [1]. Instead, we state that the shape analysis
is always successful on the bytecode generated by the compilation function described in
Table 3 (see Appendix B.8). This should suggest that the control flow analysis is not overly
constraining though it can certainly be enriched in order to take into account some code
optimisations.
Theorem 30 The shape analysis succeeds on the compilation of a well-formed program.
5 Conclusion
The execution of a thread in a cooperative synchronous model can be regarded as a sequence
of instants. One can make each instant simple enough so that it can be described as a
function — our experiments with writing sample programs show that the restrictions we
impose do not hinder the expressivity of the language. Then well-known static analyses
used to bound the resources needed for the execution of first-order functional programs can
be extended to handle systems of synchronous cooperative threads. We believe this provides
some evidence for the relevance of these techniques in concurrent/embedded programming.
We also expect that our approach can be extended to a richer programming model including
more complicated control structures.
The static analyses we have considered do not try to analyse the whole system. On
the contrary, they focus on each thread separately and can be carried out incrementally.
Moreover, it is quite possible to perform them at bytecode level. These characteristics are
particularly interesting in the framework of ‘mobile code’ where threads can enter or leave
the system at the end of each instant as described in [12].
Acknowledgements and Publication History We would like to thank the referees
for their valuable comments. Thanks to G. Boudol and F. Dabrowski for comments and
discussions on a preliminary version of this article that was presented at the 2004 International Conference on Concurrency Theory. In the present paper, we consider a more
general model which includes references as first class values and requires a reformulation
of the control flow analysis. Moreover, we present a new virtual machine, a number of
examples, and complete proofs not available in the conference paper.
References
[1] R. Amadio, S. Coupet-Grimal, S. Dal-Zilio, and L. Jakubiec. A functional scenario
for bytecode verification of resource bounds. In Proceedings of CSL – International
Conference on Computer Science Logic, Lecture Notes in Computer Science 3210,
Springer, 2004.
[2] R. Amadio, S. Dal-Zilio. Resource control for synchronous cooperative threads. In
Proceedings CONCUR – 15th International Conference on Concurrency Theory, Lecture Notes in Computer Science 3170, Springer, 2004.
[3] R. Amadio. Max-plus quasi-interpretations. In Proceedings of TLCA – 6th International Conference on Typed Lambda Calculi and Applications, Lecture Notes in
Computer Science 2701, Springer, 2003.
[4] R. Amadio. Synthesis of max-plus quasi-interpretations. In Fundamenta Informaticae,
65(1-2):29-60, 2005.
[5] J. Armstrong, R. Virding, C. Wikström, M. Williams. Concurrent Programming in
Erlang. Prentice-Hall 1996.
[6] F. Baader and T. Nipkow. Term rewriting and all that. Cambridge University Press,
1998.
[7] P. Baillot and V. Mogbil, Soft lambda calculus: a language for polynomial time computation. In Proceedings of FOSSACS – 7th International Conference on Foundations
of Software Science and Computation Structures, Lecture Notes in Computer Science
2987, Springer, 2004.
[8] S. Bellantoni and S. Cook. A new recursion-theoretic characterization of the poly-time
functions. Computational Complexity, 2:97–110, 1992.
[9] G. Berry and G. Gonthier, The Esterel synchronous programming language. Science
of computer programming, 19(2):87–152, 1992.
[10] G. Bonfante, J.-Y. Marion, and J.-Y. Moyen. On termination methods with space
bound certifications. In Proceedings Perspectives of System Informatics, Lecture Notes
in Computer Science 2244, Springer, 2001.
[11] G. Bonfante, J.-Y. Marion, J.-Y. Moyen. Quasi-interpretations. Internal report LORIA, November 2004, available from the authors.
[12] G. Boudol, ULM, a core programming model for global computing. In Proceedings
of ESOP – 13th European Symposium on Programming, Lecture Notes in Computer
Science 2986, Springer, 2004.
[13] F. Boussinot and R. De Simone, The SL Synchronous Language. IEEE Trans. on
Software Engineering, 22(4):256–266, 1996.
[14] J. Buck. Scheduling dynamic dataflow graphs with bounded memory using the token
flow model. PhD thesis, University of California, Berkeley, 1993.
[15] N. Carriero and D. Gelernter. Linda in Context. Communication of the ACM, 32(4):
444-458, 1989.
[16] P. Caspi. Clocks in data flow languages. Theoretical Computer Science, 94:125–140,
1992.
[17] P. Caspi and M. Pouzet. Synchronous Kahn networks. In Proceedings of ICFP – ACM
SIGPLAN International Conference on Functional Programming, SIGPLAN Notices
31(6), ACM Press, 1996.
[18] A. Cobham. The intrinsic computational difficulty of functions. In Proceedings Logic,
Methodology, and Philosophy of Science II, North Holland, 1965.
[19] M. Hofmann. The strength of non size-increasing computation. In Proceedings of
POPL – 29th SIGPLAN-SIGACT Symposium on Principles of Programming Languages, ACM, 2002.
[20] G. Kahn. The semantics of a simple language for parallel programming. In Proceedings
IFIP Congress, North-Holland, 1974.
[21] N. Jones. Computability and complexity, from a programming perspective. MIT-Press,
1997.
[22] E. Lee and D. Messerschmitt. Static scheduling of synchronous data flow programs
for digital signal processing. IEEE Transactions on Computers, 1:24–35, 1987.
[23] D. Leivant. Predicative recurrence and computational complexity i: word recurrence
and poly-time. Feasible mathematics II, Clote and Remmel (eds.), Birkhäuser:320–
343, 1994.
[24] J.-Y. Marion. Complexité implicite des calculs, de la théorie à la pratique. Université
de Nancy. Habilitation à diriger des recherches, 2000.
[25] G. Morrisett, D. Walker, K. Crary and N. Glew. From system F to typed assembly
language. In ACM Transactions on Programming Languages and Systems, 21(3):528–569, 1999.
[26] G. Necula. Proof carrying code. In Proceedings of POPL – 24th SIGPLAN-SIGACT
Symposium on Principles of Programming Languages, ACM, 1997.
[27] M. Odersky. Functional nets. In Proceedings of ESOP – 9th European Symposium on
Programming, Lecture Notes in Computer Science 1782, Springer, 2000.
[28] J. Ousterhout. Why threads are a bad idea (for most purposes). Invited talk at the
USENIX Technical Conference, 1996.
[29] Th. Park. Bounded scheduling of process networks. PhD thesis, University of California, Berkeley, 1995.
[30] P. Puschner and A. Burns (eds.), Real time systems 18(2/3), Special issue on Worst-case execution time analysis, 2000.
[31] Reactive Programming, INRIA Sophia-Antipolis, Mimosa Project.
http://www-sop.inria.fr/mimosa/rp.

A Readers-Writers and Other Synchronisation Patterns
A simple, maybe the simplest, example of synchronisation and resource protection is the
single place buffer. The buffer (initially empty) is implemented by a thread listening to
two signals. The first on the register put to fill the buffer with a value if it is empty, the
second on the register get to emit the value stored in the buffer by writing it in the special
register result and flush the buffer. In this encoding, the register put is a one place channel
and get is a signal as in Example 1. Moreover, owing to the read once condition, we are
not able to react to several put/get requests during the same instant — only if the buffer
is full can we process one get and one put request in the same instant. Note that the value
of the buffer is stored on the function call to full (v), hence we use function parameters as
a kind of private memory (to compare with registers that model shared memory).
empty() = read put with full(x) ⇒ next.full (x) | [ ] ⇒ empty()
full (x) = read get with prst ⇒ result := x.yield .empty() | [ ] ⇒ full (x)
Another common example of synchronisation pattern is a situation where we need to
protect a resource that may be accessed both by ‘readers’ (which access the resource without modifying it) and ‘writers’ (which can access and modify the resource). This form of
access control is common in databases and can be implemented using traditional synchronisation mechanisms such as semaphores, but this implementation is far from trivial [27].
In our encoding, a control thread secures the access to the protected resource. The other
threads, which may be distinguished by their identity id (a natural number), may initiate a
request to access / release the resource by sending a special value on the dedicated register
req. The thread regulating the resource may acknowledge at most one request per instant
and allows the sender of a request to proceed by writing its id on the register allow at
the next instant. The synchronisation constraints are as follows: there can be multiple
concurrent readers, there can be only one writer at any one time, pending write requests
have priority over pending read requests (but do not preempt ongoing read operations).
We define a new algebraic datatype for assigning requests:
request = startRead(nat) | startWrite(nat) | endRead | endWrite | none
The value startRead(id ) indicates a read request from the thread id , the other constructors correspond to requests for starting to write, ending to read or ending to write — the
value none stands for no requests. A startRead operation requires that there are no pending
writes to proceed. In that case we increment the number of ongoing readers and allow the
caller to proceed. By contrast, a startWrite puts the monitor thread in a state waiting to
process the pending write request (function pwrite), which waits for the number of readers
to be null and then allows the thread that made the pending write request to proceed. An
endRead and endWrite request is always immediately acknowledged.
The thread protecting the resource starts with the behaviour onlyreader(z), defined in Table 5, meaning the system has no pending requests for reading or writing. The behaviour
onlyreader(x) encodes the state of the controller when there is no pending write and x
readers. In a state with x pending readers, when a startWrite request from the thread
id is received, the controller thread switches to the behaviour pwrite(id, x), meaning that
the thread id is waiting to write and that we should wait for x endRead requests before
acknowledging the request to write.
A thread willing to read on the protected resource should repeatedly try to send its request
on the register req then poll the register allow, e.g., with the behaviour askRead(id ).read allow
with id ⇒ · · · where askRead (id ) is a shorthand for read req with none ⇒ req := startRead(id ).
The code for a thread willing to end a read session is similar. It is simple to change our
encoding so that multiple requests are stored in a fifo queue instead of a one place buffer.
B Proofs

B.1 Preservation of Control Points Instances
Proposition 31 8 Suppose (B, s, i) → (B ′ , s′ , i′ ) and that for all thread indexes j ∈ Zn ,
B1 (j) is an instance of a control point. Then for all j ∈ Zn , we have that B1′ (j) is an
instance of a control point.
Table 5: Code for the Readers-Writers pattern
onlyreader (x) = match x with s(x′ ) then read req with
endRead ⇒ next.onlyreader (x′ )
| startWrite(y) ⇒ next.pwrite(y, s(x′ ))
| startRead(y) ⇒ next.allow := y.onlyreader (s(s(x′ )))
| [ ] ⇒ onlyreader (s(x′ ))
else read req with
startWrite(y) ⇒ next.allow := y.pwrite(y, z)
| startRead(y) ⇒ next.allow := y.onlyreader (s(z))
| [ ] ⇒ onlyreader (z)
pwrite(id , x) = match x with s(x′ ) then
                     match x′ with s(x′′ ) then read req with
                         endRead ⇒ next.pwrite(id , s(x′′ ))
                         | [ ] ⇒ pwrite(id , s(s(x′′ )))
                     else read req with
                         endRead ⇒ next.allow := id .pwrite(id , z)
                         | [ ] ⇒ pwrite(id , s(z))
                 else read req with
                     endWrite ⇒ next.onlyreader (z)
                     | [ ] ⇒ pwrite(id , z)
Proof. Let (f (p), be, i) be a control point of an expression body or of a behaviour. In
Table 6, we reformulate the evaluation and the reduction by replacing expression bodies
or behaviours by triples (f (p), be, σ) where (f (p), be, i) is a control point and σ is a substitution mapping the variables in p to values. By convention, we take σ(r) = r if r is a
register.
We claim that the evaluation and reduction in Table 6 are equivalent to those presented
in Section 2 in the following sense:
1. (f (p), e0 , σ) ⇓ v iff σe0 ⇓ v.
2. (f + (p), b0 , s, σ) →X (g + (q), b′0 , s′ , σ ′ ) iff σb0 →X σ ′ b′0 .
In the following proofs we will refer to the rules in Table 6. The revised formulation
makes clear that if b is an instance of a control point and (b, s) →X (b′ , s′ ) then b′ is an
instance. It remains to check that being an instance is a property preserved at the level
instance. It remains to check that being an instance is a property preserved at the level
of system reduction. We proceed by case analysis on the last reduction rule used in the
derivation of (B, s, i) → (B ′ , s′ , i′ ).
(s1 ) One of the threads performs one step. The property follows by the analysis on
behaviours.
(s2 ) One of the threads performs one step. Moreover, the threads in waiting status take the
Table 6: Expression body evaluation and behaviour reduction revised
(e0 ) (f (p), x, σ) ⇓ σ(x)

(e1 ) (f (p), r, σ) ⇓ r

(e2 ) if (f (p), ei , σ) ⇓ vi for i ∈ 1..n, then (f (p), c(e), σ) ⇓ c(v)

(e3 ) if (f (p), ei , σ) ⇓ vi for i ∈ 1..n, g(x) = eb, and (g(x), eb, [v/x]) ⇓ v, then (f (p), g(e), σ) ⇓ v

(e4 ) if σ(x) = c(v) and (f ([c(x)/x]p), eb 1 , [v/x] ◦ σ) ⇓ v, then (f (p), match x with c(x) then eb 1 else eb 2 , σ) ⇓ v

(e5 ) if σ(x) = d(. . .), c ≠ d, and (f (p), eb 2 , σ) ⇓ v, then (f (p), match x with c(x) then eb 1 else eb 2 , σ) ⇓ v

(b1 ) (f + (p), stop, σ, s) →S (f + (p), stop, σ, s)

(b2 ) (f + (p), yield .b, σ, s) →R (f + (p), b, σ, s)

(b3 ) (f + (p), next .g(e), σ, s) →N (f + (p), g(e), σ, s)

(b4 ) if σ(x) = c(v) and (f + ([c(x)/x]p), b1 , [v/x] ◦ σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ ), then (f + (p), match x with c(x) then b1 else b2 , σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ )

(b5 ) if σ(x) = d(. . .), c ≠ d, and (f + (p), b2 , σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ ), then (f + (p), match x with c(x) then b1 else b2 , σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ )

(b6 ) if no pattern matches s(σ(̺)), then (f + (p), read ̺ with . . . , σ, s) →W (f + (p), read ̺ with . . . , σ, s)

(b7 ) if σ1 (p) = s(σ(̺)) and (f + ([p/y]p), b, σ1 ◦ σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ ), then (f + (p), read ⟨y⟩ ̺ with · · · | p ⇒ b | . . . , σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ )

(b8 ) if σe ⇓ v, g(x) = b, and (g + (x, yg ), b, [v/x], s) →X (f1+ (p′ ), b′ , σ ′ , s′ ), then (f + (p), g(e), σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ )

(b9 ) if σe ⇓ v and (f + (p), b, σ, s[v/σ(̺)]) →X (f1+ (p′ ), b′ , σ ′ , s′ ), then (f + (p), ̺ := e.b, σ, s) →X (f1+ (p′ ), b′ , σ ′ , s′ )
[ ] ⇒ g(e) branch of the read instructions that were blocking. A thread read ̺ . . . | [ ] ⇒
g(e) in waiting status is an instance of a control point (f + (p), read ̺ . . . | [ ] ⇒ g(e0), j).
By (C7 ), (f + (p), g(e0 ), 2) is a control point, and g(e) is one of its instances.
✷
B.2 Evaluation of Closed Expressions
Proposition 32 12 Let e be a closed expression. Then there is a value v such that e ⇓ v
and e ≥ v with respect to the reduction order.
As announced, we refer to the rules in Table 6. We recall that the order > or ≥ refers
to the reduction order that satisfies the constraints of index 0. We start by proving the
following working lemma.
Lemma 33 For all well formed triples, (f (p), eb, σ), there is a value v such that (f (p), eb, σ) ⇓
v. Moreover, if eb is an expression then σ(eb) ≥ v else f (σp) ≥ v.
Proof. We proceed by induction on the pair (f (σp), eb) ordered lexicographically from
left to right. The first argument is ordered according to the reduction order and the second
according to the structure of the expression body.
eb ≡ x. We apply rule (e0 ) and σ(x) ≥ σ(x).
eb ≡ r. We apply rule (e1 ) and σ(r) = r ≥ r.
eb ≡ c(e1 , . . . , en ). We apply rule (e2 ). By inductive hypothesis, (f (p), ei , σ) ⇓ vi for
i ∈ 1..n and σei ≥ vi . By definition of reduction order, we derive σ(c(e1 , . . . , en )) ≥
c(v1 , . . . , vn ).
eb ≡ f (e1 , . . . , en ). We apply rule (e3 ). By inductive hypothesis, (f (p), ei , σ) ⇓ vi for
i ∈ 1..n and σei ≥ vi . By the definition of the generated constraints f (p) > g(e), which
by definition of reduction order implies that f (σp) > g(σe) ≥ g(v) = g([v/x]x). Thus by
inductive hypothesis, (g(x), eb, [v/x]) ⇓ v. We conclude by showing by case analysis that
g(σe) ≥ v.
• eb is an expression. By the constraint we have g(x) > eb, and by inductive hypothesis
[v/x]eb ≥ v. So g(σe) ≥ g(v) > [v/x]eb ≥ v.
• eb is not an expression. Then by inductive hypothesis, g(v) ≥ v and we know
g(σe) ≥ g(v).
eb ≡ match x with c(x) . . . . We distinguish two cases.
• σ(x) = c(v). Then rule (e4 ) applies. Let σ ′ = [v/x]◦σ. Note that σ ′ ([c(x)/x]p) = σp.
By inductive hypothesis, we have that (f ([c(x)/x]p), eb 1 , σ ′ ) ⇓ v. We show by case
analysis that f (σp) ≥ v.
– eb 1 is an expression. By inductive hypothesis, σ ′ (eb 1 ) ≥ v. By the constraint,
f ([c(x)/x]p) > eb 1 . Hence, f (σp) = f (σ ′ [c(x)/x]p) > σ ′ (eb 1 ) ≥ v.
– eb 1 is not an expression. By inductive hypothesis, we have that f (σp) equals
f (σ ′ [c(x)/x]p) ≥ v.
• σ(x) = d(. . .) with c 6= d. Then rule (e5 ) applies and an argument simpler than the
one above allows to conclude.
✷
Relying on Lemma 33 we can now prove Proposition 12, that if e is a closed expression
and e ⇓ v then e ≥ v in the reduction order. Proof. We proceed by induction on the
structure of e.
e is value v. Then v ⇓ v and v ≥ v.
e ≡ c(e1 , . . . , en ). By inductive hypothesis, ei ⇓ vi and ei ≥ vi for i ∈ 1..n. By definition
of reduction order, c(e) ≥ c(v).
e ≡ f (e1 , . . . , en ). By inductive hypothesis, ei ⇓ vi and ei ≥ vi for i ∈ 1..n. Suppose
f (x) = eb. By Lemma 33, (f (x), eb, [v/x]) ⇓ v and either f (v) ≥ v or f (x) > eb and
σ(eb) ≥ v. We conclude by a simple case analysis.
✷
B.3 Progress
Proposition 34 13 Let b be an instance of a control point. Then for all stores s, there
exist a behaviour b′ , a store s′ , and a status X such that (b, s) →X (b′ , s′ ).
Proof. We start by defining a suitable well-founded order. If b is a behaviour, then let
nr (b) be the maximum number of reads that b may perform in an instant. Moreover, let
ln(b) be the length of b inductively defined as follows:
ln(stop) = ln(f (e)) = 0        ln(yield .b) = ln(̺ := e.b) = 1 + ln(b)        ln(next.f (e)) = 2
ln(match x with c(x) then b1 else b2 ) = 1 + max (ln(b1 ), ln(b2 ))
ln(read ̺ with . . . | pi ⇒ bi | . . . | [ ] ⇒ f (e)) = 1 + max (. . . , ln(bi ), . . .)
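For readers who prefer code, the same definition can be phrased over a small behaviour AST as in this sketch (the tuple encoding of behaviours is an assumption made only for illustration).

def ln(b):
    # Length of a behaviour, following the inductive definition above.
    kind = b[0]
    if kind in ("stop", "tailcall"):            # ln(stop) = ln(f(e)) = 0
        return 0
    if kind in ("yield", "assign"):             # ln(yield.b) = ln(rho := e.b) = 1 + ln(b)
        return 1 + ln(b[-1])
    if kind == "next":                          # ln(next.f(e)) = 2
        return 2
    if kind == "match":                         # ("match", x, pattern, b1, b2)
        return 1 + max(ln(b[3]), ln(b[4]))
    if kind == "read":                          # ("read", rho, [(p1, b1), ...], default_call)
        return 1 + max(ln(bi) for _, bi in b[2])
    raise ValueError(kind)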
If the behaviour b is an instance of the control point γ ≡ (f + (p), b0 , i) via a substitution σ
then we associate with the pair (b, γ) a measure:
µ(b, γ) =def (nr(b), f + (σp), ln(b)) .
We assume that measures are lexicographically ordered from left to right, where the
order on the first and third component is the standard order on natural numbers and the
order on the second component is the reduction order considered in study of the termination conditions. This is a well-founded order. Now we show the assertion by induction on
µ(b, γ). We proceed by case analysis on the structure of b.
b ≡ stop. Rule (b1 ) applies, with X = S, and the measure stays constant.
b ≡ yield .b′ . Rule (b2 ) applies, with X = R, and the measure decreases because ln(b)
decreases.
b ≡ next.b′ . Rule (b3 ) applies, with X = N, and the measure decreases because ln(b)
decreases.
b ≡ match . . . . Rules (b4 ) or (b5 ) apply and the measure decreases because ln(b) decreases.
b ≡ read . . . . If no pattern matches then rule (b6 ) applies and the measure is left unchanged. If a pattern matches then rule (b7 ) applies and the measure decreases because
nr (b) decreases and then the induction hypothesis applies.
b ≡ g(e). Rule (b8 ) applies to (f + (p), g(e0 ), σ), assuming e = σe0 . By Proposition 12, we
know that e ⇓ v and e ≥ v in the reduction order. Suppose g is associated to the declaration g(x) = b. The constraint associated with the control point requires f + (p) > g + (e0 , yg ).
Then using the properties of reduction orders we observe:
f + (σp) > g + (σe0 , yg ) = g + (e, yg ) ≥ g + (v, yg )
Thus the measure decreases because f + (σp) > g + (v, yg ), and then the induction hypothesis applies.
b ≡ ̺ := e.b′ . By Proposition 12, we have e ⇓ v. Hence rule (b9 ) applies, the measure
decreases because ln(b) decreases, and then the induction hypothesis applies.
✷
Remark 35 We point out that in the proof of proposition 13, if X = R then the measure
decreases and if X ∈ {N, S, W } then the measure decreases or stays the same. We use this
observation in the following proof of Theorem 14.
B.4 Termination of the Instant
Theorem 36 14 All sequences of system reductions involving only rule (s1 ) are finite.
Proof. We order the status of threads as follows: R > N, S, W . With a behaviour B1 (i)
coming with a control point γi , we associate the pair µ′ (i) = (µ(B1 (i), γi ), B2 (i)) where
µ is the measure defined in the proof of Proposition 13. Thus µ′ (i) can be regarded as
a quadruple with a lexicographic order from left to right. With a system B of n threads
we associate the measure µB =def (µ′ (0), . . . , µ′(n − 1)) that is a tuple. We compare such
tuples using the product order. We prove that every system reduction sequence involving
only rule (s1 ) terminates by proving that this measure decreases during reduction. We
recall the rule below:
if (B1 (i), s) →X (b′ , s′ ), B2 (i) = R, B ′ = B[(b′ , X)/i], and N (B ′ , s′ , i) = k,
then (B, s, i) → (B ′ [(B1′ (k), R)/k], s′ , k).
Let B ′′ = B ′ [(B1′ (k), R)/k]. We proceed by case analysis on X and B2′ (k).
If B2′ (k) = R then µ′ (k) is left unchanged. The only other case is B2′ (k) = W . In this
case the conditions on the scheduler tell us that i 6= k. Indeed, the thread k must be
blocked on a read r instruction and it can only be scheduled if the value stored in r has
been modified, which means than some other thread than k must have modified r. For the
same reason, some pattern in the read r instruction of B1 (k) matches s′ (r), which means
that the number of reads that B1 (k) may perform in the current instant decreases and that
µ′ (k) also decreases.
By hypothesis we have (B1 (i), s) →X (b′ , s′ ), hence by Remark 35, µ′ (i) decreases or stays
the same. By the previous line of reasoning µ′ (k) decreases and the other measures µ′ (j)
stay the same. Hence the measure µB decreases, as needed.
✷
B.5 Bounding the Size of Values for Threads
Theorem 37 19 Given a system of synchronous threads B, suppose that at the beginning
of the instant B1 (i) = f (v) for some thread index i. Then the size of the values computed
by the thread i during an instant is bounded by qf + (v,u) where u are the values contained
in the registers at the time they are read by the thread (or some constant value, if they are
not read at all).
In Table 6, we have defined the reduction of behaviours as a big step semantics. In Table 7
we reformulate the operational semantics following a small step approach. First, note that
there are no rules corresponding to (b1 ), (b3 ) or (b6 ) since these rules either terminate
or suspend the computation of the thread in the instant. Second, the reduction makes
abstraction of the memory and the scheduler. Instead, the reduction relation is parameterized on an assignment δ associating values with the labels of the read instructions.
The assignment δ is a kind of oracle that provides the thread with the finitely many values
(because of the read once condition) it may read within the current instant. The assignment δ provides a safe abstraction of the store s used in the transition rules of Table 6.
Note that the resulting system represents more reductions than can actually occur in the
original semantics within an instant. Namely, a thread can write a value v in r and then
proceed to read from r a value different from v without yielding the control. This kind
of reduction is impossible in the original semantics. However, since we do not rely on a
precise monitoring of the values written in the store, this loss of precision does not affect
our analysis.
Next we prove that if (f + (p), b, σ) →δ (g + (q), b′ , σ ′ ) then qf + (σ′′ ◦σ(p)) ≥ qg+ (σ′ (q)) over the
non-negative reals, where σ ′′ is either the identity or the restriction of δ to the label of the
read instruction in case (b′ 7 ).
Proof. By case analysis on the small step rules. Cases (b′ 2 ), (b′ 5 ) and (b′ 9 ) are immediate.
(b′ 4 ) The assertion follows by a straightforward computation on substitutions.
(b′ 7 ) Then σ ′′ (y) = δ(y) = [σ1 (p)/y] and recalling that patterns are linear, we note that:
f + ((σ ′′ ◦ σ)(p)) = f + ((σ1 ◦ σ)[p/y](p)).
Table 7: Small step reduction within an instant

(b′ 2 ) (f + (p), yield .b, σ) →δ (f + (p), b, σ)
(b′ 4 ) (f + (p), match x with c(x) then b1 else b2 , σ) →δ (f + ([c(x)/x]p), b1 , [v/x] ◦ σ)   if (1)
(b′ 5 ) (f + (p), match x with c(x) then b1 else b2 , σ) →δ (f + (p), b2 , σ)   if σ(x) = d(. . .), c ≠ d
(b′ 7 ) (f + (p), read ⟨y⟩ ̺ with · · · | p ⇒ b | . . . , σ) →δ (f + ([p/y]p), b, σ1 ◦ σ)   if (2)
(b′ 8 ) (f + (p), g(e), σ) →δ (g + (x, yg ), b, [v/x])   if σe ⇓ v and g(x) = b
(b′ 9 ) (f + (p), ̺ := e.b, σ) →δ (f + (p), b, σ)   if σe ⇓ v

where: (1) ≡ σ(x) = c(v) and (2) ≡ σ1 (p) = δ(y).
(b′ 8 ) By the properties of quasi-interpretations, we know that qσ(e) ≥ qv . By the constraints generated by the control points, we derive that qf + (p) ≥ qg+ (e,yg ) over the nonnegative reals. By the substitutivity property of quasi-interpretations, this implies that
qf + (σ(p)) ≥ qg+ (σ(e,yg )) . Thus we derive, as required: qf + (σ(p)) ≥ qg+ (σ(e,yg )) ≥ qg+ (v,yg ) .
✷
It remains to support our claim that all values computed by the thread i during an
instant have a size bounded by qf (v,u) where u are either the values read by the thread or
some constant value.
Proof. By inspecting the shape of behaviours we see that a thread computes values either
when writing into a register or in recursive calls. We consider in turn the two cases.
Writing Suppose (f + (p, yf ), b, σ) →∗δ (g + (q), ̺ := e.b′ , σ ′ ) by performing a series of reads
recorded by the substitution σ ′′ . Then the invariant we have proved above implies that:
qf + ((σ′′ ◦σ)(p,yf )) ≥ qg+ (σ′ q) over the non-negative reals. If some of the variables in yf are not
instantiated by the substitution σ ′′ , then we may replace them by some constant. Next,
we observe that the constraint of index 1 associated with the control point requires that
qg+ (q) ≥ qe and that if σ(e) ⇓ v then this implies qg+ (σ′ (q)) ≥ qσ′ (e) ≥ qv ≥ |v|.
Recursive call Suppose (f + (p, yf ), b, σ) →∗δ (g + (q), h(e), σ ′ ) by performing a series of
reads recorded by the substitution σ ′′ . Then the invariant we have proved above implies
that: qf + ((σ′′ ◦σ)(p,yf )) ≥ qg+ (σ′ (q)) over the non-negative reals. Again, if some of the variables
in yf are not instantiated by the substitution σ ′′ , then we may replace them by some
constant value. Next we observe that the constraint of index 0 associated with the control
point requires that qg+ (q) ≥ qh+ (e,yh ) . Moreover, if σ ′ (e) ⇓ v then qg+ (σ′ (q)) ≥ qh+ (σ′ (e,yh )) ≥
qh+ (v,yh ) ≥ qvi ≥ |vi |, where vi is any of the values in v. The last inequation relies
on the monotonicity property of assignments, see property (3) in Definition 16, that is
qh+ (z1 , . . . , zn ) ≥ zj for all j ∈ 1..n. ✷
B.6 Bounding the Size of Values for Systems
Corollary 38 20 Let B be a system with m distinct read instructions and n threads.
Suppose B1 (i) = fi (vi ) for i ∈ Zn . Let c be a bound of the size of the largest parameter of the
functions fi and the largest default value of the registers. Suppose h is a function bounding
all the quasi-interpretations, that is, for all the functions fi+ we have h(x) ≥ qfi+ (x, . . . , x)
over the non-negative reals. Then the size of the values computed by the system B during
an instant is bounded by hn·m+1 (c).
Proof. Because of the read once condition, during an instant a system can perform a
(successful) read at most n · m times. We proceed by induction on the number k of reads
the system has performed so far to prove that the size of the values is bounded by hk+1 (c).
k = 0 If no read has been performed, then Theorem 19 can be applied to show that all
values have size bound by h(c).
k > 0 Inductively, the size of the values in the parameters and the registers is bounded
by hk (c). Theorem 19 says that all the values that can be computed before performing a
new read have a size bound by h(hk (c)) = hk+1 (c).
✷
B.7 Combination of LPO and Polynomial Quasi-interpretations
Theorem 39 24 If a system B terminates by LPO and admits a polynomial quasi-interpretation then the computation of the system in an instant runs in space polynomial in the
size of the parameters of the threads at the beginning of the instant.
Proof. We can always choose a polynomial for the function h in corollary 20. Hence,
hnm+1 is also a polynomial. This shows that the size of all the values computed by the
system is bounded by a polynomial. The number of values in a frame depends on the
number of formal parameters and local variables and it can be statically bound. It remains
to bound the number of frames on the stack. Note that behaviours are tail recursive.
This means that the stack of each thread contains a frame that never returns a value plus
possibly a sequence of frames that relate to the evaluation of expressions.
From this point on, one can follow the proof in [10]. The idea is to exploit the characteristics
of the LPO order: a nested sequence of recursive calls f1 (v1 ), . . . , fn (vn ) must satisfy
f1 (v1 ) > · · · > fn (vn ), where > is the LPO order on terms. Because of the polynomial
bound on the size of the values and the characteristics of the LPO on constructors, one
can provide a polynomial bound on the length of such strictly decreasing sequences and
therefore a polynomial bound on the size of the stack needed to execute the system.
✷
B.8 Compiled Code is Well-shaped
Theorem 40 30 The shape analysis succeeds on the compilation of a well-formed program.
Let be be either a behaviour or an expression body, η be a sequence of variables, and
E be a sequence of expressions. We say that the triple (be, η, E) is compatible if for all
variables x free in be, the index i(x, η) is defined and if η[k] = x then E[k] = x. Moreover,
we say that the triple is strongly compatible if it is compatible and |η| = |E|. In the
following we will neglect typing issues that offer no particular difficulty. First we prove the
following lemma.
Lemma 41 If (e, η, E) is compatible then the shape analysis of C ′ (e, η) starting from the
shape E succeeds and produces a shape E · e.
Proof. By induction on the structure of e.
e ≡ x Then C ′ (x, η) = load i(x, η). We know that i(x, η) is defined and η[k] = x implies
E[k] = x. So the shape analysis succeeds and produces E · x.
e ≡ c(e1 , . . . , en ) Then C ′ (c(e1 , . . . , en ), η) = C ′ (e1 , η) · · · C ′ (en , η)(build c n). We note
that if e′ is a subexpression of e, e′′ is another expression, and (e, η, E) is compatible then
(e′ , η, E · e′′ ) is compatible too. Thus we can apply the inductive hypothesis to e1 , . . . , en
and derive that the shape analysis of C ′ (e1 , η) starting from E succeeds and produces
E · e1 ,. . . , and the shape analysis of C ′ (en , η) starting from E · e1 · · · en−1 succeeds and
produces E · e1 · · · en . Then by the definition of shape analysis of build we can conclude.
e ≡ f (e1 , . . . , en ) An argument similar to the one above applies.
Next we generalise the lemma to behaviours and expression bodies.
✷
Lemma 42 If (be, η, E) is strongly compatible then the shape analysis of C(be, η) starting
from the shape E succeeds.
Proof. be ≡ e We have that C(e, η) = C ′ (e, η)·return and the shape analysis on C ′ (e, η)
succeeds, producing at least one expression.
be ≡ match x with c(y) then eb 1 else eb 2 Following the definition of the compilation
function, we distinguish two cases:
• η ≡ η ′ · x: Then C(be, η) = (branch c j) · C(eb 1 , η ′ · y) · (j : C(eb 2 , η) ). By the
hypothesis of strong compatibility, E ≡ E ′ · x and by definition of shape analysis on
branch we get on the then branch a shape [c(y)/x]E ′ · y up to variable renaming. We
observe that (eb 1 , η ′ · y, [c(y)/x]E ′ · y) are strongly compatible (note that here we rely
on the fact that η ′ and E ′ have the same length). Hence, by inductive hypothesis,
the shape analysis on C(eb 1 , η ′ · y) succeeds. As for the else branch, we have a shape
E ′ · x and since (eb 2 , η ′ · x, E ′ · x) are strongly compatible we derive by inductive
hypothesis that the shape analysis on C(eb 2 , η) succeeds.
• η ≢ η ′ · x: The compiled code starts with (load i(x, η)) which produces a shape E · x.
Then the analysis proceeds as in the previous case.
be ≡ stop The shape analysis succeeds.
be ≡ f (e1 , . . . , en ) By lemma 41, we derive that the shape analysis of C ′ (e1 , η)· . . .·C ′ (en , η)
succeeds and produces E · e1 · · · en . We conclude applying the definition of the shape
analysis for tcall.
be ≡ yield .b The instruction yield does not change the shape and we can apply the
inductive hypothesis on b.
be ≡ next.g(e) The instruction next does not change the shape and we can apply the
inductive hypothesis on g(e).
be ≡ ̺ := e.b By lemma 41, we have the shape E · e. By definition of the shape analysis
on write, we get back to the shape E and then we apply the inductive hypothesis on b.
be ≡ match . . . The same argument as for expression bodies applies.
be ≡ read ̺ with c1 (y1 ) ⇒ b1 | . . . | cn (yn ) ⇒ bn | [ ] ⇒ g(e) We recall that the compiled
code is:
j0 : (read i(̺, η)) · (branch c1 j1 ) · C(b1 , η · y1 ) · · ·
jn−1 : (branch cn jn ) · C(bn , η · yn ) · jn : (wait j0 ) · C(g(e), η)
The read instruction produces a shape E · y. Then if a positive branch is selected, we
have a shape E · yk for k ∈ 1..n. We note that the triples (bk , η · yk , E · yk ) are strongly
compatible and therefore the inductive hypothesis applies to C(bk , η · yk ) for k ∈ 1..n. On
the other hand, if the last default branch [ ] is selected then by definition of the shape
analysis on wait we get back to the shape E and again the inductive hypothesis applies
to C(g(e), η). The case where a pattern can be a variable is similar.
To conclude the proof we notice that for every function definition f (x) = be, taking
η = x = E we have that (be, η, E) are strongly compatible and thus by lemma 42 the
shape analysis succeeds on C(be, η) starting from E.
✷
The Price of Selection in Differential Privacy
Mitali Bafna*
Jonathan Ullman†
arXiv:1702.02970v1 [cs.DS] 9 Feb 2017
February 13, 2017
Abstract
In the differentially private top-k selection problem, we are given a dataset X ∈ {±1}n×d , in
which each row belongs to an individual and each column corresponds to some binary attribute,
and our goal is to find a set of k ≪ d columns whose means are approximately as large as possible.
Differential privacy requires that our choice of these k columns does not depend too much on
any one individual’s data. This problem can be solved using the well known exponential
mechanism and composition properties of differential privacy. In the high-accuracy regime,
where we require the error of the selection procedure to be smaller than the so-called
sampling error α ≈ √(ln(d)/n), this procedure succeeds given a dataset of size n ≳ k ln(d).
We prove a matching lower bound, showing that a dataset of size n ≳ k ln(d) is necessary for
private top-k selection in this high-accuracy regime. Our lower bound is the first to show that
selecting the k largest columns requires more data than simply estimating the value of those k
columns, which can be done using a dataset of size just n ≳ k.
* IIT Madras, Department of Computer Science and Engineering. This research was performed while the author was a
visiting scholar at Northeastern University. [email protected]
† Northeastern University, College of Computer and Information Science. [email protected]
1 Introduction
The goal of privacy-preserving data analysis is to enable rich statistical analysis of a sensitive
dataset while protecting the privacy of the individuals who make up that dataset. It is especially
desirable to ensure differential privacy [DMNS06], which ensures that no individual’s information
has a significant influence on the information released about the dataset. The central problem
in differential privacy research is to determine precisely what statistics can be computed by
differentially private algorithms and how accurately they can be computed.
The seminal work of Dinur and Nissim [DN03] established a “price of privacy”: If we release the
answer to ≳ n statistics on a dataset of n individuals, and we do so with error that is asymptotically
smaller than the sampling error of ≈ 1/√n, then an attacker can reconstruct nearly all of the
sensitive information in the dataset, violating any reasonable notion of privacy. For example, if
we have a dataset X = (x1 , . . . , xn ) ∈ {±1}n×d and we want to privately approximate its marginal
vector q = (1/n) Σ_{i=1}^{n} xi , then it suffices to introduce error of magnitude Θ(√d/n) to each entry of
q [DN03, DN04, BDMN05, DMNS06], and this amount of error is also necessary [BUV14, SU15].
Thus, when d ≫ n, the error must be asymptotically larger than the sampling error.
Top-k Selection. In many settings, we are releasing the marginals of the dataset in order to find
a small set of “interesting” marginals, and we don’t need the entire vector. For example, we may
be interested in finding only the attributes that are unusually frequent in the dataset. Thus, an
appealing approach to overcome the limitations on computing marginals is to find only the top-k
(approximately) largest coordinates of the marginal vector q, up to some error α.1
Once we find these k coordinates, we can approximate the corresponding marginals with
additional error O(√k/n). But, how much error must we have in the top-k selection itself? The
simplest way to solve this problem is to greedily find k coordinates using the differentially private
exponential mechanism [MT07]. This approach finds the top-k marginals up to error ≲ √k log(d)/n.
The sparse vector algorithm [DNPR10, RR10, HR10] would provide similar guarantees.
Thus, when k ≪ d, we can find the top-k marginals and approximate their values with much less
error than approximating the entire vector of marginals. However, the bottleneck in this approach
is the √k log(d)/n error in the selection procedure, and this log(d) factor is significant in very
high-dimensional datasets. For comparison, the sampling error for top-k selection is ≈ √(log(d)/n), so
the error introduced is asymptotically larger than the sampling error when k log(d) ≫ n. However,
the best known lower bound for top-k selection follows by scaling down the lower bounds for
releasing the entire marginal vector, and says that the error must be ≳ √k/n.
Top-k selection is a special case of fundamental data analysis procedures like variable selection
and sparse regression. Moreover, private algorithms for selection problems underlie many powerful results in differential privacy: private control of false discovery rate [DSZ15], algorithms for
answering exponentially many queries [RR10, HR10, GRU12, JT12, Ull15], approximation algorithms [GLM+ 10], frequent itemset mining [BLST10], sparse regression [ST13], and the optimal
analysis of the generalization error of differentially private algorithms [BNS+ 16]. Therefore it is
important to precisely understand optimal algorithms for differentially private top-k selection.
Our main result says that existing differentially private algorithms for top-k selection are
essentially optimal in the high-accuracy regime where the error is required to be asymptotically
smaller than the sampling error.
1 Here, the algorithm has error α if it returns a set S ⊆ {1, . . . , d} consisting of k coordinates, and for each coordinate
j ∈ S, qj ≥ τ − α, where τ is the k-th largest value among all the coordinates {q1 , . . . , qd }.
Theorem 1.1 (Sample Complexity Lower Bound for Approximate Top-k). There exist functions
n = Ω(k log(d)) and α = Ω(√(log(d)/n)) such that for every d and every k = d^{o(1)} , there is no differentially
private algorithm M that takes an arbitrary dataset X ∈ {±1}n×d and (with high probability) outputs an
α-accurate top-k marginal vector for X.
Tracing Attacks. Our lower bounds for differential privacy follow from a tracing attack [HSR+ 08,
SOJH09, BUV14, SU15, DSS+ 15, DSSU17]. In a tracing attack, the dataset X consists of data for n
individuals drawn iid from some known distribution over {±1}d . The attacker is given data for a
target individual y ∈ {±1}d who is either one of the individuals in X (“IN”), or is an independent
draw from the same distribution (“OUT”). The attacker is given some statistics about X (e.g. the
top-k statistics) and has to determine if the target y is in or out of the dataset. Tracing attacks are
a significant privacy violation, as mere presence in the dataset can be sensitive information, for
example if the dataset represents the case group in a medical study [HSR+ 08].
Our results give a tracing attack for top-k statistics in the case where the dataset is drawn
uniformly at random. For simplicity, we state the properties of our tracing attack for the case of
the exact top-k marginals. We refer the reader to Section 4 for a detailed statement in the case of
approximate top-k marginals, which is what we use to establish Theorem 1.1.
Theorem 1.2 (Tracing Attack for Exact Top-k). For every ρ > 0, every n ∈ N, and every k ≪ d ≤ 2^n
such that k log(d/k) ≥ O(n log(1/ρ)), there exists an attacker A : {−1, 1}d × {0, 1}d → {IN, OUT} such that
the following holds: If we choose X = (x1 , . . . , xn ) ∈ {±1}n×d uniformly at random, and t(X) is the exact
top-k vector2 of X, then
1. If y ∈ {±1}d is uniformly random and independent of X, then P [A(y, t(X)) = OUT] ≥ 1 − ρ, and
2. for every i ∈ [n], P [A(xi , t(X)) = IN] ≥ 1 − ρ.
While the assumption of uniformly random data is restrictive, it is still sufficient to provide
a lower bound for differential privacy. Tracing attacks against algorithms that release the entire marginal vector succeed under weaker assumptions—each column can have a different and
essentially arbitrary bias as long as columns are independent. However, for top-k statistics, a
stronger assumption on the column biases is necessary—if the column biases are such that t(X)
contains a specific set of columns with overwhelming probability, then t(X) reveals essentially
no information about X, so tracing will fail. Under the weaker assumption that some unknown
set of k columns “stand out” by having significantly larger bias than other columns, we can use
the propose-test-release framework [DL09] to find the exact top-k vector when n & log(d). An
interesting future direction is to characterize which distributional assumptions are sufficient to
bypass our lower bound.
We remark that, since our attack “traces” all rows of the dataset (i.e. A(xi , t(X)) = IN for every
i ∈ [n]), the attack bears some similarities to a reconstruction attack [DN03, DMT07, DY08, KRSU10,
KRS13, NTZ13]. However, the focus on high-dimensional data and the style of analysis is much
closer to the literature on tracing attacks.
1.1 Proof Overview
Our results use a variant of the inner product attack introduced in [DSS+ 15] (and inspired by the
work on fingerprinting codes [BS98, Tar08] and their connection to privacy [Ull13, BUV14, SU15]).
2 Due to the presence of ties, there is typically not a unique top-k. For technical reasons, and for simplicity, we let t(X)
denote the unique lexicographically first top-k vector and refer to it as “the” top-k vector.
Given a target individual y ∈ {±1}d , and a top-k vector t ∈ {±1}d , the attack is

A(y, t) = IN if ⟨y, t⟩ ≥ τ, and OUT otherwise,

where τ = Θ(√k) is an appropriately chosen threshold. The key to the analysis is to show that,
when X = (x1 , . . . , xn ) ∈ {±1}n×d and y ∈ {±1}d are chosen uniformly at random, and t(X) is an accurate
top-k vector of X, then

E [⟨y, t(X)⟩] = 0   and   ∀i ∈ [n], E [⟨xi , t(X)⟩] > 2τ.
If we can establish these two facts then Theorem 1.2 will follow from concentration inequalities for
the two inner products.
Suppose t(X) is the exact top-k vector. Since each coordinate of y is uniform in {±1} and
independent of X, we can write

E [⟨y, t(X)⟩] = Σ_j E [yj · t(X)j ] = Σ_j E [yj ] E [t(X)j ] = 0.

Moreover, for every fixed vector t ∈ {±1}d with k non-zero coordinates, ⟨y, t⟩ is a sum of k independent, bounded random variables. Therefore, by Hoeffding’s inequality we have that ⟨y, t⟩ = O(√k)
with high probability. Since y, X are independent, this bound also holds with high probability
when X is chosen randomly and t(X) is its top-k vector. Thus, for an appropriate τ = Θ(√k),
A(y, t(X)) = OUT with high probability.
Now, consider the case where y = xi is a row of X, and we want to show that E [⟨xi , t(X)⟩] is
sufficiently large. Since X is chosen uniformly at random, one can show that, when k ≪ d ≤ 2^n , the
top-k largest marginals of X are all at least γ = Ω(√(log(d/k)/n)). Thus, on average, when t(X)j = 1,
we can think of xi,j ∈ {±1} as a random variable with expectation ≥ γ. Therefore,

E [⟨xi , t(X)⟩] = E [ Σ_{j : t(X)j = 1} xi,j ] ≥ kγ = Ω(k√(log(d/k)/n)).

Even though xi and t(X) are not independent, and do not have independent entries, we show that
with high probability over the choice of X, ⟨xi , t(X)⟩ ≥ k√(log(d/k)/n) − O(√k).
Thus, if k log(d/k) ≳ n, we have that A(xi , t(X)) = IN with high probability.
Extension to Noisy Top-k. The case of α-approximate top-k statistics does not change the analysis
of ⟨y, t⟩ in the case that y is independent of X, but does change the analysis of ⟨xi , t⟩ when xi is a
row of X. It is not too difficult to show that for a random row xi , E[⟨xi , t̂⟩] ≳ k(γ − α), but it is not
necessarily true that ⟨xi , t⟩ is large for every row xi . The problem is that for relevant choices of α, a
random dataset has many more than k marginals that are within α of being in the top-k, and the
algorithm could choose a subset of k of these to prevent a particular row xi from being traced. For
example, if there are 3k columns of X that could be chosen in an α-accurate top-k vector, then with
high probability, there exists a vector t specifying k of those columns on which xi sums to 0, which ensures
that ⟨xi , t⟩ = 0.
We can, however, show that ⟨xi , t̂⟩ > τ for at least (1 − c)n rows of X for an arbitrarily small
constant c > 0. This weaker tracing guarantee is still enough to rule out (ε, δ)-differential privacy
for any reasonable setting of ε, δ (Lemma 2.5), which gives us Theorem 1.1. The exact statement
and parameters are slightly involved, so we refer the reader to Section 4 for a precise statement
and analysis of our tracing attack in the case of approximate top-k statistics (Theorem 4.1).
2 Preliminaries
Definition 2.1 (Differential Privacy). For ε ≥ 0, δ ∈ [0, 1] we say that a randomized algorithm
M : {±1}n×d → R is (ε, δ)-differentially private if for every two datasets X, X′ ∈ {±1}n×d such that
X, X′ differ in at most one row, we have that

∀S ⊆ R,   P [M(X) ∈ S] ≤ e^ε · P [M(X′ ) ∈ S] + δ.
Definition 2.2 (Marginals). For a dataset X ∈ {±1}n×d , its marginal vector q(X) = (q1 (X), . . . , qd (X)) is
the average of the rows of X. That is, qj (X) = (1/n) Σ_{i=1}^{n} Xi,j . We use the notation
q(1) (X) ≥ q(2) (X) ≥ . . . ≥ q(d) (X)
to refer to the sorted marginals. We will also define π : [d] → [d] to be the lexicographically first
permutation that puts the marginals in sorted order. That is, we define π so that qπ(j) = q(j) and if
j < j 0 are such that qj = qj 0 , then π(j) < π(j 0 ).
Definition 2.3 (Accurate Top-k Vector). Given a dataset X ∈ {±1}n×d and a parameter α ≥ 0, a
vector t̂ ∈ {0, 1}d is an α−accurate top-k vector of X if,
1. t̂ has exactly k non-zero coordinates, and
2. (t̂i = 1) ⇒ (qi (X) ≥ q(k) (X) − α).
When α = 0, we define the exact top-k vector of X as t(X) ∈ {0, 1}d to be the lexicographically first
0-accurate top-k vector.3 Specifically, we define t(X) so that
(t(X)j = 1) ⇔ j ∈ {π(1), . . . , π(k)}.
We refer to this set of columns as the top-k columns of X.
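For concreteness, the exact top-k vector t(X) with the lexicographic tie-breaking of Definitions 2.2 and 2.3 can be computed as in the following sketch (NumPy is used only for convenience and is not part of the paper):

import numpy as np

def exact_top_k(X, k):
    # Return t(X) in {0,1}^d, the lexicographically first exact top-k vector.
    n, d = X.shape
    q = X.mean(axis=0)                       # marginal vector q(X)
    # Sort by decreasing marginal, breaking ties by smaller column index;
    # this realizes the permutation pi of Definition 2.2.
    order = sorted(range(d), key=lambda j: (-q[j], j))
    t = np.zeros(d, dtype=int)
    t[order[:k]] = 1                         # the top-k columns pi(1), ..., pi(k)
    return t

# Example on a small uniformly random dataset.
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(100, 20))
print(exact_top_k(X, 5).sum())               # 5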
For comparison with our results, we state a positive result for privately releasing an α-approximate top-k vector, which is an easy consequence of the exponential mechanism and
composition theorems for differential privacy.
Theorem 2.4. For every n, d, k ∈ N, and ε, δ, β ∈ (0, 1), there is an (ε, δ)-differentially private algorithm
that takes as input a dataset X ∈ {±1}n×d , and with probability at least 1 − β, outputs an α-accurate top-k
vector of X, for

α = O( √(k · ln(1/δ)) · ln(kd/β) / (εn) ).
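One way to realize Theorem 2.4 is to iterate the exponential mechanism k times, removing each selected column from the candidate set (“peeling”). The sketch below is one such instantiation; the per-round budget ε0 = ε/√(8k ln(1/δ)) from advanced composition and the score scaling (each marginal has sensitivity 2/n) are assumptions made for the example, not the paper's exact constants.

import numpy as np

def private_top_k(X, k, eps, delta, rng):
    # Peeling exponential mechanism: returns an approximate top-k indicator vector.
    n, d = X.shape
    q = X.mean(axis=0)
    eps0 = eps / np.sqrt(8 * k * np.log(1 / delta))     # per-round budget (assumption)
    chosen, remaining = [], list(range(d))
    for _ in range(k):
        # Exponential mechanism with score q_j, whose sensitivity is 2/n.
        scores = np.array([eps0 * n * q[j] / 4 for j in remaining])
        weights = np.exp(scores - scores.max())         # stabilized weights
        j = rng.choice(remaining, p=weights / weights.sum())
        chosen.append(j)
        remaining.remove(j)
    t = np.zeros(d, dtype=int)
    t[chosen] = 1
    return t

# Example usage.
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(200, 50))
print(private_top_k(X, 5, eps=1.0, delta=1e-6, rng=rng).sum())   # 5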
2.1 Tracing Attacks
Intuitively, tracing attacks violate differential privacy because if the target individual y is outside
the dataset, then A(y, M(X)) reports OUT with high probability, whereas if y were added to the
dataset to obtain X 0 , A(y, M(X 0 )) reports IN with high probability. Therefore M(X), M(X 0 ) must
have very different distributions, which implies that M is not differentially private. The next lemma
formalizes and quantifies this property.
3 Due to ties, there may not be a unique 0-accurate top-k vector of X. For technical reasons we let t(X) be the unique
lexicographically first 0-accurate top-k vector, so we are justified in treating t(X) as a function of X.
Lemma 2.5 (Tracing Violates DP). Let M : {±1}n×d → R be a (possibly randomized) algorithm. Suppose
there exists an algorithm A : {±1}d × R → {IN, OUT} such that when X ∼ {±1}n×d and y ∼ {±1}d are
independent and uniformly random,
1. (Soundness) P [A(y, M(X)) = IN] ≤ ρ
2. (Completeness) P [# {i | A(xi , M(X)) = IN} ≥ n − m] ≥ 1 − ρ.
Then M is not (ε, δ)-differentially private for any ε, δ such that e^ε ρ + δ < 1 − ρ − m/n. If ρ < 1/4 and
n − m > 3n/4, then there are absolute constants ε0 , δ0 > 0 such that M is not (ε0 , δ0 )-differentially private.
The constants ε0 , δ0 can be made arbitrarily close to 1 by setting ρ and m/n to be appropriately
small constants. Typically differentially private algorithms typically satisfy (ε, δ)-differential
privacy where ε = o(1), δ = o(1/n), so ruling out differential privacy with constant (ε, δ) is a strong
lower bound.
2.2 Probabilistic Inequalities
We will make frequent use of the following concentration and anticoncentration results for sums
of independent random variables.
Lemma 2.6 (Hoeffding Bound). Let Z1 , . . . , Zn be independent random variables supported on {±1}, and
let Z = (1/n) Σ_{i=1}^{n} Zi . Then

∀ν > 0,   P [Z − E[Z] ≥ ν] ≤ e^{−ν²n/2} .
Hoeffding’s bound on the upper tail also applies to random variables that are negative-dependent,
which in this case means that setting any set of the variables B to +1 only makes the variables in
[n] \ B more likely to be −1 [PS97]. Similarly, if the random variables are positive-dependent (their
negations are negative-dependent), then Hoeffding’s bound applies to the lower tail.
Theorem 2.7 (Chernoff Bound). Let Z1 , . . . , Zn be a sequence of independent {0, 1}-valued random
variables, let Z = Σ_{i=1}^{n} Zi , and let µ = E[Z]. Then

1. (Upper Tail) ∀ν > 0,   P [Z ≥ (1 + ν)µ] ≤ e^{−ν²µ/(2+ν)} , and
2. (Lower Tail) ∀ν ∈ (0, 1),   P [Z ≤ (1 − ν)µ] ≤ e^{−ν²µ/2} .
Theorem 2.8 (Anticoncentration [LT13]). Let Z1 , . . . , Zn be independent and uniform in {±1}, and let
Z = (1/n) Σ_{i=1}^{n} Zi . Then for every β > 0, there exists Kβ > 1 such that for every n ∈ N,

∀ν ∈ [Kβ/√n, 1/Kβ],   P [Z ≥ ν] ≥ e^{−(1+β)ν²n/2} .

3 Tracing Using the Top-k Vector
Given a (possibly approximate) top-k vector t of a dataset X, and a target individual y, we define
the following inner product attack.
Aρ,d,k(y, t):
Input: y ∈ {±1}d and t ∈ {0, 1}d. Let τ = √(2k ln(1/ρ)).
If ⟨y, t⟩ > τ, output IN; else output OUT.
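A minimal executable sketch of this attack (ours; the function name is illustrative and numpy is assumed):

    import numpy as np

    def inner_product_attack(y, t, k, rho):
        # Report IN if the inner product of the target y with the top-k vector t
        # exceeds the threshold tau = sqrt(2 k ln(1/rho)); otherwise report OUT.
        tau = np.sqrt(2 * k * np.log(1.0 / rho))
        return "IN" if float(np.dot(y, t)) > tau else "OUT"

Combined with the exact_top_k_vector sketch above, one can simulate the soundness and completeness experiments by drawing X and y uniformly from {±1} and comparing the attack's answers on y and on the rows of X.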
In this section we will analyze this attack when X ∈ {±1}n×d is a uniformly random matrix, and
t = t(X) is the exact top-k vector of X. In this case, we have the following theorem.
Theorem 3.1. There is a universal constant C ∈ (0, 1) such that if ρ > 0 is any parameter and n, d, k ∈ N satisfy d ≤ 2^(Cn), k ≤ Cd and k ln(d/2k) ≥ 8n ln(1/ρ), then Aρ,d,k has the following properties: If X ∼ {±1}n×d, y ∼ {±1}d are independent and uniform, and t(X) is the exact top-k vector of X, then
1. (Soundness) P[ Aρ,d,k(y, t(X)) = IN ] ≤ ρ, and
2. (Completeness) for every i ∈ [n], P[ Aρ,d,k(xi, t(X)) = OUT ] < ρ + e^(−k/4).
We will prove the soundness and completeness properties separately in Lemmas 3.2 and 3.3,
respectively. The proof of soundness is straightforward.
Lemma 3.2 (Soundness). For every ρ > 0, n ∈ N, and k ≤ d ∈ N, if X ∼ {±1}n×d, y ∼ {±1}d are independent and uniformly random, and t(X) is the exact top-k vector, then
P[ ⟨y, t(X)⟩ ≥ √(2k ln(1/ρ)) ] ≤ ρ.
Proof. Recall that τ := √(2k ln(1/ρ)). Since X, y are independent, we have

P_{X,y}[ ⟨y, t(X)⟩ ≥ τ ]
  = Σ_{T ⊆ [d], |T| = k} P_{X,y}[ ⟨y, t(X)⟩ ≥ τ | t(X) = I_T ] · P_X[ t(X) = I_T ]
  = Σ_{T ⊆ [d], |T| = k} P_y[ Σ_{j∈T} y_j ≥ τ ] · P_X[ t(X) = I_T ]        (X, y are independent)
  ≤ max_{T ⊆ [d], |T| = k} P_y[ Σ_{j∈T} y_j ≥ τ ].

For every fixed T, the random variables {y_j}_{j∈T} are independent and uniform on {±1}, so by Hoeffding's inequality,
P[ Σ_{j∈T} y_j ≥ √(2k ln(1/ρ)) ] ≤ ρ.
This completes the proof of the lemma.
We now turn to proving the completeness property, which follows immediately from the following lemma.
Lemma 3.3 (Completeness). There is a universal constant C ∈ (0, 1) such that for every ρ > 0, n ∈ N, d ≤ 2^(Cn), and k ≤ Cd, if X ∼ {±1}n×d is chosen uniformly at random, t(X) is the exact top-k vector, and xi is any row of X, then
P[ ⟨xi, t(X)⟩ ≤ k√(ln(d/2k)/n) − √(2k ln(1/ρ)) ] ≤ ρ + e^(−k/4).
To see how the completeness property of Theorem 3.1 follows from the lemma, observe that if k ln(d/2k) ≥ 8n ln(1/ρ), then k√(ln(d/2k)/n) − √(2k ln(1/ρ)) ≥ τ. Therefore Lemma 3.3 implies that P[ ⟨xi, t(X)⟩ < τ ] ≤ ρ + e^(−k/4), so P[ Aρ,d,k(xi, t(X)) = IN ] ≥ 1 − ρ − e^(−k/4).
Before proving the lemma, we will need a few claims about the distribution of ⟨xi, t(X)⟩. The
first claim asserts that, although X ∈ {±1}n×d is uniform, the k columns of X with the largest
marginals are significantly biased.
Claim 3.4. There is a universal constant C ∈ (0, 1), such that for every n ∈ N, d ≤ 2^(Cn) and k ≤ Cd, if X ∈ {±1}n×d is drawn uniformly at random, then
P[ q_(k)(X) < √(ln(d/2k)/n) ] ≤ e^(−k/4).
Proof of Claim 3.4. For every j ∈ [d], define Ej to be the event that
qj = (1/n) Σ_{i∈[n]} x_ij > √(ln(d/2k)/n).
We would like to apply Theorem 2.8 to the random variable (1/n) Σ_i x_ij. To do so, we need
√(ln(d/2k)/n) ∈ [ K1/√n , 1/K1 ],
where K1 is the universal constant from that theorem (applied with β = 1). These inequalities will be satisfied as long as d ≤ 2^(Cn) and k ≤ Cd for a suitable universal constant C ∈ (0, 1). Applying Theorem 2.8 gives
∀j ∈ [d]   P[Ej] = P[ qj > √(ln(d/2k)/n) ] ≥ 2k/d.
By linearity of expectation, we have that E[ Σ_j 1[Ej] ] ≥ 2k. Since the columns of X are independent, and the events Ej only depend on a single column of X, the events Ej are also independent. Therefore, we can apply a Chernoff bound (Theorem 2.7) to Σ_j 1[Ej] to get
P[ Σ_{j=1}^d 1[Ej] < k ] ≤ e^(−k/4).
If Σ_j 1[Ej] ≥ k, then there exist k values qj that are larger than √(ln(d/2k)/n), so q_(k) is also at least this value. This completes the proof of the claim.
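The following small Monte Carlo sketch (ours, not from the paper, with illustrative parameter choices) shows the phenomenon behind Claim 3.4: even for a uniformly random ±1 matrix, the k-th largest column marginal typically exceeds the √(ln(d/2k)/n) threshold.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, k, trials = 200, 5000, 20, 50
    threshold = np.sqrt(np.log(d / (2 * k)) / n)
    hits = 0
    for _ in range(trials):
        X = rng.choice([-1, 1], size=(n, d))
        q = X.mean(axis=0)
        q_k = np.sort(q)[-k]          # k-th largest marginal q_(k)(X)
        hits += (q_k >= threshold)
    print(f"fraction of trials with q_(k) >= threshold: {hits / trials:.2f}")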
The previous claim establishes that if we restrict X to its top-k columns, the resulting matrix X_t ∈ {±1}n×k is a random matrix whose mean entry is significantly larger than 0. This is enough to establish that the inner product ⟨xi, t(X)⟩ is large in expectation over X. However, the columns of X_t are not necessarily independent, which prevents us from applying concentration to get the high-probability statement we need. The columns of X_t do become independent, though, if we condition on the value and location of the (k + 1)-st marginal.
Claim 3.5. Let X ∈ {±1}n×d be a random matrix from a distribution with independent columns, and let t(X) be its exact top-k vector. For every q ∈ [−1, 1], j ∈ [d], and T ⊆ [d] with |T| = k, the conditional distribution
X | (q_(k+1) = q) ∧ (π(k + 1) = j) ∧ (t(X) = I_T)
also has independent columns.
Proof of Claim 3.5. Suppose we condition on the value of the (k + 1)-st marginal, q_(k+1) = q, its location, π(k + 1) = j, and the set of top-k marginals t = I_T. By definition of the (exact, lexicographically first) top-k vector, we have that if ℓ < j, then ℓ ∈ T if and only if q_ℓ ≥ q. Similarly, if ℓ > j, then ℓ ∈ T if and only if q_ℓ > q. Since we have conditioned on a fixed tuple (q, j, T), the statements q_ℓ > q and q_ℓ ≥ q now depend only on the ℓ-th column. Thus, since the columns of X are independent, they remain independent even when conditioned on any tuple (q, j, T). Specifically, if ℓ < j and ℓ ∈ T, then column ℓ is drawn independently from the conditional distribution ((u1, . . . , un) | (1/n) Σ_i u_i ≥ q), where (u1, . . . , un) ∈ {±1}n are chosen independently and uniformly at random. Similarly, if ℓ > j and ℓ ∈ T, then column ℓ is drawn independently from ((u1, . . . , un) | (1/n) Σ_i u_i > q).
Now we are ready to prove Lemma 3.3.
Proof of Lemma 3.3. For convenience, define γ = √(ln(d/2k)/n) and τc = kγ − √(2k ln(1/ρ)). Fix any row xi of X. We can write
P[ ⟨xi, t(X)⟩ < τc ]
  ≤ P[ ⟨xi, t(X)⟩ < τc | q_(k+1) ≥ γ ] + P[ q_(k+1) < γ ]
  ≤ P[ ⟨xi, t(X)⟩ < τc | q_(k+1) ≥ γ ] + e^(−k/4)        (Claim 3.4)
  ≤ max_{q ≥ γ, j ∈ [d], T ∈ ([d] choose k)} P[ ⟨xi, t(X)⟩ < τc | (q_(k+1) = q) ∧ (π(k + 1) = j) ∧ (t(X) = I_T) ] + e^(−k/4)      (1)
Let G_{q,j,T} be the event (q_(k+1) = q) ∧ (π(k + 1) = j) ∧ (t(X) = I_T). By linearity of expectation, we can write
E[ ⟨xi, t(X)⟩ | G_{q,j,T} ] ≥ kq ≥ kγ.
Using Claim 3.5, we have that Σ_{ℓ : t(X)_ℓ = 1} x_{iℓ} conditioned on G_{q,j,T} is a sum of independent {±1}-valued random variables. Thus,
P[ ⟨xi, t(X)⟩ < τc | G_{q,j,T} ] = P[ ⟨xi, t(X)⟩ < kγ − √(2k ln(1/ρ)) | G_{q,j,T} ]
  ≤ P[ ⟨xi, t(X)⟩ < kq − √(2k ln(1/ρ)) | G_{q,j,T} ]        (q ≥ γ)
  ≤ ρ.                                                      (Hoeffding)
Combining with (1) completes the proof.
4 Tracing Using an Approximate Top-k Vector
In this section we analyze the inner product attack when it is given an arbitrary approximate top-k
vector.
Theorem 4.1. For every ρ > 0, there exist universal constants C, C′ ∈ (0, 1) (depending only on ρ) such that if d ∈ N is sufficiently large and n, d, k ∈ N and α ∈ (0, 1) satisfy
d ≤ 2^(Cn),   k ≤ d^C,   n = C′ k ln(d/2k),   and   α ≤ C √(ln(d/2k)/n),
and t̂ : {±1}n×d → {0, 1}d is any randomized algorithm such that
∀X ∈ {±1}n×d   P[ t̂(X) is an α-accurate top-k vector for X ] ≥ 1 − ρ,
then Aρ,d,k (Section 3) has the following properties: If X ∼ {±1}n×d, y ∼ {±1}d are independent and uniform, then
1. (Soundness) P[ Aρ,d,k(y, t̂(X)) = IN ] ≤ ρ, and
2. (Completeness) P[ #{ i ∈ [n] : Aρ,d,k(xi, t̂(X)) = IN } < (1 − e²ρ)n ] < 2ρ + 2e^(−k/6).
The proof of the soundness property is nearly identical to the case of exact statistics so we will
focus only on proving the completeness property.
Intuitively, the proof of completeness takes the same general form as it did for the case of exact top-k statistics. First, for some parameter γ > 0, with high probability X has at least k marginals that are at least γ. Therefore, any marginal j contained in any α-accurate top-k vector has value qj ≥ λ := γ − α. From this, we can conclude that the expectation of ⟨xi, t̂(X)⟩ is at least k(γ − α), where the expectation is taken over the choices of X, t̂(X), and i. However, unlike the case of the exact marginals, the k columns selected by t̂ may be significantly correlated, so that for some choices of i, ⟨xi, t̂(X)⟩ is small with high probability. At a high level we solve this problem as follows: first, we restrict to the set of dλ columns j such that qj ≥ γ − α, which remain mutually independent. Then we argue that for every fixed α-accurate top-k vector specifying a subset of k of these columns, with overwhelming probability the inner product is large for most choices of i ∈ [n]. Finally, we take a union bound over all (dλ choose k) possible choices of α-accurate top-k vector. To make the union bound tolerable, we need that with high probability dλ is not too big. Our choice of γ was such that only about k columns are above γ; therefore, if we take λ very close to γ, we will also be able to say that dλ is not too much bigger than k. By assuming that the top-k vector is α-accurate for α ≪ γ, we get that λ = γ − α is very close to γ.
Before stating the exact parameters and conditions in Lemma 4.4, we will need to state and
prove a few claims about random matrices.
Claim 4.2. For every β > 0, there is a universal constant C ∈ (0, 1) (depending only on β), such that for every n ∈ N, d ≤ 2^(Cn) and k ≤ Cd, if X ∈ {±1}n×d is drawn uniformly at random, then for
γ := √( (2/(1 + β)) · ln(d/2k)/n ),
we have
P[ q_(k)(X) < γ ] ≤ e^(−k/4).
The above claim is just a slightly more general version of Claim 3.4 (in which we have fixed
β = 1), so we omit its proof.
Claim 4.3. For every n, d ∈ N and every λ ∈ (0, 1), if X ∈ {±1}n×d is drawn uniformly at random, then for dλ := 2d·exp(−λ²n/2),
P[ #{ j : qj > λ } > dλ ] ≤ e^(−dλ/6).
Proof of Claim 4.3. For every j ∈ [d], define Ej to be the event that qj = (1/n) Σ_i x_ij > λ. Since the x_ij's are independent, applying Hoeffding's bound to Σ_i x_ij gives
∀j ∈ [d]   P[Ej] = P[ qj > λ ] ≤ e^(−λ²n/2).
By linearity of expectation, we have that E[ Σ_j 1[Ej] ] ≤ d·e^(−λ²n/2) = dλ/2. Since the columns of X are independent, we can apply a Chernoff bound (Theorem 2.7) to Σ_j 1[Ej], which gives
P[ Σ_{j=1}^d 1[Ej] > dλ ] ≤ e^(−dλ/6).
This completes the proof of the claim.
Now we are ready to state our exact claim about the completeness of the attack when given an
α-accurate top-k vector.
Lemma 4.4 (Completeness). For every ρ > 0, there exist universal constants C2, C3, C4, C5 ∈ (0, 1) (depending only on ρ) such that if n, d, k ∈ N and α ∈ (0, 1) satisfy
4k ≤ min{ (2d)^(C2), 4C4·d },   8n ln(1/ρ) = C3² k ln(2d),   d ≤ 2^(C4·n),   and   α ≤ C5 √(ln(2d)/n),
and t̂ is an algorithm that, for every X ∈ {±1}n×d, outputs an α-accurate top-k vector with probability at least 1 − ρ, then for a uniformly random X ∈ {±1}n×d, we have
P[ #{ i ∈ [n] : ⟨xi, t̂(X)⟩ ≥ τc } < (1 − e²ρ)n ] < 2ρ + e^(−k/4) + e^(−k/6),
where τc := C3 k √(ln(2d)/n) − √(2k ln(1/ρ)).
To see how the completeness property of Theorem 4.1 follows from the lemma, observe that if 8n ln(1/ρ) = C3² k ln(2d), then
τc = C3 k √(ln(2d)/n) − √(2k ln(1/ρ)) = √(2k ln(1/ρ)) = τ,
where τ is the threshold in Aρ,d,k. Therefore Lemma 4.4 implies that
P[ #{ i ∈ [n] : Aρ,d,k(xi, t̂(X)) = IN } < (1 − e²ρ)n ] < ρ + e^(−k/4) + e^(−k/6).
The universal constants C, C′ will be C = min{C2, C4, C5} − δ for an arbitrarily small δ > 0, and C′ = C3². As long as d is sufficiently large, the condition k ≤ d^C in Theorem 4.1 will imply the corresponding condition in the above lemma.
Proof of Lemma 4.4. First, we will condition everything on the event
Gα := {t̂ = t̂(X) is an α-accurate top-k vector of X}.
By assumption, for every X ∈ {±1}n×d , P [Gα ] ≥ 1 − ρ.
For convenience define the constant c := e2 ρ, so that the lemma asserts that, with high probability, A(xi , t̂(X)) = IN for at least (1 − c)n rows xi . As in the proof of completeness for the case of exact
top-k, we will first condition on the event that at least k marginals are above the threshold γ. Now,
by Claim 4.2, with an appropriate choice of
γ := √( (2 / (1 + c/(16 ln(1/ρ)))) · ln(d/2k)/n ),   β := c / (16 ln(1/ρ)),
and the assumptions that k ≤ C4 d and d ≤ 2C4 n for some universal constant C4 depending only on
β, the event
Gγ := { q_(k)(X) ≥ γ = C1 √(ln(2d)/n) }
will hold with probability 1 − e−k/4 . Here we define the universal constants
C1 := √( 2 / (1 + c/(8 ln(1/ρ))) ),   C2 := c / (2c + 4 ln(1/ρ)),
depending only on ρ. These constants were chosen so that provided 4k ≤ (2d)C2 , the inequality in
the definition of Gγ will be satisfied.
In light of the above analysis, we condition the rest of the analysis on the event Gγ, which satisfies P[Gγ] ≥ 1 − e^(−k/4).
If we condition on Gα and Gγ, then for any marginal j chosen by t̂ (i.e., t̂j = 1) we can say that qj ≥ λ for any λ ≤ γ − α. Now, we define the constants
C3 := √( 2 / (1 + c/(4 ln(1/ρ))) ),   C5 := C1 − C3 > 0,
where one can verify that the inequality C1 − C3 > 0 holds for all choices of c. Now, by our assumption that α < C5 √(ln(2d)/n), we can define λ := C3 √(ln(2d)/n).
For any matrix X ∈ {±1}n×d , we can define Sλ = Sλ (X) ⊆ {1, . . . , d} to be the set of columns of X
whose marginals are greater than λ. The analysis above says that, conditioned on Gγ and Gα, if t̂j = 1, then j ∈ Sλ. Note that, if X is chosen uniformly at random, and we define X≥λ ∈ {±1}n×|Sλ| to be the restriction of X to the columns contained in Sλ, then the columns of X≥λ remain independent.
The size of Sλ is a random variable supported on {0, 1, . . . , d}. In our analysis we will need to condition on the event that |Sλ| ≪ d. Using Claim 4.3, we have that if
dλ := 2d·e^(−λ²n/2),
then the event
GS := { |Sλ(X)| ≤ dλ }
satisfies P [GS ] ≥ 1 − e−dλ /6 ≥ 1 − e−k/6 where we have used the fact that dλ ≥ k. This fact is not
difficult to verify from our choice of parameters. Intuitively, since λ ≤ γ, and there are at least
k marginals larger than γ, there must also typically be at least k marginals larger than λ. We
condition the remainder of the analysis on the event GS .
Later in the proof we require that the size of Sλ is small with high probability. Using Claim 4.3, we can say that the size of Sλ is at most dλ = 2d·e^(−λ²n/2) with probability at least 1 − e^(−dλ/6). When q_(k) ≥ γ, the number of marginals greater than λ is at least k, so dλ > k and the error probability e^(−dλ/6) is at most e^(−k/6). We will henceforth condition on the event that |Sλ(X)| ≤ dλ.
We will say that the attack A fails on t̂ when we fail to trace more than cn rows, i.e., A fails when
|{ i : ⟨xi, t̂⟩ < kλ − √(2k ln(1/ρ)) }| > cn = e²ρn.
Formally, we have that

P[ A fails on t̂ ] ≤ P[ (A fails on t̂) ∧ Gα ∧ Gγ ∧ GS ] + P[ ¬Gα ∨ ¬Gγ ∨ ¬GS ]
                  ≤ P[ (A fails on t̂) ∧ Gα ∧ Gγ ∧ GS ] + ρ + e^(−k/4) + e^(−k/6).      (2)
Thus, to complete the proof, it suffices to show that

P[ (A fails on t̂) ∧ Gα ∧ Gγ ∧ GS ]
  = P[ (A fails on t̂) ∧ (t̂ is α-accurate) ∧ (q_(k) ≥ γ) ∧ (|Sλ| ≤ dλ) ]
  ≤ P[ (A fails on t̂) ∧ (t̂ ⊆ Sλ) ∧ (|Sλ| ≤ dλ) ]
  ≤ P[ (∃v ∈ (Sλ choose k) : A fails on v) ∧ (|Sλ| ≤ dλ) ],

where we have abused notation and written t̂ ⊆ Sλ to mean that t̂j = 1 ⟹ j ∈ Sλ, and used v ∈ (Sλ choose k) to mean that v is a subset of Sλ of size exactly k.
We will now upper bound P[ (∃v ∈ (Sλ choose k) : A fails on v) ∧ (|Sλ| ≤ dλ) ]. Observe that, since the columns of X are identically distributed, this probability is independent of the specific choice of Sλ and depends only on |Sλ|. Further, decreasing the size of Sλ only decreases the probability. Thus, we will fix a set S of size exactly dλ and assume Sλ = S. For our canonical choice of set S = {1, . . . , dλ}, we need to bound P[ ∃v ∈ (S choose k) : A fails on v ].
Consider a fixed vector v ⊆ S; that is, a vector v ∈ {0, 1}d such that vj = 1 ⟹ j ∈ S. Define Ei,v to be the event that ⟨xi, v⟩ is too small for a specific row i and a specific vector v ⊆ S. That is,
Ei,v := { ⟨xi, v⟩ < τc := kλ − √(2k ln(1/ρ)) }.
Since the columns of XS are independent, for a fixed i and v, Hoeffding's inequality gives
P[Ei,v] = P[ Σ_{j : vj = 1} x_{i,j} < τc ] ≤ ρ.
We have proved that, for a given row, the probability that ⟨xi, v⟩ is small is small. We want to bound the probability that ⟨xi, v⟩ is small for an entire set of rows R ⊆ [n]. Unfortunately, since we require that qj ≥ λ for every column j ∈ S, the rows xi are no longer independent. However, the rows satisfy a negative-dependence condition, captured in the following claim.
Claim 4.5. For every R ⊆ [n],
P[ ∧_{i∈R} Ei,v ] ≤ ρ^|R|.
To maintain the flow of the analysis, we defer the proof of this claim to Section 4.1.
By definition, A fails on v only if there exists a set R of exactly cn = e²ρn rows such that ∧_{i∈R} Ei,v holds. Taking a union bound over all such sets R and all v, we have

P[ ∃v ∈ (S choose k) : A fails on v ] ≤ (dλ choose k) · (n choose cn) · ρ^(cn)
  ≤ (e·dλ/k)^k · (e·n·ρ/(cn))^(cn)
  ≤ dλ^k · e^(−cn),

where we have used the identity (a choose b) ≤ (ea/b)^b. We have already set the parameter λ, and set dλ = 2d·e^(−λ²n/2). Thus, all that remains is to show that for our choice of parameters dλ^k · e^(−cn) ≤ ρ, which is equivalent to cn ≥ ln(1/ρ) + k ln(dλ). Substituting our choice of λ gives the condition
kλ²n/2 ≥ ln(1/ρ) + k ln(2d) − cn.

One can check that, for our choice of n = C3² k ln(2d) / (8 ln(1/ρ)) and our choice of λ = C3 √(ln(2d)/n), where C3 has been defined above, the preceding inequality is satisfied.
Thus, we have established that
P[ ∃v ∈ (S choose k) : A fails on v ] ≤ dλ^k · e^(−cn) ≤ ρ.
As we have argued above, this implies that
P[ A fails on t̂ ] ≤ 2ρ + e^(−k/4) + e^(−k/6).
This completes the proof of the completeness lemma.
4.1 Proof of Claim 4.5
Recall that, for a given X ∈ {±1}n×d, Ei,v is the event that ⟨xi, v⟩ < τc for a specific row i and a specific vector v ⊆ S, where S = Sλ is the set of columns j of X such that qj ≥ λ. Thus, we can think of XS ∈ {±1}n×|S| as a matrix with |S| independent columns that are uniformly random subject to the constraint that each column's mean is at least λ. Since flipping some entries of XS from −1 to +1 can only increase ⟨xi, v⟩, we will in fact use the distribution X̃S in which each column sums to exactly λn. Thus, when we refer to the probabilities of events involving the random variables x_{i,j}, we will use this distribution X̃S as the probability space. Additionally, since v is fixed and the probability is the same for all v, we will simply write Ei to cut down on notational clutter.
For a specific set R ⊆ [n], we need to calculate P_{X̃S}[ ∧_{i∈R} Ei ].
We can write
P_{X̃S}[ ∧_{i∈R} Ei ] = P_{X̃S}[ ∧_{i∈R} { Σ_{j∈v} x_ij < τc } ]
  ≤ P_{X̃S}[ Σ_{i∈R, j∈v} x_ij < |R|·τc ].      (3)
The key property of X̃S is that its entries x_ij are positively correlated. That is, for every set I ⊂ [n] × S of variables x_ij, we have
P_{X̃S}[ ∀(i, j) ∈ I : x_ij = −1 ] ≤ Π_{(i,j)∈I} P_{X̃S}[ x_ij = −1 ].      (4)
Since the columns of X̃S are independent, if we partition the elements of I into sets I_1, . . . , I_k, where each set I_l has the pairs of I which come from column l, then
P_{X̃S}[ ∀(i, j) ∈ I : x_ij = −1 ] = Π_{l∈[k]} P_{X̃S}[ ∀(i, j) ∈ I_l : x_ij = −1 ].
So it is enough to show that equation (4) holds when I = { (i_1, l), . . . , (i_p, l) }. For simplicity of notation we will refer to these elements of I as {1, . . . , p}. We have that
P[ ∀a ∈ I : x_a = −1 ] = Π_{a=1}^{p} P[ x_a = −1 | ∀b ∈ {1, . . . , a − 1} : x_b = −1 ].      (5)
P xa = −1 (∀b ∈ {1, . . . , a − 1} , xb = −1) .
(5)
a=1
We will show that each of the terms in the product is smaller than P [xa = −1]. For a fixed a ∈ I,
let B be the set {1, . . . , a − 1} and let E be the event that (∀b ∈ B, xb = −1). Since every column of XS
sums to nλ, we have
X
E
xil B = nλ.
i∈[n]
On the other hand, since the bits in B are all set to −1 and all the other bits in column l are equal in
expectation,
X
h
i
E
xil B = −|B| + (n − |B|) · E xa B ,
i∈[n]
which means that
E[xa B] ≥ λ = E[xa ].
h
i
Since P [xa = −1] = (1 − E[xa ])/2, we get that P xa = −1 B ≤ P [xa = −1]. Substituting this back
into (5), we get that the variables are positively correlated.
We have that
E[ Σ_{i∈R, j∈v} x_ij ] = |R|·k·λ,
and since Hoeffding’s inequality applies equally well to positively-correlated random variables [PS97],
we also have
P_{X̃S}[ Σ_{i∈R, j∈v} x_ij ≤ |R|·τc ] ≤ P_{X̃S}[ Σ_{i∈R, j∈v} x_ij < |R|kλ − |R|√(2k ln(1/ρ)) ] ≤ exp( −(|R|√(2k ln(1/ρ)))² / (2|R|k) ) = ρ^|R|.
Substituting this in equation (3), we get that
P_{X̃S}[ ∧_{i∈R} Ei ] ≤ ρ^|R|.
Finally, we use the fact that, by our definition of the distributions XS and X̃S, we have
P_{XS}[ ∧_{i∈R} Ei ] ≤ P_{X̃S}[ ∧_{i∈R} Ei ] ≤ ρ^|R|.
This completes the proof.
Acknowledgements
We are grateful to Adam Smith and Thomas Steinke for many helpful discussions about tracing
attacks and private top-k selection.
References
[BDMN05] Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. Practical privacy:
the SuLQ framework. In PODS, 2005.
[BLST10]
Raghav Bhaskar, Srivatsan Laxman, Adam Smith, and Abhradeep Thakurta. Discovering frequent patterns in sensitive data. In Proceedings of the 16th ACM SIGKDD
international conference on Knowledge discovery and data mining, pages 503–512. ACM,
2010.
[BNS+ 16]
Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, and Jonathan
Ullman. Algorithmic stability for adaptive data analysis. In Proceedings of the 48th
Annual ACM SIGACT Symposium on Theory of Computing, pages 1046–1059. ACM,
2016.
[BS98]
Dan Boneh and James Shaw. Collusion-secure fingerprinting for digital data. IEEE
Transactions on Information Theory, 44(5):1897–1905, 1998.
[BUV14]
Mark Bun, Jonathan Ullman, and Salil P. Vadhan. Fingerprinting codes and the price
of approximate differential privacy. In STOC, 2014.
[DL09]
Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Proceedings
of the forty-first annual ACM symposium on Theory of computing, pages 371–380. ACM,
2009.
[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to
sensitivity in private data analysis. In TCC, 2006.
[DMT07]
Cynthia Dwork, Frank McSherry, and Kunal Talwar. The price of privacy and the limits
of lp decoding. In Proceedings of the thirty-ninth annual ACM symposium on Theory of
computing, pages 85–94. ACM, 2007.
[DN03]
Irit Dinur and Kobbi Nissim. Revealing information while preserving privacy. In
PODS, 2003.
[DN04]
Cynthia Dwork and Kobbi Nissim. Privacy-preserving datamining on vertically partitioned databases. In CRYPTO, 2004.
[DNPR10] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N Rothblum. Differential
privacy under continual observation. In Proceedings of the forty-second ACM symposium
on Theory of computing, pages 715–724. ACM, 2010.
[DSS+ 15]
Cynthia Dwork, Adam D. Smith, Thomas Steinke, Jonathan Ullman, and Salil P. Vadhan.
Robust traceability from trace amounts. In FOCS, 2015.
[DSSU17]
Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. Exposed! A survey of attacks on private data. Annual Review of Statistics and Its Application, 2017.
[DSZ15]
Cynthia Dwork, Weijie Su, and Li Zhang. Private false discovery rate control. arXiv
preprint arXiv:1511.03803, 2015.
[DY08]
Cynthia Dwork and Sergey Yekhanin. New efficient attacks on statistical disclosure
control mechanisms. In Advances in Cryptology - CRYPTO 2008, 28th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2008. Proceedings,
pages 469–480, 2008.
[GLM+ 10] Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, and Kunal Talwar.
Differentially private combinatorial optimization. In Proceedings of the twenty-first
annual ACM-SIAM symposium on Discrete Algorithms, pages 1106–1125. Society for
Industrial and Applied Mathematics, 2010.
[GRU12]
Anupam Gupta, Aaron Roth, and Jonathan Ullman. Iterative constructions and private
data release. In TCC, 2012.
[HR10]
Moritz Hardt and Guy N. Rothblum. A multiplicative weights mechanism for privacypreserving data analysis. In FOCS, 2010.
[HSR+ 08]
Nils Homer, Szabolcs Szelinger, Margot Redman, David Duggan, Waibhav Tembe, Jill
Muehling, John V Pearson, Dietrich A Stephan, Stanley F Nelson, and David W Craig.
Resolving individuals contributing trace amounts of dna to highly complex mixtures
using high-density snp genotyping microarrays. PLoS genetics, 4(8):e1000167, 2008.
[JT12]
Prateek Jain and Abhradeep Thakurta. Mirror descent based database privacy. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques,
pages 579–590. Springer, 2012.
[KRS13]
Shiva Prasad Kasiviswanathan, Mark Rudelson, and Adam Smith. The power of
linear reconstruction attacks. In Proceedings of the Twenty-Fourth Annual ACM-SIAM
Symposium on Discrete Algorithms, pages 1415–1433. Society for Industrial and Applied
Mathematics, 2013.
[KRSU10] Shiva Prasad Kasiviswanathan, Mark Rudelson, Adam Smith, and Jonathan Ullman.
The price of privately releasing contingency tables and the spectra of random matrices
with correlated rows. In Proceedings of the 42nd ACM Symposium on Theory of Computing,
STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 775–784, 2010.
[LT13]
Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: isoperimetry and
processes. Springer Science & Business Media, 2013.
[MT07]
Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In
Foundations of Computer Science, 2007. FOCS’07. 48th Annual IEEE Symposium on,
pages 94–103. IEEE, 2007.
[NTZ13]
Aleksandar Nikolov, Kunal Talwar, and Li Zhang. The geometry of differential privacy:
the sparse and approximate cases. In STOC, 2013.
[PS97]
Alessandro Panconesi and Aravind Srinivasan. Randomized distributed edge coloring via an extension of the chernoff–hoeffding bounds. SIAM Journal on Computing,
26(2):350–368, 1997.
[RR10]
Aaron Roth and Tim Roughgarden. Interactive privacy via the median mechanism. In
STOC, pages 765–774. ACM, June 5–8 2010.
[SOJH09]
Sriram Sankararaman, Guillaume Obozinski, Michael I Jordan, and Eran Halperin.
Genomic privacy and limits of individual detection in a pool. Nature genetics, 41(9):965–
967, 2009.
[ST13]
Adam Smith and Abhradeep Thakurta. Differentially private model selection via
stability arguments and the robustness of the lasso. J Mach Learn Res Proc Track,
30:819–850, 2013.
[SU15]
Thomas Steinke and Jonathan Ullman. Between pure and approximate differential
privacy. CoRR, abs/1501.06095, 2015.
[Tar08]
Gábor Tardos. Optimal probabilistic fingerprint codes. Journal of the ACM (JACM),
55(2):10, 2008.
[Ull13]
Jonathan Ullman. Answering n^{2+o(1)} counting queries with differential privacy is hard.
In STOC, 2013.
[Ull15]
Jonathan Ullman. Private multiplicative weights beyond linear queries. In PODS,
2015.
Priming Neural Networks
arXiv:1711.05918v2 [cs.CV] 17 Nov 2017
Amir Rosenfeld, Mahdi Biparva, and John K. Tsotsos
Department of Electrical Engineering and Computer Science
York University
Toronto, ON, Canada, M3J 1P3
{amir@eecs, mhdbprv@cse,tsotsos@cse}.yorku.ca
Abstract
Visual priming is known to affect the human visual system to allow detection of scene elements, even those that
may have been near unnoticeable before, such as the presence of camouflaged animals. This process has been shown
to be an effect of top-down signaling in the visual system
triggered by the said cue. In this paper, we propose a mechanism to mimic the process of priming in the context of object detection and segmentation. We view priming as having a modulatory, cue dependent effect on layers of features
within a network. Our results show how such a process
can be complementary to, and at times more effective than
simple post-processing applied to the output of the network,
notably so in cases where the object is hard to detect such as
in severe noise. Moreover, we find the effects of priming are
sometimes stronger when early visual layers are affected.
Overall, our experiments confirm that top-down signals can
go a long way in improving object detection and segmentation.
Figure 1: Visual priming: something is hidden in plain sight
in this image. It is unlikely to notice it without a cue on
what it is (for an observer that has not seen this image before). Once a cue is given, perception is modified to allow
successful detection. See footnote at bottom of this page for
the cue, and supplementary material for the full answer.
1. Introduction
Psychophysical and neurophysiological studies of the human visual system confirm the abundance of top-down effects that occur when an image is observed. Such top-down signals can stem from either internal (endogenous) processes of reasoning and attention or external (exogenous) stimuli - i.e. cues - that affect perception (cf. [35], Chapter 3 for a more detailed breakdown). External stimuli having such effects are said to prime the visual system, and potentially have a profound effect on an observer's perception. This often results in an "Aha!" moment for the viewer, as he/she suddenly perceives the image differently; Fig. 1 shows an example of such a case. We make here the distinction between 3 detection strategies: (1) free viewing, (2) priming and (3) pruning. Freely viewing the image, the default strategy, likely reveals nothing more than a dry grassy field near a house. Introducing a cue about a target in the image1 results in one of two possibilities. The first, also known as priming, is a modification to the computation performed when viewing the scene with the cue in mind. The second, which we call pruning, is a modification to the decision process after all the computation is finished. When the task is to detect objects, this can mean retaining all detections matching the cue, even very low confidence ones, and discarding all others. While both are viable ways to incorporate the knowledge brought on by the cue,
1 Object in image: Ø5O
priming often highly increases the chance of detecting the
cued object. Viewing the image for an unlimited amount of
time and pruning the results is less effective; in some cases,
detection is facilitated only by the cue. We claim that priming allows the cue to affect the visual process from early
layers, allowing detection where it was previously unlikely
to occur in free-viewing conditions. This has also recently
gained some neurophysiological evidence [2].
In this paper, we propose a mechanism to mimic the process of visual priming in deep neural networks in the context of object detection and segmentation. The mechanism
transforms an external cue about the presence of a certain
class in an image (e.g., “person”) to a modulatory signal
that affects all layers of the network. This modulatory effect is shown via experimentation to significantly improve
object detection performance when the cue is present, more
so than a baseline which simply applies post-processing to
the network’s result. Furthermore, we show that priming
early visual layers has a greater effect than doing so for
deeper layers. Moreover, the effects of priming are shown to
be much more pronounced in difficult images such as very
noisy ones.
The remainder of the paper is organized as follows: in
Sec. 2 we go over related work from computer vision, psychology and neurophysiology. In Sec. 3 we go over the details of the proposed method. In Sec. 4 we elaborate on various experiments where we evaluate the proposed method in
scenarios of object detection and segmentation. We finish
with some concluding remarks.
2. Related Work
Context has been very broadly studied in cognitive neuroscience [4, 3, 23, 37, 38, 24, 16] and in computer vision
[12, 10, 34, 33, 27, 39, 22]. It is widely agreed [30] that
context plays crucial role for various visual tasks. Attempts
have been made to express a tangible definition for context
due to the increased use in the computer vision community
[34, 33] .
Biederman et al. [4] hypothesizes object-environments
dependencies into five categories: probability, interposition,
support, familiar size, position. Combinations of some of
these categories would form a source of contextual information for tasks such as object detection [33, 30], semantic
segmentation [14], and pose estimation [6]. Context consequently is the set of sources that partially or collectively influence the perception of a scene or the objects within [32].
Visual cues originated from contextual sources, depending on the scope they influence, further direct visual tasks
at either global or local level [34, 33]. Global context such
as scene configuration, imaging conditions, and temporal
continuity refers to cues abstracted across the whole scene.
On the other hand, local context such as semantic relationships and local-surroundings characterize associations
among various parts of similar scenes.

Figure 2: A neural network can be applied to an input in either an unmodified manner (top, Feedforward), pruning the results after running (middle, Pruning), or priming the network via an external signal (cue) that affects all layers of processing (bottom, Priming).
Having delineated various contextual sources, the general process by which the visual hierarchy is modulated
prior to a particular task is referred to as visual priming
[35, 26]. A cue could be provided either implicitly by a contextual source or explicitly through other modalities such as
language.
There has been a tremendous amount of work on using
some form of top-down feedback to contextually prime the
underlying visual representation for various tasks [37, 38,
The objective is to have signals generated from some task prepare the visual hierarchy for the primary task. [30] proposes contextual
priming and feedback for object detection using the Faster
R-CNN framework [29]. The intuition is to modify the detection framework to be able to generate semantic segmentation predictions in one stage. In the second stage, the segmentation primes both the object proposal and classification
modules.
Instead of relying on the same modality for the source
of priming, [9, 25] proposes to modulate features of a visual hierarchy using the embeddings of the language model
trained on the task of visual question answering [1, 17].
In other words, using feature-wise affine transformations,
[25] multiplicatively and additively modulates hidden activities of the visual hierarchy using the top-down priming
signals generated from the language model, while [30] append directly the semantic segmentation predictions to the
visual hierarchy. Recently, [14] proposes to modulate convolutional weight parameters of a neural networks using
segmentation-aware masks. In this regime, the weight parameters of the model are directly approached for the purpose of priming.
Although all these methods modulate the visual representation, none has specifically studied the explicit role of
category cues to prime the visual hierarchy for object detection and segmentation. In this work, we strive to introduce
a consistent parametric mechanism into the neural network
framework. The proposed method allows every portion of
the visual hierarchy to be primed for tasks such as object detection and semantic segmentation. It should be noted that
this use of priming was defined as part of the Selective Tuning (ST) model of visual attention [36]. Other aspects of
ST have recently appeared as part of classification and localization networks as well [5, 41], and our work explores
yet another dimension of the ST theory.
3. Approach
Assume that we have some network N to perform a task
such as object detection or segmentation on an image I. In
addition, we are given some cue h ∈ Rn about the content
of the image. We next describe pruning and priming, how
they are applied and how priming is learned. We assume
that h is a binary encoding of them presence of some target(s) (e.g, objects) - though this can be generalized to other
types of information. For instance, an explicit specification
of color, location, orientation, etc, or an encoded features
representation as can be produced by a vision or language
model. Essentially, one can either ignore this cue, use it to
post-process the results, or use it to affect the computation.
These three strategies are presented graphically in Fig. 2.
Pruning. In pruning, N is fed an image and we use h
to post-process the result. In object detection, all bounding
boxes output by N whose class is different than indicated
by h are discarded. For segmentation, assume N outputs
a score map of size C × h × w, where C is the number
of classes learned by the network, including a background
class. We propose two methods of pruning, with complementary effects. The first type increases recall by ranking
the target class higher: for each pixel (x,y), we set the value
of all score maps inconsistent with h to be −∞, except that of the background. This allows whatever detections of the hinted class exist to be ranked higher than others which previously masked them. The second type simply sets each pixel that the segmentation did not assign to the target class to the background class. This decreases recall but increases the precision. These types of pruning are demonstrated in Fig. 8 and discussed below.
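A minimal sketch of these two pruning variants on a per-pixel score map (ours, using numpy; channel 0 is assumed to be the background class and target is the cued class index):

    import numpy as np

    def prune_rank_up(scores, target):
        # Type-1 pruning: suppress all non-background, non-target score maps so
        # that detections of the cued class can no longer be masked by others.
        # scores has shape (C, h, w); channel 0 is assumed to be background.
        out = scores.copy()
        for c in range(1, out.shape[0]):
            if c != target:
                out[c] = -np.inf
        return out

    def prune_to_background(labels, target):
        # Type-2 pruning: pixels not labeled with the cued class are reset to
        # background (class 0) in the final label map.
        out = labels.copy()
        out[out != target] = 0
        return out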
Priming. Our approach is applicable to any network N
with a convolutional structure, such as a modern network
for object detection, e.g. [20]. To enable priming, we freeze
all weights in N and add a parallel branch Np . The role of
Np is to transform an external cue h ∈ Rn to modulatory
signals which affect all or some of the layers of N . Namely,
let Li be some layer of N. Denote the output of Li by xi ∈
Rci ×hi ×wi where ci is the number of feature planes and
hi , wi are the height and width of the feature planes. Denote
the jth feature plane of xi by xij ∈ Rhi ×wi .
Np modulates each feature plane xij by applying to it a function fij, whose effect we write as
fij(xij, h) = x̂ij.      (1)
The function fij always operates in a spatially-invariant manner - for each element in a feature plane, the same function is applied. Specifically, we use a simple residual function, that is
x̂ij = αij · xij + xij,      (2)
where the coefficients αi = [αi1, . . . , αici]ᵀ are determined by a linear transformation of the cue:
αi = Wi ∗ h.      (3)
An overall view of the proposed method is presented in Fig. 3.
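A minimal PyTorch-style sketch of this per-layer modulation (our illustration; the module name, shapes, and the use of nn.Linear for Wi are assumptions, not the authors' code):

    import torch
    import torch.nn as nn

    class PrimingBlock(nn.Module):
        # Residual, cue-dependent channel modulation for one layer L_i:
        # x_hat[:, j] = alpha[j] * x[:, j] + x[:, j], with alpha_i = W_i h (eq. 1-3).
        def __init__(self, cue_dim, num_channels):
            super().__init__()
            self.W = nn.Linear(cue_dim, num_channels, bias=False)  # plays the role of W_i

        def forward(self, x, h):
            # x: (batch, c_i, h_i, w_i) feature maps; h: (batch, cue_dim) cue.
            alpha = self.W(h)                          # (batch, c_i)
            alpha = alpha.unsqueeze(-1).unsqueeze(-1)  # broadcast over spatial dims
            return alpha * x + x                       # spatially-invariant, residual

    # Usage sketch: modulate a feature map with a one-hot cue.
    block = PrimingBlock(cue_dim=20, num_channels=64)
    x = torch.randn(1, 64, 38, 38)
    h = torch.zeros(1, 20); h[0, 12] = 1.0             # hypothetical class index
    x_hat = block(x, h)

In training, only parameters like self.W above would be updated, with the original network N kept frozen, mirroring the description in Sec. 3.1.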
Types of Modulation The modulation in eq. 2 simply adds a calculated value to the feature plane. We have
experimented with other types of modulation, namely non-residual ones (e.g., purely multiplicative), as well as following the modulated features with a non-linearity (ReLU), or
adding a bias term in addition to the multiplicative part.
The single most important ingredient to reach
good performance was the residual formulation - without
it, training converged to very poor results. The formulation in eq. 2 performed best without any of the above listed
modifications. We note that an additive model, while having converged to better results, is not fully consistent with
biologically plausible models ([36]), which involve suppression/selection of visual features; however, it may be considered a first approximation.
Types of Cues The simplest form of a cue h is an indicator vector of the object(s) to be detected, i.e., a vector of
20 zeros and 1 in the coordinate corresponding to “horse”,
assuming there are 20 possible object classes, such as in
Pascal [11]. We call this a categorical cue because it explicitly carries semantic information about the object. This
means that when a single class k is indicated, αi becomes
the kth column of Wi .
Figure 3: Overall view of the proposed method to prime deep neural networks. A cue about some target in the image is given by an external source or some form of feedback. The process of priming involves affecting each layer of computation of the network by modulating representations along the path.
3.1. Training
To learn how to utilize the cue, we freeze the parameters of our original network N and add the network block Np. During training, with each training example (Ii, yi) fed into N we feed hi into Np, where Ii is an image, yi is the ground-truth set of bounding boxes and hi is the corresponding cue. The output and loss functions of the detection network remain the same, and the error is propagated through the parameters of Np. Fig. 3 illustrates the network. Np is very lightweight with respect to N, as it only contains parameters to transform from the size of the cue h to at most K = Σi ki, where ki is the number of output feature planes in each layer of the network.
Multiple Cues Per Image. Contemporary object detection and segmentation benchmarks [19, 11] often contain more than one object type per image. In this case, we may set each coordinate in h to 1 iff the corresponding class is present in the image. However, this tends to prevent Np from learning to modulate the representation of N in a way which allows it to suppress irrelevant objects. Instead, if an image contains k distinct object classes, we duplicate the training sample k times and for each duplicate set the ground truth to contain only one of the classes. This comes at the expense of a longer training time, depending on the average number k over the dataset.
4. Experiments
We evaluate our method on two tasks: object detection and object class segmentation. In each case, we take a pre-trained deep neural network and explore how it is affected by priming or pruning. Our goal here is not necessarily to improve state-of-the-art results but rather to show how usage of top-down cues can enhance performance. Our setting is therefore different than standard object-detection/segmentation scenarios: we assume that some cue about the objects in the scene is given to the network and the goal is to find how it can be utilized optimally. Such information can be either deduced from the scene, such as in contextual priming [30, 18], or given by an external source, or even be inferred from the task, such as in question answering [1, 17].

Figure 4: (a) Performance gains by priming different parts of the SSD object detector. Priming early parts of the network causes the most significant boost in performance. The black dashed line shows performance by pruning. (b) Testing variants of priming against increasing image noise. The benefits of priming become more apparent in difficult viewing conditions. The x axis indicates which block of the network was primed (1 for primed, 0 for not primed).
Our experiments are conducted on the Pascal VOC [11]
2007 and 2012 datasets. For priming object detection networks we use pre-trained models of SSD [20] and yolo-v2
[28] and for segmentation we use the FCN-8 segmentation
network of [21] and the DeepLab network of [7]. We use
the YellowFin optimizer [40] in all of our experiments, with
a learning rate of either 0.1 or 0.01 (depending on the task).
4.1. Object Detection
We begin by testing our method on object detection. Using an implementation of SSD [20], we apply a pre-trained
detector trained on the trainval sets of Pascal 2012+2007 to
the test set of Pascal 2007. We use the SSD-300 variant
as described in the paper. In this experiment, we trained
and tested on what we cal PAS# : this is a reduced version of Pascal-2007 containing only images with a single
object class (but possibly multiple instances). We use this
reduced dataset to test various aspects of our method, as detailed in the following subsections. Without modification, the detector attains a mAP (mean-average precision) of 81.4% on PAS# (77.4% on the full test set of Pascal 2007). Using simple pruning as described above, this increases to 85.2%. This large boost in performance is perhaps not surprising, since pruning effectively removes all detections of classes that do not appear in the image. The remaining errors are those of false alarms of the "correct" class or misdetections.
4.1.1 Deep vs Shallow Priming
We proceed to the main result, that is, how priming affects
detection. The SSD object detector contains four major
components: (1) a pre-trained part made up of some of the
layers of vgg-16 [31] (a.k.a the “base network” in the SSD
paper), (2) some extra convolutional layers on top of the
vgg-part, (3) a localization part and (4) a class confidence
part. We name these parts vgg, extra, loc and conf, respectively.
To check where priming has the most significant impact,
we select different subsets of these components and denote
them by 4-bit binary vectors si ∈ {0, 1}4 , where the bits
correspond from left to right to the vgg,extra,localization
and confidence parts. For example, s = 1000 means letting
Np affect only the earliest (vgg) part of the detector, while
all other parts remain unchanged by the priming (except
indirectly affecting the deeper parts of the net). We train
Np on 10 different configurations: these include priming
from the deepest layers to the earliest: 1111, 0111, 0011,
0001 and from the earliest layer to the deepest: 1000, 1100,
1110. We add 0100 and 0010 to check the effect of exclusive control over middle layers and finally 0000 as the
default configuration in which Np is degenerate and the result is identical to pruning. Fig 4 (a) shows the effect of
priming each of these subsets of layers on PAS# . Priming
early layers (those at the bottom of the network) has a much
more pronounced effect than priming deep layers. The single largest gain by priming a single component is for the
vgg part: 1000 boosts performance from 85% to 87.1%. A
smaller gain is attained by the extra component: 86.1% for
0100. The performance peaks at 87.3% for 1110, though
this is only marginally higher than attained by 1100 - priming only the first two parts.
4.1.2 Ablation Study
Priming the earliest layers (vgg+extra) of the SSD object
detector brings the most significant boost in performance.
The first component described above contains 15 convolutional layers and the second contains 8 layers, an overall total of 23. To see how much we can gain with priming on the first few layers, we checked the performance
Figure 5: Effects of early priming: we show how mAP increases as we allow priming to affect one additional layer at a time, starting from the very bottom of the network. Priming early layers has a more significant effect than doing so for deeper ones. The numbers indicate how many layers were primed from the first and second blocks of the SSD network, respectively.
on PAS# when training on the first k layers only, for each
k ∈ {1, 2, . . . 23}. Each configuration was trained for 4000
iterations. Fig. 5 shows the performance obtained by each
of these configurations, where i j in the x-axis refers to having trained the first i layers and the first j layers of the first
and second parts respectively. We see that the very first convolutional layer already boosts performance when primed.
The improvement continues steadily as we add more layers
and fluctuates around 87% after the 15th layer. The fluctuation is likely due to randomness in the training process. This
further shows that priming has strong effects when applied
to very early layers of the network.
4.1.3 Detection in Challenging Images
As implied by the introduction, perhaps one of the cases
where the effect of priming is stronger is when facing a
challenging image, such as adverse imaging conditions, low
lighting, camouflage, noise. As one way to test this, we
compared how priming performs under noise. We took each
image in the test set of Pascal 2007 and added random Gaussian noise chosen from a range of standard deviations, from
0 to 100 in increments of 10. The noisy test set of PAS# with
variance σ is denoted PAS# N(σ) . For each σ, we measure the
mAP score attained by either pruning or priming. Note that
none of our experiments involved training with noisy images
- these are only used for testing. We plot the results in Fig.
4 (b). As expected, both methods suffer from decreasing
accuracy as the noise increases. However, priming is more
robust to increasing levels of noise; the difference between
the two methods peaks at a moderate level of noise, that is,
σ = 80, with an advantage of 10.7% in mAP: 34.8% compared to 24.1% by pruning. The gap decreases gradually to
6.1% (26.1% vs 20%) for a noise level of σ = 100. We
believe that this is due to the early-layer effects of priming on the network, selecting features from the bottom up
Figure 6: Priming vs. Pruning. Priming a detector allows it to find objects in images with high levels of noise while mostly
avoiding false-alarms. Left to right (a,b): decreasing detection thresholds (increasing sensitivity). Top to bottom: increasing
levels of noise. Priming (blue dashed boxes) is able to detect the horse (a) across all levels of noise, while pruning (red
dashed boxes) does not. For the highest noise level, the original classifier does not detect the horse at all - so pruning is
ineffective. (b) Priming enables detection of the train for all but the most severe level of noise. Decreasing the threshold for
pruning only produces false alarms. We recommend viewing this figure in color on-line.
to match the cue. Fig 6 shows qualitative examples, comparing priming versus pruning: we increase the noise from
top to bottom and decrease the threshold (increase the sensitivity) from left to right. We show in each image only the
top few detections of each method to avoid clutter. Priming allows the detector to find objects in images with high
levels of noise (see lower rows of a,b). In some cases priming proves to be essential for the detection: lowering the
un-primed detector’s threshold to a minimal level does not
increase the recall of the desired object (a, 4th row); in fact,
it only increases the number of false alarms (b, 2nd row, last
column). Priming, on the other hand, is often less sensitive
to a low threshold and the resulting detection persists along
a range thereof.
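A small sketch of the corruption protocol as we read it (ours; numpy is assumed, and details such as clipping to the 8-bit range are our choice):

    import numpy as np

    def add_gaussian_noise(image, sigma, rng=np.random.default_rng()):
        # Return a noisy copy of an 8-bit image, as in the PAS#_N(sigma) test sets:
        # i.i.d. Gaussian noise with standard deviation sigma is added per pixel.
        noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    # Evaluation sweep: sigma from 0 to 100 in increments of 10, measuring mAP
    # of the primed and pruned detectors on the corrupted test images.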
4.2. Cue Aware Training
In this section, we also test priming on an object detection task as well as segmentation with an added ingredient - multi-cue training and testing. In Sec. 4.1 we limited ourselves to the case where there is only one object class per image. This limitation is often unrealistic. To allow multiple priming cues per image, we modify the training process as follows: for each training sample < I, gt > containing object classes c1, . . . , ck we split the training example for I into k different tuples < Ii, hi, gti >, i ∈ {1, . . . , k}, where the Ii are all identical to I, hi indicates the presence of class ci, and gti is the ground-truth gt reduced to contain only the objects of class ci - meaning the bounding boxes for detection, or the masks for segmentation. This explicitly coerces the priming network Np to learn how to force the output to correspond to the given cue, as the input image remains the same but the cue and desired output change together. We refer to this method as multi-cue aware training (CAT for short), and refer to the unchanged training scheme as regular training.
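A small sketch (ours) of this per-class sample splitting; the data structures are illustrative assumptions, not the authors' code:

    def split_by_cue(image, ground_truth, num_classes):
        # Split one training sample into one sample per class present in it.
        # ground_truth is assumed to be a list of (class_index, annotation) pairs
        # (boxes for detection, masks for segmentation).  Each returned tuple
        # (image, cue, reduced_gt) keeps the image, a one-hot cue for a single
        # class c_i, and only the annotations of that class.
        present = sorted({c for c, _ in ground_truth})
        samples = []
        for c in present:
            cue = [0] * num_classes
            cue[c] = 1
            reduced_gt = [(cc, ann) for cc, ann in ground_truth if cc == c]
            samples.append((image, cue, reduced_gt))
        return samples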
4.2.1 Multi-Cue Segmentation
Here, we test the multi-cue training method on object class segmentation. We begin with the FCN-8 segmentation network of [21]. We train on the training split of SBD (Berkeley Semantic Boundaries Dataset and Benchmark) dataset [13], as is done in [42, 7, 8, 21]. We base our code on an unofficial PyTorch2 implementation3.
2 http://pytorch.org/
3 https://github.com/wkentaro/pytorch-fcn
Figure 7: Effect of priming a segmentation network with different cues. In each row, we see an input image and the output of the network when given different cues. Top row: cues are resp. bottle, diningtable, person. Given a cue (e.g., bottle), the network becomes more sensitive to bottle-like image structures while suppressing others. This happens not by discarding results but rather by affecting computation starting from the early layers.
Testing is done on
the validation set of Pascal 2011, taking care to avoid overlapping images between the training set defined by [13] 4 ,
which leaves us with 736 validation images. The baseline
obtains an average IOU score of 65.3%. As before, we let the
cue be a binary encoding of the classes present in the image.
We train and test the network in two different modes: one is
by setting for each training sample (and testing) the cue so
hi = 1 if the current image contains at least one instance of
class i and 0 otherwise. The other is the multi-cue method
we describe earlier, i.e , splitting each sample to several cues
with corresponding ground-truths so each cue is a one-hot
encoding, indicating only a single class. For both training
strategies, testing the network with a cue creates a similar
improvement in performance, from 65.3% to 69% for regular training and to 69.2% for multi-cue training.
The main advantage of the multi-cue training is that it
allows the priming network Np to force N to focus on different objects in the image. This is illustrated in Fig. 7.
The top row of the figure shows from left to right an input image and the resulting segmentation masks when the
network is cued with classes bottle, diningtable and person.
The bottom row is cued with bus, car, person. The cue-aware training allows the priming network to learn how to
suppress signals relating to irrelevant classes while retaining the correct class from the bottom-up.
Types of Pruning. As mentioned in Sec. 3, we examine two types of pruning to post-process segmentation results. One type removes image regions which were wrongly
labeled as the target class, replacing them with background
and the other increases the recall of previously missed segmentation regions by removing all classes except the target class and retaining pixels where the target class scored higher than the background.
4 For details, please refer to https://github.com/shelhamer/fcn.berkeleyvision.org/tree/master/data/pascal

Figure 8: Comparing different methods of using a cue to improve segmentation. From left to right: (a) input image (with cue overlayed), (b) ground-truth (all classes), (c) unprimed segmentation, (d) pruning type-2, (e) pruning type-1, and (f) priming. In each image, we aid the segmentation network by adding a cue (e.g., "plane"). White regions are marked as "don't care" in the ground truth.

The first type increases precision but cannot increase recall. The second type increases
recall but possibly hinders precision. We found that both types result in a similar overall mean-IOU. Figure 8 shows some examples where both types of pruning result in segmentations inferior to the one resulting from priming: post-processing can increase recall by lowering precision (first row, column d) or increase precision by avoiding false detections (second and fourth rows, column e), while priming (column f) increases both recall and precision. In the second and fourth rows, missing parts of the train/bus are recovered while false classes are removed. In the third and fifth rows, previously undetected small objects are now detected. The person (first row) is segmented more accurately.
DeepLab. Next, we use the DeepLab [7] network for
semantic-segmentation with ResNet-101 [15] as a base network. We do not employ a CRF as post-processing. The
mean-IOU of the baseline is 76.3%. Using priming increases this to 77.15%. While in this case priming does not
improve as much as in the other cases we tested, we find
that it is especially effective at enabling the network to discover small objects which were not previously segmented
by the non-primed version: the primed network discovers
57 objects which were not discovered by the unprimed network, whereas the latter discovers only 3 which were not discovered by the former. Fig. 9 shows some representative examples of where priming was advantageous. Note how the bus and person (first three rows) are segmented by the primed network (last column). We hypothesize that the priming process helps increase the sensitivity of the network to features relevant to the target object. The last row shows a successful segmentation of potted plants with a rather atypical appearance.

Figure 9: Priming a network allows discovery of small objects which are completely missed by the baseline, or ones with uncommon appearance (last row). From left to right: input image, ground-truth, baseline segmentation [7], primed network.

4.2.2 Multi-Cue Object Detection
We apply the CAT method to train priming on object detection as well. For this experiment, we use the YOLOv2 method of [28]. The base network we used is a port of the original network, known as YOLOv2 544 × 544. Trained on the union of Pascal 2007 and 2012 datasets, it is reported by the authors to obtain 78.6% mAP on the test set of Pascal 2007. The implementation we use5 reaches a slightly lower 76.8%, with a PyTorch port of the network weights released by the authors. We use all the convolutional layers of DarkNet (the base network of YOLOv2) to perform priming. We freeze all network parameters of the original detection network and train a priming network with the multi-cue training method for 25 epochs. When using only pruning, performance on the test-set improves to 78.2% mAP. When we include priming as well, this goes up to 80.6%.
5 https://github.com/marvis/pytorch-yolo2
5. Conclusion
We have presented a simple mechanism to prime neural networks, as inspired by psychological top-down effects
known to exist in human observers. We have tested the proposed method on two tasks, namely object detection and
segmentation, using two methods for each task, and comparing it to simple post-processing of the output. Our experiments confirm that as is observed in humans, effective usage of a top-down signal to modulate computations
from early layers not only improves robustness to noise but
also facilitates better object detection and segmentation, enabling detection of objects which are missed by the baselines without compromising precision, notably so for small
objects and those having an atypical appearance.
References
[1] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and
Devi Parikh. Vqa: Visual question answering. In
Proceedings of the IEEE International Conference on
Computer Vision, pages 2425–2433, 2015. 2, 4
[2] Mandy V Bartsch, Kristian Loewe, Christian Merkel,
Hans-Jochen Heinze, Mircea A Schoenfeld, John K
Tsotsos, and Jens-Max Hopf. Attention to color sharpens neural population tuning via feedback processing
in the human visual cortex hierarchy. Journal of Neuroscience, 37(43):10346–10357, 2017. 1
[3] Irving Biederman. On the semantics of a glance at a
scene. 1981. 2
[4] Irving Biederman, Robert J Mezzanotte, and Jan C
Rabinowitz. Scene perception: Detecting and judging objects undergoing relational violations. Cognitive
psychology, 14(2):143–177, 1982. 2
[5] Mahdi Biparva and John Tsotsos. STNet: Selective
Tuning of Convolutional Networks for Object Localization. In The IEEE International Conference on
Computer Vision (ICCV), Oct 2017. 2
[6] Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki,
and Jitendra Malik. Human pose estimation with iterative error feedback. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 4733–4742, 2016. 2
[7] Liang-Chieh Chen, George Papandreou, Iasonas
Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab:
Semantic image segmentation with deep convolutional
nets, atrous convolution, and fully connected crfs.
arXiv preprint arXiv:1606.00915, 2016. 4, 4.2.1,
4.2.1, 9
[8] Jifeng Dai, Kaiming He, and Jian Sun. Boxsup: Exploiting bounding boxes to supervise convolutional
networks for semantic segmentation. In Proceedings
of the IEEE International Conference on Computer Vision, pages 1635–1643, 2015. 4.2.1
[9] Harm de Vries, Florian Strub, Jérémie Mary, Hugo
Larochelle, Olivier Pietquin, and Aaron Courville.
Modulating early visual processing by language.
arXiv preprint arXiv:1707.00683, 2017. 2
[10] Santosh K Divvala, Derek Hoiem, James H Hays,
Alexei A Efros, and Martial Hebert. An empirical
study of context in object detection. In Computer
Vision and Pattern Recognition, 2009. CVPR 2009.
IEEE Conference on, pages 1271–1278. IEEE, 2009.
2
[11] Mark Everingham, Luc Van Gool, Christopher KI
Williams, John Winn, and Andrew Zisserman. The
pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338,
2010. 3, 3.1, 4
[12] Carolina Galleguillos and Serge Belongie. Context
based object categorization: A critical survey. Computer vision and image understanding, 114(6):712–
722, 2010. 2
[13] Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In Computer Vision (ICCV), 2011 IEEE International Conference on,
pages 991–998. IEEE, 2011. 4.2.1
[14] Adam W Harley, Konstantinos G Derpanis, and Iasonas Kokkinos. Segmentation-Aware Convolutional Networks Using Local Attention Masks. arXiv preprint arXiv:1708.04607, 2017. 2
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
Sun. Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer
vision and pattern recognition, pages 770–778, 2016.
4.2.1
[16] Andrew Hollingworth. Does consistent scene context
facilitate object perception? Journal of Experimental
Psychology: General, 127(4):398, 1998. 2
[17] Justin Johnson, Bharath Hariharan, Laurens van der
Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross
Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning.
arXiv preprint arXiv:1612.06890, 2016. 2, 4
[18] Harish Katti, Marius V Peelen, and SP Arun. Object detection can be improved using human-derived contextual expectations. arXiv preprint arXiv:1611.07218, 2016. 4
[19] Tsung-Yi Lin, Michael Maire, Serge Belongie, James
Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and
C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer
vision, pages 740–755. Springer, 2014. 3.1
[20] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and
Alexander C Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages
21–37. Springer, 2016. 3, 4, 4.1
[21] Jonathan Long, Evan Shelhamer, and Trevor Darrell.
Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–
3440, 2015. 4, 4.2.1
[22] Kevin P Murphy, Antonio Torralba, and William T
Freeman. Using the forest to see the trees: A graphical model relating features, objects, and scenes. In
Advances in neural information processing systems,
pages 1499–1506, 2004. 2
[23] Aude Oliva and Antonio Torralba. The role of context
in object recognition. Trends in cognitive sciences,
11(12):520–527, 2007. 2
[24] Stephen E Palmer. The effects of contextual scenes on
the identification of objects. Memory & Cognition,
3(5):519–526, 1975. 2
[25] Ethan Perez, Florian Strub, Harm de Vries, Vincent
Dumoulin, and Aaron Courville. FiLM: Visual Reasoning with a General Conditioning Layer. arXiv
preprint arXiv:1709.07871, 2017. 2
[26] Michael I Posner, Mary Jo Nissen, and William C Ogden. Attended and unattended processing modes: The
role of set for spatial location. Modes of perceiving
and processing information, 137:158, 1978. 2
[27] Andrew Rabinovich, Andrea Vedaldi, Carolina Galleguillos, Eric Wiewiora, and Serge Belongie. Objects in context. In Computer vision, 2007. ICCV
2007. IEEE 11th international conference on, pages
1–8. IEEE, 2007. 2
[28] Joseph Redmon and Ali Farhadi. YOLO9000: Better,
Faster, Stronger. arXiv preprint arXiv:1612.08242,
2016. 4, 4.2.2
[29] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015. 2
[30] Abhinav Shrivastava and Abhinav Gupta. Contextual priming and feedback for faster r-cnn. In European Conference on Computer Vision, pages 330–348. Springer, 2016. 2, 4
[31] Karen Simonyan and Andrew Zisserman. Very deep
convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4.1.1
[32] Thomas M Strat. Employing contextual information
in computer vision. DARPA93, pages 217–229, 1993.
2
[33] Antonio Torralba. Contextual priming for object detection. International journal of computer vision,
53(2):169–191, 2003. 2
[34] Antonio Torralba, Kevin P Murphy, William T Freeman, and Mark A Rubin. Context-based vision system for place and object recognition. In null, page
273. IEEE, 2003. 2
[35] John K Tsotsos. A computational perspective on visual attention. MIT Press, 2011. 1, 2
[36] John K. Tsotsos, Sean M. Culhane, Winky Yan Kei
Wai, Yuzhong Lai, Neal Davis, and Fernando Nuflo.
Modeling visual attention via selective tuning. Artificial Intelligence, 78(1–2):507–545, 1995. Special
Volume on Computer Vision. 2, 3
[37] Endel Tulving, Daniel L Schacter, et al. Priming and
human memory systems. Science, 247(4940):301–
306, 1990. 2
[38] Gagan S Wig, Scott T Grafton, Kathryn E Demos,
and William M Kelley. Reductions in neural activity
underlie behavioral components of repetition priming.
Nature neuroscience, 8(9):1228–1233, 2005. 2
[39] Jian Yao, Sanja Fidler, and Raquel Urtasun. Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 702–709. IEEE, 2012. 2
[40] Jian Zhang, Ioannis Mitliagkas, and Christopher Ré. YellowFin and the Art of Momentum Tuning. arXiv preprint arXiv:1706.03471, 2017. 4
[41] Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down Neural Attention by Excitation Backprop. In European Conference on Computer Vision, pages 543–559. Springer, 2016. 2
[42] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. arXiv preprint arXiv:1612.01105, 2016. 4.2.1
| 9 |
Object category learning and retrieval with
weak supervision
arXiv:1801.08985v1 [cs.CV] 26 Jan 2018
Steven Hickson, Anelia Angelova, Irfan Essa, Rahul Sukthankar
Google Brain / Google Research
(shickson, anelia, irfanessa, sukthankar)@google.com
Abstract
We consider the problem of retrieving objects from image data and learning to
classify them into meaningful semantic categories with minimal supervision. To
that end, we propose a fully differentiable unsupervised deep clustering approach
to learn semantic classes in an end-to-end fashion without individual class labeling
using only unlabeled object proposals. The key contributions of our work are 1)
a kmeans clustering objective where the clusters are learned as parameters of the
network and are represented as memory units, and 2) simultaneously building a
feature representation, or embedding, while learning to cluster it. This approach
shows promising results on two popular computer vision datasets: on CIFAR10 for
clustering objects, and on the more complex and challenging Cityscapes dataset
for semantically discovering classes which visually correspond to cars, people, and
bicycles. Currently, the only supervision provided is segmentation objectness masks,
but this method can be extended to use an unsupervised objectness-based object
generation mechanism which will make the approach completely unsupervised.
1 Introduction
Unsupervised discovery of common patterns is a long standing task for artificial intelligence as
shown in Barlow (1989); Bengio, Courville, and Vincent (2012). Recent deep learning approaches
have offered major breakthroughs in classification into multiple categories with millions of labeled
examples (e.g. Krizhevsky (2009); Szegedy et al. (2015); He et al. (2016) and many others). These
methods rely on a lot of annotated data for training in order to perform well. Unfortunately, labeling is
an inefficient and expensive process, so learning from unlabeled data is desirable for many complex
tasks. At the same time, much of human knowledge and learning is obtained by unsupervised
observations Grossberg (1994).
The goal of this work is to show that semantically meaningful classes can be learned with minimal
supervision. Given a set of objectness proposals, we use the activations of foreground objects in order
to learn deep features to cluster the available data while simultaneously learning the embedding in an
end-to-end manner. More specifically, we propose a differentiable clustering approach that learns
better separability of classes and embedding. The main idea is to store the potential cluster means
as weights in a neural network at the higher levels of feature representation. This allows them to be
learned jointly with the potential feature representation. This differentiable clustering approach is
integrated with Deep Neural Networks (e.g. Szegedy et al. (2015)) to learn semantic classes in an
end-to-end fashion without manual class labeling.
The idea of doing this ‘end-to-end’ is that gradient descent can not only learn good weights for
clustering, it can also change the embedding to allow for better clustering without the use of labels.
We see that this leads to better feature representation. Our results show also that different object
categories emerge and can later be retrieved from test images never before seen by the network,
resulting in clusters of meaningful categories, such as cars, persons, bicycles.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this work we use given segmentation objectness masks, which are candidate objects without labels.
This can be extended by using an independent objectness-based object generation mechanism Pathak
et al. (2017); Faktor and Irani (2014) or by using unsupervised motion segmentation in videos or
structure from motion Vijayanarasimhan et al. (2017).
2 Related Work
Unsupervised learning (Barlow (1989)) and unsupervised deep learning (Bengio, Courville, and
Vincent (2012), Bengio (2012), Bengio and others (2009)) are central topics to Machine Learning.
Unsupervised deep learning has been shown to improve results on classification tasks per Erhan et
al. (2010), especially given small datasets and complicated high dimensional data such as video.
This has been explored by many representations including sequence to sequence learning and textual
representations (Radford, Jozefowicz, and Sutskever (2017), Ramachandran, Liu, and Le (2016)).
Our work focuses on unsupervised deep learning for discovering visual object categories. This has
also been shown to improve results such as in Doersch and Zisserman (2017). Unsupervised discovery
of visual objects has been a large topic of interest in computer vision (Sivic et al. (2005); Russell et
al. (2006); Singh, Gupta, and Efros (2012); Bach and Jordan (2005); Kwak et al. (2015); Pathak et al.
(2017)).
Building specialized, deep embeddings to help computer vision tasks is also a popular approach such
as in Agrawal, Carreira, and Malik (2015). Transfer learning from supervised tasks has proven to be
very successful. Further, Agrawal, Carreira, and Malik (2015) propose learning the lower dimensional
embedding through unsupervised learning and show improved performance when transfered to other
supervised tasks.
Despite the popularity of building different embeddings, there is little work investigating the use of
clustering to modify the embedding in an end-to-end deep learning framework. Bottou and Bengio
(1995) investigate a differentiable version of the kmeans algorithm and examine its convergence
properties. Our work focuses on learnable feature representations (instead of fixed ones as in Bottou
and Bengio (1995)) and introduces memory units for the task.
3 Unsupervised deep clustering
Our unsupervised deep clustering is inspired by Bottou and Bengio (1995), who consider differentiable
clustering algorithms.
We differ from this approach because the features we cluster also change with backpropagation. In
our work, we add a kmeans-like loss that is integrated end-to-end. Our idea is to store the potential
cluster means as weights in the network and thus have them be learned.
The proposed clustering is done simultaneously while building an embedding. Given information of a
potential object vs background (binary labels), clustering in a differentiable way provides a better
embedding for the input data. We show that this method can be used for meaningful semantic retrieval
of related objects.
3.1 Embedding with clustering
We train a convolutional neural network (CNN) to predict foreground and background using oracle
labels of patches of objects and background images. Concurrently, we learn the clustering of objects
by imposing constraints that will force the embedding to be partitioned into multiple semantically
coherent clusters of objects without explicit labels for different objects.
For our experiments, we use random initialization on the fully-connected layers (the last two layers)
and we add the differentiable clustering module after the second to last layer. Note that we only
cluster the foreground labels as background activations are not of interest for clustering; the classifier
can predict foreground vs background with high accuracy (above 90%).
The objective function is shown in Equation 1.
L_k = (1/2N) Σ_{n=1}^{N} min_k [(x_n − w_k)²]      (1)
In this equation, N is the number of samples, k is the number of defined clusters, w is the “weight”
(theoretically and typically the mean of the cluster) for each k, and x is the activations from the fully
connected layer before the classification fully connected layer. This is differentiable and the gradient
descent algorithm is shown in Equation 2.
δw_k = w_k′ − w_k = Σ_{n=1}^{N} { lr (x_n − w_k)  if k = s(x_n, w);  0  otherwise }      (2)
where s(x_n, w) = argmin_k [x_n] and lr is the learning rate. We also add L2 regularization over the weights to the loss L_2 = Σ_j w_j². Furthermore, we use a custom clustering regularization loss L_C that enforces that the clusters are evenly distributed, as defined in Equation 3 and Equation 4.
L_C = (1/NK) Σ_{k=0}^{K} Σ_{j=k}^{K} |count_k − count_j|      (3)

count_k = Σ_{n=0}^{N} { 0  if argmin_k [x_n] = 0;  1  if argmin_k [x_n] = 1 }      (4)
The final loss to be optimized is shown in Equation 5:

L = L_k + α_r L_2 + α_c L_C      (5)
where α_r and α_c are hyperparameters which are tuned during training. For our method, we use α_r = 0.25 and α_c = 1. We apply this loss to every point that is labeled as potentially an object and ignore the background ones when clustering. This way we learn foreground vs. background and then learn clustering of the foreground activations. Optimization was performed with an RMSProp optimizer, with a learning rate of 0.045, momentum 0.9, decay factor 0.9, and an ε of 1.0.
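For concreteness, a minimal PyTorch sketch of this combined objective follows. It is our illustration, not the authors' code: the tensor names are placeholders, and the even-distribution term uses a soft-assignment relaxation of the hard argmin counts in Equations 3–4 so that the whole expression stays differentiable.

import torch

def clustering_losses(x, w, alpha_r=0.25, alpha_c=1.0):
    # x: (N, D) activations of foreground samples from the embedding layer.
    # w: (K, D) learnable cluster "weights" (means), stored as network parameters.
    d2 = ((x[:, None, :] - w[None, :, :]) ** 2).sum(-1)   # squared distances (N, K)
    # Eq. 1: hard-assignment k-means loss; autograd gives the update of Eq. 2 for w.
    L_k = d2.min(dim=1).values.sum() / (2 * x.shape[0])
    # L2 regularization over the cluster weights.
    L_2 = (w ** 2).sum()
    # Eqs. 3-4 (relaxed): encourage clusters to be evenly populated via soft counts.
    assign = torch.softmax(-d2, dim=1)          # (N, K) soft assignments
    counts = assign.sum(dim=0)                  # per-cluster (soft) counts
    N, K = x.shape[0], w.shape[0]
    # full pairwise sum equals twice the upper-triangular sum of Eq. 3
    L_c = (counts[:, None] - counts[None, :]).abs().sum() / (2 * N * K)
    # Eq. 5: total loss
    return L_k + alpha_r * L_2 + alpha_c * L_c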
4 Experimental evaluation
We experiment with a toy example using CIFAR10 and a more challenging example using Cityscapes.
4.1 CIFAR10 dataset
We first test the proposed unsupervised clustering approach on the CIFAR10 Krizhevsky (2009)
dataset. The goal of this experiment is to test if clustering can uncover separate categories in a simple
toy problem with a two class setting.
Clusters    Automobile   Dog
Cluster 0   68.5%        17.9%
Cluster 1   31.5%        82.1%
Table 1: Unsupervised clustering results on CIFAR10 for discovery of two classes. Per cluster
accuracy for each of the two given classes on the test set (class labels are unknown during training).
We selected as an example the dog and automobile classes to label as foreground. We then train a
network from scratch based on the Network in Network architecture (NiN) of Lin, Chen, and Yan (2013) for our experiments. All other classes of CIFAR are considered background for
this experiment. By attaching our modified clustering objective function to the next to last layer, we
attempt to cluster dog and automobile without labels.
We can see in our simple experimental results that classes are naturally clustered with the majority
of examples correctly assigned. Table 1 shows quantitative results on the test set. As seen, 68.5% of the automobile examples and 82.1% of the dog examples are correctly assigned to separate clusters.
Note that in these cases, the concepts and classes of dog and automobile are unknown to the training
algorithm and we are just looking at them after clustering for evaluation.
Classes      Cluster 0   Cluster 1
Person       4320        138
Rider        676         138
Car          1491        4399
Truck        60          69
Bus          49          89
Train        17          16
Motorcycle   88          205
Bicycle      795         787
Table 2: Unsupervised clustering of objects from Cityscapes using our method. The table shows
number of examples assigned to each learned cluster (for K=2).
4.2 Cityscapes dataset
The Cityscapes dataset (Cordts et al. (2016)) is a large-scale dataset that is used for evaluating various
classification, detection, and segmentation algorithms related to autonomous driving. It contains 2975
training, 500 validation, and 1525 test images, where the test set is provided for the purposes of the
Cityscape competition only. In this work, we used the training set for training and the validation set
for testing and visualizing results (as the test set has no annotation results). Annotation is provided
for classes which represent moving agents in the scene, such as pedestrian, car, motorcycle, bicycle,
bus, truck, rider, train. In this work we only use foreground/background labels and intend to discover
semantic groups from among the moving objects.
4.3 Weakly supervised discovery of classes
In this experiment we considered the larger, real-life dataset, Cityscapes (Cordts et al. (2016)),
described above to see if important class categories, e.g. the moving objects in the scene can be
clustered into semantically meaningful classes. We extract the locations and extents of the moving
objects and use that as weak supervision. Note the classes are uneven and car and person dominate.
We show results clustering 8 categories into 2 and 3 clusters despite the rarity of some of them (such
as bicycle). All results below are presented on the validation set. We report the results in terms of
the number of object patches extracted from the available test images. For this dataset, the CNN
architecture is based on the Inception architecture proposed by Szegedy et al. (2015). Since there are
a small number of examples, we pre-train only the convolutional layers of the network.
Results on clustering the 8 classes of moving objects into 2 and 3 clusters are presented in Table 2 and
Table 3 respectively for the learned embedding by the proposed approach and the baseline embedding.
The baseline embedding is calculated by fine-tuning the same architecture in the same manner, but
without our loss (Equation 5); it uses the same amount of information as input as our embedding.
For this experiment, we apply standard kmeans on both activations after training is completed. We
see here that our method provides better clustering for the two dominant classes in the dataset (car
and person). On the other hand, the baseline embedding clusters on one class only, similar to the two
class case. We have consistently observed this behavior for different runs and hypothesize this is due
to the sparse nature of the baseline embedding and it’s activations.
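As a sketch of the post-training clustering step described above (file names and the cluster count are placeholders, not from the paper), the stored activations can be clustered with an off-the-shelf k-means:

from sklearn.cluster import KMeans
import numpy as np

# Placeholder activation matrices: rows are foreground object proposals,
# columns are the embedding from the layer the clustering loss is attached to.
ours = np.load("activations_ours.npy")
baseline = np.load("activations_baseline.npy")

labels_ours = KMeans(n_clusters=3, n_init=10).fit_predict(ours)
labels_base = KMeans(n_clusters=3, n_init=10).fit_predict(baseline)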
Figure 1 visualizes the three retrieved clusters (color-coded) when clustering into 3 clusters with our
approach. We can see that people (in blue) and cars (in green) are often correctly retrieved. Bikes are rarer and more often mistaken, for example in cases where a portion of the patch contains part of a car, or because a bicycle very often has a person riding it. Still, this is an exciting result, given that it is learned without providing a single class label during training.
5 Conclusions
We propose a differentiable clustering objective which learns to separate classes during learning and
build a better embedding. The key idea is to be able to learn the clusters which are stored as weights,
and simultaneously learn the feature representation and the clustering of the data. Our results show
that the proposed approach is useful for extracting semantically related objects.
             Our method                           Baseline
Classes      Cluster 0   Cluster 1   Cluster 2   Cluster 0   Cluster 1   Cluster 2
Person       151         4315        17          4482        1           0
Rider        258         551         7           816         0           0
Car          5195        950         180         6312        13          0
Truck        89          39          5           131         2           0
Bus          127         20          5           152         0           0
Train        25          9           1           35          0           0
Motorcycle   127         76          4           207         0           0
Bicycle      1128        541         450         2119        0           0
Table 3: Unsupervised clustering on the Cityscapes dataset with 3 clusters. The table shows the
number of examples assigned to each learned cluster. Our method (left) and baseline (right). Our
method results in 69.98% accuracy.
Figure 1: Visualization of clusters learned by our method (for K=3). From the figure, the green class
is responsible for retrieving cars, the blue one persons, and the red one bicycles. We can see that both
cars and persons are discovered well but bicycles, a rarer class, can be confused with a person or with
a partially visible car in the background.
References
Agrawal, P.; Carreira, J.; and Malik, J. 2015. Learning to see by moving. CVPR.
Bach, F. R., and Jordan, M. I. 2005. Learning spectral clustering. NIPS.
Barlow, H. 1989. Unsupervised learning. Neural computation.
Bengio, Y., et al. 2009. Learning deep architectures for AI. Foundations and Trends in Machine Learning 2(1):1–127.
Bengio, Y.; Courville, A. C.; and Vincent, P. 2012. Unsupervised feature learning and deep learning: A review
and new perspectives. CoRR, abs/1206.5538.
Bengio, Y. 2012. Deep learning of representations for unsupervised and transfer learning. In Proceedings of
ICML Workshop on Unsupervised and Transfer Learning, 17–36.
Bottou, L., and Bengio, Y. 1995. Convergence properties of the k-means algorithms. In Advances in neural
information processing systems, 585–592.
Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele,
B. 2016. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 3213–3223.
Doersch, C., and Zisserman, A. 2017. Multi-task self-supervised visual learning. arXiv preprint arXiv:1708.07860.
Erhan, D.; Bengio, Y.; Courville, A.; Manzagol, P.-A.; Vincent, P.; and Bengio, S. 2010. Why does unsupervised
pre-training help deep learning? Journal of Machine Learning Research 11(Feb):625–660.
Faktor, A., and Irani, M. 2014. Video segmentation by non-local consensus voting. BMVC.
Grossberg, S. 1994. 3-d vision and figure-ground separation by visual cortex. Perception and Psychophysics.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. CVPR.
Krizhevsky, A. 2009. Learning multiple layers of features from tiny images.
Kwak, S.; Cho, M.; Laptev, I.; Ponce, J.; and Schmid, C. 2015. Unsupervised object discovery and tracking in
video collections. ICCV.
Lin, M.; Chen, Q.; and Yan, S. 2013. Network in network. arXiv preprint arXiv:1312.4400.
Pathak, D.; Girshick, R.; Dollar, P.; Darrell, T.; and Hariharan, B. 2017. Learning features by watching objects
move. CVPR.
Radford, A.; Jozefowicz, R.; and Sutskever, I. 2017. Learning to generate reviews and discovering sentiment.
arXiv preprint arXiv:1704.01444.
Ramachandran, P.; Liu, P. J.; and Le, Q. V. 2016. Unsupervised pretraining for sequence to sequence learning.
arXiv preprint arXiv:1611.02683.
Russell, B. C.; Efros, A. A.; Sivic, J.; Freeman, W. T.; and Zisserman, A. 2006. Using multiple segmentations to
discover objects and their extent in image collections. CVPR.
Singh, S.; Gupta, A.; and Efros, A. A. 2012. Unsupervised discovery of mid-level discriminative patches. ECCV.
Sivic, J.; Russell, B. C.; Efros, A. A.; Zisserman, A.; and Freeman, W. T. 2005. Discovering objects and their
location in images. ICCV.
Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich,
A. 2015. Going deeper with convolutions. CVPR.
Vijayanarasimhan, S.; Ricco, S.; Schmid, C.; Sukthankar, R.; and Fragkiadaki, K. 2017. Sfm-net: Learning of
structure and motion from video.
| 1 |
arXiv:1608.01654v2 [cs.PL] 7 Nov 2016
Hypercollecting Semantics
and its Application to Static Analysis of Information Flow
Mounir Assaf
David A. Naumann
Julien Signoles
Stevens Institute of Technology,
Hoboken, US
[email protected]
Stevens Institute of Technology,
Hoboken, US
[email protected]
Software Reliability and Security Lab,
CEA LIST, Saclay, FR
[email protected]
Éric Totel
Frédéric Tronel
CIDRE, CentraleSupélec,
Rennes, FR
[email protected]
CIDRE, CentraleSupélec,
Rennes, FR
[email protected]
Abstract
We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract
interpretation. The key idea is to define a Galois connection that
directly approximates the hyperproperty of interest. To enable use
of such Galois connections, we introduce a fixpoint characterisation
of hypercollecting semantics, i.e. a “set of sets” transformer. This
makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract
interpretation. We evaluate this technique by deriving example static
analyses. For qualitative information flow, we derive a dependence
analysis similar to the logic of Amtoft and Banerjee (SAS’04) and
the type system of Hunt and Sands (POPL’06). For quantitative information flow, we derive a novel cardinality analysis that bounds the
leakage conveyed by a program instead of simply deciding whether
it exists. This encompasses problems that are hypersafety but not
k-safety. We put the framework to use and introduce variations
that achieve precision rivalling the most recent and precise static
analyses for information flow.
Categories and Subject Descriptors D.2.4 [Software Engineering]: Software/Program Verification–Assertion checkers; D.3 [Programming Languages]; F.3.1 [Logics and meanings of programs]:
Semantics of Programming Language
Keywords static analysis, abstract interpretation, information flow,
hyperproperties
1. Introduction
Most static analyses tell something about all executions of a program.
This is needed, for example, to validate compiler optimizations.
Functional correctness is also formulated in terms of a predicate on
observable behaviours, i.e. more or less abstract execution traces: A
program is correct if all its traces satisfy the predicate. By contrast
with such trace properties, extensional definitions of dependences
involve more than one trace. To express that the final value of a
variable x may depend only on the initial value of a variable y, the
requirement—known as noninterference in the security literature
(Sabelfeld and Myers 2003)—is that any two traces with the same
initial value for y result in the same final value for x. Sophisticated
information flow policies allow dependences subject to quantitative
bounds—and their formalisations involve more than two traces,
sometimes unboundedly many.
For secure information flow formulated as decision problems, the
theory of hyperproperties classifies the simplest form of noninterference as 2-safety and some quantitative flow properties as hypersafety
properties (Clarkson and Schneider 2010). A number of approaches
have been explored for analysis of dependences, including type systems, program logics, and dependence graphs. Several works have
used abstract interpretation in some way. One approach to 2-safety is
by forming a product program that encodes execution pairs (Barthe
et al. 2004; Terauchi and Aiken 2005; Darvas et al. 2005), thereby
reducing the problem to ordinary safety which can be checked by
abstract interpretation (Kovács et al. 2013) or other means. Alternatively, a 2-safety property can be checked by dedicated analyses
which may rely in part on ordinary abstract interpretations for trace
properties (Amtoft et al. 2006).
The theory of abstract interpretation serves to specify and
guide the design of static analyses. It is well known that effective
application of the theory requires choosing an appropriate notion
of observable behaviour for the property of interest (Cousot 2002;
Bertrane et al. 2012, 2015). Once a notion of “trace” is chosen, one
has a program semantics and “all executions” can be formalized in
terms of collecting semantics, which can be used to define a trace
property of interest, and thus to specify an abstract interpretation
(Cousot and Cousot 1977, 1979; Cousot 1999).
The foundation of abstract interpretation is quite general, based
on Galois connections between semantic domains on which collecting semantics is defined. Clarkson and Schneider (2010) formalize
the notion of hyperproperty in a very general way, as a set of sets
of traces. Remarkably, prior works using abstract interpretation for
secure information flow do not directly address the set-of-sets dimension and instead involve various ad hoc formulations. This paper
presents a new approach of deriving information flow static analyses
within the calculational framework of abstract interpretation.
First contribution. We lift collecting semantics to sets of trace
sets, dubbed hypercollecting semantics, in a fixpoint formulation
which is not simply the lifted direct image. This can be composed
with Galois connections that specify hyperproperties beyond 2safety, without recourse to ad hoc additional notions. On the basis
of this foundational advance, it becomes possible to derive static
analyses entirely within the calculational framework of abstract
interpretation (Cousot and Cousot 1977, 1979; Cousot 1999).
Second contribution. We use hypercollecting semantics to
derive an analysis for ordinary dependences. This can be seen as a
rational reconstruction of both the type system of Hunt and Sands
(2006, 2011) and the logic of Amtoft and Banerjee (2004). They
determine, for each variable x, a conservative approximation of the
variables y whose initial values influence the final value of x.
Third contribution. We derive a novel analysis for quantitative
information flow. This shows the benefit of taking hyperproperties
seriously by means of abstract interpretation. For noninterference,
once the variables y on which x depends have fixed values, there
can be only one final value for x. For quantitative information flow,
one is interested in measuring the extent to which other variables
influence x: for a given range of variation for the “high inputs”,
what is the range of variation for the final values of x? We directly
address this question as a hyperproperty: given a set of traces that
agree only on the low inputs, what is the cardinality of the possible
final values for x? Using the hypercollecting semantics, we derive
a novel cardinality abstraction. We show how it can be used for
analysis of quantitative information problems including a bounding
problem which is not k-safety for any k.
The calculational approach disentangles key design decisions
and it enabled us to identify opportunities for improving precision.
We assess the precision of our analyses and provide a formal
characterisation of precision for a quantitative information flow
analysis vis a vis qualitative. Versions of our analyses rival state of
the art analyses for qualitative and quantitative information flow.
Our technical development uses the simplest programming
language and semantic model in which the ideas can be exposed.
One benefit of working entirely within the framework of abstract
interpretation is that a wide range of semantics and analyses are
already available for rich programming languages.
Outline. Following the background (Section 2), we introduce
domains and Galois connections for hyperproperties (Section 3)
and hypercollecting semantics (Section 4). Hyperproperties for
information flow are defined in Section 5. We use the framework to
derive the static analyses in Section 6 and Section 7. Section 8 uses
examples to evaluate the precision of the analyses, and shows how
existing analyses can be leveraged to improve precision. We discuss
related work (Section 9) and conclude. Appendices provide detailed
proofs for all results, as well as a table of symbols.
2. Background: Collecting Semantics, Galois Connections
The formal development uses deterministic imperative programs
over integer variables. Let n range over literal integers Z, x over variables, and ⊕ (resp. cmp) over some arithmetic (resp. comparison)
operators.
c ::= skip | x := e | c1 ; c2 | if b then c1 else c2 | while b do c
e ::= n | x | e1 ⊕ e2 | b
b ::= e1 cmp e2
Different program analyses may consider different semantic
domains as needed to express a given class of program properties.
For imperative programs, the usual domains are based on states
σ ∈ States that map each variable to a value (Winskel 1993).

Figure 1. Fragment of the hierarchy of semantic domains: P(States∗) → P(Trc) → P(States) (arrows denote abstraction).

Some program properties require the use of traces that include intermediate states; others can use more abstract domains. For information flow
properties involving intermediate outputs, or restricted to explicit
data flow (Schoepe et al. 2016), details about intermediate steps
are needed. By contrast, bounding the range of variables can be
expressed in terms of final states. As another example, consider
determining which variables are left unchanged: To express this, we
need both initial and final states.
In this paper we use the succinct term trace for elements of
Trc defined by Trc ≜ States × States, interpreting t ∈ Trc as an
initial and final state. In the literature, these are known as relational
traces, by contrast with maximal trace semantics using the set
States∗ of finite sequences. A uniform framework describes the
relationships and correspondences between these and many other
semantic domains using Galois connections (Cousot 2002). Three
of these domains are depicted in Figure 1.
Given partially ordered sets C, A, the monotone functions α ∈ C → A and γ ∈ A → C comprise a Galois connection, a proposition we write (C, ≤) ⇄_α^γ (A, ⊑), provided they satisfy α(c) ⊑ a iff c ≤ γ(a) for all c ∈ C, a ∈ A.
For example, to specify an analysis that determines which variables are never changed, let A be sets of variables. Define α ∈ P(Trc) → P(Vars) by α(T) = {x | ∀(σ, σ′) ∈ T, σ(x) = σ′(x)} and γ(X) = {(σ, σ′) | ∀x ∈ X, σ(x) = σ′(x)}. Then (P(Trc), ⊆) ⇄_α^γ (P(Var), ⊇).
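To make this example concrete, here is a small executable sketch (ours, not from the paper); the variable set and the representation of a relational trace as a pair of dicts are assumptions for illustration.

VARS = ("x", "y")

def alpha(traces):
    # alpha(T) = {x | every (s, s') in T has s(x) = s'(x)}
    return {v for v in VARS if all(s[v] == s2[v] for (s, s2) in traces)}

def gamma_contains(unchanged, trace):
    # Membership test for gamma(X): the (infinite) set of traces that preserve
    # every variable in X; exposed as a predicate instead of an explicit set.
    s, s2 = trace
    return all(s[v] == s2[v] for v in unchanged)

T = [({"x": 1, "y": 2}, {"x": 1, "y": 5}),
     ({"x": 3, "y": 0}, {"x": 3, "y": 0})]
print(alpha(T))                          # {'x'}
print(gamma_contains(alpha(T), T[0]))    # True: T is contained in gamma(alpha(T))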
For the hierarchy of usual domains, depicted in Figure 1, the
connections are defined by an “element-wise abstraction”. Define
elt ∈ States∗ → Trc by elt(σ0 σ1 … σn) ≜ (σ0, σn). This lifts to
an abstraction P(States∗ ) → P(Trc).
Lemma 1 (Element-wise abstraction). Let elt ∈ C → A be a function between sets. Let αelt(C) ≜ {elt(c) | c ∈ C} and γelt(A) ≜ {c | elt(c) ∈ A}. Then (P(C), ⊆) ⇄_{αelt}^{γelt} (P(A), ⊆).
The domain P(States), which suffices to describe the final
reachable states of a program, is an abstraction of the relational
domain P(Trc), by elt(σ, τ) ≜ τ. In this paper we focus on the
domain Trc because it is the simplest that can express dependences.
Program semantics. We define both the denotational semantics
⟦c⟧ ∈ Trc⊥ → Trc⊥ of commands and the denotational semantics ⟦e⟧ ∈ Trc → Val of expressions. Here Val ≜ ℤ and Trc⊥ adds
bottom element ⊥ using the flat ordering.
Standard semantics of commands   ⟦c⟧ ∈ Trc⊥ → Trc⊥
⟦c⟧⊥ ≜ ⊥      ⟦skip⟧t ≜ t
⟦x := e⟧(σ, τ) ≜ (σ, τ[x ↦ ⟦e⟧(σ, τ)])
⟦c1 ; c2⟧t ≜ ⟦c2⟧ ◦ ⟦c1⟧t
⟦if b then c1 else c2⟧t ≜ ⟦c1⟧t if ⟦b⟧t = 1, and ⟦c2⟧t if ⟦b⟧t = 0
⟦while b do c⟧t ≜ (lfp^{≼̇}_{λt.⊥} F)(t), where F(w)(t) ≜ t if ⟦b⟧t = 0, and (w ◦ ⟦c⟧)t otherwise
Let t be a trace (σ, τ). The denotation ⟦e⟧t evaluates e in the “current state”, τ. (In Sect. 5 we also use ⟦e⟧pre t, which evaluates e in the initial state, σ.) The denotation ⟦c⟧t is (σ, τ′) where execution of c in τ leads to τ′. The denotation is ⊥ in case c diverges from τ. Boolean expressions evaluate to either 0 or 1. We assume programs do not go wrong. We denote by ≼̇ the point-wise lifting to Trc⊥ → Trc⊥ of the approximation order ≼ on Trc⊥.

The terminating computations of c can be written as its image on the initial traces: {⟦c⟧t | t ∈ IniTrc and ⟦c⟧t ≠ ⊥}, where IniTrc ≜ {(σ, σ) | σ ∈ States}.
To specify properties that hold for all executions we use collecting semantics, which lifts the denotational semantics to arbitrary sets T ∈ P(Trc) of traces. The idea is that ⦃c⦄T is the direct image of ⟦c⟧ on T. To be precise, in this paper we focus on termination-insensitive properties, and thus ⦃c⦄T is the set of non-⊥ traces t′ such that ⟦c⟧t = t′ for some t ∈ T. Later we also use the collecting semantics of expressions: ⦃e⦄T ≜ {⟦e⟧t | t ∈ T}.

Importantly, the collecting semantics ⦃c⦄ ∈ P(Trc) → P(Trc) can be defined compositionally using fixpoints (Cousot 2002, Sec. 7). For conditional guard b, write ⦃grdb⦄ for the filter defined by ⦃grdb⦄T ≜ {t ∈ T | ⟦b⟧t = 1}.
Collecting semantics   ⦃c⦄ ∈ P(Trc) → P(Trc)
⦃skip⦄T ≜ T      ⦃x := e⦄T ≜ {⟦x := e⟧t | t ∈ T}
⦃c1 ; c2⦄T ≜ ⦃c2⦄ ◦ ⦃c1⦄T
⦃if b then c1 else c2⦄T ≜ ⦃c1⦄ ◦ ⦃grdb⦄T ∪ ⦃c2⦄ ◦ ⦃grd¬b⦄T
⦃while b do c⦄T ≜ ⦃grd¬b⦄ (lfp^{⊆}_{T} ⦃if b then c else skip⦄)
The clause for while loops uses the denotation of a constructed
conditional command as a definitional shorthand—its denotation is
compositional.
Given a Galois connection (P(Trc), ⊆) ⇄_α^γ (A, ⊑), such as the one for unmodified variables, the desired analysis is specified as α ◦ ⦃c⦄ ◦ γ. Since it is not computable in general, we only require an approximation f♯ ∈ A → A that is sound in this sense:

α ◦ ⦃c⦄ ◦ γ ⊑̇ f♯      (1)

where ⊑̇ denotes the point-wise lifting of the partial order ⊑. To explain the significance of this specification, suppose one wishes to prove program c satisfies a trace property T ∈ P(Trc), i.e. to prove that ⦃c⦄(IniTrc) ⊆ T. Given eq. (1) it suffices to find an abstract value a that approximates IniTrc, i.e. IniTrc ⊆ γ(a), and show that

γ(f♯(a)) ⊆ T      (2)

Eq. (1) is equivalent to ⦃c⦄ ◦ γ ⊆̇ γ ◦ f♯ by a property of Galois connections. So eq. (2) implies ⦃c⦄(γ(a)) ⊆ T, which (by monotonicity of ⦃c⦄) implies ⦃c⦄(IniTrc) ⊆ ⦃c⦄(γ(a)) ⊆ T. The beauty of specification eq. (1) is that f♯ can be obtained as an abstract interpretation ⦃c⦄♯, derived systematically for all c by calculating from the left side of eq. (1) as shown by Cousot (1999).
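Continuing the unmodified-variables example, the following one-liner sketches what such a sound f♯ can look like for an assignment x := e (our illustration only; the paper obtains its transformers by calculation rather than by fiat).

def assign_sharp(x):
    # Sound abstract transformer for x := e in the "unchanged variables" domain:
    # whatever e is, every variable other than x certainly keeps its value, so we
    # simply drop x from the set (an over-approximation of alpha . {|x := e|} . gamma).
    return lambda unchanged: unchanged - {x}

print(assign_sharp("x")({"x", "y"}))   # {'y'}: y is still known to be unchanged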
3. Domains and Galois Connections for Hyperproperties
To express hyperproperties, we need Galois connections for domains
that involve sets of sets of observable behaviours. This section spells
out how such powerset domains form a hierarchy as illustrated
along the top of Figure 2.

Figure 2. Extended hierarchy of semantic domains: P(P(States∗)) → P(P(Trc)) → P(P(States)) along the top, and the ordinary hierarchy P(States∗) → P(Trc) → P(States) along the bottom (arrows denote abstraction).

We describe how dependences and
cardinalities for quantitative information flow can be formulated
as Galois connections. We spell out a methodology whereby the
standard notions and techniques of abstract interpretation can be
applied to specify and derive—in the same form as Equation (1)—
static analyses for hyperproperties.
As a first example, consider the condition: the final value of x
depends only on the initial value of y. Its expression needs, at least,
two traces: If two traces, denoted by (σ, σ′) and (τ, τ′), agree on the initial value of y then they agree on the final value of x. That is, σ(y) = τ(y) implies σ′(x) = τ′(x). This must hold for any two
traces of the program. This is equivalent to the following: For all
sets T of traces, if traces in T all agree on the initial value of y then
they all agree on the final value of x. Later we extend this example
to an analysis that infers which dependences hold.
Consider the problem of quantifying information flow with min-capacity (Smith 2009). For a program on two integer variables h, l,
the problem is to infer how much information is conveyed via l
about h: considering some traces that agree on the initial value of l,
how many final values are possible for l. For example, the program
l := (h mod 2) + l has two final values for l, for each initial l,
though there are many possible initial values for h. This cardinality
problem generalizes prior work on quantitative flow analysis, where
typically low inputs are not considered.
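As a concrete illustration of this counting problem (a sketch with assumed finite input ranges, not part of the formal development), one can enumerate the relational traces of l := (h mod 2) + l, group them by the initial value of l, and count the distinct final values of l:

from math import log2

H_RANGE = range(0, 8)   # possible initial values of the secret h (assumed range)
L_RANGE = range(0, 4)   # possible initial values of the public l (assumed range)

def run(h, l):
    # Final value of l for the program  l := (h mod 2) + l
    return (h % 2) + l

# For each initial-l equivalence class, count how many final values of l are possible.
max_variety = 0
for l0 in L_RANGE:
    finals = {run(h0, l0) for h0 in H_RANGE}
    max_variety = max(max_variety, len(finals))

print(max_variety)         # 2: only h mod 2 flows into l
print(log2(max_variety))   # 1.0 bit of min-capacity leakage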
Whereas the simple dependence problem can be formulated in
terms of 2 traces, the cardinality problem involves trace sets of
unbounded size. In the terminology of hyperproperties, it is not a
k-safety hyperproperty for any k (Yasuoka and Terauchi 2011, Sec.
3), although it is hypersafety (Clarkson and Schneider 2010). For
a fixed k, the problem “variable l has at most k − 1 final values” is
k-safety, which means it can be formulated in terms of sets with at
most k traces.
It turns out that by using Galois connections on sets of sets, we
can develop a general theory that encompasses many hyperproperties
and which enables derivation of interesting abstract interpreters. For
our applications, we use relational traces as the notion of observable
behavior, and thus P(P(Trc)). The approach works as well for other
notions, so there is a hierarchy of domains as shown at the top of Figure 2, in parallel with the ordinary hierarchy shown along the bottom.
The abstractions of this hierarchy are obtained by lifting each
abstraction between two standard collecting semantics (Cousot
2002) to their hypercollecting versions, by element-wise abstraction (Lemma 1). For instance, Lemma 1 justifies the abstraction
between P(P(Trc)) and P(P(States)), by lifting the abstraction
between P(Trc) and P(States) (Cousot 2002, Sec. 8). Additionally, the diagonal lines in Figure 2 represent abstractions between
hypercollecting semantics defined over some form of observations
and the corresponding collecting semantics defined over the same
observations.
Lemma 2. Let C be a set. Define αhpp(𝒞) ≜ ∪_{C ∈ 𝒞} C and γhpp(C) ≜ P(C). These form a Galois connection: (P(P(C)), ⊆) ⇄_{αhpp}^{γhpp} (P(C), ⊆).
It is noted by Clarkson and Schneider (2010) that any trace
property can be lifted to a unique hyperproperty; this lifting is
exactly the concretisation γhpp of Lemma 2. Although the model
of Clarkson and Schneider (2010) is quite general, it does focus on
infinite traces. But hyperproperties can be formulated in terms of
other notions of observation, as illustrated in Figure 2.
Cardinality abstraction. To lay the groundwork for our quantitative information flow analysis, we consider abstracting a set of
values by its cardinality. Cardinality is one ingredient in many quantitative information flow analyses estimating the amount of sensitive
information a program may leak (Smith 2009; Backes et al. 2009;
Braun et al. 2009; Köpf and Rybalchenko 2013; Mardziel et al.
2013; Doychev et al. 2013). The lattice of abstract representations
we consider is the set
[0, ∞] ≜ ℕ ∪ {∞}
where ∞ denotes an infinite cardinal number. We use the natural order ≤, and max as a join. Consider the abstraction operator crdval ∈ P(Val) → [0, ∞] computing cardinality and given by crdval(V) ≜ |V|. This operator crdval is not additive, i.e. it does not preserve joins; e.g. crdval({1, 2} ∪ {2, 3}) ≠ max(crdval({1, 2}), crdval({2, 3})). Thus, there exists no associated concretisation for which crdval is the lower adjoint in a
Galois connection. Yet, we can lift the abstraction operator crdval
to a Galois connection over P(P(Val)) through what is called a
supremus abstraction (Cousot 2002, p.52).
Lemma 3 (Supremus abstraction). Let elt ∈ C → A be a function from a set C, with codomain forming a complete lattice (A, ⊑). Let αelt(C) ≜ ⊔_{c ∈ C} elt(c) and γelt(a) ≜ {c ∈ C | elt(c) ⊑ a}. Then (P(C), ⊆) ⇄_{αelt}^{γelt} (A, ⊑).
For example, define αcrdval(𝒱) ≜ max_{V ∈ 𝒱} crdval(V) and γcrdval(n) ≜ {V | crdval(V) ≤ n}. Thus we obtain a Galois connection (P(P(Val)), ⊆) ⇄_{αcrdval}^{γcrdval} ([0, ∞], ≤).
As another example let us consider, in simplified form, an
ingredient in dependency or noninterference analysis. For program
variable x, agreex ∈ P(States) → {tt, ff} determines whether a
set of states contains only states that all agree on x’s value:
agreex(Σ) ≜ (∀σ, σ′ ∈ Σ, ⟦x⟧σ = ⟦x⟧σ′)
Function agreex is not additive, so it is not part of a Galois
connection from P(States) to {tt, ff}. The same problem arises
with agreements on multiple variables, and with more concrete
domains like the finite maximal trace semantics P(States∗ ).
We lift the operator agreex to a Galois connection over
P(P(States)). A supremus abstraction yields
αagreex(S) ≜ (∀Σ ∈ S, agreex(Σ))
γagreex(bv) ≜ {Σ | agreex(Σ) ⇐ bv}

so that (P(P(States)), ⊆) ⇄_{αagreex}^{γagreex} ({tt, ff}, ⇐).

These examples are consistent with the many formulations of noninterference (e.g. Goguen and Meseguer 1982; Volpano and Smith 1997; Giacobazzi and Mastroeni 2004; Amtoft and Banerjee 2004; Hunt and Sands 2006) that motivated the characterisation of information-flow security requirements as hyperproperties (Clarkson and Schneider 2010). Concretising an abstract value a can be seen as defining the denotation of a type expression (as in, for instance, Benton (2004, Sec. 3.3.1) and Hunt and Sands (1991)), i.e. defining the set of objects that satisfy the description a. Thus, concretising tt, when tt is interpreted as “satisfies a property requirement”, naturally yields a set of traces. Concretising tt, where tt is interpreted as “satisfies a security requirement”, yields a set of sets of traces.

Intuitively, the most abstract denotation/concretisation of a property requirement is defined in terms of a set of traces. The most
abstract concretisation/denotation of a security requirement yields a
set of sets of traces, namely a hyperproperty. Hints of this intuition
appear in the literature (McLean 1994; Volpano 1999; Rushby 2001;
Zakinthinos and Lerner 1997); e.g. security policies “are predicates
on sets of traces (i.e. they are higher order)” (Rushby 2001, p.2).
However, only recently has a comprehensive framework proposed a
sharp characterisation of security policies as hyperproperties (Clarkson and Schneider 2008, 2010).
Abstract interpretation of hyperproperties. The basic methodology for the verification of a hyperproperty HP, may be described
as follows:
Step 1. Design approximate representations forming a complete
lattice A, choose a collecting semantics C among the extended
hierarchy (set-of-sets domains, e.g. P(P(Trc))), and define α, γ for a Galois connection (C, ≤) ⇄_α^γ (A, ⊑).
Step 2. Compute an approximation a ∈ A of the semantics C ∈ C
of the program P of interest.
Step 3. Prove that the inferred approximation a implies that P
satisfies HP. The concretisation γ(a) is a set of trace sets,
of which the program’s trace set is a member—by contrast to
approximations of trace properties, which infer a single trace set
of which the program trace set is a subset. Then, it suffices to
prove γ(a) ⊆ HP.
Step 1 is guided by the need to have γ(a) ⊆ HP, i.e. a describes
a hyperproperty that implies HP. The calculational design (Cousot
1999) of abstract domains greatly systematises Step 2, by relying on
the Galois connection defined in Step 1. Collecting semantics can be
adapted to the additional structure of sets, as we show in Section 4.
4. Hypercollecting Semantics
In the following, we introduce a hypercollecting semantics defined
over sets T ∈ P(P(Trc)) of sets of traces. This is used in
subsequent sections to derive static analyses.
Here is Step 2 of the methodology, spelled out in detail. Given a Galois connection (P(P(Trc)), ⊆) ⇄_α^γ (A, ⊑♯) built by the supremus abstraction, and an approximation a of the initial traces (i.e. IniTrc is in γ(a)), find an approximation a′ ∈ A of the analysed program c, i.e. ⦃c⦄ IniTrc is in γ(a′). Then prove that the program satisfies the hyperproperty HP of interest, i.e. γ(a′) ⊆ HP. In order to compute a′, we define a hypercollecting semantics LcM ∈ P(P(Trc)) → P(P(Trc)). That will serve to derive—in the manner of Equation (1)—a static analysis that is correct by construction.
Hypercollecting semantics
LcM ∈ P(P(Trc)) → P(P(Trc))
Lx := eM𝕋 ≜ {⦃x := e⦄T | T ∈ 𝕋}
LskipM𝕋 ≜ 𝕋      Lc1 ; c2 M𝕋 ≜ Lc2 M ◦ Lc1 M𝕋
Lif b then c1 else c2 M𝕋 ≜ {⦃c1⦄ ◦ ⦃grdb⦄T ∪ ⦃c2⦄ ◦ ⦃grd¬b⦄T | T ∈ 𝕋}
Lwhile b do cM𝕋 ≜ Lgrd¬b M (lfp^{⊆}_{𝕋} Lif b then c else skipM)
Lgrdb M𝕋 ≜ {⦃grdb⦄T | T ∈ 𝕋}
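To give a feel for these definitions on the loop-free fragment, here is a toy Python sketch (ours, with states as dicts and a trace as an (initial, final) pair) showing LcM mapping a set of trace sets to a set of trace sets:

def assign(x, e):
    # Hypercollecting semantics of  x := e : apply the assignment to every trace
    # of every trace set; TT is a list of trace sets (lists of (initial, final) pairs).
    def run(TT):
        return [[(s, {**s2, x: e(s2)}) for (s, s2) in T] for T in TT]
    return run

def seq(c1, c2):
    # Sequential composition, as in the definition of Lc1 ; c2 M.
    return lambda TT: c2(c1(TT))

# Example: the program  y := x; x := x + 1  applied to one set of initial traces.
init = [[({"x": 0, "y": 0}, {"x": 0, "y": 0}),
         ({"x": 5, "y": 0}, {"x": 5, "y": 0})]]
prog = seq(assign("y", lambda s: s["x"]), assign("x", lambda s: s["x"] + 1))
print(prog(init))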
Recall from Section 2 that standard collecting semantics is a
fixpoint-based formulation that captures the direct image on sets
of the underlying program semantics – this is proved, for example,
by Cachera and Pichardie (2010); Assaf and Naumann (2016). The
fixpoint formulation at the level of sets-of-sets we use is not simply
the direct image of the standard collecting semantics. The direct
image of the standard collecting semantics would yield a set of
(inner) fixpoints over sets of traces, whereas an outer fixpoint over
sets of sets of traces enables straightforward application of the
fixpoint transfer theorem.
Theorem 1. For all c and all T ∈ P(Trc), ⦃c⦄T is in LcM{T}.
For a singleton {T }, the set LcM{T } ∈ P(P(Trc)) is not
necessarily a singleton set containing only the element ⦃c⦄T . If
c is a loop, LcM{T } yields a set of sets R of traces, where each set
R of traces contains only traces that exit the loop after less than
k iterations, for k ∈ ℕ. We prove this theorem as a corollary of the following:

∀𝕋 ∈ P(P(Trc)), {⦃c⦄T | T ∈ 𝕋} ⊆ LcM𝕋
This is proved by structural induction on commands. For loops, there
is a secondary induction on iterations of the loop body.
In summary, suppose one wishes to prove program c satisfies
hyperproperty HP ∈ P(P(Trc)), i.e. one wishes to prove that
⦃c⦄(IniTrc) ∈ HP. Suppose we have an approximation f♯ of the hypercollecting semantics, similarly to eq. (1), i.e.

α ◦ LcM ◦ γ ⊑̇ f♯      (3)

Given eq. (3) it suffices to find an abstract value a that approximates IniTrc, i.e. IniTrc ∈ γ(a), and show that:

γ(f♯(a)) ⊆ HP      (4)

Why? Equation (3) is equivalent to LcM ◦ γ ⊆̇ γ ◦ f♯ by a property of Galois connections. So we have ⦃c⦄(IniTrc) ∈ LcM(γ(a)) ⊆ γ(f♯(a)) ⊆ HP using IniTrc ∈ γ(a), the Theorem, and eq. (4).
5. Information Flow
This section gives a number of technical definitions which build
up to the definition of Galois connections with which we specify
information flow policies explicitly as hyperproperties.
When a fixed main program is considered, we refer to it as P
and its variables as VarP . Our analyses are parametrised by the
program P to analyse, and an initial typing context Γ ∈ VarP → L
mapping each variable to a security level l ∈ L for its initial value.
We assume (L, ⊑, ⊔, ⊓) is a finite lattice. In the most concrete case,
L may be defined as the universal flow lattice, i.e. the powerset of
variables P(VarP ), from which all other information flow types can
be inferred through a suitable abstraction (Hunt and Sands 2006,
Sec. 6.2); the initial typing context is then defined as λx.{x}.
Initial l-equivalence and variety. A key notion in information
flow is l-equivalence. Two states are l-equivalent iff they agree on
the values of variables having security level at most l. We introduce
the same notion over a set of traces, requiring that the initial states
are l-equivalent. Let us first denote by ⟦e⟧pre ∈ Trc → Val the evaluation of expression e in the initial state σ of a trace (σ, τ) ∈ Trc—unlike ⟦e⟧ ∈ Trc → Val which evaluates expression e in the final state τ. Then, we denote by T ⊨Γ l the judgement that all traces in a set T ⊆ Trc are initially l-equivalent, i.e. they all initially agree on the value of variables up to a security level l ∈ L. For example, in the case that L is the universal flow lattice, T ⊨Γ {x, y} means ∀t1, t2 ∈ T, ⟦x⟧pre t1 = ⟦x⟧pre t2 ∧ ⟦y⟧pre t1 = ⟦y⟧pre t2.

Initial l-equivalence   T ⊨Γ l
T ⊨Γ l  iff  ∀t1, t2 ∈ T, ∀x ∈ VarP, Γ(x) ⊑ l ⟹ ⟦x⟧pre t1 = ⟦x⟧pre t2
The notion of variety (Cohen 1977) underlies most definitions of
qualitative and quantitative information flow security. Information is
transmitted from a to b over execution of program P if by “varying
the initial value of a (exploring the variety in a), the resulting
value in b after P’s execution will also vary (showing that variety is
conveyed to b)” (Cohen 1977). We define the l-variety of expression
e, as the set of sets of values e may take, when considering only
initially l-equivalent traces. The variety is defined first as a function
Ol ⦃e⦄ ∈ P(Trc) → P(P(Val)) on trace sets, from which we
obtain a function Ol LeM ∈ P(P(Trc)) → P(P(Val)), on sets of
trace sets. Intuitively, l-variety of expression e is the variety that is
conveyed to e by varying only the input values of variables having a
security level l0 such that ¬(l0 v l).
l-variety
Ol⦃e⦄ ∈ P(Trc) → P(P(Val))
Ol⦃e⦄T ≜ {⦃e⦄R | R ⊆ T and R |=Γ l}
Ol LeM ∈ P(P(Trc)) → P(P(Val))
Ol LeMT ≜ ∪T∈T Ol⦃e⦄T
Each set V ∈ Ol⦃e⦄T of values results from initially l-equivalent traces (R |=Γ l for R ⊆ T). Thus, expression e does not
leak sensitive information to attackers having a security clearance
l ∈ L if Ol ⦃e⦄T is a set of singleton sets. Indeed, sensitive data
for attackers with security clearance l ∈ L is all data having
a security level l0 for which attackers do not have access (i.e.
¬(l0 v l) (Denning and Denning 1977)). Thus, if Ol ⦃e⦄T is a set of
singleton sets, this means that no matter how sensitive information
varies, this variety is not conveyed to expression e.
Besides a pedagogical purpose, we define l-variety Ol ⦃e⦄ (resp.
Ol LeM) instead of simply lifting the denotational semantics JeK of
expressions to sets of traces (resp. sets of sets of traces) since we
want to build modular abstractions of traces by relying on underlying
abstractions of values. Thus, l-variety enables us to pass information
about initially l-equivalent traces to the underlying domain of values
by keeping disjoint values that originate from traces that are not
initially l-equivalent.
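The definition of l-variety can be made concrete by brute-force enumeration on tiny examples. The sketch below is purely illustrative (it enumerates every subset of T, so it is exponential) and assumes the same trace encoding as the previous sketch; the expression is any Python function of the final state.

from itertools import combinations

def initially_l_equivalent(traces, gamma, l):
    observable = [x for x in gamma if gamma[x] <= l]
    return all(t1[0][x] == t2[0][x]
               for t1 in traces for t2 in traces for x in observable)

def l_variety(expr, traces, gamma, l):
    # O^l{|e|}T = { {|e|}R | R ⊆ T and R |=_Gamma l }, as a set of frozensets.
    result = set()
    for k in range(len(traces) + 1):
        for subset in combinations(traces, k):
            if initially_l_equivalent(subset, gamma, l):
                result.add(frozenset(expr(final) for _, final in subset))
    return result

gamma = {"y": frozenset({"y"}), "secret": frozenset({"secret"})}
T = [({"y": 0, "secret": 0}, {"y": 0, "secret": 0, "x": 0}),
     ({"y": 0, "secret": 1}, {"y": 0, "secret": 1, "x": 1})]
# x copies secret here, so some l-equivalent subsets yield two values for x.
print(l_variety(lambda s: s["x"], T, gamma, frozenset({"y"})))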
Specifying information flow. We now have the ingredients needed
to describe information flow for command c, with respect to typing context Γ ∈ VarP → L. A quantitative security metric, introduced by Smith (2009, 2011), relies on min-entropy and min-capacity (Rényi 1961) in order to estimate the leakage of a program. Let us assume a program P that is characterized by a set
TP ∈ P(Trc) of traces, i.e. TP , ⦃ P ⦄ IniTrc. For simplicity, assume attackers only observe the value of a single variable x ∈ VarP .
(The generalization to multiple variables is straightforward.) The
leakage of P, as measured by min-capacity, to attackers having
security clearance l ∈ L is defined by
MLl ≜ log2 ◦ αcrdval ◦ Ol⦃x⦄TP
(The definition of αcrdval follows Lemma 3.) For our purposes,
it suffices to know that this quantity aims to measure, in bits, the
remaining uncertainty about sensitive data for attackers with security
clearance l. Refer to the original work (Smith 2009) for more details.
Leaving aside the logarithm in the definition of MLl , a quantitative security requirement may enforce a limit on the amount of
information leaked to attackers with security clearance l ∈ L, by
requiring that the l-cardinality of variable x is less than or equal to
some non-negative integer k. We denote by SR(l, k, x) the hyperproperty that characterises this security requirement, i.e. the set of
program denotations satisfying it:
SR(l, k, x) ≜ {T ∈ P(Trc) | αcrdval ◦ Ol⦃x⦄T ≤ k}
Note that SR implicitly depends on the choice of initial typing Γ, as
does Ol ⦃x⦄T .
The termination-insensitive noninterference policy “the final
value of x depends only on the initial values of variables labelled
at most l” corresponds to the hyperproperty SR(l, 1, x). Therefore,
the program P satisfies SR(l, 1, x) if αcrdval ◦ Ol ⦃x⦄TP ≤ 1. Let
T = LPM{IniTrc}. Since TP is in T (Theorem 1), P satisfies
SR(l, 1, x) if αcrdval ◦ Ol LxM T ≤ 1, by monotonicity of αcrdval and
by Ol⦃x⦄TP ⊆ Ol LxM T from the definition of Ol L−M.
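As an illustration, the following Python sketch checks membership in SR(l, k, x) on a finite trace set by brute force. It reuses the illustrative trace encoding above and assumes that αcrdval returns the largest cardinality occurring in an l-variety (in the spirit of the supremus abstraction); these are assumptions made for this sketch, not the paper's implementation.

from itertools import combinations

def max_l_cardinality(traces, gamma, l, x):
    # Largest number of distinct final values of x over any initially
    # l-equivalent subset of the traces: a stand-in for alpha_crdval(O^l{|x|}T).
    best = 0
    for size in range(len(traces) + 1):
        for R in combinations(traces, size):
            obs = [v for v in gamma if gamma[v] <= l]
            if all(t1[0][v] == t2[0][v] for t1 in R for t2 in R for v in obs):
                best = max(best, len({final[x] for _, final in R}))
    return best

def satisfies_SR(traces, gamma, l, k, x):
    # T_P ∈ SR(l, k, x); min-capacity leakage ML_l would be log2 of the bound.
    return max_l_cardinality(traces, gamma, l, x) <= k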
6. Dependences
We rely on abstract interpretation to derive a static analysis similar
to existing ones inferring dependences (Amtoft and Banerjee 2004;
Hunt and Sands 2006; Amtoft et al. 2006; Hunt and Sands 2011).
Recall that our analyses are parametrised on a security lattice
L and program P. We denote by l ; x an atomic dependence
constraint, with l ∈ L and x ∈ VarP , read as “agreement up to
security level l leads to agreement on x”. It is an atomic pre-post
contract expressing that the final value of x must only depend on
initial values having at most security level l. Said otherwise, l ; x
states the noninterference of variable x from data that is sensitive
for attackers with security clearance l, i.e. all inputs having security
level l0 such that ¬(l0 v l).
Dependences are similar to information flow types (Hunt and
Sands 2006) and are the dual of independence assertions (Amtoft
and Banerjee 2004). Both interpretations are equivalent (Hunt and
Sands 2006, Sec. 5).
Lattice of dependence constraints
Given a lattice L and program P, define
D ∈ Dep ≜ P({l ; x | l ∈ L, x ∈ VarP})
D1 v\ D2 ≜ D1 ⊇ D2
D1 t\ D2 ≜ D1 ∩ D2
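A minimal Python sketch of this lattice, assuming a constraint l ; x is encoded as a pair (l, x) and a set D of constraints as a frozenset of such pairs (an illustrative encoding, not the paper's):

def leq_dep(d1, d2):
    # D1 v\ D2  iff  D1 ⊇ D2  (more constraints = more precise)
    return d1 >= d2

def join_dep(d1, d2):
    # D1 t\ D2  =  D1 ∩ D2  (keep only constraints guaranteed by both)
    return d1 & d2

d1 = frozenset({("L", "x"), ("L", "y")})
d2 = frozenset({("L", "x")})
assert leq_dep(d1, d2) and join_dep(d1, d2) == d2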
Agreements abstraction
agree ∈ P(Val) → {tt, ff}
agree(V) ≜ (∀v1, v2 ∈ V, v1 = v2)
αagree ∈ P(P(Val)) → {tt, ff}
αagree(V) ≜ ∧V∈V agree(V)
γagree ∈ {tt, ff} → P(P(Val))
γagree(bv) ≜ {V ∈ P(Val) | agree(V) ⇐= bv}
(αagree, γagree) form a Galois connection between (P(P(Val)), ⊆) and ({tt, ff}, ⇐=).

Dependence abstraction
deptr ∈ P(Trc) → Dep
deptr(T) ≜ {l ; x | l ∈ L, x ∈ VarP, αagree(Ol⦃x⦄T)}
αdeptr ∈ P(P(Trc)) → Dep
αdeptr(T) ≜ t\T∈T deptr(T)
γdeptr ∈ Dep → P(P(Trc))
γdeptr(D) ≜ {T | deptr(T) v\ D}
(αdeptr, γdeptr) form a Galois connection between (P(P(Trc)), ⊆) and (Dep, v\).
Note that deptr(T ) is the set of dependences l ; x for which
αagree (Ol ⦃x⦄T ) holds. For instance, the initial typing context
Γ ∈ VarP → L determines the initial dependences of a program:
αdeptr ({IniTrc})
= {l ; x | l ∈ L, x ∈ VarP and αagree (Ol ⦃x⦄ IniTrc)}
= {l ; x | l ∈ L, x ∈ VarP and Γ(x) v l}
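The map deptr can also be illustrated directly on finite trace sets. The Python sketch below uses the encodings of the earlier sketches and relies on the observation that αagree(Ol⦃x⦄T) holds exactly when every two initially l-equivalent traces in T agree on the final value of x; the code is a sketch under those assumptions, not an implementation from the paper.

from functools import reduce

def deptr(traces, gamma, levels, variables):
    deps = set()
    for l in levels:
        obs = [v for v in gamma if gamma[v] <= l]
        for x in variables:
            ok = all(t1[1][x] == t2[1][x]
                     for t1 in traces for t2 in traces
                     if all(t1[0][v] == t2[0][v] for v in obs))
            if ok:
                deps.add((l, x))
    return frozenset(deps)

def alpha_deptr(trace_sets, gamma, levels, variables):
    # alpha_deptr(T) is the join (here: intersection) of deptr over T in T.
    return reduce(lambda a, b: a & b,
                  (deptr(T, gamma, levels, variables) for T in trace_sets))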
We derive an approximation OlD LeM\ of l-variety Ol LeM. This
approximation OlD LeM\ ∈ Dep → {tt, ff}, called l-agreement of
expression e, determines whether a set D of dependence constraints
guarantees that no variety is conveyed to expression e when the
inputs up to security level l are fixed. Notice that we use symbol \
and subscript D here, for contrast with similar notation using ] and
subscript C in later sections.
In the rest of this section, L and P are fixed, together with a
typing context Γ ∈ VarP → L.
The semantic characterisation of dependences is tightly linked
to variety. An atomic constraint l ; x holds if no variety is
conveyed to x when the inputs up to security level l are fixed.
We use this intuition to define the Galois connections linking the
hypercollecting semantics and the lattice Dep, by instantiating the
supremus abstraction in Lemma 3.
The agreement abstraction approximates a set V ∈ P(P(Val))
by determining whether it contains variety.
l-agreement of expressions    OlD LeM\ ∈ Dep → {tt, ff}
OlD LnM\ D ≜ tt
OlD LxM\ D ≜ (l ; x ∈ D)
OlD Le1 ⊕ e2M\ D ≜ OlD Le1M\ D ∧ OlD Le2M\ D
OlD Le1 cmp e2M\ D ≜ OlD Le1M\ D ∧ OlD Le2M\ D
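These clauses are easy to render as a recursive function. The sketch below assumes a tiny expression AST of tuples ("const", n), ("var", x) and ("op", e1, e2) covering both arithmetic and comparison operators, and reuses the frozenset-of-pairs encoding of D; all of this is illustrative rather than taken from the paper's tooling.

def l_agreement(expr, l, D):
    kind = expr[0]
    if kind == "const":
        return True                       # O^l_D[[n]]\ D = tt
    if kind == "var":
        return (l, expr[1]) in D          # O^l_D[[x]]\ D = (l ; x ∈ D)
    if kind == "op":                      # both e1 ⊕ e2 and e1 cmp e2
        return l_agreement(expr[1], l, D) and l_agreement(expr[2], l, D)
    raise ValueError(f"unknown expression {expr!r}")

D = frozenset({("L", "y1"), ("L", "y2")})
print(l_agreement(("op", ("var", "y1"), ("const", 3)), "L", D))   # True
print(l_agreement(("var", "secret"), "L", D))                     # False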
Deriving the clauses defining OlD L−M\ amounts to a constructive
proof of the following.

Lemma 4. OlD LeM\ is sound: ∀e, ∀l, ∀D,  αagree ◦ Ol LeM ◦ γdeptr(D) ⇐= OlD LeM\ D.
Dependence abstract semantics. We derive a dependence abstract
semantics LcM\ by approximating the hypercollecting semantics LcM.
This abstract semantics LcM\ ∈ Dep → Dep over-approximates the
dependence constraints that hold after execution of a command c,
on inputs satisfying initial dependence constraints.
We assume a static analysis approximating the variables that a
command modifies.
Modifiable variables    Mod ∈ Com → P(Var)
For all c and x, if there exist t, t′ ∈ Trc such that JcKt = t′ and
JxKpre t′ ≠ JxK t′, then x ∈ Mod(c).
Note that γagree(tt) is {V ∈ P(Val) | agree(V)} and γagree(ff) is
P(Val). Also, agree(V) iff |V| ≤ 1.
The dependence abstraction approximates a set T ∈ P(P(Trc))
by a dependence constraint D ∈ Dep. Recall that Ol⦃x⦄T is the
set of final values for variable x in traces t ∈ T that agree on inputs
of level at most l. So αagree(Ol⦃x⦄T) holds just if there is at most
one final value.
The abstract semantics of assignments x := e discards all atomic
constraints related to variable x in the input set D of constraints,
and adds atomic constraints l ; x if D guarantees l-agreement
for expression e. For conditionals, for each security level l, if
the input set D guarantees l-agreement of the conditional guard,
the abstract semantics computes the join over the dependences of
both conditional branches, after projecting to only those atomic
constraints related to l (notation π l (−)). If D does not guarantee
l-agreement of the conditional guard, atomic constraints related to
both l and variables possibly modified are discarded. Intuitively, if D
guarantees l-agreement of the conditional guard, then l-agreement
over some variable x in both branches guarantees l-agreement over
x after the conditional command. Otherwise, the only l-agreements
that are guaranteed after the conditional are those that hold before
the conditional for variables that are not modified.
Dependence abstract semantics    LcM\ ∈ Dep → Dep
LskipM\ D ≜ D
Lc1 ; c2M\ D ≜ Lc2M\ ◦ Lc1M\ D
Lx := eM\ D ≜ {l ; y ∈ D | y ≠ x} ∪ {l ; x | l ∈ L, OlD LeM\ D}
Lif b then c1 else c2M\ D ≜
    let D1 = Lc1M\ D in
    let D2 = Lc2M\ D in
    let W = Mod(if b then c1 else c2) in
    ∪l∈L  π l(D1) t\ π l(D2)              if OlD LbM\ D
    ∪l∈L  {l ; x ∈ π l(D) | x ∉ W}        otherwise
Lwhile b do cM\ D ≜ lfp^{v\}_D Lif b then c1 else c2M\
π l(D) ≜ {l ; x ∈ D | x ∈ VarP}

Comparison with previous analyses. Our dependence analysis is
similar to the logic of Amtoft and Banerjee (2004) as well as the
flow-sensitive type system of Hunt and Sands (2006). The relationship
between our sets D ∈ Dep of dependence constraints and the type
environments ∆ ∈ VarP → L of Hunt and Sands can be formalised
by the abstraction:
αhs ∈ Dep → (VarP → L)
αhs(D) ≜ λx. u {l | l ; x ∈ D}
γhs ∈ (VarP → L) → Dep
γhs(∆) ≜ {l ; x | x ∈ VarP, l ∈ L, ∆(x) v l}
This is in fact an isomorphism because of the way we interpret
dependences. Indeed, if l ; x holds, then also l0 ; x for all l0 ∈ L
such that l v l0 (cf. Corollary 4 in Appendix G.2). This observation
suggests reformulating the sets D ∈ Dep of dependence constraints
to contain only elements with minimal level, but we refrain from
doing so for simplicity of presentation.
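A small Python sketch of this back-and-forth, assuming the universal flow lattice (levels are frozensets of variables ordered by inclusion, meet is intersection) and the frozenset-of-pairs encoding of Dep used in the earlier sketches; the function names are purely illustrative.

from functools import reduce

def alpha_hs(D, variables, top):
    # alpha_hs(D) = lambda x. meet { l | l ; x in D }, with meet of the
    # empty family being the top level.
    def level_of(x):
        ls = [l for (l, y) in D if y == x]
        return reduce(lambda a, b: a & b, ls, top)
    return {x: level_of(x) for x in variables}

def gamma_hs(delta, levels):
    # gamma_hs(Delta) = { l ; x | Delta(x) v l }
    return frozenset((l, x) for x in delta for l in levels if delta[x] <= l)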
Our dependence analysis is at least as precise as the type system
of Hunt and Sands. To state this result, we denote by ⊥L the bottom
element of the lattice L. We also assume that the approximation of
modified variables is precise enough to simulate the same effect as
the program counter used in the type system: Mod(c) is a subset of
the variables that are targets of assignments in c.
Theorem 3. For all c, D0, D ∈ Dep, ∆0, ∆ ∈ VarP → L, where
⊥L ` ∆0 {c}∆, and D = LcM\ D0, it holds that:
αhs(D0) v̇ ∆0 =⇒ αhs(D) v̇ ∆.
Theorem 2. The dependence semantics is sound:
αdeptr ◦ LcM ◦ γdeptr v̇ LcM\.
We denote by v̇ the point-wise lifting of the partial order v\ .
We can derive this abstract semantics by directly approximating the
relational hypercollecting semantics LcM through the dependence
Galois connection (αdeptr , γdeptr ). The derivation is by structural
induction on commands. It leverages mathematical properties of
Galois connections. We start with the specification of the best
abstract transformer αdeptr ◦ LcM ◦ γdeptr ∈ Dep → Dep, and
successively approximate it to finally obtain the definition of the
dependence abstract semantics for each form of command. The
derivation is the proof, and the obtained definition of the abstract
semantics is correct by construction.
Let us showcase the simplest derivation for a sequence of
commands in order to illustrate this process:
αdeptr ◦ Lc1 ; c2M ◦ γdeptr
= ⟨By definition of the hypercollecting semantics⟩
αdeptr ◦ Lc2M ◦ Lc1M ◦ γdeptr
v̇ ⟨By γdeptr ◦ αdeptr is extensive⟩
αdeptr ◦ Lc2M ◦ γdeptr ◦ αdeptr ◦ Lc1M ◦ γdeptr
v̇ ⟨By induction hypothesis αdeptr ◦ LcM ◦ γdeptr v̇ LcM\⟩
Lc2M\ ◦ Lc1M\
≜ ⟨Take this last approximation as the definition.⟩
Lc1 ; c2M\
7. Cardinality Abstraction
Dependence analysis is only concerned with whether variety is conveyed. We refine this analysis by deriving a cardinality abstraction
that enumerates variety.
We denote by l ; x#n an atomic cardinality constraint where
l ∈ L, x ∈ VarP and n ∈ [0, ∞], read as “agreement up to security
level l leads to a variety of at most n values in variable x”.
Alternatively, we can leverage Galois connections to give the
analysis as an approximation of the cardinality analysis. We work
this out by Lemmas 6 and 7, introduced in Section 7.
In the rest of this section, L and P are fixed, together with a
typing context Γ ∈ VarP → L.
Lattice of cardinality constraints    C ∈ Card
For a program P and lattice L, we say C is a valid set of constraints
iff ∀x ∈ VarP, ∀l ∈ L, ∃!n ∈ [0, ∞], l ; x#n ∈ C. Let Card be the
set of valid sets of constraints. It is a complete lattice:
C1 v] C2 iff ∀ l ; x#n1 ∈ C1, ∃n2, l ; x#n2 ∈ C2 ∧ n1 ≤ n2
C1 t] C2 ≜ {l ; x# max(n1, n2) | l ; x#n1 ∈ C1, l ; x#n2 ∈ C2}
A valid constraint set is essentially a function from l and x to n.
So v] is essentially a pointwise order on functions, and we ensure
that v] is antisymmetric.
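Since a valid constraint set behaves like a function, a convenient illustrative encoding in Python is a dict from (level, variable) to a count, with float('inf') standing for ∞; the sketch below shows the order and the join under that assumption.

INF = float("inf")

def leq_card(c1, c2):
    # C1 v] C2: pointwise comparison (assumes both have the same key set)
    return all(c1[k] <= c2[k] for k in c1)

def join_card(c1, c2):
    # C1 t] C2: pointwise max
    return {k: max(c1[k], c2[k]) for k in c1}

c1 = {("L", "x"): 1, ("L", "y"): 2}
c2 = {("L", "x"): 3, ("L", "y"): 1}
assert join_card(c1, c2) == {("L", "x"): 3, ("L", "y"): 2}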
The cardinality abstraction relies on the abstraction αcrdval ,
introduced in Section 3, in order to approximate l-variety of a
variable into a cardinality n ∈ [0, ∞].
Cardinality abstraction
crdtr ∈ P(Trc) → Card
crdtr(T) ≜ {l ; x#n | l ∈ L, x ∈ VarP, n = αcrdval(Ol⦃x⦄T)}
αcrdtr ∈ P(P(Trc)) → Card
αcrdtr(T) ≜ t]T∈T crdtr(T)
γcrdtr ∈ Card → P(P(Trc))
γcrdtr(C) ≜ {T | crdtr(T) v] C}
(αcrdtr, γcrdtr) form a Galois connection between (P(P(Trc)), ⊆) and (Card, v]).
The cardinality abstraction enables us to derive an approximation
OlC LeM] of l-variety Ol LeM. This approximation OlC LeM] ∈ Card →
[0, ∞], called l-cardinality of expression e, enumerates the l-variety
conveyed to expression e assuming a set C ∈ Card of cardinality
constraints holds. Note that the infinite cardinal ∞ is absorbing, i.e.
∀n, ∞ × n ≜ ∞.

l-cardinality of expressions    OlC LeM] ∈ Card → [0, ∞]
OlC LnM] C ≜ 1
OlC LxM] C ≜ n where l ; x#n ∈ C
OlC Le1 ⊕ e2M] C ≜ OlC Le1M] C × OlC Le2M] C
OlC Le1 cmp e2M] C ≜ min(2, OlC Le1M] C × OlC Le2M] C)

Lemma 5. OlC LeM] is sound: ∀e, ∀l,  αcrdval ◦ Ol LeM ◦ γcrdtr ≤˙ OlC LeM].
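The clauses above translate directly into a recursive function. The sketch below reuses the dict encoding of Card and a tiny AST that now distinguishes arithmetic from comparisons; it assumes counts are Python numbers with float('inf') for ∞ (note that Python's inf * n matches the absorbing rule only for n > 0, so the n = 0 case would need special handling). All names are illustrative.

INF = float("inf")

def l_cardinality(expr, l, C):
    kind = expr[0]
    if kind == "const":
        return 1
    if kind == "var":
        return C[(l, expr[1])]
    if kind == "arith":                                  # e1 ⊕ e2
        return l_cardinality(expr[1], l, C) * l_cardinality(expr[2], l, C)
    if kind == "cmp":                                    # e1 cmp e2
        return min(2, l_cardinality(expr[1], l, C) * l_cardinality(expr[2], l, C))
    raise ValueError(f"unknown expression {expr!r}")

C = {("L", "y1"): 1, ("L", "secret"): INF}
print(l_cardinality(("cmp", ("var", "y1"), ("var", "secret")), "L", C))  # 2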
We now derive a cardinality abstract semantics by approximating the relational hypercollecting semantics of Section 4. It uses
definitions to follow.
Cardinality abstract semantics    LcM] ∈ Card → Card
LskipM] C ≜ C
Lc1 ; c2M] C ≜ Lc2M] ◦ Lc1M] C
Lx := eM] C ≜ {l ; y#n ∈ C | y ≠ x} ∪ {l ; x#n | l ∈ L, x ∈ VarP, n = OlC LeM] C}
Lif b then c1 else c2M] C ≜
    let C1 = Lc1M] C in
    let C2 = Lc2M] C in
    let W = Mod(if b then c1 else c2) in
    ∪l∈L  π l(C1) t] π l(C2)                      if OlC LbM] C = 1
    ∪l∈L  π l(C1) t]add(W, π l(C)) π l(C2)        otherwise
Lwhile b do cM] C ≜ lfp^{v]}_C Lif b then c1 else c2M]
π l(C) ≜ {l ; x#n ∈ C | x ∈ VarP, n ∈ [0, ∞]}
C1 t]add(W,C0) C2 ≜ ∪x∈VarP\W {l ; x#n ∈ C0} ∪ ∪x∈W {l ; x#(n1+n2) | l ; x#nj ∈ Cj, j = 1, 2}
The abstract semantics of assignments x := e is similar in spirit
to the one for dependences: discard atomic constraints related to x,
and add new ones by computing the l-cardinality of expression e. The
abstract semantics of conditionals is also similar to dependences:
if the conditional guard does not convey l-variety, then all initially
l-equivalent traces follow the same execution path and the join
operator (defined as max over cardinality) over both conditional
branches over-approximates the l-cardinality after the conditional.
Otherwise, the l-cardinalities over both conditional branches have
to be summed—for the variables that may be modified in the
conditional branches—to soundly approximate the l-cardinality after
the conditional.

Theorem 4. The cardinality abstract semantics is sound:
αcrdtr ◦ LcM ◦ γcrdtr v̇ LcM].
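The conditional case is the only subtle one, so here is a small Python sketch of it under the dict encoding of Card used above. C1 and C2 are the results of analysing the branches, C the input constraints, W the set of possibly modified variables, and guard_card(l, C) stands for OlC LbM] C; all of these names are assumptions of the sketch.

INF = float("inf")

def join_if(C1, C2, C, W, levels, variables, guard_card):
    out = {}
    for l in levels:
        if guard_card(l, C) == 1:
            # the guard conveys no l-variety: plain pointwise join (max)
            for x in variables:
                out[(l, x)] = max(C1[(l, x)], C2[(l, x)])
        else:
            # variables untouched by the branches keep their count from C,
            # modified variables get the sum of the two branch counts
            for x in variables:
                out[(l, x)] = (C[(l, x)] if x not in W
                               else C1[(l, x)] + C2[(l, x)])
    return out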
The lattice Card is complete, although not finite. We may
define a widening operator ∇ ∈ Card × Card → Card to ensure
convergence of the analysis (Cousot and Cousot 1992; Nielson et al.
1999; Cortesi and Zanioli 2011, Sec. 4).
C1 ∇ C2 ≜ {l ; x#n | l ; x#n1 ∈ C1, l ; x#n2 ∈ C2, n = n1 ∇ n2}
n1 ∇ n2 ≜ if (n2 ≤ n1) then n1 else ∞
The occurrence of widening depends on the iteration strategy
employed by the static analyser. Widening accelerates or forces
the convergence of fixpoint computations. In the simplest setting,
the analyser passes as arguments to the widening operator the old
set C1 of cardinality as well as the new set C2 that is computed.
For each atomic cardinality constraint, the widening operator then
compares the old cardinality n1 to the new cardinality n2 . If the
cardinality is still strictly increasing (n2 > n1 ), the widening forces
the convergence by setting it to ∞. If the cardinality is decreasing,
the widening operator sets it to the maximum cardinality n1 in
order to force convergence and ensure the sequence of computed
cardinalities is stationary.
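A minimal Python sketch of this widening, under the same illustrative dict encoding of Card, where C1 is the previous iterate and C2 the newly computed one:

INF = float("inf")

def widen_count(n1, n2):
    # keep a non-increasing count, jump to infinity as soon as it grows
    return n1 if n2 <= n1 else INF

def widen_card(C1, C2):
    return {k: widen_count(C1[k], C2[k]) for k in C1}

old = {("L", "x"): 2}
new = {("L", "x"): 3}
assert widen_card(old, new) == {("L", "x"): INF}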
Min-capacity leakage. So far, we showed how one can derive
static analyses of hyperproperties—the abstract representations
themselves are interpreted as hyperproperties—by approximating
hypercollecting semantics. Let us now recall the security requirement SR(l, k, x) introduced in Section 4 in order to illustrate how
these analyses may prove that a program satisfies a hyperproperty,
i.e. Step 3 of the methodology in Section 3 (see also Equation (4)).
Consider a program P characterised by a set TP ∈ P(Trc) of
traces, i.e. TP is ⦃ P ⦄ IniTrc. How do we prove that P satisfies the
hyperproperty SR(l, k, x)? We can use the cardinality analysis to
prove that variable x has an l-cardinality that is at most k. Indeed,
if C approximates TP (i.e. αcrdtr({TP}) v] C) then αcrdval ◦
Ol⦃x⦄TP ≤ OlC LxM] C. Thus, if the inferred l-cardinality of C is
at most k then program P is guaranteed to satisfy the hyperproperty
SR(l, k, x). We have {TP} ⊆ γcrdtr(C) since C approximates TP
(i.e. αcrdtr({TP}) v] C). And we have γcrdtr(C) ⊆ SR(l, k, x)
by assumption OlC LxM] C ≤ k. Hence TP ∈ SR(l, k, x).
The hyperproperty SR(l, k, x) is a (k + 1)-safety hyperproperty (Clarkson and Schneider 2010), i.e. it requires exhibiting
at most k + 1 traces in order to prove that a program does not
satisfy SR(l, k, x). For example, termination-insensitive noninterference for security level l, which corresponds to the hyperproperty
SR(l, 1, x), is 2-safety. A k-safety hyperproperty of a program can
be reduced to a safety property of a k-fold product program (Barthe
et al. 2004; Terauchi and Aiken 2005; Darvas et al. 2005; Clarkson
and Schneider 2010).
Various quantitative information flow properties are not k-safety.
For example, the bounding problem that the cardinality analysis
targets, namely min-capacity leakage, is not a k-safety hyperproperty
for any k (Yasuoka and Terauchi 2011, Sec. 3). Instead, this
bounding problem is hypersafety (Clarkson and Schneider 2010).
Cardinalities vs. dependences. Just as quantitative security metrics are the natural generalisations of qualitative metrics such as noninterference, the cardinality abstraction is a natural generalisation of
dependence analysis. Instead of deciding if variety is conveyed, the
cardinality analysis enumerates this variety. In other words, dependences are abstractions of cardinalities. We can factor the Galois connections, e.g. (αagree , γagree ) is (αlqone ◦ αcrdval , γcrdval ◦ γlqone )
for suitable (αlqone , γlqone ).
Lemma 6. (αagree, γagree) is the composition of two Galois connections (αcrdval, γcrdval) and (αlqone, γlqone):

(P(P(Val)), ⊆) ⇄ ([0, ∞], ≤) ⇄ ({tt, ff}, ⇐=), via (αcrdval, γcrdval) and (αlqone, γlqone) respectively,

with:
αlqone(n) ≜ tt if n ≤ 1, and ff otherwise
γlqone(bv) ≜ 1 if bv = tt, and ∞ otherwise

For simplicity, we consider a two-point lattice {L, H} and an initial
typing context where variables yi are the only low variables (Γ(yi) = L).
As is usual, low may flow to high (L v H). Consider the following program.

if (y1 ≥ secret) then
  x := y2
else
  x := y3
Listing 1. Leaking 1 bit of secret

The cardinality abstraction determines that x has at most 2 values
after the execution of the program in Listing 1, for initially
L-equivalent traces. For fixed low inputs, x has one value in the then
branch and one value in the else branch, and these cardinalities get
summed after the conditional since the conditional guard may evaluate
to 2 different values. Thus, the cardinality abstraction proves
that this example program satisfies the hyperproperty SR(L, 2, x).
Stronger trace properties. Another way of proving a hyperproperty is by proving a stronger trace property. If a program is proven
to satisfy a trace property T ∈ P(Trc), then proving that T is
stronger than hyperproperty H ∈ P(P(Trc))—in the sense that
γhpp (T ) ⊆ H—guarantees the program satisfies the hyperproperty H. For instance, by proving for some program that an output
variable x ranges over an interval of integer values whose size is k,
we can prove that program satisfies SR(L, k, x).
However, approximating a hyperproperty by a trace property
may be too coarse for some programs, as we can illustrate with an
interval analysis (Cousot and Cousot 1977) on the example program
in Listing 1. Such an interval analysis loses too much precision in
the initial state of this program, since it maps all low input variables
y1 , y2 and y3 to [−∞, +∞]. After the conditional, it determines
that x belongs to the interval [−∞, +∞], which is a coarse over-approximation. Also, a polyhedron (Cousot and Halbwachs 1978)
does not capture the disjunction that is needed for this example
program (x = y2 or x = y3 ). Both abstract domains and many more
existing ones are not suitable for the task of inferring cardinalities
or dependences because they are convex. Using them as a basis to
extract counting information delivers an over-approximation of the
leakage, but a coarse one, especially in the presence of low inputs.
A disjunction of two polyhedra —through powerset domains,
disjunctive postconditions, or partitioning (Bourdoncle 1992)— is
as precise as the cardinality analysis for this example. However,
disjunctions are not tractable in general. As soon as one fixes a
maximum number of disjunctive elements (as in the quantitative
information flow analysis of Mardziel et al. (2011, 2013)) or defines
a widening operator to guarantee convergence, one loses the relative
precision wrt. classical dependence analyses (Amtoft and Banerjee
2004; Hunt and Sands 2006) that the cardinality analysis guarantees
(Cf. Corollary 1). Future work will investigate relying on cardinality analysis as a strategy guiding trace partitioning (Rival and
Mauborgne 2007). Combining our analyses with existing domains
will also deliver better precision.
Lemma 7. (αdeptr, γdeptr) is the composition of two Galois connections (αcrdtr, γcrdtr) and (αlqonecc, γlqonecc):

(P(P(Trc)), ⊆) ⇄ (Card, v]) ⇄ (Dep, v\), via (αcrdtr, γcrdtr) and (αlqonecc, γlqonecc) respectively,

with:
αlqonecc(C) ≜ {l ; x | l ; x#n ∈ C and αlqone(n)}
γlqonecc(D) ≜ ∪l∈L, x∈VarP {l ; x#n | n = γlqone(l ; x ∈ D)}
We use Lemmas 6 and 7 to abstract further the cardinality
abstract semantics and derive the correct by construction dependence
analysis of Section 6. This derivation, which can be found in
Appendix G, proves Lemma 4 and Theorem 2 stated earlier.
As a corollary and by Theorem 3, this also proves the precision of the cardinality analysis relative to Amtoft and Banerjee’s
logic (Amtoft and Banerjee 2004) as well as Hunt and Sands’ type
system (Hunt and Sands 2006, 2011).
Corollary 1 (No leakage for well-typed programs). For all c,
C0, C ∈ Card, ∆0, ∆ ∈ VarP → L, where ⊥L ` ∆0 {c}∆,
and C = LcM] C0, it holds that:
αhs ◦ αlqonecc(C0) v̇ ∆0 =⇒ ∀x ∈ VarP, l ∈ L, ∆(x) v l =⇒ OlC LxM] ≤ 1
The cardinality analysis determines that there is no leakage for
programs that are “well-typed” by the flow-sensitive type system
of Hunt and Sands. By “well-typed”, we mean that the final typing
environment that is computed by the type system allows attackers
with security clearance l ∈ L to observe a variable x ∈ VarP .
To the best of our knowledge, the cardinality abstraction is
the first approximation-based analysis for quantitative information
flow that provides a formal precision guarantee wrt. traditional
analyses for qualitative information flow. This advantage makes
the cardinality analysis appealing even when interested in proving
a qualitative security policy such as non-interference, since the
cardinality abstraction provides quantitative information that may
assist in making better informed decisions if declassification is
necessary. Nonetheless, we need further experimentation to compare
to other quantitative analyses —see Section 9.
8. Towards More Precision
This section introduces examples to evaluate the precision of the
analyses, and shows how existing analyses can be leveraged to
improve precision.
Consider the following program.

if (y1 ≥ secret) then x := y2 else x := y3;
o := x * y4
Listing 2. Leaking x
The cardinal abstraction determines that variable o leaks the two
possible values of x: for fixed low inputs, x has two possible values whereas y4 has one possible value. Relational abstract domains
such as polyhedra (Cousot and Halbwachs 1978) or octagons (Miné
2006a) do not support non-linear expressions, and therefore are
unable to compute a precise bound of the leakage for variable o.
Consider an analysis with a disjunction {x = y2 ∨ x = y3 } of polyhedra and linearisation over intervals (Miné 2006b). Linearisation of
expressions y2 ∗y4 and y3 ∗y4 will compute the following constraints
for variable o: {(o = y2 ∗ [−∞, +∞]) ∨ (o = y3 ∗ [−∞, +∞])} if
linearisation happens for the right side of expressions, or constraint
{(o = [−∞, +∞] ∗ y4) ∨ (o = [−∞, +∞] ∗ y4)} if linearisation happens for the left side of expressions. Two more combinations
of constraints are possible, but none will deduce that variable o
has at most 2 values, because the underlying domain of intervals
lacks the required precision. Linearisation over both intervals and
cardinalities delivers better precision.
Improving precision. To improve precision of the cardinality
abstraction, we can augment it with existing abstract domains. One
shortcoming of the cardinality analysis is the fact that it is not
relational. Assuming attackers with security clearance L observe
both variables x and o after execution of the program in Listing 2,
the cardinality abstraction leads us to compute a leakage of two bits:
four different possible values, instead of only 2 possible values for
initially L-equivalent memories. Relying on a relational domain with
linearisation (Miné 2006b) over cardinalities captures the required
constraints {L ; x#2, L ; o#1 ∗ x} to compute a leakage of
only one bit; these constraints are to be interpreted as “initially
L-equivalent memories result in o being equal to one fixed integer
times x, and x having at most 2 values”.

Scaling to richer languages. We can rely on existing abstract
domains to support richer language constructs, e.g. pointers and
aliasing. Consider the following variation of Listing 1.

if (y1 ≥ secret) then
  p := &y2
else
  p := &y3
o := *p
Listing 3. Leaking 1 bit of secret

The cardinality abstraction determines that initially L-equivalent
memories lead to a variety of at most 2 in the pointer p after the
conditional, whereas both y2 and y3 have a variety of 1. Assuming
an aliasing analysis determines that p may point to y2 or y3, the
cardinality analysis determines that variable o has a variety of at
most 2, for initially L-equivalent memories.

We leave these extensions of cardinality analysis —and its abstraction
as dependence analysis— for future work. In the following,
we focus on one particular improvement to both previous analyses in
order to gain more precision. We uncovered this case while deriving
the analyses, by relying on the calculational framework of abstract
interpretation. Indeed, notice that the following holds:

αcrdval ◦ Ol Lx1M ◦ Lgrdx1==x2M ◦ γcrdtr(C) ≤ OlC Lx2M] C
αcrdval ◦ Ol Lx2M ◦ Lgrdx1==x2M ◦ γcrdtr(C) ≤ OlC Lx1M] C

Therefore, we can deduce that:

αcrdtr ◦ Lgrdx1==x2M ◦ γcrdtr(C)
  v] {l ; x#n ∈ C | x ≠ x1, x ≠ x2}
       ∪ {l ; x1# min(n1, n2), l ; x2# min(n1, n2) | l ; x1#n1 ∈ C, l ; x2#n2 ∈ C}
  ≜ Lgrdx1==x2M] C

For other comparison operators, we use as before LgrdbM] C ≜ C.

We can now also improve the dependence abstraction:

αlqonecc ◦ Lgrdx1==x2M] ◦ γlqonecc(D)
  v\ αlqonecc({l ; x#n ∈ γlqonecc(D) | x ≠ x1, x ≠ x2})
       ∪ αlqonecc({l ; x1# min(n1, n2), l ; x2# min(n1, n2) | l ; x1#n1 ∈ γlqonecc(D), l ; x2#n2 ∈ γlqonecc(D)})
  v\ {l ; x ∈ D | x ≠ x1, x ≠ x2} ∪ {l ; x1, l ; x2 | l ; x1 ∈ D or l ; x2 ∈ D}
  ≜ Lgrdx1==x2M\ D

For other comparison operators, we also use LgrdbM\ D ≜ D.

With these new definitions, we can update the abstract semantics
of conditionals and loops, for both dependences and cardinalities, to
leverage the transfer functions Lgrd−M\ and Lgrd−M].

Improved dependences abstract semantics    LcM\ ∈ Dep → Dep
Lif b then c1 else c2M\ D ≜
    let D1 = LgrdbM\ ◦ Lc1M\ D in
    let D2 = Lgrd¬bM\ ◦ Lc2M\ D in
    let W = Mod(if b then c1 else c2) in
    ∪l∈L  π l(D1) t\ π l(D2)              if OlD LbM\ D
    ∪l∈L  {l ; x ∈ π l(D) | x ∉ W}        otherwise
Lwhile b do cM\ D ≜ Lgrd¬bM\ ◦ lfp^{v\}_D Lif b then c1 else c2M\

Improved cardinality abstract semantics    LcM] ∈ Card → Card
Lif b then c1 else c2M] C ≜
    let C1 = LgrdbM] ◦ Lc1M] C in
    let C2 = Lgrd¬bM] ◦ Lc2M] C in
    let W = Mod(if b then c1 else c2) in
    ∪l∈L  π l(C1) t] π l(C2)                      if OlC LbM] C = 1
    ∪l∈L  π l(C1) t]add(W, π l(C)) π l(C2)        otherwise
Lwhile b do cM] C ≜ Lgrd¬bM] ◦ lfp^{v]}_C Lif b then c1 else c2M]
To illustrate the benefits of this improvement, consider the
following example.

while (secret != y3) do {
  x := x + 1;
  secret := secret - 1;
}
o := secret;
Listing 4. Improved precision
The cardinality analysis determines that initially L-equivalent
memories result in x having an infinity of values: the L-cardinality
of x grows until it is widened to ∞. In contrast, cardinalities also
determine that variables o and secret have only 1 value, assuming L-equivalent memories. This is because of the reduction that concerns
variable secret after the while loop, specifically Lgrdsecret==y3M\.
Similarly, the improved dependence analysis also determines that
both variables secret and o are low. These are sound precision gains
for termination-insensitive noninterference; Askarov et al. (2008)
discusses the guarantees provided by this security requirement.
Remarkably, this has been overlooked by many previous analyses.
In fact, this simple improvement makes our dependence analysis
strictly more precise than Amtoft and Banerjee (2004)’s and Hunt
and Sands (2006, 2011)’s analyses and incomparable to the more
recent dependence analysis of Müller et al. (2015).
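To make the reduction on Listing 4 concrete, here is a small Python sketch of the improved equality-guard transfer on Card, under the dict encoding used in the earlier sketches; the function name and its use of a plain min are illustrative assumptions, not the paper's implementation.

def grd_eq(C, x1, x2, levels):
    # [[grd_{x1==x2}]]]: after the test, both variables can take at most
    # the smaller of their two recorded cardinalities.
    out = dict(C)
    for l in levels:
        m = min(C[(l, x1)], C[(l, x2)])
        out[(l, x1)] = m
        out[(l, x2)] = m
    return out

# On Listing 4, applying grd_{secret==y3} after the loop reduces the
# cardinality recorded for secret to the one recorded for y3.
C = {("L", "secret"): float("inf"), ("L", "y3"): 1, ("L", "x"): float("inf")}
print(grd_eq(C, "secret", "y3", ["L"]))   # secret drops to 1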
Combination with intervals. Consider now the following
example inspired from Müller et al. (2015).

if (secret == 0) then {
  x := 0;
  y := y + 1;
}
else {
  x := 0;
}
Listing 5. Example program from Müller et al. (2015)
The analysis of Müller et al. (2015) determines that x is low,
whereas the cardinality abstraction determines that L-equivalent
memories result in at most 2 values for variable x, because it does
not track the actual values of variables. We can combine cardinality
with an interval analysis to be more precise in such cases, through a
reduced product (Cousot and Cousot 1979; Granger 1992; Cortesi
et al. 2013).
Assume a set StInt of interval environments provided with
the usual partial order that we denote by ≤˙ ],Int. Assume also
a Galois connection (αInt, γInt) enabling the derivation of an
interval analysis as an approximation of a standard collecting
semantics defined over P(Trc). We can lift this Galois connection
to P(P(Trc)) to obtain a Galois connection by composing with
(αhpp, γhpp), to obtain (α′, γ′) ≜ (αInt ◦ αhpp, γInt ◦ γhpp) with:
(P(P(Trc)), ⊆) ⇄ (P(Trc), ⊆) ⇄ (StInt, ≤˙ ],Int), via (αhpp, γhpp) and (αInt, γInt) respectively.
A Granger’s reduced product (Granger 1992) for the cardinality
abstraction and an interval analysis may be defined as a pair
of functions toint ∈ Card × StInt → StInt and tocard ∈
Card × StInt → Card verifying the following conditions:
1. soundness:
   γ′(toint(C, ı)) ∩ γcrdtr(C) = γ′(ı) ∩ γcrdtr(C)
   γ′(ı) ∩ γcrdtr(tocard(C, ı)) = γ′(ı) ∩ γcrdtr(C)
2. reduction:
   toint(C, ı) ≤˙ ],Int ı
   tocard(C, ı) v] C
Let us denote by size the function that returns the size of an
interval. One such Granger’s reduced product can be defined as:
tocard ∈ Card × StInt → Card
tocard(C, ı) ≜ {l ; x#n′ | l ; x#n ∈ C and n′ = min(n, size ı(x))}
toint ∈ Card × StInt → StInt
toint(C, ı) ≜ ı
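A minimal Python sketch of the tocard reduction, assuming intervals are (lo, hi) pairs of integers and reusing the dict encoding of Card; the encodings and names are illustrative assumptions of this sketch.

INF = float("inf")

def size(interval):
    lo, hi = interval
    return hi - lo + 1

def tocard(C, intervals):
    # clamp each recorded cardinality by the number of values the interval allows
    return {(l, x): min(n, size(intervals[x])) for (l, x), n in C.items()}

# For Listing 5, intervals give x = [0, 0] in both branches, so the
# cardinality recorded for x is reduced from 2 to 1.
C = {("L", "x"): 2}
print(tocard(C, {"x": (0, 0)}))   # {('L', 'x'): 1}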
Once enhanced with this reduced product, the cardinality analysis determines for the program in Listing 5, that L-equivalent memories result in at most one possible value for variable x.
The dependence analysis can be improved similarly, with a
reduction function defined as follows:
todep ∈ Dep × StInt → Dep
todep(D, ı) ≜ D ∪ {l ; x | l ∈ L and size ı(x) = 1}
Once extended with a reduced product with intervals, the dependence analysis is also able to determine that variable x is low for
the program in Listing 5.
//L ; h#∞, L ; y1#1, L ; y2#1, L ; y3#1
y1 := 1;                 //L ; y1#1
if (h == y1) then {
  skip;                  //L ; h#1, L ; y1#1, L ; y2#1
}
else {
  y2 := 5;               //L ; y1#1, L ; y2#1
  while (y2 != 1) do {
    y2 := y2 - 1;        //L ; y2#1
    y1 := y2;            //L ; y1#1
  }                      //L ; y1#1, L ; y2#1
}
//L ; h#∞, L ; y1#2, L ; y2#2, L ; y3#1
o := y1 * y3;            //L ; o#2
Listing 6. No leakage for variable o
More reduced products. As a final example, let us consider
Listing 6, inspired by Besson et al. (2016, program 7), that we
annotate with the result of the improved cardinality abstraction. To
the best of our knowledge, no existing automated static analysis
determines that variable o is low at the end of this program. Also,
no prior monitor but the one recently presented by Besson et al.
(2016) accepts all executions of this program, assuming attackers
with clearance L can observe variable o.
For initially L-equivalent memories, the cardinality abstraction
determines that variables y1 , y2 and o have at most two values. This
result is precise for y2 , but not precise for y1 and o. As a challenge,
let us see what is required to gain more precision to determine that
both variables y1 and o have at most 1 possible value – they are low.
To tackle this challenge, we need to consider cardinality combined with an interval analysis and a simple relational domain tracking equalities. With the equality y1 = y2 at the exit of the loop, both
y1 and y2 will be reduced to the singleton interval [1, 1]. After the
conditional, we still deduce that y2 has at most 2 different values
thanks to the cardinality abstraction. Using intervals, we deduce that
variable y1 has only one value (singleton interval [1, 1]). And finally,
at the last assignment the cardinalities abstraction determines that
variable o has only one possible value. Similarly, this same combination of analyses can be put to use to let the dependence analysis
reach the desired precision.
9. Related Work
Although noninterference has important applications, for many security requirements it is too strong. That is one motivation for research
in quantitative information flow analysis. In addition, a number of
works investigate weakenings of noninterference and downgrading
policies that are conditioned on events or data values (Askarov and
Sabelfeld 2007; Banerjee et al. 2008; Sabelfeld and Sands 2009;
Mastroeni and Banerjee 2011). Assaf (2015, Chapter 4) proposes
to take the guarantees provided by termination-insensitive noninterference (Askarov et al. 2008) as an explicit definition for security;
this Relative Secrecy requirement is inspired by Volpano and Smith
(2000) who propose a type-system preventing batch-job programs
from leaking secrets in polynomial time. Giacobazzi and Mastroeni
(2004) introduce abstract noninterference, which generalizes noninterference by means of abstract interpretations that specify, for
example, limits on the attacker’s power and the extent of partial
releases (declassification). The survey by Mastroeni (2013) further
generalizes the notion and highlights, among other things, its applicability to a range of underlying semantics. The Galois connections
in this work are at the level of trace sets, not sets of sets. Abstract
noninterference retains the explicit 2-run formulation (Volpano et al.
1996; Sabelfeld and Myers 2003): from two related initial states, two
executions lead to related final states. The relations are defined in
terms of abstract interpretations of the individual states/executions.
Mastroeni and Banerjee (2011) show how to infer indistinguishability relations—modelling attackers’ observations—to find the best
abstract noninterference policy that holds. The inference algorithm
iteratively refines the relation by using counter-examples and abstract domain completion (Cousot and Cousot 1979).
Set-of-sets structures occur in work on abstraction for nondeterministic programs, but in those works one level of sets serves as a powerdomain for nondeterminacy; the properties considered are trace properties (Schmidt 2009, 2012). Hunt and Sands (1991) develop a
properties (Schmidt 2009, 2012). Hunt and Sands (1991) develop a
binding time analysis and a strictness analysis (Hunt 1990) based on
partial equivalence relations: Their concretisations are sets of equivalence classes. Cousot and Cousot (1994) point out that this analysis
could be achieved by a collecting semantics over sets-of-sets, defined simply as a direct image. To the best of our knowledge this
has not been explored further in the literature, except in unpublished
work on which this paper builds (Assaf 2015; Assaf et al. 2016b).
Clarkson et al. (2014); Finkbeiner et al. (2015) extend temporal
logic with means to quantify over multiple traces in order to express
hyperproperties, and provide model checking algorithms for finite
space systems. Agrawal and Bonakdarpour (2016) introduce a
technique for runtime verification of k-safety properties.
The dependence analysis we derive is similar to the information flow logic of Amtoft and Banerjee (2004) and the equivalent
flow-sensitive type system of Hunt and Sands (2006). Amtoft and
Banerjee use the domain P(Trc) and on the basis of a relational
logic they validate a forward analysis. In effect their interpretation
of “independences” is a Galois connection with sets of sets, but
the analysis is not formulated or proved correct as an abstract interpretation. To deal with dynamically allocated state, Amtoft et al.
(2006) augment the relational assertions of information flow logic
with region assertions, which can be computed by abstract interpretation. This is used both to express agreement relations between the
two executions and to approximate modifiable locations. This approach is generalized in Banerjee et al. (2016) to a relational Hoare
logic for object-based programs that encompasses information flow
properties with conditional downgrading (Banerjee et al. 2008).
Müller et al. (2015) give a backwards analysis that infers dependencies and is proved strictly more precise than (Hunt and Sands
2006; Amtoft and Banerjee 2004). This is achieved by a product
construction that facilitates inferring relations between variables in
executions that follow different control paths. Correctness of the
analysis is proved by way of a relational Hoare logic. The variations
of our proposed analyses, in Section 8, rival theirs in terms of
precision—they are incomparable.
Our dependence analysis relies on an approximation of the modifiable variables, to soundly track implicit flows due to control flow,
instead of labelling a program counter variable pc to account for implicit flows (Sabelfeld and Myers 2003). Zanioli and Cortesi (2011)
also derive a similar analysis through a syntactic Galois connection—
a syntactic assignment z := x ∗ y is abstracted into a propositional
formula x → z∧y → z denoting an information flow from variables
x and y to variable z. The soundness of this analysis wrt. a semantic
property such as noninterference requires more justification, though
it is remarkable that the concretisation of propositional formula
yields, roughly speaking, a set of program texts. Zanotti (2002) also
provides an abstract interpretation account of a flow-insensitive type
system (Volpano et al. 1996) enforcing noninterference by guaranteeing a stronger safety property, namely that sensitive locations
should not influence public locations (Boudol 2008).
Kovács et al. (2013) explicitly formulate termination-insensitive
noninterference as an abstract interpretation, namely the “merge
over all twin computations” that makes explicit both the 2-safety
aspect and the need for an analysis to relate some aligned intermediate states. Their analysis, like many others, is based on reducing the
problem to a safety property of product programs. Sousa and Dillig
(2016) implement an algorithm that automates reasoning in a Hoare
logic for k-safety, implicitly constructing product programs; the
performance compares favorably with explicit construction of product programs. Program dependency graphs are another approach to
dependency, shown to be correct for noninterference by Wasserrab
et al. (2009) using slicing and a simulation argument.
Denning (1982, Chap. 5) proposes the first quantitative measure
of a program’s leakage in terms of Shannon entropy (Shannon
1948). Other quantitative metrics emerge in the literature (Braun
et al. 2009; Clarkson et al. 2009; Smith 2009; Dwork 2011; Smith
2011; Alvim et al. 2012). These quantitative security metrics model
different scenarios suitable for different policies. Most existing static
analyses for quantitative information flow leverage existing model
checking tools and abstract domains for safety; they prove that a
program satisfies a quantitative security requirement by proving a
stronger safety property. In contrast, the cardinal abstraction proves a
hyperproperty by inferring a stronger hyperproperty satisfied by the
analysed program. This is key to target quantitative information flow
in multilevel security lattices, beyond the 2-point lattice {L, H}.
Backes et al. (2009) synthesize equivalence classes induced by
outputs over low equivalent memories by relying on software model
checkers, in order to bound various quantitative metrics. Heusser and
Malacaria (2009) also rely on a similar technique to quantify information flow for database queries. Köpf and Rybalchenko (2010) note
that the exact computation of information-theoretic characteristics
is prohibitively hard, and propose to rely on approximation-based
analyses, among which are randomisation techniques and abstract
interpretation ones. They also propose to rely on a self-composed
product program to model a scenario where attackers may refine
their knowledge by influencing the low inputs. Klebanov (2014)
relies on similar techniques to handle programs with low inputs, and
uses polyhedra to synthesize linear constraints (Cousot and Halbwachs 1978) over variables. Mardziel et al. (2013) decide whether
answering a query on sensitive data augments attackers’ knowledge
beyond a certain threshold, by using probabilistic polyhedra.
10. Conclusion
Galois connection-based semantic characterisations of program
analyses provide new perspectives and insights that lead to improved
techniques. We have extended the framework to fully encompass
hyperproperties, through a remarkable form of hypercollecting
semantics that enables calculational derivation of analyses. This
new foundation raises questions too numerous to list here.
One promising direction is to combine dependence and cardinality analysis with existing abstract domains, e.g. through advanced
symbolic methods (Miné 2006b), and partitioning (Handjieva and
Tzolovski 1998; Rival and Mauborgne 2007).
Static analysis of secure information flow has yet to catch up with
recent advances in dynamic information flow monitoring (Besson
et al. 2013; Bello et al. 2015; Hedin et al. 2015; Assaf and Naumann
2016; Besson et al. 2016). We discussed, in Section 8, how existing
static analyses may be of use to statically secure information flow.
It seems likely that hypercollecting semantics will also be of use for
dynamic analyses.
Acknowledgments
Thanks to Anindya Banerjee and the anonymous reviewers for
thoughtful comments and helpful feedback. This work was partially
supported by NSF awards CNS-1228930 and CCF-1649884, ANR
project AnaStaSec ANR-14-CE28-0014 and a CFR CEA Phd
Fellowship.
References
S. Agrawal and B. Bonakdarpour. Runtime verification of k-safety hyperproperties in HyperLTL. In IEEE Computer Security Foundations
Symposium, pages 239–252, 2016.
M. S. Alvim, K. Chatzikokolakis, C. Palamidessi, and G. Smith. Measuring
information leakage using generalized gain functions. In IEEE Computer
Security Foundations Symposium, pages 265–279, 2012.
T. Amtoft and A. Banerjee. Information flow analysis in logical form. In
Static Analysis Symposium, pages 100–115, 2004.
T. Amtoft, S. Bandhakavi, and A. Banerjee. A logic for information
flow in object-oriented programs. In ACM Symposium on Principles
of Programming Languages, pages 91–102, 2006.
A. Askarov and A. Sabelfeld. Gradual release: Unifying declassification,
encryption and key release policies. In IEEE Symposium on Security and
Privacy, 2007.
A. Askarov, S. Hunt, A. Sabelfeld, and D. Sands. Termination-insensitive
noninterference leaks more than just a bit. In European Symposium on
Research in Computer Security, volume 5283 of LNCS, 2008.
M. Assaf. From Qualitative to Quantitative Program Analysis : Permissive
Enforcement of Secure Information Flow. PhD thesis, Université de
Rennes 1, May 2015. https://hal.inria.fr/tel-01184857.
M. Assaf and D. Naumann. Calculational design of information flow
monitors. In IEEE Computer Security Foundations Symposium, pages
210–224, 2016.
M. Assaf, D. Naumann, J. Signoles, É. Totel, and F. Tronel. Hypercollecting
semantics and its application to static analysis of information flow.
Technical report, Apr. 2016a. URL https://arxiv.org/abs/1608.
01654.
M. Assaf, J. Signoles, É. Totel, and F. Tronel. The cardinal abstraction
for quantitative information flow. In Workshop on Foundations of
Computer Security (FCS), June 2016b. https://hal.inria.fr/hal01334604.
M. Backes, B. Köpf, and A. Rybalchenko. Automatic discovery and
quantification of information leaks. In IEEE Symposium on Security
and Privacy, pages 141–153. IEEE, 2009.
A. Banerjee, D. A. Naumann, and S. Rosenberg. Expressive declassification
policies and modular static enforcement. In IEEE Symposium on Security
and Privacy, pages 339–353, 2008.
A. Banerjee, D. A. Naumann, and M. Nikouei. Relational logic with framing
and hypotheses. In 36th IARCS Annual Conference on Foundations of
Software Technology and Theoretical Computer Science, 2016. To appear.
G. Barthe, P. R. D’Argenio, and T. Rezk. Secure information flow by self-composition. In IEEE Computer Security Foundations Workshop, pages
100–114, 2004.
L. Bello, D. Hedin, and A. Sabelfeld. Value sensitivity and observable
abstract values for information flow control. In Logic for Programming,
Artificial Intelligence, and Reasoning (LPAR), pages 63–78, 2015.
N. Benton. Simple relational correctness proofs for static analyses and program transformations. In ACM Symposium on Principles of Programming
Languages, pages 14–25, 2004.
J. Bertrane, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, and
X. Rival. Static analysis and verification of aerospace software by abstract
interpretation. In AIAA Infotech@Aerospace 2010, 2012.
J. Bertrane, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, and
X. Rival. Static analysis and verification of aerospace software by abstract
interpretation. Foundations and Trends in Programming Languages, 2
(2-3):71–190, 2015.
F. Besson, N. Bielova, and T. Jensen. Hybrid information flow monitoring
against web tracking. In IEEE Computer Security Foundations Symposium, pages 240–254. IEEE, 2013.
F. Besson, N. Bielova, and T. Jensen. Hybrid monitoring of attacker
knowledge. In IEEE Computer Security Foundations Symposium, pages
225–238, 2016.
G. Boudol. Secure information flow as a safety property. In Formal Aspects
in Security and Trust, pages 20–34, 2008.
F. Bourdoncle. Abstract interpretation by dynamic partitioning. Journal of
Functional Programming, 2(04):407–435, 1992.
C. Braun, K. Chatzikokolakis, and C. Palamidessi. Quantitative notions of
leakage for one-try attacks. In Mathematical Foundations of Programming Semantics (MFPS), volume 249, pages 75–91, 2009.
D. Cachera and D. Pichardie. A certified denotational abstract interpreter. In
Interactive Theorem Proving (ITP), pages 9–24, 2010.
M. R. Clarkson and F. B. Schneider. Hyperproperties. In IEEE Computer
Security Foundations Symposium, pages 51–65, 2008.
M. R. Clarkson and F. B. Schneider. Hyperproperties. Journal of Computer
Security, 18(6):1157–1210, 2010.
M. R. Clarkson, A. C. Myers, and F. B. Schneider. Quantifying information
flow with beliefs. Journal of Computer Security, 17:655–701, 2009.
M. R. Clarkson, B. Finkbeiner, M. Koleini, K. K. Micinski, M. N. Rabe,
and C. Sánchez. Temporal logics for hyperproperties. In Principles of
Security and Trust, volume 8414 of LNCS, pages 265–284, 2014.
E. Cohen. Information transmission in computational systems. In Proceedings of the sixth ACM Symposium on Operating Systems Principles, pages
133–139, 1977.
A. Cortesi and M. Zanioli. Widening and narrowing operators for abstract
interpretation. Computer Languages, Systems & Structures, pages 24–42,
2011.
A. Cortesi, G. Costantini, and P. Ferrara. A survey on product operators
in abstract interpretation. In Semantics, Abstract Interpretation, and
Reasoning about Programs: Essays Dedicated to David A. Schmidt on the
Occasion of his Sixtieth Birthday, volume 129 of EPTCS, pages 325–336,
2013.
P. Cousot. The calculational design of a generic abstract interpreter. In
M. Broy and R. Steinbrüggen, editors, Calculational System Design,
volume 173, pages 421–506. NATO ASI Series F. IOS Press, Amsterdam,
1999.
P. Cousot. Constructive design of a hierarchy of semantics of a transition
system by abstract interpretation. Theoretical Computer Science, 277
(1-2):47–103, 2002.
P. Cousot and R. Cousot. Abstract interpretation: a unified lattice model for
static analysis of programs by construction or approximation of fixpoints.
In ACM Symposium on Principles of Programming Languages, pages
238–252, 1977.
P. Cousot and R. Cousot. Systematic design of program analysis frameworks.
In ACM Symposium on Principles of Programming Languages, pages
269–282, 1979.
P. Cousot and R. Cousot. Comparing the Galois connection and widening/narrowing approaches to abstract interpretation. In Programming Language
Implementation and Logic Programming (PLILP), pages 269–295, 1992.
P. Cousot and R. Cousot. Higher-order abstract interpretation (and application to comportment analysis generalizing strictness, termination,
projection and per analysis of functional languages). In International
Conference on Computer Languages (ICCL), pages 95–112, 1994.
P. Cousot and N. Halbwachs. Automatic discovery of linear restraints
among variables of a program. In ACM Symposium on Principles of
Programming Languages, pages 84–96, 1978.
Á. Darvas, R. Hähnle, and D. Sands. A theorem proving approach to analysis
of secure information flow. In Security in Pervasive Computing, pages
193–209, 2005.
D. E. R. Denning. Cryptography and Data Security. Addison-Wesley
Longman Publishing Co., Inc., 1982.
D. E. R. Denning and P. J. Denning. Certification of programs for secure
information flow. Communications of ACM, 20(7):504–513, 1977.
G. Doychev, D. Feld, B. Köpf, L. Mauborgne, and J. Reineke. Cacheaudit:
A tool for the static analysis of cache side channels. In USENIX Security
Symposium, pages 431–446, 2013.
C. Dwork. A firm foundation for private data analysis. Communications of
ACM, pages 86–95, 2011.
B. Finkbeiner, M. N. Rabe, and C. Sánchez. Algorithms for model checking
HyperLTL and HyperCTL*. In Computer Aided Verification, volume
9206 of LNCS, pages 30–48, 2015.
R. Giacobazzi and I. Mastroeni. Abstract non-interference: parameterizing
non-interference by abstract interpretation. In ACM Symposium on
Principles of Programming Languages, pages 186–197, 2004.
J. A. Goguen and J. Meseguer. Security policies and security models. In
IEEE Symposium on Security and Privacy, pages 11–20, 1982.
P. Granger. Improving the results of static analyses of programs by local decreasing iteration. In Foundations of Software Technology and Theoretical
Computer Science, volume 652, pages 68–79, 1992.
M. Handjieva and S. Tzolovski. Refining static analyses by trace-based partitioning using control flow. In International Static Analysis Symposium,
1998.
D. Hedin, L. Bello, and A. Sabelfeld. Value-sensitive hybrid information
flow control for a JavaScript-Like language. In IEEE Computer Security
Foundations Symposium, pages 351–365, 2015.
J. Heusser and P. Malacaria. Applied quantitative information flow and
statistical databases. In Formal Aspects in Security and Trust, pages
96–110, 2009.
S. Hunt. PERs generalize projections for strictness analysis (extended
abstract). In Proceedings of the Third Annual Glasgow Workshop on
Functional Programming, 1990.
S. Hunt and D. Sands. Binding time analysis: A new PERspective. In
Proceedings of the Symposium on Partial Evaluation and Semantics-Based Program Manipulation, PEPM’91, Yale University, New Haven,
Connecticut, USA, June 17-19, 1991, pages 154–165, 1991.
S. Hunt and D. Sands. On flow-sensitive security types. In ACM Symposium
on Principles of Programming Languages, pages 79–90, 2006.
S. Hunt and D. Sands. From exponential to polynomial-time security typing
via principal types. In ACM Workshop on Programming Languages and
Analysis for Security, pages 297–316, 2011.
V. Klebanov. Precise quantitative information flow analysis - a symbolic
approach. Theoretical Computer Science, 538:124–139, 2014.
B. Köpf and A. Rybalchenko. Approximation and randomization for
quantitative information-flow analysis. In IEEE Computer Security
Foundations Symposium, pages 3–14, 2010.
B. Köpf and A. Rybalchenko. Automation of quantitative information-flow
analysis. In Formal Methods for Dynamical Systems - 13th International
School on Formal Methods for the Design of Computer, Communication,
and Software Systems, volume 7938 of LNCS, pages 1–28, 2013.
M. Kovács, H. Seidl, and B. Finkbeiner. Relational abstract interpretation for
the verification of 2-hypersafety properties. In ACM SIGSAC conference
on Computer and Communications Security, pages 211–222, 2013.
P. Mardziel, S. Magill, M. Hicks, and M. Srivatsa. Dynamic enforcement
of knowledge-based security policies. In IEEE Computer Security
Foundations Symposium, pages 114–128. IEEE, 2011.
P. Mardziel, S. Magill, M. Hicks, and M. Srivatsa. Dynamic enforcement of
knowledge-based security policies using probabilistic abstract interpretation. Journal of Computer Security, 21(4):463–532, 2013.
I. Mastroeni. Abstract interpretation-based approaches to security - A
survey on abstract non-interference and its challenging applications.
In Semantics, Abstract Interpretation, and Reasoning about Programs:
Essays Dedicated to David A. Schmidt on the Occasion of his Sixtieth
Birthday, volume 129 of EPTCS, pages 41–65, 2013.
I. Mastroeni and A. Banerjee. Modelling declassification policies using
abstract domain completeness. Mathematical Structures in Computer
Science, 21(06):1253–1299, 2011.
J. McLean. A general theory of composition for trace sets closed under
selective interleaving functions. In IEEE Symposium on Security and
Privacy, pages 79–93, 1994.
A. Miné. The octagon abstract domain. Higher-order and symbolic
computation, 19(1):31–100, 2006a.
A. Miné. Symbolic methods to enhance the precision of numerical abstract
domains. In Verification, Model Checking, and Abstract Interpretation,
pages 348–363, 2006b.
C. Müller, M. Kovács, and H. Seidl. An analysis of universal information
flow based on self-composition. In IEEE Computer Security Foundations
Symposium, pages 380–393, 2015.
F. Nielson, H. R. Nielson, and C. Hankin. Principles of Program Analysis.
Springer, 1999.
A. Rényi. On measures of entropy and information. In the Fourth Berkeley
Symposium on Mathematical Statistics and Probability, 1961.
X. Rival and L. Mauborgne. The trace partitioning abstract domain. ACM
Transactions on Programming Languages and Systems, 29(5):26, 2007.
J. Rushby. Security requirements specifications: How and what. In Symposium on Requirements Engineering for Information Security (SREIS),
2001.
A. Sabelfeld and A. C. Myers. Language-based information-flow security.
IEEE Journal on Selected Areas in Communications, 21(1):5–19, 2003.
A. Sabelfeld and D. Sands. Declassification: Dimensions and principles.
Journal of Computer Security, 17(5), 2009.
D. A. Schmidt. Abstract interpretation from a topological perspective. In
Static Analysis, 16th International Symposium, volume 5673 of LNCS,
pages 293–308, 2009.
D. A. Schmidt. Inverse-limit and topological aspects of abstract interpretation. Theoretical Computer Science, 430:23–42, 2012.
D. Schoepe, M. Balliu, B. C. Pierce, and A. Sabelfeld. Explicit secrecy: A
policy for taint tracking. In IEEE European Symposium on Security and
Privacy, pages 15–30, 2016.
C. E. Shannon. A mathematical theory of communication. The Bell System
Technical Journal, 27:379–423, 1948.
G. Smith. On the foundations of quantitative information flow. In International Conference on Foundations of Software Science and Computational
Structures, pages 288–302, 2009.
G. Smith. Quantifying information flow using min-entropy. In Quantitative
Evaluation of Systems (QEST), 2011 Eighth International Conference on,
pages 159–167. IEEE, 2011.
M. Sousa and I. Dillig. Cartesian Hoare logic for verifying k-safety
properties. In ACM Conference on Programming Language Design
and Implementation, pages 57–69, 2016.
T. Terauchi and A. Aiken. Secure information flow as a safety problem. In
Static Analysis Symposium, pages 352–367, 2005.
D. Volpano and G. Smith. Eliminating covert flows with minimum typings.
In IEEE Computer Security Foundations Workshop, pages 156–168, 1997.
D. Volpano and G. Smith. Verifying secrets and relative secrecy. In ACM
Symposium on Principles of Programming Languages, pages 268–276,
2000.
D. Volpano, C. Irvine, and G. Smith. A sound type system for secure flow
analysis. Journal of Computer Security, 4(2-3):167–187, 1996.
D. M. Volpano. Safety versus secrecy. In Static Analysis Symposium, pages
303–311, 1999.
D. Wasserrab, D. Lohner, and G. Snelting. On PDG-based noninterference
and its modular proof. In ACM Workshop on Programming Languages
and Analysis for Security, pages 31–44, 2009.
G. Winskel. The Formal Semantics of Programming Languages: an
Introduction. Cambridge, 1993.
H. Yasuoka and T. Terauchi. On bounding problems of quantitative
information flow. Journal of Computer Security, 19(6):1029–1082, 2011.
A. Zakinthinos and S. Lerner. A general theory of security properties. In
IEEE Symposium on Security and Privacy, pages 94–102, 1997.
M. Zanioli and A. Cortesi. Information leakage analysis by abstract
interpretation. In SOFSEM 2011: Theory and Practice of Computer
Science, pages 545–557, 2011.
M. Zanotti. Security typings by abstract interpretation. In Static Analysis
Symposium, volume 2477, pages 360–375, 2002.
Appendix A.
Symbols
Val : a set of integers
∞ : an infinite cardinal number
v ∈ Val : an integer
V ∈ P(Val) : a set of values
V ∈ P(P(Val)) : a set of sets of values
Trc : the set of (relational) traces
t ∈ Trc : a trace
T ∈ P(Trc) : a set of traces
T ∈ P(P(Trc)) : a set of sets of traces
States : the set of states
σ ∈ States : a state
Σ ∈ P(States) : a set of states
S ∈ P(P(States)) : a set of sets of states
States* ≜ ∪_{n∈N} States^n : the set of finite sequences of states
VarP : the set of variables of a program P
L : a multilevel security lattice
l ∈ L : a security level
Γ ∈ VarP → L : an initial typing context
C ⇄ A via (α, γ) : a Galois connection
JcK ∈ Trc → Trc : denotational semantics of commands
JeK ∈ Trc → Val : value of e in the final state
JeKpre ∈ Trc → Val : value of e in the initial state
⦃c⦄ ∈ P(Trc) → P(Trc) : collecting semantics
LcM ∈ P(P(Trc)) → P(P(Trc)) : hypercollecting semantics
l ; x : atomic dependence ("agreement up to security level l leads to agreement on x")
D ∈ Dep : a set of atomic dependency constraints
l ; x#n : atomic cardinality ("agreement up to security level l leads to an l-cardinality of n values for x")
C ∈ Card : a valid set of atomic cardinality constraints
Appendix B.
Background: Collecting Semantics, Galois Connections
Lemma 1 (Element-wise abstraction). Let elt ∈ C → A be a function between sets. Let α_elt(C) ≜ {elt(c) | c ∈ C} and γ_elt(A) ≜ {c | elt(c) ∈ A}. Then (P(C), ⊆) ⇄ (P(A), ⊆) via (α_elt, γ_elt) is a Galois connection.
Proof.
Let C ∈ P(C) and A ∈ P(A).
αelt (C) ⊆ A ⇐⇒ {elt(c) | c ∈ C} ⊆ A
⇐⇒ ∀c ∈ C, elt(c) ∈ A
⇐⇒ C ⊆ {c | elt(c) ∈ A}
⇐⇒ C ⊆ γelt (A)
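As a quick sanity check of Lemma 1, the adjunction α_elt(C) ⊆ A ⟺ C ⊆ γ_elt(A) can be tested exhaustively on small finite sets. The following Python sketch is our own illustration (the universes, the parity function and all names are chosen for the example only):

from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def alpha_elt(elt, C):
    # direct image: {elt(c) | c in C}
    return {elt(c) for c in C}

def gamma_elt(elt, A, universe_C):
    # preimage: {c | elt(c) in A}
    return {c for c in universe_C if elt(c) in A}

# tiny example: elt maps integers to their parity
universe_C = {0, 1, 2, 3}
universe_A = {0, 1}
elt = lambda c: c % 2

for C in powerset(universe_C):
    for A in powerset(universe_A):
        # Galois condition of Lemma 1
        assert (alpha_elt(elt, C) <= A) == (C <= gamma_elt(elt, A, universe_C))
print("adjunction holds on all", len(powerset(universe_C)) * len(powerset(universe_A)), "pairs")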
Appendix C.
Domains and Galois Connections for Hyperproperties
Lemma 2. Let C be a set. Define α_hpp(C) ≜ ∪_{C∈C} C and γ_hpp(C) ≜ P(C). These form a Galois connection:
(P(P(C)), ⊆) ⇄ (P(C), ⊆) via (α_hpp, γ_hpp)
Proof.
This is a special case of the supremus abstraction (Cousot 2002, p.52) that is defined in Lemma 3. Indeed, we can instantiate a supremus abstraction by taking hpp ≜ id (∈ P(C) → P(C)). We thus obtain a Galois connection P(P(C)) ⇄ P(C) via (α_hpp, γ_hpp), with α_hpp(C) = ∪_{C∈C} C and γ_hpp(C) = {C′ ∈ P(C) | C′ ⊆ C} (= P(C)). Notice here that the powerset of a set C, provided with set inclusion as a partial order, is a complete lattice as required by the supremus abstraction.
Lemma 3 (Supremus abstraction). Let elt ∈ C → A be a function from a set C, with codomain forming a complete lattice (A, ⊑). Let α_elt(C) ≜ ⊔_{c∈C} elt(c) and γ_elt(a) ≜ {c ∈ C | elt(c) ⊑ a}. Then
(P(C), ⊆) ⇄ (A, ⊑) via (α_elt, γ_elt)
Proof.
Notice that the assumption that the lattice (A, ⊑, ⊔) is complete guarantees that α_elt(C) is well-defined: the set {elt(c) | c ∈ C} does have a supremum.
Let C ∈ P(C) and a ∈ A. The proof goes by definitions.
α_elt(C) ⊑ a ⟺ ⊔_{c∈C} elt(c) ⊑ a
⟺ ∀c ∈ C, elt(c) ⊑ a
⟺ C ⊆ {c ∈ C | elt(c) ⊑ a}
⟺ C ⊆ γ_elt(a)
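The supremus abstraction can also be illustrated with the complete lattice ([0, ∞], ≤) that underlies the cardinality domain used later: α takes the supremum of the element images, γ collects every element whose image is below the bound. A small Python sketch, our own illustration with math.inf standing in for ∞ and string length as a stand-in for the element image:

import math

def alpha_sup(elt, C):
    # join of element images; the supremum of the empty set is the bottom element 0
    return max((elt(c) for c in C), default=0)

def gamma_sup(elt, a, universe):
    return {c for c in universe if elt(c) <= a}

universe = {"ab", "cde", "f", "ghij"}
elt = len                      # image of an element: here simply its length

C = {"ab", "f"}
a = 3
# Galois condition of Lemma 3: alpha(C) <= a  iff  C ⊆ gamma(a)
assert (alpha_sup(elt, C) <= a) == (C <= gamma_sup(elt, a, universe))
print(alpha_sup(elt, C), gamma_sup(elt, a, universe), gamma_sup(elt, math.inf, universe))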
Appendix D.
Hypercollecting Semantics
Before proving the main result of this section in Theorem 1, we will first prove Lemma 8.
Both proofs of Lemma 8 and Theorem 1 are by structural induction. Most cases follow from definitions. The important cases are for while
loops and the proof technique is a classical one when using a denotational semantics. E.g., in order to prove equality of two denotations
characterised as a fixpoint, it suffices to introduce two sequences that converge towards the fixpoint characterisations and prove equality of
these sequences. This ensures that their limits – the denotations characterised as a fixpoint – are equal.
Let us now prove Lemma 8 – this lemma is used later in the proof case of while loops for Theorem 1.
Lemma 8 . For all commands c, for all sets of traces T ∈ P(Trc), the standard collecting semantics (Section 2) can be expressed as the direct
image of the denotational semantics :
⦃c⦄T = {JcKt ∈ Trc | t ∈ T }
Proof.
The proof proceeds by structural induction on commands. The most important case is the case of while loops.
1 – Case skip:
⦃skip⦄T = T = {JskipKt | t ∈ T }
2 – Case x := e:
⦃x := e⦄T = {Jx := eKt | t ∈ T }
3 – Case c1 ; c2 :
⦃c1 ; c2 ⦄T = ⦃c2 ⦄ ◦ ⦃c1 ⦄T
= HBy induction on c1 I
⦃c2 ⦄({Jc1 Kt ∈ Trc | t ∈ T })
= HBy induction on c2 I
{Jc2 K ◦ Jc1 Kt ∈ Trc | t ∈ T }
= {Jc1 ; c2 Kt ∈ Trc | t ∈ T }
4 – Case if (b) then c1 else c2 :
⦃if (b) then c1 else c2 ⦄T = ⦃c1 ⦄ ◦ ⦃ grdb ⦄T ∪ ⦃c2 ⦄ ◦ ⦃ grd¬b ⦄T
= HBy induction hypothesis on both c1 and c2 I
{Jc1 Kt ∈ Trc | t ∈ ⦃ grdb ⦄T } ∪ {Jc2 Kt ∈ Trc | t ∈ ⦃ grd¬b ⦄T }
= {Jif (b) then c1 else c2 Kt ∈ Trc | t ∈ T }
5 – Case while (b) do c:
5.1 – Let us first prove the following intermediate result:
∀T ∈ P(Trc), {Jwhile (b) do cKt ∈ Trc | t ∈ T} = ⦃grd_¬b⦄( lfp⊆_∅ λX. T ∪ ⦃c⦄ ∘ ⦃grd_b⦄X ).
Indeed, let the sequence (x_n^T)_{n≥0} be defined as
x_n^T ≜ {F^(n)(⊥)(t) ∈ Trc | t ∈ T}
with F defined as
F(w)(t) ≜ t          if JbKt = 0
F(w)(t) ≜ w(JcKt)    otherwise
Notice that for all t ∈ T, the sequence (F^(n)(⊥)(t))_{n≥0} converges and its limit is the evaluation of the while loop in the state t (i.e. Jwhile b do cKt = F^(∞)(⊥)(t)), by definition of the denotational semantics of loops; thus, the sequence x_n^T converges to {Jwhile b do cKt ∈ Trc | t ∈ T}.
Let also the sequences (y_n^T)_{n≥0} and (g_n^T)_{n≥0} be defined as:
g_0^T ≜ ∅        g_{n+1}^T ≜ T ∪ ⦃c⦄ ∘ ⦃grd_b⦄ g_n^T        y_n^T ≜ ⦃grd_¬b⦄ g_n^T
Notice that for all T ∈ P(Trc), the sequence g_n^T converges to lfp⊆_∅ λX. T ∪ ⦃c⦄ ∘ ⦃grd_b⦄X (or, written otherwise, lfp⊆_T λX. ⦃c⦄ ∘ ⦃grd_b⦄X). This also means that the sequence y_n^T converges to ⦃grd_¬b⦄( lfp⊆_∅ λX. T ∪ ⦃c⦄ ∘ ⦃grd_b⦄X ).
Thus, it suffices to prove that:
∀T ∈ P(Trc), ∀n ∈ N, x_n^T = y_n^T.
The proof proceeds by induction on n.
- x_0^T = ∅ = y_0^T
- Let n ∈ N such that ∀T ∈ P(Trc), x_n^T = y_n^T. Then:
x_{n+1}^T = {F^(n+1)(⊥)(t) ∈ Trc | t ∈ T}
= ⦃grd_¬b⦄T ∪ {F^(n)(⊥)(JcKt) ∈ Trc | t ∈ ⦃grd_b⦄T}
= ⦃grd_¬b⦄T ∪ {F^(n)(⊥)(t) ∈ Trc | t ∈ ⦃c⦄ ∘ ⦃grd_b⦄T}
= ⦃grd_¬b⦄T ∪ x_n^{⦃c⦄∘⦃grd_b⦄T}
= (by induction hypothesis)
⦃grd_¬b⦄T ∪ y_n^{⦃c⦄∘⦃grd_b⦄T}
= (by definition of y_n^{⦃c⦄∘⦃grd_b⦄T})
⦃grd_¬b⦄T ∪ ⦃grd_¬b⦄ g_n^{⦃c⦄∘⦃grd_b⦄T}
= (because for all T, g_n^T = ∪_{0≤k≤n−1} (⦃c⦄ ∘ ⦃grd_b⦄)^(k)(T))
⦃grd_¬b⦄ g_{n+1}^T
= y_{n+1}^T
5.2 – Let us now prove that:
lfp⊆_∅ λX. T ∪ ⦃c⦄ ∘ ⦃grd_b⦄X = lfp⊆_∅ λX. T ∪ ⦃if b then c else skip⦄X
Indeed, let the sequence (f_n^T)_{n≥0} be defined as:
f_0^T ≜ ∅        f_{n+1}^T ≜ T ∪ ⦃if b then c else skip⦄ f_n^T
Then, by induction on n ∈ N, it holds that f_n^T = g_n^T:
- f_0^T = g_0^T = ∅.
- Let n ∈ N such that f_n^T = g_n^T. Then:
g_{n+1}^T = T ∪ ⦃c⦄ ∘ ⦃grd_b⦄ g_n^T
= (since ⦃grd_¬b⦄ g_n^T ⊆ g_n^T ⊆ g_{n+1}^T)
T ∪ ⦃c⦄ ∘ ⦃grd_b⦄ g_n^T ∪ ⦃grd_¬b⦄ g_n^T
= T ∪ ⦃if b then c else skip⦄ g_n^T
= (by induction hypothesis)
T ∪ ⦃if b then c else skip⦄ f_n^T
= f_{n+1}^T
This concludes our induction on n. Thus, by passing to the limit of both sequences, we obtain the desired result.
5.3 – Finally, we can conclude:
⦃while (b) do c⦄T = ⦃grd_¬b⦄ lfp⊆_T ⦃if b then c else skip⦄
= (by intermediate result 5.2)
⦃grd_¬b⦄ lfp⊆_T ⦃c⦄ ∘ ⦃grd_b⦄
= (by intermediate result 5.1)
{Jwhile (b) do cKt ∈ Trc | t ∈ T}
We conclude this proof by structural induction, and Cases 1 to 5.
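The two characterisations used in step 5.1 can be checked concretely on a finite toy instance: running the loop on each state individually and iterating λX. T ∪ ⦃c⦄ ∘ ⦃grd_b⦄X on sets of states yield the same result. The following Python sketch is our own toy instance (states are plain integers and the loop is "while x < 3 do x := x + 1"; it is not taken from the paper):

# toy loop: while (x < 3) { x := x + 1 }, states are integers
b = lambda x: x < 3
c = lambda x: x + 1

def run_while(x):
    # denotational semantics Jwhile b do cK applied to a single state
    while b(x):
        x = c(x)
    return x

def collect_while(T):
    # lfp characterisation:  grd_not_b ( lfp_X  T ∪ c(grd_b(X)) )
    X = set()
    while True:
        new = T | {c(x) for x in X if b(x)}
        if new == X:
            break
        X = new
    return {x for x in X if not b(x)}

T = {0, 2, 5}
assert collect_while(T) == {run_while(x) for x in T}
print(collect_while(T))   # {3, 5}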
Theorem 1 . For all c and all T ∈ P(Trc), ⦃c⦄T is in L c M{T }.
Proof.
We prove the theorem as a corollary of this more general result:
∀T ∈ P(P(Trc)), {⦃c⦄T | T ∈ T} ⊆ LcMT
This proof proceeds by structural induction on commands. The most important case is the one for while loops; the other ones follow from
definition.
1 – Case skip:
LskipMT = {⦃skip⦄T | T ∈ T} ⊇ {⦃skip⦄T | T ∈ T}
2 – Case x := e:
Lx := eMT = {⦃x := e⦄T | T ∈ T} ⊇ {⦃x := e⦄T | T ∈ T}
3 – Case c1 ; c2 :
Lc1 ; c2 MT = Lc2 M ◦ Lc1 MT
⊇ Hby structural induction on c1 , and monotonicity of the hypercollecting semantics LcMI
Lc2 M({⦃c1 ⦄T | T ∈ T})
⊇ Hby structural induction on c2 I
{⦃c2 ⦄T 0 | T 0 ∈ {⦃c1 ⦄T | T ∈ T}}
= {⦃c2 ⦄ ◦ ⦃c1 ⦄T | T ∈ T}
= {⦃c1 ; c2 ⦄T | T ∈ T}
4 – Case if (b) then c1 else c2 :
Lif (b) then c1 else c2 MT = {⦃if (b) then c1 else c2 ⦄T | T ∈ T}
⊇ {⦃if (b) then c1 else c2 ⦄T | T ∈ T}
5 – Case while (b) do c:
Let (X_n^T)_{n∈N} be the sequence defined as
X_n^T ≜ { {F^(n)(⊥)(t) ∈ Trc | t ∈ T} | T ∈ T } for n ≥ 1,  and X_0^T = ∅,
where
F(w)(t) ≜ t          if JbKt = 0
F(w)(t) ≜ w(JcKt)    otherwise
Notice that the limit of the sequence x_n^T ≜ {F^(n)(⊥)(t) ∈ Trc | t ∈ T} is the ordinary collecting semantics ⦃while (b) do c⦄T of the while loop, as proved in Lemma 8. Thus, the sequence X_n^T converges to {⦃while (b) do c⦄T | T ∈ T}.
Let also (Y_n^T)_{n∈N} and (G_n^T)_{n∈N} be the sequences defined as
G_0^T ≜ ∅        G_{n+1}^T ≜ T ∪ Lif (b) then c else skipM G_n^T for n ≥ 0        Y_n^T ≜ Lgrd_¬bM G_n^T
Notice that the limit of Y_n^T is the hypercollecting semantics of the while loop (Lwhile (b) do cMT).
Thus, it suffices to prove that the sequences X_n^T and Y_n^T verify ∀T ∈ P(P(Trc)), ∀n ∈ N, X_{n+1}^T ⊆ Y_{n+1}^T; passing to the limit in this inequality leads to the required result ∀T ∈ P(P(Trc)), {⦃while (b) do c⦄T | T ∈ T} ⊆ Lwhile (b) do cMT.
We prove the following more precise characterisation of the sequences X_n^T and Y_n^T (this implies X_{n+1}^T ⊆ Y_{n+1}^T):
∀n ∈ N, ∀T ∈ P(P(Trc)), Y_{n+1}^T = Y_n^T ∪ X_{n+1}^T
The remainder of this proof proceeds by induction on n ∈ N.
- Case n = 0:
Y_1^T = Lgrd_¬bM G_1^T
= Lgrd_¬bM T
= {⦃grd_¬b⦄T | T ∈ T}
= { {F^(1)(⊥)(t) ∈ Trc | t ∈ T} | T ∈ T }
= (since Y_0^T = ∅ and by definition of X_1^T)
Y_0^T ∪ X_1^T
- Let n ∈ N such that Y_{n+1}^T = Y_n^T ∪ X_{n+1}^T. Then:
Y_{n+2}^T = Lgrd_¬bM G_{n+2}^T
= Lgrd_¬bM ( T ∪ Lif (b) then c else skipM G_{n+1}^T )
= Lgrd_¬bM T ∪ Lgrd_¬bM ∘ Lif (b) then c else skipM G_{n+1}^T
= (since ∀n ∈ N, G_{n+1}^T = ∪_{0≤k≤n} Lif (b) then c else skipM^(k) T)
Lgrd_¬bM T ∪ Lgrd_¬bM ( ∪_{1≤k≤n+1} Lif (b) then c else skipM^(k) T )
= ∪_{0≤k≤n+1} Lgrd_¬bM ∘ Lif (b) then c else skipM^(k) T
= ∪_{0≤k≤n} Lgrd_¬bM ∘ Lif (b) then c else skipM^(k) T ∪ Lgrd_¬bM ∘ Lif (b) then c else skipM^(n+1) T
= Lgrd_¬bM ( ∪_{0≤k≤n} Lif (b) then c else skipM^(k) T ) ∪ Lgrd_¬bM ∘ Lif (b) then c else skipM^(n+1) T
= Lgrd_¬bM G_{n+1}^T ∪ Lgrd_¬bM ∘ Lif (b) then c else skipM^(n+1) T
= Y_{n+1}^T ∪ Lgrd_¬bM ∘ Lif (b) then c else skipM^(n+1) T
= Y_{n+1}^T ∪ {⦃grd_¬b⦄ ∘ ⦃if (b) then c else skip⦄^(n+1) T | T ∈ T}
= (the set ⦃grd_¬b⦄ ∘ ⦃if (b) then c else skip⦄^(n+1) T is the set of traces exiting the loop body after n+1 or fewer iterations: it is equal to {F^(n+2)(⊥)(t) | t ∈ T} by definition of F)
Y_{n+1}^T ∪ { {F^(n+2)(⊥)(t) | t ∈ T} | T ∈ T }
= Y_{n+1}^T ∪ X_{n+2}^T
This concludes our induction on n.
We conclude this proof by structural induction, and Cases 1 to 5.
Appendix E.
Dependences
Lemma 9. (α_deptr, γ_deptr) yields a Galois connection: (P(P(Trc)), ⊆) ⇄ (Dep, ⊑♮) via (α_deptr, γ_deptr).
Proof.
The lattice Dep is finite, therefore complete. Thus, this is a Galois connection since it is an instance of the supremus abstraction presented in Lemma 3.
The same reasoning applies for (P(P(Val)), ⊆) ⇄ ({tt, ff}, ⇐) via (α_agree, γ_agree).
The proofs of both Lemma 4 and Theorem 2 are deferred to Appendix G: as we explain after Lemmas 6 and 7, we derive the dependence
abstract semantics as an approximation of the cardinality semantics.
Appendix F.
Cardinality Abstraction
Lemma 10. (α_crdtr, γ_crdtr) yields a Galois connection: (P(P(Trc)), ⊆) ⇄ (Card, ⊑♯) via (α_crdtr, γ_crdtr).
Proof. The lattice Card is complete, since all subsets of Card have an infimum and a supremum wrt. the partial order ⊑♯, notably because the
closed interval [0, ∞] is complete wrt. the partial order ≤. Thus, this is an instance of the supremus abstraction, Lemma 3.
Lemma 5. O_C^l LeM♯ is sound:
∀e, ∀l,   α_crdval ∘ O^l LeM ∘ γ_crdtr ≤̇ O_C^l LeM♯.
Proof.
The derivation proof is by structural induction on expressions. In each case we start from the left side and derive the definition on the right
side. The interesting case is for binary arithmetic operations.
1 – Case: integer literal n
Let l ∈ L, and C ∈ Card.
α_crdval ∘ O^l LnM ∘ γ_crdtr(C)
= α_crdval ∘ O^l LnM {T | crdtr(T) ⊑♯ C}
= α_crdval ( ∪_{T ∈ γ_crdtr(C)} {⦃n⦄R | R ⊆ T and R ⊨_Γ l} )
= α_crdval ( {⦃n⦄R | R ⊆ T, R ⊨_Γ l and T ∈ γ_crdtr(C)} )
≤ (NB: precision loss for simplicity of presentation, when C is bottom)
α_crdval({{n}})
= max_{V ∈ {{n}}} crdval(V)
= 1
≜ O_C^l LnM♯ C
Here we use ≜ to indicate that O_C^l LnM♯ C is being defined.
2 – Case: variable id
Let l ∈ L, and C ∈ Card.
α_crdval ∘ O^l LidM ∘ γ_crdtr(C)
= α_crdval ( ∪_{T ∈ γ_crdtr(C)} O^l ⦃id⦄T )
= (α_crdval preserves joins)
max_{T ∈ γ_crdtr(C)} α_crdval (O^l ⦃id⦄T)
= n   where l ; id#n ∈ α_crdtr ∘ γ_crdtr(C)
≤ (α_crdtr ∘ γ_crdtr is reductive: α_crdtr ∘ γ_crdtr(C) ⊑♯ C)
n   where l ; id#n ∈ C
≜ O_C^l LidM♯ C
3 – Case: e1 ⊕ e2
Let l ∈ L, and C ∈ Card.
α_crdval ∘ O^l Le1 ⊕ e2M ∘ γ_crdtr(C)
= α_crdval ( {⦃e1 ⊕ e2⦄R | R ⊆ T, R ⊨_Γ l and T ∈ γ_crdtr(C)} )
≤ α_crdval ( {⦃e1⦄R | R ⊆ T, R ⊨_Γ l and T ∈ γ_crdtr(C)} ) × α_crdval ( {⦃e2⦄R | R ⊆ T, R ⊨_Γ l and T ∈ γ_crdtr(C)} )
= ( α_crdval ∘ O^l Le1M ∘ γ_crdtr(C) ) × ( α_crdval ∘ O^l Le2M ∘ γ_crdtr(C) )
≤ (by induction hypothesis)
O_C^l Le1M♯ C × O_C^l Le2M♯ C
≜ O_C^l Le1 ⊕ e2M♯ C
4 – Case: e1 cmp e2
This derivation is similar to the case e1 ⊕ e2, with the difference that booleans evaluate to at most 2 different values, 1 or 0.
α_crdval ∘ O^l Le1 cmp e2M ∘ γ_crdtr(C)
≤ min( 2, O_C^l Le1M♯ C × O_C^l Le2M♯ C )
≜ O_C^l Le1 cmp e2M♯ C
5 – Case : conclusion
We conclude by structural induction on expressions, and cases 1 to 4.
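The abstract counting operator O_C^l L·M♯ derived above is easy to implement directly: a literal contributes one value, a variable looks up its l-cardinality constraint in C, a binary arithmetic operator multiplies the counts, and a comparison caps the count at 2. The following Python sketch is our own rendering of these rules (expressions as nested tuples, constraints as a dictionary; none of these names come from the paper):

import math

def card_expr(e, l, C):
    """O_C^l [e]# : bound on the number of values e may take over traces
    agreeing up to security level l; C maps (l, x) to n for constraints l ; x#n."""
    kind = e[0]
    if kind == "lit":                     # integer literal: a single value
        return 1
    if kind == "var":                     # variable: look up l ; x#n in C
        return C.get((l, e[1]), math.inf)
    if kind == "arith":                   # e1 ⊕ e2: at most n1 * n2 values
        return card_expr(e[2], l, C) * card_expr(e[3], l, C)
    if kind == "cmp":                     # e1 cmp e2: boolean, at most 2 values
        return min(2, card_expr(e[2], l, C) * card_expr(e[3], l, C))
    raise ValueError(kind)

C = {("low", "x"): 1, ("low", "y"): 4}
print(card_expr(("arith", "+", ("var", "x"), ("var", "y")), "low", C))   # 4
print(card_expr(("cmp", "<", ("var", "y"), ("lit", 7)), "low", C))       # 2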
Theorem 4. The cardinality abstract semantics is sound:
α_crdtr ∘ LcM ∘ γ_crdtr ⊑̇♯ LcM♯.
Proof.
The derivation proof is by structural induction on commands. The interesting case is for conditionals.
1 – Case: skip
Let C ∈ Card.
α_crdtr ∘ LskipM ∘ γ_crdtr(C)
= α_crdtr ∘ γ_crdtr(C)
⊑♯ (α_crdtr ∘ γ_crdtr is reductive: α_crdtr ∘ γ_crdtr(C) ⊑♯ C)
C
≜ LskipM♯ C
2 – Case: c1 ; c2
α_crdtr ∘ Lc1 ; c2M ∘ γ_crdtr(C)
= α_crdtr ∘ Lc2M ∘ Lc1M ∘ γ_crdtr(C)
⊑♯ (γ_crdtr ∘ α_crdtr is extensive, Lc2M and α_crdtr are monotone)
α_crdtr ∘ Lc2M ∘ γ_crdtr ∘ α_crdtr ∘ Lc1M ∘ γ_crdtr(C)
⊑♯ (by induction hypothesis)
Lc2M♯ ∘ Lc1M♯ C
≜ Lc1 ; c2M♯ C
3 – Case: id := e
3.1 – We first proceed towards an intermediate derivation:
α_crdtr ∘ Lid := eM ∘ γ_crdtr(C)
= ⊔♯_{T ∈ Lid := eM ∘ γ_crdtr(C)} crdtr(T)
= ⊔♯_{T ∈ Lid := eM ∘ γ_crdtr(C)} ∪_{l∈L, x∈VarP} {l ; x#n | n = α_crdval(O^l ⦃x⦄T)}
= ∪_{l∈L, x∈VarP} { l ; x#n | n = max_{T ∈ Lid := eM ∘ γ_crdtr(C)} α_crdval(O^l ⦃x⦄T) }
We now consider two cases: variables that are not modified by the assignment, and variables that are.
3.2 – Case x ≠ id:
Notice that ∀l ∈ L, ∀x ∈ VarP such that x ≠ id, ∀T ∈ γ_crdtr(C):
O^l ⦃x⦄T = O^l ⦃x⦄ ⦃id := e⦄T
Thus:
max_{T ∈ Lid := eM ∘ γ_crdtr(C)} α_crdval(O^l ⦃x⦄T)
= max_{T ∈ γ_crdtr(C)} α_crdval(O^l ⦃x⦄T)
= (α_crdval preserves joins)
α_crdval ( ∪_{T ∈ γ_crdtr(C)} O^l ⦃x⦄T )
= (by definition of O^l LxM)
α_crdval ( O^l LxM (γ_crdtr(C)) )
= α_crdval ∘ O^l LxM ∘ γ_crdtr(C)
≤ (by soundness of O_C^l LxM♯ C, Lemma 5)
O_C^l LxM♯ C
= n   where l ; x#n ∈ C
3.3 – Case x is id:
∀l ∈ L, we have:
max_{T ∈ Lid := eM ∘ γ_crdtr(C)} α_crdval(O^l ⦃id⦄T)
= max_{T ∈ γ_crdtr(C)} α_crdval(O^l ⦃e⦄T)
= α_crdval ∘ O^l LeM ∘ γ_crdtr(C)
≤ (by soundness of O_C^l LeM♯ C, Lemma 5)
O_C^l LeM♯ C
3.4 – Final derivation:
α_crdtr ∘ Lid := eM ∘ γ_crdtr(C)
= (recall the intermediate derivation in Case 3.1)
∪_{l∈L, x∈VarP} { l ; x#n | n = max_{T ∈ Lid := eM ∘ γ_crdtr(C)} α_crdval(O^l ⦃x⦄T) }
⊑♯ (by Cases 3.2 and 3.3)
∪_{l∈L} ( ∪_{x∈VarP\{id}} {l ; x#n ∈ C} ∪ {l ; id#O_C^l LeM♯ C} )
(NB: this set of constraints remains valid, owing to the exclusion of id on the left)
= {l ; x#n ∈ C | x ≠ id} ∪ {l ; id#O_C^l LeM♯ C | l ∈ L}
≜ Lid := eM♯ C
4 – Case if b then c1 else c0:
4.1 – Intermediate derivation:
α_crdtr ∘ Lif b then c1 else c0M ∘ γ_crdtr(C)
= ⊔♯_{T ∈ Lif b c1 else c0M ∘ γ_crdtr(C)} crdtr(T)
= ⊔♯_{T ∈ Lif b c1 else c0M ∘ γ_crdtr(C)} ∪_{l∈L, x∈VarP} { l ; x#α_crdval(O^l ⦃x⦄T) }
= ∪_{l∈L, x∈VarP} { l ; x# max_{T ∈ Lif b c1 else c0M ∘ γ_crdtr(C)} α_crdval(O^l ⦃x⦄T) }
4.2 – Case O_C^l LbM♯ C = 1:
Let l ∈ L, and assume O_C^l LbM♯ C = 1. Let x ∈ VarP.
∀T′ ∈ Lif b c1 else c0M ∘ γ_crdtr(C), there exists T ∈ γ_crdtr(C) such that T′ = ⦃if b c1 else c0⦄T.
(Since LcM is not just the lifting of ⦃c⦄ to a set of sets (the semantics of loops is not), in general if T′ ∈ LcMT we only have the existence of T ∈ T such that T′ ⊆ ⦃c⦄T. Here, we also rely on the fact that γ_crdtr(C) is subset-closed. This is merely a convenient shortcut to avoid lengthy details; it should be possible to use only the fact that T′ ⊆ ⦃c⦄T to perform the same derivation.)
Let T′ ∈ Lif b c1 else c0M ∘ γ_crdtr(C), and T ∈ γ_crdtr(C) such that T′ = ⦃if b c1 else c0⦄T.
Since α_crdval ∘ O^l LbM ∘ γ_crdtr(C) ≤̇ O_C^l LbM♯ C (= 1), for all R ⊆ T such that R ⊨_Γ l, the traces r ∈ R all evaluate b to 1 or (exclusively) to 0; i.e. the sets R ⊆ T such that R ⊨_Γ l are partitioned into the sets evaluating b to 1, and those evaluating b to 0.
Therefore, ∀R′ ⊆ T′ such that R′ ⊨_Γ l, there exist R ⊆ T and j ∈ {0, 1} such that ⦃b⦄R = {j} and ⦃cj⦄R = R′.
Thus,
α_crdval(O^l ⦃x⦄T′)
= α_crdval( {⦃x⦄R′ | R′ ⊆ T′ and R′ ⊨_Γ l} )
= α_crdval( ∪_{R ⊆ T and R ⊨_Γ l} {⦃x⦄(⦃if b c1 else c0⦄R)} )
= α_crdval( ∪_{j∈{0,1}} ∪_{R ⊆ T, R ⊨_Γ l, ⦃b⦄R = {j}} {⦃x⦄(⦃cj⦄R)} )
= max_{j∈{0,1}} α_crdval( ∪_{R ⊆ T, R ⊨_Γ l, ⦃b⦄R = {j}} {⦃x⦄(⦃cj⦄R)} )
≤ (α_crdval is monotone)
max_{j∈{0,1}} α_crdval ∘ O^l LxM ∘ LcjM ∘ γ_crdtr(C)
≤ (α_crdval ∘ O^l LxM is monotone, γ_crdtr ∘ α_crdtr is extensive)
max_{j∈{0,1}} α_crdval ∘ O^l LxM ∘ γ_crdtr ∘ α_crdtr ∘ LcjM ∘ γ_crdtr(C)
≤ (by induction hypothesis)
max_{j∈{0,1}} α_crdval ∘ O^l LxM ∘ γ_crdtr ∘ LcjM♯ C
≤ (by soundness of the abstract operator O_C^l LxM♯, Lemma 5)
max_{j∈{0,1}} O_C^l LxM♯ ∘ LcjM♯ C
= max_{j∈{0,1}} nj   where l ; x#nj ∈ LcjM♯ C
4.3 – Case O_C^l LbM♯ C > 1, x ∉ Mod(if b c1 else c0):
Let l ∈ L, and assume O_C^l LbM♯ C > 1. Let x ∈ VarP.
Let T′ ∈ Lif b c1 else c0M ∘ γ_crdtr(C), and T ∈ γ_crdtr(C) such that T′ = ⦃if b c1 else c0⦄T.
Notice first that if x ∉ Mod(if b c1 else c0), then:
α_crdval(O^l ⦃x⦄T′) = α_crdval(O^l ⦃x⦄T)
≤ (α_crdval is monotone)
α_crdval ∘ O^l LxM ∘ γ_crdtr(C)
≤ O_C^l LxM♯ C
= n   s.t. l ; x#n ∈ C
4.4 – Case O_C^l LbM♯ C > 1, x ∈ Mod(if b c1 else c0):
Let l ∈ L, and assume O_C^l LbM♯ C > 1. Let x ∈ VarP.
Let T′ ∈ Lif b c1 else c0M ∘ γ_crdtr(C), and T ∈ γ_crdtr(C) such that T′ = ⦃if b c1 else c0⦄T.
α_crdval(O^l ⦃x⦄T′)
= α_crdval( O^l ⦃x⦄ ∘ ⦃if b c1 else c0⦄T )
≤ α_crdval( O^l ⦃x⦄ ( ⦃c1⦄ ∘ ⦃grd_b⦄T ∪ ⦃c0⦄ ∘ ⦃grd_¬b⦄T ) )
≤ α_crdval( O^l ⦃x⦄ ∘ ⦃c1⦄ ∘ ⦃grd_b⦄T ) + α_crdval( O^l ⦃x⦄ ∘ ⦃c0⦄ ∘ ⦃grd_¬b⦄T )
≤ (by monotonicity, T ∈ γ_crdtr(C) and Theorem 1)
α_crdval ∘ O^l LxM ∘ Lc1M ∘ Lgrd_bM ∘ γ_crdtr(C) + α_crdval ∘ O^l LxM ∘ Lc0M ∘ Lgrd_¬bM ∘ γ_crdtr(C)
≤ O_C^l LxM♯ ∘ Lc1M♯ ∘ Lgrd_bM♯ C + O_C^l LxM♯ ∘ Lc0M♯ ∘ Lgrd_¬bM♯ C
≤ (as a first approximation, we simply use Lgrd_bM♯ C ⊑♯ C; we refine this in Section 8)
O_C^l LxM♯ ∘ Lc1M♯ C + O_C^l LxM♯ ∘ Lc0M♯ C
= n1 + n2   s.t. l ; x#n1 ∈ Lc1M♯ C and l ; x#n2 ∈ Lc0M♯ C
4.5 – Final derivation:
α_crdtr ∘ Lif b then c1 else c0M ∘ γ_crdtr(C)
= (by the intermediate derivation in Case 4.1)
∪_{l∈L, x∈VarP} { l ; x# max_{T ∈ Lif b c1 else c0M ∘ γ_crdtr(C)} α_crdval(O^l ⦃x⦄T) }
⊑♯ ∪_{l∈L}  π^l(Lc1M♯ C) ⊔♯ π^l(Lc0M♯ C)                                        if O_C^l LbM♯ C = 1
            π^l(Lc1M♯ C) ⊔♯_{add(if b c1 else c0, π^l(C))} π^l(Lc0M♯ C)          otherwise
with
π^l(C) ≜ {l ; x#n ∈ C | x ∈ VarP, n ∈ [0, ∞]}
and
C1 ⊔♯_{add(com, C0)} C2 ≜ ∪_{x∈Mod(com)} {l ; x#n | n ≜ n1 + n2 s.t. l ; x#nj ∈ Cj, j = 1, 2} ∪ ∪_{x∈VarP\Mod(com)} {l ; x#n ∈ C0}
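The two ways of combining the branch results in the final derivation, the plain join when the guard is determined at level l and the additive join ⊔♯_add for modified variables otherwise, can be sketched in a few lines of Python. This is our own illustration; constraints are kept as a dictionary mapping (l, x) to n:

def join(C1, C2):
    # pointwise join of cardinality bounds (least upper bound in Card)
    return {k: max(C1.get(k, 0), C2.get(k, 0)) for k in C1.keys() | C2.keys()}

def join_add(C1, C2, C0, modified):
    # variables written in the conditional may take values from either branch,
    # so their counts add up; untouched variables keep their bound from C0
    out = {}
    for (l, x) in C1.keys() | C2.keys() | C0.keys():
        if x in modified:
            out[(l, x)] = C1.get((l, x), 0) + C2.get((l, x), 0)
        else:
            out[(l, x)] = C0.get((l, x), 0)
    return out

C0 = {("low", "x"): 1, ("low", "y"): 3}
C1 = {("low", "x"): 1, ("low", "y"): 3}   # then-branch result
C2 = {("low", "x"): 2, ("low", "y"): 3}   # else-branch result
print(join(C1, C2))                        # guard determined at "low": x -> 2, y -> 3
print(join_add(C1, C2, C0, {"x"}))         # guard undetermined: x -> 1 + 2 = 3, y -> 3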
5 – Case while b do c:
α_crdtr ∘ Lwhile b do cM ∘ γ_crdtr(C)
= α_crdtr ∘ Lgrd_¬bM ( lfp⊆_{γ_crdtr(C)} Lif b then c else skipM )
⊑♯ (α_crdtr and Lgrd_¬bM are monotone, γ_crdtr ∘ α_crdtr is extensive)
α_crdtr ∘ Lgrd_¬bM ∘ γ_crdtr ∘ α_crdtr ( lfp⊆_{γ_crdtr(C)} Lif b then c else skipM )
⊑♯ (by assuming Lgrd_¬bM♯ is sound)
Lgrd_¬bM♯ ∘ α_crdtr ( lfp⊆_{γ_crdtr(C)} Lif b then c else skipM )
⊑♯ (by the fixpoint transfer theorem)
Lgrd_¬bM♯ ( lfp⊑♯_C Lif b then c else skipM♯ )
⊑♯ (precision loss for simplicity as a first approximation, Lgrd_¬bM♯ ⊑̇ id)
lfp⊑♯_C Lif b then c else skipM♯
≜ Lwhile b do cM♯ C
6 – Case : conclusion
We conclude by structural induction on commands and cases 1 to 5.
Appendix G.
Dependencies Reloaded
Appendix G.1
Soundness Proof for the Dependence Semantics
As noted in the text, we can derive the dependency analysis by calculation from its specification. The derivation looks similar to the one in
Appendix F for the cardinality abstraction. So here we choose a different way of proving soundness for dependency analysis. We formulate it
as an abstraction of the cardinality abstraction. This is another illustration of the benefit gained from working with hyperproperties entirely
within the framework of abstract interpretation.
This proof of soundness also implies that the cardinality abstraction is at least as precise as the type system of Hunt and Sands (Hunt and
Sands 2006) and the logic of Amtoft and Banerjee (Amtoft and Banerjee 2004), as a corollary of Theorem 3.
Lemma 6. (α_agree, γ_agree) is the composition of two Galois connections (α_crdval, γ_crdval) and (α_lqone, γ_lqone):
(P(P(Val)), ⊆) ⇄ ([0, ∞], ≤) via (α_crdval, γ_crdval)   and   ([0, ∞], ≤) ⇄ ({tt, ff}, ⇐) via (α_lqone, γ_lqone)
with:
α_lqone(n) ≜ tt if n ≤ 1, ff otherwise        γ_lqone(bv) ≜ 1 if bv = tt, ∞ otherwise.
Proof.
Notice that:
agree(V) ≜ (∀v1, v2 ∈ V, v1 = v2) = (crdval(V) ≤ 1)
Also,
α_agree(V) ≜ ∧_{V∈V} agree(V)
= ∧_{V∈V} (crdval(V) ≤ 1)
= (max_{V∈V} crdval(V) ≤ 1)
= (α_crdval(V) ≤ 1)
= α_lqone ∘ α_crdval(V)
where α_lqone(n) ≜ tt if n ≤ 1, ff otherwise.
And,
γ_agree(bv) ≜ {V ∈ P(Val) | agree(V) ⇐ bv}
= {V ∈ P(Val) | crdval(V) ≤ γ_lqone(bv)}   where γ_lqone(bv) ≜ 1 if bv = tt, ∞ otherwise
= γ_crdval ∘ γ_lqone(bv)
Notice that ([0, ∞], ≤) ⇄ ({tt, ff}, ⇐) via (α_lqone, γ_lqone):
∀n ∈ [0, ∞], ∀bv ∈ {tt, ff},   α_lqone(n) ⇐ bv iff n ≤ γ_lqone(bv)
Thus, we obtain α_agree = α_lqone ∘ α_crdval, as well as γ_agree = γ_crdval ∘ γ_lqone:
(P(P(Val)), ⊆) ⇄ ([0, ∞], ≤) ⇄ ({tt, ff}, ⇐)
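Lemma 6 in operational terms: whether all value sets in V are singletons can be decided by first counting (α_crdval) and then comparing to 1 (α_lqone). The following Python check of α_agree = α_lqone ∘ α_crdval is our own illustration on a handful of example families of value sets:

def alpha_crdval(Vs):
    # maximal cardinality over all value sets (0 for the empty family)
    return max((len(V) for V in Vs), default=0)

def alpha_lqone(n):
    return n <= 1            # tt iff at most one value

def alpha_agree(Vs):
    return all(len(V) <= 1 for V in Vs)

examples = [[{1}, {2}], [{1, 2}, {3}], []]
for Vs in examples:
    assert alpha_agree(Vs) == alpha_lqone(alpha_crdval(Vs))
print("agreement coincides with (cardinality <= 1) on all examples")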
Lemma 7. (α_deptr, γ_deptr) is the composition of two Galois connections (α_crdtr, γ_crdtr) and (α_lqonecc, γ_lqonecc):
(P(P(Trc)), ⊆) ⇄ (Card, ⊑♯) via (α_crdtr, γ_crdtr)   and   (Card, ⊑♯) ⇄ (Dep, ⊑♮) via (α_lqonecc, γ_lqonecc)
with:
α_lqonecc(C) ≜ {l ; x | l ; x#n ∈ C and α_lqone(n)}
γ_lqonecc(D) ≜ ∪_{l∈L, x∈VarP} {l ; x#n | n = γ_lqone(l ; x ∈ D)}
Proof.
First,
α_deptr(T) = ⊔♮_{T∈T} deptr(T)
= ⊔♮_{T∈T} ∪_{l∈L, x∈VarP} {l ; x | α_agree(O^l ⦃x⦄T)}
= (by the decomposition of α_agree in Lemma 6)
⊔♮_{T∈T} ∪_{l∈L, x∈VarP} {l ; x | α_lqone ∘ α_crdval(O^l ⦃x⦄T)}
= ⊔♮_{T∈T} α_lqonecc( ∪_{l∈L, x∈VarP} {l ; x#n | n = α_crdval(O^l ⦃x⦄T)} )
  (with α_lqonecc(C) ≜ {l ; x | l ; x#n ∈ C and α_lqone(n)})
= ⊔♮_{T∈T} α_lqonecc ∘ crdtr(T)
= (α_lqonecc preserves unions)
α_lqonecc( ⊔♯_{T∈T} crdtr(T) )
= α_lqonecc ∘ α_crdtr(T)
Also,
γ_deptr(D) = {T | deptr(T) ⊑♮ D}
= {T | α_lqonecc ∘ crdtr(T) ⊑♮ D}
= {T | α_lqonecc ∘ crdtr(T) ⊇ D}
= {T | ∀l ; x ∈ D, l ; x ∈ α_lqonecc ∘ crdtr(T)}
= {T | ∀l ; x ∈ D, l ; x ∈ α_lqonecc( ∪_{l∈L, x∈VarP} {l ; x#n | n = α_crdval(O^l ⦃x⦄T)} )}
= {T | ∀l ; x ∈ D, α_lqone(α_crdval(O^l ⦃x⦄T))}
= {T | ∀l ; x ∈ D, α_crdval(O^l ⦃x⦄T) ≤ 1}
= γ_crdtr ∘ γ_lqonecc(D)
  (with γ_lqonecc(D) ≜ ∪_{l∈L, x∈VarP} {l ; x#n | n = γ_lqone(l ; x ∈ D)})
Therefore, we have α_deptr = α_lqonecc ∘ α_crdtr and γ_deptr = γ_crdtr ∘ γ_lqonecc, with:
(P(P(Trc)), ⊆) ⇄ (Card, ⊑♯) ⇄ (Dep, ⊑♮)
Lemma 4. O_D^l LeM♮ is sound:
∀e, ∀l, ∀D,   α_agree ∘ O^l LeM ∘ γ_deptr(D) ⇐ O_D^l LeM♮ D.
Proof.
1 – Derivation of the agreements O_D^l LeM♮ up to l as an abstraction of the cardinalities up to security level l.
α_agree ∘ O^l LeM ∘ γ_deptr(D) = α_lqone ∘ α_crdval ∘ O^l LeM ∘ γ_crdtr ∘ γ_lqonecc(D)
Henceforth, we will derive O_D^l LeM♮ such that α_lqone ∘ O_C^l LeM♯ ∘ γ_lqonecc(D) ⇐ O_D^l LeM♮ D, i.e. as an abstraction of the cardinalities O_C^l LeM♯. This derivation goes by structural induction on expressions.
1.1 – Case: n
Let l ∈ L, D ∈ Dep.
α_lqone ∘ O_C^l LnM♯ ∘ γ_lqonecc(D) = α_lqone(1) = tt   (≜ O_D^l LnM♮)
1.2 – Case: id
Let l ∈ L, D ∈ Dep.
α_lqone ∘ O_C^l LidM♯ ∘ γ_lqonecc(D) = α_lqone(n) where l ; id#n ∈ γ_lqonecc(D)
= (l ; id ∈ D)   (≜ O_D^l LidM♮)
1.3 – Case: e1 ⊕ e2
Let l ∈ L, D ∈ Dep.
α_lqone ∘ O_C^l Le1 ⊕ e2M♯ ∘ γ_lqonecc(D)
= α_lqone( (O_C^l Le1M♯ ∘ γ_lqonecc(D)) × (O_C^l Le2M♯ ∘ γ_lqonecc(D)) )
⇐ α_lqone ∘ O_C^l Le1M♯ ∘ γ_lqonecc(D) ∧ α_lqone ∘ O_C^l Le2M♯ ∘ γ_lqonecc(D)
= O_D^l Le1M♮ D ∧ O_D^l Le2M♮ D   (≜ O_D^l Le1 ⊕ e2M♮ D)
1.4 – Case: e1 cmp e2
This case is similar to case 1.3.
1.5 – Case: conclusion
We conclude by structural induction on expressions.
Theorem 2. The dependence semantics is sound:
α_deptr ∘ LcM ∘ γ_deptr ⊑̇♮ LcM♮.
Proof.
Recall that we have α_deptr = α_lqonecc ∘ α_crdtr and γ_deptr = γ_crdtr ∘ γ_lqonecc, with:
P(P(Trc)) ⇄ Card ⇄ Dep
Since
α_deptr ∘ LcM ∘ γ_deptr(D)
= α_lqonecc ∘ α_crdtr ∘ LcM ∘ γ_crdtr ∘ γ_lqonecc(D)
⊑♮ α_lqonecc ∘ LcM♯ ∘ γ_lqonecc(D)
we will continue the derivation of the dependences abstract semantics LcM♮ as an abstraction of LcM♯.
We make explicit two derivations, for assignments and conditionals. The other cases are similar to the derivation of the cardinalities abstract semantics.
1 – Case: id := e
α_lqonecc ∘ Lid := eM♯ ∘ γ_lqonecc(D)
= α_lqonecc( {l ; x#n ∈ γ_lqonecc(D) | x ≠ id} ∪ {l ; id#n | n ≜ O_C^l LeM♯ ∘ γ_lqonecc(D), l ∈ L} )
= {l ; x ∈ D | x ≠ id} ∪ {l ; id | O_D^l LeM♮ D}
2 – Case if b then c1 else c2:
α_lqonecc ∘ Lif b then c1 else c2M♯ ∘ γ_lqonecc(D)
= α_lqonecc( let C1 = Lc1M♯ ∘ γ_lqonecc(D) in
             let C2 = Lc2M♯ ∘ γ_lqonecc(D) in
             let W = Mod(if b c1 else c2) in
             ∪_{l∈L}  π^l(C1) ⊔♯ π^l(C2)                        if O_C^l LbM♯ ∘ γ_lqonecc(D) = 1
                      π^l(C1) ⊔♯_{add(W, π^l(C))} π^l(C2)        otherwise )
⊑♮ let D1 = Lc1M♮ D in
   let D2 = Lc2M♮ D in
   let W = Mod(if b c1 else c2) in
   ∪_{l∈L}  π^l(D1) ⊔♮ π^l(D2)               if O_D^l LbM♮ D
            π^l(D) \ {l ; x | x ∈ W}          otherwise
(the last expression is taken as the definition of Lif b then c1 else c2M♮ D)
We conclude by structural induction on commands.
Appendix G.2
Precision Proof
Lemma 11. For all l, l′ ∈ L, for all T ∈ P(Trc):
l ⊑ l′ ⟹ O^{l′} ⦃e⦄T ⊆ O^l ⦃e⦄T.
Proof.
Assume l ⊑ l′. Then, for all R ⊆ T,
R ⊨_Γ l′ ⟹ R ⊨_Γ l
Thus,
{R | R ⊆ T and R ⊨_Γ l′} ⊆ {R | R ⊆ T and R ⊨_Γ l}
Therefore, it holds that:
O^{l′} ⦃e⦄T ⊆ O^l ⦃e⦄T
Corollary 2. For all l, l′ ∈ L, for all T ∈ P(P(Trc)):
l ⊑ l′ ⟹ O^{l′} LeMT ⊆ O^l LeMT
Proof. This is a direct result of Lemma 11 and the definition of O^l LeM.
Corollary 3. For all l, l′ ∈ L, for all id, for all T ∈ P(P(Trc)):
l ⊑ l′ ⟹ ( l ; id ∈ α_deptr(T) ⟹ l′ ; id ∈ α_deptr(T) )
Proof.
Let us assume l ⊑ l′. By Corollary 2, we have O^{l′} LidMT ⊆ O^l LidMT.
And by monotonicity of α_agree, we have α_agree(O^{l′} LidMT) ⇐ α_agree(O^l LidMT).
Thus, if l ; id ∈ α_deptr(T), then α_agree(O^l LidMT) = tt and also
α_agree(O^{l′} LidMT) = tt, thus l′ ; id ∈ α_deptr(T).
Corollary 4. For all l, l′ ∈ L, for all id, for all D ∈ Dep:
l ⊑ l′ ⟹ γ_deptr(D ∪ {l ; id}) = γ_deptr(D ∪ {l ; id, l′ ; id})
Proof.
1 – Note that D ∪ {l ; id, l′ ; id} ⊇ D ∪ {l ; id}, thus
D ∪ {l ; id, l′ ; id} ⊑♮ D ∪ {l ; id}
Therefore, by monotony of γ_deptr:
γ_deptr(D ∪ {l ; id, l′ ; id}) ⊆ γ_deptr(D ∪ {l ; id})
2 – Also, let T ∈ γ_deptr(D ∪ {l ; id}).
We have deptr(T) ⊑♮ D ∪ {l ; id} by definition of γ_deptr.
Thus, l ; id ∈ deptr(T) and also l′ ; id ∈ deptr(T) by Corollary 3. This also means that deptr(T) ⊑♮ D ∪ {l ; id, l′ ; id}.
Finally, T ∈ γ_deptr(D ∪ {l ; id, l′ ; id}) and
γ_deptr(D ∪ {l ; id, l′ ; id}) ⊇ γ_deptr(D ∪ {l ; id})
This concludes our proof by Cases 1 and 2.
Henceforth, we will assume that all D ∈ Dep are well formed, meaning that for all l, l′ ∈ L with l ⊑ l′, l ; id ∈ D ⟹ l′ ; id ∈ D.
We conjecture that this can be proven for the dependence analysis we have derived: given a well formed initial set of dependence constraints,
the analysis always yields a well formed set of dependence constraints. For simplicity, we will use Corollary 4 to argue that we can still
augment any set of dependence constraints to ensure it is well formed by adding the appropriate atomic constraints. An alternative approach
would reduce the set of dependence constraints, and change the abstract semantics slightly in order to leverage Corollary 4 and guarantee the
same precision, but we refrain from doing so for simplicity.
We consider the constructive version of Hunt and Sands’ flow sensitive type system, proposed in (Hunt and Sands 2011).
Lemma 12. For all e, D ∈ Dep, ∆ ∈ VarP → L, l ∈ L such that ∆ ⊢ e : l, it holds that:
α_hs(D) ⊑̇ ∆ ⟹ O_D^l LeM♮ D = tt
Proof.
The proof proceeds by structural induction on expressions.
1 – Case n:
By definition of O_D^l LnM♮, we have:
∀D, ∀l ∈ L, O_D^l LnM♮ D = tt
2 – Case id:
By definition of the type system, we have ∆(id) = l. Thus:
α_hs(D) ⊑̇ ∆ ⟹ ⊓{l′ | l′ ; id ∈ D} ⊑ l
⟹ (since D is assumed well-formed) l ; id ∈ D
⟹ O_D^l LidM♮ D = tt
3 – Case e1 ⊕ e2:
By definition of the type system, there are l1, l2 such that ∆ ⊢ e1 : l1 and ∆ ⊢ e2 : l2, with l1 ⊔ l2 = l.
Thus, by induction on e1 and e2, and assuming α_hs(D) ⊑̇ ∆, we have:
O_D^{l1} Le1M♮ D = tt ∧ O_D^{l2} Le2M♮ D = tt
Thus, since D is well formed and l1 ⊑ l and l2 ⊑ l, it holds that:
O_D^l Le1M♮ D = tt ∧ O_D^l Le2M♮ D = tt
Therefore, also:
O_D^l Le1 ⊕ e2M♮ D = tt
4 – Case e1 cmp e2:
This case is similar to Case 3.
5 – We conclude by structural induction and Cases 1 to 4.
Let us denote by ⊥L ∈ L the bottom element of the lattice L.
Theorem 3. For all c, D0, D ∈ Dep, ∆0, ∆ ∈ VarP → L, where ⊥L ⊢ ∆0 {c} ∆ and D = LcM♮ D0, it holds that:
α_hs(D0) ⊑̇ ∆0 ⟹ α_hs(D) ⊑̇ ∆.
Proof.
The proof goes by structural induction on commands. The conditional case explicitly assumes that the modified-variables analysis is precise
enough to enable the simulation of the program counter. This can be achieved by collecting variable names in a while language.
1 – Case skip: this case follows directly from the premise.
2 – Case id := e:
Assume α_hs(D0) ⊑̇ ∆0.
2.1 – Case x ≠ id:
Then, for all x ≠ id, ∆(x) = ∆0(x).
Also, α_hs(D)(x) = α_hs(D0)(x) ⊑ ∆0(x) = ∆(x).
2.2 – Case x = id:
Otherwise, ∆ = ∆0[id ↦ l], where ∆0 ⊢ e : l.
By Lemma 12, since α_hs(D0) ⊑̇ ∆0, we have O_D^l LeM♮ D0 = tt.
Thus, l ; id ∈ D and:
α_hs(D)(id) ⊑ ∆(id)
2.3 – Finally, by Cases 2.1 and 2.2, we have:
α_hs(D) ⊑̇ ∆
3 – Case c1 ; c2 :
This case proceeds by induction on both c1 and c2 by remarking that the type system types both command c1 and c2 in ⊥L .
4 – Case if (b) then c1 else c2:
Assume α_hs(D0) ⊑̇ ∆0. Let lb, ∆1, ∆2 be such that lb ⊢ ∆0 {c1} ∆1 and lb ⊢ ∆0 {c2} ∆2, with ∆ = ∆1 ⊔̇ ∆2.
Also, let ∆′1 and ∆′2 be such that ⊥L ⊢ ∆0 {c1} ∆′1 and ⊥L ⊢ ∆0 {c2} ∆′2, with:
∆1 = ∆′1[id ↦ ∆′1(id) ⊔ lb, ∀id ∈ Mod(c1)],   ∆2 = ∆′2[id ↦ ∆′2(id) ⊔ lb, ∀id ∈ Mod(c2)].
Intuitively, the program counter pc can be simulated by a modified-variables analysis that is precise enough. For a while language, this can be achieved simply by collecting variable names.
Let D1, D2 ∈ Dep be such that Lc1M♮ D0 = D1 and Lc2M♮ D0 = D2.
Then, assuming W = Mod(if (b) then c1 else c2), we have:
D = ∪_{l∈L}  π^l(D1) ⊔♮ π^l(D2)               if O_D^l LbM♮ D0
             {l ; x ∈ π^l(D0) | x ∉ W}         otherwise
4.1 – By induction on c1, we have α_hs(D1) ⊑̇ ∆′1.
4.2 – By induction on c2, we have α_hs(D2) ⊑̇ ∆′2.
4.3 – Assume x ∉ W, and prove α_hs(D)(x) ⊑ ∆(x).
Since x ∉ W, we have ∆(x) = ∆0(x).
Therefore, α_hs(D0) ⊑̇ ∆0 implies ∆(x) ; x ∈ D0.
Thus, since x ∉ W, we have ∆(x) ; x ∈ D1 and ∆(x) ; x ∈ D2 (atomic constraints related to variables not explicitly written in c1 are
not discarded from D0, and likewise for those that are not explicitly written in c2). Thus, ∆(x) ; x ∈ D, meaning that α_hs(D)(x) ⊑ ∆(x).
4.4 – Assume x ∈ W and prove α_hs(D)(x) ⊑ ∆(x):
We have lb ⊑ ∆(x) since x is explicitly written in at least one of the branches.
Also, by 4.1 and 4.2, we have α_hs(D1)(x) ⊑ ∆′1(x) and α_hs(D2)(x) ⊑ ∆′2(x), meaning that
α_hs(D1 ⊔♮ D2)(x) = α_hs(D1)(x) ⊔ α_hs(D2)(x) ⊑ ∆′1(x) ⊔ ∆′2(x) ⊑ ∆(x)
Notice that ∆′1 and ∆′2 are well formed. Thus, there exists lx with lb ⊑ lx such that lx ; x ∈ D1 ⊔♮ D2 and lx ⊑ ∆(x).
And for all l ∈ L such that lb ⊑ l, we also have O_D^l LbM♮ D0 = tt, by using Lemma 12 and the fact that D0 is well formed.
Thus, for all l ∈ L such that lb ⊑ l, π^l(D) = π^l(D1) ⊔♮ π^l(D2) = π^l(D1 ⊔♮ D2), i.e. lx ; x ∈ D.
Thus, α_hs(D)(x) ⊑ ∆(x).
5 – Case while (b) do c:
Assume α_hs(D0) ⊑̇ ∆0.
The output type environment ∆ is defined by:
∆ = lfp λ∆v. let ∆′ s.t. ⊥L ⊔ ∆v(b) ⊢ ∆v {c} ∆′ in ∆0 ⊔̇ ∆′
Or, written differently, ∆ is given by:
∆ = lfp λ∆v. let ∆′ s.t. ⊥L ⊢ ∆v {if (b) then c else skip} ∆′ in ∆0 ⊔̇ ∆′
Let (∆n) be the sequence defined as
∆_{n+1} = let ∆′ s.t. ⊥L ⊢ ∆n {if (b) then c else skip} ∆′ in ∆0 ⊔̇ ∆′
Also, let (Dn) be the sequence defined as D_{n+1} = D0 ⊔♮ Lif (b) then c else skipM♮ Dn.
Then, we prove by induction on n that α_hs(Dn) ⊑̇ ∆n.
5.1 – Case n = 0: this case holds by the assumption α_hs(D0) ⊑̇ ∆0.
5.2 – Case: assume α_hs(Dn) ⊑̇ ∆n, and prove α_hs(D_{n+1}) ⊑̇ ∆_{n+1}.
Let ∆′ be such that ⊥L ⊢ ∆n {if (b) then c else skip} ∆′.
By assumption, we have α_hs(Dn) ⊑̇ ∆n. Thus, by the same argument as in Case 4, we have
α_hs(Lif (b) then c else skipM♮ Dn) ⊑̇ ∆′
Therefore,
α_hs(Lif (b) then c else skipM♮ Dn) ⊔̇ α_hs(D0) ⊑̇ ∆′ ⊔̇ ∆0
Therefore, α_hs(D_{n+1}) ⊑̇ ∆_{n+1}; passing to the limits of both fixpoint iterations yields α_hs(D) ⊑̇ ∆.
6 – Finally, we conclude by Cases 1–5, and structural induction on commands.
arXiv:1605.04477v2 [cs.PL] 26 Jul 2016
Bounded Model Checking for Probabilistic Programs⋆
Nils Jansen², Christian Dehnert¹, Benjamin Lucien Kaminski¹, Joost-Pieter Katoen¹, and Lukas Westhofen¹
¹ RWTH Aachen University, Germany
² University of Texas at Austin, USA
Abstract. In this paper we investigate the applicability of standard
model checking approaches to verifying properties in probabilistic programming. As the operational model for a standard probabilistic program
is a potentially infinite parametric Markov decision process, no direct
adaptation of existing techniques is possible. Therefore, we propose an on–
the–fly approach where the operational model is successively created and
verified via a step–wise execution of the program. This approach makes it possible
to take key features of many probabilistic programs into account: nondeterminism and conditioning. We discuss the restrictions and demonstrate
the scalability on several benchmarks.
1
Introduction
Probabilistic programs are imperative programs, written in languages like C,
Scala, Prolog, or ML, with two added constructs: (1) the ability to draw values at random from probability distributions, and (2) the ability to condition
values of variables in a program through observations. In the past years, such
programming languages became very popular due to their wide applicability for
several different research areas [1]: Probabilistic programming is at the heart
of machine learning for describing distribution functions; Bayesian inference
is pivotal in their analysis. They are central in security for describing cryptographic constructions (such as randomized encryption) and security experiments.
In addition, probabilistic programs are an active research topic in quantitative
information flow. Moreover, quantum programs are inherently probabilistic due
to the random outcomes of quantum measurements. All in all, the simple and
intuitive syntax of probabilistic programs makes these different research areas
accessible to a broad audience.
However, although these programs typically consist of a few lines of code, they
are often hard to understand and analyze; bugs, for instance non–termination of
a program, can easily occur. It seems of utmost importance to be able to automatically prove properties like “Is the probability for termination of the program
⋆ This work has been partly funded by the awards AFRL # FA9453-15-1-0317, ARO # W911NF-15-1-0592 and ONR # N00014-15-IP-00052 and is supported by the Excellence Initiative of the German federal and state government.
at least 90%” or “Is the expected value of a certain program variable at least
5 after successful termination?”. Approaches based on simulating a program to show properties or infer probabilities have been proposed in the past [2,3].
However, to the best of our knowledge there is no work which exploits well-established model checking algorithms for probabilistic systems such as Markov
decision processes (MDP) or Markov chains (MCs), as already argued to be an
interesting avenue for the future in [1].
As the operational semantics for a probabilistic program can be expressed as a
(possibly infinite) MDP [4], it seems worthwhile to investigate the opportunities
there. However, probabilistic model checkers like PRISM [5], iscasMc [6], or
MRMC [7] offer efficient methods only for finite models.
We make use of the simple fact that for a finite unrolling of a program
the corresponding operational MDP is also finite. Starting from a profound understanding of the (intricate) probabilistic program semantics—including features such as observations, unbounded (and hence possibly diverging) loops, and
nondeterminism—we show that with each unrolling of the program both conditional reachability probabilities and conditional expected values of program
variables increase monotonically. This gives rise to a bounded model-checking
approach for verifying probabilistic programs. This enables a user to write a
program and automatically verify it against a desired property without further
knowledge of the program's semantics.
We extend this methodology to the even more complicated case of parametric probabilistic programs, where probabilities are given by functions over
parameters. At each iteration of the bounded model checking procedure, parameter valuations violating certain properties are guaranteed to induce violation at
each further iteration.
We demonstrate the applicability of our approach using five well-known
benchmarks from the literature. Using efficient model building and verification
methods, our prototype is able to prove properties where either the state space
of the operational model is infinite or consists of millions of states.
Related Work. Besides the tools employing probabilistic model checking as listed
above, one should mention the approach in [8], where finite abstractions of the
operational semantics of a program were verified. However, this was defined for
programs without parametric probabilities or observe statements. In [9], verification on partial operational semantics is theoretically discussed for termination
probabilities.
The paper is organized as follows: In Section 2, we introduce the probabilistic
models we use, the probabilistic programming language, and the structured operational semantics (SOS) rules to construct an operational (parametric) MDP.
Section 3 first introduces formal concepts needed for the finite unrollings of the
program, then shows how expectations and probabilities grow monotonically,
and finally explains how this is utilized for bounded model checking. In Section 4, an extensive description of used benchmarks, properties and experiments
is given before the paper concludes with Section 5.
2
Preliminaries
2.1
Distributions and Polynomials
A probability distribution over a finite or countably infinite set X is a function
µ : X → [0, 1] ⊆ R with Σ_{x∈X} µ(x) = 1. The set of all distributions on X is
denoted by Distr (X). Let V be a finite set of parameters over R. A valuation
for V is a function u : V → R. Let Q[V ] denote the set of multivariate polynomials with rational coefficients and QV the set of rational functions (fractions
of polynomials) over V . For g ∈ Q[V ] or g ∈ QV , let g[u] denote the evaluation
of g at u. We write g = 0 if g can be reduced to 0, and g 6= 0 otherwise.
2.2
Probabilistic Models
First, we introduce parametric probabilistic models which can be seen as transition systems where the transitions are labelled with polynomials in Q[V ].
Definition 1 (pMDP and pMC). A parametric Markov decision process
(pMDP) is a tuple M = (S, sI , Act, P) with a countable set S of states,
an initial state sI ∈ S, a finite set Act of actions, and a transition function
P : S × Act × S → Q[V ] satisfying for all s ∈ S : Act(s) 6= ∅, where V is a finite
set of parameters over R and Act(s) = {α ∈ Act | ∃s0 ∈ S. P(s, α, s0 ) 6= 0}. If
for all s ∈ S it holds that |Act(s)| = 1, M is called a parametric discrete-time
Markov chain (pMC), denoted by D.
At each state, an action is chosen nondeterministically, then the successor states
are determined probabilistically as defined by the transition function. Act(s) is
the set of enabled actions at state s. As Act(s) is non-empty for all s ∈ S, there
are no deadlock states. For pMCs there is only one single action per state and
we write the transition probability function as P : S × S → Q[V ], omitting that
action. Rewards are defined using a reward function rew : S → R which assigns
rewards to states of the model. Intuitively, the reward rew(s) is earned upon
leaving the state s.
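A parametric Markov chain in the sense of Definition 1 can be prototyped with transition entries given as functions of the parameters; instantiating a valuation u then yields an ordinary MC whose rows must sum to one. The following Python sketch is a hedged illustration of ours (it is not the paper's tooling, and the two-state chain and the parameter name p are invented for the example):

# pMC over parameter p: state 0 goes to 1 with probability p and to 2 with 1 - p;
# states 1 and 2 are absorbing
pmc = {
    0: {1: lambda u: u["p"], 2: lambda u: 1 - u["p"]},
    1: {1: lambda u: 1.0},
    2: {2: lambda u: 1.0},
}

def instantiate(pmc, u):
    mc = {s: {t: f(u) for t, f in row.items()} for s, row in pmc.items()}
    for s, row in mc.items():                      # well-definedness check
        assert abs(sum(row.values()) - 1.0) < 1e-9, f"row of state {s} does not sum to 1"
    return mc

print(instantiate(pmc, {"p": 0.3}))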
Schedulers. The nondeterministic choices of actions in pMDPs can be resolved
using schedulers 3 . In our setting it suffices to consider memoryless deterministic
schedulers [10]. For more general definitions we refer to [11].
Definition 2. (Scheduler) A scheduler for pMDP M = (S, sI , Act, P) is a
function S : S → Act with S(s) ∈ Act(s) for all s ∈ S.
Let Sched M denote the set of all schedulers for M. Applying a scheduler to
a pMDP yields an induced parametric Markov chain, as all nondeterminism is
resolved, i.e., the transition probabilities are obtained w.r.t. the choice of actions.
Definition 3 (Induced pMC). Given a pMDP M = (S, sI , Act, P), the pMC induced by S ∈ Sched_M is given by M^S = (S, sI , Act, P^S), where
P^S(s, s′) = P(s, S(s), s′)   for all s, s′ ∈ S.
³ Also referred to as adversaries, strategies, or policies.
Valuations. Applying a valuation u to a pMDP M, denoted M[u], replaces each
polynomial g in M by g[u]. We call M[u] the instantiation of M at u. A valuation
u is well-defined for M if the replacement yields probability distributions at all
states; the resulting model M[u] is a Markov decision process (MDP) or, in
absence of nondeterminism, a Markov chain (MC).
Properties. For our purpose we consider conditional reachability properties and
conditional expected reward properties in MCs. For more detailed definitions we
refer to [11, Ch. 10]. Given an MC D with state space S and initial state sI ,
let PrD (¬♦U ) denote the probability not to reach a set of undesired states U
from the initial state sI within D. Furthermore, let PrD (♦T | ¬♦U ) denote the
conditional probability to reach a set of target states T ⊆ S from the initial
state sI within D, given that no state in the set U is reached. We use the
standard probability measure on infinite paths through an MC. For threshold
λ ∈ [0, 1] ⊆ R, the reachability property, asserting that a target state is to be
reached with conditional probability at most λ, is denoted ϕ = P≤λ (♦T | ¬♦U ).
The property is satisfied by D, written D |= ϕ, iff PrD (♦T | ¬♦U ) ≤ λ. This is
analogous for comparisons like <, >, and ≥.
The reward of a path through an MC D until T is the sum of the rewards
of the states visited along on the path before reaching T . The expected reward
of a finite path is given by its probability times its reward. Given PrD (♦T ) = 1,
the conditional expected reward of reaching T ⊆ S, given that no state in set
U ⊆ S is reached, denoted ERD (♦T | ¬♦U ), is the expected reward of all paths
accumulated until hitting T while not visiting a state in U in between divided
by the probability of not reaching a state in U (i.e., divided by PrD (¬♦U )).
An expected reward property is given by ψ = E≤κ (♦T | ¬♦U ) with threshold
κ ∈ R≥0 . The property is satisfied by D, written D |= ψ, iff ERD (♦T | ¬♦U ) ≤ κ.
Again, this is analogous for comparisons like <, >, and ≥. For details about
conditional probabilities and expected rewards see [12].
Reachability probabilities and expected rewards for MDPs are defined on
induced MCs for specific schedulers. We take here the conservative view that a
property for an MDP has to hold for all possible schedulers.
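For a finite MC, the conditional probability Pr(♦T | ¬♦U) can be approximated by solving the unconditional reachability probabilities of T and of U and forming the quotient Pr(♦T) / (1 − Pr(♦U)); this simple form is valid in the toy chain below because the target state is absorbing, so reaching it cannot later be followed by reaching the undesired state. The following Python sketch uses plain value iteration and is our own illustration, not an algorithm from the paper:

# toy MC: from state 0, go to 1 (target) w.p. 0.2, to 2 (undesired) w.p. 0.3,
# to 3 (neutral) w.p. 0.4, and stay in 0 w.p. 0.1; states 1, 2, 3 are absorbing
P = {0: {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}, 1: {1: 1.0}, 2: {2: 1.0}, 3: {3: 1.0}}

def reach_prob(P, targets, iters=1000):
    # value iteration for unbounded reachability on a finite MC
    x = {s: (1.0 if s in targets else 0.0) for s in P}
    for _ in range(iters):
        x = {s: (1.0 if s in targets else sum(p * x[t] for t, p in P[s].items())) for s in P}
    return x

pr_T = reach_prob(P, {1})[0]
pr_U = reach_prob(P, {2})[0]
print(pr_T / (1 - pr_U))   # Pr(reach 1 | never reach 2) = (2/9) / (2/3) = 1/3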
Parameter Synthesis. For pMCs, one is interested in synthesizing well-defined
valuations that induce satisfaction or violation of the given specifications [13].
In detail, for a pMC D, a rational function g ∈ QV is computed which—
when instantiated by a well-defined valuation u for D—evaluates to the actual
reachability probability or expected reward for D, i.e., g[u] = PrD[u] (♦T ) or
g[u] = ERD[u] (♦T ). For pMDPs, schedulers inducing maximal or minimal probability or expected reward have to be considered [14].
2.3
Conditional Probabilistic Guarded Command Language
We first present a programming language which is an extension of Dijkstra’s
guarded command language [15] with a binary probabilistic choice operator,
yielding the probabilistic guarded command language (pGCL) [16]. In [17], pGCL
was endowed with observe statements, giving rise to conditioning. The syntax of
this conditional probabilistic guarded command language (cpGCL) is given by
P ::= skip | abort | x := E | P; P | if G then P else P
      | {P} [g] {P} | {P} □ {P} | while (G) {P} | observe (G)
Here, x belongs to the set of program variables V; E is an arithmetical expression
over V; G is a Boolean expression over arithmetical expressions over V. The
probability is given by a polynomial g ∈ Q[V ]. Most of the cpGCL instructions
are self–explanatory; we elaborate only on the following: For cpGCL-programs P
and Q, {P } [g] {Q} is a probabilistic choice where P is executed with probability
g and Q with probability 1−g; analogously, {P} □ {Q} is a nondeterministic
choice between P and Q; abort is syntactic sugar for the diverging program
while (true) {skip}. The statement observe (G) for the Boolean expression G
blocks all program executions violating G and induces a rescaling of probability
of the remaining execution traces so that they sum up to one. For a cpGCLprogram P , the set of program states is given by S = {σ | σ : V → Q}, i.e., the
set of all variable valuations. We assume all variables to be assigned zero prior
to execution or at the start of the program. This initial variable valuation σI ∈ S
with ∀x ∈ V. σI (x) = 0 is called the initial state of the program.
Example 1. Consider the following cpGCL-program with variables x and c:
while (c = 0) {
    {x := x + 1} [0.5] {c := 1}
};
observe "x is odd"
While c is 0, the loop body is iterated: With probability 1/2 either x is incremented by one or c is set to one. After leaving the loop, the event that the
valuation of x is odd is observed, which means that all program executions where
x is even are blocked. Properties of interest for this program would, e.g., concern
the termination probability, or the expected value of x after termination.
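These quantities can be approximated by unrolling the loop a bounded number of times, which already previews the bounded model-checking idea of Section 3: the run that exits after the k-th coin flip terminates with x = k and carries probability (1/2)^(k+1), and conditioning on "x is odd" rescales by the probability of that event. The Python sketch below is our own computation for this example (the program is not nondeterministic, so no scheduler is involved):

def unroll(k_max):
    # probability mass and value sum of the runs terminated within k_max iterations
    p_odd = 0.0          # probability mass satisfying the observation
    ex_sum = 0.0         # sum of x * probability over observation-satisfying runs
    for k in range(k_max + 1):
        p = 0.5 ** (k + 1)        # run exiting the loop with x = k
        if k % 2 == 1:
            p_odd += p
            ex_sum += k * p
    return p_odd, (ex_sum / p_odd if p_odd > 0 else 0.0)

for k in (5, 20, 60):
    p_odd, cond_ex = unroll(k)
    print(k, round(p_odd, 6), round(cond_ex, 6))
# converges to Pr("x is odd") = 1/3 and E[x | x odd] = 5/3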
2.4
Operational Semantics for Probabilistic Programs
We now introduce an operational semantics for cpGCL-programs which is given
by an MDP as in Definition 1. The structure of such an operational MDP is
schematically depicted below.
[Schematic overview of an operational MDP: from the initial state hP, σI i, runs lead through possibly many intermediate states to ↓–states, to the h i–state, or diverge; all terminated runs end in hsink i.]
Squiggly arrows indicate reaching certain states via possibly multiple paths and
states; the clouds indicate that there might be several states of the particular
kind. hP, σI i marks the initial state of the program P . In general the states of
the operational MDP are of the form hP 0 , σ 0 i where P 0 is the program that is
left to be executed and σ 0 is the current variable valuation.
All runs of the program (paths through the MDP) are either terminating
and eventually end up in the hsink i state, or are diverging (thus they never
reach hsink i). Diverging runs occur due to non–terminating computations. A
terminating run has either terminated successfully, i.e., it passes a ↓–state, or it
has terminated due to a violation of an observation, i.e., it passes the h i–state.
Sets of runs that eventually reach h i, or hsink i, or diverge are pairwise disjoint.
The ↓–labelled states are the only ones with positive reward, which is due to
the fact that we want to capture probabilities of events (respectively expected
values of random variables) occurring at successful termination of the program.
The random variables of interest are E = {f | f : S → R≥0 }. Such random
variables are referred to as post–expectations [16]. Formally, we have:
Definition 4 (Operational Semantics of Programs). The operational semantics of a cpGCL program P with respect to a post–expectation f ∈ E is the
MDP Mf JP K = (S, hP, σI i, Act, P) together with a reward function rew, where
– S = { hQ, σi, h↓, σi | Q is a cpGCL program, σ ∈ S } ∪ { h i, hsink i } is the countable set of states,
– hP, σI i ∈ S is the initial state,
– Act = {left, right, none} is the set of actions, and
– P is the smallest relation defined by the SOS rules given in Figure 1.
The reward function is rew(s) = f (σ) if s = h↓, σi, and rew(s) = 0, otherwise.
A state of the form h↓, σi indicates successful termination, i.e., no commands are
left to be executed. These terminal states and the h i–state go to the hsink i state.
skip without context terminates successfully. abort self–loops, i.e., diverges.
x := E alters the variable valuation according to the assignment then terminates
successfully. For the concatenation, h↓; Q, σi indicates successful termination
of the first program, so the execution continues with hQ, σi. If for P ; Q the
execution of P leads to h i, P ; Q does so, too. Otherwise, for hP, σi−→µ, µ is
lifted such that Q is concatenated to the support of µ. For more details on the
operational semantics we refer to [4].
If for the conditional choice σ |= G holds, P is executed, otherwise Q. The
case for while is similar. For the probabilistic choice, a distribution ν is created
according to probability p. For {P} □ {Q}, we call P the left choice and Q
the right choice for actions left, right ∈ Act. For the observe statement, if
σ |= G then observe acts like skip. Otherwise, the execution leads directly to
h i indicating a violation of the observe statement.
Example 2. Reconsider Example 1, where we set for readability P1 = {x :=
x + 1} [0.5] {c := 1}, P2 = observe(“x is odd”), P3 = {x := x + 1}, and
(terminal)
h↓, σi −→ hsink i
(undesired)
(skip)
(abort)
hskip, σi −→ h↓, σi
(assign)
h i −→ hsink i
habort, σi −→ habort, σi
hx := E, σi −→ h↓, σ[x ← JEKσ ]i
σ 6|= G
hobserve G, σi −→ h i
σ |= G
(observe1)
hobserve G, σi −→ h↓, σi
(observe2)
(concatenate1)
(concatenate2)
(concatenate3)
(if1)
h↓; Q, σi −→ hQ, σi
hP, σi −→ µ
0
0
0
0
0
, where ∀P . ν(hP ; Q, σ i) := µ(hP , σ i)
hP ; Q, σi −→ ν
σ |= G
hite (G) {P } {Q}, σi −→ hP, σi
(while1)
(prob)
(if2)
σ |= G
hwhile (G) {P }, σi −→ hP ; while (G) {P }, σi
h{P } [p] {Q}, σi −→ ν
(nondet1)
hP, σi −→ h i
hP ; Q, σi −→ h i
σ 6|= G
hite (G) {P } {Q}, σi −→ hQ, σi
(while2)
σ 6|= G
hwhile (G) {P }, σi −→ h↓, σi
, where ν(hP, σi) := p, ν(hQ, σi) := 1 − p
(nondet2)
left
h{P } {Q}, σi −−−→ hP, σi
right
h{P } {Q}, σi −−−−→ hQ, σi
Fig. 1. SOS rules for constructing the operational MDP of a cpGCL program. We
use s −→ t to indicate P(s, none, t) = 1, s −→ µ for µ ∈ Distr (S) to indicate
left
right
∀t ∈ S : P(s, none, t) = µ(t), s −−−→ t to indicate P(s, left, t) = 1, and s −−−−→ t to
indicate P(s, right, t) = 1.
P4 = {c := 1}. A part of the operational MDP Mf JP K for an arbitrary initial variable valuation σI and post–expectation x is depicted in Figure 2.4 Note
that this MDP is an MC, as P contains no nondeterministic choices. The MDP
has been unrolled until the second loop iteration, i.e., at state hP, σI [x/2]i, the
unrolling could be continued. The only terminating state is h↓, σI [x/1, c/1]i. As
our post-expectation is the value of variable x, we assign this value to terminating
states, i.e., reward 1 at state h↓, σI [x/1, c/1]i, where x has been assigned 1. At
state hP, σI [c/1]i, the loop condition is violated as is the subsequent observation
because of x being assigned an even number.
3
Bounded Model Checking for Probabilistic Programs
In this section we describe our approach to model checking probabilistic programs. The key idea is that satisfaction or violation of certain properties for a
program can be shown by means of a finite unrolling of the program. Therefore,
we introduce the notion of a partial operational semantics of a program, which
we exploit to apply standard model checking to prove or disprove properties.
First, we state the correspondence between the satisfaction of a property for a
cpGCL-program P and for its operational semantics, the MDP Mf JP K. Intu-
⁴ We have tacitly overloaded the variable name x to an expectation here for readability. More formally, by the “expectation x” we actually mean the expectation λσ. σ(x).
[Figure 2 shows the partially unrolled operational MDP: starting from hP, σI i, each loop iteration branches with probability 1/2 between hP3 ; P, ·i (incrementing x) and hP4 ; P, ·i (setting c to 1); the branch ending in h↓, σI [x/1, c/1]i satisfies the observation and moves to hsink i with reward 1, the branch through hP2 , σI [c/1]i violates it and reaches h i, and the unrolling is cut off at hP, σI [x/2]i.]
Fig. 2. Partially unrolled operational semantics for program P
itively, a program satisfies a property if and only if the property is satisfied on
the operational semantics of the program.
Definition 5 (Satisfaction of Properties). Given a cpGCL program P and a (conditional) reachability or expected reward property ϕ, we define
P |= ϕ   iff   Mf JP K |= ϕ.
This correspondence on the level of a denotational semantics for cpGCL has been
discussed extensively in [17]. Note that there, only schedulers minimizing expected rewards were considered. Here, we also need maximal schedulers as we
are considering both upper and lower bounds on expected rewards and probabilities. Note that satisfaction of properties is solely based on the operational
semantics and induced maximal or minimal probabilities or expected rewards.
We now introduce the notion of a partial operational MDP for a cpGCL–
program P , which is a finite approximation of the full operational MDP of P .
Intuitively, this amounts to the successive application of SOS rules given in
Figure 1, while not all possible rules have been applied yet.
Definition 6 (Partial Operational Semantics). A partial operational semantics for a cpGCL–program P is a sub-MDP Mf JP K′ = (S′, hP, σI i, Act, P′)
of the operational semantics for P (denoted Mf JP K′ ⊆ Mf JP K) with S′ ⊆ S. Let
S′_exp = { hQ, σi ∈ S′ | Q ≠ ↓, ∃ s ∈ S \ S′ ∃ α ∈ Act : P(hQ, σi, α, s) > 0 }
be the set of expandable states. Then the transition probability function P′ is given for s, s′ ∈ S′ and α ∈ Act by
P′(s, α, s′) = 1              if s = s′ and s ∈ S′_exp,
P′(s, α, s′) = P(s, α, s′)    otherwise.
Intuitively, the set of non-terminating expandable states consists of the states to which SOS rules are still applicable. With this definition, the only transitions leaving expandable states are self-loops, which yields a well-defined probability measure on the partial operational semantics. Our method exploits the fact that both (conditional) reachability probabilities and expected rewards for the properties of interest increase monotonically under further unrollings of a program and thus of the respective partial operational semantics. This is discussed in what follows.
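To make Definition 6 concrete, the following sketch (our own illustration in Python, not the implementation used in our prototype) builds a partial operational MDP by expanding at most a fixed number of states from a user-supplied successor function; states that were reached but not expanded receive a probability-1 self-loop, exactly as required by Definition 6.

from collections import deque

def build_partial_mdp(initial, successors, max_states):
    # successors(s) returns a dict {action: [(prob, succ), ...]} according to
    # the SOS rules of Figure 1; terminal states return an empty dict.
    transitions = {}
    explored, frontier = set(), deque([initial])
    while frontier and len(explored) < max_states:
        s = frontier.popleft()
        if s in explored:
            continue
        explored.add(s)
        transitions[s] = successors(s)
        for dist in transitions[s].values():
            for _, succ in dist:
                if succ not in explored:
                    frontier.append(succ)
    # reached but unexpanded states become expandable states with a self-loop
    for s in set(frontier) - explored:
        transitions[s] = {"expand": [(1.0, s)]}
    return transitions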
3.1 Growing Expectations
As mentioned before, we are interested in the probability of termination or in the expected values of expectations (i.e., random variables ranging over program states) after successful termination of the program. This is measured on the operational MDP by the set of paths reaching ⟨sink⟩ from the initial state, conditioned on not reaching ⟨↯⟩ [17]. In detail, we have to compute the conditional expected value of post-expectation f after successful termination of program P, given that no observation was violated along the computation. For nondeterministic programs, we have to compute this value either under a minimizing or a maximizing scheduler (depending on the given property). We focus our presentation on expected rewards and minimizing schedulers, but all concepts are analogous for the other cases. For M^f⟦P⟧ we have

  inf_{S ∈ Sched^{M^f⟦P⟧}} ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩ | ¬♦⟨↯⟩).

Recall that M^f⟦P⟧_S is the MC induced under a scheduler S ∈ Sched^{M^f⟦P⟧} as in Definition 3. Recall also that, for ¬♦⟨↯⟩, all paths not eventually reaching ⟨↯⟩ either diverge (collecting reward 0) or pass by a ↓-state and reach ⟨sink⟩. More importantly, all paths that do eventually reach ⟨↯⟩ also collect reward 0. Thus:
  inf_{S ∈ Sched^{M^f⟦P⟧}} ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩ | ¬♦⟨↯⟩)
    = inf_{S ∈ Sched^{M^f⟦P⟧}} ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩ ∩ ¬♦⟨↯⟩) / Pr^{M^f⟦P⟧_S}(¬♦⟨↯⟩)
    = inf_{S ∈ Sched^{M^f⟦P⟧}} ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩) / Pr^{M^f⟦P⟧_S}(¬♦⟨↯⟩).

Finally, observe that the probability of not reaching ⟨↯⟩ is one minus the probability of reaching ⟨↯⟩, which gives us:

  inf_{S ∈ Sched^{M^f⟦P⟧}} ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩) / (1 − Pr^{M^f⟦P⟧_S}(♦⟨↯⟩)).   (†)
Regarding the minimization of this quotient we adopt the convention "0/0 < 0", since we consider 0/0, being undefined, to be less favorable than 0. For programs without nondeterminism this view agrees with a weakest-precondition-style semantics for probabilistic programs with conditioning [17].
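The quotient in (†) can then be bounded from below by combining the two quantities obtained from a partial MDP. The following small helper (our own illustration; the convention above is hard-coded) divides a lower bound on the expected reward of reaching ⟨sink⟩ by the resulting upper bound on the probability of not reaching ⟨↯⟩:

def conditional_lower_bound(er_sink_lb, pr_violate_lb):
    # er_sink_lb:    lower bound on ER(reach <sink>) from the partial MDP
    # pr_violate_lb: lower bound on Pr(reach the violation state)
    denominator = 1.0 - pr_violate_lb      # upper bound on Pr(no violation)
    if denominator == 0.0:
        return float("-inf")               # "0/0 < 0": undefined is least favorable
    return er_sink_lb / denominator

print(conditional_lower_bound(0.25, 0.5))  # 0.5, mirroring the numbers in Example 3 below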
It was shown in [18] that all strict lower bounds for ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩) are in principle computably enumerable in a monotonically non-decreasing fashion. One way to do so is to let the program execute for an increasing number k of steps and to collect the expected rewards of all execution traces that have led to termination within k computation steps. This corresponds naturally to constructing a partial operational semantics M^f⟦P⟧′ ⊆ M^f⟦P⟧ as in Definition 6 and computing minimal expected rewards on M^f⟦P⟧′.
Analogously, it is of course also possible to monotonically enumerate all strict lower bounds of Pr^{M^f⟦P⟧_S}(♦⟨↯⟩), since, again, we only need to collect the probability mass of all traces that have led to ⟨↯⟩ within k computation steps. Since probabilities are quantities bounded between 0 and 1, a lower bound for Pr^{M^f⟦P⟧_S}(♦⟨↯⟩) yields an upper bound for 1 − Pr^{M^f⟦P⟧_S}(♦⟨↯⟩).
Put together, a lower bound for ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩) and a lower bound for Pr^{M^f⟦P⟧_S}(♦⟨↯⟩) yield a lower bound for (†). We are thus able to enumerate all lower bounds of ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩ | ¬♦⟨↯⟩) by inspection of finite sub-MDPs of M^f⟦P⟧. Formally, we have:
Theorem 1. For a cpGCL program P, post-expectation f, and a partial operational MDP M^f⟦P⟧′ ⊆ M^f⟦P⟧ it holds that

  inf_{S ∈ Sched^{M^f⟦P⟧′}} ER_f^{M^f⟦P⟧′_S}(♦⟨sink⟩ | ¬♦⟨↯⟩)  ≤  inf_{S ∈ Sched^{M^f⟦P⟧}} ER_f^{M^f⟦P⟧_S}(♦⟨sink⟩ | ¬♦⟨↯⟩).

3.2 Model Checking
Using Theorem 1, we transfer satisfaction or violation of certain properties from a partial operational semantics M^f⟦P⟧′ ⊆ M^f⟦P⟧ to the full semantics of the program. For an upper bounded conditional expected reward property ϕ = E≤κ(♦T | ¬♦U) where T, U ∈ S we exploit that

  M^f⟦P⟧′ ⊭ ϕ  ⟹  P ⊭ ϕ.   (1)
That means, if we can prove the violation of ϕ on the MDP induced by a finite unrolling of the program, the violation also holds for all further unrollings. This is because all rewards and probabilities are non-negative, and thus further unrolling can only increase the accumulated reward and/or probability mass.
Dually, for a lower bounded conditional expected reward property ψ = E≥λ(♦T | ¬♦U) we use the following property:

  M^f⟦P⟧′ ⊨ ψ  ⟹  P ⊨ ψ.   (2)
The preconditions of Implication (1) and Implication (2) can be checked by
probabilistic model checkers like PRISM [5]; this is analogous for conditional
reachability properties. Let us illustrate this by means of an example.
Example 3. As mentioned in Example 1, we are interested in the probability of termination. As outlined in Section 2.4, this probability can be measured by

  Pr(♦⟨sink⟩ | ¬♦⟨↯⟩) = Pr(♦⟨sink⟩ ∧ ¬♦⟨↯⟩) / Pr(¬♦⟨↯⟩).

We want this probability to be at least 1/2, i.e., ϕ = P≥0.5(♦⟨sink⟩ | ¬♦⟨↯⟩). Since this probability never decreases under further unrollings of our partially unrolled MDP, the property can already be verified on the partial MDP M^f⟦P⟧′ by

  Pr^{M^f⟦P⟧′}(♦⟨sink⟩ | ¬♦⟨↯⟩) = (1/4) / (1/2) = 1/2,

where M^f⟦P⟧′ is the sub-MDP from Figure 2. This finite sub-MDP M^f⟦P⟧′ is therefore a witness of M^f⟦P⟧ ⊨ ϕ.
Algorithmically, this technique relies on suitable heuristics regarding the size
of the considered partial MDPs. Basically, in each step k states are expanded
and the corresponding MDP is model checked, until either the property can be
shown to be satisfied or violated, or no more states are expandable. In addition,
heuristics based on shortest path searching algorithms can be employed to favor
expandable states that so far induce high probabilities.
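In outline, the procedure is the following loop (a schematic Python sketch of our own; expand, check, and is_fully_expanded are placeholders for the prototype's Storm-based functionality and not actual APIs):

def bounded_model_check(expand, check, is_fully_expanded, k=10**6):
    # expand(n): apply SOS rules to at most n further states of the partial MDP
    # check():   model check the current partial MDP and return
    #            "satisfied", "violated", or "unknown"
    # is_fully_expanded(): True once no expandable states remain
    while True:
        expand(k)
        verdict = check()
        if verdict != "unknown":
            return verdict          # transfers to the full program by (1) and (2)
        if is_fully_expanded():
            return verdict          # the partial model now coincides with the full one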
Note that this method is a semi-algorithm when the model checking problems stated in Implications (1) and (2) use strict bounds, i.e., < κ and > λ: it is then guaranteed that the given bounds are eventually exceeded.
Consider now the case where we want to show satisfaction of ϕ = E≤κ(♦T | ¬♦U), i.e., to use M^f⟦P⟧′ ⊨ ϕ ⇒ P ⊨ ϕ. As the conditional expected reward monotonically increases as long as the partial MDP is expandable, this implication is only valid if there are no more expandable states, i.e., the model is fully expanded. The same holds for the violation of lower bounded properties. Note that many practical examples actually induce finite operational MDPs, which makes it possible to build the full model and perform model checking.
It remains to discuss how this approach can be utilized for parameter synthesis as explained in Section 2.2. For a partial operational pMDP M^f⟦P⟧′ and a property ϕ = E≤κ(♦T | ¬♦U) we use tools like PROPhESY [13] to determine for which parameter valuations ϕ is violated. For each valuation u with M^f⟦P⟧′[u] ⊭ ϕ it holds that M^f⟦P⟧[u] ⊭ ϕ; each parameter valuation violating a property on a partial pMDP also violates it on the fully expanded MDP.
4 Evaluation
Experimental Setup. We implemented and evaluated the bounded model checking method in C++. For the model checking functionality, we use the stochastic
model checker Storm, developed at RWTH Aachen University, and PROPhESY [19] for parameter synthesis.
We consider five different, well-known benchmark programs, three of which
are based on models from the PRISM benchmark suite [5] and others taken from
other literature (see Appendix A for some examples). We give the running times
of our prototype on several instances of these models. Since there is — to the best
of our knowledge — no other tool that can analyze cpGCL programs in a purely
automated fashion, we cannot meaningfully compare these figures to other tools.
As our technique is restricted to establishing lower bounds on reachability probabilities and on the expectations of program variables, respectively, we need to fix a threshold λ for each experiment. For all our experiments, we chose λ to be 90% of the actual value for the corresponding query, and we expand 10^6 states of the partial operational semantics of a program between consecutive model checking runs.
We ran the experiments on an HP BL685C G7 machine with 48 cores clocked at 2.0 GHz and 192 GB of RAM; each experiment runs in a single thread with a time-out of one hour. We ran the following benchmarks⁵:
Crowds Protocol [21]. This protocol aims at anonymizing the sender of R messages by routing them probabilistically through a crowd of N hosts. Some of
these hosts, however, are corrupt and try to determine the real sender by observing the host that most recently forwarded a message. For this model, we are
interested in (a) the probability that the real sender is observed more than R/10
times, and (b) the expected number of times that the real sender is observed.
We also consider a variant (crowds-obs) of the model in which an observe
statement ensures that after all messages have been delivered, hosts different
from the real sender have been observed at least R/4 times. Unlike the model
from the PRISM website, our model abstracts from the concrete identity of hosts
different from the sender, since they are irrelevant for properties of interest.
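For intuition, the abstracted crowds model (cf. the listing in Appendix A.4) can also be simulated directly. The following Monte Carlo sketch (our own, not part of the verified tool chain) estimates the probability that the real sender is observed more than R/10 = 6 times for N = 100 and R = 60, which can be compared with the corresponding entry in Table 1:

import random

def crowds_run(N=100, R=60, p_corrupt=0.091, p_forward=0.8):
    observe_sender = 0
    for _ in range(R):
        last_sender, delivered = 0, 0
        while not delivered:
            if random.random() < p_corrupt:       # a corrupt member observes
                if last_sender == 0:
                    observe_sender += 1
                last_sender, delivered = 0, 1
            elif random.random() < p_forward:     # forward the message
                last_sender = 0 if random.random() < 1.0 / N else 1
            else:                                 # deliver without forwarding
                last_sender, delivered = 0, 1
    return observe_sender

samples = 100000
print(sum(crowds_run() > 6 for _ in range(samples)) / samples)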
Herman Protocol. In this protocol [22], N hosts form a token-passing ring and
try to steer the system into a stable state. We consider the probability that the
system eventually reaches such a state in two variants of this model where the
initial state is either chosen probabilistically or nondeterministically.
Robot. The robot case-study is loosely based on a similar model from the PRISM
benchmark suite. It models a robot that navigates through a bounded area of an
unbounded grid. Doing so, the robot can be blocked by a janitor that is moving
probabilistically across the whole grid. The property of interest is the probability
that the robot will eventually reach its final destination.
5
All input programs and log files of the experiments can be downloaded at
moves.rwth-aachen.de/wp-content/uploads/conference material/pgcl atva16.tar.gz
Table 1. Benchmark results for probability queries.

program        | instance  | #states | #trans. | full? | λ      | result | actual | time
crowds         | (100,60)  | 877370  | 1104290 | yes   | 0.29   | 0.33   | 0.33   | 109
crowds         | (100,80)  | 10^6    | 1258755 | no    | 0.30   | 0.33   | 0.33   | 131
crowds         | (100,100) | 2·10^6  | 2518395 | no    | 0.30   | 0.33   | 0.33   | 354
crowds-obs     | (100,60)  | 878405  | 1105325 | yes   | 0.23   | 0.26   | 0.26   | 126
crowds-obs     | (100,80)  | 10^6    | 1258718 | no    | 0.23   | 0.25   | 0.24   | 170
crowds-obs     | (100,100) | 3·10^6  | 3778192 | no    | 0.23   | 0.26   | 0.26   | 890
herman         | (17)      | 10^6    | 1136612 | no    | 0.9    | 0.99   | 1      | 91
herman         | (21)      | 10^6    | 1222530 | no    | 0.9    | 0.99   | 1      | 142
herman-nd      | (13)      | 1005945 | 1112188 | yes   | 0.9    | 1      | 1      | 551
herman-nd      | (17)      | −       | −       | no    | 0.9    | 0      | 1      | TO
robot          | −         | 181595  | 234320  | yes   | 0.9    | 1      | 1      | 24
predator       | −         | 10^6    | 1234854 | no    | 0.9    | 0.98   | 1      | 116
coupon         | (5)       | 10^6    | 1589528 | no    | 0.75   | 0.83   | 0.83   | 11
coupon         | (7)       | 2·10^6  | 3635966 | no    | 0.67   | 0.72   | 0.74   | 440
coupon         | (10)      | −       | −       | no    | 0.57   | 0      | 0.63   | TO
coupon-obs     | (5)       | 10^6    | 1750932 | no    | 0.85   | 0.99   | 0.99   | 11
coupon-obs     | (7)       | 10^6    | 1901206 | no    | 0.88   | 0.91   | 0.98   | 15
coupon-obs     | (10)      | −       | −       | no    | 0.85   | 0      | 0.95   | TO
coupon-classic | (5)       | 10^6    | 1356463 | no    | 3.4e-3 | 3.8e-3 | 3.8e-3 | 9
coupon-classic | (7)       | 10^6    | 1428286 | no    | 5.5e-4 | 6.1e-4 | 6.1e-4 | 9
coupon-classic | (10)      | −       | −       | no    | 3.3e-5 | 0      | 3.6e-5 | TO
Predator. This model is due to Lotka and Volterra [23, p. 127]. A predator and a prey population evolve with mutual dependency on each other's numbers. Following basic biological principles, both populations undergo periodic fluctuations. We are interested in (a) the probability of one of the species going extinct, and (b) the expected size of the prey population after one species has gone extinct.
Coupon Collector. This is a famous example6 from textbooks on randomized
algorithms [24]. A collector’s goal is to collect all of N distinct coupons. In every
round, the collector draws three new coupons chosen uniformly at random out
of the N coupons. We consider (a) the probability that the collector possesses
all coupons after N rounds, and (b) the expected number of rounds the collector
needs until he has all the coupons as properties of interest. Furthermore, we
consider two slight variants: in the first one (coupon-obs), an observe statement
ensures that the three drawn coupons are all different and in the second one
(coupon-classic), the collector may only draw one coupon in each round.
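As a quick plausibility check, the coupon collector variants can also be simulated directly (again a small script of our own, independent of the model checker). For the three-draw variant with N = 5, the estimated expected number of rounds should come out close to the value 4.13 reported in Table 2 below:

import random

def coupon_rounds(n=5, draws_per_round=3):
    collected, rounds = set(), 0
    while len(collected) < n:
        rounds += 1
        collected.update(random.randrange(n) for _ in range(draws_per_round))
    return rounds

samples = 200000
print(sum(coupon_rounds() for _ in range(samples)) / samples)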
Table 1 shows the results for the probability queries. For each model instance, we give the number of explored states and transitions and whether or not the model was fully expanded. Note that the state number is a multiple of 10^6 in case the model was not fully explored, because our prototype always expands 10^6 states before the next model checking call. The next three columns show the probability bound (λ), the result that the tool could achieve, as well as the actual answer to the query on the full (potentially infinite) model. Due to space constraints, we rounded these figures to two significant digits. We report the time in seconds that the prototype took to establish the result (TO = 3600 sec.).
⁶ https://en.wikipedia.org/wiki/Coupon collector%27s problem
Table 2. Benchmark results for expectation queries.

program        | instance  | #states | #trans. | full? | result | actual | time
crowds         | (100,60)  | 877370  | 1104290 | yes   | 5.61   | 5.61   | 125
crowds         | (100,80)  | 10^6    | 1258605 | no    | 7.27   | 7.47   | 176
crowds         | (100,100) | 2·10^6  | 2518270 | no    | 9.22   | 9.34   | 383
crowds-obs     | (100,60)  | 878405  | 1105325 | yes   | 5.18   | 5.18   | 134
crowds-obs     | (100,80)  | 10^6    | 1258569 | no    | 6.42   | 6.98   | 206
crowds-obs     | (100,100) | 2·10^6  | 2518220 | no    | 8.39   | 8.79   | 462
predator       | −         | 3·10^6  | 3716578 | no    | 99.14  | ?      | 369
coupon         | (5)       | 10^6    | 1589528 | no    | 4.13   | 4.13   | 15
coupon         | (7)       | 3·10^6  | 5379492 | no    | 5.86   | 6.38   | 46
coupon         | (10)      | −       | −       | no    | 0      | 10.1   | TO
coupon-obs     | (5)       | 10^6    | 1750932 | no    | 2.57   | 2.57   | 13
coupon-obs     | (7)       | 2·10^6  | 3752912 | no    | 4.22   | 4.23   | 30
coupon-obs     | (10)      | −       | −       | no    | 0      | 6.96   | TO
coupon-classic | (5)       | 10^6    | 1356463 | no    | 11.41  | 11.42  | 15
coupon-classic | (7)       | 10^6    | 1393360 | no    | 18.15  | 18.15  | 21
coupon-classic | (10)      | −       | −       | no    | 0      | 29.29  | TO
We observe that for most examples it suffices to perform a few unfolding steps to achieve more than 90% of the actual probability. For example, for the largest crowds-obs program, 3·10^6 states are expanded, meaning that three unfolding
steps were performed. Answering queries on programs including an observe statement can be costlier (crowds vs. crowds-obs), but does not need to be (coupon vs.
coupon-obs). In the latter case, the observe statement prunes some paths early
that were not promising to begin with, whereas in the former case, the observe
statement only happens at the very end, which intuitively makes it harder for
the search to find target states. We are able to obtain non-trivial lower bounds
for all but two case studies. For herman-nd, not all of the (nondeterministically
chosen) initial states were explored, because our exploration order currently does
not favour states that influence the obtained result the most. Similarly, for the
largest coupon collector examples, the time limit did not allow for finding one
target state. Again, an exploration heuristic that is more directed towards these
could potentially improve performance drastically.
Table 2 shows the results for computing the expected value of program variables at terminating states. For technical reasons, our prototype currently cannot perform more than one unfolding step for this type of query.
[Fig. 3. The obtained values approach the actual value from below. (a) coupon-obs (5): lower bound on the expected number of draws and the actual value, plotted against the number of explored states. (b) predator: lower bound on the expected number of goats, plotted against the number of explored states.]
[Fig. 4. Analyzing parametric models yields violating parameter instances: (a) after 9 iterations, (b) after 13 iterations.]
To achieve meaningful results, we therefore vary the number of explored states until 90% of the actual result is achieved. Note that for the predator program the actual value for the query is not known to us, so we report the value at which the result grows only very slowly. The results are similar to the probability case in that most often a low number of states suffices to show meaningful lower bounds. Unfortunately, as before, we can only prove a trivial lower bound for the largest coupon collector examples.
Figure 3 illustrates how the obtained lower bounds approach the actual expected value with increasing number of explored states for two case studies.
For example, in the left picture one can observe that exploring 60000 states is
enough to obtain a very precise lower bound on the expected number of rounds
the collector needs to gather all five coupons, as indicated by the dashed line.
Finally, we analyze a parametric version of the crowds model that uses the
parameters f and b to leave the probabilities (i) for a crowd member to be corrupt
(b) and (ii) of forwarding (instead of delivering) a message (f ) unspecified. In
each iteration of our algorithm, we obtain a rational function describing a lower
bound on the actual probability of observing the real sender of the message
more than once for each parameter valuation. Figure 4 shows the regions of
the parameter space in which the protocol was determined to be unsafe (after
iterations 9 and 13, respectively) in the sense that the probability to identify
the real sender exceeds 1/2. Since the results obtained over different iterations are
monotonically increasing, we can conclude that all parameter valuations that
were proved to be unsafe in some iteration are in fact unsafe in the full model.
This in turn means that the blue area in Figure 4 grows in each iteration.
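Conceptually, the region plots can be produced as follows (a schematic sketch of our own; the rational function lower_bound(b, f) stands in for the function returned for the current unrolling and is not reproduced here): every grid point whose lower bound already exceeds 1/2 is marked unsafe, and by the monotonicity argument above it remains unsafe in all later iterations.

def unsafe_region(lower_bound, steps=100, threshold=0.5):
    unsafe = set()
    for i in range(1, steps):
        for j in range(1, steps):
            b, f = i / steps, j / steps
            if lower_bound(b, f) > threshold:
                unsafe.add((b, f))
    return unsafe

# called with a made-up placeholder bound, only to illustrate the call shape
print(len(unsafe_region(lambda b, f: b * f)))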
5 Conclusion and Future Work
We presented a direct verification method for probabilistic programs employing
probabilistic model checking. We conjecture that the basic idea would smoothly
translate to reasoning about recursive probabilistic programs [25]. In the future
we are interested in how loop invariants [26] can be utilized to devise complete model checking procedures preventing possibly infinite loop unrollings.
This is especially interesting for reasoning about covariances [27], where a mixture of invariant–reasoning and successively constructing the operational MC
would yield sound over- and underapproximations of covariances. To extend the
gain for the user, we will combine this approach with methods for counterexamples [28], which can be given in terms of the programming language [29,19].
Moreover, it seems promising to investigate how approaches to automatically repair a probabilistic model towards satisfaction of properties [30,31] can be transferred to programs.
References
1. Gordon, A.D., Henzinger, T.A., Nori, A.V., Rajamani, S.K.: Probabilistic programming. In: FOSE, ACM Press (2014) 167–181
2. Sankaranarayanan, S., Chakarov, A., Gulwani, S.: Static analysis for probabilistic
programs: inferring whole program properties from finitely many paths. In: PLDI,
ACM (2013) 447–458
3. Claret, G., Rajamani, S.K., Nori, A.V., Gordon, A.D., Borgström, J.: Bayesian
inference using data flow analysis. In: ESEC/SIGSOFT FSE, ACM Press (2013)
92–102
4. Gretz, F., Katoen, J.P., McIver, A.: Operational versus weakest pre-expectation
semantics for the probabilistic guarded command language. Perform. Eval. 73
(2014) 110–132
5. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: Verification of probabilistic real-time systems. In: CAV. Volume 6806 of LNCS, Springer (2011) 585–591
6. Hahn, E.M., Li, Y., Schewe, S., Turrini, A., Zhang, L.: IscasMC: A web-based
probabilistic model checker. In: FM. Volume 8442 of LNCS, Springer (2014) 312–
317
7. Katoen, J.P., Zapreev, I.S., Hahn, E.M., Hermanns, H., Jansen, D.N.: The ins and
outs of the probabilistic model checker MRMC. Performance Evaluation 68(2)
(2011) 90–104
8. Kattenbelt, M.: Automated Quantitative Software Verification. PhD thesis, Oxford
University (2011)
9. Sharir, M., Pnueli, A., Hart, S.: Verification of probabilistic programs. SIAM
Journal on Computing 13(2) (1984) 292–314
10. Vardi, M.Y.: Automatic verification of probabilistic concurrent finite-state programs. In: FOCS, IEEE Computer Society (1985) 327–338
11. Baier, C., Katoen, J.P.: Principles of Model Checking. The MIT Press (2008)
12. Baier, C., Klein, J., Klüppelholz, S., Märcker, S.: Computing conditional probabilities in Markovian models efficiently. In: TACAS. Volume 8413 of LNCS, Springer
(2014) 515–530
13. Dehnert, C., Junges, S., Jansen, N., Corzilius, F., Volk, M., Bruintjes, H., Katoen,
J., Ábrahám, E.: Prophesy: A probabilistic parameter synthesis tool. In: CAV.
Volume 9206 of LNCS, Springer (2015) 214–231
14. Quatmann, T., Dehnert, C., Jansen, N., Junges, S., Katoen, J.: Parameter synthesis
for Markov models: Faster than ever. CoRR abs/1602.05113 (2016)
15. Dijkstra, E.W.: A Discipline of Programming. Prentice Hall (1976)
16. McIver, A., Morgan, C.: Abstraction, Refinement And Proof For Probabilistic
Systems. Springer (2004)
17. Jansen, N., Kaminski, B.L., Katoen, J., Olmedo, F., Gretz, F., McIver, A.: Conditioning in probabilistic programming. Electr. Notes Theor. Comput. Sci. 319
(2015) 199–216
18. Kaminski, B.L., Katoen, J.P.: On the hardness of almost-sure termination. In:
MFCS. Volume 9234 of LNCS, Springer (2015)
19. Dehnert, C., Jansen, N., Wimmer, R., Ábrahám, E., Katoen, J.: Fast debugging
of PRISM models. In: ATVA. Volume 8837 of LNCS, Springer (2014) 146–162
20. Jansen, N., Dehnert, C., Kaminski, B.L., Katoen, J., Westhofen, L.: Bounded
model checking for probabilistic programs. CoRR abs/1605.04477 (2016)
21. Reiter, M.K., Rubin, A.D.: Crowds: Anonymity for web transactions. ACM Trans.
on Information and System Security 1(1) (1998) 66–92
22. Herman, T.: Probabilistic self-stabilization. Inf. Process. Lett. 35(2) (1990) 63–67
23. Brauer, F., Castillo-Chavez, C.: Mathematical Models in Population Biology and
Epidemiology. Texts in Applied Mathematics. Springer New York (2001)
24. Erdős, P., Rényi, A.: On a classical problem of probability theory. Publ. Math. Inst. Hung. Acad. Sci., Ser. A 6 (1961) 215–220
25. Olmedo, F., Kaminski, B., Katoen, J.P., Matheja, C.: Reasoning about recursive
probabilistic programs. In: LICS. (2016) [to appear].
26. Gretz, F., Katoen, J.P., McIver, A.: PRINSYS - on a quest for probabilistic loop
invariants. In: QEST. Volume 8054 of LNCS, Springer (2013) 193–208
27. Kaminski, B., Katoen, J.P., Matheja, C.: Inferring covariances for probabilistic
programs. In: QEST. Volume 9826 of LNCS, Springer (2016) [to appear].
28. Ábrahám, E., Becker, B., Dehnert, C., Jansen, N., Katoen, J., Wimmer, R.: Counterexample generation for discrete-time Markov models: An introductory survey.
In: SFM. Volume 8483 of Lecture Notes in Computer Science, Springer (2014)
65–121
29. Wimmer, R., Jansen, N., Abraham, E., Katoen, J.P.: High-level counterexamples
for probabilistic automata. Logical Methods in Computer Science 11(1:15) (2015)
30. Bartocci, E., Grosu, R., Katsaros, P., Ramakrishnan, C.R., Smolka, S.A.: Model
repair for probabilistic systems. In: TACAS. Volume 6605 of Lecture Notes in
Computer Science, Springer (2011) 326–340
31. Pathak, S., Ábrahám, E., Jansen, N., Tacchella, A., Katoen, J.P.: A greedy approach for the efficient repair of stochastic models. In: NFM. Volume 9058 of
LNCS, Springer (2015) 295–309
A Models

A.1 coupon-obs (5)

int coup0 := 0;
int coup1 := 0;
int coup2 := 0;
int coup3 := 0;
int coup4 := 0;

int draw1 := 0;
int draw2 := 0;
int draw3 := 0;

int numberDraws := 0;

while (!(coup0 = 1) | !(coup1 = 1) | !(coup2 = 1) | !(coup3 = 1) | !(coup4 = 1)) {
    draw1 := unif(0,4);
    draw2 := unif(0,4);
    draw3 := unif(0,4);
    numberDraws := numberDraws + 1;

    observe(draw1 != draw2 & draw1 != draw3 & draw2 != draw3);

    if (draw1 = 0 | draw2 = 0 | draw3 = 0) { coup0 := 1; }
    if (draw1 = 1 | draw2 = 1 | draw3 = 1) { coup1 := 1; }
    if (draw1 = 2 | draw2 = 2 | draw3 = 2) { coup2 := 1; }
    if (draw1 = 3 | draw2 = 3 | draw3 = 3) { coup3 := 1; }
    if (draw1 = 4 | draw2 = 4 | draw3 = 4) { coup4 := 1; }
}
A.2 coupon (5)

int coup0 := 0;
int coup1 := 0;
int coup2 := 0;
int coup3 := 0;
int coup4 := 0;

int draw1 := 0;
int draw2 := 0;
int draw3 := 0;

int numberDraws := 0;

while (!(coup0 = 1) | !(coup1 = 1) | !(coup2 = 1) | !(coup3 = 1) | !(coup4 = 1)) {
    draw1 := unif(0,4);
    draw2 := unif(0,4);
    draw3 := unif(0,4);
    numberDraws := numberDraws + 1;

    if (draw1 = 0 | draw2 = 0 | draw3 = 0) { coup0 := 1; }
    if (draw1 = 1 | draw2 = 1 | draw3 = 1) { coup1 := 1; }
    if (draw1 = 2 | draw2 = 2 | draw3 = 2) { coup2 := 1; }
    if (draw1 = 3 | draw2 = 3 | draw3 = 3) { coup3 := 1; }
    if (draw1 = 4 | draw2 = 4 | draw3 = 4) { coup4 := 1; }
}
A.3 crowds-obs (100, 60)

int delivered := 0;
int lastSender := 0;
int remainingRuns := 60;
int observeSender := 0;
int observeOther := 0;

while (remainingRuns > 0) {
    while (delivered = 0) {
        {
            if (lastSender = 0) {
                observeSender := observeSender + 1;
            } else {
                observeOther := observeOther + 1;
            }
            lastSender := 0;
            delivered := 1;
        } [0.091] {
            {
                { lastSender := 0; } [1/100] { lastSender := 1; }
            }
            [0.8]
            {
                lastSender := 0;
                // When not forwarding, the message is delivered here
                delivered := 1;
            }
        }
    }
    // Set up new run.
    delivered := 0;
    remainingRuns := remainingRuns - 1;
}
observe(observeOther > 15);
A.4 crowds (100, 60)

int delivered := 0;
int lastSender := 0;
int remainingRuns := 60;
int observeSender := 0;
int observeOther := 0;

while (remainingRuns > 0) {
    while (delivered = 0) {
        {
            if (lastSender = 0) {
                observeSender := observeSender + 1;
            } else {
                observeOther := observeOther + 1;
            }
            lastSender := 0;
            delivered := 1;
        } [0.091] {
            {
                { lastSender := 0; } [1/100] { lastSender := 1; }
            }
            [0.8]
            {
                lastSender := 0;
                // When not forwarding, the message is delivered here
                delivered := 1;
            }
        }
    }
    // Set up new run.
    delivered := 0;
    remainingRuns := remainingRuns - 1;
}
A.5 crowds (100, 60) parametric

This program is parametric with the parameters f (probability of forwarding the message) and b (probability that a crowd member is bad).

int delivered := 0;
int lastSender := 0;
int remainingRuns := 60;
int observeSender := 0;
int observeOther := 0;

while (remainingRuns > 0) {
    while (delivered = 0) {
        {
            if (lastSender = 0) {
                observeSender := observeSender + 1;
            } else {
                observeOther := observeOther + 1;
            }
            lastSender := 0;
            delivered := 1;
        } [b] {
            {
                { lastSender := 0; } [1/100] { lastSender := 1; }
            }
            [f]
            {
                lastSender := 0;
                // When not forwarding, the message is delivered here
                delivered := 1;
            }
        }
    }
    // Set up new run.
    delivered := 0;
    remainingRuns := remainingRuns - 1;
}
| 6 |
arXiv:1704.00997v1 [math.AC] 4 Apr 2017
SALLY MODULES OF CANONICAL IDEALS IN DIMENSION ONE
AND 2-AGL RINGS
TRAN DO MINH CHAU, SHIRO GOTO, SHINYA KUMASHIRO, AND NAOYUKI MATSUOKA
Abstract. The notion of a 2-AGL ring in dimension one, a natural generalization of an almost Gorenstein local ring, is introduced in terms of the rank of Sally modules of canonical ideals. The basic theory is developed, investigating also the case where the rings considered are numerical semigroup rings over fields. Examples are explored.
Contents
1. Introduction
2. Preliminaries
3. 2-AGL rings and Proof of Theorem 1.4
4. 2-AGL rings obtained by idealization
5. The algebra m : m
6. Numerical semigroup rings
References
1. Introduction
The destination of this research is to find a good notion of Cohen-Macaulay local rings of
positive dimension which naturally generalizes Gorenstein local rings. In dimension one,
the research has started from the works of V. Barucci and R. Fröberg [BF] and the second,
the fourth authors, and T. T. Phuong [GMP]. In [BF] Barucci and Fröberg introduced
the notion of almost Gorenstein ring in the case where the local rings are one-dimensional
and analytically unramified. They explored also numerical semigroup rings over fields
and developed a beautiful theory. In [GMP] the authors extended the notion given by
[BF] to arbitrary one-dimensional Cohen-Macaulay local rings and showed that their new
definition works well to analyze almost Gorenstein rings which are analytically ramified.
In [GTT] the second author, R. Takahashi, and N. Taniguchi gave the notion of almost
Gorenstein local/graded rings of higher dimension. Their research is still in progress, exploring, for example, the problem of when the Rees algebras of ideals/modules are almost Gorenstein rings; see [GMTY1, GMTY2, GMTY3, GMTY4, GRTT, T]. One can consult [El] for a deep investigation of canonical ideals in dimension one.

2010 Mathematics Subject Classification. 13H10, 13H15, 13A30.
Key words and phrases. Cohen-Macaulay ring, Gorenstein ring, almost Gorenstein ring, canonical ideal, parameter ideal, Rees algebra, Sally module.
The first author was partially supported by the International Research Supporting Program of Meiji University. The second author was partially supported by JSPS Grant-in-Aid for Scientific Research (C) 25400051. The fourth author was partially supported by JSPS Grant-in-Aid for Scientific Research 26400054.
The interests of the present research are a little different from theirs and has been
strongly inspired by [GGHV, Section 4] and [V2]. Our aim is to discover a good candidate
for natural generalization of almost Gorenstein rings. Even though our results are at this
moment restricted within the case of dimension one, we expect that a higher dimensional
notion might be possible after suitable modifications. However, before entering more
precise discussions, let us fix our terminology.
Throughout this paper let (R, m) be a Cohen-Macaulay local ring of dimension one and
let I be a canonical ideal of R. Assume that I contains a parameter ideal Q = (a) of R
as a reduction. We set K = aI = { xa | x ∈ I} in the total ring Q(R) of fractions of R
and let S = R[K]. Therefore, K is a fractional ideal of R such that R ⊆ K ⊆ R and
S is a module-finite extension of R, where R denotes the integral closure of R in Q(R).
We denote by c = R : S the conductor. With this notation the second and the fourth
authors and T. T. Phuong [GMP] closely studied the almost Gorenstein property of R.
Here let us recall the definition of almost Gorenstein local rings given by [GTT], which
works with a suitable modification in higher dimensional cases also. Notice that in our
setting, the condition in Definition 1.1 below is equivalent to saying that mK ⊆ R ([GTT,
Proposition 3.4]).
Definition 1.1 ([GTT, Definition 1.1]). Suppose that R possesses the canonical module
KR . Then we say that R is an almost Gorenstein local (AGL for short) ring, if there is
an exact sequence
0 → R → KR → C → 0
of R-modules such that mC = (0).
Consequently, every Gorenstein ring R is an AGL ring (take C = (0)), and Definition 1.1 certifies that once R is an AGL ring, even if it is not a Gorenstein ring, R can be embedded into its canonical module KR so that the difference KR/R is small.
Let ei (I) (i = 0, 1) denote the Hilbert coefficients of R with respect to I (notice that
our canonical ideal I is an m-primary ideal of R) and let r(R) = ℓR (Ext1R (R/m, R)) denote
the Cohen-Macaulay type of R. With this notation the following characterization of AGL
rings is given by [GMP], which was a starting point of the present research.
Theorem 1.2 ([GMP, Theorem 3.16]). The following conditions are equivalent.
(1) R is an AGL ring but not a Gorenstein ring.
(2) e1(I) = r(R).
(3) e1(I) = e0(I) − ℓR(R/I) + 1.
(4) ℓR(S/K) = 1, that is, S = K : m.
(5) ℓR(I^2/QI) = 1.
(6) S = m : m but R is not a DVR.
When this is the case, I^3 = QI^2 and
  ℓR(R/I^{n+1}) = (r(R) + ℓR(R/I) − 1)·\binom{n+1}{1} − r(R)
for all n ≥ 1, where ℓR(∗) denotes the length.
The aim of the present research is to ask for a generalization of AGL rings in dimension
one. For the purpose we notice that Condition (3) in Theorem 1.2 is equivalent to saying
that the Sally module of I with respect to Q has rank one. In order to discuss more
explicitly, here let us explain the notion of Sally module ([V1]). The results we recall
below hold true in Cohen-Macaulay local rings (R, m) of arbitrary positive dimension for
all m-primary ideals I and reductions Q of I which are parameter ideals of R ([GNO]).
Let us, however, restrict our attention to the case where dim R = 1 and I is a canonical
ideal of R.
Let T = \mathcal{R}(Q) = R[Qt] and \mathcal{R} = \mathcal{R}(I) = R[It] respectively denote the Rees algebras of Q and I, where t is an indeterminate over R. We set
  SQ(I) = I\mathcal{R}/IT
and call it the Sally module of I with respect to Q ([V1]). Then SQ(I) is a finitely generated graded T-module with dim_T SQ(I) ≤ 1, whose grading is given by
  [SQ(I)]_n = (0) if n ≤ 0,  and  [SQ(I)]_n = I^{n+1}/Q^n I if n ≥ 1
for each n ∈ Z ([GNO, Lemma 2.1]). Let p = mT and B = T /p (= (R/m)[T ] the
polynomial ring). We set
rank SQ (I) = ℓTp ([SQ (I)]p )
and call it the rank of SQ (I). Then AssT SQ (I) ⊆ {p} and
rank SQ (I) = e1 (I) − [e0 (I) − ℓR (R/I)]
([GNO, Proposition 2.2]). As we later confirm in Section 2 (Theorem 2.5), the invariant
rank SQ (I) is equal to ℓR (S/K) and is independent of the choice of canonical ideals I
and their reductions Q. By [S3, V1] it is known that Condition (3) in Theorem 1.2 is
equivalent to saying that rank SQ (I) = 1, which is also equivalent to saying that
SQ(I) ≅ B(−1)
as a graded T -module. According to these stimulating facts, as is suggested by [GGHV,
Section 4] it seems reasonable to expect that one-dimensional Cohen-Macaulay local rings
R which satisfy the condition
rank SQ (I) = 2, that is e1 (I) = e0 (I) − ℓR (R/I) + 2
for canonical ideals I could be a good candidate for generalization of AGL rings.
Chasing the expectation, we now give the following.
Definition 1.3. We say that R is a 2-almost Gorenstein local (2-AGL for short) ring, if
rank SQ (I) = 2.
In this paper we shall closely explore the structure of 2-AGL rings to show the above
expectation comes true. Let us note here the basic characterization of 2-AGL rings, which
starts the present paper.
Theorem 1.4. The following conditions are equivalent.
(1) R is a 2-AGL ring.
(2) There is an exact sequence 0 → B(−1) → SQ(I) → B(−1) → 0 of graded T-modules.
(3) K^2 = K^3 and ℓR(K^2/K) = 2.
(4) I^3 = QI^2 and ℓR(I^2/QI) = 2.
(5) R is not a Gorenstein ring but ℓR(S/[K : m]) = 1.
(6) ℓR(S/K) = 2.
(7) ℓR(R/c) = 2.
When this is the case, m·SQ(I) ≠ (0), whence the exact sequence given by Condition (2) is not split, and we have
  ℓR(R/I^{n+1}) = e0(I)·\binom{n+1}{1} − (e0(I) − ℓR(R/I) + 2)
for all n ≥ 1.
See [HHS] for another direction of generalization of Gorenstein rings. In [HHS] the
authors posed the notion of nearly Gorenstein ring and developed the theory. Here let us
note that in dimension one, 2-AGL rings are not nearly Gorenstein and nearly Gorenstein
rings are not 2-AGL rings (see [HHS, Remark 6.2, Theorem 7.4], Theorems 1.4, 3.6).
Here let us explain how this paper is organized. In Section 2 we will summarize some
preliminaries, which we need throughout this paper. The proof of Theorem 1.4 shall be
given in Section 3. In Section 3 we study also the question how the 2-AGL property of
rings is preserved under flat base changes. Condition (7) in Theorem 1.4 is really practical,
which we shall show in Sections 5 and 6. In Section 4 we study 2-AGL rings obtained by
idealization. We will show that A = R ⋉ c is a 2-AGL ring if and only if so is R, which
enables us, starting from a single 2-AGL ring, to produce an infinite family {An }n≥0 of
2-AGL rings which are analytically ramified (Example 4.3). Let v(R) (resp. e(R)) denote
the embedding dimension of R (resp. the multiplicity e0m (R) of R with respect to m).
We set B = m : m. Then it is known by [GMP, Theorem 5.1] that R is an AGL ring
with v(R) = e(R) if and only if B is a Gorenstein ring. In Section 5 we shall closely
study the corresponding phenomenon of the 2-AGL property. We will show that if R is
a 2-AGL ring with v(R) = e(R), then B contains a unique maximal ideal M such that
BN is a Gorenstein ring for all N ∈ Max B \ {M} and BM is an AGL ring which is not a
Gorenstein ring. The converse is also true under suitable conditions, including the specific
one that K/R is a free R/c-module. Section 6 is devoted to the analysis of the case where
R = k[[H]] (k a field) are the semigroup rings of numerical semigroups H. We will give in
several cases a characterization for R = k[[H]] to be 2-AGL rings in terms of numerical
semigroups H.
2. Preliminaries
The purpose of this section is to summarize some auxiliary results, which we later need
throughout this paper. First of all, let us make sure of our setting.
Setting 2.1. Let (R, m) be a Cohen-Macaulay local ring with dim R = 1, possessing the
canonical module KR . Let I be a canonical ideal of R. Hence I is an ideal of R such
that I ≠ R and I ≅ KR as an R-module. We assume that I contains a parameter ideal Q = (a) of R as a reduction. Let
  K = I/a = { x/a | x ∈ I }
in the total ring Q(R) of fractions of R. Hence K is a fractional ideal of R such that R ⊆ K ⊆ \overline{R}, where \overline{R} denotes the integral closure of R in Q(R). Let S = R[K] and c = R : S. We denote by SQ(I) = I\mathcal{R}/IT the Sally module of I with respect to Q, where T = R[Qt], \mathcal{R} = R[It], and t is an indeterminate over R. Let B = T/mT and e_i(I) (i = 0, 1) denote the Hilbert coefficients of I.
We notice that a one-dimensional Cohen-Macaulay local ring (R, m) contains a canonical ideal if and only if Q(\widehat{R}) is a Gorenstein ring, where \widehat{R} denotes the m-adic completion of R ([HK, Satz 6.21]). Also, every m-primary ideal of R contains a parameter ideal as a
reduction, once the residue class field R/m of R is infinite. If K is a given fractional ideal of R such that R ⊆ K ⊆ \overline{R} and K ≅ KR as an R-module, then taking a non-zerodivisor a ∈ m so that aK ⊊ R, I = aK is a canonical ideal of R such that Q = (a) is a reduction and K = I/a. Therefore, the existence of canonical ideals I of R containing parameter ideals as reductions is equivalent to saying that there are fractional ideals K of R such that R ⊆ K ⊆ \overline{R} and K ≅ KR as an R-module (cf. [GMP, Remark 2.10]). We have, for all n ≥ 0,
  K^{n+1}/K^n ≅ I^{n+1}/QI^n
as R-modules, whence K/R ≅ I/Q. Let rQ(I) = min{n ≥ 0 | I^{n+1} = QI^n} be the reduction number of I with respect to Q.
Let us begin with the following.
Lemma 2.2. The following assertions hold true.
(1) rQ(I) = min{n ≥ 0 | K^n = K^{n+1}}. Hence S = K^n for all n ≥ rQ(I).
(2) Let b ∈ I. Then (b) is a reduction of I if and only if b/a is an invertible element of S. When this is the case, S = R[I/b] and rQ(I) = r_{(b)}(I).

Proof. (1) The first equality is clear, since I = aK. The second one follows from the fact that S = ⋃_{n≥0} K^n.
(2) Suppose that (b) is a reduction of I and choose an integer n ≫ 0 so that S = K^n and I^{n+1} = bI^n. Then since I^{n+1}/a^{n+1} = (b/a)·(I^n/a^n), we get S = (b/a)S, whence b/a is an invertible element of S. The reverse implication is now clear. To see S = R[I/b], notice that S ⊇ (a/b)·(I/a) = I/b, because a/b ∈ S. Hence S ⊇ R[I/b] and by symmetry we get S = R[I/b]. To see rQ(I) = r_{(b)}(I), let n = rQ(I). Then K^{n+1} = S = (b/a)S = (b/a)K^n by Assertion (1), so that I^{n+1} = bI^n. Therefore, rQ(I) ≥ r_{(b)}(I), whence rQ(I) = r_{(b)}(I) by symmetry.
Proposition 2.3 ([GMP, GTT]). The following assertions hold true.
(1) c = K : S and ℓR (R/c) = ℓR (S/K).
(2) c = R : K if and only if S = K 2 .
(3) R is a Gorenstein ring if and only if rQ (I) ≤ 1. When this is the case, I = Q, that is
K = R.
(4) R is an AGL ring if and only if mK 2 ⊆ K.
(5) Suppose that R is an AGL ring but not a Gorenstein ring. Then rQ (I) = 2 and
ℓR (K 2 /K) = 1.
Proof. (1) See [GMP, Lemma 3.5 (2)].
(2) Since K : K = R ([HK, Bemerkung 2.5 a)]), we have
R : K = (K : K) : K = K : K 2 .
Because c = K : S by Assertion (1), c = R : K if and only if K : S = K : K 2 . The latter
condition is equivalent to saying that S = K 2 ([HK, Definition 2.4]).
(3), (5) See [GMP, Theorems 3.7, 3.16].
(4) As K : K = R, mK 2 ⊆ K if and only if mK ⊆ R. By [GTT, Proposition 3.4] the
latter condition is equivalent to saying that R is an AGL ring.
Let µR (M) denote, for each finitely generated R-module M, the number of elements in
a minimal system of generators of M.
Corollary 2.4. The following assertions hold true.
(1) K : m ⊆ K 2 if R is not a Gorenstein ring. If R is not an AGL ring, then K : m ( K 2 .
(2) mK 2 + K = K : m, if ℓR (K 2 /K) = 2.
(3) Suppose that R is not a Gorenstein ring. Then µR (S/K) = r(R/c). Therefore, R/c
is a Gorenstein ring if and only if µR (S/K) = 1.
Proof. (1) As R is not a Gorenstein ring, K 6= K 2 by Lemma 2.2 (1) and Proposition
2.3 (3). Therefore, K : m ⊆ K 2 , since ℓR ([K : m]/K) = 1 ([HK, Satz 3.3 c)]) and
ℓR (K 2 /K) < ∞. Proposition 2.3 (4) implies that K : m 6= K 2 if R is not an AGL ring.
(2) Suppose that ℓR (K 2 /K) = 2. Then R is not an AGL ring. Hence mK 2 6⊆ K, while
by Assertion (1) K : m ( K 2 . Therefore, since m2 K 2 ⊆ K, we get
K ( mK 2 + K ⊆ K : m ( K 2 ,
whence mK 2 + K = K : m, because ℓR (K 2 /K) = 2.
(3) We get µR (S/K) = ℓR (S/(mS + K)) = ℓR ([K : (mS + K)]/(K : S)), where the
second equality follows by duality ([HK, Bemerkung 2.5 c)]). Since K : K = R and
c = K : S by Proposition 2.3 (1),
µR (S/K) = ℓR ([K : (mS + K)]/(K : S))
= ℓR ([(K : mS) ∩ (K : K)] /c)
= ℓR ([(K : S) : m] ∩ R] /c)
= r(R/c),
where the last equality follows from the fact [(K : S) : m] ∩ R = c :R m = (0) :R m/c.
We close this section with the following, which guarantees that rank SQ (I) and S =
R[K] are independent of the choice of canonical ideals I of R and reductions Q of I.
Assertions (1) and (2) of Theorem 2.5 are more or less known (see, e.g., [GMP, GGHV]).
In particular, in [GGHV] the invariant ℓR (I/Q) is called the canonical degree of R and
intensively investigated. Let us include here a brief proof in our context for the sake of
completeness.
Theorem 2.5. The following assertions hold true.
(1) ℓR (S/K) = e1 (I) − [e0 (I) − ℓR (R/I)]. Hence rank SQ (I) = ℓR (S/K) = ℓR (R/c).
(2) The invariants rQ (I), ℓR (S/K), and ℓR (K/R) are independent of the choice of I and
Q.
(3) The ring S = R[K] is independent of the choice of I and Q.
Proof. (1) We have K/R ∼
= I/Q as an R-module, whence
ℓR (K/R) = ℓR (I/Q) = ℓR (R/Q) − ℓR (R/I) = e0 (I) − ℓR (R/I).
So, the first equality is clear, because ℓR (S/R) = e1 (I) by [GMP, Lemma 2.1] and
ℓR (S/K) = ℓR (S/R) − ℓR (K/R). See [GNO, Proposition 2.2] and Proposition 2.3 (1)
for the second and the third equalities.
(2), (3) The invariant ℓR (S/R) = e1 (I) is independent of the choice of I, since the first
Hilbert coefficient e1 (I) of canonical ideals I is independent of the choice of I ([GMP,
Corollary 2.8]). Therefore, because ℓR (I/Q) = e0 (I) − ℓR (R/I) depends only on I, to see
that ℓR (S/K) = ℓR (S/R) − ℓR (I/Q) is independent of the choice of I and Q, it is enough
to show that ℓR (K/R) = ℓR (I/Q) is independent of the choice of I. Let J be another
canonical ideal of R and assume that (b) is a reduction of J. Then, since I ∼
= J as an
R-module, J = εI for some invertible element ε of Q(R) ([HK, Satz 2.8]). Let b′ = εa.
Then (b′ ) is a reduction of J, r(b′ ) (J) = rQ (I), and ℓR (J/(b′ )) = ℓR (I/Q), clearly. Hence
ℓR (J/(b)) = ℓR (J/(b′ )) = ℓR (I/Q), which is independent of I. Because r(b) (J) = r(b′ ) (J)
by Lemma 2.2 (2), the reduction number rQ (I) is independent of the choice of canonical
ideals I and reductions Q of I. Because R[I/a] = R[J/b′] = R[J/b], where the second equality follows from Lemma 2.2 (2), the ring S = R[K] is independent of the choice of I and Q as well.
3. 2-AGL rings and Proof of Theorem 1.4
The main purpose of this section is to prove Theorem 1.4. Let us maintain Setting 2.1.
We begin with the following.
Lemma 3.1. The ring R is 2-AGL if and only if K 2 = K 3 and ℓR (K 2 /K) = 2.
Proof. If R is a 2-AGL ring, then ℓR (S/K) = 2 by Theorem 2.5 (1), while by Proposition
2.3 (5) ℓR (K 2 /K) ≥ 2 since R is not an AGL ring; therefore S = K 2 . Conversely,
if K 2 = K 3 , then K 2 = K n for all n ≥ 2, so that S = K 2 . Hence the equivalence
follows.
Before going ahead, let us note basic examples of 2-AGL rings. Later we will give more
examples. Let r(R) = ℓR (Ext1R (R/m, R)) denote the Cohen-Macaulay type of R.
Example 3.2. Let k[[t]] and k[[X, Y, Z, W ]] denote the formal power series rings over a
field k.
(1) Consider the rings R1 = k[[t3 , t7 , t8 ]], R2 = k[[X, Y, Z, W ]]/(X 3 − Y Z, Y 2 − XZ, Z 2 −
X 2 Y, W 2 − XW ), and R3 = k[[t3 , t7 , t8 ]] ⋉ k[[t]] (the idealization of k[[t]] over
k[[t3 , t7 , t8 ]]). Then these rings R1 , R2 , and R3 are 2-AGL rings. The ring R1 is
an integral domain, R2 is a reduced ring but not an integral domain, and R3 is not a
reduced ring.
(2) Let c ≥ 4 be an integer such that c 6≡ 0 mod 3 and set R = k[[t3 , tc+3 , t2c ]]. Then R
is a 2-AGL ring such that v(R) = e(R) = 3 and r(R) = 2.
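For numerical semigroup rings the conditions of Theorem 1.4 can be verified by a small value-set computation. The following script (our own convenience check, written in Python) uses the standard description K = {x ∈ Z | f − x ∉ H} of the value set of the canonical module of a numerical semigroup H with Frobenius number f, together with the fact that lengths of quotients of monomial fractional ideals are counted by differences of value sets; for the rings of Example 3.2 it should report K^2 = K^3 and ℓR(K^2/K) = 2 in each case.

def semigroup(gens, bound=200):
    # elements of the numerical semigroup generated by gens, up to bound
    elems = {0}
    for x in range(1, bound + 1):
        if any(x - g in elems for g in gens if x - g >= 0):
            elems.add(x)
    return elems

def canonical(H, bound=200):
    frob = max(x for x in range(bound) if x not in H)
    return {x for x in range(bound + 1) if frob - x not in H}

def add_sets(A, B, bound=200):
    return {a + b for a in A for b in B if a + b <= bound}

def check_2agl(gens, bound=200):
    H = semigroup(gens, bound)
    K = canonical(H, bound)
    K2 = add_sets(K, K, bound)
    K3 = add_sets(K2, K, bound)
    cut = bound // 2                      # stay well below the truncation bound
    same = {x for x in K2 if x <= cut} == {x for x in K3 if x <= cut}
    return same, len({x for x in K2 if x <= cut} - K)

for gens in [(3, 7, 8), (3, 8, 10), (3, 10, 14)]:   # Example 3.2 (1), and c = 5, 7 in (2)
    print(gens, check_2agl(gens))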
We note basic properties of 2-AGL rings.
Proposition 3.3. Suppose that R is a 2-AGL ring and set r = r(R). Then we have the
following.
(1) c = K : S = R : K.
(2) ℓR (R/c) = 2. Hence there is a minimal system x1 , x2 , . . . , xn of generators of m such
that c = (x21 ) + (x2 , x3 , . . . , xn ).
(3) S/K ≅ R/c and S/R ≅ K/R ⊕ R/c as R/c-modules.
(4) K/R ≅ (R/c)^{⊕ℓ} ⊕ (R/m)^{⊕m} as an R/c-module for some ℓ > 0 and m ≥ 0 such that ℓ + m = r − 1. Hence ℓR(K/R) = 2ℓ + m. In particular, K/R is a free R/c-module if and only if ℓR(K/R) = 2(r − 1).
(5) µR (S) = r.
Proof. (1), (2) We have c = K : S and ℓR (R/c) = 2 by Proposition 2.3 (1). Hence
R : K = c, because R : K = (K : K) : K = K : K 2 . The second assertion in Assertion
(2) is clear, because m2 ⊆ c and ℓR (m/c) = 1.
(3), (4) Because R/c is an Artinian Gorenstein local ring, any finitely generated R/cmodule M contains R/c as a direct summand, once M is faithful. If M is not faithful,
then (0) :R/c M ⊇ m/c as ℓR (R/c) = 2, so that M is a vector space over R/m. Therefore,
every finitely generated R/c-module M has a unique direct sum decomposition
  M ≅ (R/c)^{⊕ℓ} ⊕ (R/m)^{⊕m}
with ℓ, m ≥ 0 such that µR/c (M) = ℓ + m. Because by Assertion (1) the modules S/R,
K/R, and S/K are faithful over R/c, they contain R/c as a direct summand; hence
S/K ∼
= R/c, because ℓR (S/K) = ℓR (R/c) by Proposition 2.3 (1). Consequently, the
canonical exact sequence
0 → K/R → S/R → S/K → 0
of R/c-modules is split, so that S/R ∼
= K/R ⊕ R/c. Since µR (K/R) = r − 1 > 0, K/R
contains R/c as a direct summand, whence
  K/R ≅ (R/c)^{⊕ℓ} ⊕ (R/m)^{⊕m}
with ℓ > 0 and m ≥ 0, where ℓ + m = r − 1 and ℓR (K/R) = 2ℓ + m.
(5) This is now clear, since S/R ∼
= K/R ⊕ R/c.
The 2-AGL rings R such that K/R are R/c-free enjoy a certain specific property, which
we will show in Section 5. Here let us note Example 3.4 (resp. Example 3.5) of 2-AGL
rings, for which K/R is a free R/c-module (resp. not a free R/c-module). Let V = k[[t]]
denote the formal power series ring over a field k.
Example 3.4. Let e ≥ 3 and n ≥ 2 be integers. Let R = k[[t^e, {t^{en+i}}_{1≤i≤e−2}, t^{2en−(e+1)}]] and m the maximal ideal of R. Let K = R + Σ_{1≤i≤e−2} R·t^{(n−2)e+i}. Then we have the following.
(1) I = t^{2(n−1)e}K is a canonical ideal of R containing (t^{2(n−1)e}) as a reduction.
(2) R is a 2-AGL ring such that m^2 = t^e·m and r(R) = e − 1.
(3) K/R ≅ (R/c)^{⊕2(e−2)} as an R/c-module.

Example 3.5. Let e ≥ 4 be an integer. Let R = k[[t^e, {t^{e+i}}_{3≤i≤e−1}, t^{2e+1}, t^{2e+2}]] and m the maximal ideal of R. Let K = R + Rt + Σ_{3≤i≤e−1} R·t^i. Then we have the following.
(1) I = t^{e+3}K is a canonical ideal of R containing (t^{e+3}) as a reduction.
(2) R is a 2-AGL ring such that m^2 = t^e·m and r(R) = e − 1.
(3) K/R ≅ (R/c) ⊕ (R/m)^{⊕(e−3)} as an R/c-module.
(4) m : m = k[[t^3, t^4, t^5]].
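Assertion (4) of Example 3.5 can be checked for e = 4 by another small value-set computation (again our own sketch; it computes the value set of m : m as {x ≥ 0 | x + h ∈ H for every nonzero h ∈ H} and compares it with the semigroup generated by 3, 4, 5):

def semigroup(gens, bound=100):
    elems = {0}
    for x in range(1, bound + 1):
        if any(x - g in elems for g in gens if x - g >= 0):
            elems.add(x)
    return elems

def m_colon_m(H, bound=100):
    cut = bound // 2
    positive = {h for h in H if 0 < h <= cut}
    return {x for x in range(cut + 1) if all(x + h in H for h in positive)}

H = semigroup((4, 7, 9, 10))                        # Example 3.5 with e = 4
print(m_colon_m(H) == {x for x in semigroup((3, 4, 5)) if x <= 50})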
We note the following.
Theorem 3.6. Suppose that R is a 2-AGL ring. Then the following assertions hold true.
(1) R is not an AGL ring.
(2) There is an exact sequence
0 → B(−1) → SQ (I) → B(−1) → 0
of graded T -modules.
(3) m·SQ (I) 6= (0). Therefore, the above exact sequence is not split.
Proof. (1) Since ℓR (K 2 /K) = 2 by Lemma 3.1, by Proposition 2.3 (5) R is not an AGL
ring.
(2) We have mI 2 6⊆ QI by Proposition 2.3 (4), since I 2 /QI ∼
= K 2 /K. Therefore,
as ℓR (I 2 /QI) = 2, we get ℓR (I 2 /[mI 2 + QI]) = ℓR ([mI 2 + QI]/QI) = 1. Let us write
I 2 = QI + (f ) for some f ∈ I 2 . Then since ℓR ([mI 2 + QI]/QI) = 1 and mI 2 + QI =
QI + mf , mI 2 + QI = QI + (αf ) for some α ∈ m. We set g = αf . Now remember that
SQ (I) = T ·[SQ (I)]1 = T ·f t, because I 3 = QI 2 ([GNO, Lemma 2.1 (5)]), where ∗ denotes
the image in SQ (I). Therefore, because (0) :T gt = mT , we get an exact sequence
  0 → B(−1) --ϕ--> SQ(I) → C → 0
of graded T -modules, where ϕ(1) = gt. Let ξ denote the image of f t in C = SQ (I)/T ·gt.
Then C = T ξ and (0) :T C = mT , whence C ∼
= B(−1) as a graded T -module, and the
result follows.
(3) Since [SQ (I)]1 ∼
= I 2 /QI as an R-module, we get m·SQ (I) 6= (0).
We are in a position to prove Theorem 1.4.
Proof of Theorem 1.4. (1) ⇒ (2) See Theorem 3.6.
(1) ⇔ (3) See Lemma 3.1.
(3) ⇔ (4) Remember that K n+1 /K n ∼
= I n+1 /QI n for all n ≥ 0.
(2) ⇒ (1) We have rank SQ (I) = ℓTp ([SQ (I)]p ) = 2·ℓTp (Bp ) = 2, where p = mT .
(1) ⇔ (6) ⇔ (7) See Theorem 2.5 (1).
(1) ⇒ (5) By Theorem 3.6 (1) R is not an AGL ring. Hence K : m ( K 2 = S by
Corollary 2.4 (1). Because ℓR ((K : m)/K) = 1 and
ℓR (S/(K : m)) + ℓR ((K : m)/K) = ℓR (S/K) = 2,
we get ℓR (S/(K : m)) = 1.
See Theorem 3.6 (3) for the former part of the last assertion. To see the latter part,
notice that
  ℓR(R/I^{n+1}) = ℓR(R/Q^{n+1}) − ℓR(I^{n+1}/Q^n I) + ℓR(Q^n I/Q^{n+1})
               = e0(I)·\binom{n+1}{1} − [ℓR([SQ(I)]_n) + ℓR(I/Q)]
               = e0(I)·\binom{n+1}{1} − [2 + ℓR(I/Q)]
for all n ≥ 1, where the last equality follows from the exact sequence given by Condition (2). Thus
  ℓR(R/I^{n+1}) = e0(I)·\binom{n+1}{1} − [e0(I) − ℓR(R/I) + 2]
for all n ≥ 1.
Let us note a consequence of Theorem 1.4.
Proposition 3.7. Suppose that r(R) = 2. Then the following conditions are equivalent.
(1) R is a 2-AGL ring.
(2) c = R : K and ℓR (K/R) = 2.
(3) S = K 2 and ℓR (K/R) = 2.
When this is the case, K/R ∼
= R/c as an R-module.
Proof. (1) ⇒ (2) By Proposition 3.3 (4) K/R ∼
= R/c, since µR (K/R) = r(R) − 1 = 1.
Hence ℓR (K/R) = ℓR (R/c) = 2.
(2) ⇔ (3) Remember that S = K 2 if and only if c = R : K; see Proposition 2.3 (2).
(2) ⇒ (1) Since µR (K/R) = 1, K/R ∼
= R/c, so that ℓR (R/c) = ℓR (K/R) = 2. Hence R
is a 2-AGL ring by Theorem 1.4.
Let us explore the question of how the 2-AGL property is preserved under flat base
changes. Let (R1 , m1 ) be a Cohen-Macaulay local ring of dimension one and let ϕ : R →
R1 be a flat local homomorphism of local rings such that R1 /mR1 is a Gorenstein ring.
Hence dim R1 /mR1 = 0 and KR1 ∼
= R1 ⊗R K as an R1 -module ([HK, Satz 6.14]). Notice
that
  R_1 ⊆ R_1 ⊗_R K ⊆ R_1 ⊗_R \overline{R} ⊆ \overline{R_1}
in Q(R_1). We set K_1 = R_1 ⊗_R K. Then R_1 also satisfies the conditions stated in Setting 2.1 and we have the following.
Proposition 3.8. For each n ≥ 0 the following assertions hold true.
(1) K1n = K1n+1 if and only if K n = K n+1 .
(2) ℓR1 (K1n+1 /K1n ) = ℓR1 (R1 /mR1 )·ℓR (K n+1 /K n ).
Proof. The equalities follow from the isomorphisms
  K_1^n ≅ R_1 ⊗_R K^n,   K_1^{n+1}/K_1^n ≅ R_1 ⊗_R (K^{n+1}/K^n)
of R1 -modules.
We furthermore have the following.
Theorem 3.9. The following conditions are equivalent.
(1) R1 is a 2-AGL ring.
(2) Either (i) R is an AGL ring and ℓR1 (R1 /mR1 ) = 2 or (ii) R is a 2-AGL ring and
mR1 = m1 .
Proof. Suppose that R1 is a 2-AGL ring. Then K12 = K13 and ℓR1 (K12 /K1 ) = 2. Therefore,
K 2 = K 3 and ℓR1 (K12 /K1 ) = ℓR1 (R1 /mR1 )·ℓR (K 2 /K) = 2 by Proposition 3.8. We have
ℓR (K 2 /K) = 1 (resp. ℓR (K 2 /K) = 2) if ℓR1 (R1 /mR1 ) = 2 (resp. ℓR1 (R1 /mR1 ) = 1),
whence the implication (1) ⇒ (2) follows. The reverse implication is now clear.
Example 3.10. Let n ≥ 1 be an integer and let R1 = R[X]/(X n + α1 X n−1 + · · · + αn ),
where R[X] denotes the polynomial ring and αi ∈ m for all 1 ≤ i ≤ n. Then R1 is a
flat local R-algebra with m1 = mR1 + (x) (here x denotes the image of X in R1 ) and
R1 /mR1 = (R/m)[X]/(X n ) is a Gorenstein ring. Since ℓR1 (R1 /mR1 ) = n, taking n = 1
(resp. n = 2), we get R1 is an AGL ring (resp. R1 is a 2-AGL ring). Notice that if R is
an integral domain and 0 6= α ∈ m, then R1 = R[X]/(X 2 − αX) is a reduced ring but not
an integral domain. The ring R2 of Example 3.2 (1) is obtained in this manner, taking
n = 2 and α = t3 , from the AGL ring R = k[[t3 , t4 , t5 ]].
We say that R has minimal multiplicity, if v(R) = e(R). When m contains a reduction
(α), this condition is equivalent to saying that m2 = αm ([L, S2]).
Proposition 3.11. Suppose that e(R) = 3 and R has minimal multiplicity. Then R is a
2-AGL ring if and only if ℓR (K/R) = 2.
Proof. Thanks to Theorem 3.9, passing to R1 = R[X]mR[X] if necessary, we can assume
that the residue class field R/m of R is infinite. Since v(R) = e(R) = 3, r(R) = 2.
Therefore, by Corollary 3.7 we have only to show that S = K 2 , once ℓR (K/R) = 2. Choose
a non-zerodivisor b of R so that J = bK ( R. Then, since R/m is infinite, J contains an
element c such that J 3 = cJ 2 (see [S1, ES]; remember that µR (J 3 ) ≤ e(R) = 3). Hence
K 2 = K 3 by Lemma 2.2 (1) and Theorem 2.5 (2).
We close this section with the following.
Remark 3.12. Let r(R) = 2 and assume that R is a homomorphic image of a regular
local ring T of dimension 3. If R is a 2-AGL ring, then R has a minimal T -free resolution
of the form
  0 → T^{⊕2} --A--> T^{⊕3} → T → R → 0,   with A = \begin{pmatrix} X^2 & g_1 \\ Y & g_2 \\ Z & g_3 \end{pmatrix},
where X, Y, Z is a regular system of parameters of T. In fact, let
  0 → T^{⊕2} --M--> T^{⊕3} → T → R → 0
be a minimal T-free resolution of R. Then, since K ≅ Ext^2_T(R, T), taking the T-dual of the resolution we get a minimal T-free resolution
  0 → T → T^{⊕3} --{}^tM--> T^{⊕2} --τ--> K → 0
of K. Because µR(K/R) = 1, without loss of generality we may assume that τ(e_2) = 1, where e_2 = \binom{0}{1}. Therefore, writing {}^tM = \begin{pmatrix} f_1 & f_2 & f_3 \\ g_1 & g_2 & g_3 \end{pmatrix}, we get
  K/R ≅ T/(f_1, f_2, f_3)
as a T-module. Let C = K/R and q = (f_1, f_2, f_3). Then since ℓ_T(T/q) = ℓ_R(C) = 2, after suitable elementary column transformations of the matrix {}^tM we get f_1 = X^2, f_2 = Y, f_3 = Z for some regular system of parameters X, Y, Z of T.
The converse of the assertion in Remark 3.12 is not true in general. In the case where
R is the semigroup ring of a numerical semigroup, we shall give in Section 6 a complete
description of the assertion in terms of the matrix {}^tM = \begin{pmatrix} f_1 & f_2 & f_3 \\ g_1 & g_2 & g_3 \end{pmatrix} (see Theorem 6.4 and its consequences).
4. 2-AGL rings obtained by idealization
In this section we study the problem of when the idealization A = R ⋉ c of c = R : S
over R is a 2-AGL ring. To do this we need some preliminaries. For a moment let R be
an arbitrary commutative ring and M an R-module. Let A = R ⋉ M be the idealization
of M over R. Hence A = R ⊕ M as an R-module and the multiplication in A is given by
(a, x)(b, y) = (ab, bx + ay)
where a, b ∈ R and x, y ∈ M. Let K be an R-module and set L = HomR (M, K) ⊕ K. We
consider L to be an A-module under the following action of A
(a, x) ◦ (f, y) = (af, f (x) + ay),
where (a, x) ∈ A and (f, y) ∈ L. Then it is standard to check that the map
HomR(A, K) → L,  α ↦ (α ∘ j, α(1)) is an isomorphism of A-modules, where j : M → A, x ↦ (0, x), and 1 = (1, 0) denotes the identity of the ring A.
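As a toy sanity check of this multiplication rule (our own snippet, with R = Z and M = Z/6Z purely for illustration), one can verify that the embedded copy of M squares to zero:

from itertools import product

def mul(p, q, modulus=6):
    (a, x), (b, y) = p, q
    return (a * b, (b * x + a * y) % modulus)

M = range(6)
assert all(mul((0, x), (0, y)) == (0, 0) for x, y in product(M, M))
print(mul((2, 1), (3, 5)))    # (6, 1)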
We are now back to our Setting 2.1. Let A = R ⋉ c and set L = S × K. Then A is a
one-dimensional Cohen-Macaulay local ring and
  K_A = HomR(A, K) ≅ HomR(c, K) × K ≅ L = S × K,
because c = K : S by Proposition 2.3 (1). Therefore
  A = R ⋉ c ⊆ L = S × K ⊆ \overline{R} ⋉ Q(R).
Because Q(A) = Q(R) ⋉ Q(R) and \overline{A} = \overline{R} ⋉ Q(R), our idealization A = R ⋉ c satisfies
the same assumption as in Setting 2.1 and we have the following.
Proposition 4.1. The following assertions hold true.
(1) L^n = S ⋉ S for all n ≥ 2, whence A[L] = S ⋉ S.
(2) ℓA(A[L]/L) = ℓR(S/K).
(3) A : A[L] = c × c.
(4) v(A) = v(R) + µR(c) and e(A) = 2·e(R).
Proof. (1) Since Ln = (S × K)n = S n × S n−1 K, we have Ln = S × S for n ≥ 2.
(2) We get ℓA (A[L]/L) = ℓR ((S ⊕ S)/(S ⊕ K)) = ℓR (S/K).
(3) This is straightforward, since A[L] = S ⋉ S.
(4) To see the first assertion, remember that m × c is the maximal ideal of A and that
(m × c)2 = m2 × mc. For the second one, notice that mA is a reduction of m × c and that
A = R ⊕ c as an R-module. We then have e(A) = e0m (A) = 2·e0m (R).
By Proposition 4.1 (2) we readily get the following.
Theorem 4.2. A = R ⋉ c is a 2-AGL ring if and only if so is R.
Example 4.3. Suppose that R is a 2-AGL ring, for instance, take R = k[[t3 , t7 , t8 ]] (see
Example 3.2 (1)). We set
An = R if n = 0, and An = An−1 ⋉ cn−1 if n ≥ 1,
that is A0 = R and for n ≥ 1 let An be the idealization of cn−1 over An−1 , where
cn−1 = An−1 : An−1 [Kn−1 ] and Kn−1 denotes a fractional ideal of An−1 , lying between An−1 and its integral closure, such that Kn−1 ≅ KAn−1 as an An−1 -module. We then have an infinite family
{An }n≥0 of analytically ramified 2-AGL rings such that e(An ) = 2n ·e(R) for each n ≥ 0.
Since c = t^6 k[[t]] ≅ k[[t]] for R = k[[t3 , t7 , t8 ]], the ring R3 = k[[t3 , t7 , t8 ]] ⋉ k[[t]] of Example
3.2 (1) is obtained in this manner.
5. The algebra m : m
We maintain Setting 2.1 and set B = m : m. By [GMP, Theorem 5.1] B is a Gorenstein
ring if and only if R is an AGL ring of minimal multiplicity. Our purpose of this section is
to explore the structure of the algebra B = m : m in connection with the 2-AGL property
of R.
Let us begin with the following.
Proposition 5.1. Suppose that there is an element α ∈ m such that m2 = αm and that
R is not a Gorenstein ring. Set L = BK. Then the following assertions hold true.
(1) B = R : m = m/α and K : B = mK.
(2) L = K : m, L ≅ mK as a B-module, and B ⊆ L ⊆ B̄ (the integral closure of B).
(3) S = B[L] = B[K].
Proof. Since R is not a DVR (resp. m^2 = αm), we have B = R : m (resp. B = m/α).
Because K : mK = R : m = B, we get mK = K : B. We have K : L = R : B = m,
since R ⊊ B. Therefore, L = K : m. Clearly, L = mK/α ≅ mK as a B-module. We have
S ⊆ B[K] ⊆ B[L]. Because B ⊆ L = K : m ⊆ K^2 by Corollary 2.4 (1), B[L] ⊆ S,
whence S = B[L] = B[K] as claimed.
We have the following.
Theorem 5.2. Suppose that R is a 2-AGL ring. Assume that there is an element α ∈ m
such that m2 = αm. Set L = BK. Then the following assertions hold true.
(1) ℓR (L2 /L) = 1.
(2) Let M = (0) :B (L^2 /L). Then M ∈ Max B, R/m ≅ B/M, and BM is an AGL ring
which is not a Gorenstein ring.
(3) If N ∈ Max B \ {M}, then BN is a Gorenstein ring.
Therefore, BN is an AGL ring for every N ∈ Max B.
Proof. Because S = K 2 and S ⊇ L ⊇ K by Proposition 5.1, we have S = L2 , while
ℓR (L2 /L) = 1 as L = K : m. Hence
0 < ℓB/M (L2 /L) = ℓB (L2 /L) ≤ ℓR (L2 /L) = 1,
so that M ∈ Max B, R/m ≅ B/M, and L^2 /L ≅ B/M. Because L ≅ K : B as a B-module
by Proposition 5.1, we get LM ≅ KBM as a BM -module ([HK, Satz 5.12]). Therefore, since
ℓBM (LM^2 /LM ) = 1, by [GMP, Theorem 3.16] BM is an AGL ring which is not a Gorenstein
ring. If N ∈ Max B and if N ≠ M, then (L^2 /L)N = (0), so that BN is a Gorenstein ring
by [GMP, Theorem 3.7].
Let us note a few consequences of Theorem 5.2.
Corollary 5.3. Assume that m2 = αm for some α ∈ m and that B is a local ring with
maximal ideal n. Then the following conditions are equivalent.
(1) R is a 2-AGL ring.
(2) B is a non-Gorenstein AGL ring and R/m ≅ B/n.
When this is the case, S is a Gorenstein ring, provided v(B) = e(B).
Proof. By Theorem 5.2 we have only to show the implication (2) ⇒ (1). Let L = BK.
Then L = K : m, L ≅ KB , and S = B[L] by Proposition 5.1. Because B is a non-Gorenstein AGL ring, ℓB (B[L]/L) = 1 by [GMP, Theorem 3.16], so that
ℓR (S/K) = ℓR (S/L) + ℓR (L/K) = ℓB (B[L]/L) + ℓR ((K : m)/K) = 2,
where the second equality follows from the fact that R/m ≅ B/n. Hence R is a 2-AGL
ring. The last assertion is a direct consequence of [GMP, Theorem 5.1].
If R is the semigroup ring of a numerical semigroup, the algebra B = m : m is also the
semigroup ring of a numerical semigroup, so that B is always a local ring with R/m ≅ B/n,
where n denotes the maximal ideal of B. Hence by Corollary 5.3 we readily get the
following. Let k[[t]] be the formal power series ring over a field k.
Corollary 5.4. Let H = ⟨a1 , a2 , . . . , aℓ ⟩ be a numerical semigroup and R =
k[[ta1 , ta2 , . . . , taℓ ]] the semigroup ring of H. Assume that R has minimal multiplicity.
Then R is a 2-AGL ring if and only if B = m : m is an AGL ring which is not a
Gorenstein ring. When this is the case, S is a Gorenstein ring, provided v(B) = e(B).
Proof. Remember that m2 = te m, where e = min{ai | 1 ≤ i ≤ ℓ}.
If v(R) < e(R), the ring S is not necessarily a Gorenstein ring, even though R is a
2-AGL ring and B is an AGL ring with v(B) = e(B) ≥ 3. Let us note one example.
Example 5.5. Let R = k[[t5 , t7 , t9 , t13 ]] and set K = R + Rt3 . Then we have the following.
(1) K ≅ KR as an R-module and I = t12 K is a canonical ideal of R with (t12 ) a reduction. Hence r(R) = 2.
(2) S = k[[t3 , t5 , t7 ]] and c = (t10 ) + (t7 , t9 , t13 ).
(3) R is a 2-AGL ring with v(R) = 4 and e(R) = 5.
(4) K/R ≅ R/c as an R-module.
(5) B = k[[t5 , t7 , t8 , t9 , t11 ]] and B is an AGL ring, possessing minimal multiplicity 5.
The ring B does not necessarily have minimal multiplicity, even though B is a local
ring and R is a 2-AGL ring of minimal multiplicity. Let us note one example.
Example 5.6. Let R = k[[t4 , t9 , t11 , t14 ]] and set K = R + Rt3 + Rt5 . Then we have the following.
(1) K ≅ KR as an R-module and I = t11 K is a canonical ideal of R with (t11 ) a reduction. Hence r(R) = 3.
(2) R is a 2-AGL ring with m^2 = t4 m.
(3) ℓR (K/R) = 3 and K/R ≅ R/c ⊕ R/m as an R-module.
(4) B = k[[t4 , t5 , t7 ]].
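The numerical data in Examples 5.5 and 5.6 can be checked by a naive computation. The sketch below (not part of the paper) enumerates the semigroup up to a crude bound, reads off the Frobenius number f(H) and the pseudo-Frobenius numbers PF(H), and lists the minimal generators of H ∪ PF(H), which is the value semigroup of B = m : m for a numerical semigroup ring.

```python
# Sketch (not from the paper): naive check of the data in Examples 5.5 and 5.6.
def semigroup(gens, bound):
    # all elements of <gens> that are <= bound
    H = {0}
    changed = True
    while changed:
        changed = False
        for h in list(H):
            for a in gens:
                if h + a <= bound and h + a not in H:
                    H.add(h + a)
                    changed = True
    return H

def pf_data(gens):
    bound = 2 * max(gens) ** 2            # crude bound, enough for these examples
    H = semigroup(gens, bound)
    gaps = [x for x in range(bound) if x not in H]
    f = max(gaps)                         # Frobenius number f(H)
    PF = sorted(x for x in gaps if all(x + a in H for a in gens))
    return f, PF, H

def min_gens(S):
    # minimal generators of the numerical semigroup with element set S
    pos = sorted(x for x in S if x > 0)
    return [s for s in pos if all(s - t not in S for t in pos if t < s)]

for gens in [(5, 7, 9, 13), (4, 9, 11, 14)]:
    f, PF, H = pf_data(gens)
    print(gens, "f =", f, "PF =", PF, "m:m generated by", min_gens(H | set(PF)))
# (5, 7, 9, 13): f = 11, PF = [8, 11],    m:m <-> <5, 7, 8, 9, 11>  (Example 5.5 (5))
# (4, 9, 11, 14): f = 10, PF = [5, 7, 10], m:m <-> <4, 5, 7>         (Example 5.6 (4))
```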
In Theorem 5.2, if K/R is a free R/c-module, then B = m : m is necessarily a local
ring. To state the result, we need further notation.
Suppose that R is a 2-AGL ring and set r = r(R). Then since by Proposition 3.3 (4)
K/R ≅ (R/c)^{⊕ℓ} ⊕ (R/m)^{⊕m}
with integers ℓ > 0 and m ≥ 0 such that ℓ + m = r − 1, there are elements f1 , f2 , . . . , fℓ
and g1 , g2 , . . . , gm of K such that
K/R = Σ_{i=1}^{ℓ} R·f̄i ⊕ Σ_{j=1}^{m} R·ḡj , Σ_{i=1}^{ℓ} R·f̄i ≅ (R/c)^{⊕ℓ} , and Σ_{j=1}^{m} R·ḡj ≅ (R/m)^{⊕m} ,
where f̄i and ḡj denote the images of fi and gj in K/R. We set F = Σ_{i=1}^{ℓ} Rfi and U = Σ_{j=1}^{m} Rgj . Let us
write c = (x1^2 ) + (x2 , x3 , . . . , xn ) for some minimal system {xi }1≤i≤n of generators of m
(see Proposition 3.3 (2)). With this notation we have the following.
Proposition 5.7. The following assertions hold true.
(1) Suppose m = 0, that is K/R is a free R/c-module. Then B is a local ring with
maximal ideal mS and R/m ≅ B/mS.
(2) Suppose that U^q ⊆ mS for some q > 0. Then B is a local ring.
Proof. We divide the proof of Proposition 5.7 into a few steps. Notice that B = R : m,
since R is not a DVR.
Claim 1. The following assertions hold true.
(1) m2 S ⊆ R.
(2) mS ⊆ J(B), where J(B) denotes the Jacobson radical of B.
Proof. Because m2 K 3 = m2 K 2 ⊆ K, we have m2 K 2 ⊆ K : K = R. Hence m2 S ⊆ R, so
that mS ⊆ R : m = B. Let M ∈ Max B and choose N ∈ Max S such that M = N ∩ B.
Then because mS ⊆ N, mS ⊆ N ∩ B = M, whence mS ⊆ J(B).
We consider Assertion (2) of Proposition 5.7. Since gjq ∈ mS for all 1 ≤ j ≤ m, the ring
B/mS = (R/m)[g1 , g2 , . . . , gm ] is a local ring, where gj denotes the image of gj in B/mS.
Therefore B is a local ring, since mS ⊆ J(B) by Claim 1 (2).
To prove Assertion (1) of Proposition 5.7, we need more results. Suppose that m = 0;
hence U = (0) and ℓ = r − 1.
Claim 2. The following assertions hold true.
(1) x1 fi 6∈ R for all 1 ≤ i ≤ ℓ.
(2) (R : m) ∩ K = R + x1 F and ℓR ([(R : m) ∩ K]/R) = ℓ.
(3) B = R + x1 F + Rh for some h ∈ mK 2 .
Proof. (1) Remember that K/R = Σ_{i=1}^{ℓ} R·f̄i ≅ (R/c)^{⊕ℓ} . We then have mf̄i ≠ (0) but
cf̄i = (0) for each 1 ≤ i ≤ ℓ. Hence x1 fi ∉ R, because c = (x1^2 ) + (xj | 2 ≤ j ≤ n) and
m = (x1 ) + c.
(2) Because (0) :R/c m is generated by the image of x1 , we have
(0) :K/R m = Σ_{i=1}^{ℓ} R·x1 f̄i ,
whence (R : m) ∩ K = R + x1 F and ℓR ([(R : m) ∩ K]/R) = ℓ.
(3) Notice that by Claim 1 (1)
R ⊆ mK + R ⊆ (R : m) ∩ K ⊆ R : m
and that ℓR ([(R : m) ∩ K]/R) = ℓ = r − 1 by Assertion (2). We then have
ℓR ((R : m)/[(R : m) ∩ K]) = 1,
because ℓR ((R : m)/R) = r. Hence R : m = [(R : m) ∩ K] + Rg for some g ∈ R : m. On
the other hand, because K ( mK 2 + K ( K 2 by Corollary 2.4 (1) and ℓR (K 2 /K) = 2, we
get ℓR (K 2 /(mK 2 + K)) = ℓR ((mK 2 + K)/K) = 1. Consequently, K 2 = K + Rξ for some
ξ ∈ K 2 . Let us write g = ρ + βξ with ρ ∈ K and β ∈ m. Then since βξ ∈ mK 2 ⊆ R : m
by Claim 1 (1) and g ∈ R : m, we get ρ ∈ (R : m) ∩ K, whence setting h = βξ, we have
B = R : m = [(R : m) ∩ K] + Rg = [(R : m) ∩ K] + Rh
as claimed.
Let us finish the proof of Assertion (1) of Proposition 5.7. In fact, by Claim 2 (3) we
have B ⊆ R + mS, which implies R/m ≅ B/mS, whence mS ∈ Max B. Therefore, B
is a local ring with unique maximal ideal mS, since mS ⊆ J(B) by Claim 1 (2). This
completes the proof of Proposition 5.7.
Under some additional conditions, the converse of Theorem 5.2 is also true.
Theorem 5.8. Suppose that the following conditions are satisfied.
(1) e(R) = e ≥ 3 and R is not an AGL ring,
(2) B is an AGL ring with e(B) = e, and
(3) there is an element α ∈ m such that m2 = αm and n2 = αn, where n denotes the
maximal ideal of B.
Then R is a 2-AGL ring and K/R is a free R/c-module.
Proof. We have B = R : m = m/α. Let L = BK. Then by Proposition 5.1 (3) S = B[L] =
B[K]. As B is an AGL ring, we have S = n : n by [GMP, Theorem 3.16], whence S = n/α.
Consequently, R ⊆ B = m/α ⊆ S = n/α. Let us write m = (α, x2 , x3 , . . . , xe ). We set yi = xi /α
for each 2 ≤ i ≤ e.
Claim 3. We can choose the elements {xi }2≤i≤e of m so that yi ∈ n for all 2 ≤ i ≤ e.
Proof. Since by Conditions (1) and (2)
ℓB (B/αB) = e(B) = e = e(R) = ℓR (R/αR) = ℓR (B/αB),
we get the isomorphism R/m ≅ B/n of fields. Let 2 ≤ i ≤ e be an integer and choose
ci ∈ R so that yi ≡ ci mod n. Then since yi − ci = (xi − αci )/α ∈ n, replacing xi with xi − αci ,
we have yi ∈ n for all 2 ≤ i ≤ e.
We now notice that B = m/α = R + Σ_{i=2}^{e} R·(xi /α) and xi /α ∈ n for each 2 ≤ i ≤ e. We then have
n = (n ∩ R) + Σ_{i=2}^{e} R·(xi /α) = m + Σ_{i=2}^{e} R·(xi /α) = Rα + Σ_{i=2}^{e} R·(xi /α),
where the last equality follows from the fact that m = Rα + Σ_{i=2}^{e} Rxi . Thus
S = n/α = R + Σ_{i=2}^{e} R·(xi /α^2 ),
whence α^2 ∈ c. Let 2 ≤ i, j ≤ e be integers. Then xi ·(xj /α^2 ) = (xi /α)·(xj /α) ∈ n^2 = αn and
n = Rα + Σ_{i=2}^{e} R·(xi /α), so that xi ·(xj /α^2 ) ∈ Rα^2 + Σ_{i=2}^{e} Rxi , which shows (α^2 , x2 , x3 , . . . , xe ) ⊆ c.
Therefore, c = (α^2 , x2 , x3 , . . . , xe ) because c ⊊ m (remember that R is not an AGL ring),
whence ℓR (R/c) = 2, so that R is a 2-AGL ring. Because S = n/α and B = m/α and because
R/m ≅ B/n, we get
ℓR (S/B) = ℓR (S/n) − ℓR (B/n) = ℓR (S/αS) − 1 = e − 1 and
ℓR (B/R) = ℓR (B/m) − ℓR (R/m) = ℓR (B/αB) − 1 = e − 1.
Therefore, ℓR (S/R) = 2e − 2, whence ℓR (K/R) = 2e − 4 = 2(e − 2) because ℓR (S/K) = 2.
Consequently, by Proposition 3.3 (4) K/R is a free R/c-module, since µR (K/R) = e − 2
(notice that r(R) = e − 1), which completes the proof of Theorem 5.8.
However, the ring B is not necessarily a local ring in general, although R is a 2-AGL
ring with v(R) = e(R). Let us note one example.
Example 5.9. Let V = k[[t]] be the formal power series ring over an infinite field k.
We consider the direct product A = k[[t3 , t7 , t8 ]] × k[[t3 , t4 , t5 ]] of rings and set R =
k·(1, 1) + J(A) where J(A) denotes the Jacobson radical of A. Then R is a subring of A
and a one-dimensional Cohen-Macaulay complete local ring with J(A) the maximal ideal.
The ring R is then a 2-AGL ring with v(R) = e(R) = 6. However
m : m = k[[t3 , t4 , t5 ]] × V
which is not a local ring, so that K/R is not a free R/c-module.
6. Numerical semigroup rings
Let k be a field. In this section we study the case where R = k[[H]] is the semigroup
ring of a numerical semigroup H. First of all, let us fix the notation, according to the
terminology of numerical semigroups.
Setting 6.1. Let 0 < a1 , a2 , . . . , aℓ ∈ Z (ℓ > 0) be positive integers such that
GCD (a1 , a2 , . . . , aℓ ) = 1. We set
H = ⟨a1 , a2 , . . . , aℓ ⟩ = { Σ_{i=1}^{ℓ} ci ai | 0 ≤ ci ∈ Z for all 1 ≤ i ≤ ℓ }
and call it the numerical semigroup generated by the numbers {ai }1≤i≤ℓ . Let V = k[[t]]
be the formal power series ring over a field k. We set
R = k[[H]] = k[[ta1 , ta2 , . . . , taℓ ]]
in V and call it the semigroup ring of H over k. The ring R is a one-dimensional Cohen-Macaulay local domain with R̄ = V (R̄ the integral closure of R) and m = (t^{a1} , t^{a2} , . . . , t^{aℓ} ).
We set T = k[ta1 , ta2 , . . . , taℓ ] in k[t]. Let P = k[X1 , X2 , . . . , Xℓ ] be the polynomial
ring over k. We consider P to be a Z-graded ring such that P0 = k and deg Xi = ai for
1 ≤ i ≤ ℓ. Let ϕ : P → T denote the homomorphism of graded k-algebras defined by
ϕ(Xi ) = tai for each 1 ≤ i ≤ ℓ.
In this section we are interested in the question of when R = k[[H]] is a 2-AGL ring.
To study the question, we recall some basic notion on numerical semigroups. Let
c(H) = min{n ∈ Z | m ∈ H for all m ∈ Z such that m ≥ n}
be the conductor of H and set f(H) = c(H) − 1. Hence f(H) = max (Z \ H), which is
called the Frobenius number of H. Let
PF(H) = {n ∈ Z \ H | n + ai ∈ H for all 1 ≤ i ≤ ℓ}
denote the set of pseudo-Frobenius numbers of H. Therefore, f(H) equals the a-invariant
of the graded k-algebra k[ta1 , ta2 , . . . , taℓ ] and ♯PF(H) = r(R) ([GW, Example (2.1.9),
Definition (3.1.4)]). We set f = f(H) and
K = Σ_{c∈PF(H)} R t^{f−c}
in V . Then K is a fractional ideal of R such that R ⊆ K ⊆ R̄ and
K ≅ KR = Σ_{c∈PF(H)} R t^{−c}
as an R-module ([GW, Example (2.1.9)]). Let us refer to K as the fractional canonical
ideal of R. Notice that tf 6∈ K but mtf ⊆ R, whence K : m = K + Rtf .
Let us begin with the following.
Theorem 6.2. Suppose that ℓ ≥ 3 and aj ∉ ⟨a1 , . . . , aj−1 , aj+1 , . . . , aℓ ⟩ for all 1 ≤ j ≤ ℓ.
Assume that r(R) = 2 and let K = R + Rta for some 0 < a ∈ Z. Then the following
conditions are equivalent.
(1) R is a 2-AGL ring.
(2) 3a ∈ H and f = 2a + ai for some 1 ≤ i ≤ ℓ.
Proof. (1) ⇒ (2) We have t3a ∈ K 2 as K 2 = K 3 . If 2a ∈ H, then K 2 = K and R
is a Gorenstein ring (Proposition 2.3 (3)). Hence 2a ∉ H, so that 3a ∈ H. Because
K : m = mK^2 + K by Corollary 2.4 (2), we get K + Rt^f = K : m = K + Σ_{j=1}^{ℓ} Rt^{2a+aj} .
Therefore, because ℓR ((K : m)/K) = 1,
K + Rtf = K + Rt2a+ai
for some 1 ≤ i ≤ ℓ, whence f = 2a + ai .
(2) ⇒ (1) We get K^3 = K^2 = K + Rt^{2a} , since 3a ∈ H. Let L = mK^2 + K. Hence
L = K + Σ_{j=1}^{ℓ} Rt^{2a+aj} and L ⊊ K^2 because R is not a Gorenstein ring. Notice that
ℓR (K 2 /L) = 1, since µR (K 2 /K) = 1. We furthermore have the following.
Claim 4. L = K : m.
Proof of Claim 4. We have only to show L ⊆ K : m = K + Rt2a+ai . Let 1 ≤ j ≤ ℓ and
assume that t2a+aj 6∈ K + Rt2a+ai . Then aj < ai since f = 2a + ai = c(H) − 1. Because
2a + aj 6∈ H, tf −(2a+aj ) = tai −aj ∈ K = R + Rta . Hence ai − aj ∈ H or ai − aj − a ∈ H.
Suppose that ai − aj − a ∈ H. Then, setting h = ai − aj − a ∈ H, we get ai = aj + a + h
whence f = 2a + ai = 3a + aj + h ∈ H, which is impossible. Hence ai − aj ∈ H. Let us
write ai − aj = Σ_{k=1}^{ℓ} mk ak
with 0 ≤ mk ∈ Z. Then mk = 0 if ak ≥ ai , since ai − aj < ai . Therefore
ai = aj + Σ_{ak <ai} mk ak ∈ ⟨a1 , . . . , ai−1 , ai+1 , . . . , aℓ ⟩,
which contradicts the assumption that aj ∉ ⟨a1 , . . . , aj−1 , aj+1 , . . . , aℓ ⟩ for all 1 ≤ j ≤ ℓ. Thus
L = K + Rt^{2a+ai} .
We have ℓR (K 2 /K) = ℓR (K 2 /L) + ℓR (L/K) = 2 because ℓR (L/K) = 1 by Claim 4, so
that R is a 2-AGL ring.
Let us recover Example 3.2 (2) in the present context.
Corollary 6.3. Let c ≥ 4 be an integer such that c ≢ 0 mod 3 and set H = ⟨3, c + 3, 2c⟩. Then R is
a 2-AGL ring such that r(R) = 2 and K/R ≅ R/c as an R-module.
Proof. We set a = c − 3. Then f = 2a + 3 and K = R + Rta .
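The criterion of Theorem 6.2 is easy to test numerically. The following sketch (illustration only, not part of the paper) checks, for H = ⟨3, c + 3, 2c⟩ and a = c − 3 as in the proof above, that 3a ∈ H and that the Frobenius number equals 2a + 3 for several admissible values of c; the bound is a crude cutoff chosen only to make the enumeration finite.

```python
# Sketch (not from the paper): checking the criterion of Theorem 6.2 for
# H = <3, c+3, 2c> with K = R + R t^a and a = c - 3 (Corollary 6.3).
def semigroup(gens, bound):
    H = {0}
    for _ in range(bound):
        H |= {h + a for h in H for a in gens if h + a <= bound}
    return H

for c in [4, 5, 7, 8, 10, 11]:            # c >= 4 and c not divisible by 3
    gens, a = (3, c + 3, 2 * c), c - 3
    bound = 2 * max(gens) ** 2
    H = semigroup(gens, bound)
    f = max(x for x in range(bound) if x not in H)
    assert 3 * a in H and f == 2 * a + 3, (c, f)
print("3a lies in H and f(H) = 2a + 3 for all tested c")
```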
Suppose that ℓ = 3. We set a = Ker ϕ, where
ϕ : P = k[X1 , X2 , X3 ] → T = k[ta1 , ta2 , ta3 ]
is the homomorphism of k-algebras defined by ϕ(Xi ) = tai for i = 1, 2, 3. Let us write
X = X1 , Y = X2 , and Z = X3 for short.
If T is not a Gorenstein ring, then by [H] it is known that a = I_2\begin{pmatrix} X^{α} & Y^{β} & Z^{γ} \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix} for some integers α, β, γ, α′, β ′, γ ′ > 0, where I_2\begin{pmatrix} X^{α} & Y^{β} & Z^{γ} \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix} denotes the ideal of P generated by the 2 × 2 minors of that matrix. With this notation we have the following.
Theorem 6.4. Suppose that H is 3-generated, that is ℓ = 3. Then the following conditions
are equivalent.
(1) R is a 2-AGL ring.
(2) After a suitable permutation of a1 , a2 , a3 , a = I_2\begin{pmatrix} X^2 & Y & Z \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix} for some integers α′ , β ′ , γ ′ such that α′ ≥ 2 and β ′ , γ ′ > 0.
To prove Theorem 6.4, we need a result of [GMP, Section 4]. Throughout, let H = ⟨a1 , a2 , a3 ⟩ and assume that T is not a Gorenstein ring. Hence the ideal a is generated by the 2 × 2 minors of the matrix
\begin{pmatrix} X^{α} & Y^{β} & Z^{γ} \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix},
where 0 < α, β, γ, α′, β ′, γ ′ ∈ Z. Let ∆1 = Z^{γ+γ′} − X^{α′} Y^{β} , ∆2 = X^{α+α′} − Y^{β′} Z^{γ} , and
∆3 = Y^{β+β′} − X^{α} Z^{γ′} . Then a = (∆1 , ∆2 , ∆3 ) and thanks to the theorem of Hilbert–
Burch ([E, Theorem 20.15]), the graded ring T = P/a possesses a graded minimal P -free
resolution of the form
0 −→ P (−m) ⊕ P (−n) \xrightarrow{\begin{pmatrix} X^{α} & Y^{β′} \\ Y^{β} & Z^{γ′} \\ Z^{γ} & X^{α′} \end{pmatrix}} P (−d1 ) ⊕ P (−d2 ) ⊕ P (−d3 ) \xrightarrow{[∆1 \ ∆2 \ ∆3 ]} P \xrightarrow{ε} T −→ 0,
where d1 = deg ∆1 = a3 (γ + γ ′ ), d2 = deg ∆2 = a1 (α + α′ ), d3 = deg ∆3 = a2 (β + β ′),
m = a1 α + d1 = a2 β + d2 = a3 γ + d3 , and n = a1 α′ + d3 = a2 β ′ + d1 = a3 γ ′ + d2 . Therefore
(E) n − m = a2 β ′ − a1 α = a3 γ ′ − a2 β = a1 α′ − a3 γ.
Let KP = P (−d) denote the graded canonical module of P where d = a1 + a2 + a3 . Then,
taking the KP –dual of the above resolution, we get the minimal presentation
(♯) P (d1 − d) ⊕ P (d2 − d) ⊕ P (d3 − d) \xrightarrow{\begin{pmatrix} X^{α} & Y^{β} & Z^{γ} \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix}} P (m − d) ⊕ P (n − d) \xrightarrow{ε} KT −→ 0
of the graded canonical module KT = Ext^2_P (T, KP ) of T . Therefore, because KT = Σ_{c∈PF(H)} T t^{−c} ([GW, Example (2.1.9)]), we have ℓk ([KT ]i ) ≤ 1 for all i ∈ Z, whence
m ≠ n. After the permutation of a2 and a3 if necessary, we may assume without loss of
generality that m < n. Then the presentation (♯) shows that PF(H) = {m − d, n − d}
and f = n − d.
We set a = n − m. Hence a > 0, f = a + (m − d), and K = R + Rta . With this notation
we have the following. Remember that R is the MTM -adic completion of the local ring
TM , where M = (tai | i = 1, 2, 3) denotes the graded maximal ideal of T .
Proposition 6.5 ([GMP, Theorem 4.1]). ℓR (K/R) = αβγ.
Therefore, if R is a 2-AGL ring, then ℓR (K/R) = 2 by Proposition 3.7, so that α = 2
and β = γ = 1 by Proposition 6.5 after a suitable permutation of a1 , a2 , a3 if necessary.
Consequently, Theorem 6.4 is reduced to the following.
Theorem 6.6. Let m < n and assume that a = I_2\begin{pmatrix} X^2 & Y & Z \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix} with α′ , β ′ , γ ′ > 0. Then
R is a 2-AGL ring if and only if α′ ≥ 2. When this is the case, f = 2a + a1 , where a = n − m.
Proof of Theorem 6.6. Notice that R is not an AGL ring, since ℓR (K/R) = 2 by Proposition 6.5. We get by equations (E) above
(i) a2 β ′ = 2a1 + a,
(ii) a3 γ ′ = a2 + a, and
(iii) a1 α′ = a3 + a.
Suppose that α′ ≥ 2. Then 3a = a2 (β ′ − 1) + a1 (α′ − 2) + a3 (γ ′ − 1) ∈ H. Hence
S = K 2 = R + Rta + Rt2a . Therefore, because
(iv) 2a1 + 2a = (2a1 + a) + a = a2 β ′ + (a3 γ ′ − a2 ) = a2 (γ ′ − 1) + a3 γ ′ ∈ H,
(v) a2 + 2a = (a3 γ ′ − a2 ) + (a1 α′ − a3 ) + a2 = a3 (γ ′ − 1) + a1 α′ ∈ H, and
(vi) a3 + 2a = (a1 α′ − a3 ) + (a2 β ′ − 2a1 ) + a3 = a1 (α′ − 2) + a2 β ′ ∈ H,
we get that (t2a1 )+(ta2 , ta3 ) ⊆ K : S = c by Proposition 2.3 (1) and that (2a+a1 )+ai ∈ H
for i = 1, 2, 3. Hence 2a + a1 ∈ PF(H) if 2a + a1 6∈ H. Now notice that mK 2 + K =
K + Rt2a+a1 because 2a + ai ∈ H for i = 2, 3 by equations (v) and (vi), whence t2a+a1 6∈ K
because mK 2 6⊆ K by Proposition 2.3 (4). In particular, 2a + a1 6∈ H. Therefore, ta1 6∈ c,
so that c 6= m and c = (t2a1 ) + (ta2 , ta3 ). Thus R is a 2-AGL ring, because ℓR (R/c) = 2.
Notice that 2a + a1 ∈ PF(H) = {f − a, f } and we get f = 3a + a1 ∈ H if f 6= 2a + a1 ,
which is impossible as 3a ∈ H. Hence f = 2a + a1 .
Conversely, assume that R is a 2-AGL ring. Then 2a ∉ H, since K ≠ K^2 . Therefore
3a ∈ H, since t^{3a} ∈ K^2 . Because mK^2 + K = K + Σ_{j=1}^{3} Rt^{2a+aj} and a2 + 2a ∈ H by
equation (v), we get
K + Rtf = K : m = mK 2 + K = K + Rt2a+a1 + Rt2a+a3 ,
where the second equality follows from Corollary 2.4 (2). Therefore, if t2a+a3 6∈ K, then
f = 2a + a3 , so that PF(H) = {a + a3 , 2a + a3 }, which is absurd because a + a3 ∈ H by
equation (iii). Thus t2a+a3 ∈ K, so that mK 2 +K = K +Rt2a+a1 and f = 2a+a1 . Suppose
now that α′ = 1. Then a1 = a + a3 by equation (iii), whence f = 2a + a1 = 3a + a3 ∈ H
because 3a ∈ H. This is a required contradiction, which completes the proof of Theorem
6.4 as well as that of Theorem 6.6.
When H is 3-generated and e(R) = min{a1 , a2 , a3 } is small, we have the following
structure theorem of H for R to be a 2-AGL ring.
Corollary 6.7. Let ℓ = 3.
(1) Suppose that min{a1 , a2 , a3 } = 3. Then the following conditions are equivalent.
(a) R is a 2-AGL ring.
(b) H = ⟨3, c + 3, 2c⟩ for some c ≥ 4 such that c ≢ 0 mod 3.
(2) If min{a1 , a2 , a3 } = 4, then R is not a 2-AGL ring.
(3) Suppose that min{a1 , a2 , a3 } = 5. Then the following conditions are equivalent.
(a) R is a 2-AGL ring.
(b) (i) H = ⟨5, 3c + 8, 2c + 2⟩ for some c ≥ 2 such that c ≢ 4 mod 5 or (ii) H = ⟨5, c + 4, 3c + 2⟩ for some c ≥ 2 such that c ≢ 1 mod 5.
Proof. Let e = min{a1 , a2 , a3 }. Suppose that R is a 2-AGL ring. Then by Theorem 6.4,
after a suitable permutation of a1 , a2 , a3 we get
a = I_2\begin{pmatrix} X^2 & Y & Z \\ Y^{β′} & Z^{γ′} & X^{α′} \end{pmatrix}
for some integers α′ , β ′ , γ ′ such that α′ ≥ 2 and β ′ , γ ′ > 0. Remember that
a1 = β ′ γ ′ + β ′ + 1,
because a1 = ℓR (R/t^{a1} R) = ℓk (k[Y, Z]/(Y^{β′+1} , Y^{β′} Z, Z^{γ′+1} )). We similarly have that
a2 = 2γ ′ + α′ γ ′ + 2 ≥ 6,
a3 = α′ β ′ + α′ + 2 ≥ 6
since α′ ≥ 2. Therefore, e = a1 = β ′ γ ′ + β ′ + 1, if e ≤ 5.
(1) (a) ⇒ (b) We have β ′ = γ ′ = 1. Hence a2 = α′ + 4 and a3 = 2α′ + 2, that is
H = h3, c + 3, 2ci, where c = α′ +1. We have c 6≡ 0 mod 3 because GCD (3, c+3, 2c) = 1,
whence c ≥ 4.
(b) ⇒ (a) See Corollary 6.3 or notice that a = I_2\begin{pmatrix} X^2 & Y & Z \\ Y & Z & X^{c−1} \end{pmatrix} and apply Theorem 6.4.
(2) We have a1 = β ′ γ ′ + β ′ + 1 = 4, so that β ′ = 1 and γ ′ = 2. Hence a2 = 2α′ + 6 and
a3 = 2α′ + 2, which is impossible because GCD (a1 , a2 , a3 ) = 1.
(3) (a) ⇒ (b) We set c = α′ . Then either β ′ = 1 and γ ′ = 3 or β ′ = 2 and γ ′ = 1. For
the former case we get (i) H = ⟨5, 3c + 8, 2c + 2⟩, where c ≢ 4 mod 5. For the latter case
we get (ii) H = ⟨5, c + 4, 3c + 2⟩, where c ≢ 1 mod 5.
(b) ⇒ (a) For Case (i) we have a = I_2\begin{pmatrix} X^2 & Y & Z \\ Y & Z^3 & X^{c} \end{pmatrix} and for Case (ii) a = I_2\begin{pmatrix} X^2 & Y & Z \\ Y^2 & Z & X^{c} \end{pmatrix},
whence R is a 2-AGL ring by Theorem 6.4.
Even though R is a 2-AGL ring, K/R is not necessarily a free R/c-module (Example
3.5). Here let us note a criterion for the freeness of K/R.
Proposition 6.8. Let r = r(R) ≥ 2 and write PF(H) = {c1 < c2 < · · · < cr = f }.
Assume that R is a 2-AGL ring. Then the following conditions are equivalent.
(1) K/R ≅ (R/c)^{⊕(r−1)} as an R-module.
(2) There is an integer 1 ≤ j ≤ ℓ such that f + aj = ci + cr−i for all 1 ≤ i ≤ r − 1.
Proof. (2) ⇒ (1) We have K = Σ_{i=1}^{r} Rt^{f−ci} and ℓR (K/(mK + R)) = µR (K/R) = r − 1.
Because t^{cr−i} = t^{f−ci+aj} ∈ mK + R for all 1 ≤ i < r, R + Σ_{i=1}^{r−1} Rt^{ci} ⊆ mK + R, whence
ℓR (K/R) ≥ ℓR (K/(mK + R)) + ℓR ((R + Σ_{i=1}^{r−1} Rt^{ci} )/R) = 2(r − 1).
Thus K/R is a free R/c-module by Proposition
3.3 (4).
(1) ⇒ (2) We may assume that aj ∉ ⟨a1 , . . . , aj−1 , aj+1 , . . . , aℓ ⟩ for all 1 ≤ j ≤ ℓ. Hence
m is minimally generated by the elements {tai }1≤i≤ℓ . Therefore, since ℓR (R/c) = 2, by
Proposition 3.3 (2) c = (t2aj ) + (tai | 1 ≤ i ≤ ℓ, i 6= j) for some 1 ≤ j ≤ ℓ. Because K/R
is minimally generated by {tf −ci }1≤i≤r−1 where tf −ci denotes the image of tf −ci in K/R
and because K/R is a free R/c-module, the homomorphism
(♯) ϕ : (R/c)⊕(r−1) → K/R,
ei 7→ tf −ci for each 1 ≤ i ≤ r − 1
of R/c-modules has to be an isomorphism, where {ei }1≤i≤r−1 denotes the standard basis
of (R/c)⊕(r−1) . Now remember that taj 6∈ c, which shows via the isomorphism (♯) above
that taj ·tf −ci 6∈ R for all 1 ≤ i ≤ r − 1, while we have t2aj ·tf −ci ∈ R and tak ·tf −ci ∈ R
for all k 6= j. Therefore, f − ci + aj 6∈ H but (f − ci + aj ) + am ∈ H for all 1 ≤ m ≤ ℓ,
so that f − ci + aj ∈ PF(H) for all 1 ≤ i ≤ r − 1. Notice that f − c1 + aj ≤ f because
f − c1 + aj 6∈ H and that f − c1 + aj < f because c1 6= aj . Therefore, because
{f − cr−1 + aj < · · · < f − c2 + aj < f − c1 + aj } ⊆ PF(H) = {c1 < · · · < cr−1 < cr = f }
and f − c1 + aj < f , we readily get that f + aj = ci + cr−i for all 1 ≤ i ≤ r − 1.
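The freeness criterion of Proposition 6.8 can be tested directly on the semigroups of Examples 5.5 and 5.6. The sketch below (not part of the paper) computes PF(H) by brute force and checks whether some generator aj satisfies f + aj = ci + cr−i for all 1 ≤ i ≤ r − 1.

```python
# Sketch (not from the paper): testing Proposition 6.8 (2) on the semigroups
# of Examples 5.5 and 5.6.
def semigroup(gens, bound):
    H = {0}
    for _ in range(bound):
        H |= {h + a for h in H for a in gens if h + a <= bound}
    return H

def pseudo_frobenius(gens):
    bound = 2 * max(gens) ** 2
    H = semigroup(gens, bound)
    gaps = [x for x in range(bound) if x not in H]
    return sorted(x for x in gaps if all(x + a in H for a in gens))

for gens in [(5, 7, 9, 13), (4, 9, 11, 14)]:
    c = pseudo_frobenius(gens)            # c_1 < ... < c_r = f
    f, r = c[-1], len(c)
    free = any(all(f + aj == c[i] + c[r - 2 - i] for i in range(r - 1))
               for aj in gens)
    print(gens, "PF =", c, "K/R free over R/c:", free)
# (5, 7, 9, 13): True   (Example 5.5 (4): K/R = R/c)
# (4, 9, 11, 14): False (Example 5.6 (3): K/R = R/c + R/m, not free)
```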
We close this paper with a broad method of constructing 2-AGL rings from a given
symmetric numerical semigroup. Let H be a numerical semigroup and assume that H is
symmetric, that is R = k[[H]] is a Gorenstein ring. For the rest of this paper we fix an
arbitrary integer 0 < e ∈ H. Let αi = min{h ∈ H | h ≡ i mod e} for each 0 ≤ i ≤ e − 1.
We set Ape (H) = {αi | 0 ≤ i ≤ e − 1}. We then have Ape (H) = {h ∈ H | h − e 6∈ H},
which is called the Apery set of H mod e. Let us write
Ape (H) = {h0 = 0 < h1 < · · · < he−1 }.
We then have he−1 = hi + he−1−i for all 0 ≤ i ≤ e − 1, because H is symmetric. Notice
that H = ⟨e, h1 , . . . , he−2 ⟩ and he−1 = h1 + he−2 . Let n ≥ 0 be an integer and set
Hn = ⟨e, h1 + ne, h2 + ne, . . . , he−1 + ne⟩.
Notice that H0 = H and for each n > 0,
e < h1 + ne < · · · < he−2 + ne < he−1 + ne
and GCD (e, h1 + ne, . . . , he−2 + ne, he−1 + ne) = 1. We set
Rn = k[[Hn ]], Sn = Rn [Kn ], and cn = Rn : Sn ,
where Kn denotes the fractional canonical ideal of Rn . Let mn = (te ) + (thi +ne | 1 ≤ i ≤
e − 1) be the maximal ideal of Rn .
With this notation we have the following.
Theorem 6.9. For all n ≥ 0 the following assertions hold true.
(1) Kn2 = Kn3 .
(2) ℓRn (Kn2 /Kn ) = n.
(3) Kn /Rn ≅ (Rn /cn )^{⊕(e−2)} as an Rn -module.
Hence R2 is a 2-AGL ring for any choice of the integer 0 < e ∈ H.
Proof. We may assume n > 0. We begin with the following.
Claim 5. The following assertions hold true.
(1) h + ne ∈ Hn for all h ∈ H.
(2) v(Rn ) = e(Rn ) = e.
Proof of Claim 5. (1) Let h = hi + qe with 0 ≤ i ≤ e − 1 and q ≥ 0. Then h + ne =
(hi + ne) + qe ∈ Hn .
(2) Let 1 ≤ i, j ≤ e−1. Then (hi + ne) + (hj + ne) −e = [(hi + hj ) + ne] + (n−1)e ∈ Hn
by Assertion (1). Therefore, m2n = te mn .
Consequently, by Claim 5 (2) we get that {e} ∪ {hi + ne}1≤i≤e−1 is a minimal system
of generators of Hn , whence
PF(Hn ) = {h1 + (n − 1)e, h2 + (n − 1)e, . . . , he−1 + (n − 1)e}.
Therefore, Kn = Σ_{j=0}^{e−2} Rn t^{hj} , so that Sn = Rn [Kn ] = R.
Let 0 ≤ i, j ≤ e − 1 and write hi + hj = hk + qe with 0 ≤ k ≤ e − 1 and q ≥ 0. If
k ≤ e − 2, then t^{hi} t^{hj} = (t^e )^q t^{hk} ∈ Kn , which shows Kn^2 = Kn + Rn t^{he−1} (remember that
he−1 = h1 + he−2 ). Hence Kn^3 = Kn^2 + Σ_{i=1}^{e−2} Rn ·t^{he−1 +hi} . If 1 ≤ i ≤ e − 2 and t^{he−1 +hi} ∉ Kn ,
then t^{he−1 +hi} ∈ Rn t^{he−1} ⊆ Kn^2 as we have shown above. Hence Kn^2 = Kn^3 , which proves
Assertion (1) of Theorem 6.9.
Because Sn = R, we have mn R = t^e R, so that R = Σ_{j=0}^{e−1} Rn t^{hj} . Now notice that by
Claim 5 (1)
(hi + ne) + hj = (hi + hj ) + ne ∈ Hn
for all 0 ≤ i, j ≤ e−1 and we get thi +ne ∈ cn , whence tne Rn +(thi +ne | 1 ≤ i ≤ e−1)Rn ⊆ cn ,
while (n − 1)e + hj 6∈ Hn for all 1 ≤ j ≤ e − 1, so that t(n−1)e 6∈ cn . Thus
cn = tne Rn + (thi +ne | 1 ≤ i ≤ e − 1)Rn = (thi +ne | 0 ≤ i ≤ e − 1)Rn
and hence ℓRn (Rn /cn ) = n. Therefore, ℓRn (Kn /Rn ) = n by Proposition 2.3 (1), which
proves Assertion (2) of Theorem 6.9.
To prove Assertion (3) of Theorem 6.9, it suffices by Assertion (2) to show that ℓRn (Kn /Rn ) =
n(e − 2), because cn = Rn : Kn by Proposition 2.3 (2) and µRn (Kn /Rn ) = e − 2 (notice
that r(Rn ) = e − 1 by Claim 5 (2)). We set Lq = mn^q Kn + Rn . We then have by
induction on q that Lq = Rn + Σ_{j=1}^{e−2} Rn t^{hj +qe} for all 0 ≤ q ≤ n. In fact, let 0 ≤ q < n
and assume that our assertion holds true for q. Then since Lq+1 = mn Lq + Rn , we get
Lq+1 = Rn + mn [Σ_{j=1}^{e−2} Rn t^{hj +qe} ]. Therefore, Lq+1 = Rn + Σ_{j=1}^{e−2} Rn t^{hj +(q+1)e} , because for
all 1 ≤ i ≤ e − 1 and 1 ≤ j ≤ e − 2
(hi + ne) + (hj + qe) = [(hi + hj ) + ne] + qe ∈ Hn
by Claim 5 (1). Hence we obtain a filtration
Kn = L0 ⊇ L1 ⊇ · · · ⊇ Ln = Rn ,
where Lq = Lq+1 + Σ_{j=1}^{e−2} Rn t^{hj +qe} and mn ·(Lq /Lq+1 ) = (0) for 0 ≤ q < n. Consequently,
to see that ℓRn (Kn /Rn ) = n(e − 2), it is enough to show the following.
Claim 6. ℓk (Lq /Lq+1 ) = e − 2 for all 0 ≤ q < n.
Proof of Claim 6. Let 0 ≤ q < n and let {cj }1≤j≤e−2 be elements of the field k such that
Σ_{j=1}^{e−2} cj t^{hj +qe} ∈ Lq+1 . Suppose cj ≠ 0 for some 1 ≤ j ≤ e − 2. Then t^{hj +qe} ∈ Lq+1 . Hence
hj + qe ∈ Hn or (hj + qe) − (hm + (q + 1)e) ∈ Hn for some 1 ≤ m ≤ e − 2. We get
hj + qe 6∈ Hn , since hj + (n − 1)e 6∈ Hn . On the other hand, if
(hj + ne) − (hm + ne + e) = (hj + qe) − (hm + (q + 1)e) ∈ Hn ,
then 1 ≤ m < j ≤ e − 2. Let us write
(hj + ne) − (hm + ne + e) = α0 e + α1 (h1 + ne) + · · · + αe−1 (he−1 + ne)
with integers 0 ≤ αp ∈ Z. Then αj = 0 since (hj + ne) − (hm + ne + e) < hj + ne, so that
hj + ne = (α0 + 1)e + α1 (h1 + ne) + · · · + αe−1 (he−1 + ne) (with the term αj (hj + ne) omitted),
which violates the fact that {e} ∪ {hi + ne}1≤i≤e−1 is a minimal system of generators of
Hn . Thus cj = 0 for all 1 ≤ j ≤ e − 2, whence ℓk (Lq /Lq+1 ) = e − 2 as claimed.
Therefore, ℓRn (Kn /Rn ) = n(e − 2), so that Kn /Rn ≅ (Rn /cn )^{⊕(e−2)} as an Rn -module,
which completes the proof of Theorem 6.9.
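The length in Theorem 6.9 (2) can be verified numerically in small cases: Kn and Kn^2 are monomial fractional ideals, so ℓRn (Kn^2 /Kn ) is simply the number of exponents lying in the value set of Kn^2 but not in that of Kn . The sketch below (not part of the paper) does this for the symmetric semigroup H = ⟨3, 5⟩ with e = 3, i.e. Hn = ⟨3, 5 + 3n, 10 + 3n⟩; the bound is a crude cutoff, large enough for these cases.

```python
# Sketch (not from the paper): numerical check of Theorem 6.9 (2) for
# H = <3, 5>, e = 3, Ap_3(H) = {0, 5, 10}, so H_n = <3, 5+3n, 10+3n> and
# K_n = R_n + R_n t^5.
def semigroup(gens, bound):
    H = {0}
    for _ in range(bound):
        H |= {h + a for h in H for a in gens if h + a <= bound}
    return H

apery = [0, 5, 10]                        # h_0 < h_1 < h_2
for n in [1, 2, 3, 4]:
    gens = (3, 5 + 3 * n, 10 + 3 * n)
    bound = 8 * max(gens)
    Hn = semigroup(gens, bound)
    valK  = {h + x for h in apery[:-1] for x in Hn}                       # val(K_n)
    valK2 = {h + k + x for h in apery[:-1] for k in apery[:-1] for x in Hn}
    length = len([y for y in valK2 if y <= bound and y not in valK])
    print(n, length)                      # Theorem 6.9 (2) predicts length == n
```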
References
[BF] V. Barucci and R. Fröberg, One-dimensional almost Gorenstein rings, J. Algebra, 188 (1997), 418–442.
[E] D. Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, GTM 150, Springer-Verlag, 1994.
[El] J. Elias, On the canonical ideals of one-dimensional Cohen-Macaulay local rings, Proc. Edinburgh Math. Soc., 59 (2016), 77–90.
[ES] P. Eakin and A. Sathaye, Prestable ideals, J. Algebra, 41 (1976), 439–454.
[GGHV] L. Ghezzi, S. Goto, J. Hong, and W. V. Vasconcelos, Invariants of Cohen-Macaulay rings associated to their canonical ideals, arXiv:1701.05592v1.
[GMP] S. Goto, N. Matsuoka, and T. T. Phuong, Almost Gorenstein rings, J. Algebra, 379 (2013), 355–381.
[GNO] S. Goto, K. Nishida, and K. Ozeki, Sally modules of rank one, Michigan Math. J., 57 (2008), 359–381.
[GMTY1] S. Goto, N. Matsuoka, N. Taniguchi, and K.-i. Yoshida, The almost Gorenstein Rees algebras of parameters, J. Algebra, 452 (2016), 263–278.
[GMTY2] S. Goto, N. Matsuoka, N. Taniguchi, and K.-i. Yoshida, The almost Gorenstein Rees algebras over two-dimensional regular local rings, J. Pure Appl. Algebra, 220 (2016), 3425–3436.
[GMTY3] S. Goto, N. Matsuoka, N. Taniguchi, and K.-i. Yoshida, On the almost Gorenstein property in Rees algebras of contracted ideals, arXiv:1604.04747v2.
[GMTY4] S. Goto, N. Matsuoka, N. Taniguchi, and K.-i. Yoshida, The almost Gorenstein Rees algebras of pg-ideals, good ideals, and powers of the maximal ideals, arXiv:1607.05894v2.
[GRTT] S. Goto, M. Rahimi, N. Taniguchi, and H. L. Truong, When are the Rees algebras of parameter ideals almost Gorenstein graded rings?, Kyoto J. Math. (to appear).
[GTT] S. Goto, R. Takahashi, and N. Taniguchi, Almost Gorenstein rings - towards a theory of higher dimension, J. Pure Appl. Algebra, 219 (2015), 2666–2712.
[GW] S. Goto and K. Watanabe, On graded rings I, J. Math. Soc. Japan, 30 (1978), no. 2, 179–213.
[H] J. Herzog, Generators and relations of Abelian semigroups and semigroup rings, Manuscripta Math., 3 (1970), 175–193.
[HHS] J. Herzog, T. Hibi, and D. I. Stamate, The trace ideal of the canonical module, Preprint 2017, arXiv:1612.02723v1.
[HK] J. Herzog and E. Kunz, Der kanonische Modul eines Cohen-Macaulay-Rings, Lecture Notes in Mathematics, 238, Springer-Verlag, 1971.
[L] J. Lipman, Stable ideals and Arf rings, Amer. J. Math., 93 (1971), 649–685.
[S1] J. Sally, Numbers of generators of ideals in local rings, Lecture Notes in Pure and Applied Mathematics, 35, Marcel Dekker, Inc., 1978.
[S2] J. Sally, Cohen-Macaulay local rings of maximal embedding dimension, J. Algebra, 56 (1979), 168–183.
[S3] J. Sally, Hilbert coefficients and reduction number 2, J. Algebraic Geom., 1 (1992), 325–333.
[T] N. Taniguchi, On the almost Gorenstein property of determinantal rings, arXiv:1701.06690v1.
[V1] W. V. Vasconcelos, Hilbert functions, analytic spread, and Koszul homology, Commutative algebra: Syzygies, multiplicities, and birational algebra (South Hadley, 1992), Contemp. Math., 159, 410–422, Amer. Math. Soc., Providence, RI, 1994.
[V2] W. V. Vasconcelos, The Sally modules of ideals: a survey, Preprint 2017, arXiv:1612.06261v1.
Thai Nguyen University of Education, Khoa Toan, truong DHSP Thai Nguyen
E-mail address: [email protected]
Department of Mathematics, School of Science and Technology, Meiji University,
1-1-1 Higashi-mita, Tama-ku, Kawasaki 214-8571, Japan
E-mail address: [email protected]
Department of Mathematics and Informatics, Graduate School of Science and Technology, Chiba University, Chiba-shi 263, Japan
E-mail address: [email protected]
Department of Mathematics, School of Science and Technology, Meiji University,
1-1-1 Higashi-mita, Tama-ku, Kawasaki 214-8571, Japan
E-mail address: [email protected]
| 0 |
arXiv:1705.03980v2 [math.AC] 25 Jan 2018
AUSLANDER MODULES
PEYMAN NASEHPOUR
Dedicated to my father, Maestro Nasrollah Nasehpour
Abstract. In this paper, we introduce the notion of Auslander modules, inspired from Auslander’s zero-divisor conjecture (theorem) and give some interesting results for these modules. We also investigate torsion-free modules.
0. Introduction
Auslander’s zero-divisor conjecture in commutative algebra states that if R is a
Noetherian local ring, M is a nonzero R-module of finite type, and finite projective
dimension, and r ∈ R is not a zero-divisor on M , then r is not a zero-divisor on
R [6, p. 8] and [5, p. 496]. This “conjecture” is in fact a theorem, after Peskine
and Szpiro in [15] showed that Auslander’s zero-divisor theorem is a corollary of
their new intersection theorem and thereby proved it for a large class of local rings.
Also see [16, p. 417]. Note that its validity without any restrictions followed when
Roberts [17] proved the new intersection theorem in full generality. Also see Remark
9.4.8 in [1].
Let M be an arbitrary unital nonzero module over a commutative ring R with
a nonzero identity. Inspired from Auslander’s zero-divisor theorem, one may ask
when the inclusion ZR (R) ⊆ ZR (M ) holds, where by ZR (M ), we mean the set of
all zero-divisors of the R-module M . In Definition 1.1, we define an R-module M
to be Auslander if ZR (R) ⊆ ZR (M ) and in Proposition 1.2, we give a couple of
examples for the families of Auslander modules. The main theme of §1 is to see
under what conditions if M is an Auslander R-module, then the S-module M ⊗R S
is Auslander, where S is an R-algebra (see Theorem 1.4, Theorem 1.6, and Theorem
1.10). For example, in Corollary 1.11, we show that if M is an Auslander R-module,
B a content R-algebra, and M has property (A), then M ⊗R B is an Auslander
B-module. For the definition of content algebras refer to [14, Section 6].
On the other hand, let us recall that an R-module M is torsion-free if the natural
map M → M ⊗ Q is injective, where Q is the total quotient ring of the ring R
[1, p. 19]. It is easy to see that M is a torsion-free R-module if and only if
ZR (M ) ⊆ ZR (R). In §2, we investigate torsion-free property under polynomial and
power series extensions (see Theorem 2.1 and Theorem 2.2). We also investigate
torsion-free Auslander modules (check Proposition 2.5, Theorem 2.7, and Theorem
2.9).
In this paper, all rings are commutative with non-zero identities and all modules
are unital.
2010 Mathematics Subject Classification. 13A15, 13B25, 13F25.
Key words and phrases. Auslander modules, Auslander’s zero-divisor conjecture, content algebras, torsion-free modules.
1. Auslander Modules
We start the first section by defining Auslander modules:
Definition 1.1. We define an R-module M to be an Auslander module, if r ∈ R
is not a zero-divisor on M , then r is not a zero-divisor on R, or equivalently, if the
following property holds:
ZR (R) ⊆ ZR (M ).
Let us recall that if M is an R-module, the content of m ∈ M , denoted by c(m),
is defined to be the following ideal:
c(m) = ⋂ {I ∈ Id(R) : m ∈ IM },
where by Id(R), we mean the set of all ideals of R. The R-module M is said to be
a content R-module, if m ∈ c(m)M , for all m ∈ M [14]. In the following, we give
some families of Auslander modules:
Proposition 1.2 (Some Families of Auslander Modules). Let M be an R-module.
Then the following statements hold:
(1) If R is a domain, then M is an Auslander R-module.
(2) If M is a flat and content R-module such that for any s ∈ R, there is an
x ∈ M such that c(x) = (s). Then M is an Auslander R-module.
(3) If M is an R-module such that Ann(M ) = (0), then M is an Auslander
R-module.
(4) If for any nonzero s ∈ R, there is an x ∈ M such that s · x 6= 0, i.e.
Ann(M ) = (0), then HomR (M, M ) is an Auslander R-module.
(5) If N is an R-submodule of an R-module M and N is Auslander, then M
is also Auslander.
(6) If M is an Auslander R-module, then M ⊕ M ′ is an Auslander R-module
for any R-module M ′ . In particular, if {Mi }i∈Λ is a family of R-modules
and there is an i ∈ Λ, say i0 , such that Mi0 is an Auslander R-module,
then ⊕_{i∈Λ} Mi and ∏_{i∈Λ} Mi are Auslander R-modules.
Proof. The statement (1) is obvious. We prove the other statements:
(2): Let r ∈ ZR (R). By definition, there is a nonzero s ∈ R such that r · s = 0.
Since in content modules c(x) = (0) if and only if x = 0 [14, Statement 1.2] and
by assumption, there is a nonzero x ∈ M such that c(x) = (s), we obtain that
r · c(x) = (0). Also, since M is a flat and content R-module, by [14, Theorem 1.5],
r · c(x) = c(r · x). This implies that r ∈ ZR (M ).
(3): Suppose that r · s = 0 for some nonzero s in R. By assumption, there exists
an x in M such that s · x 6= 0, but r · (s · x) = 0, and so r is a zero-divisor on M .
(4): Let r ∈ ZR (R). So, there is a nonzero s ∈ R such that r · s = 0. Define
fs : M −→ M by fs (x) = s · x. By assumption, fs is a nonzero element of
HomR (M, M ). But rfs = 0. This means that r ∈ ZR (Hom(M, M )).
(5): The proof is straightforward, if we consider that Z(N ) ⊆ Z(M ).
The statement (6) is just a corollary of the statement (5).
Proposition 1.3. Let M be an Auslander R-module and S a multiplicatively closed
subset of R contained in R − ZR (M ). Then, MS is an Auslander RS -module.
Proof. Let ZR (R) ⊆ ZR (M ) and S be a multiplicatively closed subset of R such
that S ⊆ R − ZR (M ). Take r1 /s1 ∈ ZRS (RS ). So there exists an r2 /s2 6= 0/1 such
that (r1 · r2 )/(s1 · s2 ) = 0/1. Since S ⊆ R − ZR (R), we have r1 · r2 = 0, where
r2 6= 0. But ZR (R) ⊆ ZR (M ), so r1 ∈ ZR (M ). Consequently, there is a nonzero
m ∈ M such that r1 · m = 0. Since S ⊆ R − ZR (M ), m/1 is a nonzero element of
MS . This point that r1 /s1 · m/1 = 0/1, causes r1 /s1 to be an element of ZRS (MS )
and the proof is complete.
Let us recall that an R-module M has property (A), if each finitely generated
ideal I ⊆ ZR (M ) has a nonzero annihilator in M [11, Definition 10]. Examples
of modules having property (A) include modules having very few zero-divisors [11,
Definition 6]. Especially, finitely generated modules over Noetherian rings have
property (A) [7, p. 55]. Homological aspects of modules having very few zerodivisors have been investigated in [12]. Finally, we recall that if R is a ring, G a
monoid, and f = r1 X g1 + · · · + rn X gn is an element of the monoid ring R[G], then
the content of f , denoted by c(f ), is the finitely generated ideal (r1 , . . . , rn ) of R.
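As a concrete, entirely elementary illustration of the role of c(f ), not taken from the paper: over R = Z/6 the polynomial f = 2 + 2X is a zero-divisor of R[X], and a single nonzero constant, r = 3, annihilates all of c(f ) = (2), exactly as McCoy's theorem predicts.

```python
# Sketch (illustration only): the content ideal and McCoy's theorem over Z/6.
n = 6
f = [2, 2]                                # coefficients of f = 2 + 2X in (Z/6)[X]

def poly_mul(p, q, n):
    h = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            h[i + j] = (h[i + j] + a * b) % n
    return h

print(poly_mul(f, [3], n))                # [0, 0]: the constant 3 kills f
print([(3 * a) % n for a in f])           # [0, 0]: 3 annihilates c(f) = (2)
```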
Theorem 1.4. Let the R-module M have property (A) and G be a commutative,
cancellative, and torsion-free monoid. Then, M [G] is an Auslander R[G]-module if
and only if M is an Auslander R-module.
Proof. (⇒): Let r ∈ ZR (R). So, r ∈ ZR[G] (R[G]) and by assumption, r ∈
ZR[G] (M [G]). Clearly, this means that there is a nonzero g in ZR[G] (M [G]) such
that rg = 0. Therefore, there is a nonzero m in M such that rm = 0.
(⇐): Let f ∈ ZR[G] (R[G]). By [11, Theorem 2], there is a nonzero element
r ∈ R such that f · r = 0. This implies that c(f ) ⊆ ZR (R). But M is an Auslander
R-module, so ZR (R) ⊆ ZR (M ), which implies that c(f ) ⊆ ZR (M ). On the other
hand, M has property (A). Therefore, c(f ) has a nonzero annihilator in M . Hence,
f ∈ ZR[G] (M [G]) and the proof is complete.
Note that a semimodule version of Theorem 1.4 has been given in [10].
It is good to mention that if R is a ring and f = a0 + a1 X + · · · + an X n + · · ·
is an element of R[[X]], then Af is defined to be the ideal of R generated by the
coefficients of f , i.e.
Af = (a0 , a1 , . . . , an , . . .).
One can easily check that if R is Noetherian, then Af = c(f ). The following lemma
is a generalization of Theorem 5 in [4]:
Lemma 1.5. Let R be a Noetherian ring, M a finitely generated R-module, f ∈
R[[X]], g ∈ M [[X]] − {0}, and f g = 0. Then, there is a nonzero constant m ∈ M
such that f · m = 0.
Proof. Define c(g), the content of g, to be the R-submodule of M generated by its
coefficients. If c(f )c(g) = (0), then choose a nonzero m ∈ c(g). Clearly, f · m = 0.
Otherwise, by Theorem 3.1 in [2], one can choose a positive integer k, such that
c(f )c(f )k−1 c(g) = 0, while c(f )k−1 c(g) 6= 0. Now for each nonzero element m in
c(f )k−1 c(g), we have f · m = 0 and the proof is complete.
Theorem 1.6. Let R be a Noetherian ring and the R-module M have property (A).
Then, M [[X]] is an Auslander R[[X]]-module if and only if M is an Auslander Rmodule.
Proof. By Lemma 1.5, the proof is just a mimicking of the proof of Theorem 1.4.
Since finitely generated modules over Noetherian rings have property (A) [7, p.
55], we have the following corollary:
Corollary 1.7. Let R be a Noetherian ring and M be a finitely generated R-module.
Then, M [[X]] is an Auslander R[[X]]-module if and only if M is an Auslander Rmodule.
Remark 1.8 (Ohm-Rush Algebras). Let us recall that if B is an R-algebra, then
B is said to be an Ohm-Rush R-algebra, if f ∈ c(f )B, for all f ∈ B [3, Definition
2.1]. It is easy to see that if P is a projective R-algebra, then P is an OhmRush R-algebra [14, Corollary 1.4]. Note that if R is a Noetherian ring and f =
a0 + a1 X + · · · + an X n + · · · is an element of R[[X]], then Af = c(f ), where Af is
the ideal of R generated by the coefficients of f . This simply implies that R[[X]] is
an Ohm-Rush R-algebra.
Now we go further to define McCoy algebras, though we don’t go through them
deeply in this paper. McCoy semialgebras (and algebras) and their properties
have been discussed in more details in author’s recent paper on zero-divisors of
semimodules and semialgebras [10].
Definition 1.9. We say that B is a McCoy R-algebra, if B is an Ohm-Rush Ralgebra and f · g = 0 with g 6= 0 implies that there is a nonzero r ∈ R such that
c(f ) · r = (0), for all f, g ∈ B.
Since any content algebra is a McCoy algebra [14, Statement 6.1], we have plenty
of examples for McCoy algebras. For instance, if G is a torsion-free abelian group
and R is a ring, then R[G] is a content - and therefore, a McCoy - R-algebra [13].
For other examples of McCoy algebras, one can refer to content algebras given in
Examples 6.3 in [14]. Now we proceed to give the following general theorem on
Auslander modules:
Theorem 1.10. Let M be an Auslander R-module and B a faithfully flat McCoy
R-algebra. If M has property (A), then M ⊗R B is an Auslander B-module.
Proof. Let f ∈ ZB (B). So by definition, there is a nonzero r ∈ R such that
c(f ) · r = (0). This implies that c(f ) ⊆ ZR (R). But M is an Auslander R-module.
Therefore, c(f ) ⊆ ZR (M ). Since c(f ) is a finitely generated ideal of R [14, p. 3]
and M has property (A), there is a nonzero m ∈ M such that c(f ) · m = (0). This
means that c(f ) ⊆ AnnR (m). Therefore, c(f )B ⊆ AnnR (m)B. Since any McCoy
R-algebra is by definition an Ohm-Rush R-algebra, we have that f ∈ c(f )B. Our
claim is that AnnR (m)B = AnnB (1 ⊗ m) and here is the proof: Since
0 −→ R/ AnnR (m) −→ M
is an R-exact sequence and B is a faithfully flat R-module, we have the following
B-exact sequence:
0 −→ B/ AnnR (m)B −→ M ⊗R B,
with AnnR (m)B = Ann(m ⊗R 1B ). This means that f ∈ ZB (M ⊗R B) and the
proof is complete.
Corollary 1.11. Let M be an Auslander R-module and B a content R-algebra. If
M has property (A), then M ⊗R B is an Auslander B-module.
Proof. By definition of content algebras [14, Section 6], any content R-algebra is
faithfully flat. Also, by [14, Statement 6.1], any content R-algebra is a McCoy
R-algebra.
Question 1.12. Is there any faithfully flat McCoy algebra that is not a content
algebra?
2. Torsion-Free Modules
Let us recall that if R is a ring, M an R-module, and Q the total ring of fractions
of R, then M is torsion-free if the natural map M → M ⊗ Q is injective [1, p.
19]. It is straightforward to see that M is a torsion-free R-module if and only if
ZR (M ) ⊆ ZR (R). Therefore, the notion of Auslander modules defined in Definition
1.1 is a kind of dual to the notion of torsion-free modules.
The proof of the following theorem is quite similar to the proof of Proposition
1.4. Therefore, we just mention the proof briefly.
Theorem 2.1. Let the ring R have property (A) and G be a commutative, cancellative, and torsion-free monoid. Then, the R[G]-module M [G] is torsion-free if
and only if the R-module M is torsion-free.
Proof. (⇒): Let r ∈ ZR (M ). Clearly, this implies that r ∈ ZR[G] (M [G]). But
the R[G]-module M [G] is torsion-free. Therefore, ZR[G] (M [G]) ⊆ ZR[G] (R[G]). So,
r ∈ ZR (R).
(⇐): Let f ∈ ZR[G] (M [G]). By [11, Theorem 2], there is a nonzero m ∈ M
such that c(f ) · m = 0, which means that c(f ) ⊆ ZR (M ). Since M is torsion-free,
c(f ) ⊆ ZR (R), and since R has property (A), f ∈ ZR[G] (R[G]) and the proof is
complete.
Theorem 2.2. Let R be a Noetherian ring and M be a finitely generated R-module.
Then, the R[[X]]-module M [[X]] is torsion-free if and only if the R-module M is
torsion-free.
Proof. (⇒): Its proof is similar to the proof of Theorem 2.1 and therefore, we don’t
bring it here.
(⇐): Let f ∈ ZR[[X]] (M [[X]]). By Lemma 1.5, there is a nonzero element
m ∈ M such that f · m = 0. By Remark 1.8, this implies that c(f ) ⊆ ZR (M ). But
M is torsion-free, so ZR (M ) ⊆ ZR (R), which implies that c(f ) ⊆ ZR (R). On the
other hand, since every Noetherian ring has property (A) (check [7, Theorem 82,
p. 56]), c(f ) has a nonzero annihilator in R. This means that f ∈ ZR[[X]] (R[[X]]),
Q.E.D.
We continue this section by investigating torsion-free Auslander modules.
Remark 2.3. In the following, we show that there are examples of modules that are
Auslander but not torsion-free and also there are some modules that are torsion-free
but not Auslander.
(1) Let R be a ring and S ⊆ R − ZR (R) a multiplicatively closed subset of
R. Then, it is easy to see that ZR (R) = ZR (RS ), i.e. RS is a torsion-free
Auslander R-module.
(2) Let D be a domain and M a D-module such that ZD (M ) 6= {0}. Clearly,
ZD (D) = {0} and therefore, M is Auslander, while M is not torsion-free.
For example, if D is a domain that is not a field, then D has an ideal I
such that I 6= (0) and I 6= D. It is clear that ZD (D/I) ⊇ I.
(3) Let k be a field and consider the ideal I = (0)⊕k of the ring R = k ⊕k. It is
easy to see that ZR (R) = ((0)⊕k)∪(k⊕(0)), while ZR (R/I) = (0)⊕k. This
means that the R-module R/I is torsion-free, while it is not Auslander.
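The computation behind Remark 2.3 (3) can be carried out explicitly for k = Z/2. The following sketch (not part of the paper) enumerates ZR (R) and ZR (R/I) for R = k ⊕ k and I = (0) ⊕ k, identifying R/I with k via the first coordinate.

```python
# Sketch (not from the paper): Remark 2.3 (3) with k = Z/2, R = k x k, I = (0) x k.
R = [(a, b) for a in (0, 1) for b in (0, 1)]

def mul(r, s):
    return ((r[0] * s[0]) % 2, (r[1] * s[1]) % 2)

Z_R = {r for r in R if any(mul(r, s) == (0, 0) for s in R if s != (0, 0))}
# R/I is identified with k via the first coordinate, on which r acts by r[0].
Z_RmodI = {r for r in R if any((r[0] * x) % 2 == 0 for x in (1,))}

print(sorted(Z_R))       # [(0, 0), (0, 1), (1, 0)]
print(sorted(Z_RmodI))   # [(0, 0), (0, 1)]: Z(R/I) is contained in Z(R), not conversely
```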
Proposition 2.4 (Some Families of Torsion-free Auslander Modules). Let M be
an R-module. Then, the following statements hold:
(1) If R is a domain and M is a flat R-module, then M is torsion-free Auslander R-module.
(2) If M is a flat and content R-module such that for any s ∈ R, there is an
x ∈ M such that c(x) = (s). Then M is a torsion-free Auslander R-module.
(3) If R is a Noetherian ring and M is a finitely generated flat R-module and
for any nonzero s ∈ R, there is an x ∈ M such that s · x 6= 0. Then
HomR (M, M ) is a torsion-free Auslander R-module.
(4) If M is an Auslander R-module, and M and M ′ are both flat modules, then
M ⊕ M ′ is a torsion-free Auslander R-module. In particular, if {Mi }i∈Λ is
a family of flat R-modules and there is an i ∈ Λ, say i0 , such that Mi0 is an
Auslander R-module, then ⊕_{i∈Λ} Mi is a torsion-free Auslander R-module.
(5) If R is a coherent ring and {Mi }i∈Λ is a family of flat R-modules and there
is an i ∈ Λ, say i0 , such that Mi0 is an Auslander R-module, then ∏_{i∈Λ} Mi
is a torsion-free Auslander R-module.
Proof. It is trivial that every flat module is torsion-free. By considering Proposition
1.2, the proof of statements (1) and (2) is straightforward.
The proof of statement (3) is based on Theorem 7.10 in [9] that says that each
finitely generated flat module over a local ring is free. Now if R is a Noetherian
ring and M is a flat and finitely generated R-module, then M is a locally free Rmodule. This causes HomR (M, M ) to be also a locally free R-module and therefore,
HomR (M, M ) is R-flat and by Proposition 1.2, a torsion-free Auslander R-module.
The proof of the statements (4) and (5) is also easy, if we note that the direct
sum of flat modules is flat [8, Proposition 4.2] and, if R is a coherent ring, then the
direct product of flat modules is flat [8, Theorem 4.47].
Proposition 2.5. Let both the ring R and the R-module M have property (A)
and G be a commutative, cancellative, and torsion-free monoid. Then, M [G] is a
torsion-free Auslander R[G]-module if and only if M is a torsion-free Auslander
R-module.
Proof. By Theorem 1.4 and Theorem 2.1, the statement holds.
Corollary 2.6. Let R be a Noetherian ring and M a finitely generated R-module,
and G a commutative, cancellative, and torsion-free monoid. Then, M [G] is a
torsion-free Auslander R[G]-module if and only if M is a torsion-free Auslander
R-module.
Theorem 2.7. Let M be a flat Auslander R-module and B a faithfully flat McCoy
R-algebra. If M has property (A), then M ⊗R B is a torsion-free Auslander Bmodule.
Proof. By Theorem 1.10, ZB (B) ⊆ ZB (M ⊗R B). On the other hand, since M is
a flat R-module, by [8, Proposition 4.1], M ⊗R B is a flat B-module. This implies
that ZB (M ⊗R B) ⊆ ZB (B) and the proof is complete.
Corollary 2.8. Let M be a flat Auslander R-module and B a content R-algebra.
If M has property (A), then M ⊗R B is a torsion-free Auslander B-module.
Theorem 2.9. Let R be a Noetherian ring and M a finitely generated R-module.
Then, M [[X]] is a torsion-free Auslander R[[X]]-module if and only if M is a
torsion-free Auslander R-module.
Proof. Since M is finite and R is Noetherian, M is also a Noetherian R-module.
This means that both the ring R and the module M have property (A). Now by
Theorem 1.6 and Theorem 2.2, the proof is complete.
Acknowledgements
The author was partly supported by the Department of Engineering Science at
Golpayegan University of Technology and wishes to thank Professor Winfried Bruns
for his invaluable advice. The author is also grateful for the useful comments by
the anonymous referee.
References
[1] W. Bruns and J. Herzog, Cohen-Macaulay Rings, revised edn., Cambridge University Press,
Cambridge, 1998.
[2] N. Epstein and J. Shapiro, A Dedekind-Mertens theorem for power series rings, Proc. Amer.
Math. Soc. 144 (2016), 917–924.
[3] N. Epstein and J. Shapiro, The Ohm-Rush content function, J. Algebra Appl., 15, No. 1
(2016), 1650009 (14 pages).
[4] D. E. Fields, Zero divisors and nilpotent elements in power series rings, Proc. Amer. Math.
Soc. 27 (1971), no. 3, 427–433.
[5] M. Hochster, Intersection Problems and Cohen-Macaulay Modules, in Algebraic Geometry:
Bowdoin 1985 (part 2) (1987), 491–501.
[6] M. Hochster, Topics in the homological theory of modules over commutative rings, CBMS
Regional Conf. Ser. in Math., 24, Amer. Math. Soc, Providence, RI, 1975.
[7] I. Kaplansky, Commutative Rings, Allyn and Bacon, Boston, 1970.
[8] T. Y. Lam, Lectures on Modules and Rings, Springer-Verlag, Berlin, 1999.
[9] H. Matsumura, Commutative Ring Theory, Vol. 8., Cambridge University Press, Cambridge,
1989.
[10] P. Nasehpour, On zero-divisors of semimodules and semialgebras, arXiv preprint (2017),
arXiv:1702.00810.
[11] P. Nasehpour, Zero-divisors of semigroup modules, Kyungpook Math. J., 51 (1) (2011),
37–42.
[12] P. Nasehpour and Sh. Payrovi, Modules having few zero-divisors, Comm. Algebra 38 (2010),
3154–3162.
[13] D. G. Northcott, A generalization of a theorem on the content of polynomials, Proc. Cambridge Phil. Soc. 55 (1959), 282–288.
[14] J. Ohm and D. E. Rush, Content modules and algebras, Math. Scand. 31 (1972), 49–68.
[15] C. Peskine and L. Szpiro, Dimension projective finie et cohomologie locale, Publ. Math. IHES
42, (1973), 47-119.
[16] P. Roberts, Intersection theorems, in: Commutative Algebra, in: Math. Sci. Res. Inst. Publ.,
Vol. 15, Springer-Verlag, Berlin, 1989, 417–436.
[17] P. Roberts, Le théoreme d’intersection, CR Acad. Sci. Paris Ser. I Math 304 (1987), 177–180.
Department of Engineering Science, Golpayegan University of Technology, Golpayegan, Iran
E-mail address: [email protected], [email protected]
| 0 |
arXiv:1406.0915v2 [math.AT] 11 May 2016
A VANISHING THEOREM FOR
THE p-LOCAL HOMOLOGY OF COXETER GROUPS
TOSHIYUKI AKITA
A BSTRACT. Given an odd prime number p and a Coxeter group W such that the
order of the product st is prime to p for every Coxeter generators s,t of W , we
prove that the p-local homology groups Hk (W, Z(p) ) vanish for 1 ≤ k ≤ 2(p − 2).
This generalize a known vanishing result for symmetric groups due to Minoru
Nakaoka.
1. I NTRODUCTION
Coxeter groups are important objects in many branches of mathematics, such as
Lie theory and representation theory, combinatorial and geometric group theory,
topology and geometry. Since the pioneering work of Serre [21], Coxeter groups
have been studied in group cohomology as well. See the book by Davis [9] and §2.2
of this paper for brief outlooks. In this paper, we will study the p-local homology
of Coxeter groups for odd prime numbers p. For an arbitrary Coxeter group W ,
its integral homology group Hk (W, Z) is known to be a finite abelian group for all
k > 0, and hence it decomposes into a finite direct sum of p-local homology groups
each of which is a finite abelian p-group:
Hk (W, Z) ≅ ⊕_p Hk (W, Z(p) ).
According to a result of Howlett [13], the first and second p-local homology
groups, H1 (W, Z(p) ) and H2 (W, Z(p) ), are trivial for every odd prime number p.
On the other hand, the symmetric group of n letters Sn (n ≥ 2) is the Coxeter
group of type An−1 . Much is known about the (co)homology of symmetric groups.
Most notably, in his famous two papers, Nakaoka proved the homology stability for
symmetric groups [16] and computed the stable mod p homology [17]. As a consequence of his results, Hk (Sn , Z(p) ) vanishes for 1 ≤ k ≤ 2(p − 2) (see Theorem
2.4 below). The purpose of this paper is to generalize vanishing of Hk (Sn , Z(p) ) to
all Coxeter groups:
Theorem 1.1. Let p be an odd prime number and W a p-free Coxeter group. Then
Hk (W, Z(p) ) = 0 holds for 1 ≤ k ≤ 2(p − 2).
Here a Coxeter group W is said to be p-free if the order of the product st is prime
to p for every distinct Coxeter generators s,t of W . We should remark that, for p ≥
2010 Mathematics Subject Classification. Primary 20F55, 20J06; Secondary 55N91.
Key words and phrases. Coxeter groups, Group homology.
5, the p-freeness assumption is necessary and the vanishing range 1 ≤ k ≤ 2(p − 2)
is best possible. The situation is somewhat different for p = 3. See §5.4.
The proof of Theorem 1.1 consists of two steps, a case by case argument for
finite p-free Coxeter groups with relatively small rank, and the induction on the
number of generators. The induction is made possible by means of the equivariant
homology of Coxeter complexes and the Leray spectral sequence converging to
the equivariant homology. We now briefly outline the contents of this paper. In §2.1, we recall definitions and relevant facts concerning Coxeter groups. Known results about the homology of Coxeter groups and their consequences are reviewed in §2.2. After a discussion of the equivariant homology of Coxeter complexes in §3, the proof of Theorem 1.1 is given in §4. The final section §5 consists of miscellaneous results; there we give some
classes of Coxeter groups such that all the p-local homology groups vanish.
Notation. Throughout this paper, p is an odd prime number unless otherwise stated.
Z(p) is the localization of Z at the prime p (the ring of p-local integers). For a
finite abelian group A, its p-primary component is denoted by A(p) . Note that
A(p) ≅ A ⊗Z Z(p). For a group G and a (left) G-module M, the co-invariant of M is
denoted by MG (see [8, II.2] for the definition). For a prime number p ≥ 2, we denote the cyclic group of order p and the field with p elements by the same symbol
Z/p.
2. PRELIMINARIES
2.1. Coxeter groups. We recall definitions and relevant facts concerning Coxeter groups. Basic references are [1, 7, 9, 14]; see also [12] for finite Coxeter groups. Let S be a finite set and m : S × S → N ∪ {∞} a map satisfying the following conditions:
(1) m(s, s) = 1 for all s ∈ S
(2) 2 ≤ m(s,t) = m(t, s) ≤ ∞ for all distinct s,t ∈ S.
The map m is represented by the Coxeter graph Γ whose vertex set is S and whose
edges are the unordered pairs {s,t} ⊂ S such that m(s,t) ≥ 3. The edges with
m(s,t) ≥ 4 are labeled by those numbers. The Coxeter system associated to Γ is the
pair (W, S), where W = W(Γ) is the group generated by s ∈ S subject to the fundamental relations (st)^{m(s,t)} = 1 (m(s,t) < ∞):
W := ⟨ s ∈ S | (st)^{m(s,t)} = 1 (m(s,t) < ∞) ⟩.
The group W is called the Coxeter group of type Γ, and elements of S are called
Coxeter generators of W . The cardinality of S is called the rank of W and is
denoted by |S| or rankW . Note that the order of the product st is precisely m(s,t).
For a subset T ⊆ S (possibly T = ∅), the subgroup WT := ⟨T⟩ of W generated by
elements t ∈ T is called a parabolic subgroup. In particular, WS = W and W∅ = {1}.
It is known that (WT , T ) is a Coxeter system.
A Coxeter group W is called irreducible if its defining graph Γ is connected,
otherwise called reducible. For a reducible Coxeter group W (Γ) of type Γ, if Γ
consists of the connected components Γ1 , Γ2 , . . . , Γr , then W (Γ) is the direct product of parabolic subgroups W (Γi )’s (1 ≤ i ≤ r), each of which is irreducible:
W (Γ) = W (Γ1 ) ×W (Γ2 ) × · · · ×W (Γr ).
Coxeter graphs for finite irreducible Coxeter groups are classified. There are
four infinite families An (n ≥ 1), Bn (n ≥ 2), Dn (n ≥ 4), I2 (q) (q ≥ 3), and six
exceptional graphs E6 , E7 , E8 , F4 , H3 and H4 . The subscript indicates the rank of the
resulting Coxeter group. See Appendix for the orders of finite irreducible Coxeter
groups. Here we follow the classification given in the book by Humphreys [14]
and there are overlaps A2 = I2 (3), B2 = I2 (4). Note that W (An ) is isomorphic to
the symmetric group of n + 1 letters, while W (I2 (q)) is isomorphic to the dihedral
group of order 2q.
Finally, given an odd prime number p, we define a Coxeter group W to be p-free
if m(s,t) is prime to p for all s,t ∈ S. Here ∞ is prime to all prime numbers by convention. For example, the Coxeter group W(I2(q)) is p-free if and only if q is prime to p, while the Coxeter group W(An) (n ≥ 2) is p-free for p ≥ 5. For every finite irreducible Coxeter group W, the range of odd prime numbers p such that W is p-free can be found in the Appendix. Note that parabolic subgroups of p-free
Coxeter groups are also p-free. Henceforth, we omit the reference to the Coxeter
graph Γ and the set of Coxeter generators S if there is no ambiguity.
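As an illustration of the definition, the following small sketch (ours, not part of the paper; the graph encoding is a hypothetical choice) checks p-freeness directly from the edge labels m(s,t):

```python
# Illustrative sketch only: decide p-freeness of a Coxeter graph from its labels.
# A graph is encoded as a dict from unordered generator pairs to m(s,t), with
# math.inf standing for the label infinity; by convention inf is prime to every p.
from math import inf

def is_p_free(m, p):
    """Return True if every finite label m(s,t) is prime to the odd prime p."""
    return all(v == inf or v % p != 0 for v in m.values())

# W(I2(6)) is the dihedral group of order 12: it is 5-free but not 3-free.
print(is_p_free({("s", "t"): 6}, 5), is_p_free({("s", "t"): 6}, 3))  # True False
```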
2.2. Known results for homology of Coxeter groups. In this subsection, we review some known results concerning the homology of Coxeter groups that are related to this paper. A basic reference for the (co)homology of groups is [8].
In the beginning, Serre [21] proved that every Coxeter group W has finite virtual
cohomological dimension and is a group of type WFL (see [8, Chapter VIII] for
definitions). This implies, in particular, that Hk (W, Z) is a finitely generated abelian
group for all k. On the other hand, the rational homology of any Coxeter group is known to be trivial (see [5, Proposition 5.2] or [9, Theorem 15.1.1]). Combining
these results, we obtain the following result:
Proposition 2.1. For any Coxeter group W, the integral homology group Hk(W, Z)
is a finite abelian group for all k > 0.
Consequently, Hk (W, Z) (k > 0) decomposes into a finite direct sum
(2.1)        Hk(W, Z) ≅ ⊕_p Hk(W, Z)(p)
where p runs over the finite set of prime numbers dividing the order of Hk(W, Z). The
universal coefficient theorem implies
(2.2)        Hk(W, Z)(p) ≅ Hk(W, Z) ⊗Z Z(p) ≅ Hk(W, Z(p))
(see [6, Corollary 2.3.3]). It turns out that the study of the integral homology groups
of Coxeter groups reduces to the study of the p-local homology groups. Later, we
will prove that Hk (W, Z(p) ) = 0 (k > 0) if W has no p-torsion (Proposition 5.3).
The first and second integral homology of Coxeter groups are known.
Proposition 2.2. For any Coxeter group W, we have H1(W, Z) ≅ (Z/2)^{n1(W)} and H2(W, Z) ≅ (Z/2)^{n2(W)} for some non-negative integers n1(W), n2(W).
The claim for H1 (W, Z) is obvious because H1 (W, Z) = W /[W,W ] and W is generated by elements of order 2. The statement for H2 (W, Z) was proved by Howlett
[13] (following earlier works by Ihara and Yokonuma [15] and Yokonuma [25]).
The nonnegative integers n1 (W ), n2 (W ) can be computed from the Coxeter graph
for W. As for n1(W), let GW be the graph whose vertex set is S and whose edges are the unordered pairs {s,t} ⊂ S such that m(s,t) is a finite odd integer. Then it is easy to see that n1(W) agrees with the number of connected components of GW. In particular, n1(W) ≥ 1 and hence H1(W, Z) = H1(W, Z(2)) ≠ 0. For the presentation
of n2 (W ), see [13, Theorem A] or [14, §8.11]. As a consequence of Proposition
2.2, we obtain the following result:
Corollary 2.3. Let p be an odd prime number. For any Coxeter group W, we have
H1 (W, Z(p) ) = H2 (W, Z(p) ) = 0.
The corollary does not hold for the third homology or higher. Indeed, for
the Coxeter group W (I2 (q)) of type I2 (q), which is isomorphic to the dihedral
group of order 2q as mentioned before, it can be proved that, if p divides q, then
Hk(W(I2(q)), Z(p)) ≠ 0 whenever k ≡ 3 (mod 4) (see [20, Theorem 2.1] and §5.1
below). This observation also shows the necessity of the p-freeness assumption in
our results for p ≥ 5. Finally, we will recall a consequence of results of Nakaoka
[16, 17] which was mentioned in the introduction.
Theorem 2.4 (Nakaoka [16, 17]). Let Sn be the symmetric group of n letters. Then
Hk (Sn , Z(p) ) = 0 (1 ≤ k ≤ 2(p − 2)) for all odd prime numbers p.
Proof. In his paper [16], Nakaoka proved the homology stability for symmetric
groups. Namely, for 2 ≤ m ≤ n ≤ ∞, the homomorphism Hk (Sm , A) → Hk (Sn , A)
induced by the natural inclusion Sm ↪ Sn is injective for all k, and is an isomorphism if k < (m + 1)/2, where A is an abelian group with the trivial Sn-action,
and S∞ is the infinite symmetric group [16, Theorem 5.8 and Corollary 6.7]. He
also computed the mod p homology of S∞ in [17, Theorem 7.1], from which we
deduce that Hk(S∞, Z/p) = 0 for 1 ≤ k ≤ 2(p − 2) and that H2p−3(S∞, Z/p) ≠ 0.
Combining these results, we see that Hk (Sn , Z/p) = 0 (1 ≤ k ≤ 2(p − 2)) for all n.
Applying the universal coefficient theorem, the theorem follows.
Theorem 1.1, together with Corollary 2.3 for p = 3, generalizes Theorem 2.4 to all Coxeter groups. For further results concerning the (co)homology of Coxeter groups, we refer the reader to the book by Davis [9] and the papers [2–4, 10, 11, 18, 20, 23] as well
as references therein.
3. COXETER COMPLEXES AND THEIR EQUIVARIANT HOMOLOGY
3.1. Coxeter complexes. We recall the definition and properties of Coxeter complexes which are relevant to the proof of Theorem 1.1. A basic reference for Coxeter complexes is [1, Chapter 3]. Given a Coxeter group W, the Coxeter complex XW of W is the poset of cosets wWT (w ∈ W, T ⊊ S), ordered by reverse inclusion. It is
known that XW is actually an (|S|−1)-dimensional simplicial complex (see [1, Theorem 3.5]). The k-simplices of XW are the cosets wWT with k = |S| − |T | − 1. A
coset wWT is a face of w′WT ′ if and only if wWT ⊇ w′WT ′ . In particular, the vertices are cosets of the form wWS\{s} (s ∈ S, w ∈ W ), the maximal simplices are the
singletons wW∅ = {w} (w ∈ W ), and the codimension one simplices are cosets of
the form wW{s} = {w, ws} (s ∈ S, w ∈ W ). In what follows, we will not distinguish
between XW and its geometric realization.
There is a simplicial action of W on XW by left translation w′ · wWT := w′ wWT .
The isotropy subgroup of a simplex wWT is precisely wWT w−1 , which fixes wWT
pointwise. Next, consider the subcomplex ∆W = {WT | T ⊊ S} of XW, which consists of a single (|S| − 1)-simplex W∅ and its faces. Since the type function XW → S, wWT ↦ S \ T is well-defined (see [1, Definition 3.6]), ∆W forms the set of representatives of W-orbits of simplices of XW. The following fact is well-known.
Proposition 3.1. If W is a finite Coxeter group, then XW is a triangulation of the
(|S| − 1)-dimensional sphere S|S|−1 .
See [1, Proposition 1.108] for the proof. Alternatively, W can be realized as
an orthogonal reflection group on the |S|-dimensional Euclidean space R|S| and
hence it acts on the unit sphere S|S|−1 . Each s ∈ S acts on S|S|−1 as an orthogonal
reflection. The Coxeter complex XW coincides with the equivariant triangulation
of S|S|−1 cut out by the reflection hyperplanes for W . In case W is infinite, Serre
proved the following result:
Proposition 3.2 ([21, Lemma 4]). If W is an infinite Coxeter group, then XW is
contractible.
3.2. Equivariant homology of Coxeter complexes. Given a Coxeter group W ,
let H^W_k(XW, Z(p)) be the k-th equivariant homology group of XW (see [8, Chapter VII] for the definition). If W is infinite, then XW is contractible, so that the equivariant homology is isomorphic to the homology of W:
Proposition 3.3. If W is an infinite Coxeter group, then H^W_k(XW, Z(p)) ≅ Hk(W, Z(p)) for all k.
If W is finite, then H^W_k(XW, Z(p)) may not be isomorphic to Hk(W, Z(p)); however, they are isomorphic if k is relatively small:
Proposition 3.4. If W is a finite Coxeter group, then H^W_k(XW, Z(p)) ≅ Hk(W, Z(p)) for k ≤ rank W − 1.
Proof. Consider the spectral sequence
E^2_{i,j} = Hi(W, Hj(XW, Z(p))) ⇒ H^W_{i+j}(XW, Z(p))
(see [8, VII.7]) and note that E^2_{i,0} ≅ Hi(W, Z(p)) for all i. Since XW is homeomorphic to S^{|S|−1}, we have E^2_{i,j} = 0 for j ≠ 0, |S| − 1. Hence H^W_k(XW, Z(p)) ≅ Hk(W, Z(p)) for k ≤ |S| − 2. Now
E^2_{0,|S|−1} = H0(W, H_{|S|−1}(XW, Z(p))) = H_{|S|−1}(XW, Z(p))W,
where the RHS is the co-invariant of H_{|S|−1}(XW, Z(p)) as a W-module (see [8, III.1]). Since each s ∈ S acts on XW ≈ S^{|S|−1} as an orthogonal reflection, as mentioned in §3.1, it acts on H_{|S|−1}(XW, Z(p)) ≅ Z(p) as multiplication by −1. It follows that the co-invariant H_{|S|−1}(XW, Z(p))W is isomorphic to the quotient group of Z(p) by the subgroup generated by r − (−1)r = 2r (r ∈ Z(p)). But this subgroup is nothing but the whole group Z(p), because 2 is invertible in Z(p). This proves E^2_{0,|S|−1} = 0 and hence
H^W_{|S|−1}(XW, Z(p)) ≅ H_{|S|−1}(W, Z(p)),
as desired.
4. PROOF OF THEOREM 1.1
We will prove Theorem 1.1 by showing the following two claims:
Claim 1. If W is a finite p-free Coxeter group with rankW ≤ 2(p − 2), then
Hk (W, Z(p) ) = 0 for 1 ≤ k ≤ 2(p − 2).
Claim 2. Claim 1 implies Theorem 1.1.
The first claim is equivalent to Theorem 1.1 for finite p-free Coxeter groups
with rank W ≤ 2(p − 2), and will be proved by a case-by-case argument. The second claim will be proved by induction on rank W, using the equivariant
homology of Coxeter complexes. Let us prove Claim 2 first.
4.1. Proof of Claim 2. For every Coxeter group W , there is a spectral sequence
(4.1)        E^1_{i,j} = ⊕_{σ ∈ Si} Hj(Wσ, Z(p)) ⇒ H^W_{i+j}(XW, Z(p)),
where Si is the set of representatives of W -orbits of i-simplices of XW , and Wσ
is the isotropy subgroup of an i-simplex σ (see [8, VII.7]). It is the Leray spectral sequence for the natural projection EW ×W XW → XW /W . Note that Z(p) in
H j (Wσ , Z(p) ) is the trivial Wσ -module because Wσ fixes σ pointwise. We may
choose the subset {WT | T ⊊ S, |T| = |S| − i − 1} (the set of i-simplices of ∆W) as
Si , and the spectral sequence can be rewritten as
(4.2)        E^1_{i,j} = ⊕_{T ⊊ S, |T| = |S|−i−1} Hj(WT, Z(p)) ⇒ H^W_{i+j}(XW, Z(p)).
Lemma 4.1. In the spectral sequence (4.2), E^2_{i,0} = 0 for i ≠ 0 and E^2_{0,0} ≅ Z(p).
Proof. We claim E^2_{i,0} ≅ Hi(∆W, Z(p)) for all i, which implies the lemma because ∆W is an (|S| − 1)-simplex and hence contractible. Although such a claim may be familiar to experts, we write down the proof for completeness. To show the claim, we recall the construction of the spectral sequence (4.1) given in [8, VII.7]. At the first stage, the E^1_{i,0}-term of (4.1) is given by
E^1_{i,0} = H0(W, Ci(XW, Z(p))) = Ci(XW, Z(p))W,
which is isomorphic to the one in (4.1) by the Eckmann–Shapiro lemma. The differential d^1 : E^1_{i,0} → E^1_{i−1,0} is the map induced by the boundary operator Ci(XW, Z(p)) → Ci−1(XW, Z(p)). On the other hand, the composition
(4.3)        Ci(∆W, Z(p)) ↪ Ci(XW, Z(p)) ↠ Ci(XW, Z(p))W
is an isomorphism, where the first map is induced by the inclusion ∆W ↪ XW and
the second map is the natural projection, because the subcomplex ∆W forms the
set of representatives of W -orbits of simplices of XW . Moreover, the isomorphism
(4.3) is compatible with the boundary operator of Ci (∆W , Z(p) ) and the differential on Ci (XW , Z(p) )W . In other words, (4.3) yields a chain isomorphism of chain
complexes
(Ci(∆W, Z(p)), ∂) → (Ci(XW, Z(p))W, d^1).
The claim follows immediately.
Proof of Claim 2. We argue by induction on |S|. When W is finite, we may assume |S| > 2(p − 2), since we assume that Claim 1 holds. Consider the spectral sequence (4.2). Observe first that all WT's appearing in (4.2) are p-free and satisfy |T| < |S|. By the induction assumption, we have Hj(WT, Z(p)) = 0 (1 ≤ j ≤ 2(p − 2)) for all T ⊊ S, which implies E^1_{i,j} = E^2_{i,j} = 0 for 1 ≤ j ≤ 2(p − 2). Moreover, E^2_{i,0} = 0 for i > 0 by Lemma 4.1. This proves H^W_k(XW, Z(p)) = 0 for 1 ≤ k ≤ 2(p − 2). Now Claim 2 follows from Propositions 3.3 and 3.4.
4.2. Proof of Claim 1. Given an odd prime p, if W is a finite p-free Coxeter
group with rankW ≤ 2(p − 2), then W decomposes into the direct product of finite
irreducible p-free Coxeter groups, W ≅ W1 × · · · × Wr with Σ^r_{i=1} rank Wi = rank W. Since Z(p) is a PID, we may apply the Künneth theorem to conclude that Claim 1 is
equivalent to the following claim:
Claim 3. If W is a finite irreducible p-free Coxeter group with rankW ≤ 2(p − 2),
then Hk (W, Z(p) ) vanishes for 1 ≤ k ≤ 2(p − 2).
We prove Claim 3 for each finite irreducible Coxeter group. Firstly, the Coxeter group W (I2 (q)) of type I2 (q) is p-free if and only if q is prime to p. If so,
H∗(W(I2(q)), Z(p)) = 0 for ∗ > 0, because the order of W(I2(q)) is 2q and hence W(I2(q)) has no p-torsion. Next, we prove the claim for the Coxeter group of type An.
To do so, we deal with cohomology instead of homology. We invoke the following
elementary lemma:
Lemma 4.2. Let G be a finite group and p ≥ 2 a prime. Then Hk(G, Z(p)) ≅ H^{k+1}(G, Z)(p) for all k ≥ 1.
Now Claim 3 for W (An ) can be proved by applying standard arguments in cohomology of finite groups:
Lemma 4.3. Hk (W (An ), Z(p) ) = 0 (1 ≤ k ≤ 2(p − 2)) holds for all n with 1 ≤ n ≤
2(p − 1).
Proof. Recall that W (An ) is isomorphic to the symmetric group of n + 1 letters
Sn+1 . If n < p then Sn has no p-torsion and hence Hk (Sn , Z(p) ) = 0 for all k > 0.
Now suppose p ≤ n ≤ 2p − 1, and let C p be a Sylow p-subgroup of Sn , which is
a cyclic group of order p. Then H∗(Cp, Z) ≅ Z[u]/(pu) where deg u = 2. Let Np
be the normalizer of C p in Sn . It acts on C p by conjugation, and the induced map
Np → Aut(Cp) ≅ (Z/p)^× is known to be surjective. Consequently, the invariant
H∗(Cp, Z)^{Np} is the subring generated by u^{p−1}. Since Cp is abelian, the restriction
H ∗ (Sn , Z) → H ∗ (C p , Z) induces the isomorphism
H^k(Sn, Z)(p) ≅ H^k(Cp, Z)^{Np}
for k > 0 by a result of Swan [22, Lemma 1 and Appendix] (see also [24, Lemma
3.4]). This proves, for p ≤ n ≤ 2p − 1, that H^k(Sn, Z)(p) = 0 (0 < k < 2p − 2) and H^{2p−2}(Sn, Z)(p) ≠ 0. In view of Lemma 4.2, the lemma follows.
Remark 4.4. Since H2p−3(W(Ap−1), Z(p)) ≅ H^{2p−2}(Sp, Z)(p) ≠ 0 for all prime numbers p, as was observed in the proof of Lemma 4.3, the vanishing range 1 ≤ k ≤
2(p − 2) in our theorem is best possible for p ≥ 5.
Remark 4.5. Of course, Lemma 4.3 is a direct consequence of Theorem 2.4, however, we avoid the use of Theorem 2.4 for two reasons: Firstly, by doing so, we
provide an alternative proof of Theorem 2.4. Secondly, the proof of Lemma 4.3
is much simpler than that of Theorem 2.4, for the latter relies on the homology
stability for symmetric groups and the computation of H∗ (S∞ , Z/p).
Claim 3 for the Coxeter groups of type Bn and Dn follows from Lemma 4.3 and
the following proposition:
Proposition 4.6. For any odd prime number p,
H∗(W(Bn), Z(p)) ≅ H∗(W(An−1), Z(p))
holds for all n ≥ 2, and
H∗(W(Dn), Z(p)) ≅ H∗(W(An−1), Z(p))
holds for all n ≥ 4.
Proof. Recall that the Coxeter group W (Bn ) is isomorphic to the semi-direct product (Z/2)n ⋊W (An−1 ) (see [9, §6.7] or [14, §1.1]). In the Lyndon-Hochschild-Serre
spectral sequence
E^2_{i,j} = Hi(W(An−1), Hj((Z/2)^n, Z(p))) ⇒ Hi+j(W(Bn), Z(p)),
one has E^2_{i,j} = 0 for j ≠ 0 since Hj((Z/2)^n, Z(p)) = 0 for j ≠ 0. This proves H∗(W(Bn), Z(p)) ≅ H∗(W(An−1), Z(p)). On the other hand, W(Dn) is known to be isomorphic to the semi-direct product (Z/2)^{n−1} ⋊ W(An−1) (see loc. cit.), and the proof for H∗(W(Dn), Z(p)) ≅ H∗(W(An−1), Z(p)) is similar.
These observations prove Claim 3 for p ≥ 11, since all finite irreducible Coxeter groups of type other than An, Bn, Dn and I2(q) have no p-torsion for p ≥ 11. The case p = 3 follows from Corollary 2.3. Now we will prove the cases p = 5 and p = 7. Observe that, apart from Coxeter groups of type An, Bn, Dn and I2(q), the finite irreducible p-free Coxeter groups with rank at most 2(p − 2) and having p-torsion are W(E6) for p = 5, and W(E7) and W(E8) for p = 7. So the proof of Claim
3 is completed by showing the following lemma:
Lemma 4.7. Hk (W (E6 ), Z(5) ) vanishes for 1 ≤ k ≤ 6, while Hk (W (E7 ), Z(7) ) and
Hk (W (E8 ), Z(7) ) vanish for 1 ≤ k ≤ 10.
Proof. The Coxeter group W (A4 ) is a parabolic subgroup of W (E6 ), and they have
a common Sylow 5-subgroup C5 , which is a cyclic group of order 5. The transfer
homomorphism to the Sylow 5-subgroup Hk (W (E6 ), Z(5) ) → Hk (C5 , Z(5) ) is injective and factors into a composition of transfer homomorphisms
Hk (W (E6 ), Z(5) ) → Hk (W (A4 ), Z(5) ) → Hk (C5 , Z(5) ).
In view of Lemma 4.3, we conclude that Hk (W (E6 ), Z(5) ) = 0 for 1 ≤ k ≤ 6, which
proves the lemma for W (E6 ). On the other hand, there is a sequence of parabolic
subgroups W (A6 ) < W (E7 ) < W (E8 ), and they have a common Sylow 7-subgroup
C7 , which is a cyclic group of order 7. The proof of the lemma for W (E7 ) and
W (E8 ) is similar.
5. COXETER GROUPS WITH VANISHING p-LOCAL HOMOLOGY
In this final section, we introduce some families of Coxeter groups such that
Hk (W, Z(p) ) vanishes for all k > 0.
5.1. Aspherical Coxeter groups. A Coxeter group W is called aspherical in [18]
if, for all distinct Coxeter generators s,t, u ∈ S, the inequality
1/m(s,t) + 1/m(t,u) + 1/m(u,s) ≤ 1
holds, where 1/∞ = 0 by the convention. The inequality is equivalent to the condition that the parabolic subgroup W{s,t,u} is of infinite order. The (co)homology
groups of aspherical Coxeter groups were studied by Pride and Stöhr [18], and the
mod 2 cohomology rings of aspherical Coxeter groups were studied by the author
[4]. Among other things, Pride and Stöhr obtained the following exact sequence
· · · → Hk+1(W, A) → ⊕_{s∈S} Hk(W{s}, A)^{⊕n(s)} → ⊕_{{s,t}⊂S, m(s,t)<∞} Hk(W{s,t}, A) → Hk(W, A) → · · ·
terminating at H2 (W, A), where A is a W -module and n(s) is a certain nonnegative
integer defined for each s ∈ S [18, Theorem 5]. Since W{s} ≅ Z/2, Hk(W{s}, Z(p)) = 0 for k > 0. Moreover, if p does not divide m(s,t), then Hk(W{s,t}, Z(p)) = 0 for k > 0 as well. Here, by convention, no prime number divides ∞. Hence we
obtain the following result (the statement for k = 1, 2 follows from Corollary 2.3):
Proposition 5.1. For any aspherical Coxeter group W, we have
Hk(W, Z(p)) ≅ ⊕_{{s,t}⊂S, p | m(s,t)} Hk(W{s,t}, Z(p))
for all k > 0. Furthermore, Hk (W, Z(p) ) vanishes for all k > 0 if and only if W is
p-free.
Note that if p divides m(s,t), then
Hk(W{s,t}, Z(p)) ≅ (Z/m(s,t))(p) if k ≡ 3 (mod 4), and Hk(W{s,t}, Z(p)) = 0 if k ≢ 3 (mod 4),
for k > 0, where Z/m(s,t) is the cyclic group of order m(s,t) (see [20, Theorem
2.1]).
5.2. Coxeter groups without p-torsion. Next we prove vanishing of the p-local
homology of Coxeter groups without p-torsion. Before doing so, we characterize
such Coxeter groups in terms of their finite parabolic subgroups.
Proposition 5.2. Let p be a prime number. A Coxeter group W has no p-torsion if
and only if every finite parabolic subgroup has no p-torsion.
Proof. According to a result of Tits, every finite subgroup of W is contained in a conjugate of some parabolic subgroup of finite order (see [9, Corollary D.2.9]).
The proposition follows at once.
Proposition 5.3. If W is a Coxeter group without p-torsion, then Hk (W, Z(p) ) = 0
for all k > 0.
Proof. The claim is obvious for finite Coxeter groups. We prove the proposition
for infinite Coxeter groups by induction on |S|. Let W be an infinite Coxeter
group without p-torsion and consider the spectral sequence (4.2). Every proper
parabolic subgroup WT of W has no p-torsion, and hence Hk (WT , Z(p) ) = 0 (k > 0)
by the induction assumption. This implies E^1_{i,j} = 0 for j ≠ 0. Moreover, E^2_{i,j} = 0 for (i, j) ≠ (0, 0) by Lemma 4.1, which proves the proposition.
In view of the last proposition, the direct sum decomposition (2.1) can be refined as follows:
Corollary 5.4. For any Coxeter group W and k > 0, we have
Hk(W, Z) ≅ ⊕_p Hk(W, Z(p)),
where p runs over the prime numbers such that W has p-torsion.
Remark 5.5. Proposition 5.3 and Corollary 5.4 should be compared with the following general results. Namely, suppose that Γ is a group having finite virtual cohomological dimension vcd Γ. If Γ does not have p-torsion, then H k (Γ, Z)(p) = 0
for k > vcd Γ. Consequently, we have the finite direct product decomposition
H^k(Γ, Z) ≅ ∏_p H^k(Γ, Z)(p)
which holds for k > vcd Γ, where p ranges over the prime numbers such that Γ has
p-torsion. See [8, Chapter X].
5.3. Right-angled Coxeter groups. A Coxeter group is called right-angled if
m(s,t) = 2 or ∞ for all distinct s,t ∈ S. The mod 2 cohomology rings of right-angled
Coxeter groups were determined by Rusin [19] (see also [9, Theorem 15.1.4]). In
this section, we prove vanishing of p-local homology for a class of Coxeter groups
which includes right-angled Coxeter groups.
Proposition 5.6. If W is a Coxeter group such that m(s,t) is a power of 2 or ∞ for all distinct s,t ∈ S, then Hk(W, Z(p)) = 0 (k > 0) for all odd prime
numbers p ≥ 3.
Proof. The finite irreducible Coxeter groups satisfying the assumption are W (A1 )
(of order 2), W(B2) (of order 8), and W(I2(2^m)) (of order 2^{m+1}). Every finite parabolic subgroup of W is isomorphic to a direct product of copies of those groups and hence has order a power of 2. Consequently, W has no p-torsion by
Proposition 5.2. Now the proposition follows from Proposition 5.3.
5.4. 3-free Coxeter groups. In this final subsection, we look into the situation for p = 3
more closely. Firstly, according to Corollary 2.3, H1 (W, Z(3) ) = H2 (W, Z(3) ) = 0 for
any Coxeter groups W . This means that Theorem 1.1 remains true for p = 3 without
3-freeness assumption. On the other hand, the finite irreducible 3-free Coxeter
groups are W (A1 ),W (B2 ) and W (I2 (q)) such that q is prime to 3, all of which
have no 3-torsion. Consequently, every 3-free Coxeter group has no 3-torsion by
Proposition 5.2. Applying Proposition 5.3 we obtain the following result:
Proposition 5.7. For every 3-free Coxeter group W, Hk(W, Z(3)) = 0 holds for all
k > 0.
APPENDIX
The following table lists the Coxeter graph Γ, the order |W(Γ)| of the corresponding Coxeter group W(Γ), that order factored into primes, and the range of odd prime numbers p such that W(Γ) is p-free.
Γ                |W(Γ)|         |W(Γ)| factored         p-freeness
A1               2              —                       p ≥ 3
An (n ≥ 2)       (n + 1)!       —                       p ≥ 5
B2               8              —                       p ≥ 3
Bn (n ≥ 3)       2^n · n!       —                       p ≥ 5
Dn (n ≥ 4)       2^(n−1) · n!   —                       p ≥ 5
E6               72 · 6!        2^7 · 3^4 · 5           p ≥ 5
E7               72 · 8!        2^10 · 3^4 · 5 · 7      p ≥ 5
E8               192 · 10!      2^14 · 3^5 · 5^2 · 7    p ≥ 5
F4               1152           2^7 · 3^2               p ≥ 5
H3               120            2^3 · 3 · 5             p ≥ 7
H4               14400          2^6 · 3^2 · 5^2         p ≥ 7
I2(q) (q ≥ 3)    2q             —                       p ∤ q
Acknowledgement. This study started from questions posed by Takefumi Nosaka concerning the third p-local homology of Coxeter groups for odd prime numbers p. The author thanks him for drawing our attention to this question. This study was partially supported by JSPS KAKENHI Grant Numbers 23654018 and 26400077.
REFERENCES
[1] Peter Abramenko and Kenneth S. Brown, Buildings, Graduate Texts in Mathematics, vol. 248,
Springer, New York, 2008. Theory and applications. MR2439729 (2009g:20055)
[2] Toshiyuki Akita, On the cohomology of Coxeter groups and their finite parabolic subgroups,
Tokyo J. Math. 18 (1995), no. 1, 151–158, DOI 10.3836/tjm/1270043616. MR1334713
(96f:20078)
[3]
, On the cohomology of Coxeter groups and their finite parabolic subgroups. II,
Group representations: cohomology, group actions and topology (Seattle, WA, 1996), Proc.
Sympos. Pure Math., vol. 63, Amer. Math. Soc., Providence, RI, 1998, pp. 1–5, DOI
10.1090/pspum/063/1603123, (to appear in print). MR1603123 (98m:20066)
[4]
, Aspherical Coxeter groups that are Quillen groups, Bull. London Math. Soc. 32
(2000), no. 1, 85–90, DOI 10.1112/S0024609399006414. MR1718721 (2000j:20068)
, Euler characteristics of Coxeter groups, PL-triangulations of closed manifolds, and
[5]
cohomology of subgroups of Artin groups, J. London Math. Soc. (2) 61 (2000), no. 3, 721–736,
DOI 10.1112/S0024610700008693. MR1766100 (2001f:20080)
[6] David J. Benson and Stephen D. Smith, Classifying spaces of sporadic groups, Mathematical
Surveys and Monographs, vol. 147, American Mathematical Society, Providence, RI, 2008.
MR2378355 (2009f:55017)
[7] Nicolas Bourbaki, Éléments de mathématique, Masson, Paris, 1981 (French). Groupes et
algèbres de Lie. Chapitres 4, 5 et 6. [Lie groups and Lie algebras. Chapters 4, 5 and 6].
MR647314 (83g:17001)
[8] Kenneth S. Brown, Cohomology of groups, Graduate Texts in Mathematics, vol. 87, SpringerVerlag, New York, 1982. MR672956 (83k:20002)
[9] Michael W. Davis, The geometry and topology of Coxeter groups, London Mathematical Society Monographs Series, vol. 32, Princeton University Press, Princeton, NJ, 2008. MR2360474
(2008k:20091)
[10] C. De Concini and M. Salvetti, Cohomology of Coxeter groups and Artin groups, Math.
Res. Lett. 7 (2000), no. 2-3, 213–232, DOI 10.4310/MRL.2000.v7.n2.a7. MR1764318
(2001f:20118)
[11] Mohamed Errokh and Fulvio Grazzini, Sur la cohomologie modulo 2 des groupes de Coxeter à trois générateurs, C. R. Acad. Sci. Paris Sér. I Math. 324 (1997), no. 7, 741–745, DOI
10.1016/S0764-4442(97)86937-6 (French, with English and French summaries). MR1446573
(98h:20094)
[12] L. C. Grove and C. T. Benson, Finite reflection groups, 2nd ed., Graduate Texts in Mathematics,
vol. 99, Springer-Verlag, New York, 1985. MR777684 (85m:20001)
[13] Robert B. Howlett, On the Schur multipliers of Coxeter groups, J. London Math. Soc. (2) 38
(1988), no. 2, 263–276, DOI 10.1112/jlms/s2-38.2.263. MR966298 (90e:20010)
[14] James E. Humphreys, Reflection groups and Coxeter groups, Cambridge Studies in Advanced Mathematics, vol. 29, Cambridge University Press, Cambridge, 1990. MR1066460
(92h:20002)
[15] Shin-ichiro Ihara and Takeo Yokonuma, On the second cohomology groups (Schur-multipliers)
of finite reflection groups, J. Fac. Sci. Univ. Tokyo Sect. I 11 (1965), 155–171 (1965).
MR0190232 (32 #7646a)
[16] Minoru Nakaoka, Decomposition theorem for homology groups of symmetric groups, Ann. of
Math. (2) 71 (1960), 16–42. MR0112134 (22 #2989)
[17]
, Homology of the infinite symmetric group, Ann. of Math. (2) 73 (1961), 229–257.
MR0131874 (24 #A1721)
[18] Stephen J. Pride and Ralph Stöhr, The (co)homology of aspherical Coxeter groups, J. London Math. Soc. (2) 42 (1990), no. 1, 49–63, DOI 10.1112/jlms/s2-42.1.49. MR1078174
(91k:20058)
[19] David John Rusin, The cohomology of groups generated by involutions, ProQuest LLC, Ann
Arbor, MI, 1984. Thesis (Ph.D.)–The University of Chicago. MR2611843
[20] Mario Salvetti, Cohomology of Coxeter groups, Topology Appl. 118 (2002), no. 1-2, 199–208,
DOI 10.1016/S0166-8641(01)00051-7. Arrangements in Boston: a Conference on Hyperplane
Arrangements (1999). MR1877725 (2003d:20073)
[21] Jean-Pierre Serre, Cohomologie des groupes discrets, Prospects in mathematics (Proc. Sympos.,
Princeton Univ., Princeton, N.J., 1970), Princeton Univ. Press, Princeton, N.J., 1971, pp. 77–
169. Ann. of Math. Studies, No. 70 (French). MR0385006 (52 #5876)
[22] Richard G. Swan, The p-period of a finite group, Illinois J. Math. 4 (1960), 341–346.
MR0122856 (23 #A188)
[23] James Andrew Swenson, The mod-2 cohomology of finite Coxeter groups, ProQuest LLC, Ann
Arbor, MI, 2006. Thesis (Ph.D.)–University of Minnesota. MR2709085
[24] C. B. Thomas, Characteristic classes and the cohomology of finite groups, Cambridge Studies
in Advanced Mathematics, vol. 9, Cambridge University Press, Cambridge, 1986. MR878978
(88f:20005)
[25] Takeo Yokonuma, On the second cohomology groups (Schur-multipliers) of infinite discrete
reflection groups, J. Fac. Sci. Univ. Tokyo Sect. I 11 (1965), 173–186 (1965). MR0190233
(32 #7646b)
DEPARTMENT OF MATHEMATICS, HOKKAIDO UNIVERSITY, SAPPORO, 060-0810 JAPAN
E-mail address: [email protected]
| 4 |
DeepLesion: Automated Deep Mining, Categorization
and Detection of Significant Radiology Image Findings
using Large-Scale Clinical Lesion Annotations
arXiv:1710.01766v2 [cs.CV] 10 Oct 2017
Ke Yan*, Xiaosong Wang*, Le Lu, and Ronald M. Summers
Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, MD 20892
{ke.yan, xiaosong.wang, le.lu, rms}@nih.gov
Abstract. Extracting, harvesting and building large-scale annotated radiological image datasets is an important yet challenging problem. It is also the bottleneck to designing more effective data-hungry computing paradigms (e.g., deep
learning) for medical image analysis. Yet, vast amounts of clinical annotations
(usually associated with disease image findings and marked using arrows, lines,
lesion diameters, segmentation, etc.) have been collected over several decades and
stored in hospitals’ Picture Archiving and Communication Systems. In this paper,
we mine and harvest one major type of clinical annotation data – lesion diameters annotated on bookmarked images – to learn an effective multi-class lesion
detector via unsupervised and supervised deep Convolutional Neural Networks
(CNN). Our dataset is composed of 33,688 bookmarked radiology images from
10,825 studies of 4,477 unique patients. For every bookmarked image, a bounding box is created to cover the target lesion based on its measured diameters. We
categorize the collection of lesions using an unsupervised deep mining scheme to
generate clustered pseudo lesion labels. Next, we adopt a regional-CNN method
to detect lesions of multiple categories, regardless of missing annotations (normally only one lesion is annotated, despite the presence of multiple co-existing
findings). Our integrated mining, categorization and detection framework is validated with promising empirical results, as a scalable, universal or multi-purpose
CAD paradigm built upon abundant retrospective medical data. Furthermore, we
demonstrate that detection accuracy can be significantly improved by incorporating pseudo lesion labels (e.g., Liver lesion/tumor, Lung nodule/tumor, Abdomen
lesions, Chest lymph node and others). This dataset will be made publicly available (under the open science initiative).
1 Introduction
Computer-aided detection/diagnosis (CADe/CADx) has been a highly prosperous and
successful research field in medical image processing. Many commercial software packages have been developed for clinical usage and screening. Recent advances (e.g.,
automated classification of skin lesions [3], detection of liver lesion [1], pulmonary
embolism [10]) have attracted even more attention to the application of deep learning paradigms to CADe/CADx. Deep learning, namely Convolutional Neural Network
* These two authors contributed equally.
(CNN) based algorithms, perform significantly better than conventional statistical learning approaches combined with hand-crafted image features. However, these performance gains are often achieved at the cost of requiring tremendous amounts of training
data accompanied by high-quality labels. Unlike general computer vision tasks, medical image analysis currently lacks a substantial, large-scale annotated image dataset (comparable to ImageNet [2] and MS COCO [6]), for two main reasons: 1) conventional methods for collecting image labels via Google search + crowd-sourcing from average users cannot be applied in the medical image domain, as medical image annotation requires extensive clinical expertise; 2) significant inter- and intra-observer variability (among even well-trained, experienced radiologists) frequently occurs, and thus
may compromise reliable annotation of a large amount of medical images, especially
considering the great diversity of radiology diagnosis tasks.
Current CADe/CADx methods generally target one particular type of disease or lesion, such as lung nodules, colon polyps or lymph nodes [7]. Yet, this approach differs
from the methods radiologists routinely apply to read medical image studies and compile radiological reports. Multiple findings can be observed and are often correlated. For
instance, liver metastases can spread to regional lymph nodes or other body parts. By
obtaining and maintaining a holistic picture of relevant clinical findings, a radiologist
will be able to make a more accurate diagnosis. However, it remains greatly challenging
to develop a universal or multi-purpose CAD framework, capable of detecting multiple
disease types in a seamless fashion. Such a framework is crucial to building an automatic radiological diagnosis and reasoning system.
In this paper, we attempt to address these challenges by first introducing a new
large-scale dataset of bookmarked radiology images, which accommodates lesions from multiple categories. Our dataset, named DeepLesion, is composed of 33,688 bookmarked images from 10,825 studies of 4,477 patients (see samples in Fig. 1). For each
bookmarked image, a bounding box is generated to indicate the location of the lesions.
Furthermore, we integrate an unsupervised deep mining method to compute pseudo
image labels for database self-annotation. Categories of Liver lesion/tumor, Lung nodule/tumor, Abdomen lesions, Chest lymph node and others are identified by our computerized algorithm rather than by radiologists' annotation, which may be infeasible at this scale. After obtaining the dataset, we develop an automatic lesion detection approach that jointly localizes and classifies lesion candidates using the discovered categories. Last, we investigate how the unsupervisedly learned pseudo lesion labels affect the deep CNN training strategies and the quantitative performance of our proposed multi-class lesion detector.
2 Methods
In this section, we first describe how our DeepLesion dataset is constructed. Next, we
propose an unsupervised deep learning method to mine the latent lesion categories in
each image. This method involves an iterative process of deep image feature extraction,
image clustering and CNN model retraining. Finally, we present a multi-class object
detection approach to detect lesions of multiple categories.
Fig. 1. Three sample bookmarked images illustrated with annotation lesion patches (i.e., yellow
dashed boxes). The outputs from our proposed multi-category lesion detection framework are
shown in colored boxes with LiVer lesion (LV) in Red, Lung Nodule (LN) in Orange, ABdomen
lesion (AB) in Green, Chest Lymph node (CL) in magenta and other MiXed lesions (MX) in blue.
(a) A spleen metastasis is correctly detected along with several liver and abdomen metastases; (b)
Two large lymph nodes in mediastinum are all correctly detected; (c) All three lung nodules are
detected despite two small ones not being annotated in this bookmarked image.
2.1 DeepLesion Dataset
Radiologists routinely annotate hundreds of clinically meaningful findings in medical
images, using arrows, lines, diameters or segmentations to highlight and measure different disease patterns to be reported. These images, called “bookmarked images”, have
been collected over close to two decades in our institute’s Picture Archiving and Communication Systems (PACS). Without loss of generality, in this work, we study one type
of bookmark in CT images: lesion diameters. Each pair of lesion diameters consists of
two lines, one measuring the longest diameter and the second measuring its longest
perpendicular diameter in the plane of measurement. We extract the lesion diameter coordinates from the PACS server and convert them into positions in image-plane coordinates, denoted {(x11, y11), (x12, y12)}; {(x21, y21), (x22, y22)}. A bounding box (left_x, top_y, width, height) is computed to cover a rectangular area enclosing the lesion measurement with 20 pixels of padding in each direction, i.e., (xmin − 20, ymin − 20, xmax − xmin + 40, ymax − ymin + 40), where xmin = min(x11, x12, x21, x22) and xmax = max(x11, x12, x21, x22), and similarly for ymin and ymax. The padding
range can capture the lesion’s full spatial extent with sufficient image context. We thus
generate 33,688 bookmarked radiology images from 10,825 studies of 4,477 unique patients, and each bookmarked image is associated with a bounding box annotation of the
enclosed lesion. Sample bookmarked images and bounding boxes are shown in Fig. 1.
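A minimal sketch of this bounding-box construction is given below (illustrative only; the function and variable names are ours, not from the released dataset):

```python
# Sketch: bounding box from the two measured diameters, padded by 20 pixels per side.
def lesion_bbox(d1, d2, pad=20):
    """d1, d2: ((x1, y1), (x2, y2)) endpoints of the long and perpendicular diameters."""
    xs = [d1[0][0], d1[1][0], d2[0][0], d2[1][0]]
    ys = [d1[0][1], d1[1][1], d2[0][1], d2[1][1]]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    # (left_x, top_y, width, height) with `pad` pixels of context in each direction
    return (xmin - pad, ymin - pad, xmax - xmin + 2 * pad, ymax - ymin + 2 * pad)

print(lesion_bbox(((100, 80), (140, 120)), ((130, 90), (110, 110))))  # (80, 60, 80, 80)
```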
2.2 Unsupervised Lesion Categorization
The images in our constructed lesion dataset contain several types of lesions commonly
observed by radiologists, such as lung nodule/lesion, lymph node, and liver/kidney
lesion. However, no detailed precise category labels for each measured lesion have
been provided. Obtaining such labels from radiologists would be highly tedious and time-consuming, due to the vast size and comprehensiveness of DeepLesion. To address this
Fig. 2. Lesion categorization framework via unsupervised and iteratively-optimized deep CNNs.
problem, we propose a looped deep optimization procedure for automated category discovery, which generates visually coherent and clinically-semantic image clusters. Our
algorithm is conceptually simple: it is based on the hypothesis that the optimization procedure will “converge” to more accurate labels, which will lead to better trained CNN
models. Such models, in turn, will generate more representative deep image features,
which will allow for creating more meaningful lesion labels via clustering.
As a pre-processing step, we crop the lesion patches from the original DICOM
slides using the dilated bounding boxes (described in Sec. 2.1) and resize them, prior
to feeding them into the CNN model. As shown in Fig. 2, our iterative deep learning
process begins by extracting deep CNN features for each lesion patch using the ImageNet [2] pre-trained VGG-16 [9] network. Next, it applies k-means clustering to the
deep feature encoded lesion patches after k is determined via model selection [5]. Next,
it fine-tunes the current VGG-16 using the new image labels obtained from k-means.
This yields an updated CNN model for the next iteration. The optimization cycle terminates once the convergence criteria have been satisfied.
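The outer loop of this procedure can be sketched as follows (an illustrative skeleton only, not the authors' Caffe implementation; the feature extractor and fine-tuning step are stubs so the snippet runs end to end, and the actual convergence test uses the purity/NMI measures sketched further below):

```python
# Schematic skeleton of the cluster-then-fine-tune loop of Fig. 2.
import numpy as np
from sklearn.cluster import KMeans

def extract_features(patches, model):
    # stub: replace with FC6/FC7 encoding of the resized lesion patches
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(patches), 4096))

def fine_tune(model, patches, labels):
    # stub: replace with retraining the CNN classifier on the current pseudo-labels
    return model

patches, k, model, prev = list(range(1000)), 5, None, None
for iteration in range(10):
    feats = extract_features(patches, model)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    if prev is not None and np.mean(labels == prev) > 0.9:  # crude stability check
        break
    model, prev = fine_tune(model, patches, labels), labels
```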
Encoding Lesion Patches using Deep CNN Features: The VGG-16 [9] CNN architecture is adopted for patch encoding and CNN model fine-tuning to facilitate the
iterative procedure. The image features extracted from the last fully-connected layer
(e.g., FC6/FC7 of VGG-16) are used, as they are able to capture both the visual appearance and the spatial layout of any lesion, with its surrounding context.
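For concreteness, FC6 activations can be read off a stock VGG-16 as in the following sketch (using torchvision rather than the authors' Caffe setup; purely illustrative):

```python
# Sketch: 4096-d FC6 descriptor of a preprocessed lesion patch from torchvision's VGG-16.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=None)  # load ImageNet weights in practice
vgg.eval()
fc6 = torch.nn.Sequential(
    vgg.features,                           # convolutional stack
    torch.nn.AdaptiveAvgPool2d((7, 7)),     # matches VGG's pooling before the classifier
    torch.nn.Flatten(),
    list(vgg.classifier.children())[0],     # first Linear layer = FC6
)
with torch.no_grad():
    patch = torch.randn(1, 3, 224, 224)     # stand-in for a resized lesion patch
    feat = fc6(patch)                       # shape (1, 4096)
```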
Convergence in Patch Clustering and Categorization: We hypothesize that the
newly generated clusters will converge to the “oracle” label clusters after undergoing several stages of cluster optimization. Two convergence measurements are employed:
Purity [5] and Normalized Mutual Information (NMI). We assess both criteria by computing empirical similarity scores between clustering results from two adjacent iterations. If the similarity score exceeds a pre-defined threshold, the optimal clustering
driven categorization of lesion patches has been attained. For each iteration, we randomly shuffle the lesion patches and divide the data into three subsets: training (75%),
validation (10%) and testing (15%). Therefore the “improving-then-saturating” trajectory of the CNN classification accuracy on the testing set can also indicate the convergence of the clustering labels (i.e., optimal image labels have been obtained).
2.3 Multi-category Lesion Detection
Using the bounding boxes (Sec. 2.1) and their corresponding newly generated pseudo-category labels (Sec. 2.2), we develop a multi-class lesion detector adapted from the
Fig. 3. Flow chart of the lesion detection algorithm. Bookmarked clinical annotations provide the
ground-truth bounding boxes of lesions for detector training. In detection, the dashed and solid
boxes indicate the ground-truth annotation and its predicted lesion detection, respectively.
Faster RCNN method [8]. An input image is first processed by several convolutional
and max pooling layers to produce feature maps, as shown in Fig. 3. Next, a region
proposal network (RPN) parses the feature maps and proposes candidate lesion regions.
It estimates the probability of “target/non-target” on a fixed set of anchors (candidate
regions) on each position of the feature maps. Furthermore, the location and size of each
anchor are fine-tuned via bounding box regression. Afterwards, the region proposals
and the feature maps are sent to a Region of Interest (RoI) pooling layer, which resamples the feature maps inside each proposal to a fixed size (we use 7×7 here). These
feature maps are then fed into several fully-connected layers that predict the confidence
scores for each lesion class and run another bounding box regression for further fine-tuning. Non-maximum suppression (NMS) is then applied to the resulting detections. Finally,
the system returns up to five detection proposals with the highest confidence scores
(> 0.5), as each image only has one bookmarked clinical annotation.
The ImageNet pretrained VGG-16 [9] model is adopted as the backbone of Faster
RCNN [8]. It is useful to remove the last pooling layer (pool4) in VGG-16 to enhance
the resolution of the feature map and to increase the sampling ratio of positive samples
(candidate regions that contain lesions). In our experiments, removing pool4 improves
the accuracy by ∼ 15%. It is critical to set the anchor sizes in RPN to fit the size of
ground-truth bounding boxes in DeepLesion dataset. Hence, we use anchors of three
scales (48, 72, 96) and aspect ratios (1:1, 1:2, 2:1) to cover most of the boxes.
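Under one common anchor parameterization (an assumption on our part; the exact convention in the authors' code may differ slightly), these nine boxes per feature-map position can be generated as follows:

```python
# Sketch: the 3 scales x 3 aspect ratios anchor set centered at a feature-map position.
import numpy as np

def make_anchors(cx, cy, scales=(48, 72, 96), ratios=(1.0, 0.5, 2.0)):
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)   # area ~ s^2, width:height = r
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)                           # 9 anchors as (x1, y1, x2, y2)

print(make_anchors(256, 256).shape)                  # (9, 4)
```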
For image preparation, we use the ground-truth lesion bounding boxes derived in
Sec. 2.1, incorporating enlarged spatial context. Each full-slice image in the detection phase is resized so that its longest dimension is 512 pixels. We then train the network
as demonstrated in Fig. 3 in a multi-task fashion: two classification and two regression
losses are jointly optimized. This end-to-end training strategy is more efficient than
the four-step method in the original Faster RCNN implementation [8]. During training,
each mini-batch has 4 images, and the number of region proposals per image is 32. We
use the Caffe toolbox and the Stochastic Gradient Descent (SGD) optimizer. The base
Fig. 4. Six sample detection results are illustrated with the annotation lesion patches as yellow
dashed boxes. The outputs from our proposed detection framework are shown in colored boxes
with LiVer lesion (LV) in Red, Lung Nodule (LN) in Orange, ABdomen lesion (AB) in Green,
Chest Lymph node (CL) in magenta and other MiXed lesions (MX) in blue. (a) Four lung lesions
are all correctly detected; (b) Two lymph nodes in the mediastinum are presented; (c) A Ground Glass Opacity (GGO) and a mass are detected in the lung; (d) An adrenal nodule; (e) Correct detections of both the small abdominal lymph node near the aorta and other metastases in the liver and spleen; (f) Two liver metastases are correctly detected. Three lung metastases are detected but erroneously classified as liver lesions.
learning rate is set to 0.001, and is reduced by a factor of 10 every 20K iterations. The
network generally converges within 60K iterations.
3 Results and Discussion
Our lesion categorization method in Sec. 2.2 partitions all lesion patches into k = 5 classes. After visual inspection supervised by a board-certified radiologist,
four common lesion categories are found, namely lung nodule/lesion, liver lesion, chest
lymph nodes and abdominal lesions (mainly kidney lesions and lymph nodes), with
high purity scores in each category (0.980 for Lung Nodule, 0.955 for Chest Lymph
Node, 0.805 for Liver Lesion and 0.995 for Abdomen Lesion). The per-cluster purity
scores are estimated through a visual assessment by an experienced radiologist using a
set of 800 randomly selected lesion patches (200 images per-category). The remaining
bookmarked annotations are treated as a “noisy” mixed lesion class. Our optimization
framework converges after six iterations, with a high purity score of 92.3% returned when assessing the statistical similarity or stability of the last two iterations. Meanwhile, the top-1 classification accuracy reaches 91.6%, and later fluctuates by ±2%.
Fig. 5. (a): Detection accuracy curves with different intersection-over-union (IoU) thresholds. (b):
Precision-Recall curves of single-class and five category lesion detection when IoU=0.5.
Cluster                                             1              2              3                  4                 5            Overall
Cluster category                               Liver lesion   Lung nodule   Abdomen lesions   Chest lymph node   Mixed lesions       —
Cluster size                                        774            837           1270               860              1292           5033
Averaged accuracy w/o categorization labels (%)    56.04          73.30          48.79              70.77             54.85          59.45
Averaged accuracy with categorization labels (%)   60.59          76.34          56.22              76.28             58.67          64.30
Table 1. Test detection accuracies of the five deep-learned pseudo lesion categories. Note that in the DeepLesion dataset, the available clinical annotations are only partial (i.e., lesion labels are missing).
Hence the actual detection accuracies in both configurations are significantly higher than the
above reported values since many “false positive” detections are later verified to be true lesions.
For lesion detection, all bookmarked images are divided into training (70%), validation (15%), and testing (15%) sets by randomly splitting the dataset at the patient level.
Although different lesion types may be present in an image, only one clinical annotation
per image is available. We adopt a straightforward evaluation criterion: 1) we take the
top one detected lesion candidate box (with the highest detection confidence score) per
testing image as the detection result; 2) if the intersection-over-union (IoU) between
this predicted box and the ground-truth clinical annotation box is larger than 0.5 (as
suggested by the PASCAL criterion [4]), the detection is regarded as correct, and vice
versa. The lesion category is not considered in this criterion. We denote this evaluation
metric as detection accuracy. The proposed multi-class lesion detector merely requires
88 ms to process a 512×512 test bookmarked image on a Titan X Maxwell GPU.
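For reference, the IoU test underlying this detection-accuracy metric amounts to the following (a straightforward sketch, not code from the paper):

```python
# Sketch: top-1 detection accuracy with the PASCAL-style IoU > 0.5 criterion.
def iou(a, b):
    """Boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def detection_accuracy(top_boxes, gt_boxes, thresh=0.5):
    """top_boxes[i]: highest-scoring predicted box for image i; gt_boxes[i]: its annotation."""
    hits = sum(iou(p, g) > thresh for p, g in zip(top_boxes, gt_boxes))
    return hits / len(gt_boxes)

print(detection_accuracy([(10, 10, 50, 50)], [(12, 12, 52, 52)]))  # 1.0
```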
Two lesion detection setups or configurations are examined: single-class (all annotation bounding boxes are considered as one abnormality class), and multi-class detection
(with pseudo-category labels). Some illustrative results from the multi-category lesion
detector on the testing set are shown in Fig. 4. It can be found that our developed detector is able to detect all five types of lesions and simultaneously provide the corresponding lesion category labels. Furthermore, some detection boxes currently considered as
8
false alarms actually belong to true lesions, because the lesion bounding boxes are only partially labeled by clinical bookmark annotations. Detailed statistics of the five deeply
discovered lesion clusters in the test set are provided in Table 1. This outlines the types
of lesions in the clusters that have been verified by radiologists. The averaged accuracy of single-class detection is 59.45% (testing), and this score becomes 64.3% for multi-class detection (testing). From Table 1, the multi-category detector also demonstrates accuracy improvements of 3∼8% per lesion cluster or category compared with the one-class abnormality detector. Single-class abnormality detection appears to be a more
challenging task since it tackles detecting various types of lesions at once. This validates
that better lesion detection models can be trained if we can perform unsupervised lesion
categorization from a large collection of retrospective clinical data.
The default IoU threshold is set as 0.5. Fig. 5 (a) illustrates the detection accuracy
curves of both detection models under different IoU thresholds. The multi-category lesion detector achieves better overall accuracy while being able to assign the lesion
labels at the same time. Fig. 5 (b) shows the corresponding detection precision-recall
curves. The performances of lung lesion detection and chest lymph node detection significantly outperform the one-class abnormality detection.
4 Conclusion
In this paper, we mine, categorize and detect one type of clinical annotations stored
in the hospital PACS system as a rich retrospective data source, to build a large-scale
Radiology lesion image database. We demonstrate the strong feasibility of employing
a new multi-category lesion detection paradigm via unified deep categorization and detection. Highly promising lesion categorization and detection performances, based on
the proposed dataset, are reported. To the best of our knowledge, this work is the first
attempt of building a scalable, multi-purpose CAD system using abundant retrospective
medical data. This is done almost effortlessly since no new arduous image annotation
workload is necessary. Our future work includes extending bookmarked images to incorporate their successive slices for scalable and precise lesion volume measurement, extracting and integrating lesion diagnosis priors from radiology text reports, and improving multi-category detection methods.
References
1. A. Ben-Cohen, I. Diamant, E. Klang, M. Amitai, and H. Greenspan. Fully convolutional
network for liver segmentation and lesions detection. In MICCAI LABELS-DLMIA, 2016.
2. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE CVPR, pages 248–255, 2009.
3. A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun.
Dermatologist-level classification of skin cancer with deep neural networks. Nature,
542(7639):115–118, 2017.
4. M. Everingham, A. Eslami, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The pascal
visual object classes challenge: A retrospective. Int. J. Comp. Vis., 111(1):98–136, 2015.
5. R. Gomes, A. Krause, and P. Perona. Discriminative clustering by regularized information
maximization. In NIPS, pages 775–783, 2010.
6. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L.
Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755, 2014.
7. J. Liu, D. Wang, Z. Wei, L. Lu, L. Kim, E. Turkbey, and R. M. Summers. Colitis detection
on computed tomography using regional convolutional neural networks. In IEEE ISBI, 2016.
8. S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with
region proposal networks. In NIPS, pages 91–99, 2015.
9. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. In ICLR. arxiv.org/abs/1409.1556, 2015.
10. N. Tajbakhsh, M. B. Gotway, and J. Liang. Computer-aided pulmonary embolism detection using a novel vessel-aligned multi-planar image representation and convolutional neural
networks. In MICCAI, pages 62–69. Springer, 2015.
| 1 |
SYZYGIES AND SINGULARITIES OF TENSOR PRODUCT
SURFACES OF BIDEGREE (2, 1)
arXiv:1211.1648v1 [math.AC] 7 Nov 2012
HAL SCHENCK, ALEXANDRA SECELEANU, AND JAVID VALIDASHTI
Abstract. Let U ⊆ H 0 (OP1 ×P1 (2, 1)) be a basepoint free four-dimensional
vector space. The sections corresponding to U determine a regular map φU :
P1 × P1 −→ P3 . We study the associated bigraded ideal IU ⊆ k[s, t; u, v]
from the standpoint of commutative algebra, proving that there are exactly six
numerical types of possible bigraded minimal free resolution. These resolutions
play a key role in determining the implicit equation for φU (P1 × P1 ), via work
of Busé–Jouanolou [5], Busé–Chardin [6], Botbol [2] and Botbol–Dickenstein–Dohm [3] on the approximation complex Z. In four of the six cases IU has
a linear first syzygy; remarkably from this we obtain all differentials in the
minimal free resolution. In particular this allows us to explicitly describe the
implicit equation and singular locus of the image.
1. Introduction
A central problem in geometric modeling is to find simple (determinantal or close
to it) equations for the image of a curve or surface defined by a regular or rational
map. For surfaces the two most common situations are when P1 × P1 −→ P3 or
P2 −→ P3 . Surfaces of the first type are called tensor product surfaces and surfaces
of the latter type are called triangular surfaces. In this paper we study tensor
product surfaces of bidegree (2, 1) in P3 . The study of such surfaces goes back to
the last century; see, for example, the works of Edge [17] and Salmon [26].
Let R = k[s, t, u, v] be a bigraded ring over an algebraically closed field k, with
s, t of degree (1, 0) and u, v of degree (0, 1). Let Rm,n denote the graded piece in
bidegree (m, n). A regular map P1 × P1 −→ P3 is defined by four polynomials
U = Span{p0 , p1 , p2 , p3 } ⊆ Rm,n
with no common zeros on P1 × P1 . We will study the case (m, n) = (2, 1), so
U ⊆ H^0(O_{P1×P1}(2, 1)) = V = Span{s^2u, stu, t^2u, s^2v, stv, t^2v}.
Let IU = ⟨p0, p1, p2, p3⟩ ⊂ R, φU be the associated map P1 × P1 −→ P3, and
XU = φU (P1 × P1 ) ⊆ P3 .
We assume that U is basepoint free, which means that
√IU = ⟨s, t⟩ ∩ ⟨u, v⟩.
We determine all possible numerical types of bigraded minimal free resolution for
IU , as well as the embedded associated primes of IU . Using approximation complexes, we relate the algebraic properties of IU to the geometry of XU . The next
example illustrates our results.
Key words and phrases. Tensor product surface, bihomogeneous ideal, Segre-Veronese map.
Schenck supported by NSF 1068754, NSA H98230-11-1-0170.
Example 1.1. Suppose U is basepoint free and IU has a unique first syzygy of
bidegree (0, 1). Then the primary decomposition of IU is given by Corollary 3.5,
and the differentials in the bigraded minimal free resolution are given by Proposition 3.2. For example, if U = Span{s^2u, s^2v, t^2u, t^2v + stv}, then by Corollary 3.5 and Theorem 3.3, the embedded primes of IU are ⟨s, t, u⟩ and ⟨s, t, v⟩, and by Proposition 3.2 the bigraded Betti numbers of IU are:
0 ← IU ← R(−2, −1)^4 ← R(−2, −2) ⊕ R(−3, −2)^2 ⊕ R(−4, −1)^2 ← R(−4, −2)^2 ← 0
Having the differentials in the free resolution allows us to use the method of approximation complexes to determine the implicit equation: it follows from Theorem 7.1
that the image of φU is the hypersurface
XU = V(x0 x1² x2 − x1² x2² + 2x0 x1 x2 x3 − x0² x3²).
Theorem 7.3 shows that the reduced codimension one singular locus of XU is V(x0, x2) ∪ V(x1, x3) ∪ V(x0, x1); Figure 1 shows XU on the open set Ux0. The key feature of this example is that there is a linear syzygy of bidegree (0, 1):
v · (s2 u) − u · (s2 v) = 0.
In Lemmas 3.1 and 4.1 we show that with an appropriate choice of generators for
IU , any bigraded linear first syzygy has the form above. Existence of a bidegree
(0, 1) syzygy implies that the pullbacks to P1 × P1 of the two linear forms defining
P(U ) share a factor. Theorem 8.5 connects this to work of [19].
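Both claims in Example 1.1 can be checked mechanically. The following SymPy sketch (an illustrative aside, not part of the original argument) substitutes the parametrization into the displayed quartic and expands the bidegree (0, 1) syzygy; both expressions reduce to zero.

# Illustrative check of Example 1.1 (assumes SymPy is available).
from sympy import symbols, expand

s, t, u, v, x0, x1, x2, x3 = symbols('s t u v x0 x1 x2 x3')

# Generators of I_U from Example 1.1.
p0, p1, p2, p3 = s**2*u, s**2*v, t**2*u, t**2*v + s*t*v

# The claimed implicit equation of X_U.
F = x0*x1**2*x2 - x1**2*x2**2 + 2*x0*x1*x2*x3 - x0**2*x3**2

# Substituting the parametrization gives 0, so the image lies on V(F).
print(expand(F.subs({x0: p0, x1: p1, x2: p2, x3: p3})))  # prints 0

# The linear first syzygy of bidegree (0, 1): v*p0 - u*p1 = 0.
print(expand(v*p0 - u*p1))  # prints 0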
1.1. Previous work on the (2, 1) case. For surfaces in P3 of bidegree (2, 1), in
addition to the classical work of Edge, Salmon and others, more recently Degan [13]
studied such surfaces with basepoints and Zube [30], [31] describes the possibilities
for the singular locus. In [18], Elkadi-Galligo-Lê give a geometric description of the
image and singular locus for a generic U and in [19], Galligo-Lê follow up with an
analysis for the nongeneric case. A central part of their analysis is the geometry of
a certain dual scroll which we connect to syzygies in §8.
Cox, Dickenstein and Schenck study the bigraded commutative algebra of a three
dimensional basepoint free subspace W ⊆ R2,1 in [11], showing that there are two
numerical types of possible bigraded minimal free resolution of IW, determined by how P(W) ⊆ P(R2,1) = P5 meets the image Σ2,1 of the Segre map σ2,1 : P2 × P1 → P5.
If W is basepoint free, then there are two possibilities: either P(W ) ∩ Σ2,1 is a finite
set of points, or a smooth conic. The current paper extends the work of [11] to the
more complicated setting of a four dimensional space of sections. A key difference
is that for a basepoint free subspace W of dimension three, there can never be a
linear syzygy on IW . As illustrated in the example above, this is not true for the
four dimensional case. It turns out that the existence of a linear syzygy provides a
very powerful tool for analyzing both the bigraded commutative algebra of IU , as
well as for determining the implicit equation and singular locus of XU . In studying
the bigraded commutative algebra of IU , we employ a wide range of tools
• Approximation complexes [2], [3], [5], [6], [7].
• Bigraded generic initial ideals [1].
• Geometry of the Segre-Veronese variety [22].
• Fitting ideals and Mapping cones [15].
• Connection between associated primes and Ext modules [16].
• Buchsbaum-Eisenbud exactness criterion [4].
1.2. Approximation complexes. The key tool in connecting the syzygies of IU
to the implicit equation for XU is an approximation complex, introduced by Herzog-Simis-Vasconcelos in [23], [24]. We give more details of the construction in §7. The basic idea is as follows: let RI = R ⊕ IU ⊕ IU² ⊕ ···. Then the graph Γ of the map φU is equal to BiProj(RI) and the embedding of Γ in (P1 × P1) × P(U) corresponds to the ring map s : S = R[x0, . . . , x3] → RI given by xi ↦ pi. Let β denote the kernel of s, so β1 consists of the syzygies of IU and SI = SymR(I) = S/β1. Then
Γ ⊆ BiProj(SI) ⊆ BiProj(S).
The works [3], [5], [6], [2] show that if U is basepoint free, then the implicit equation
for XU may be extracted from the differentials of a complex Z associated to the
intermediate object SI and in particular the determinant of the complex is a power
of the implicit equation. In bidegree (2, 1), a result of Botbol [2] shows that the
implicit equation may be obtained from a 4 × 4 minor of d1 ; our work yields an
explicit description of the relevant minor.
1.3. Main results. The following two tables describe our classification. Type
refers to the graded Betti numbers of the bigraded minimal free resolution for IU :
we prove there are six numerical types possible. Proposition 6.3 shows that the
only possible embedded primes of IU are m = hs, t, u, vi or Pi = hli , s, ti, where li
is a linear form of bidegree (0, 1). While Type 5a and 5b have the same bigraded
Betti numbers, Proposition 3.2 and Corollary 3.5 show that both the embedded
primes and the differentials in the minimal resolution differ. We also connect our
classification to the reduced, codimension one singular locus of XU . In the table
below T denotes a twisted cubic curve, C a smooth plane conic and Li a line.
Type | Lin. Syz.     | Emb. Pri. | Sing. Loc.    | Example
 1   | none          | m         | T             | s2u+stv, t2u, s2v+stu, t2v+stv
 2   | none          | m, P1     | C ∪ L1        | s2u, t2u, s2v+stu, t2v+stv
 3   | 1 type (1,0)  | m         | L1            | s2u+stv, t2u, s2v, t2v+stu
 4   | 1 type (1,0)  | m, P1     | L1            | stv, t2v, s2v−t2u, s2u
 5a  | 1 type (0,1)  | P1, P2    | L1 ∪ L2 ∪ L3  | s2u, s2v, t2u, t2v+stv
 5b  | 1 type (0,1)  | P1        | L1 ∪ L2       | s2u, s2v, t2u, t2v+stu
 6   | 2 type (0,1)  | none      | ∅             | s2u, s2v, t2u, t2v
Table 1.
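As a quick sanity check on the last row (an aside, not part of the original text): for the Type 6 example s2u, s2v, t2u, t2v one has (s2u)(t2v) − (s2v)(t2u) = 0, so the image lies on the quadric V(x0x3 − x1x2) ≅ Σ1,1, and there are two independent syzygies of bidegree (0, 1), namely v·(s2u) − u·(s2v) = 0 and v·(t2u) − u·(t2v) = 0; compare Proposition 3.6 below.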
The next table gives the possible numerical types for the bigraded minimal free
resolutions, where we write (i, j) for the rank one free module R(i, j). We prove
more: for Types 3, 4, 5 and 6, we determine all the differentials in the minimal free
resolution. One striking feature of Table 1 is that if IU has a linear first syzygy
(i.e. of bidegree (0, 1) or (1, 0)), then the codimension one singular locus of XU is
either empty or a union of lines. We prove this in Theorem 7.3.
Type | Bigraded Minimal Free Resolution of IU for U basepoint free
 1 | 0 ← IU ← (−2,−1)^4 ← (−2,−4) ⊕ (−3,−2)^4 ⊕ (−4,−1)^2 ← (−3,−4)^2 ⊕ (−4,−2)^3 ← (−4,−4) ← 0
 2 | 0 ← IU ← (−2,−1)^4 ← (−2,−3) ⊕ (−3,−2)^4 ⊕ (−4,−1)^2 ← (−3,−3)^2 ⊕ (−4,−2)^3 ← (−4,−3) ← 0
 3 | 0 ← IU ← (−2,−1)^4 ← (−2,−4) ⊕ (−3,−1) ⊕ (−3,−2)^2 ⊕ (−3,−3) ⊕ (−4,−2) ⊕ (−5,−1) ← (−3,−4)^2 ⊕ (−4,−3)^2 ⊕ (−5,−2)^2 ← (−4,−4) ⊕ (−5,−3) ← 0
 4 | 0 ← IU ← (−2,−1)^4 ← (−2,−3) ⊕ (−3,−1) ⊕ (−3,−2)^2 ⊕ (−4,−2) ⊕ (−5,−1) ← (−3,−3) ⊕ (−4,−3) ⊕ (−5,−2)^2 ← (−5,−3) ← 0
 5 | 0 ← IU ← (−2,−1)^4 ← (−2,−2) ⊕ (−3,−2)^2 ⊕ (−4,−1)^2 ← (−4,−2)^2 ← 0
 6 | 0 ← IU ← (−2,−1)^4 ← (−2,−2)^2 ⊕ (−4,−1)^2 ← (−4,−2) ← 0
Table 2.
2. Geometry and the Segre-Veronese variety
Consider the composite maps
(2.1)   P1 × P1 → P(H0(OP1(2))) × P(H0(OP1(1))) → P(H0(OP1×P1(2,1))) −π→ P(U),
where φU denotes the composite map P1 × P1 → P(U).
The first horizontal map is ν2 × id, where ν2 is the 2-uple Veronese embedding and
the second horizontal map is the Segre map σ2,1 : P2 × P1 → P5 . The image of
σ2,1 is a smooth irreducible nondegenerate cubic threefold Σ2,1 . Any P2 ⊆ Σ2,1 is
a fiber over a point of the P1 factor and any P1 ⊆ Σ2,1 is contained in the image of
a fiber over P2 or P1 . For this see Chapter 2 of [21], which also points out that the
Segre and Veronese maps have coordinate free descriptions
  νd : P(A) −→ P(Symd A),     σ : P(A) × P(B) −→ P(A ⊗ B).
By dualizing we may interpret the image of νd as the variety of dth powers of linear
forms on A and the image of σ as the variety of products of linear forms. The
composition τ = σ2,1 ◦ (ν2 × id) is a Segre-Veronese map, with image consisting
of polynomials which factor as l1 (s, t)2 · l2 (u, v). Note that Σ2,1 is also the locus
of polynomials in R2,1 which factor as q(s, t) · l(u, v), with q ∈ R2,0 and l ∈ R0,1 .
Since q ∈ R2,0 factors as l1 · l2 , this means Σ2,1 is the locus of polynomials in R2,1
which factor completely as products of linear forms. As in the introduction,
U ⊆ H 0 (OP1 ×P1 (2, 1)) = V = Span{s2 u, stu, t2 u, s2 v, stv, t2 v}.
The ideal of Σ2,1 is defined by the two by two minors of
  [ x0  x1  x2 ]
  [ x3  x4  x5 ].
It will also be useful to understand the intersection of P(U ) with the locus of
polynomials in V which factor as the product of a form q = a0 su+a1 sv+a2 tu+a3 tv
of bidegree (1, 1) and l = b0 s + b1 t of bidegree (1, 0). This is the image of the map
P(H 0 (OP1 ×P1 (1, 1))) × P(H 0 (OP1 ×P1 (1, 0))) = P3 × P1 −→ P5 ,
(a0 : a1 : a2 : a3 ) × (b0 : b1 ) 7→ (a0 b0 : a0 b1 + a2 b0 : a2 b1 : a1 b0 : a1 b1 + a3 b0 : a3 b1 ),
which is a quartic hypersurface
Q = V(x2² x3² − x1 x2 x3 x4 + x0 x2 x4² + x1² x3 x5 − 2x0 x2 x3 x5 − x0 x1 x4 x5 + x0² x5²).
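Both descriptions can be verified directly; the following SymPy sketch (an illustrative aside, not part of the original text) checks that the 2 × 2 minors above vanish on the monomial parametrization of Σ2,1 and that the quartic Q vanishes on the parametrization displayed above.

# Illustrative check (assumes SymPy is available).
from sympy import symbols, expand, Matrix

s, t, u, v = symbols('s t u v')
a0, a1, a2, a3, b0, b1 = symbols('a0 a1 a2 a3 b0 b1')

# Coordinates x0,...,x5 correspond to s^2*u, s*t*u, t^2*u, s^2*v, s*t*v, t^2*v.
M = Matrix([[s**2*u, s*t*u, t**2*u],
            [s**2*v, s*t*v, t**2*v]])
minors = [M[0, i]*M[1, j] - M[0, j]*M[1, i] for i, j in [(0, 1), (0, 2), (1, 2)]]
print([expand(m) for m in minors])  # [0, 0, 0]: the minors cut out Sigma_{2,1}

# Image of (a0:a1:a2:a3) x (b0:b1) under the product map, in the same coordinates.
x0, x1, x2, x3, x4, x5 = (a0*b0, a0*b1 + a2*b0, a2*b1,
                          a1*b0, a1*b1 + a3*b0, a3*b1)
Q = (x2**2*x3**2 - x1*x2*x3*x4 + x0*x2*x4**2 + x1**2*x3*x5
     - 2*x0*x2*x3*x5 - x0*x1*x4*x5 + x0**2*x5**2)
print(expand(Q))  # prints 0, so the image lies on the quartic hypersurface Q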
As Table 1 shows, the key to classifying the minimal free resolutions is understanding the linear syzygies. In §3, we show that if IU has a first syzygy of bidegree
(0, 1), then after a change of coordinates, IU = hpu, pv, p2 , p3 i and if IU has a first
syzygy of bidegree (1, 0), then IU = hps, pt, p2 , p3 i.
Proposition 2.1. If U is basepoint free, then the ideal IU
(1) has a unique linear syzygy of bidegree (0, 1) iff F ⊆ P(U ) ∩ Σ2,1 , where F
is a P1 fiber of Σ2,1 .
(2) has a pair of linear syzygies of bidegree (0, 1) iff P(U ) ∩ Σ2,1 = Σ1,1 .
(3) has a unique linear syzygy of bidegree (1, 0) iff F ⊆ P(U ) ∩ Q, where F is
a P1 fiber of Q.
Proof. The ideal IU has a unique linear syzygy of bidegree (0, 1) iff qu, qv ∈ IU ,
with q ∈ R2,0 iff q · l(u, v) ∈ IU for all l(u, v) ∈ R0,1 iff P(U ) ∩ Σ2,1 contains the P1
fiber over the point q ∈ P(R2,0 ).
For the second item, the reasoning above implies that P(U ) ∩ Σ2,1 contains two
P1 fibers, over points q1 , q2 ∈ P(R2,0 ). But then IU also contains the line in P(R2,0 )
connecting q1 and q2 , as well as the P1 lying over any point on the line, yielding a
P1 × P1 .
For the third part, a linear syzygy of bidegree (1, 0) means that qs, qt ∈ IU , with
q ∈ R1,1 iff q · l(s, t) ∈ IU for all l(s, t) ∈ R1,0 iff P(U ) ∩ Q contains the P1 fiber over
the point q ∈ P(R1,1 ).
In Theorem 4.6, we show that Proposition 2.1 describes all possible linear syzygies.
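For instance (an illustrative aside), in the Type 5a example of Example 1.1, IU = ⟨s2u, s2v, t2u, t2v + stv⟩ has the unique bidegree (0, 1) syzygy v · (s2u) − u · (s2v) = 0, and correspondingly P(U) ∩ Σ2,1 contains the P1 fiber {s2 · l(u, v) : l ∈ R0,1} over the point s2 ∈ P(R2,0), exactly as in part (1) of Proposition 2.1.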
3. First syzygies of bidegree (0, 1)
Our main result in this section is a complete description of the minimal free resolution when IU has a first syzygy of bidegree (0, 1). As a consequence, if IU has a
unique first syzygy of bidegree (0, 1), then the minimal free resolution has numerical
Type 5 and if there are two linear first syzygies of bidegree (0, 1), the minimal free
resolution has numerical Type 6. We begin with a simple observation
Lemma 3.1. If IU has a linear first syzygy of bidegree (0, 1), then
IU = hpu, pv, p2 , p3 i,
where p is homogeneous of bidegree (2, 0).
Proof. Rewrite the syzygy
  ∑_{i=0}^{3} (ai u + bi v) pi = 0 = u · ∑_{i=0}^{3} ai pi + v · ∑_{i=0}^{3} bi pi,
and let g0 = ∑ ai pi, g1 = ∑ bi pi. The relation above implies that (g0, g1) is a syzygy on (u, v). Since the syzygy module of (u, v) is generated by the Koszul syzygy, this means
  [ g0 ]        [ −v ]
  [ g1 ]  = p · [  u ]
for a form p homogeneous of bidegree (2, 0).
A similar argument applies if IU has a first syzygy of degree (1, 0). Lemma 3.1
has surprisingly strong consequences:
Proposition 3.2. If U is basepoint free and IU has a unique linear first syzygy of
bidegree (0, 1), then there is a complex of free R modules
F• : 0 −→ F3 −φ3→ F2 −φ2→ F1 −φ1→ IU −→ 0,
where φ1 = [p0 p1 p2 p3], with ranks and shifts matching Type 5 in Table 2.
Explicit formulas appear in the proof below. The differentials φi depend on whether
p = L1 (s, t)L2 (s, t) of Lemma 3.1 has L1 = L2 .
Proof. Since IU has a syzygy of bidegree (0, 1), by Lemma 3.1, IU = hpu, pv, p2 , p3 i.
Case 1: Suppose p = l(s, t)2 , then after a change of coordinates, p = s2 , so p0 = s2 u
and p1 = s2 v. Eliminating terms from p2 and p3 , we may assume
p2
p3
= stl1 (u, v) + t2 l2 (u, v) = t(sl1 (u, v) + tl2 (u, v))
= stl3 (u, v) + t2 l4 (u, v) = t(sl3 (u, v) + tl4 (u, v)).
Let li (u, v) = ai u + bi v and define
A(u, v) = [ l1  l2 ]
          [ l3  l4 ].
Note that det A(u, v) = q(u, v) 6= 0. The rows cannot be dependent, since U
spans a four dimensional subspace. If the columns are dependent, then {p2 , p3 } =
{tl1 (s + kt), tl2 (s + kt)}, yielding another syzygy of bidegree (0, 1), contradicting
our hypothesis. In the proof of Corollary 3.4, we show the hypothesis that U is
basepoint free implies that A(u, v) is a 1-generic matrix, which means that A(u, v)
cannot be made to have a zero entry using row and column operations. We obtain
a first syzygy of bidegree (2, 0) as follows:
s2 p2
= s3 tl1 + s2 t2 l2
= (a1 st + a2 t2 )s2 u + (b1 st + b2 t2 )s2 v
= (a1 st + a2 t2 )p0 + (b1 st + b2 t2 )p1
A similar relation holds for stp3 , yielding two first syzygies of bidegree (2, 0). We
next consider first syzygies of bidegree (1, 1). There is an obvious syzygy on p2 , p3
given by
(sl1 (u, v) + tl2 (u, v))p3 = (sl3 (u, v) + tl4 (u, v))p2
Since det A(u, v) = q(u, v) ≠ 0, from
  t²q = l3 p2 − l1 p3,
  stq = l4 p2 − l2 p3,
and the fact that q(u, v) = L1(u, v)L2(u, v) with Li(u, v) = αi u + βi v, we obtain a pair of relations of bidegree (1, 1):
  s l4 p2 − s l2 p3 = s²t L1 L2 = (α1 tL2) s²u + (β1 tL2) s²v.
Case 2: p = l(s, t) · l0 (s, t) with l, l0 independent linear forms. Then after a change
of coordinates, p = st, so p0 = stu and p1 = stv. Eliminating terms from p2 and
p3 , we may assume
p2 = s2 l1 (u, v) + t2 l2 (u, v)
p3 = s2 l3 (u, v) + t2 l4 (u, v).
Let li (u, v) = ai u + bi v. We obtain a first syzygy of bidegree (2, 0) as follows:
stp2
= s3 tl1 + st3 l2
= s2 (stl1 ) + t2 (stl2 )
= (a1 s2 + a2 t2 )stu + (b1 s2 + b2 t2 )stv
A similar relation holds for stp3 , yielding two first syzygies of bidegree (2, 0). We
next consider first syzygies of bidegree (1, 1). Since q(u, v) ≠ 0, from
  t²q = l3 p2 − l1 p3,
  s²q = l4 p2 − l2 p3,
and the fact that q(u, v) = L1 (u, v)L2 (u, v) with Li (u, v) = αi u + βi v, we have
relations
sl3 p2 − sl1 p3 = st2 L1 L2 = (α1 tL2 )stu + (β1 tL2 )stv
tl4 p2 − tl2 p3 = ts2 L1 L2 = (α1 sL2 )stu + (β1 sL2 )stv,
which yield a pair of first syzygies of bidegree (1, 1). Putting everything together,
we now have candidates for the differential φ2 in both cases. Computations exactly
like those above yield similar candidates for φ3 in the two cases. In Case 1, we have
  φ2 = [  v   α1tL2     0            a1st + a2t²   a3st + a4t² ]
       [ −u   β1tL2     0            b1st + b2t²   b3st + b4t² ]
       [  0   −sl4      sl3 + tl4    −s²           0           ]
       [  0   sl2       −sl1 − tl2   0             −s²         ],

  φ3 = [  γ    δ   ]
       [  s    t   ]
       [  0    s   ]
       [ −l4  −l3  ]
       [  l2   l1  ],

where
  δ = −α1β2 t² + (a1st + a2t²)b3 − (a3st + a4t²)b1,
  γ = −α1β2 st + (a1st + a2t²)b4 − (a3st + a4t²)b2.

For IU as in Case 2, let

  φ2 = [  v   α1tL2    α1sL2    a1s² + a2t²   a3s² + a4t² ]
       [ −u   β1tL2    β1sL2    b1s² + b2t²   b3s² + b4t² ]
       [  0   −sl3     −tl4     −st           0           ]
       [  0   sl1      tl2      0             −st         ],

  φ3 = [  γ    δ   ]
       [  0    t   ]
       [  s    0   ]
       [ −l4  −l3  ]
       [  l2   l1  ],

where
  γ = (−α1β2 + a1b4 − a3b2)s² + (a2b4 − a4b2)t²,
  δ = (a1b3 − a3b1)s² + (−α1β2 + a2b3 − a4b1)t².

We have already shown that im(φ2) ⊆ ker(φ1), and an easy check shows that im(φ3) ⊆ ker(φ2), yielding a complex of free modules of numerical Type 5.
To prove that the complex above is actually exact, we use the following result of
Buchsbaum and Eisenbud [4]: a complex of free modules
F : · · · −→ Fi −φi→ Fi−1 −→ · · · −→ F1 −φ1→ F0,
is exact iff
(1) rank(φi+1 ) + rank(φi ) = rank(Fi ).
(2) depth(Irank(φi ) (φi )) ≥ i.
Theorem 3.3. The complexes appearing in Proposition 3.2 are exact.
Proof. Put F0 = R. An easy check shows that rank(φi+1 ) + rank(φi ) = rank(Fi ),
so what remains is to show that depth(I2 (φ3 )) ≥ 3 and depth(I3 (φ2 )) ≥ 2. The
fact that s - γ will be useful: to see this, note that both Case 1 and Case 2, s | γ iff
a2 b4 − b2 a4 = 0, which implies l2 and l4 differ only by a scalar, contradicting the
assumption that U is basepoint free.
Case 1: We have us4 , vs4 ∈ I3 (φ2 ). Consider the minor
λ = t2 L2 (sl1 + tl2 ) ((α1 b1 − β1 a1 )s + (α1 b2 − β1 a2 )t)
obtained from the submatrix
  [ α1tL2   0            a1st + a2t² ]
  [ β1tL2   0            b1st + b2t² ]
  [ sl2     −sl1 − tl2   0           ]
Note that s does not divide λ, for if s|λ then either l2 = 0 or α1 b2 − β1 a2 = 0. But
none of the li can be zero because of the basepoint free assumption (see the proof
of Corollary 3.4) and if α1 b2 − β1 a2 = 0, then L1 and l2 are the same up to a scalar
multiple. Hence, since l1 l4 − l2 l3 = L1 L2 , we obtain l1 is equal to a scalar multiple
of l2 or l3 , which again violates the basepoint free assumption. To conclude, note
that u and v can not divide λ at the same time, therefore, λ and one of the us4 and
vs4 form a regular sequence in I3 (φ2 ), showing that depth of I3 (φ2 ) is at least 2.
To show that depth(I2 (φ3 )) ≥ 3, note that
I2 (φ3 ) = hsl2 , sl4 , tl4 − sl3 , tl2 − sl1 , tγ − sδ, sγ, s2 , q(u, v)i
Since l2 , l4 are independent, hsl2 , sl4 i = hsu, svi and using these we can reduce
tl4 − sl3 , tl2 − sl1 to tu, tv. Since s - γ, modulo s2 , sγ reduces to st2 . Similarly,
tγ − sδ reduces to t3 , so that in fact
I2 (φ3 ) = hsu, sv, tu, tv, s2 , st2 , t3 , q(u, v)i,
and {s2 , t3 , q(u, v)} is a regular sequence of length three.
Case 2: We have us2 t2 , vs2 t2 ∈ I3 (φ2 ). Consider the minor
  λ = L2 (s²l1 − t²l2) ((α1b1 − β1a1)s² + (α1b2 − β1a2)t²)
arising from the submatrix
  [ α1tL2   α1sL2   a1s² + a2t² ]
  [ β1tL2   β1sL2   b1s² + b2t² ]
  [ sl1     tl2     0           ]
Note that s and t do not divide λ, for if s|λ then either l2 = 0 or α1 b2 − β1 a2 = 0.
But none of the li can be zero because of the basepoint free assumption (see the
proof of Corollary 3.4) and if α1 b2 − β1 a2 = 0, then L1 and l2 are the same up
to a scalar multiple. Hence, since l1 l4 − l2 l3 = L1 L2 , we obtain l1 is equal to a
scalar multiple of l2 or l3 , contradicting basepoint freeness. Furthermore, u and
v cannot divide λ at the same time, so λ and one of the us2 t2 and vs2 t2 form a
regular sequence in I3 (φ2 ). To show that depth(I2 (φ3 )) ≥ 3, note that
I2 (φ3 ) = hsu, sv, tu, tv, st, tγ, sδ, q(u, v)i,
where we have replaced sli , tlj as in Case 1. If t | δ, then a1 b3 − b1 a3 = 0, which
would mean l1 = kl3 and contradict that U is basepoint free. Since s - γ and
t - δ, {tγ, sδ, q(u, v)} is regular unless δ, γ share a common factor η = (as + bt).
Multiplying out and comparing coefficients shows that this forces γ and δ to agree up
to scalar. Combining this with the fact that t - δ, s - δ, we find that δ = as2 +bst+ct2
with a 6= 0 6= c. Reducing sδ and tδ by st then implies that t3 , s3 ∈ I2 (φ3 ).
Corollary 3.4. If U is basepoint free, then IU cannot have first syzygies of both
bidegree (0, 1) and bidegree (1, 0).
Proof. Suppose there is a first syzygy of bidegree (0, 1) and proceed as in the proof
of Proposition 3.2. In the setting of Case 1,
IU = hstu, stv, s2 l1 (u, v) + t2 l2 (u, v), s2 l3 (u, v) + t2 l4 (u, v)i.
If there is also a linear syzygy of bidegree (1, 0), expanding out ∑(ai s + bi t)pi shows that the coefficient of s³ is a3 l1 + a4 l3, and the coefficient of t³ is b3 l2 + b4 l4. Since ∑(ai s + bi t)pi = 0, both coefficients must vanish. In the proof of Proposition 3.2 we showed det A(u, v) = q(u, v) ≠ 0. In fact, more is true: if any of the li is zero, then
U is not basepoint free. For example, if l1 = 0, then ht, l3 i is a minimal associated
prime of IU . Since a3 l1 + a4 l3 = 0 iff a3 = a4 = 0 or l1 is a scalar multiple of l3 and
the latter situation implies that U is not basepoint free, we must have a3 = a4 = 0.
Reasoning similarly for b3 l2 + b4 l4 shows that a3 = a4 = b3 = b4 = 0. This implies
the linear syzygy of bidegree (1, 0) can only involve stu, stv, which is impossible.
This proves the result in Case 1 and similar reasoning works for Case 2.
Corollary 3.5. If IU has a unique linear first syzygy of bidegree (0, 1), then IU
has either one or two embedded prime ideals of the form hs, t, Li (u, v)i. If q(u, v) =
det A(u, v) = L1 (u, v)L2 (u, v) for A(u, v) as in Theorem 3.3, then:
(1) If L1 = L2 , then the only embedded prime of IU is hs, t, L1 i.
(2) If L1 6= L2 , then IU has two embedded primes hs, t, L1 i and hs, t, L2 i.
Proof. In [16], Eisenbud, Huneke and Vasconcelos show that a prime P of codimension c is associated to R/I iff it is associated to Extc (R/I, R). If IU has
a unique linear syzygy, then the free resolution is given by Proposition 3.2, and
Ext3 (R/IU , R) = coker(φt3 ). By Proposition 20.6 of [15], if φ is a presentation
matrix for a module M , then the radicals of ann(M ) and Irank(φ) (φ) are equal.
Thus, if IU has a Type 5 resolution, the codimension three associated primes are
the codimension three associated primes of I2 (φ3 ). The proof of Theorem 3.3 shows
that in Case 1,
  I2(φ3) = ⟨su, sv, tu, tv, s², st², t³, q(u, v)⟩
         = ⟨s², st², t³, u, v⟩ ∩ ⟨s, t, L1²⟩                   if L1 = L2
         = ⟨s², st², t³, u, v⟩ ∩ ⟨s, t, L1⟩ ∩ ⟨s, t, L2⟩        if L1 ≠ L2.
The embedded prime associated to hs, t, u, vi is not an issue, since we are only
interested in the codimension three associated primes. The proof for Case 2 works
in the same way.
Next, we tackle the case where the syzygy of bidegree (0, 1) is not unique.
Proposition 3.6. If U is basepoint free, then the following are equivalent
(1) The ideal IU has two linear first syzygies of bidegree (0, 1).
(2) The primary decomposition of IU is
IU = hu, vi ∩ hq1 , q2 i,
where √⟨q1, q2⟩ = ⟨s, t⟩ and qi are of bidegree (2, 0).
(3) The minimal free resolution of IU is of numerical Type 6.
(4) XU ' Σ1,1 .
Proof. By Lemma 3.1, since IU has a linear syzygy of bidegree (0, 1),
IU = hq1 u, q1 v, p2 , p3 i.
Proceed as in the proof of Proposition 3.2. In Case 1, the assumption that q1 = st
means that p2 , p3 can be reduced to have no terms involving stu and stv, hence
there cannot be a syzygy of bidegree (0, 1) involving pi and q1 u, q1 v. Therefore
the second first syzygy of bidegree (0, 1) involves only p2 and p3 and the reasoning
in the proof of Lemma 3.1 implies that hp2 , p3 i = hq2 u, q2 vi. Thus, we have the
primary decomposition
IU = hq1 , q2 i ∩ hu, vi,
with q1, q2 of bidegree (2, 0). Since U is basepoint free, √⟨q1, q2⟩ = ⟨s, t⟩, so q1 and
q2 are a regular sequence in k[s, t]. Similar reasoning applies in the situation of
Case 2. That the minimal free resolution is of numerical Type 6 follows from the
primary decomposition above, which determines the differentials in the minimal
free resolution:
  IU ←− (−2, −1)^4 ←−φ−− (−2, −2)^2 ⊕ (−4, −1)^2 ←−ψ−− (−4, −2) ←− 0,
where, with respect to the generators uq1, vq1, uq2, vq2,
  φ = [  v    0    q2    0   ]
      [ −u    0    0     q2  ]
      [  0    v   −q1    0   ]
      [  0   −u    0    −q1  ],
  ψ = [ q2  −q1  −v  u ]^T.
The last assertion follows since
  det [ uq1  uq2 ]
      [ vq1  vq2 ]  = 0.
Hence, the image of φU is contained in V(xy − zw) = Σ1,1 . After a change of
coordinates, q1 = s2 + ast and q2 = t2 + bst, with 0 6= a 6= b 6= 0. Therefore on the
open set Us,u ⊆ P1 × P1 the map is defined by
(at + 1, (at + 1)v, t2 + bt, (t2 + bt)u),
so the image is a surface. Finally, if XU = Σ1,1 , then with a suitable choice of basis
for U , p0 p3 − p1 p2 = 0, hence p0 |p1 p2 and p3 |p1 p2 . Since U is four dimensional, this
means we must have p0 = αβ, p1 = αγ, p2 = βδ. Without loss of generality, suppose
β is quadratic, so there is a linear first syzygy δp0 − αp2 = 0. Arguing similarly
for p3 , we find that there are two independent linear first syzygies. Lemma 4.4 of
the next section shows that if U is basepoint free, then there can be at most one
first syzygy of bidegree (1, 0), so by Corollary 3.4, IU must have two first syzygies
of bidegree (0, 1).
4. First syzygies of bidegree (1, 0)
Recall that there is an analogue of Lemma 3.1 for syzygies of bidegree (1, 0):
Lemma 4.1. If IU has a linear syzygy of bidegree (1, 0), then
IU = hps, pt, p2 , p3 i,
where p is homogeneous of bidegree (1, 1).
Lemma 4.1 has strong consequences as well: we will prove that
Proposition 4.2. If U is basepoint free and IU = hps, pt, p2 , p3 i, then
(1) IU has numerical Type 4 if and only if p is decomposable.
(2) IU has numerical Type 3 if and only if p is indecomposable.
We begin with some preliminary lemmas:
Lemma 4.3. If IU has a first syzygy of bidegree (1, 0), then IU has two minimal
syzygies of bidegree (1, 1) and if p in Lemma 4.1 factors, then IU also has a minimal
first syzygy of bidegree (0, 2).
Proof. First assume p is an irreducible bidegree (1, 1) form, then p = a0 su + a1 sv +
a2 tu + a3 tv, with a0 a3 − a1 a2 6= 0. We may assume a0 6= 0 and scale so it is one.
Then
sp = s2 u + a1 s2 v + a2 stu + a3 stv
tp = stu + a1 stv + a2 t2 u + a3 t2 v
p2 = b0 t2 u + b1 s2 v + b2 stv + b3 t2 v
p3 = c0 t2 u + c1 s2 v + c2 stv + c3 t2 v
Here we have used tp and sp to remove all the terms involving s2 u and stu from p2
and p3 . A simple but tedious calculation then shows that
p · p2
p · p3
= sp(b1 sv + b2 tv) + tp(b0 tu + b3 tv)
= sp(c1 sv + c2 tv) + tp(c0 tu + c3 tv)
Now suppose that p = L1 (s, t) · L2 (u, v) with L1 , L2 linear forms, then after a
change of coordinates, p = su and a (possibly new) set of minimal generators for
IU is hs2 u, stu, p2 , p3 i. Eliminating terms from p2 and p3 , we may assume
p2
p3
= as2 v + bstv + t2 l1
= cs2 v + dstv + t2 l2 ,
where li = li (u, v) ∈ R(0,1) . There are two first syzygies of bidegree (1, 1):
  su·p2 = su(as²v + bstv + t²l1) = as³uv + bs²tuv + st²ul1 = (as + bt)v · s²u + tl1 · stu = (as + bt)v · p0 + tl1 · p1,
  su·p3 = su(cs²v + dstv + t²l2) = cs³uv + ds²tuv + st²ul2 = (cs + dt)v · s²u + tl2 · stu = (cs + dt)v · p0 + tl2 · p1.
A syzygy of bidegree (0, 2) is obtained via:
u(l2 p3 − l1 p2 )
= u(as2 vl2 + bstvl2 − cs2 vl1 − dstvl1 )
= (al2 − cl1 )v · s2 u + (bl2 − dl1 )v · stu
= (al2 − cl1 )v · p0 + (bl2 − dl1 )v · p1
Lemma 4.4. If U is basepoint free, then there can be at most one linear syzygy of
bidegree (1, 0).
Proof. Suppose IU has a linear syzygy of bidegree (1, 0), so that sp, tp ∈ IU , with
p = su+a1 sv+a2 tu+a3 tv. Note this takes care of both possible cases of Lemma 4.3:
in Case 1, a3 − a1a2 = 0 (e.g. for p = su, a1 = a2 = a3 = 0) and in Case 2, a3 − a1a2 ≠ 0. Now suppose another syzygy of bidegree (1, 0) exists: S = ∑(di s + ei t)pi = 0. Expanding shows that
S = d0 s3 u + (e0 + d1 )s2 tu + · · ·
So after reducing S by the Koszul syzygy on hsp, tpi, d0 = d1 = e0 = 0. In p2 and
p3 , one of b1 or c1 must be non-zero. If not, then all of tp, p2 , p3 are divisible by t
and since
sp |t=0 = s2 (u + a1 v),
this would mean ht, u + a1 vi is an associated prime of IU , contradicting basepoint
freeness. WLOG b1 6= 0, scale it to one and use it to remove the term c1 s2 v from
p3 . This means the coefficient of s3 v in S is d2 b1 = d2 , so d2 vanishes. At this
point we have established that S does not involve sp, and involves only t on tp, p2 .
Now change generators so that p02 = e1 pt + e2 p2 . This modification does not affect
tp, sp and IU , but now S involves only p02 and p3 :
(t)p02 + (d3 s + e3 t)p3 = 0.
As in the proof of Lemma 3.1, letting p002 = p02 + e3 p3 and p003 = d3 p3 , we see that
S = tp002 + sp003 = 0, so that p002 = sq and p003 = −tq, hence
IU = hs, ti ∩ hp, qi
with p, q both of bidegree (1, 1). But on P1 × P1 , V(p, q) is always nonempty, which
would mean IU has a basepoint, contradicting our hypothesis.
Remark 4.5. If the P1 fibers of Q did not intersect, Lemma 4.4 would follow easily
from Lemma 4.1. However, because Q is a projection of Σ3,1 ⊆ P7 to P5 , the P1
fibers of Q do in fact intersect.
Theorem 4.6. If U is basepoint free, then the only possibilities for linear first
syzygies of IU are
(1) IU has a unique first syzygy of bidegree (0, 1) and no other linear syzygies.
(2) IU has a pair of first syzygies of bidegree (0, 1) and no other linear syzygies.
(3) IU has a unique first syzygy of bidegree (1, 0) and no other linear syzygies.
Proof. It follows from Proposition 3.2 and Proposition 3.6 that both of the first two
items can occur. That there cannot be three or more linear syzygies of bidegree
(0, 1) follows easily from the fact that if there are two syzygies of bidegree (0, 1)
then IU has the form of Proposition 3.6 and the resolution is unique. Corollary 3.4
shows there cannot be linear syzygies of both bidegree (1, 0) and bidegree (0, 1) and
Lemma 4.4 shows there can be at most one linear syzygy of bidegree (1, 0).
Our next theorem strengthens Lemma 4.3: there is a minimal first syzygy of
bidegree (0, 2) iff the p in Lemma 4.1 factors. We need a pair of lemmas:
Lemma 4.7. If P(U ) contains a P2 fiber of Σ2,1 , then U is not basepoint free.
Proof. If P(U ) contains a P2 fiber of Σ2,1 over a point of P1 corresponding to a
linear form l(u, v), after a change of basis l(u, v) = u and so
IU = hs2 u, stu, t2 u, l1 (s, t)l2 (s, t)vi.
This implies that hu, l1 (s, t)i ∈ Ass(IU ), so U is not basepoint free.
The next lemma is similar to a result of [11], but differs due to the fact that the
subspaces P(W ) ⊆ P(V ) studied in [11] are always basepoint free.
Lemma 4.8. If U is basepoint free, then there is a minimal first syzygy on IU of
bidegree (0, 2) iff there exists P(W ) ' P2 ⊆ P(U ) such that P(W ) ∩ Σ2,1 is a smooth
conic.
Proof. Suppose ∑ qi pi = 0 is a minimal first syzygy of bidegree (0, 2), so that qi = ai u² + bi uv + ci v². Rewrite this as
  u² · ∑ ai pi + uv · ∑ bi pi + v² · ∑ ci pi = 0
and define f0 = ∑ ai pi, f1 = ∑ bi pi, f2 = ∑ ci pi. By construction, ⟨f0, f1, f2⟩ ⊆ IU and [f0, f1, f2] is a syzygy on [u², uv, v²], so
  [ f0 ]         [  v ]         [  0 ]    [  αv      ]
  [ f1 ]  = α ·  [ −u ]  + β ·  [  v ]  = [  βv − αu ]
  [ f2 ]         [  0 ]         [ −u ]    [ −βu      ]
for some α, β ∈ R2,0.
If {f0 , f1 , f2 } are not linearly independent, there exist constants ci with
c0 αv + c1 (βv − αu) − c2 βu = 0.
This implies that (c0 α + c1 β)v = (c1 α − c2 β)u, so α = kβ. But then {αu, αv} ⊆ IU ,
which means there is a minimal first syzygy of bidegree (0, 1), contradicting the
classification of §2. Letting W = Span{f0 , f1 , f2 }, we have that P2 ' P(W ) ⊆ P(U ).
The actual bidegree (0, 2) syzygy is
  det [  v    0   f0 ]
      [ −u    v   f1 ]  = 0.
      [  0   −u   f2 ]
To see that the P(W ) meets Σ2,1 in a smooth conic, note that by Lemma 4.7, P(W )
cannot be equal to a P2 fiber of Σ2,1 , or P(U ) would have basepoints. The image
of the map P1 → P(W ) defined by
(x : y) 7→ x2 (αv) + xy(βv − αu) + y 2 (−βu) = (xα + yβ)(xv − yu)
is a smooth conic C ⊆ P(W ) ∩ Σ2,1 . Since P(W ) ∩ Σ2,1 is a curve of degree at most
three, if this is not the entire intersection, there would be a line L residual to C.
If L ⊆ Fx , where Fx is a P2 fiber over x ∈ P1 , then for small , Fx+ also meets
P(W ) in a line, which is impossible. If L is a P1 fiber of Σ2,1 , this would result in
a bidegree (0, 1) syzygy, which is impossible by the classification of §2.
Definition 4.9. A line l ⊆ P(s2 , st, t2 ) with l = af + bg, f, g ∈ Span{s2 , st, t2 } is
split if l has a fixed factor: for all a, b ∈ P1 , l = L(aL0 + bL00 ) with L ∈ R1,0 .
Theorem 4.10. If U is basepoint free, then IU has minimal first syzygies of bidegree
(1, 0) and (0, 2) iff
P(U ) ∩ Σ2,1 = C ∪ L,
0
where L ' P(W ) is a split line in a P2 fiber of Σ2,1 and C is a smooth conic in
P(W ), such that P(W 0 ) ∩ P(W ) = C ∩ L is a point and P(W 0 ) + P(W ) = P(U ).
Proof. Suppose there are minimal first syzygies of bidegrees (1, 0) and (0, 2). By
Lemma 4.8, the (0, 2) syzygy determines a conic C in a distinguished P(W ) ⊆ P(U ).
Every point of C lies on both a P2 and P1 fiber of Σ2,1 . No P1 fiber of Σ2,1 is
contained in P(U ), or there would be a first syzygy of bidegree (0, 1), which is
impossible by Corollary 3.4. By Lemma 4.1, there exists W 0 = Span{ps, pt} ⊆ U ,
so we have a distinguished line P(W 0 ) ⊆ P(U ). We now consider two possibilities:
Case 1: If p factors, then P(W 0 ) is a split line contained in Σ2,1 , which must
therefore be contained in a P2 fiber and p = L(s, t)l(u, v), where l(u, v) corresponds
to a point of a P1 fiber of Σ2,1 and P(W 0 )∩P(W ) is a point. In particular P(U )∩Σ2,1
is the union of a line and conic, which meet transversally at a point.
Case 2: If p does not factor, then p = a0 su+a1 sv+a2 tu+a3 tv, a0 a3 −a1 a2 6= 0. The
corresponding line L = P(ps, pt) meets P(W ) in a point and since p is irreducible
L ∩ C = ∅. Since W = Span{αv, βv − αu, −βv}, we must have
p · L0 (s, t) = aαv − bβu + c(βv − αu)
for some {a, b, c}, where L0 (s, t) = b0 s + b1 t corresponds to L ∩ P(W ). Write
p · L0 (s, t) as l1 uL0 (s, t) + l2 vL0 (s, t), where
l1 (s, t) = a0 s + a2 t
l2 (s, t) = a1 s + a3 t
Then
  p · L0(s, t) = l1(s, t)L0(s, t)u + l2(s, t)L0(s, t)v
             = aαv − bβu + c(βv − αu)
             = (−bβ − cα)u + (aα + cβ)v.
In particular,
(4.1)   [ a  c ] [ α ]   [  l2 L0 ]
        [ c  b ] [ β ] = [ −l1 L0 ].
If
  det [ a  c ]
      [ c  b ]  ≠ 0,
then applying Cramer’s rule to Equation 4.1 shows that α and β share a common
factor L0 (s, t). But then W = Span{L0 γ, L0 δ, L0 }, which contradicts the basepoint freeness of U : change coordinates so L0 = s, so U = sγ, sδ, s, p3 . Since
p3 |s=0 = t2 l(u, v), hs, l(u, v)i is an associated prime of IU , a contradiction. To
conclude, consider the case ab − c2 = 0. Then there is a constant k such that
  [  a    c  ] [ α ]   [  l2 L0 ]
  [ ka   kc  ] [ β ] = [ −l1 L0 ],
which forces kl1 (s, t) = −l2 (s, t). Recalling that l1 (s, t) = a0 s + a2 t and l2 (s, t) =
a1 s + a3 t, this implies that
  det [ a0  a2 ]
      [ a1  a3 ]  = 0,
contradicting the irreducibility of p.
This shows that if IU has minimal first syzygies of bidegree (1, 0) and (0, 2), then
P(U ) ∩ Σ2,1 = C ∪ L meeting transversally at a point. The remaining implication
follows from Lemma 4.3 and Lemma 4.8.
For W ⊆ H 0 (OP1 ×P1 (2, 1)) a basepoint free subspace of dimension three, the
minimal free resolution of IW is determined in [11]: there are two possible minimal free resolutions, which depend only on whether P(W) meets Σ2,1 in a finite set
of points, or a smooth conic C. By Theorem 4.10, if there are minimal first syzygies of bidegrees (1, 0) and (0, 2), then U contains a W with P(W ) ∩ Σ2,1 = C,
which suggests building the Type 4 resolution by choosing W = Span{p0 , p1 , p2 }
to satisfy P(W ) ∩ Σ2,1 = C and constructing a mapping cone. There are two
problems with this approach. First, there does not seem to be an easy description
for hp0 , p1 , p2 i : p3 . Second, recall that the mapping cone resolution need not be
minimal. A computation shows that the shifts in the resolutions of hp0 , p1 , p2 i : p3
and hp0 , p1 , p2 i overlap and there are many cancellations. However, choosing W to
consist of two points on L and one on C solves both of these problems at once.
Lemma 4.11. In the setting of Theorem 4.10, let p0 correspond to L ∩ C and let
p1 , p2 correspond to points on L and C (respectively) distinct from p0 . If W =
Span{p0 , p1 , p2 }, then IW has a Hilbert Burch resolution. Choosing coordinates so
W = Span{sLv, tLv, βu}, the primary decomposition of IW is
  ∩_{i=1}^{4} Ii = ⟨s, t⟩² ∩ ⟨u, v⟩ ∩ ⟨β, v⟩ ∩ ⟨L, u⟩.
Proof. After a suitable change of coordinates, W = Span{sLv, tLv, βu}, and writing β = (a0 s + a1 t)L0 (s, t), IW consists of the two by two minors of
  φ2 = [  t   a0 uL0 ]
       [ −s   a1 uL0 ]
       [  0   Lv     ].
Hence, the minimal free resolution of IW is
  0 ←− IW ←− (−2, −1)^3 ←−φ2−− (−3, −1) ⊕ (−3, −2) ←− 0.
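Indeed, the three 2 × 2 minors of φ2 are tLv, −sLv and uL0(a0s + a1t) = βu, which (up to sign) recover the chosen generators of IW.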
For the primary decomposition, since L does not divide β,
hβ, vi ∩ hL, ui = hβL, vL, βu, vui,
and intersecting this with hs, ti2 ∩ hu, vi gives IW .
Lemma 4.12. If U is basepoint free, W = Span{sLv, tLv, βu} and p3 = αu−βv =
tLu − βv, then
IW : p3 = hβL, vL, βu, vui.
Proof. First, our choice of p0 to correspond to C ∩ L in Theorem 4.10 means we
may write α = tL. Since hs, ti2 : p3 = 1 = hu, vi : p3 ,
IW : p3 = (∩4i=1 Ii ) : p3 = (hβ, vi : p3 ) ∩ (hL, ui : p3 ).
Since p3 = tLu − βv, f p3 ∈ hβ, vi iff f tLu ∈ hβ, vi. Since tL = α and α, β and
u, v are relatively prime, this implies f ∈ hβ, vi. The same argument shows that
hL, ui : p3 must equal hL, ui.
Theorem 4.13. In the situation of Theorem 4.10, the minimal free resolution is
of Type 4. If p0 corresponds to L ∩ C, p1 6= p0 to another point on L and p2 6= p0 to
a point on C and W = Span{p0 , p1 , p2 }, then the minimal free resolution is given
by the mapping cone of IW and IW : p3 .
Proof. We construct a mapping cone resolution from the short exact sequence
·p3
0 ←− R/IU ←− R/IW ←− R(−2, −1)/IW : p3 ←− 0.
By Lemma 4.12,
IW : p3 = hβL, Lv, βu, uvi,
which by the reasoning in the proof of Proposition 3.6 has minimal free resolution:
  IW : p3 ←− (−3, 0) ⊕ (−1, −1) ⊕ (−2, −1) ⊕ (0, −2) ←− (−3, −1)^2 ⊕ (−2, −2) ⊕ (−1, −2) ←− (−3, −2) ←− 0,
where, with respect to the generators βL, Lv, βu, uv, the differentials may be taken to be
  [  v    u    0    0  ]
  [ −β    0    0    u  ]      and      [ u  −v  −L  β ]^T.
  [  0   −L    v    0  ]
  [  0    0   −β   −L  ]
A check shows that there are no overlaps in the mapping cone shifts, hence the
mapping cone resolution is actually minimal.
4.1. Type 3 resolution. Finally, suppose IU = hp0 , p1 , p2 , p3 i with p0 = ps and
p1 = pt, such that p = a0 su + a1 sv + a2 tu + a3 tv is irreducible, so a0 a3 − a1 a2 6= 0.
As in the case of Type 4, the minimal free resolution will be given by a mapping
cone. However, in Type 3 the construction is more complicated: we will need two
mapping cones to compute the resolution. What is surprising is that by a judicious
change of coordinates, the bigrading allows us to reduce IU so that the equations
have a very simple form.
Theorem 4.14. If U is basepoint free and IU = hps, pt, p2 , p3 i with p irreducible,
then the IU has a mapping cone resolution, and is of numerical Type 3.
Proof. Without loss of generality, assume a0 = 1. Reducing p2 and p3 mod ps and
pt, we have
p2 = b0 t2 u + b1 s2 v + b2 stv + b3 t2 v
p3 = c0 t2 u + c1 s2 v + c2 stv + c3 t2 v
Since U is basepoint free, either b0 or c0 is nonzero, so after rescaling and reducing
p3 mod p2
p2 = t2 u + b1 s2 v + b2 stv + b3 t2 v
p3 =
c1 s2 v + c2 stv + c3 t2 v
=
(c1 s2 + c2 st + c3 t2 )v
=
L1 L2 v
for some Li ∈ R1,0 . If the Li ’s are linearly independent, then a change of variable
replaces L1 and L2 with s and t. This transforms p0 , p1 to p0 l1 , p0 l2 , but since the
li ’s are linearly independent linear forms in s and t, hp0 l1 , p0 l2 i = hp0 s, p0 ti, with p0
irreducible. So we may assume
IU = hps, pt, p2 , p3 i, where p3 = stv or s2 v.
With this change of variables,
p2 = l2 u + (b01 s2 + b02 st + b03 t2 )v
where l = as + bt and b 6= 0. We now analyze the two possible situations. First,
suppose p3 = stv. Reducing p2 modulo hps, pt, stvi yields
p2
=
=
αt2 u + b001 s2 v + b003 t2 v
(αu + b003 v)t2 + b001 s2 v
By basepoint freeness, α 6= 0, so changing variables via αu + b003 v 7→ u yields
p2 = t2 u + b001 s2 v. Notice this change of variables does not change the form of the
other pi . Now b001 6= 0 by basepoint freeness, so rescaling t (which again preserves
the form of the other pi ) shows that
IU = hps, pt, t2 u + s2 v, stvi.
Since p is irreducible, p = sl1 + tl2 with li are linearly independent elements of R0,1 .
Changing variables once again via l1 7→ u and l2 7→ v, we have
IU = hs(su + tv), t(su + tv), stQ1 , s2 Q1 + t2 Q2 i,
where Q1 = au + bv, Q2 = cu + dv are linearly independent with b 6= 0. Rescaling
v and s we may assume b = 1. Now let
IW = hs(su + tv), t(su + tv), stQ1 i.
The minimal free resolution of IW is
  0 ←− IW ←− (−2, −1)^3 ←− (−3, −1) ⊕ (−3, −2) ←− 0,
where the differential has columns (t, −s, 0) and (0, sQ1, −p).
To obtain a mapping cone resolution, we need to compute IW : p2 . As in the Type
4 setting, we first find the primary decomposition for IW .
(1) If a = 0, then
  IW = ⟨s(su + tv), t(su + tv), stv⟩ = ⟨u, v⟩ ∩ ⟨s, t⟩² ∩ ⟨u, t⟩ ∩ ⟨su + tv, (s, v)²⟩ = ∩_{i=1}^{4} Ii.
(2) If a ≠ 0, rescale u and t by a so Q1 = u + v. Then
  IW = ⟨s(su + tv), t(su + tv), st(u + v)⟩ = ⟨u, v⟩ ∩ ⟨s, t⟩² ∩ ⟨v, s⟩ ∩ ⟨u, t⟩ ∩ ⟨u + v, s − t⟩ = ∩_{i=1}^{5} Ii.
Since s²Q1 + t²Q2 ∈ I1 ∩ I2 in both cases, IW : (s²Q1 + t²Q2) = ∩_{i=3}^{n} Ii. So if a = 0,
  IW : (s²Q1 + t²Q2) = (⟨u, t⟩ : (s²Q1 + t²Q2)) ∩ (⟨su + tv, (s, v)²⟩ : (s²Q1 + t²Q2))
                     = ⟨u, t⟩ ∩ ⟨su + tv, (s, v)²⟩
                     = ⟨su + tv, uv², tv², stv, s²t⟩
                     = ⟨su + tv, I3(φ)⟩,
while if a ≠ 0,
  IW : (s²Q1 + t²Q2) = (⟨v, s⟩ : (s²Q1 + t²Q2)) ∩ (⟨u, t⟩ : (s²Q1 + t²Q2)) ∩ (⟨u + v, s − t⟩ : (s²Q1 + t²Q2))
                     = ⟨v, s⟩ ∩ ⟨u, t⟩ ∩ ⟨u + v, s − t⟩
                     = ⟨su + tv, uv(u + v), tv(u + v), tv(s − t), st(s − t)⟩
                     = ⟨su + tv, I3(φ)⟩
where
  φ = [ t   0    0 ]                      [ t   0      0 ]
      [ u   s    0 ]    if a = 0,   φ =   [ u   s − t  0 ]    if a ≠ 0.
      [ 0   v    s ]                      [ 0   u + v  s ]
      [ 0   0    v ]                      [ 0   0      v ]
Since I3 (φ) has a Hilbert-Burch resolution, a resolution of IW : p2 = hsu+tv, I3 (φ)i
can be obtained as the mapping cone of I3 (φ) with I3 (φ) : p. There are no overlaps,
so the result is a minimal resolution. However, there is no need to do this, because
the change of variables allows us to do the computation directly and we find
  0 ← IW : p2 ←− (−1, −1) ⊕ (−1, −2) ⊕ (−2, −1) ⊕ (0, −3) ⊕ (−3, 0) ←− (−1, −3)^2 ⊕ (−2, −2)^2 ⊕ (−3, −1)^2 ←− (−2, −3) ⊕ (−3, −2) ← 0.
This concludes the proof if p3 = stv. When p3 = s2 v, the argument proceeds in
similar, but simpler, fashion.
5. No linear first syzygies
5.1. Hilbert function.
Proposition 5.1. If U is basepoint free, then there are six types of bigraded Hilbert
function in one-to-one correspondence with the resolutions of Table 2. The tables
below contain the values of hi,j = HF((i, j), R/IU) for i ≤ 5 and j ≤ 4, listed in the order corresponding to the six numerical types in Table 2.
Type 1:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   1   0   0
   3  |  4   0   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0

Type 2:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   1   1   1
   3  |  4   0   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0

Type 3:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   1   0   0
   3  |  4   1   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0

Type 4:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   1   1   1
   3  |  4   1   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0

Type 5:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   2   2   2
   3  |  4   0   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0

Type 6:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   3   4   5
   3  |  4   0   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0
The entries of the first two rows and the first column are clear:
h0,j = HF ((0, j), R) = j + 1, h1,j = HF ((1, j), R) = 2j + 2
and hi,0 = HF ((i, 0), R) = i + 1. Furthermore h2,1 = 2 by the linear independence
of the minimal generators of IU . The proof of the proposition is based on the
following lemmas concerning hi,j , i ≥ 2, j ≥ 1.
Lemma 5.2. For IU an ideal generated by four independent forms of bidegree (2, 1)
(1) h3,1 is the number of bidegree (1, 0) first syzygies
(2) h2,2 − 1 is the number of bidegree (0, 1) first syzygies
Proof. From the free resolution
  0 ← R/IU ← R ← R(−2, −1)^4 ←− R(−3, −1)^A ⊕ R(−2, −3)^B ⊕ F1 ←− F2 ←− F3 ← 0,
we find
h3,1 = HF ((3, 1), R) − 4HF ((1, 0), R) + AHF ((0, 0), R) = 8 − 8 + A = A
h2,2 = HF ((2, 2), R) − 4HF ((0, 1), R) + BHF ((0, 0), R) = 9 − 8 + B = B + 1,
since Fi are free R-modules generated in degree (i, j) with i > 3 or j > 2.
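Here HF((i, j), R) = (i + 1)(j + 1), so HF((3, 1), R) = 8, HF((1, 0), R) = 2, HF((2, 2), R) = 9 and HF((0, 1), R) = 2, which gives the two displayed values.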
Lemma 5.3. If U is basepoint free, then h3,2 = 0 for every numerical type.
Proof. If there are no bidegree (1, 0) syzygies then HF ((3, 1), R/IU ) = 0 and consequently HF ((3, 2), R/IU ) = 0. If there are bidegree (1, 0) syzygies then we are in
Type 3 or 4 where by Proposition 4.2 we know the relevant part of the resolution
is
  0 ← R/IU ← R ← R(−2, −1)^4 ←− R(−3, −1) ⊕ R(−3, −2)^2 ⊕ F1 ←− F2 ←− F3 ← 0.
Then
h3,2 = HF ((3, 2), R) − 4HF ((1, 1), R) + HF ((0, 1), R) + 2HF ((0, 0), R)
= 12 − 16 + 2 + 2 = 0.
So far we have determined the following shape of the Hilbert function of R/IU :
  i\j |  0   1     2     3     4
   0  |  1   2     3     4     5
   1  |  2   4     6     8    10
   2  |  3   2     h2,2  h2,3  h2,4
   3  |  4   h3,1  0     0     0
   4  |  5   h4,1  0     0     0
   5  |  6   h5,1  0     0     0
If linear syzygies are present we know from the previous section the exact description of the possible minimal resolutions of IU and it is an easy check that they
agree with the last four Hilbert functions in Proposition 5.1. Next we focus on the
case when no linear syzygies are present. By Lemma 5.2 this yields h2,2 = 1 and
h3,1 = 0, hence hi,1 = 0 for i ≥ 3. We show that in the absence of linear syzygies
only the first two Hilbert functions in Proposition 5.1 may occur:
5.2. Types 1 and 2. In the following we assume that the basepoint free ideal
IU has no linear syzygies. We first determine the maximal numerical types which
correspond to the Hilbert functions found in §4.1 and then we show that only the
Betti numbers corresponding to linear syzygies cancel.
Proposition 5.4. If U is basepoint free and IU has no linear syzygies, then
(1) IU cannot have two or more linearly independent bidegree (0, 2) first syzygies
(2) IU cannot have two minimal first syzygies of bidegrees (0, 2), (0, j), j > 2
(3) IU has a single bidegree (0, 2) minimal syzygy iff h2,j = 1 for j ≥ 3
(4) IU has no bidegree (0, 2) minimal syzygy iff h2,j = 0 for j ≥ 3
Proof. (1) Suppose IU has two linearly independent bidegree (0, 2) first syzygies
which can be written down by a similar procedure to the one used in Lemma 3.1
as
u2 p + uvq + v 2 r = 0
u2 p0 + uvq 0 + v 2 r0 = 0
with p, q, r, p0 , q 0 , r0 ∈ U . Write p = p1 u + p2 v with p1 , p2 ∈ R2,0 and similarly for
p0 , q, q 0 , r, r0 . Substituting in the equations above one obtains
  p1 = 0,    p2 + q1 = 0,    q2 + r1 = 0,    r2 = 0,
  p′1 = 0,   p′2 + q′1 = 0,  q′2 + r′1 = 0,  r′2 = 0,
hence
  p = p2 v,              p′ = p′2 v,
  q = −(p2 u + r1 v),    q′ = −(p′2 u + r′1 v),
  r = r1 u,              r′ = r′1 u
are elements of IU . If both of the pairs p2 v, p02 v or r1 u, r10 u consists of linearly
independent elements of R2,1 , then U ∩ Σ2,1 contains a P1 inside each of the P2
fibers over the points corresponding to u, v in the P1 factor of the map P2 × P1 7→
P5 . Pulling back the two lines from Σ2,1 to the domain of its defining map, one
obtains two lines in P2 which must meet (or be identical). Taking the image of
the intersection point we get two elements of the form αu, αv ∈ IU which yield
a (0, 1) syzygy, thus contradicting our assumption. Therefore it must be the case
that p02 = ap2 or r10 = br1 with a, b ∈ k. The reasoning being identical, we shall
only analyze the case p02 = ap2 . A linear combination of the elements q = −(p2 u +
r1 v), q 0 = −(p02 u + r10 v) ∈ IU produces (r10 − ar1 )v ∈ IU and a linear combination
of the elements r1 u, r10 u ∈ IU produces (r10 − ar1 )u ∈ IU , hence again we obtain a
(0, 1) syzygy unless r10 = ar1 . But then (p0 , q 0 , r0 ) = a(p, q, r) and these triples yield
linearly dependent bidegree (0, 2) syzygies.
(2) The assertion that IU cannot have a bidegree (0, 2) and a (distinct) bidegree
(0, j), j ≥ 2 minimal first syzygies is proved by induction on j. The base case j = 2
has already been solved. Assume IU has a degree (0, 2) syzygy u2 p + uvq + v 2 r = 0
with p = p1 u + p2 v, q, r ∈ IU and a bidegree (0, j) syzygy uj w1 + uj−1 vw2 + . . . +
v j wj+1 = 0 with wi = yi u + zi v ∈ IU . Then as before p1 = 0, z1 = 0, r2 = 0, zj+1 =
0 and the same reasoning shows one must have z1 = ap2 or yj+1 = br1 . Again we
handle the case z1 = ap2 where a linear combination of the two syzygies produces
the new syzygy
uj−1 v(w2 − aq) + uj−2 v 2 (w3 − ar) + uj−3 v 3 (w3 ) . . . + v j wj+1 = 0.
Dividing by v: uj−1 (w2 − aq) + uj−2 v(w3 − ar) + uj−3 v 2 (w3 ) . . . + v j−1 wj+1 = 0,
which is a minimal bidegree (0, j − 1) syzygy iff the original (0, j) syzygy was
minimal. This contradicts the induction hypothesis.
(3) An argument similar to Lemma 5.2 shows that in the absence of (0, 1) syzygies
h2,3 is equal to the number of bidegree (0, 2) syzygies on IU . Note that the absence
of (0, 1) syzygies implies there can be no bidegree (0, 2) second syzygies of IU to
cancel the effect of bidegree (0, 2) first syzygies on the Hilbert function. This covers
the converse implications of both (3) and (4) as well as the case j = 3 of the direct
implications. The computation of h2,j , j ≥ 3 is completed as follows
  h2,j = HF((2, j), R) − HF((2, j), R(−2, −1)^4) + HF((2, j), R(−2, −3))
       = 3(j + 1) − 4j + (j − 2) = 1
(4) In this case we compute
h2,3 = HF ((2, 3), R) − HF ((2, 3), R(−2, −1)4 ) = 12 − 12 = 0
HF ((2, 4), R) − HF ((2, 4), R(−2, −1)4 ) = 15 − 16 = −1
The fact that h2,j = 0 for higher values of j follows from h2,3 = 0. In fact even
more is true: IU is forced to have a single bidegree (0, 3) first syzygy to ensure that
h2,j = 0 for j ≥ 4.
Corollary 5.5. There are only two possible Hilbert functions for basepoint free
ideals IU without linear syzygies, depending on whether there is no (0, 2) syzygy or
exactly one (0, 2) syzygy. The two possible Hilbert functions are
No bidegree (0, 2) syzygy:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   1   0   0
   3  |  4   0   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0

Exactly one bidegree (0, 2) syzygy:
  i\j |  0   1   2   3   4
   0  |  1   2   3   4   5
   1  |  2   4   6   8  10
   2  |  3   2   1   1   1
   3  |  4   0   0   0   0
   4  |  5   0   0   0   0
   5  |  6   0   0   0   0
Proposition 5.6. If U is basepoint free and IU has no linear syzygies, then IU has
(1) exactly 4 bidegree (1, 1) first syzygies
(2) exactly 2 bidegree (2, 0) first syzygies
Proof. Note that there cannot be any second syzygies in bidegrees (1, 1) and (2, 0)
because of the absence of linear first syzygies. Thus the numbers β¹3,2, β¹4,1 of bidegree (1, 1) and (2, 0) first syzygies are determined by the Hilbert function:
  β¹3,2 = h3,2 − HF((3, 2), R) + HF((3, 2), R(−2, −1)^4) = 0 − 12 + 16 = 4,
  β¹4,1 = h4,1 − HF((4, 1), R) + HF((4, 1), R(−2, −1)^4) = 0 − 10 + 12 = 2.
Next we obtain upper bounds on the bigraded Betti numbers of IU by using
bigraded initial ideals. The concept of initial ideal with respect to any fixed term
order is well known and so is the cancellation principle asserting that the resolution
of an ideal can be obtained from that of its initial ideal by cancellation of some
consecutive syzygies of the same bidegree. In general the problem of determining
which cancellations occur is very difficult. In the following we exploit the cancellation principle by using the bigraded setting to our advantage. For the initial ideal
computations we use the revlex order induced by s > t > u > v.
In [1], Aramova, Crona and de Negri introduce bigeneric initial ideals as follows
(we adapt the definition to our setting): let G = GL(2, 2) × GL(2, 2) with an
element g = (dij , ekl ) ∈ G acting on the variables in R by
g : s 7→ d11 s + d12 t, t 7→ d21 s + d22 t, u 7→ e11 u + e12 v, v 7→ e21 u + e22 v
We shall make use of the following results of [1].
Theorem 5.7. [[1] Theorem 1.4] Let I ⊂ R be a bigraded ideal. There is a Zariski
open set U in G and an ideal J such that for all g ∈ U we have in(g(I)) = J.
Definition 5.8. The ideal J in Theorem 5.7 is defined to be the bigeneric initial
ideal of I, denoted by bigin(I).
Definition 5.9. A monomial ideal I ⊂ R is bi-Borel fixed if g(I) = I for any upper
triangular matrix g ∈ G.
Definition 5.10. A monomial ideal I ⊂ R = k[s, t, u, v] is strongly bistable if for
every monomial m ∈ I the following conditions are satisfied:
(1) if m is divisible by t, then sm/t ∈ I.
(2) if m is divisible by v, then um/v ∈ I .
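For example, for the ideal G2 = ⟨s2u, s2v, stu, stv, t2u2, t2uv, t3u, t3v⟩ appearing in Proposition 5.11 below, each generator passes both tests: s·(stv)/t = s2v and u·(stv)/v = stu lie in G2, while s·(t3v)/t = st2v = t·(stv) and u·(t2uv)/v = t2u2 lie in G2 as well; checking the remaining generators in the same way shows that G2 is strongly bistable.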
As in the Z-graded case, the ideal bigin(I) has the same bigraded Hilbert function
as I. Propositions 1.5 and 1.6 of [1] show that bigin(I) is bi-Borel fixed, and in
characteristic zero, bigin(I) is strongly bistable.
Proposition 5.11. For each of the Hilbert functions in Proposition 5.4 there are
exactly two strongly bistable monomial ideals realizing it. These ideals and their
respective bigraded resolutions are:
(1) G1 = hs2 u, s2 v, stu, stv, t2 u2 , t2 uv, t3 u, t3 v, t2 v 3 i with minimal resolution
(5.1)
  0 ← G1 ← (−2,−1)^4 ⊕ (−2,−2)^2 ⊕ (−2,−3) ⊕ (−3,−1)^2
        ← (−2,−2)^2 ⊕ (−2,−3) ⊕ (−2,−4) ⊕ (−3,−1)^2 ⊕ (−3,−2)^5 ⊕ (−3,−3)^2 ⊕ (−4,−1)^2
        ← (−3,−2) ⊕ (−3,−3)^2 ⊕ (−3,−4)^2 ⊕ (−4,−2)^3 ⊕ (−4,−3)
        ← (−4,−3) ⊕ (−4,−4) ← 0
G01 = hs2 u, s2 v, stu, t2 u, stv 2 , st2 v, t3 v, t2 v 3 i with minimal resolution
(5.2)
  0 ← G′1 ← (−2,−1)^4 ⊕ (−2,−2) ⊕ (−2,−3) ⊕ (−3,−1)^2
         ← (−2,−2) ⊕ (−2,−3) ⊕ (−2,−4) ⊕ (−3,−1)^2 ⊕ (−3,−2)^4 ⊕ (−3,−3)^2 ⊕ (−4,−1)^2
         ← (−3,−3)^2 ⊕ (−3,−4)^2 ⊕ (−4,−2)^3 ⊕ (−4,−3)
         ← (−4,−3) ⊕ (−4,−4) ← 0
(2) G2 = hs2 u, s2 v, stu, stv, t2 u2 , t2 uv, t3 u, t3 vi with minimal resolution
(5.3)
  0 ← G2 ← (−2,−1)^4 ⊕ (−2,−2)^2 ⊕ (−3,−1)^2
        ← (−2,−2)^2 ⊕ (−2,−3) ⊕ (−3,−1)^2 ⊕ (−3,−2)^5 ⊕ (−4,−1)^2
        ← (−3,−2) ⊕ (−3,−3)^2 ⊕ (−4,−2)^3
        ← (−4,−3) ← 0
G02 = hs2 u, s2 v, stu, t2 u, stv 2 , st2 v, t3 vi with minimal resolution
(5.4)
  0 ← G′2 ← (−2,−1)^4 ⊕ (−2,−2) ⊕ (−3,−1)^2
         ← (−2,−2) ⊕ (−2,−3) ⊕ (−3,−1)^2 ⊕ (−3,−2)^4 ⊕ (−4,−1)^2
         ← (−3,−3)^2 ⊕ (−4,−2)^3
         ← (−4,−3) ← 0
Proof. There are only two strongly bistable sets of four monomials in R2,1 : {s2 u, s2 v,
stu, stv} and {s2 u, s2 v, stu, t2 u}. To complete {s2 u, s2 v, stu, stv} to an ideal realizing one of the Hilbert functions in Proposition 5.4 we need two additional monomials
in R2,2 , which must be t2 u2 , t2 uv in order to preserve bistability. Then we must
add the two remaining monomials t3 u, t3 v in R3,1 , which yields the second Hilbert
function. To realize the first Hilbert function we must also include the remaining
monomial t2 v 3 ∈ R2,3 . To complete {s2 u, s2 v, stu, t2 u} to an ideal realizing one of
the Hilbert functions in Proposition 5.4, we need one additional monomial in R2,2
which must be stv 2 in order to preserve bistability. Then we must add the two
remaining monomials st2 v, t3 v ∈ R3,1 . Then to realize the first Hilbert function,
we must add the remaining monomial t2 v 3 ∈ R2,3 .
Theorem 5.12. There are two numerical types for the minimal Betti numbers of
basepoint free ideals IU without linear syzygies.
(1) If there is a bidegree (0, 2) first syzygy then IU has numerical Type 2.
(2) If there is no bidegree (0, 2) first syzygy then IU has numerical Type 1.
Proof. Proposition 5.4 establishes that the two situations above are the only possibilities in the absence of linear syzygies and gives the Hilbert function corresponding
to each of the two cases. Proposition 5.11 identifies the possible bigeneric initial
ideals for each case. Since these bigeneric initial ideals are initial ideals obtained
following a change of coordinates, the cancellation principle applies. We now show
the resolutions (5.1), (5.2) must cancel to the Type 1 resolution and the resolutions
(5.3), (5.4) must cancel to the Type 2 resolution.
Since IU is assumed to have no linear syzygies, all linear syzygies appearing in
the resolution of its bigeneric initial ideal must cancel. Combined with Proposition
5.6, this establishes that in (5.3) or (5.4) the linear cancellations are the only ones
that occur. In (5.1), the cancellations of generators and first syzygies in bidegrees
(2, 2), (2, 3), (3, 1) are obvious. The second syzygy in bidegree (3, 2) depends on the
cancelled first syzygies, therefore it must also be cancelled. This is natural, since by
Proposition 5.6, there are exactly four bidegree (3, 2) first syzygies. An examination
of the maps in the resolution (5.1) shows that the bidegree (3, 3) second syzygies
depend on the cancelled first syzygies, so they too must cancel. Finally the bidegree
(4, 3) last syzygy depends on the previous cancelled second syzygies and so must
also cancel.
In (5.2), the cancellations of generators and first syzygies in bidegrees (2, 2),
(2, 3), (3, 1) are obvious. The second syzygies of bidegree (3, 3) depend only on the
cancelled first syzygies, so they too cancel. Finally the bidegree (4, 3) last syzygy
depends on the previous cancelled second syzygies and so it must also cancel.
6. Primary decomposition
Lemma 6.1. If U is basepoint free, all embedded primes of IU are of the form
(1) hs, t, l(u, v)i
(2) hu, v, l(s, t)i
(3) m = hs, t, u, vi
Proof. Since √IU = ⟨s, t⟩ ∩ ⟨u, v⟩, an embedded prime must contain either ⟨s, t⟩ or
hu, vi and modulo these ideals any remaining minimal generators can be considered
as irreducible polynomials in k[u, v] or k[s, t] (respectively). But the only prime
ideals here are hli i with li a linear form, or the irrelevant ideal.
Lemma 6.2. If U is basepoint free, then the primary components corresponding to
minimal associated primes of IU are
(1) Q1 = hu, vi
(2) Q2 = ⟨s, t⟩² or Q2 = ⟨p, q⟩, with p, q ∈ R2,0 and √⟨p, q⟩ = ⟨s, t⟩.
Proof. Let Q1 , Q2 be the primary components associated to hu, vi and hs, ti respectively. Since IU ⊂ Q1 ⊂ hu, vim and IU is generated in bidegree (2, 1), Q1 must
contain at least one element of bidegree (0, 1). If Q1 contains exactly one element
p(u, v) of bidegree (0, 1), then V is contained in the fiber of Σ2,1 over the point
V (p(u, v)), which contradicts the basepoint free assumption. Therefore Q1 must
contain two independent linear forms in u, v and hence Q1 = hu, vi.
Since IU ⊂ Q2 and IU contains elements of bidegree (2, 1), Q2 must contain at
least one element of bidegree (2, 0). If Q2 contains exactly one element q(s, t) of
bidegree (2, 0), then V is contained in the fiber of Σ2,1 over the point V (q(s, t)),
which contradicts the basepoint free assumption. If Q2 contains exactly two elements of bidegree (2, 0) which share a common linear factor l(s, t), then IU is
contained in the ideal hl(s, t)i, which contradicts the basepoint free assumption
as well. Since the bidegree (2, 0) part of Q2 is contained in the linear span of
s2 , t2 , st, it follows that the only possibilities consistent with the conditions above
are Q2 = hp, qi with √hp, qi = hs, ti or Q2 = hs², t², sti.
Proposition 6.3. For each type of minimal free resolution of IU with U basepoint
free, the embedded primes of IU are as in Table 1.
Proof. First observe that m = hs, t, u, vi is an embedded prime for each of Type 1
to Type 4. This follows since the respective free resolutions have length four, so
Ext⁴_R(R/IU , R) ≠ 0. By local duality, this is true iff H⁰_m(R/IU ) ≠ 0 iff m ∈ Ass(IU ). Since the resolutions
for Type 5 and Type 6 have projective dimension less than four, this also shows
that in Type 5 and Type 6, m 6∈ Ass(IU ). Corollary 3.5 and Proposition 3.6 show
the embedded primes for Type 5 and Type 6 are as in Table 1.
Thus, by Lemma 6.1, all that remains is to study primes of the form hs, t, L(u, v)i
and hu, v, L(s, t)i for Type 1 through 4. For this, suppose
IU = I1 ∩ I2 ∩ I3 , where
(1) I1 is the intersection of primary components corresponding to the two minimal associated primes identified in Lemma 6.2.
(2) I2 is the intersection of embedded primary components not primary to m.
(3) I3 is primary to m.
By Lemma 6.2, if I1 = hu, vi ∩ hp, qi with √hp, qi = hs, ti, then I1 is basepoint free and
consists of four elements of bidegree (2, 1), thus IU = I1 and has Type 6 primary
decomposition. So we may assume I1 = hu, vi ∩ hs, ti2 . Now we switch gears and
consider all ideals in the Z–grading where the variables have degree one. In the
Z–grading
HP (R/I1 , t) = 4t + 2.
Since the Hilbert polynomials of R/(I1 ∩I2 ) and R/IU are identical, we can compute
the Hilbert polynomials of R/(I1 ∩ I2 ) for Type 1 through 4 using Theorems 5.12,
4.13 and 4.14. For example, in Type 1, the bigraded minimal free resolution is
0 ← IU ← (−2, −1)^4 ← (−3, −2)^4 ⊕ (−2, −4) ⊕ (−4, −1)^2 ← (−3, −4)^2 ⊕ (−4, −2)^3 ← (−4, −4) ← 0.
Therefore, the Z–graded minimal free resolution is
0 ← IU ← (−3)^4 ← (−5)^6 ⊕ (−6) ← (−7)^2 ⊕ (−6)^3 ← (−8) ← 0.
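To make the step from the resolution to the Hilbert polynomial explicit, here is a small SymPy check (ours, not part of the original argument): over R = k[s, t, u, v] the Hilbert polynomial of R(−a) is C(t − a + 3, 3), and the alternating sum over the Z–graded resolution above recovers HP(R/IU , t) = 4t + 2 in Type 1.

    import sympy as sp

    t = sp.symbols('t')
    # Hilbert polynomial of the twisted free module R(-a), with R = k[s,t,u,v]
    hp = lambda a: (t - a + 3) * (t - a + 2) * (t - a + 1) / 6

    # twists and ranks read off the Z-graded resolution
    # 0 <- I_U <- (-3)^4 <- (-5)^6 + (-6) <- (-7)^2 + (-6)^3 <- (-8) <- 0
    HP_I = 4*hp(3) - (6*hp(5) + hp(6)) + (2*hp(7) + 3*hp(6)) - hp(8)
    print(sp.expand(hp(0) - HP_I))   # HP(R/I_U, t) = 4*t + 2
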
Carrying this out for the other types shows that the Z–graded Hilbert polynomial
of R/(I1 ∩ I2 ) is
(1) In Type 1 and Type 3:
HP (R/(I1 ∩ I2 ), t) = 4t + 2.
(2) In Type 2 and Type 4:
HP (R/(I1 ∩ I2 ), t) = 4t + 3.
In particular, for Type 1 and Type 3,
HP (R/I1 , t) = HP (R/(I1 ∩ I2 ), t),
and in Type 2 and Type 4, the Hilbert polynomials differ by one:
HP (I1 /(I1 ∩ I2 ), t) = 1.
Now consider the short exact sequence
0 −→ I1 ∩ I2 −→ I1 −→ I1 /(I1 ∩ I2 ) −→ 0.
Since I1 ∩ I2 ⊆ I1 , in Type 1 and Type 3 where the Hilbert Polynomials are
equal, there can be no embedded primes save m. In Type 2 and Type 4, since
HP (I1 /(I1 ∩ I2 ), t) = 1, I1 /(I1 ∩ I2 ) is supported at a point of P3 which corresponds
to a codimension three prime ideal of the form hl1 , l2 , l3 i. Switching back to the
fine grading, by Lemma 6.1, this prime must be either hs, t, l(u, v)i or hu, v, l(s, t)i.
Considering the multidegrees in which the Hilbert function of I1 /(I1 ∩I2 ) is nonzero
shows that the embedded prime is of type hs, t, l(u, v)i.
7. The Approximation complex and Implicit equation of XU
The method of using moving lines and moving quadrics to obtain the implicit
equation of a curve or surface was developed by Sederberg and collaborators in
[27], [28], [29]. In [10], Cox gives a nice overview of this method and makes explicit
the connection to syzygies. In the case of tensor product surfaces these methods
were first applied by Cox-Goldman-Zhang in [12]. The approximation complex
was introduced by Herzog-Simis-Vasconcelos in [23],[24]. From a mathematical
perspective, the relation between the implicit equation and syzygies comes from
work of Busé-Jouanolou [5] and Busé-Chardin [6] on approximation complexes and
the Rees algebra; their work was extended to the multigraded setting in [2], [3]. The
next theorem follows from work of Botbol-Dickenstein-Dohm [3] on toric surface
parameterizations, and also from a more general result of Botbol [2]. The novelty
of our approach is that by obtaining an explicit description of the syzygies, we
obtain both the implicit equation for the surface and a description of the singular
locus. Theorem 7.3 gives a particularly interesting connection between syzygies of
IU and singularities of XU .
Theorem 7.1. If U is basepoint free, then the implicit equation for XU is determinantal, obtained from the 4 × 4 minor of the first map of the approximation complex
Z in bidegree (1, 1), except for Type 6, where φU is not birational.
7.1. Background on approximation complexes. We give a brief overview of
approximation complexes, for an extended survey see [7]. For
I = hf1 , . . . , fn i ⊆ R = k[x1 , . . . xm ],
let Ki ⊆ Λi (Rn ) be the kernel of the ith Koszul differential on {f1 , . . . , fn }, and
S = R[y1 , . . . , yn ]. Then the approximation complex Z has ith term
Zi = S ⊗R Ki .
The differential is the Koszul differential on {y1 , . . . , yn }. It turns out that H0 (Z)
is SI and the higher homology depends (up to isomorphism) only on I. For µ a
bidegree in R, define
Zµ : · · · −→ k[y1 , . . . , yn ] ⊗k (Ki )µ −di→ k[y1 , . . . , yn ] ⊗k (Ki−1 )µ −di−1→ · · ·
If the bidegree µ and base locus of I satisfy certain conditions, then the determinant
of Z µ is a power of the implicit equation of the image. This was first proved in [5].
In Corollary 14 of [3], Botbol-Dickenstein-Dohm give a specific bound for µ in the
case of a toric surface and map with zero-dimensional base locus and show that in
this case the gcd of the maximal minors of dµ1 is the determinant of the complex.
For four sections of bidegree (2, 1), the bound in [2] shows that µ = (1, 1). To make
things concrete, we work this out for Example 1.1.
Example 7.2. Our running example is U = Span{s2 u, s2 v, t2 u, t2 v + stv}. Since
K1 is the module of syzygies on IU , which is generated by the columns of
[ −v   −t²      0          0        −tv ]
[  u    0    −st − t²      0          0 ]
[  0    s²      0       −sv − tv    −sv ]
[  0    0       s²         tu        su ]
The first column encodes the relation ux1 − vx0 = 0, the next four columns the relations

s²x2 − t²x0 = 0
s²x3 − (st + t²)x1 = 0
tux3 − (sv + tv)x2 = 0
sux3 − svx2 − tvx0 = 0
If we were in the singly graded case, we would need to use µ = 2, and a basis for Z1² consists of {s, t, u, v} · (ux1 − vx0 ) together with the remaining four relations. With respect to the ordered basis {s², st, t², su, sv, tu, tv, u², uv, v²} for R2 and writing · for 0, the matrix for d1² : Z1² −→ Z0² is

    s²  |  ·     ·     ·     ·    x2    x3    ·     ·
    st  |  ·     ·     ·     ·     ·   −x1    ·     ·
    t²  |  ·     ·     ·     ·   −x0   −x1    ·     ·
    su  |  x1    ·     ·     ·     ·     ·    ·    x3
    sv  | −x0    ·     ·     ·     ·     ·  −x2   −x2
    tu  |  ·    x1     ·     ·     ·     ·   x3     ·
    tv  |  ·   −x0     ·     ·     ·     ·  −x2   −x0
    u²  |  ·     ·    x1     ·     ·     ·    ·     ·
    uv  |  ·     ·   −x0    x1     ·     ·    ·     ·
    v²  |  ·     ·     ·   −x0     ·     ·    ·     ·
However, this matrix represents all the first syzygies of total degree two. Restricting to the submatrix of bidegree (1, 1) syzygies corresponds to choosing rows
indexed by {su, sv, tu, tv}, yielding
    su  |  x1    ·     ·    x3
    sv  | −x0    ·   −x2   −x2
    tu  |  ·    x1    x3     ·
    tv  |  ·   −x0   −x2   −x0
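As an independent check on the running example (a verification we add here; it is not part of the paper), one can confirm with SymPy that the determinant of the 4 × 4 matrix above vanishes identically after substituting the parametrization x0 = s²u, x1 = s²v, x2 = t²u, x3 = t²v + stv, so it is an implicit equation for XU .

    import sympy as sp

    x0, x1, x2, x3, s, t, u, v = sp.symbols('x0 x1 x2 x3 s t u v')

    # bidegree (1,1) submatrix of Example 7.2 (rows indexed by su, sv, tu, tv)
    M = sp.Matrix([[ x1,   0,   0,  x3],
                   [-x0,   0, -x2, -x2],
                   [  0,  x1,  x3,   0],
                   [  0, -x0, -x2, -x0]])

    F = sp.expand(M.det())
    print(F)   # a degree 4 polynomial in x0,...,x3
    # pulling back along phi_U gives identically zero, so F vanishes on X_U
    print(sp.expand(F.subs({x0: s**2*u, x1: s**2*v, x2: t**2*u, x3: t**2*v + s*t*v})))
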
We now study the observation made in the introduction, that linear syzygies
manifest in a linear singular locus. Example 7.2 again provides the key intuition:
a linear first syzygy gives rise to two columns of d1^(1,1).
Theorem 7.3. If U is basepoint free and IU has a unique linear syzygy, then the
codimension one singular locus of XU is a union of lines.
Proof. Without loss of generality we assume the linear syzygy involves the first two
generators p0 , p1 of IU , so that in the two remaining columns corresponding to the
linear syzygy the only nonzero entries are x0 and x1 , which appear exactly as in
Example 7.2. Thus, in bidegree (1, 1), the matrix for d1^(1,1) has the form
[  x1    ·   ∗  ∗ ]
[ −x0    ·   ∗  ∗ ]
[  ·    x1   ∗  ∗ ]
[  ·   −x0   ∗  ∗ ]
Computing the determinant using two by two minors in the two left most columns
shows the implicit equation of F is of the form
x1² · f + x0 x1 · g + x0² · h,
which is singular along the line V(x0 , x1 ). To show that the entire singular locus
is a union of lines when IU has resolution Type ∈ {3, 4, 5}, we must analyze the
structure of d1^(1,1). For Type 3 and 4, Theorems 4.14 and 4.13 give the first syzygies,
and show that the implicit equation for XU is given by the determinant of
[ −x1    ·            x2                      x3                    ]
[  ·    −x1     a1 x2 − b1 x0           a1 x3 − c1 x0               ]
[  x0    ·      a2 x2 − b0 x1           a2 x3 − c0 x1               ]
[  ·     x0   a3 x2 − b2 x0 − b3 x1   a3 x3 − c2 x0 − c3 x1         ].
We showed above that V(x0 , x1 ) ⊆ Sing(XU ). Since XU \ V(x0 , x1 ) ⊆ Ux0 ∪ Ux1 ,
it suffices to check that XU ∩ Ux0 and XU ∩ Ux1 are smooth in codimension one.
XU ∩ Ux0 is defined by
−c1 y2 + b1 y3 = (−b3 c0 + b0 c3 )y1^4 + (a3 c0 − a2 c3 )y1^3 y2
               + (−a3 b0 + a2 b3 )y1^3 y3 + (−b2 c0 + b0 c2 )y1^3
               + (a1 c0 − a2 c2 − c3 )y1^2 y2 + (−a1 b0 + a2 b2 + b3 )y1^2 y3
               + (−b1 c0 + b0 c1 )y1^2 + (−a2 c1 − c2 )y1 y2 + (a2 b1 + b2 )y1 y3 .
By basepoint freeness, b1 or c1 is nonzero, as is (−b3 c0 + b0 c3 ), so in fact XU ∩ Ux0
is smooth. A similar calculation shows that XU ∩ Ux1 is also smooth, so for Type 3
and Type 4, Sing(XU ) is a line. In Type 5 the computation is more cumbersome:
with notation as in Proposition 3.2, the relevant 4 × 4 submatrix is
x1
·
a1 x3 − a3 x2
β1 α2 x1 + α1 α2 x0
−x0
·
b1 x3 − b3 x2
β1 β2 x1 + α1 β2 x0
.
·
x1 β1 α2 x1 + α1 α2 x0
a2 x3 − a4 x2
·
−x0 β1 β2 x1 + α1 β2 x0
b2 x 3 − b4 x 2
A tedious but straightforward calculation shows that Sing(XU ) consists of three
lines in Type 5a and a pair of lines in Type 5b.
Theorem 7.4. If U is basepoint free, then the codimension one singular locus of
XU is as described in Table 1.
Proof. For a resolution of Type 3,4,5, the result follows from Theorem 7.3, and for
Type 6 from Proposition 3.6. For the generic case (Type 1), the result is obtained
by Elkadi-Galligo-Le [18], so it remains to analyze Type 2. By Lemma 4.8, the
(0, 2) first syzygy implies that we can write p0 , p1 , p2 as αu, βv, αv + βu for some
α, β ∈ R2,0 . Factor α as product of two linear forms in s and t, so after a linear
change of variables we may assume α = s2 or st.
If α = s², write β = (ms + nt)(m′s + n′t), and note that n and n′ cannot both
vanish, because then β is a scalar multiple of α, violating linear independence of
the pi . Thus, after a linear change of variables, we may assume β is of the form
t(ks + lt), so the pi ’s are of the form
{s2 u, (ks + lt)tv, s2 v + (ks + lt)tu, p3 }
If l = 0, then IU = hs²u, kstv, s²v + kstu, p3 i, which is not basepoint free: if s = 0,
then the first three polynomials vanish and p3 becomes t²(au + bv), which vanishes for
some (u : v) ∈ P1 . So l ≠ 0, and after a linear change of variables t ↦ tl and
u 7→ lu, we may assume l = 1 and hence
IU = hs2 u, t2 v + kstv, s2 v + t2 u + kstu, p3 i.
Now consider two cases, k = 0 and k ≠ 0. In the latter, we may assume k = 1:
first replace ks by s and then replace k 2 u by u. In either case, reducing p3 by the
other generators of IU shows we can assume
p3 = astu + bs2 v + cstv = s(atu + bsv + ctv).
By Theorem 5.12 there is one first syzygy of bidegree (0, 2), two first syzygies of
bidegree (2, 0), and four first syzygies of bidegree (1, 1). A direct calculation shows
that two of the bidegree (1, 1) syzygies are
[ atu + bsv + ctv         0               ]
[ 0                       asu − btu + csv ]
[ 0                       btv             ]
[ −su                     −tv             ].
The remaining syzygies depend on k and a. For example, if k = a = 0, then the
remaining bidegree (1, 1) syzygies are
[ 0                 −b³tv                ]
[ b²su + c²sv        c³sv                ]
[ −b²sv             −b²csv               ]
[ bsv − ctv          b²tu + bcsv − c²tv  ],
and the bidegree (2, 0) syzygies are
[ 0              b³t²                ]
[ bs² + cst     −c³st                ]
[ 0             −b³s²                ]
[ −t²            b²s² − bcst + c²t²  ]
Thus, if k = a = 0, using the basis {su, tu, sv, tv} for R1,1 , the matrix whose
determinant gives the implicit equation for XU is
[ −x3      0           b²x1                    0                   ]
[  0      −bx1          0                     b²x3                 ]
[  bx0     cx1     c²x1 − b²x2 + bx3      c³x1 − b²cx2 + bcx3      ]
[  cx0   bx2 − x3      −cx3               −b³x0 − c²x3             ]
Since k = a = 0, if both b and c vanish there will be a linear syzygy on IU ,
contradicting our assumption. So suppose b ≠ 0 and scale the generator p3 so
b = 1:
[ −x3      0           x1                 0               ]
[  0      −x1           0                x3               ]
[  x0      cx1     c²x1 − x2 + x3     c³x1 − cx2 + cx3    ]
[  cx0   x2 − x3      −cx3            −x0 − c²x3          ]
Expanding along the top two rows by 2 × 2 minors as in the proof of Theorem 7.3
shows that XU is singular along V(x1 , x3 ), and evaluating the Jacobian matrix with
x1 = 0 shows this is the only component of the codimension one singular locus with
x1 = 0. Next we consider the affine patch Ux1 . On this patch, the Jacobian ideal is
h(4x3 + c²)(x2 − 2x3 − c²), (x2 − 2x3 − c²)(x2 + 2x3 ), 2x0 − 2x2 x3 − c²x2 + 2x3² + 4c²x3 + c⁴i
which has codimension one component given by
V(x2 − 2x3 − c², x0 − x3²),
a plane conic. Similar calculations work for the other cases.
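The "tedious but straightforward" computations in this proof can be checked mechanically. The SymPy sketch below (ours, covering only the k = a = 0, b = 1 case displayed above) verifies that the determinant of that matrix lies in hx1 , x3 i², i.e. that it and all of its first partial derivatives vanish along V(x1 , x3 ).

    import sympy as sp

    x0, x1, x2, x3, c = sp.symbols('x0 x1 x2 x3 c')

    # the k = a = 0, b = 1 matrix from the proof of Theorem 7.4
    M = sp.Matrix([[ -x3,       0,                 x1,                     0],
                   [   0,     -x1,                  0,                    x3],
                   [  x0,    c*x1,  c**2*x1 - x2 + x3, c**3*x1 - c*x2 + c*x3],
                   [c*x0, x2 - x3,              -c*x3,        -x0 - c**2*x3]])

    F = sp.expand(M.det())
    # F and all of its first partials vanish on x1 = x3 = 0,
    # so the surface is singular along the line V(x1, x3)
    for g in [F, F.diff(x0), F.diff(x1), F.diff(x2), F.diff(x3)]:
        assert sp.expand(g.subs({x1: 0, x3: 0})) == 0
    print("det lies in <x1, x3>^2: singular along V(x1, x3)")
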
8. Connection to the dual scroll
We close by connecting our work to the results of Galligo-Lê in [19]. First, recall
that the ideal of Σ2,1 is defined by the two by two minors of
[ x0  x1  x2 ]
[ x3  x4  x5 ].

Combining this with the relations x1² − x0 x2 and x4² − x3 x5 arising from ν2 shows
that the image of the map τ defined in Equation 2.1 is the vanishing locus of the
two by two minors of

(8.1)        [ x0  x1  x3  x4 ]
             [ x1  x2  x4  x5 ].
Let A denote the 4 × 6 matrix of coefficients of the polynomials defining U in the
monomial basis above. We regard P(U ) ↪ P(V ) via a ↦ a · A. Note that IU and
the implicit equation of XU are independent of the choice of generators pi ([7]).
The dual projective space of P(V ) is P(V ∗ ) where V ∗ = Homk (V, k) and the
projective subspace of P(V ∗ ) orthogonal to U is defined to be P((V /U )∗ ) = P(U ⊥ ),
where U ⊥ = {f ∈ V ∗ |f (u) = 0, ∀u ∈ U } is algebraically described as the kernel of
A. The elements of U ⊥ define the space of linear forms in xi which vanish on P(U ).
In Example 1.1 U = Span{s2 u, s2 v, t2 u, t2 v + stv}, so A is the matrix
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0 ,
0 0 0 0 1 1
and P(U ) = V(x1 , x4 − x5 ) ⊆ P5 .
The conormal variety N (X) is the incidence variety defined as the closure of the
set of pairs {(x, π) ∈ P(V ) × P(V ∗ )} such that x is a smooth point of X and π is
an element of the linear subspace orthogonal to the tangent space TX,x in the sense
described above. N (X) is irreducible and for a varieties X embedded in P(V ) ' P5
the dimension of N (X) is 4. The image of the projection of N (X) onto the factor
P(V ∗ ) is by definition the dual variety of X denoted X ∗ .
Denote by X̃U the variety XU re-embedded as a hypersurface of P(U ). Proposition 1.4 (ii) of [8] applied to our situation reveals

Proposition 8.1. If X̃U ⊂ P3 is a hypersurface which is swept by a one dimensional family of lines, then X̃U∗ is either a 2-dimensional scroll or else a curve.

The cited reference includes precise conditions that allow the two possibilities to be distinguished. Proposition 1.2 of [8] reveals the relation between XU∗ ⊂ P(V ∗ ) and X̃U∗ ⊂ P(U ∗ ), namely

Proposition 8.2. In the above setup, XU∗ ⊂ P(V ∗ ) is a cone over X̃U∗ ⊂ P(U ∗ ) with vertex U ⊥ = P((V /U )∗ ).
It will be useful for this reason to consider the map π : P(V ∗ ) → P(U ∗ ) defined by π(p) = (ℓ1 (p) : . . . : ℓ4 (p)), where ℓ1 , . . . , ℓ4 are the defining equations of U ⊥ . The map π is projection from P(U ⊥ ) and π(XU∗ ) = X̃U∗ . Using a direct approach Galligo-Lê obtain in [19] that π −1 (X̃U∗ ) is a (2,2)-scroll in P(V ∗ ) which they denote by F∗2,2 . For brevity we write F for π −1 (X̃U∗ ).
Galligo-Lê classify possibilities for the implicit equation of XU by considering the
pullback φ∗U of P(U ∗ ) ∩ F to (P1 × P1 )∗ . The two linear forms Li defining P(U ∗ )
pull back to give a pair of bidegree (2, 1) forms on (P1 × P1 )∗ .
Proposition 8.3 ([19] §6.5). If φ∗U (L1 ) ∩ φ∗U (L2 ) is infinite, then φ∗U (L1 ) and
φ∗U (L2 ) share a common factor g, for which the possibilities are:
(1) deg(g) = (0, 1).
(2) deg(g) = (1, 1) (g possibly not reduced).
(3) deg(g) = (1, 0) (residual system may have double or distinct roots).
(4) deg(g) = (2, 0) (g can have a double root).
Example 8.4. In the following we use capital letters to denote elements of the various dual spaces. Note that the elements of the basis of (P1 × P1 )∗ that pair dually to
{s²u, stu, t²u, s²v, stv, t²v} are respectively {(1/2)S²U, STU, (1/2)T²U, (1/2)S²V, STV, (1/2)T²V }.
Recall we have the following dual maps of linear spaces:
P1 × P1 −φU→ P(U ) −A→ P(V )
(P1 × P1 )∗ ←φ∗U− P(U ∗ ) ←π− P(V ∗ )

In Example 1.1, P(U ∗ ) = V(X1 , X4 − X5 ) ⊆ P5∗ , φ∗U (X1 ) = STU and φ∗U (X4 − X5 ) = STV − (1/2)T²V , so there is a shared common factor T of degree (1, 0) and the residual system {SU, (1/2)TV − SV } has distinct roots (0 : 1) × (1 : 0), (1 : 2) × (0 : 1).
Taking points (1 : 0) × (1 : 0) and (1 : 0) × (0 : 1) on the line T = 0 and the points
above shows that the forms below are in (φ∗U P(U ∗ ))⊥ = φU^−1 (P(U )):

φU ((0 : 1) × (1 : 0)) = t²u
φU ((1 : 2) × (0 : 1)) = (s + 2t)²v
φU ((1 : 0) × (1 : 0)) = s²u
φU ((1 : 0) × (0 : 1)) = s²v

and in terms of our chosen basis the corresponding matrix A is

[ 0  0  1  0  0  0 ]
[ 0  0  0  1  4  4 ]
[ 1  0  0  0  0  0 ]
[ 0  0  0  1  0  0 ],
whose rows span the expected linear space U with basis hx0 , x2 , x3 , x4 + x5 i.
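As a quick consistency check (ours, not part of the original), one can verify symbolically that the row space of this matrix A coincides with the span of the coefficient vectors of the original generators of U from Example 1.1.

    import sympy as sp

    # coefficient vectors in the monomial basis {s^2u, stu, t^2u, s^2v, stv, t^2v}
    A = sp.Matrix([[0, 0, 1, 0, 0, 0],    # t^2 u
                   [0, 0, 0, 1, 4, 4],    # (s + 2t)^2 v
                   [1, 0, 0, 0, 0, 0],    # s^2 u
                   [0, 0, 0, 1, 0, 0]])   # s^2 v
    U0 = sp.Matrix([[1, 0, 0, 0, 0, 0],   # s^2 u
                    [0, 0, 0, 1, 0, 0],   # s^2 v
                    [0, 0, 1, 0, 0, 0],   # t^2 u
                    [0, 0, 0, 0, 1, 1]])  # t^2 v + stv
    # equal row spaces: stacking the two matrices does not raise the rank
    assert A.rank() == U0.rank() == sp.Matrix.vstack(A, U0).rank() == 4
    print("rows of A span U")
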
8.1. Connecting φ∗U (L1 ) ∩ φ∗U (L2 ) to syzygies. There is a pleasant relation between Proposition 8.3 and bigraded commutative algebra. This is hinted at by the
result of [11] relating the minimal free resolution of IW to P(W ) ∩ Σ2,1 .
Theorem 8.5. If φ∗U (L1 ) ∩ φ∗U (L2 ) is infinite, then:
(1) If deg(g) = (0, 1) then U is not basepoint free.
(2) If deg(g) = (1, 1) then
(a) if g is reducible then U is not basepoint free.
(b) if g is irreducible then IU is of Type 3.
(3) If deg(g) = (1, 0) then IU is of Type 5. Furthermore
(a) The residual scheme is reduced iff IU is of Type 5a.
(b) The residual scheme has a double root iff IU is of Type 5b.
(4) If deg(g) = (2, 0) then IU is of Type 6.
Proof. We use the notational conventions of 8.4. If deg(g) = (0, 1) then we may
assume after a change of coordinates that φ∗U (L1 ) = P U and φ∗U (L2 ) = QU ,
with P, Q of bidegree (2, 0) and independent. In particular, denoting by p, q ∈ R
the dual elements of P, Q a basis for R2,1 is {pu, qu, ru, pv, qv, rv}. Hence U =
Span{ru, pv, qv, rv}, so if r = l1 (s, t)l2 (s, t) then hv, l1 (s, t)i is an associated prime
of IU and U is not basepoint free.
Next, suppose deg(g) = (1, 1). If g factors, then after a change of variable we
may assume g = SU and U ⊥ = Span{S 2 U, ST U }. This implies that hv, ti is an
associated prime of IU , so U is not basepoint free. If g is irreducible, then
g = a0 SU + a1 SV + a2 T U + a3 T V, with a0 a3 − a1 a2 ≠ 0.
Since U ⊥ = Span{gS, gT }, U is the kernel of

[ a0  a2  0   a1  a3  0  ]
[ 0   a0  a2  0   a1  a3 ],

so U contains the columns of

[  a3    0  ]
[  a1   a3  ]
[  0    a1  ]
[ −a2    0  ]
[ −a0  −a2  ]
[  0   −a0  ].

In particular,

a3 s²u + a1 stu − a2 s²v − a0 stv = s(a3 su + a1 tu − a2 sv − a0 tv) = sp
a3 stu + a1 t²u − a2 stv − a0 t²v = t(a3 su + a1 tu − a2 sv − a0 tv) = tp
are both in IU , yielding a linear syzygy of bidegree (1, 0). Since p is irreducible,
the result follows from Proposition 4.2. The proofs for the remaining two cases are
similar and omitted.
There is an analog of Proposition 8.3 when the intersection of the pullbacks is
finite and a corresponding connection to the minimal free resolutions, which we
leave for the interested reader.
Concluding remarks Our work raises a number of questions:
(1) How much generalizes to other line bundles OP1 ×P1 (a, b) on P1 × P1 ? We
are at work extending the results of §7 to a more general setting.
(2) What can be said about the minimal free resolution if IU has basepoints?
(3) Is there a direct connection between embedded primes and the implicit
equation?
(4) If U ⊆ H 0 (OX (D)) is four dimensional and has base locus of dimension at
most zero and X is a toric surface, then the results of [3] give a bound on
the degree µ needed to determine the implicit equation. What can be said
about the syzygies in this case?
(5) More generally, what can be said about the multigraded free resolution of
IU , when IU is graded by Pic(X)?
Acknowledgments Evidence for this work was provided by many computations
done using Macaulay2, by Dan Grayson and Mike Stillman. Macaulay2 is freely
available at
http://www.math.uiuc.edu/Macaulay2/
and scripts to perform the computations are available at
http://www.math.uiuc.edu/~schenck/O21script
We thank Nicolás Botbol, Marc Chardin and Claudia Polini for useful conversations,
and an anonymous referee for a careful reading of the paper.
References
1. A. Aramova, K. Crona, E. De Negri, Bigeneric initial ideals, diagonal subalgebras and
bigraded Hilbert functions, J. Pure Appl. Algebra 150 (2000), 312–335.
2. N. Botbol, The implicit equation of a multigraded hypersurface, J. Algebra 348 (2011),
381–401.
3. N. Botbol, A. Dickenstein, M. Dohm, Matrix representations for toric parametrizations,
Comput. Aided Geom. Design 26 (2009), 757–771.
4. D. Buchsbaum, D. Eisenbud, What makes a complex exact?, J. Algebra 25 (1973), 259–268.
5. L. Busé, J.-P. Jouanolou, On the closed image of a rational map and the implicitization
problem, J. Algebra 265 (2003), 312–357.
6. L. Busé, M. Chardin, Implicitizing rational hypersurfaces using approximation complexes,
J. Symbolic Computation 40 (2005), 1150–1168.
7. M. Chardin, Implicitization using approximation complexes, in “Algebraic geometry and
geometric modeling”, Math. Vis., Springer, Berlin (2006), 23–35.
8. C. Ciliberto, F. Russo, A. Simis, Homaloidal hypersurfaces and hypersurfaces with vanishing
Hessian, Adv. Math. 218 (2008), 1759–1805.
9. D. Cox, The moving curve ideal and the Rees algebra, Theoret. Comput. Sci. 392 (2008),
23–36.
10. D. Cox, Curves, surfaces and syzygies, in “Topics in algebraic geometry and geometric modeling”, Contemp. Math. 334 (2003) 131–150.
11. D. Cox, A. Dickenstein, H. Schenck, A case study in bigraded commutative algebra, in
“Syzygies and Hilbert Functions”, edited by Irena Peeva, Lecture notes in Pure and Applied
Mathematics 254, (2007), 67–112.
12. D. Cox, R. Goldman, M. Zhang, On the validity of implicitization by moving quadrics for
rational surfaces with no basepoints, J. Symbolic Computation 29 (2000), 419–440.
13. W.L.F. Degen, The types of rational (2, 1)-Bézier surfaces. Comput. Aided Geom. Design 16
(1999), 639–648.
14. A. Dickenstein, I. Emiris, Multihomogeneous resultant formulae by means of complexes, J.
Symbolic Computation 34 (2003), 317–342.
15. D. Eisenbud, Commutative Algebra with a view towards Algebraic Geometry, Springer-Verlag,
Berlin-Heidelberg-New York, 1995.
16. D. Eisenbud, C. Huneke, W. Vasconcelos, Direct methods for primary decomposition,
Invent. Math. 110 (1992), 207–235.
17. W. Edge, The theory of ruled surfaces, Cambridge University Press, 1931.
18. M. Elkadi, A. Galligo and T. H. Lê, Parametrized surfaces in P3 of bidegree (1, 2), Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation, ACM,
New York, 2004, 141–148.
19. A. Galligo, T. H. Lê, General classification of (1, 2) parametric surfaces in P3 , in “Geometric
modeling and algebraic geometry”, Springer, Berlin, (2008) 93–113.
20. I. Gelfand, M. Kapranov, A. Zelevinsky, Discriminants, Resultants and Multidimensional
Determinants, Birkhauser, Boston, 1984.
21. J. Harris, Algebraic Geometry, A First Course, Springer-Verlag, Berlin-Heidelberg-New
York, 1992.
22. R. Hartshorne, Algebraic Geometry, Springer-Verlag, Berlin-Heidelberg-New York, 1977.
23. J. Herzog, A. Simis, W. Vasconcelos Approximation complexes of blowing-up rings, J.
Algebra 74 (1982), 466–493.
24. J. Herzog, A. Simis, W. Vasconcelos Approximation complexes of blowing-up rings II, J.
Algebra 82 (1983), 53–83.
25. J. Migliore, Introduction to Liason Theory and Deficiency Modules, Progress in Math. vol.
165, Birkhäuser, Boston Basel, Berlin, 1998.
26. G. Salmon, Traité de Géométrie analytique a trois dimensiones, Paris, Gauthier-Villars,
1882.
27. T. W. Sederberg, F. Chen, Implicitization using moving curves and surfaces, in Proceedings
of SIGGRAPH, 1995, 301–308.
28. T. W. Sederberg, R. N. Goldman and H. Du, Implicitizing rational curves by the method
of moving algebraic curves, J. Symb. Comput. 23 (1997), 153–175.
29. T. W. Sederberg, T. Saito, D. Qi and K. S. Klimaszewski, Curve implicitization using
moving lines, Comput. Aided Geom. Des. 11 (1994), 687–706.
30. S. Zube, Correspondence and (2, 1)-Bézier surfaces, Lithuanian Math. J. 43 (2003), 83–102.
31. S. Zube, Bidegree (2, 1) parametrizable surfaces in P3 , Lithuanian Math. J. 38 (1998), 291–
308.
Department of Mathematics, University of Illinois, Urbana, IL 61801
E-mail address: [email protected]
Department of Mathematics, University of Nebraska, Lincoln, NE 68588
E-mail address: [email protected]
Department of Mathematics, University of Illinois, Urbana, IL 61801
E-mail address: [email protected]
| 0 |
Logical Methods in Computer Science
Vol. 11(4:13)2015, pp. 1–23
www.lmcs-online.org
Submitted Sep. 16, 2014. Published Dec. 22, 2015.
TYPE RECONSTRUCTION FOR THE LINEAR π-CALCULUS
WITH COMPOSITE REGULAR TYPES ∗
LUCA PADOVANI
Dipartimento di Informatica, Università di Torino, Italy
e-mail address: [email protected]
Abstract. We extend the linear π-calculus with composite regular types in such a way
that data containing linear values can be shared among several processes, if there is no
overlapping access to such values. We describe a type reconstruction algorithm for the
extended type system and discuss some practical aspects of its implementation.
1. Introduction
The linear π-calculus [15] is a formal model of communicating processes that distinguishes
between unlimited and linear channels. Unlimited channels can be used without restrictions,
whereas linear channels can be used for one communication only. Despite this seemingly
severe restriction, there is evidence that a significant portion of communications in actual
systems take place on linear channels [15]. It has also been shown that structured communications can be encoded using linear channels and a continuation-passing style [13, 3]. The
interest in linear channels has solid motivations: linear channels are efficient to implement,
they enable important optimizations [9, 8, 15], and communications on linear channels enjoy
important properties such as interference freedom and partial confluence [18, 15]. It follows
that understanding whether a channel is used linearly or not has a primary impact in the
analysis of systems of communicating processes.
Type reconstruction is the problem of inferring the type of entities used in an unannotated (i.e., untyped) program. In the case of the linear π-calculus, the problem translates
into understanding whether a channel is linear or unlimited, and determining the type
of messages sent over the channel. This problem has been addressed and solved in [10].
The goal of our work is the definition of a type reconstruction algorithm for the linear
2012 ACM CCS: [Theory of computation]: Models of computation; Semantics and reasoning—
Program constructs; [Software and its engineering]: Software notations and tools—General programming
languages—Language features.
Key words and phrases: linear pi-calculus, composite regular types, shared access to data structures with
linear values, type reconstruction.
∗
A preliminary version of this paper [20] appears in the proceedings of the 17th International Conference
on Foundations of Software Science and Computation Structures (FoSSaCS’14).
This work has been supported by ICT COST Action IC1201 BETTY, MIUR project CINA, Ateneo/CSP
project SALT, and the bilateral project RS13MO12 DART.
LOGICAL METHODS IN COMPUTER SCIENCE · DOI:10.2168/LMCS-11(4:13)2015 · © L. Padovani · CC Creative Commons
π-calculus extended with pairs, disjoint sums, and possibly infinite types. These features,
albeit standard, gain relevance and combine in non-trivial ways with the features of the
linear π-calculus. We explain why this is the case in the rest of this section.
The term below
*succ?(x,y).y!(x + 1) | new a in (succ!(39,a) | a?(z).print!z)
(1.1)
models a program made of a persistent service (the *-prefixed process waiting for messages
on channel succ) that computes the successor of a number and a client (the new-scoped
process) that invokes the service and prints the result of the invocation. Each message sent
to the service is a pair made of the number x and a continuation channel y on which the
service sends the result of the computation back to the client. There are three channels
in this program, succ for invoking the service, print for printing numbers, and a private
channel a which is used by the client for receiving the result of the invocation. In the linear
π-calculus, types keep track of how each occurrence of a channel is being used. For example,
the above program is well typed in the environment
print : [int]0,1 , succ : [int × [int]0,1 ]ω,1
where the type of print indicates not only the type of messages sent over the channel (int
in this case), but also that print is never used for input operations (the 0 annotation) and
is used once for one output operation (the 1 annotation).
The type of succ indicates that messages sent over succ are pairs of type int×[int]0,1
– the service performs exactly one output operation on the channel y which is the second
component of the pair – and that succ is used for an unspecified number of input operations
(the ω annotation) and exactly one output operation (the 1 annotation). Interestingly, the
overall type of succ can be expressed as the combination of two slightly different types describing how each occurrence of succ is being used by the program: the leftmost occurrence
of succ is used according to the type [int × [int]0,1 ]ω,0 (arbitrary inputs, no outputs),
while the rightmost occurrence of succ is used according to the type [int × [int]0,1 ]0,1
(no inputs, one output). Following [15], we capture the overall use of a channel by means
of a combination operator + on types such that, for example,
[int × [int]0,1 ]ω,0 + [int × [int]0,1 ]0,1 = [int × [int]0,1 ]ω,1
Concerning the restricted channel a, its rightmost occurrence is used according to the type
[int]1,0 , since there a is used for one input of an integer number; the occurrence of a in
(39,a) is in a message sent on succ, and we have already argued that the service uses this
channel according to the type [int]0,1 ; the type of the leftmost, binding occurrence of a is
the combination of these two types, namely:
[int]0,1 + [int]1,0 = [int]1,1
The type of a indicates that the program performs exactly one input and exactly one output
on a, hence a is a linear channel. Since a is restricted in the program, even if the program
is extended with more processes, it is not possible to perform operations on a other than
the ones we have tracked in its type.
The key ingredient in the discussion above is the notion of type combination [15, 10, 24],
which allows us to gather the overall number of input/output operations performed on a
channel. We now discuss how type combination extends to composite and possibly infinite
types, which is the main novelty of the present work.
So far we have taken for granted the ability to perform pattern matching on the message
received by the service on succ and to assign distinct names, x and y, to the components of
the pair being analyzed. Pattern matching is usually compiled using more basic operations.
For example, in the case of pairs these operations are the fst and snd projections that
respectively extract the first and the second component of the pair. So, a low-level modeling
of the successor service that uses fst and snd could look like this:
*succ?(p).snd(p)!(fst(p) + 1)
(1.2)
This version of the service is operationally equivalent to the previous one, but from the
viewpoint of typing there is an interesting difference: in (1.1) the two components of the
pair are given distinct names x and y and each name is used once in the body of the service;
in (1.2) there is only one name p for the whole pair which is projected twice in the body of
the service. Given that each projection accesses only one of the two components of the pair
and ignores the other, we can argue that the occurrence of p in snd(p) is used according
to the type int × [int]0,1 (the 1 annotation reflects the fact that the second component
of p is a channel used for an output operation) whereas the occurrence of p in fst(p) is
used according to the type int × [int]0,0 (the second component of p is not used). The
key idea, then, is that we can extend the type combination operator + component-wise to
product types to express the overall type of p as the combination of these two types:
(int × [int]0,1 ) + (int × [int]0,0 ) = (int + int) × ([int]0,1 + [int]0,0 ) = int × [int]0,1
According to the result of such combination, the second component of p is effectively
used only once despite the multiple syntactic occurrences of p.
The extension of type combination to products carries over to disjoint sums and also
to infinite types as well. To illustrate, consider the type tlist satisfying the equality
tlist = Nil ⊕ Cons([int]1,0 × tlist )
which is the disjoint sum between Nil, the type of empty lists, and Cons([int]1,0 × tlist ),
the type of non-empty lists with head of type [int]1,0 and tail of type tlist (we will see
shortly that there is a unique type tlist satisfying the above equality relation). Now, tlist
can be expressed as the combination todd + teven , where todd and teven are the types that
satisfy the equalities
todd = Nil ⊕ Cons([int]1,0 × teven ) and
teven = Nil ⊕ Cons([int]0,0 × todd )
(1.3)
(again, there are unique todd and teven that satisfy these equalities, see Section 3).
In words, todd is the type of lists of channels in which each channel in an odd-indexed
position is used for one input, while teven is the type of lists of channel in which each
channel in an even-indexed position is used for one input. The reason why this particular
decomposition of tlist could be interesting is that it enables the sharing of a list containing
linear channels among two processes, if we know that one process uses the list according
to the type todd and the other process uses the same list according to the type teven . For
example, the process R defined below
P def= *odd?(l,acc,r).case l of
        Nil        ⇒ r!acc
        Cons(x,l′) ⇒ x?(y).even!(l′,(acc + y),r)

Q def= *even?(l,acc,r).case l of
        Nil        ⇒ r!acc
        Cons(x,l′) ⇒ odd!(l′,acc,r)

R def= P | Q | new a,b in (odd!(l,0,a) | even!(l,0,b) | a?(x).b?(y).r!(x + y))
uses each channel in a list l for receiving a number, sums all such numbers together, and
sends the result on another channel r. However, instead of scanning the list l sequentially
in a single thread, R spawns two parallel threads (defined by P and Q) that share the very
same list l: the first thread uses only the odd-indexed channels in l, whereas the second
thread uses only the even-indexed channels in l; the (partial) results obtained by these two
threads are collected by R on two locally created channels a and b; the overall result is
eventually sent on r. We are then able to deduce that R makes full use of the channels in
l, namely that l has type tlist , even though the list as a whole is simultaneously accessed
by two parallel threads. In general, we can see that the extension of type combination to
composite, potentially infinite types is an effective tool that fosters the parallelization of
programs and allows composite data structures containing linear values to be safely shared
by a pool of multiple processes, if there is enough information to conclude that each linear
value is accessed by exactly one of the processes in the pool.
Such detailed reasoning on the behavior of programs comes at the price of a more
sophisticated definition of type combination. This brings us back to the problem of type
reconstruction. The reconstruction algorithm described in this article is able to infer the
types todd and teven of the messages accepted by P and Q by looking at the structure of
these two processes and of understanding that the overall type of l in R is tlist , namely that
every channel in l is used exactly once.
Related work. Linear type systems with composite types have been discussed in [8, 9] for
the linear π-calculus and in [25] for a functional language. In these works, however, every
structure that contains linear values becomes linear itself (there are a few exceptions for
specific types [14] or relaxed notions of linearity [11]).
The original type reconstruction algorithm for the linear π-calculus is described in [10].
Our work extends [10] to composite and infinite types. Unlike [10], however, we do not deal
with structural subtyping, whose integration into our type reconstruction algorithm is left
for future work. The type reconstruction algorithm in [10] and the one we present share a
common structure in that they both comprise constraint generation and constraint resolution phases. The main difference concerns the fact that we have to deal with constraints
expressing the combination of yet-to-be-determined types, whereas in [10] non-trivial type
combinations only apply to channel types. This allows [10] to use an efficient constraint
resolution algorithm based on unification. In our setting, the presence of infinite types hinders the use of unification, and in some cases the resolution algorithm may conservatively
approximate the outcome in order to ensure proper termination.
P, Q ::= idle                                  Process (idle process)
       | e?(x).P                               (input)
       | e!f                                   (output)
       | P | Q                                 (parallel composition)
       | *P                                    (process replication)
       | new a in P                            (channel restriction)
       | case e {i(xi ) ⇒ Pi }i=inl,inr        (pattern matching)

e, f ::= n                                     Expression (integer constant)
       | u                                     (name)
       | (e,f)                                 (pair)
       | fst(e)                                (first projection)
       | snd(e)                                (second projection)
       | inl(e)                                (left injection)
       | inr(e)                                (right injection)
Table 1: Syntax of processes and expressions.
Session types [6, 7] describe linearized channels, namely channels that can be used for
multiple communications, but only in a sequential way. There is a tight connection between
linear and linearized channels: as shown in [13, 4, 3, 2], linearized channels can be encoded
in the linear π-calculus. A consequence of this encoding is that the type reconstruction
algorithm we present in this article can be used for inferring possibly infinite session types
(we will see an example of this feature in Section 7). The task of reconstructing session
types directly has been explored in [17], but for finite types only.
Structure of the paper. We present the calculus in Section 2 and the type system in
Section 3. The type reconstruction algorithm consists of a constraint generation phase (Section 4) and a constraint resolution phase (Section 5). We discuss some important issues
related to the implementation of the algorithm in Section 6 and a few more elaborate examples in Section 7. Section 8 concludes and hints at some ongoing and future work. Proofs
of the results in Sections 3 and 4 are in Appendixes A and B, respectively. Appendix C
illustrates a few typing derivations of examples discussed in Section 5. A proof-of-concept
implementation of the algorithm is available on the author’s home page.
2. The π-calculus with data types
In this section we define the syntax and operational semantics of the formal language we
work with, which is an extension of the π-calculus featuring base and composite data types
and a pattern matching construct.
[s-par 1]   idle | P ≡ P
[s-par 2]   P | Q ≡ Q | P
[s-par 3]   P | (Q | R) ≡ (P | Q) | R
[s-rep]     *P ≼ *P | P
[s-res 1]   new a in new b in P ≡ new b in new a in P
[s-res 2]   (new a in P ) | Q ≡ new a in (P | Q)    if a ∉ fn(Q)
Table 2: Structural pre-congruence for processes.
2.1. Syntax. Let us introduce some notation first. We use integer numbers m, n, . . . , a
countable set of channels a, b, . . . , and a countable set of variables x, y, . . . which is disjoint
from the set of channels; names u, v, . . . are either channels or variables.
The syntax of expressions and processes is given in Table 1. Expressions e, f, . . .
are either integers, names, pairs (e,f) of expressions, the i-th projection of an expression
i(e) where i ∈ {fst, snd}, or the injection i(e) of an expression e using the constructor
i ∈ {inl, inr}. Using projections fst and snd instead of a pair splitting construct, as found
for instance in [24, 23], is somewhat unconventional, but helps us highlighting some features
of our type system. We will discuss some practical aspects of this choice in Section 6.3.
Values v, w, . . . are expressions without variables and occurrences of the projections
fst and snd.
Processes P , Q, . . . comprise and extend the standard constructs of the asynchronous
π-calculus. The idle process performs no action; the input process e?(x).P waits for a
message v from the channel denoted by e and continues as P where x has been replaced
by v; the output process e!f sends the value resulting from the evaluation of f on the
channel resulting from the evaluation of e; the composition P | Q executes P and Q in
parallel; the replication *P denotes infinitely many copies of P executing in parallel; the
restriction new a in P creates a new channel a with scope P . In addition to these, we
include a pattern matching construct case e {i(xi ) ⇒ Pi }i=inl,inr which evaluates e to a
value of the form i(v) for some i ∈ {inl, inr}, binds v to xi and continues as Pi . The
notions of free names fn(P ) and bound names bn(P ) of P are as expected, recalling that
case e {i(xi ) ⇒ Pi }i=inl,inr binds xi in Pi . We identify processes modulo renaming of
bound names and we write e{v/x} and P {v/x} for the capture-avoiding substitutions of v
for the free occurrences of x in e and P , respectively. Occasionally, we omit idle when it
is guarded by a prefix.
2.2. Operational semantics. The operational semantics of the language is defined in
terms of a structural pre-congruence relation for processes, an evaluation relation for expressions, and a reduction relation for processes. Structural pre-congruence ≼ is meant
to rearrange process terms which should not be distinguished. The relation is defined in
Table 2, where we write P ≡ Q in place of the two inequalities P ≼ Q and Q ≼ P . Overall
≡ coincides with the conventional structural congruence of the π-calculus, except that, as
in [12], we omit the relation *P | P ≼ *P (the reason will be explained in Remark 3.10).
Evaluation e ↓ v and reduction P −ℓ→ Q are defined in Table 3. Both relations are
fairly standard. As in [15], reduction is decorated with a label ℓ that is either a channel
or the special symbol τ : in [r-comm] the label is the channel a on which a message is
exchanged; in [r-case] it is τ since pattern matching is an internal computation not involving
[e-int]   n ↓ n          [e-chan]   a ↓ a

[e-pair]
  ei ↓ vi (i=1,2)
  ──────────────────────
  (e1 ,e2 ) ↓ (v1 ,v2 )

[e-fst]
  e ↓ (v,w)
  ────────────
  fst(e) ↓ v

[e-snd]
  e ↓ (v,w)
  ────────────
  snd(e) ↓ w

[e-inl], [e-inr]
  e ↓ v     k ∈ {inl, inr}
  ─────────────────────────
  k(e) ↓ k(v)

[r-comm]
  ei ↓ a (i=1,2)     f ↓ v
  ──────────────────────────────────
  e1 !f | e2 ?(x).Q −a→ Q{v/x}

[r-case]
  e ↓ k(v)     k ∈ {inl, inr}
  ────────────────────────────────────────────────────
  case e {i(xi ) ⇒ Pi }i=inl,inr −τ→ Pk {v/xk }

[r-par]
  P −ℓ→ P ′
  ─────────────────────
  P | Q −ℓ→ P ′ | Q

[r-new 1]
  P −a→ Q
  ──────────────────────────────
  new a in P −τ→ new a in Q

[r-new 2]
  P −ℓ→ Q     ℓ ≠ a
  ──────────────────────────────
  new a in P −ℓ→ new a in Q

[r-struct]
  P ≼ P ′     P ′ −ℓ→ Q′     Q′ ≼ Q
  ────────────────────────────────────
  P −ℓ→ Q
Table 3: Evaluation of expressions and reduction of processes.
communications. Note that, as we allow expressions in input and output processes for both
the subject and the object of a communication, rule [r-comm] provides suitable premises to
evaluate them. Rules [r-par], [r-new 1], and [r-new 2] propagate labels through parallel
compositions and restrictions. In [r-new 1], the label a becomes τ when it escapes the scope
of a. Rule [r-struct] closes reduction under structural congruence.
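To make the evaluation relation e ↓ v concrete, the following Python sketch (ours; the paper's own proof-of-concept implementation is a separate artifact) evaluates closed expressions according to the [e-*] rules of Table 3, encoding expressions as tagged tuples.

    def eval_expr(e):
        """Evaluate a closed expression to a value, following the [e-*] rules."""
        tag = e[0]
        if tag in ('int', 'chan'):          # [e-int], [e-chan]: constants are values
            return e
        if tag == 'pair':                   # [e-pair]
            return ('pair', eval_expr(e[1]), eval_expr(e[2]))
        if tag == 'fst':                    # [e-fst]
            _, v, _w = eval_expr(e[1])
            return v
        if tag == 'snd':                    # [e-snd]
            _, _v, w = eval_expr(e[1])
            return w
        if tag in ('inl', 'inr'):           # [e-inl], [e-inr]
            return (tag, eval_expr(e[1]))
        raise ValueError('not an expression: %r' % (e,))

    # the pair (39,a) sent to the successor service of Section 1
    msg = ('pair', ('int', 39), ('chan', 'a'))
    print(eval_expr(msg))                   # ('pair', ('int', 39), ('chan', 'a'))
    print(eval_expr(('fst', msg)))          # ('int', 39)
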
Example 2.1 (list sharing). Below are the desugared representations of P and Q discussed
in Section 1:
P′ def= *odd?(z).
         case fst(z) of
           inl(_) ⇒ snd(snd(z))!fst(snd(z))
           inr(x) ⇒ fst(x)?(y).even!(snd(x),(fst(snd(z)) + y,snd(snd(z))))

Q′ def= *even?(z).
         case fst(z) of
           inl(_) ⇒ snd(snd(z))!fst(snd(z))
           inr(x) ⇒ odd!(snd(x),(fst(snd(z)),snd(snd(z))))
where the constructors inl and inr respectively replace Nil and Cons, inl has an (unused)
argument denoted by the anonymous variable _, and tuple components are accessed using
(possibly repeated) applications of fst and snd.
3. Type system
In this section we define a type system for the language presented in Section 2. The type
system extends the one for the linear π-calculus [15] with composite and possibly infinite,
regular types. The key feature of the linear π-calculus is that channel types are enriched with
information about the number of times the channels they denote are used for input/output
operations. Such number is abstracted into a use κ, . . . , which is an element of the set
{0, 1, ω} where 0 and 1 obviously stand for no use and one use only, while ω stands for any
number of uses.
Definition 3.1 (types). Types, ranged over by t, s, . . . , are the possibly infinite regular
trees built using the nullary constructor int, the unary constructors [ · ]κ1 ,κ2 for every
combination of κ1 and κ2 , the binary constructors · × · (product) and · ⊕ · (disjoint sum).
The type [t]κ1 ,κ2 denotes channels for exchanging messages of type t. The uses κ1 and
κ2 respectively denote how many input and output operations are allowed on the channel.
For example: a channel with type [t]0,1 cannot be used for input and must be used once
for sending a message of type t; a channel with type [t]0,0 cannot be used at all; a channel
with type [t]ω,ω can be used any number of times for sending and/or receiving messages of
type t. A product t1 × t2 describes pairs (v1 ,v2 ) where vi has type ti for i = 1, 2. A disjoint
sum t1 ⊕ t2 describes values of the form inl(v) where v has type t1 or of the form inr(v)
where v has type t2 . Throughout the paper we let ⊙ stand for either × or ⊕.
We do not provide a concrete, finite syntax for denoting infinite types and work directly
with regular trees instead. Recall that a regular tree is a partial function from paths to
type constructors (see e.g. [22, Chapter 21]), it consists of finitely many distinct subtrees,
and admits finite representations using either the well-known µ notation or finite systems
of equations [1] (our implementation internally uses both). Working directly with regular
trees gives us the coarsest possible notion of type equality (t = s means that t and s are
the same partial function) and it allows us to reuse some key results on regular trees that
will be essential in the following. In particular, throughout the paper we will implicitly use
the next result to define types as solutions of particular systems of equations:
Theorem 3.2. Let {αi = Ti | 1 ≤ i ≤ n} be a finite system of equations where each Ti is a
finite term built using the constructors in Definition 3.1 and the pairwise distinct unknowns
{α1 , . . . , αn }. If none of the Ti is an unknown, then there exists a unique substitution
σ = {αi 7→ ti | 1 ≤ i ≤ n} such that ti = σTi and ti is a regular tree for each 1 ≤ i ≤ n.
Proof. All the right hand sides of the equations are finite – hence regular – and different
from an unknown, therefore this result is just a particular case of [1, Theorem 4.3.1].
Example 3.3 (integer stream). The type of integer streams int × (int × (int × · · · )) is
the unique regular tree t such that t = int × t. To make sense out of this statement we have
to be sure that such t does exist and is indeed unique. Consider the equation α = int × α
obtained from the above equality by turning each occurrence of the metavariable t into the
unknown α and observe that the right hand side of such equation is not an unknown. By
Theorem 3.2, there exists a unique regular tree t such that t = int × t. Note that t consists
of two distinct subtrees, int and t itself.
Example 3.4 (lists). To verify the existence of the types todd and teven informally introduced in Section 1, consider the system of equations
{α1 = int ⊕ ([int]1,0 × α2 ), α2 = int ⊕ ([int]0,0 × α1 )}
obtained by turning the metavariables todd and teven in (1.3) respectively into the unknowns
α1 and α2 and by using basic types and disjoint sums in place of the list constructors Nil
and Cons. Theorem 3.2 says that there exist two unique regular trees todd and teven such
that todd = int ⊕ ([int]1,0 × teven ) and teven = int ⊕ ([int]0,0 × todd ). Similarly, tlist is
the unique type such that tlist = int ⊕ ([int]1,0 × tlist ).
We now define some key notions on uses and types. To begin with, we define a binary
operation + on uses that allows us to express the combined use κ1 + κ2 of a channel that is
used both as denoted by κ1 and as denoted by κ2 . Formally:
              { κ1   if κ2 = 0
κ1 + κ2 def=  { κ2   if κ1 = 0                (3.1)
              { ω    otherwise
Note that 0 is neutral and ω is absorbing for + and that 1 + 1 = ω, since ω is the only
use allowing us to express the fact that a channel is used twice. In a few places we will
write 2κ as an abbreviation for κ + κ.
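Since the use algebra is finite, equation (3.1) can be transcribed directly into code; the following Python sketch (ours, purely illustrative) represents ω by the string 'w'.

    OMEGA = 'w'   # the use "any number of times"

    def use_add(k1, k2):
        """Combination of uses as in (3.1): 0 is neutral, omega absorbing, 1 + 1 = omega."""
        if k2 == 0:
            return k1
        if k1 == 0:
            return k2
        return OMEGA

    assert use_add(0, 1) == 1
    assert use_add(1, 1) == OMEGA     # a channel used twice becomes unlimited
    assert use_add(OMEGA, 0) == OMEGA
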
We now lift the notion of combination from uses to types. Since types may be infinite,
we resort to a coinductive definition.
Definition 3.5 (type combination). Let Ctype be the largest relation between pairs of types
and types such that ((t1 , t2 ), s) ∈ Ctype implies either:
• t1 = t2 = s = int, or
• t1 = [t]κ1 ,κ2 and t2 = [t]κ3 ,κ4 and s = [t]κ1 +κ3 ,κ2 +κ4 , or
• t1 = t11 ⊙ t12 and t2 = t21 ⊙ t22 and s = s1 ⊙ s2 and ((t1i , t2i ), si ) ∈ Ctype for i = 1, 2.
Observe that Ctype is a partial binary function on types, that is ((t1 , t2 ), s1 ) ∈ Ctype and
((t1 , t2 ), s2 ) ∈ Ctype implies s1 = s2 . When (t, s) ∈ dom(Ctype ), we write t + s for Ctype (t, s),
that is the combination of t and s. Occasionally we also write 2t in place of t + t.
Intuitively, basic types combine with themselves and the combination of channel types
with equal message types is obtained by combining corresponding uses. For example, we
have [int]0,1 + [int]1,0 = [int]1,1 and [[int]1,0 ]0,1 + [[int]1,0 ]1,1 = [[int]1,0 ]1,ω . In
the latter example, note that the uses of channel types within the top-most ones are not
combined together. Type combination propagates component-wise on composite types. For
instance, we have ([int]0,1 × [int]0,0 ) + ([int]0,0 × [int]1,0 ) = ([int]0,1 + [int]0,0 ) ×
([int]0,0 + [int]1,0 ) = [int]0,1 × [int]1,0 . Unlike use combination, type combination is
a partial operation: it is undefined to combine two types having different structures, or to
combine two channel types carrying messages of different types. For example, int+[int]0,0
is undefined and so is [[int]0,0 ]0,1 + [[int]0,1 ]0,1 , because [int]0,0 and [int]0,1 differ.
Types that can be combined together play a central role, so we name a relation that
characterizes them:
Definition 3.6 (coherent types). We say that t and s are structurally coherent or simply
coherent, notation t ∼ s, if t + s is defined, namely there exists t′ such that ((t, s), t′ ) ∈ Ctype .
Observe that ∼ is an equivalence relation, implying that a type can always be combined
with itself (i.e., 2t is always defined). Type combination is also handy for characterizing a
fundamental partitioning of types:
Definition 3.7 (unlimited and linear types). We say that t is unlimited, notation un(t), if
2t = t. We say that it is linear otherwise.
Channel types are either linear or unlimited depending on their uses. For example, [t]0,0
is unlimited because [t]0,0 +[t]0,0 = [t]0,0 , whereas [t]1,0 is linear because [t]1,0 +[t]1,0 =
[t]ω,0 ≠ [t]1,0 . Similarly, [t]ω,ω is unlimited while [t]0,1 and [t]1,1 are linear. Other types
are linear or unlimited depending on the channel types occurring in them. For instance,
[t]0,0 × [t]1,0 is linear while [t]0,0 × [t]ω,0 is unlimited. Note that only the topmost
channel types of a type matter. For example, [[t]1,1 ]0,0 is unlimited despite the fact
that it contains the subterm [t]1,1 which is itself linear, because such subterm is found
within an unlimited channel type.
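For finite types, Definition 3.5 and Definition 3.7 can be rendered in a few lines of Python. The sketch below (ours) encodes types as nested tuples and deliberately ignores infinite regular types, which in an actual implementation require finite systems of equations or memoized coinduction; use_add is as in the previous sketch.

    # finite types only: 'int', ('chan', k_in, k_out, t), ('prod', t1, t2), ('sum', t1, t2)

    def use_add(k1, k2):                    # equation (3.1), as in the previous sketch
        return k1 if k2 == 0 else (k2 if k1 == 0 else 'w')

    def type_add(t, s):
        """Partial combination of Definition 3.5; returns None when t + s is undefined."""
        if t == 'int' and s == 'int':
            return 'int'
        if isinstance(t, tuple) and isinstance(s, tuple):
            if t[0] == 'chan' and s[0] == 'chan' and t[3] == s[3]:    # equal message types
                return ('chan', use_add(t[1], s[1]), use_add(t[2], s[2]), t[3])
            if t[0] == s[0] and t[0] in ('prod', 'sum'):
                l, r = type_add(t[1], s[1]), type_add(t[2], s[2])
                return None if l is None or r is None else (t[0], l, r)
        return None

    def unlimited(t):
        """Definition 3.7: t is unlimited iff t + t = t."""
        return type_add(t, t) == t

    out_ch  = ('chan', 0, 1, 'int')                  # [int]^{0,1}
    in_ch   = ('chan', 1, 0, 'int')                  # [int]^{1,0}
    dead_ch = ('chan', 0, 0, 'int')                  # [int]^{0,0}
    assert type_add(out_ch, in_ch) == ('chan', 1, 1, 'int')
    assert type_add(('prod', 'int', out_ch), ('prod', 'int', dead_ch)) == ('prod', 'int', out_ch)
    assert unlimited(dead_ch) and not unlimited(in_ch)
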
We use type environments to track the type of free names occurring in expressions
and processes. Type environments Γ , . . . are finite maps from names to types that we
write as u1 : t1 , . . . , un : tn . We identify type environments modulo the order of their
associations, write ∅ for the empty environment, dom(Γ ) for the domain of Γ , namely the
set of names for which there is an association in Γ , and Γ1 , Γ2 for the union of Γ1 and Γ2
when dom(Γ1 ) ∩ dom(Γ2 ) = ∅. We also extend the partial combination operation + on types
to a partial combination operation on type environments, thus:
              { Γ1 , Γ2                   if dom(Γ1 ) ∩ dom(Γ2 ) = ∅
Γ1 + Γ2 def=  {                                                                    (3.2)
              { (Γ1′ + Γ2′ ), u : t1 + t2   if Γi = Γi′ , u : ti for i = 1, 2
The operation + extends type combination in [15] and the ⊎ operator in [24]. Note that
Γ1 + Γ2 is undefined if there is u ∈ dom(Γ1 ) ∩ dom(Γ2 ) such that Γ1 (u) + Γ2 (u) is undefined.
Note also that dom(Γ1 + Γ2 ) = dom(Γ1 ) ∪ dom(Γ2 ). Thinking of type environments as of
specifications of the resources used by expressions/processes, Γ1 + Γ2 expresses the combined
use of the resources specified in Γ1 and Γ2 . Any resource occurring in only one of these
environments occurs in Γ1 + Γ2 ; any resource occurring in both Γ1 and Γ2 is used according
to the combination of its types in Γ1 + Γ2 . For example, if a process sends an integer over
a channel a, it will be typed in an environment that contains the association a : [int]0,1 ;
if another process uses the same channel a for receiving an integer, it will be typed in an
environment that contains the association a : [int]1,0 . Overall, the parallel composition of
the two processes uses channel a according to the type [int]0,1 + [int]1,0 = [int]1,1 and
therefore it will be typed in an environment that contains the association a : [int]1,1 .
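Environment combination is then a pointwise lift of type combination. A possible Python rendering (ours, building on type_add from the previous sketch) is:

    def env_add(g1, g2):
        """Combination of type environments as in (3.2); None if a shared name clashes."""
        out = dict(g1)
        for name, t in g2.items():
            if name in out:
                combined = type_add(out[name], t)    # type_add from the previous sketch
                if combined is None:
                    return None                      # Gamma1 + Gamma2 undefined
                out[name] = combined
            else:
                out[name] = t
        return out

    sender   = {'a': ('chan', 0, 1, 'int')}          # uses a for one output
    receiver = {'a': ('chan', 1, 0, 'int')}          # uses a for one input
    assert env_add(sender, receiver) == {'a': ('chan', 1, 1, 'int')}
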
The last notion we need before presenting the type rules is that of an unlimited type
environment. This is a plain generalization of the notion of unlimited type, extended to the
range of a type environment. We say that Γ is unlimited, notation un(Γ ), if un(Γ (u)) for
every u ∈ dom(Γ ). A process typed in an unlimited type environment need not use any of
the resources described therein.
Type rules for expressions and processes are presented in Table 4. These rules are
basically the same as those found in the literature [15, 10]. The possibility of sharing data
structures among several processes, which we have exemplified in Section 1, is a consequence
of our notion of type combination extended to composite regular types.
Type rules for expressions are unremarkable. Just observe that unused type environments must be unlimited. Also, the projections fst and snd discard one component of a
pair, so the discarded component must have an unlimited type.
Let us move on to the type rules for processes. The idle process does nothing, so it is
well typed only in an unlimited environment. Rule [t-in] types an input process e?(x).P .
The subject e must evaluate to a channel whose input use is either 1 or ω and whose output
use is either 0 or ω. We capture the first condition saying that the input use of the channel
has the form 1 + κ1 for some κ1 , and the second condition saying that the output use of
Expressions

[t-int]
  un(Γ )
  ─────────────
  Γ ⊢ n : int

[t-name]
  un(Γ )
  ──────────────────
  Γ , u : t ⊢ u : t

[t-inl]
  Γ ⊢ e : t
  ─────────────────────
  Γ ⊢ inl(e) : t ⊕ s

[t-inr]
  Γ ⊢ e : s
  ─────────────────────
  Γ ⊢ inr(e) : t ⊕ s

[t-pair]
  Γi ⊢ ei : ti (i=1,2)
  ─────────────────────────────────
  Γ1 + Γ2 ⊢ (e1 ,e2 ) : t1 × t2

[t-fst]
  Γ ⊢ e : t × s     un(s)
  ─────────────────────────
  Γ ⊢ fst(e) : t

[t-snd]
  Γ ⊢ e : t × s     un(t)
  ─────────────────────────
  Γ ⊢ snd(e) : s

Processes

[t-idle]
  un(Γ )
  ──────────
  Γ ⊢ idle

[t-in]
  Γ1 ⊢ e : [t]1+κ1 ,2κ2      Γ2 , x : t ⊢ P
  ────────────────────────────────────────────
  Γ1 + Γ2 ⊢ e?(x).P

[t-out]
  Γ1 ⊢ e : [t]2κ1 ,1+κ2      Γ2 ⊢ f : t
  ────────────────────────────────────────────
  Γ1 + Γ2 ⊢ e!f

[t-rep]
  Γ ⊢ P     un(Γ )
  ──────────────────
  Γ ⊢ *P

[t-par]
  Γi ⊢ Pi (i=1,2)
  ───────────────────────
  Γ1 + Γ2 ⊢ P1 | P2

[t-new]
  Γ , a : [t]κ,κ ⊢ P
  ───────────────────────
  Γ ⊢ new a in P

[t-case]
  Γ1 ⊢ e : t ⊕ s      Γ2 , xinl : t ⊢ Pinl      Γ2 , xinr : s ⊢ Pinr
  ─────────────────────────────────────────────────────────────────────
  Γ1 + Γ2 ⊢ case e {i(xi ) ⇒ Pi }i=inl,inr
Table 4: Type rules for expressions and processes.
the channel has the form 2κ2 for some κ2 . The continuation P is typed in an environment
enriched with the association for the received message x. Note the combination Γ1 + Γ2
in the conclusion of rule [t-in]. In particular, if e evaluates to a linear channel, its input
capability is consumed by the operation and such channel can no longer be used for inputs
in the continuation. Rule [t-out] types an output process e!f. The rule is dual to [t-in] in
that it requires the channel to which e evaluates to have a positive output use. Rule [t-rep]
states that a replicated process *P is well typed in the environment Γ provided that P is
well typed in an unlimited Γ . The rationale is that *P stands for an unbounded number
of copies of P composed in parallel, hence P cannot contain (free) linear channels. The
rules [t-par] and [t-case] are conventional, with the by now familiar use of environment
combination for properly distributing linear resources to the various subterms of a process.
The rule [t-new] is also conventional. We require the restricted channel to have the same
input and output uses. While this is not necessary for the soundness of the type system, in
practice it is a reasonable requirement. We also argue that this condition is important for
the modular application of the type reconstruction algorithm; we will discuss this aspect
more in detail in Section 6.
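For concreteness, the syntax manipulated by these rules can be written down as an OCaml datatype. The sketch below is hypothetical (constructor names are ours) and simply mirrors the expression and process forms appearing in Table 4.

    (* Hypothetical OCaml rendering of the expression and process syntax used
       by the type rules of Table 4. *)
    type name = string

    type expr =
      | Int  of int                 (* integer constant n *)
      | Name of name                (* name u *)
      | Pair of expr * expr         (* (e1,e2) *)
      | Fst  of expr                (* fst(e) *)
      | Snd  of expr                (* snd(e) *)
      | Inl  of expr                (* inl(e) *)
      | Inr  of expr                (* inr(e) *)

    type proc =
      | Idle                                            (* idle *)
      | In   of expr * name * proc                      (* e?(x).P *)
      | Out  of expr * expr                             (* e!f *)
      | Rep  of proc                                    (* *P *)
      | Par  of proc * proc                             (* P1 | P2 *)
      | New  of name * proc                             (* new a in P *)
      | Case of expr * (name * proc) * (name * proc)    (* case e {inl(x) => P, inr(y) => Q} *)

    (* Example: new a in (a!3 | a?(x).idle), a process used again in Section 4. *)
    let _example =
      New ("a", Par (Out (Name "a", Int 3), In (Name "a", "x", Idle)))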
As in many behavioral type systems, the type environment in which the reducing process is typed may change as a consequence of the reduction. More specifically, reductions
involving a communication on channels consume 1 unit from both the input and output uses
of the channel’s type. In order to properly state subject reduction, we define a reduction
relation over type environments. In particular, we write −ℓ→ for the least relation between
type environments such that

  Γ −τ→ Γ        Γ + a : [t]1,1 −a→ Γ

In words, −τ→ denotes an internal computation (pattern matching) or a communication on
some restricted channel which does not consume any resource from the type environment,
while −a→ denotes a communication on channel a which consumes 1 use from both the input
and output slots in a's type. For example, we have

  a : [int]1,1 −a→ a : [int]0,0

by taking Γ ≝ a : [int]0,0 in the definition of −a→ above, since Γ + a : [int]1,1 = a : [int]1,1.
The residual environment denotes the fact that the (linear) channel a can no longer be used
for communication.
Now we have:
Theorem 3.8. Let Γ ⊢ P and P −ℓ→ Q. Then Γ′ ⊢ Q for some Γ′ such that Γ −ℓ→ Γ′.
Theorem 3.8 establishes not only a subject reduction result, but also a soundness result
because it implies that a channel is used no more than its type allows. It is possible to
establish more properties of the linear π-calculus, such as the fact that communications
involving linear channels enjoy partial confluence. In this work we focus on the issue of
type reconstruction. The interested reader may refer to [15] for further results.
Example 3.9. We consider again the processes P ′ and Q′ in Example 2.1 and sketch a few
key derivation steps to argue that they are well typed. To this aim, consider the types todd ,
teven , and tzero that satisfy the equalities below
  todd  = int ⊕ ([int]0,1 × teven)
  teven = int ⊕ ([int]0,0 × todd)
  tzero = int ⊕ ([int]0,0 × tzero)

and also consider the types of the messages respectively carried by odd and even:

  sodd  ≝ todd × (int × [int]0,1)
  seven ≝ teven × (int × [int]0,1)
Now, in the inl branch of P ′ we derive (D1)
  z : tzero × (int × [int]0,1) ⊢ z : tzero × (int × [int]0,1)        [t-name]
  z : tzero × (int × [int]0,1) ⊢ snd(z) : int × [int]0,1             [t-snd]
  z : tzero × (int × [int]0,1) ⊢ snd(snd(z)) : [int]0,1              [t-snd]

using the fact that un(tzero ) and un(int). We also derive (D2)

  z : tzero × (int × [int]0,0) ⊢ z : tzero × (int × [int]0,0)        [t-name]
  z : tzero × (int × [int]0,0) ⊢ snd(z) : int × [int]0,0             [t-snd]
  z : tzero × (int × [int]0,0) ⊢ fst(snd(z)) : int                   [t-fst]
using the fact that un(tzero ) and un([int]0,0 ), therefore we derive (D3)
  (D1)    (D2)
  ───────────────────────────────────────────────────────────── [t-out]
  z : tzero × (int × [int]0,1), _ : int ⊢ snd(snd(z))!fst(snd(z))

using the combination

  (tzero × (int × [int]0,1)) + (tzero × (int × [int]0,0)) = tzero × (int × [int]0,1)
Already in this sub-derivation we appreciate that although the pair z is accessed twice, its
type in the conclusion of (D3) correctly tracks the fact that the channel contained in z is
only used once, for an output.
For the inr branch in P ′ there exists another derivation (D4) concluding
  ⋮
  even : [seven]0,ω, x : [int]1,0 × teven, z : tzero × int × [int]1,0 ⊢ fst(x)?(y). · · ·
Now we conclude
  z : sodd ⊢ z : sodd  [t-name]      (D3)      (D4)
  ───────────────────────────────────────────────────────────── [t-case]
  even : [seven]0,ω, z : sodd ⊢ case z of · · ·
  ───────────────────────────────────────────────────────────── [t-in]
  odd : [sodd]ω,0, even : [seven]0,ω ⊢ odd?(z).case z of · · ·
  ───────────────────────────────────────────────────────────── [t-rep]
  odd : [sodd]ω,0, even : [seven]0,ω ⊢ P′
Note that odd and even must be unlimited channels because they occur free in a replicated
process, for which rule [t-rep] requires an unlimited environment. A similar derivation
shows that Q′ is well typed in an environment where the types of odd and even have
swapped uses
  ⋮
  ─────────────────────────────────────── [t-rep]
  odd : [sodd]0,ω, even : [seven]ω,0 ⊢ Q′
so the combined types of odd and even are [sodd ]ω,ω and [seven ]ω,ω , respectively. Using
these, we find a typing derivation for the process R in Section 1. Proceeding bottom-up we
have
  ⋮
  ───────────────────────────────────────────────────── [t-out]
  odd : [sodd]ω,ω, l : todd, a : [int]0,1 ⊢ odd!(l,0,a)

and

  ⋮
  ───────────────────────────────────────────────────── [t-out]
  even : [seven]ω,ω, l : teven, b : [int]0,1 ⊢ even!(l,0,b)

as well as

  ⋮
  ───────────────────────────────────────────────────── [t-in]
  a : [int]1,0, b : [int]1,0, r : [int]0,1 ⊢ a?(x).b?(y).r!(x + y)

from which we conclude

  ⋮
  ═════════════════════════════════════════════════════ [t-new] (twice)
  odd : [sodd]ω,ω, even : [seven]ω,ω, l : tlist, r : [int]0,1 ⊢ new a,b in · · ·
using the property todd + teven = tlist .
We conclude this section with a technical remark to justify the use of a structural
precongruence relation in place of a more familiar symmetric one.
Remark 3.10. Let us show why the relation *P | P ≼ *P would invalidate Theorem 3.8
(more specifically, Lemma A.3) in our setting (a similar phenomenon is described in [12]).
To this aim, consider the process

  P ≝ a?(x).new c in (*c?(y).c!y | c!b)

and the type environment Γκ ≝ a : [int]ω,0, b : [int]0,κ for an arbitrary κ. We can derive
  ⋮
  c : [[int]0,κ]ω,ω, y : [int]0,κ ⊢ c!y                 [t-out]
  c : [[int]0,κ]ω,ω ⊢ c?(y).c!y                         [t-in]
  c : [[int]0,κ]ω,ω ⊢ *c?(y).c!y                        [t-rep]
  Γκ, c : [[int]0,κ]0,1 ⊢ c!b                           [t-out]
  Γκ, c : [[int]0,κ]ω,ω ⊢ *c?(y).c!y | c!b              [t-par]
  Γκ ⊢ new c in · · ·                                   [t-new]
  Γκ ⊢ P                                                [t-in]
where we have elided a few obvious typing derivations for expressions. In particular, we
can find a derivation where b has an unlimited type (κ = 0) and another one where b has
a linear type (κ = 1). This is possible because channel c, which is restricted within P ,
can be given different types – respectively, [[int]0,0 ]ω,ω and [[int]0,1 ]ω,ω – in the two
derivations. We can now obtain
  ⋮                        ⋮
  Γ0 ⊢ P                   Γ1 ⊢ P
  ─────────── [t-rep]
  Γ0 ⊢ *P
  ─────────────────────────────── [t-par]
  Γ1 ⊢ *P | P
because un(Γ0) and Γ0 + Γ1 = Γ1. If we allowed the structural congruence rule *P | P ≼ *P,
then Γ1 ⊢ *P would not be derivable because Γ1 is linear, hence typing would not be
preserved by structural pre-congruence. This problem is avoided in [15, 10] by limiting
replication to input prefixes, omitting any structural congruence rule for replications, and
adding a dedicated synchronization rule for them. In [15] it is stated that “the full π-calculus replication operator poses no problems for the linear type system”, but this holds
because there the calculus is typed, so multiple typing derivations for the same process P
above would assign the same type to c and, in turn, the same type to b.
4. Constraint Generation
We formalize the problem of type reconstruction as follows: given a process P , find a type
environment Γ such that Γ ⊢ P , provided there is one. In general, in the derivation for
Γ ⊢ P we also want to identify as many linear channels as possible. We will address this
latter aspect in Section 5.
  Type expressions    T, S ::=   α           (type variable)
                               | int         (integer)
                               | [T]U,V      (channel)
                               | T × S       (product)
                               | T ⊕ S       (disjoint sum)

  Use expressions     U, V ::=   ̺           (use variable)
                               | κ           (use constant)
                               | U + V       (use combination)

Table 5: Syntax of use and type expressions.
4.1. Syntax-directed generation algorithm. The type rules shown in Table 4 rely on
a fair amount of guessing that concerns the structure of types in the type environment,
how they are split/combined using +, and the uses occurring in them. So, these rules
cannot be easily interpreted as a type reconstruction algorithm. Our approach to defining
one is conventional: first, we give an alternative set of (almost) syntax-directed
rules that generate constraints on types; then, we search for a solution of such constraints.
The main technical challenge is that we cannot base our type reconstruction algorithm on
conventional unification because we have to deal with constraints expressing not only the
equality between types and uses, but also the combination of types and uses. In addition,
we work with possibly infinite types.
To get started, we introduce use and type expressions, which share the same structure
as uses/types but they differ from them in two fundamental ways:
(1) We allow use/type variables to stand for unknown uses/types.
(2) We can express symbolically the combination of use expressions.
We therefore introduce a countable set of use variables ̺, . . . as well as a countable set
of type variables α, β, . . . ; the syntax of use expressions U, V, . . . and of type expressions
T, S, . . . is given in Table 5. Observe that every use is also a use expression and every finite
type is also a type expression. We say that T is proper if it is different from a type variable.
Constraints ϕ, . . . are defined by the grammar below:
  ϕ ::=   T =̂ S            (type equality)
        | T =̂ S1 + S2      (type combination)
        | T ∼̂ S            (type coherence)
        | U =̂ V            (use equality)
Constraints express relations between types/uses that must be satisfied in order for a given
process to be well typed. In particular, we need to express equality constraints between
types (T =̂ S) and uses (U =̂ V), coherence constraints (T ∼̂ S), and combination constraints
between types (T =̂ S1 + S2). We will write un(T) as an abbreviation for the constraint
T =̂ T + T. This notation is motivated by Definition 3.7, according to which a type is
unlimited if and only if it is equal to its own combination. We let C, . . . range over finite
constraint sets. The set of expressions of a constraint set C, written expr(C), is the (finite)
set of use and type expressions occurring in the constraints in C.
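As a hypothetical illustration (datatype and constructor names are ours), use/type expressions and the four kinds of constraints translate into OCaml almost literally:

    (* Hypothetical OCaml syntax for use expressions, type expressions and
       constraints, mirroring Table 5 and the grammar of ϕ above. *)
    type use_const = Zero | One | Omega

    type use_expr =
      | UVar   of string                           (* use variable ̺ *)
      | UConst of use_const                        (* use constant κ *)
      | USum   of use_expr * use_expr              (* use combination U + V *)

    type type_expr =
      | TVar  of string                            (* type variable α *)
      | TInt                                       (* int *)
      | TChan of type_expr * use_expr * use_expr   (* [T]U,V *)
      | TProd of type_expr * type_expr             (* T × S *)
      | TSum  of type_expr * type_expr             (* T ⊕ S *)

    type constr =
      | EqT   of type_expr * type_expr             (* T =̂ S *)
      | CombT of type_expr * type_expr * type_expr (* T =̂ S1 + S2 *)
      | CohT  of type_expr * type_expr             (* T ∼̂ S *)
      | EqU   of use_expr * use_expr               (* U =̂ V *)

    (* un(T) abbreviates the combination constraint T =̂ T + T. *)
    let un t = CombT (t, t, t)

    (* A small constraint set, e.g. {α =̂ [int]̺1,̺2, un(α)}. *)
    let _c : constr list =
      [ EqT (TVar "a", TChan (TInt, UVar "r1", UVar "r2")); un (TVar "a") ]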
The type reconstruction algorithm generates type environments for the expressions
and processes being analyzed. Unlike the environments in Section 3, these environments
associate names with type expressions. For this reason we will let ∆, . . . range over the
  [c-env 1]   dom(∆1) ∩ dom(∆2) = ∅
              ─────────────────────────
              ∆1 ⊔ ∆2 ⇝ ∆1, ∆2; ∅

  [c-env 2]   ∆1 ⊔ ∆2 ⇝ ∆; C    α fresh
              ──────────────────────────────────────────────────────
              (∆1, u : T) ⊔ (∆2, u : S) ⇝ ∆, u : α; C ∪ {α =̂ T + S}

  [m-env 1]   ─────────────────
              ∅ ⊓ ∅ ⇝ ∅; ∅

  [m-env 2]   ∆1 ⊓ ∆2 ⇝ ∆; C
              ──────────────────────────────────────────────────
              (∆1, u : T) ⊓ (∆2, u : S) ⇝ ∆, u : T; C ∪ {T =̂ S}

Table 6: Combining and merging operators for type environments.
environments generated by the reconstruction algorithm, although we will refer to them as
type environments.
The algorithm also uses two auxiliary operators ⊔ and ⊓ defined in Table 6. The
relation ∆1 ⊔ ∆2 ⇝ ∆; C combines two type environments ∆1 and ∆2 into ∆ when the names
in dom(∆1) ∪ dom(∆2) are used both as specified in ∆1 and also as specified in ∆2 and, in
doing so, generates a set of constraints C. So ⊔ is analogous to + in (3.2). When ∆1 and ∆2
have disjoint domains, ∆ is just the union of ∆1 and ∆2 and no constraints are generated.
Any name u that occurs in dom(∆1) ∩ dom(∆2) is used according to the combination of ∆1(u)
and ∆2(u). In general, ∆1(u) and ∆2(u) are type expressions with free type variables, hence
this combination cannot be “computed” or “checked” right away. Instead, it is recorded as
the constraint α =̂ ∆1(u) + ∆2(u) where α is a fresh type variable.
The relation ∆1 ⊓ ∆2 ⇝ ∆; C merges two type environments ∆1 and ∆2 into ∆ when
the names in dom(∆1) ∪ dom(∆2) are used either as specified in ∆1 or as specified in ∆2
and, in doing so, generates a constraint set C. This merging is necessary when typing the
alternative branches of a case: recall that rule [t-case] in Table 4 requires the same type
environment Γ for typing the two branches of a case. Consequently, ∆1 ⊓ ∆2 is defined only
when ∆1 and ∆2 have the same domain, and produces a set of constraints C saying that the
corresponding types of the names in ∆1 and ∆2 must be equal.
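A hypothetical OCaml sketch of the two operators (our own names; environments are association lists from names to type expressions, fresh type variables come from a global counter, and the result of the ⇝ relation becomes the function result) might read as follows.

    (* Hypothetical sketch of the combining (⊔) and merging (⊓) operators. *)
    type type_expr = TVar of string | TInt          (* other forms elided *)
    type constr =
      | EqT   of type_expr * type_expr              (* T =̂ S *)
      | CombT of type_expr * type_expr * type_expr  (* T =̂ S1 + S2 *)

    type env = (string * type_expr) list            (* ∆ *)

    let counter = ref 0
    let fresh_tvar () = incr counter; TVar (Printf.sprintf "a%d" !counter)

    (* ∆1 ⊔ ∆2 ⇝ ∆; C ([c-env 1] and [c-env 2]): names shared by the two
       environments get a fresh variable constrained to be the combination
       of their two types. *)
    let combine (d1 : env) (d2 : env) : env * constr list =
      let shared, only1 = List.partition (fun (u, _) -> List.mem_assoc u d2) d1 in
      let only2 = List.filter (fun (u, _) -> not (List.mem_assoc u d1)) d2 in
      List.fold_left
        (fun (d, cs) (u, t1) ->
           let t2 = List.assoc u d2 in
           let a = fresh_tvar () in
           ((u, a) :: d, CombT (a, t1, t2) :: cs))
        (only1 @ only2, []) shared

    (* ∆1 ⊓ ∆2 ⇝ ∆; C ([m-env 1] and [m-env 2]): defined only when the two
       environments have the same domain; corresponding types must be equal. *)
    let merge (d1 : env) (d2 : env) : (env * constr list) option =
      if List.for_all (fun (u, _) -> List.mem_assoc u d2) d1
         && List.for_all (fun (u, _) -> List.mem_assoc u d1) d2
      then Some (d1, List.map (fun (u, t1) -> EqT (t1, List.assoc u d2)) d1)
      else None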
The rules of the type reconstruction algorithm are presented in Table 7 and derive
judgments e : T ◮ ∆; C for expressions and P ◮ ∆; C for processes. In both cases, ∆ is the
generated environment that contains associations for all the free names in e and P , while
C is the set of constraints that must hold in order for e or P to be well typed in ∆. In a
judgment e : T ◮ ∆; C, the type expression T denotes the type of the expression e.
There is a close correspondence between the type system (Table 4) and the reconstruction algorithm (Table 7). In a nutshell, unknown uses and types become fresh use and type
variables (all use/type variables introduced by the rules are assumed to be fresh), every
application of + in Table 4 becomes an application of ⊔ in Table 7, and every assumption
on the form of types becomes a constraint. Constraints accumulate from the premises to
the conclusion of each rule of the reconstruction algorithm, which we now review briefly.
Rule [i-int] deals with integer constants. Their type is obviously int, they contain
no free names and therefore they generate the empty environment and the empty set of
constraints. Rule [i-name] deals with the free occurrence of a name u. A fresh type variable
standing for the type of this occurrence of u is created and used in the resulting type
environment u : α. Again, no constraints are generated. In general, different occurrences
of the same name may have different types which are eventually combined with α later
on in the reconstruction process. In rules [i-inl] and [i-inr] the type of the summand that
Expressions:

  [i-int]     ──────────────────
              n : int ◮ ∅; ∅

  [i-name]    ──────────────────
              u : α ◮ u : α; ∅

  [i-inl]     e : T ◮ ∆; C
              ───────────────────────
              inl(e) : T ⊕ α ◮ ∆; C

  [i-inr]     e : T ◮ ∆; C
              ───────────────────────
              inr(e) : α ⊕ T ◮ ∆; C

  [i-pair]    ei : Ti ◮ ∆i; Ci  (i = 1,2)    ∆1 ⊔ ∆2 ⇝ ∆; C3
              ─────────────────────────────────────────────────
              (e1,e2) : T1 × T2 ◮ ∆; C1 ∪ C2 ∪ C3

  [i-fst]     e : T ◮ ∆; C
              ─────────────────────────────────────────────
              fst(e) : α ◮ ∆; C ∪ {T =̂ α × β, un(β)}

  [i-snd]     e : T ◮ ∆; C
              ─────────────────────────────────────────────
              snd(e) : β ◮ ∆; C ∪ {T =̂ α × β, un(α)}

Processes:

  [i-idle]    ──────────────
              idle ◮ ∅; ∅

  [i-in]      e : T ◮ ∆1; C1    P ◮ ∆2, x : S; C2    ∆1 ⊔ ∆2 ⇝ ∆; C3
              ─────────────────────────────────────────────────────────
              e?(x).P ◮ ∆; C1 ∪ C2 ∪ C3 ∪ {T =̂ [S]1+̺1,2̺2}

  [i-out]     e : T ◮ ∆1; C1    f : S ◮ ∆2; C2    ∆1 ⊔ ∆2 ⇝ ∆; C3
              ─────────────────────────────────────────────────────────
              e!f ◮ ∆; C1 ∪ C2 ∪ C3 ∪ {T =̂ [S]2̺1,1+̺2}

  [i-rep]     P ◮ ∆; C    ∆ ⊔ ∆ ⇝ ∆′; C′
              ──────────────────────────────
              *P ◮ ∆′; C ∪ C′

  [i-par]     Pi ◮ ∆i; Ci  (i = 1,2)    ∆1 ⊔ ∆2 ⇝ ∆; C3
              ─────────────────────────────────────────────
              P1 | P2 ◮ ∆; C1 ∪ C2 ∪ C3

  [i-new]     P ◮ ∆, a : T; C
              ──────────────────────────────────────
              new a in P ◮ ∆; C ∪ {T =̂ [α]̺,̺}

  [i-case]    e : T ◮ ∆1; C1    Pi ◮ ∆i, xi : Ti; Ci  (i = inl,inr)
              ∆inl ⊓ ∆inr ⇝ ∆2; C2    ∆1 ⊔ ∆2 ⇝ ∆; C3
              ───────────────────────────────────────────────────────────────────────────
              case e {i(xi) ⇒ Pi}i=inl,inr ◮ ∆; C1 ∪ C2 ∪ C3 ∪ Cinl ∪ Cinr ∪ {T =̂ Tinl ⊕ Tinr}

  [i-weak]    P ◮ ∆; C
              ────────────────────────────────
              P ◮ ∆, u : α; C ∪ {un(α)}

Table 7: Constraint generation for expressions and processes.
was guessed in [t-inl] and [t-inr] becomes a fresh type variable. Rule [i-pair] creates a
product type from the types of the components of the pair, combines the corresponding
environments and joins all the constraints generated in the process. Rules [i-fst] and [i-snd]
deal with pair projections. The type T of the projected expression must be a product of
the form α × β. Since the first projection discards the second component of a pair, β must
be unlimited in [i-fst]. Symmetrically for [i-snd].
Continuing on with the rules for processes, let us consider [i-in] and [i-out]. The main
difference between these rules and the corresponding ones [t-in] and [t-out] is that the
use information of the channel on which the communication occurs is unknown, hence it is
represented using fresh use variables. The 1 + ̺i part accounts for the fact that the channel
is being used at least once, for an input or an output. The 2̺j part accounts for the fact that
the use information concerning the capability (either input or output) that is not exercised
must be unlimited (note that we extend the notation 2κ to use expressions). Rule [i-rep]
deals with a replicated process *P . In the type system, *P is well typed in an unlimited
environment. Here, we are building up the type environment for *P and we do so by
combining the environment ∆ generated by P with itself. The rationale is that ∆ ⊔ ∆ yields
an unlimited type environment that grants at least all the capabilities granted by ∆. By
now most of the main ingredients of the constraint generation algorithm have been revealed,
and the remaining rules contain no further novelties but the expected use of the merging
operator ⊓ in [i-case]. There is, however, a rule [i-weak] that has no correspondence in
Table 4. This rule is necessary because [i-in], [i-new], and [i-case], which correspond to
the binding constructs of the calculus, assume that the names they bind do occur in the
premises on these rules. But since type environments are generated by the algorithm as
it works through an expression or a process, this may not be the case if a bound name is
never used and therefore never occurs in that expression or process. Furthermore, the ⊓
operator is defined only on type environments having the same domain. This may not be
the case if a name occurs in only one branch of a pattern matching, and not in the other
one. With rule [i-weak] we can introduce missing names in type environments wherever this
is necessary. Naturally, an unused name has an unknown type α that must be unlimited,
whence the constraint un(α) (see Example 4.4 for an instance where [i-weak] is necessary).
Strictly speaking, with [i-weak] this set of rules is not syntax directed, which in principle is
a problem if we want to consider this as an algorithm. In practice, the places where [i-weak]
may be necessary are easy to spot (in the premises of all the aforementioned rules for the
binding constructs). What we gain with [i-weak] is a simpler presentation of the rules for
constraint generation.
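Read as functions, the expression rules of Table 7 thread one environment and accumulate constraints. The following hypothetical OCaml sketch (our own names) covers names, integer constants, and the two projections; pairs, injections, and processes, which additionally need the ⊔ and ⊓ operators, are omitted.

    (* Hypothetical sketch of constraint generation for a fragment of the
       expression language: [i-int], [i-name], [i-fst], [i-snd]. *)
    type type_expr =
      | TVar of string | TInt
      | TProd of type_expr * type_expr               (* other forms elided *)
    type constr =
      | EqT   of type_expr * type_expr               (* T =̂ S *)
      | CombT of type_expr * type_expr * type_expr   (* T =̂ S1 + S2 *)

    let un t = CombT (t, t, t)                       (* un(T) abbreviates T =̂ T + T *)

    type expr = Int of int | Name of string | Fst of expr | Snd of expr

    type env = (string * type_expr) list             (* ∆ *)

    let counter = ref 0
    let fresh_tvar () = incr counter; TVar (Printf.sprintf "a%d" !counter)

    (* e : T ◮ ∆; C *)
    let rec infer : expr -> type_expr * env * constr list = function
      | Int _ -> (TInt, [], [])                                  (* [i-int]  *)
      | Name u -> let a = fresh_tvar () in (a, [ (u, a) ], [])   (* [i-name] *)
      | Fst e ->                                                 (* [i-fst]  *)
          let t, d, c = infer e in
          let a = fresh_tvar () and b = fresh_tvar () in
          (a, d, EqT (t, TProd (a, b)) :: un b :: c)
      | Snd e ->                                                 (* [i-snd]  *)
          let t, d, c = infer e in
          let a = fresh_tvar () and b = fresh_tvar () in
          (b, d, EqT (t, TProd (a, b)) :: un a :: c)

    (* Example: fst(x) yields x : α1 with constraints α1 =̂ α2 × α3 and un(α3). *)
    let _ = infer (Fst (Name "x"))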
4.2. Correctness and completeness. If the constraint set generated from P is satisfiable,
then it corresponds to a typing for P . To formalize this property, we must first define what
“satisfiability” means for a constraint set.
A substitution σ is a finite map from type variables to types and from use variables to
uses. We write dom(σ) for the set of type and use variables for which there is an association
in σ. The application of a substitution σ to a use/type expression U/T, respectively denoted
by σU and σT, replaces use variables ̺ and type variables α in U/T with the corresponding
uses σ(̺) and types σ(α) and computes use combinations whenever possible:
σ(α)
if T = α ∈ dom(σ)
if U = ̺ ∈ dom(σ)
σ(̺)
[σS]σU,σV if T = [S]U,V
def
def
σU = σU1 + σU2 if U = U1 + U2
σT =
σT1 ⊙ σT2 if T = T1 ⊙ T2
U
otherwise
T
otherwise
We will make sure that the application of a substitution σ to a type expression T is always
well defined: either dom(σ) contains no type variables, in which case σT is a type expression,
or dom(σ) includes all use/type variables occurring in T, in which case we say that σ covers
T and σT is a type.
We extend application pointwise to type environments, namely σ∆ ≝ {u : σ(∆(u)) | u ∈ dom(∆)},
and we say that σ covers ∆ if it covers all the type expressions in the range of ∆.
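Under the hypothetical datatypes used in the earlier sketches, substitution application can be transcribed as follows; here substitutions map variables to finite types and constant uses only, and use combinations are computed as soon as both arguments are constants.

    (* Hypothetical sketch of substitution application to use/type expressions. *)
    type use_const = Zero | One | Omega
    type use_expr =
      | UVar of string | UConst of use_const | USum of use_expr * use_expr
    type type_expr =
      | TVar of string | TInt
      | TChan of type_expr * use_expr * use_expr
      | TProd of type_expr * type_expr
      | TSum  of type_expr * type_expr

    type subst = {
      tmap : (string * type_expr) list;   (* type variables -> (finite) types *)
      umap : (string * use_const) list;   (* use variables  -> uses           *)
    }

    (* Combination of use constants: 0 is neutral, anything else gives ω. *)
    let comb u v = match u, v with Zero, w | w, Zero -> w | _ -> Omega

    let rec apply_use s = function
      | UVar r when List.mem_assoc r s.umap -> UConst (List.assoc r s.umap)
      | USum (u, v) ->
          (match apply_use s u, apply_use s v with
           | UConst a, UConst b -> UConst (comb a b)   (* compute when possible *)
           | u', v' -> USum (u', v'))
      | u -> u

    let rec apply_type s = function
      | TVar a when List.mem_assoc a s.tmap -> List.assoc a s.tmap
      | TChan (t, u, v) -> TChan (apply_type s t, apply_use s u, apply_use s v)
      | TProd (t1, t2)  -> TProd (apply_type s t1, apply_type s t2)
      | TSum  (t1, t2)  -> TSum  (apply_type s t1, apply_type s t2)
      | t -> t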
Definition 4.1 (solution, satisfiability, equivalence). A substitution σ is a solution for a
constraint set C if it covers all the T ∈ expr(C) and the following conditions hold:
• T =̂ S ∈ C implies σT = σS, and
• T =̂ S1 + S2 ∈ C implies σT = σS1 + σS2, and
• T ∼̂ S ∈ C implies σT ∼ σS, and
• U =̂ V ∈ C implies σU = σV.
We say that C is satisfiable if it has a solution and unsatisfiable otherwise. We say that
C1 and C2 are equivalent if they have the same solutions.
We can now state the correctness result for the type reconstruction algorithm:
Theorem 4.2. If P ◮ ∆; C and σ is a solution for C that covers ∆, then σ∆ ⊢ P .
Note that Theorem 4.2 not only requires σ to be a solution for C, but also that σ must
include suitable substitutions for all use and type variables occurring in ∆. Indeed, it may
happen that ∆ contains use/type variables not involved in any constraint in C, therefore a
solution for C does not necessarily cover ∆.
The reconstruction algorithm is also complete, in the sense that each type environment
Γ such that Γ ⊢ P can be obtained by applying a solution for C to ∆.
Theorem 4.3. If Γ ⊢ P , then there exist ∆, C, and σ such that P ◮ ∆; C and σ is a solution
for C that covers ∆ and Γ = σ∆.
Example 4.4. Below we illustrate the reconstruction algorithm at work on the process
new a in (a!3 | a?(x).idle)
which will be instrumental also in the following section:
  a : α1 ◮ a : α1; ∅
  3 : int ◮ ∅; ∅
  a!3 ◮ a : α1; {α1 =̂ [int]2̺1,1+̺2}                                     [i-out]
  a : α2 ◮ a : α2; ∅
  idle ◮ ∅; ∅                                                            [i-idle]
  idle ◮ x : γ; {un(γ)}                                                  [i-weak]
  a?(x).idle ◮ a : α2; {α2 =̂ [γ]1+̺3,2̺4, un(γ)}                         [i-in]
  a!3 | a?(x).idle ◮ a : α; {α =̂ α1 + α2, α1 =̂ [int]2̺1,1+̺2,
                             α2 =̂ [γ]1+̺3,2̺4, un(γ)}                    [i-par]
  new a in (a!3 | a?(x).idle) ◮ ∅; {α =̂ [δ]̺5,̺5, α =̂ α1 + α2, . . . }   [i-new]
The synthesized environment is empty, since the process has no free names, and the resulting
constraint set is
  {α =̂ [δ]̺5,̺5, α =̂ α1 + α2, α1 =̂ [int]2̺1,1+̺2, α2 =̂ [γ]1+̺3,2̺4, un(γ)}
Observe that a is used twice and each occurrence is assigned a distinct type variable αi .
Eventually, the reconstruction algorithm finds out that the same channel a is used simultaneously in different parts of the process, so it records the fact that the overall type α of a
must be the combination of α1 and α2 in the constraint α =̂ α1 + α2.
A solution for the obtained constraint set is the substitution

  {α ↦ [int]1,1, α1 ↦ [int]0,1, α2 ↦ [int]1,0, γ ↦ int, δ ↦ int, ̺1..4 ↦ 0, ̺5 ↦ 1}
confirming that a is a linear channel. This is not the only solution of the constraint set:
another one can be obtained by setting all the use variables to ω, although in this case a is
not recognized as a linear channel.
Note also that the application of [i-in] is possible only if the name x of the received
message occurs in the environment synthesized for the continuation process idle. Since the
continuation process contains no occurrence of x, this name can only be introduced using
[i-weak]. In general, [i-weak] is necessary to prove the completeness of the reconstruction
algorithm as stated in Theorem 4.3. For example, x : int ⊢ idle is derivable according
to the rules in Table 4, but as we have seen in the above derivation the reconstruction
algorithm without [i-weak] would synthesize for idle an empty environment, not containing
an association for x.
Example 4.5. We compute the constraint set of a simple process that accesses the same
composite structure containing linear values. The process in Example 2.1 is too large to be
discussed in full, so we consider the following, simpler process
fst(x)?(y).snd(x)!(y + 1)
which uses a pair x of channels and sends on the second channel in the pair the successor
of the number received from the first channel (we assume that the language and the type
reconstruction algorithm have been extended in the obvious way to support operations on
numbers such as addition). We derive
  x : α1 ◮ x : α1; ∅                                             [i-name]
  fst(x) : β1 ◮ x : α1; {α1 =̂ β1 × β2, un(β2)}                   [i-fst]

for the first projection of x and

  x : α2 ◮ x : α2; ∅                                             [i-name]
  snd(x) : γ2 ◮ x : α2; {α2 =̂ γ1 × γ2, un(γ1)}                   [i-snd]

for the second projection of x. For the output operation we derive

  y : δ ◮ y : δ; ∅                                               [i-name]
  1 : int ◮ ∅; ∅                                                 [i-int]
  y + 1 : int ◮ y : δ; {δ =̂ int}
  snd(x)!(y + 1) ◮ x : α2, y : δ; {α2 =̂ γ1 × γ2, un(γ1),
                                   γ2 =̂ [int]2̺3,1+̺4, δ =̂ int}  [i-out]

so for the whole process we obtain

  fst(x)?(y).snd(x)!(y + 1) ◮ x : α; {α =̂ α1 + α2, α1 =̂ β1 × β2, α2 =̂ γ1 × γ2,
                                      β1 =̂ [δ]1+̺1,2̺2, γ2 =̂ [int]2̺3,1+̺4,
                                      un(β2), un(γ1), δ =̂ int}   [i-in]
Like in Example 4.4, here too the variable x is used multiple times and each occurrence
is assigned a distinct type variable αi, but this time these type variables must be assigned
a pair type in order for the constraint set to be solved.
5. Constraint Solving
In this section we describe an algorithm that determines whether a given constraint set C is
satisfiable and, if this is the case, computes a solution for C. Among all possible solutions
for C, we strive to find one that allows us to identify as many linear channels as possible.
To this aim, it is convenient to recall the notion of solution preciseness from [10].
Definition 5.1 (solution preciseness). Let ≤ be the total order on uses such that 0 ≤ 1 ≤ ω.
Given two solutions σ1 and σ2 for a constraint set C, we say that σ1 is more precise than
σ2 if σ1 (̺) ≤ σ2 (̺) for every ̺ ∈ expr(C).
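Continuing with the hypothetical encoding of uses from the earlier sketches, the precision order 0 ≤ 1 ≤ ω and the pointwise comparison of two candidate use substitutions look as follows.

    (* Hypothetical sketch of the precision order on uses and the induced
       pointwise comparison of use substitutions (Definition 5.1). *)
    type use_const = Zero | One | Omega

    let rank = function Zero -> 0 | One -> 1 | Omega -> 2
    let leq u v = rank u <= rank v                    (* 0 ≤ 1 ≤ ω *)

    (* s1 is more precise than s2 on the given use variables if it assigns a
       smaller-or-equal use to every variable. *)
    let more_precise vars s1 s2 =
      List.for_all (fun r -> leq (s1 r) (s2 r)) vars

    let () =
      let s1 = function "r5" -> One | _ -> Zero in    (* e.g. {̺5 ↦ 1, others ↦ 0} *)
      let s2 = fun _ -> Omega in                      (* the trivial all-ω solution *)
      assert (more_precise [ "r1"; "r5" ] s1 s2)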
Roughly speaking, the preciseness of a solution is measured in terms of the numbers of
unused and linear channels it identifies, which are related to the number of use variables
assigned to 0 and 1. We will use Definition 5.1 as a guideline for developing our algorithm,
although the algorithm may be unable to find the most precise solution. There are two
reasons for this. First, there can be solutions with minimal use assignments that are incomparable according to Definition 5.1. This is related to the fact that the type system
presented in Section 3 lacks the principal typing property. Second, to ensure termination
when constraints concern infinite types, our algorithm makes some simplifying assumptions
that may – in principle – imply a loss of precision of the resulting solution (see Example 5.11). Despite this, experience with the implementation suggests that the algorithm is
indeed capable of identifying as many unused and linear channels as possible in practical
situations, even when infinite types are involved. Before embarking on the technical description of the algorithm, we survey the key issues and how we address them.
5.1. Overview. We begin by considering again the simple process below
new a in (a!3 | a?(x).idle)
(5.1)
for which we have shown the reconstruction algorithm at work in Example 4.4. The process
contains three occurrences of the channel a, two of them in subject position for input/output
operations and one binding occurrence in the new construct. We have seen that the constraint generation algorithm associates the two rightmost occurrences of a with two type
variables α1 and α2 that must respectively satisfy the constraints
  α1 =̂ [int]2̺1,1+̺2                                  (5.2)
  α2 =̂ [γ]1+̺3,2̺4                                    (5.3)

whereas the leftmost occurrence of a has a type α which must satisfy the constraints

  α =̂ α1 + α2                                         (5.4)
  α =̂ [δ]̺5,̺5                                        (5.5)
Even if none of these constraints concerns use variables directly, use variables are subject
to implicit constraints that should be taken into account for finding a precise solution. To
expose such implicit constraints, observe that in this first example we are in the fortunate
situation where the type variables α, α1 , and α2 occur on the left-hand side of a constraint
of the form β =̂ T where T is different from a type variable. In this case we say that β
is defined and we call T its definition. If we substitute each type variable in (5.4) with its
definition we obtain

  [δ]̺5,̺5 =̂ [int]2̺1,1+̺2 + [γ]1+̺3,2̺4
that reveals the relationships between the use variables. Knowing how type combination
operates (Definition 3.5), we can derive two constraints concerning use variables
  ̺5 =̂ 2̺1 + 1 + ̺3
  ̺5 =̂ 1 + ̺2 + 2̺4
for which it is easy to figure out a solution that includes the substitutions {̺1..4 ↦ 0, ̺5 ↦ 1}
(see Example 4.4). No substitution can be more precise than this one, hence this solution,
which identifies a as a linear channel, is in fact optimal.
Let us now consider the following variation of (5.1)
a!3 | a?(x).idle
where we have removed the restriction. In this case the generated constraints are the same
(5.2), (5.3), and (5.4) as above, except that there is no constraint (5.5) that provides a
definition for α. In a sense, α is defined because we know that it must be the combination
of α1 and α2 for which we do have definitions. However, in order to come up with a general
strategy for solving constraint sets, it is convenient to complete the constraint set with a
defining equation for α: we know that α must be a channel type with messages of type int,
because that is the shape of the definition for α1 , but we do not know precisely the overall
uses of α. Therefore, we generate a new constraint defining the structure of the type α, but
with fresh use variables ̺5 and ̺6 in place of the unknown uses:
  α =̂ [int]̺5,̺6
We can now proceed as before, by substituting all type variables in (5.4) with their
definition and deriving the use constraints below:
  ̺5 =̂ 2̺1 + 1 + ̺3
  ̺6 =̂ 1 + ̺2 + 2̺4
Note that, unlike in (5.1), we do not know whether ̺5 and ̺6 are required to be equal
or not. Here we are typing an open process which, in principle, may be composed in parallel
with other uses of the same channel a. Nonetheless, we can easily find a solution analogous
to the previous one but with the use assignments {̺1..4 ↦ 0, ̺5,6 ↦ 1}.
The idea of completing constraints with missing definitions is a fundamental ingredient
of our constraint solving technique. In the previous example, completion was somehow
superfluous because we could have obtained a definition for α by combining the definitions
of α1 and α2 , which were available. However, completion allowed us to patch the constraint
set so that it could be handled as in the previous case of process (5.1). In fact, it is easy to
find processes for which completion becomes essential. Consider for example
new a in (a!3 | b!a)
(5.6)
where the bound channel a is used once for an output and then extruded through a free
channel b. For this process, the reconstruction algorithm infers the type environment b : β
and the constraints below:
  α1 =̂ [int]2̺1,1+̺2
  β  =̂ [α2]2̺3,1+̺4
  α  =̂ α1 + α2
  α  =̂ [δ]̺5,̺5
where the three occurrences of a are associated from left to right with the type variables α,
α1 , and α2 (Section C gives the derivation for (5.6)). Note that there is no constraint that
defines α2 . In fact, there is just no constraint with α2 on the left hand side at all. The only
hint that we have concerning α2 is that it must yield α when combined with α1 . Therefore,
according to the definition of type combination, we can once more deduce that α2 shares
the same structure as α and α1 and we can complete the set of constraints with
  α2 =̂ [int]̺6,̺7
where ̺6 and ̺7 are fresh use variables.
After performing the usual substitutions, we can finally derive the use constraints
  ̺5 =̂ 2̺1 + ̺6
  ̺5 =̂ 1 + ̺2 + ̺7
for which we find a solution including the assignments {̺1..4,7 ↦ 0, ̺5,6 ↦ 1}. The interesting fact about this solution is the substitution ̺6 ↦ 1, meaning that the constraint solver
has inferred an input operation for the rightmost occurrence of a in (5.6), even though there
is no explicit evidence of this operation in the process itself. The input operation is deduced
“by subtraction”, seeing that a is used once in (5.6) for an output operation and knowing
that a restricted (linear) channel like a must also be used for a matching input operation.
Note also that this is not the only possible solution for the use constraints. If, for
example, it turns out that the extruded occurrence of a is never used (or is used twice) for
an input, it is possible to obtain various solutions that include the assignments {̺5,6 ↦ ω}.
However, the solution we have found above is the most precise according to Definition 5.1.
It is not always possible to find the most precise solution. This can be seen in the
following variation of (5.6)
new a in (a!3 | b!a | c!a)
(5.7)
where a is extruded twice, on b and on c (Section C gives the derivation). Here, as in
(5.6), an input use for a is deduced “by subtraction”, but there is an ambiguity as to
whether such input capability is transmitted through b or through c. Hence, there exist two
incomparable solutions for the constraint set generated for (5.7). The lack of an optimal
solution in general (hence of a principal typing) is a consequence of the condition imposing
equal uses for restricted channels (see [t-new] and [i-new]). Without this condition, it would
be possible to find the most precise solution for the constraints generated by (5.7) noticing
that a is never explicitly used for an input operation, and therefore its input use could be
0. We think that this approach hinders the applicability of the reconstruction algorithm
in practice, where separate compilation and type reconstruction of large programs are real
concerns. We will elaborate more on this in Example 7.3. For the time being, let us analyze
one last example showing a feature that we do not handle in our type system, namely
polymorphism. The process
a?(x).b!x
(5.8)
models a forwarder that receives a message x from a and sends it on b. For this process the
constraint generation algorithm yields the environment a : α, b : β and the constraints
  α =̂ [γ]1+̺1,2̺2
  β =̂ [γ]2̺3,1+̺4
(Section C gives the complete derivation). In particular, there is no constraint concerning
the type variable γ and for good reasons: since the message x is only passed around in (5.8)
  [c-axiom]   ────────────────
              C ∪ {ϕ} ⊩ ϕ

  [c-refl]    T ∈ expr(C)
              ──────────────
              C ⊩ T R̂ T

  [c-symm]    C ⊩ T R̂ S
              ──────────────
              C ⊩ S R̂ T

  [c-trans]   C ⊩ T R̂ T′    C ⊩ T′ R̂ S
              ─────────────────────────────
              C ⊩ T R̂ S

  [c-coh 1]   C ⊩ T =̂ S
              ──────────────
              C ⊩ T ∼̂ S

  [c-coh 2]   C ⊩ T =̂ S1 + S2
              ────────────────────    i ∈ {1,2}
              C ⊩ T ∼̂ Si

  [c-cong 1]  C ⊩ [T]U1,U2 ∼̂ [S]V1,V2
              ──────────────────────────
              C ⊩ T =̂ S

  [c-cong 2]  C ⊩ T1 ⊙ T2 R̂ S1 ⊙ S2
              ──────────────────────────    i ∈ {1,2}
              C ⊩ Ti R̂ Si

  [c-cong 3]  C ⊩ T1 ⊙ T2 =̂ S1 ⊙ S2 + S3 ⊙ S4
              ──────────────────────────────────    i ∈ {1,2}
              C ⊩ Ti =̂ Si + Si+2

  [c-subst]   C ⊩ T1 =̂ T2 + T3    C ⊩ Ti =̂ Si  (1 ≤ i ≤ 3)
              ─────────────────────────────────────────────────
              C ⊩ S1 =̂ S2 + S3

  [c-use 1]   C ⊩ [T]U1,U2 =̂ [S]V1,V2
              ──────────────────────────    i ∈ {1,2}
              C ⊩ Ui =̂ Vi

  [c-use 2]   C ⊩ [T]U1,U2 =̂ [S1]V1,V2 + [S2]V3,V4
              ───────────────────────────────────────    i ∈ {1,2}
              C ⊩ Ui =̂ Vi + Vi+2

Table 8: Constraint deduction system.
but never actually used, the channels a and b should be considered polymorphic. Note that
in this case we know nothing about the structure of γ hence completion of the constraint
set is not applicable. In this work we do not deal with polymorphism and will refrain from
solving sets of constraints where there is no (structural) information for unconstrained type
variables. Just observe that handling polymorphism is not simply a matter of allowing
(universally quantified) type variables in types. For example, a type variable involved in a
constraint α =̂ α + α does not have any structural information and therefore is polymorphic,
but can only be instantiated with unlimited types. The implementation has a defaulting
mechanism that forces unconstrained type variables to a base type.
We now formalize the ideas presented so far into an algorithm, for which we have already
identified the key phases: the ability to recognize types that “share the same structure”,
which we call structurally coherent (Definition 3.6); the completion of a set of constraints
with “missing definitions” so that each type variable has a proper definition; the derivation
and solution of use constraints. Let us proceed in order.
5.2. Verification. In Section 5.1 we have seen that some constraints can be derived from
the ones produced during the constraint generation phase (Section 4). We now define a
deduction system that, starting from a given constraint set C, computes all the “derivable
facts” about the types in expr(C). Such deduction system is presented as a set of inference
rules in Table 8, where R ranges over the symbols = and ∼. Each rule derives a judgment of
the form C ⊩ ϕ meaning that the constraint ϕ is derivable from those in C (Proposition 5.2
below formalizes this property). Rule [c-axiom] simply takes each constraint in C as an
axiom. Rules [c-refl], [c-symm], and [c-trans] state the obvious reflexivity, symmetry, and
transitivity of = and ∼. Rules [c-coh 1] and [c-coh 2] deduce coherence relations: equality
implies coherence, for = ⊆ ∼, and each component of a combination is coherent to the combination itself (and therefore, by transitivity, to the other component). Rules [c-cong 1]
through [c-cong 3] state congruence properties of = and ∼ which follow directly from
Definition 3.5: when two channel types are coherent, their message types must be equal;
corresponding components of R-related composite types are R-related. Rule [c-subst] allows the substitution of equal types in combinations. Finally, [c-use 1] and [c-use 2] allow
us to deduce use constraints of the form U =̂ V involving use variables. Both rules are
self-explanatory and follow directly from Definition 3.5.
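Since the rules of Table 8 never introduce new type or use expressions, the derivable constraints can be computed by plain saturation. The following OCaml skeleton is a hypothetical sketch: the step function, which would implement one round of applications of the rules of Table 8, is left abstract, and termination rests on the fact that only finitely many constraints over expr(C) exist.

    (* Hypothetical saturation skeleton over an abstract constraint type 'c.
       'step' is assumed to return the constraints derivable from the current
       set by one application of the rules of Table 8; since expr(C) is finite,
       only finitely many constraints can ever be produced, so the loop stops. *)
    let saturate (step : 'c list -> 'c list) (c0 : 'c list) : 'c list =
      let rec loop cs =
        let fresh = List.filter (fun x -> not (List.mem x cs)) (step cs) in
        if fresh = [] then cs else loop (fresh @ cs)
      in
      loop c0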
We state two important properties of this deduction system:
Proposition 5.2. Let C ⊩ ϕ. The following properties hold:
(1) C and C ∪ {ϕ} are equivalent (see Definition 4.1).
(2) expr(C) = expr(C ∪ {ϕ}).

Proof. A simple induction on the derivation of C ⊩ ϕ.
The first property confirms that all the derivable relations are already encoded in the
original constraint set, in a possibly implicit form. The deduction system makes them
explicit. The second property assures us that no new type expressions are introduced by
the deduction system. Since the inference rules in Section 4 always generate finite constraint
sets, this implies that the set of all derivable constraints is also finite and can be computed
in finite time. This is important because the presence or absence of particular constraints
determines the (un)satisfiability of a constraint set:
Proposition 5.3. If C ⊩ T ∼̂ S where T and S are proper type expressions with different
topmost constructors, then C has no solution.

Proof. Suppose by contradiction that σ is a solution for C. By Proposition 5.2(1) we have
that σ is also a solution for C ∪ {T ∼̂ S}. This is absurd, for if T and S have different
topmost constructors, then so do σT and σS, hence σT ≁ σS.
The converse of Proposition 5.3 is not true in general. For example, the constraint set
{[int]0,1 =̂ [int]1,0} has no solution because of the implicit constraints on corresponding
uses, and yet no coherence constraint between proper type expressions with different topmost
constructors can be derived from it. However, when C is a constraint
set generated by the inference rules in Section 4, the converse of Proposition 5.3 holds.
This means that we can use structural coherence as a necessary and sufficient condition for
establishing the satisfiability of constraint sets generated by the reconstruction algorithm.
Before proving this fact we introduce some useful notation. For R ∈ {=, ∼} let
  RC ≝ {(T, S) | C ⊩ T R̂ S}
and observe that RC is an equivalence relation on expr(C) by construction, because of the
rules [c-refl], [c-symm], and [c-trans]. Therefore, it partitions the type expressions in C
into R-equivalence classes. Now, we need some way to choose, from each R-equivalence
class, one representative element of the class. To this aim, we fix a total order ⊑ between
type expressions such that T ⊑ α for every proper T and every α and we define:1
1In a Haskell or OCaml implementation such total order could be, for instance, the one automatically
defined for the algebraic data type that represents type expressions and where the value constructor representing type variables is the last one in the data type definition.
Definition 5.4 (canonical representative). Let crepR (C, T) be the ⊑-least type expression
S such that T RC S. We say that crepR (C, T) is the canonical representative of T with
respect to the relation RC .
Note that, depending on ⊑, we may have different definitions of crepR (C, T). The exact
choice of the canonical representative does not affect the ability of the algorithm to compute
a solution for a constraint set (Theorem 5.12) although – in principle – it may affect the
precision of the solution (Example 5.11). Note also that, because of the assumption we have
made on the total order ⊑, crepR (C, T) is proper whenever T is proper or when T is some
type variable α such that there is a “definition” for α in C. In fact, it is now time to define
precisely the notion of defined and undefined type variables:
Definition 5.5 (defined and undefined type variables). Let
  defR(C)   ≝ {α ∈ expr(C) | crepR(C, α) is proper}
  undefR(C) ≝ {α ∈ expr(C) \ defR(C)}
We say that α is R-defined or R-undefined in C depending on whether α ∈ defR(C) or α ∈ undefR(C).
We can now prove that the coherence check is also a sufficient condition for satisfiability.
Proposition 5.6. Let P ◮ ∆; C. If C ⊩ T ∼̂ S where T and S are proper type expressions
implies that T and S have the same topmost constructor, then C has a solution.

Proof. We only sketch the proof, since we will prove a more general result later on (see
Theorem 5.12). Consider the use substitution σuse ≝ {̺ ↦ ω | ̺ ∈ expr(C)} mapping all
use variables in C to ω, let Σ be the system of equations {αi = Ti | 1 ≤ i ≤ n} defined by

  Σ ≝ {α = σuse crep∼(C, α) | α ∈ def∼(C)} ∪ {α = int | α ∈ undef∼(C)}

and observe that every Ti is a proper type expression. From Theorem 3.2 we know that
Σ has a unique solution σtype = {αi ↦ ti | 1 ≤ i ≤ n} such that ti = σtype Ti for every
1 ≤ i ≤ n. It only remains to show that σuse ∪ σtype is a solution for C. This follows from
two facts: (1) from the hypothesis P ◮ ∆; C we know that all channel types in C have one
use variable in each of their use slots, hence the substitution σuse forces all uses to ω; (2)
from the hypothesis and the rules [c-cong *] we know that all proper type expressions in
the same (∼)-equivalence class have the same topmost constructor.
In Proposition 5.6, for finding a substitution for all the type variables in C, we default
each type variable in undef ∼ (C) to int. This substitution is necessary in order to satisfy
the constraints un(α), namely those of the form α =̂ α + α, when α ∈ undef∼(C). These
α’s are the “polymorphic type variables” that we have already discussed earlier. Since we
leave polymorphism for future work, in the rest of this section we make the assumption that
undef ∼ (C) = ∅, namely that all type variables are (∼)-defined.
Example 5.7. Below is a summary of the constraint set C generated in Example 4.5:
  α  =̂ α1 + α2
  δ  =̂ int
  α1 =̂ β1 × β2
  α2 =̂ γ1 × γ2
  β1 =̂ [δ]1+̺1,2̺2
  β2 =̂ β2 + β2
  γ1 =̂ γ1 + γ1
  γ2 =̂ [int]2̺3,1+̺4
Note that {α, β2 , γ1 } ⊆ def ∼ (C) \ def = (C). In particular, they all have a proper canonical
representative, which we may assume to be the following ones:
  crep∼(C, α) = crep∼(C, α1) = crep∼(C, α2) = β1 × β2
  crep∼(C, β1) = crep∼(C, γ1)               = [δ]1+̺1,2̺2
  crep∼(C, β2) = crep∼(C, γ2)               = [int]2̺3,1+̺4
  crep∼(C, δ)                               = int
It is immediate to verify that the condition of Proposition 5.6 holds, hence we conclude that
C is satisfiable. Indeed, a solution for C is
  {α, α1,2 ↦ [int]ω,ω × [int]ω,ω, β1,2, γ1,2 ↦ [int]ω,ω, δ ↦ int, ̺1..4 ↦ ω}
even though we will find a more precise solution in Example 5.13.
5.3. Constraint set completion. If the satisfiability of the constraint set is established
(Proposition 5.6), the subsequent step is its completion in such a way that every type
variable α has a definition in the form of a constraint α =̂ T where T is proper. Recall that
this step is instrumental for discovering all the (implicit) use constraints.
In Example 5.7 we have seen that some type variables may be (∼)-defined but (=)-undefined. The ∼ relation provides information about the structure of the type that should
be assigned to the type variable, but says nothing about the uses in them. Hence, the main
task of completion is the creation of fresh use variables for those channel types of which
only the structure is known. In the process, fresh type variables need to be created as well,
and we should make sure that all such type variables are (=)-defined to guarantee that
completion eventually terminates. We will be able to do this, possibly at the cost of some
precision of the resulting solution.
We begin the formalization of completion by introducing an injective function t that,
given a pair of type variables α and β, creates a new type variable t(α, β). We assume that
t(α, β) is different from any type variable generated by the algorithm in Section 4 so that
the type variables obtained through t are effectively fresh. Then we define an instantiation
function instance that, given a type variable α and a type expression T, produces a new type
expression that is structurally coherent to T, but where all use expressions and type variables
have been respectively replaced by fresh use and type variables. The first argument α of
instance records the fact that such instantiation is necessary for completing α. Formally:
  instance(α, T) ≝  t(α, β)                               if T = β
                    int                                   if T = int
                    [S]̺1,̺2                              if T = [S]U,V, ̺i fresh       (5.9)
                    instance(α, T1) ⊙ instance(α, T2)     if T = T1 ⊙ T2
All the equations but the first one are easily explained: the instance of int cannot
be anything but int itself; the instance of a channel type [S]U,V is the type expression
[S]̺1 ,̺2 where we generate two fresh use variables corresponding to U and V; the instance
of a composite type T ⊙ S is the composition of the instances of T and S. For example, we
have
instance(α, β × [[int]U1 ,U2 ]V1 ,V2 ) = t(α, β) × [[int]U1 ,U2 ]̺1 ,̺2
where ̺1 and ̺2 are fresh. Note that, while instantiating a channel type [S]U,V , there is no
need to instantiate S because [t]κ1 ,κ2 ∼ [s]κ3 ,κ4 implies t = s so S is exactly the message
type we must use in the instance of [S]U,V .
Concerning the first equation in (5.9), in principle we want instance(α, β) to be the same
as instance(α, crep∼ (C, β)), but doing so directly would lead to an ill-founded definition for
instance, since nothing prevents β from occurring in crep∼ (C, β) (types can be infinite). We
therefore instantiate β to a new type variable t(α, β) which will in turn be defined by a new
constraint t(α, β) =̂ instance(α, crep∼(C, β)).
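A direct, hypothetical transcription of (5.9) in OCaml (our own names; t(α, β) is encoded by pairing the two variable names and fresh use variables come from a counter) is sketched below; note that the message type of a channel is left untouched, exactly as in the definition above.

    (* Hypothetical sketch of the instance function (5.9). *)
    type use_expr = UVar of string | UOne | USum of use_expr * use_expr
    type type_expr =
      | TVar  of string
      | TInt
      | TChan of type_expr * use_expr * use_expr
      | TProd of type_expr * type_expr
      | TSum  of type_expr * type_expr

    let counter = ref 0
    let fresh_uvar () = incr counter; UVar (Printf.sprintf "r%d" !counter)

    (* t(α, β): an injective map from pairs of type variables to fresh ones. *)
    let t alpha beta = TVar (alpha ^ "@" ^ beta)

    let rec instance alpha = function
      | TVar beta       -> t alpha beta
      | TInt            -> TInt
      | TChan (s, _, _) -> TChan (s, fresh_uvar (), fresh_uvar ())  (* uses replaced, message kept *)
      | TProd (t1, t2)  -> TProd (instance alpha t1, instance alpha t2)
      | TSum  (t1, t2)  -> TSum  (instance alpha t1, instance alpha t2)

    (* instance(α, β × [[int]U1,U2]V1,V2) = t(α, β) × [[int]U1,U2]̺1,̺2 *)
    let _ = instance "a" (TProd (TVar "b", TChan (TChan (TInt, UOne, UOne), UOne, UOne)))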
There are a couple of subtleties concerning the definition of instance. The first one is
that, strictly speaking, instance is a relation rather than a function because the fresh use
variables in (5.9) are not uniquely determined. In practice, instance can be turned into a
proper function by devising a deterministic mechanism that picks fresh use variables in a
way similar to the t function that we have defined above. The formal details are tedious
but well understood, so we consider the definition of instance above satisfactory as is. The
second subtlety is way more serious and has to do with the instantiation of type variables
(first equation in (5.9)) which hides a potential approximation due to this completion phase.
To illustrate the issue, suppose that
  α ∼̂ [int]U,V × α                                       (5.10)
is the only constraint concerning α in some constraint set C so that we need to provide a
(=)-definition for α. According to (5.9) we have
instance(α, [int]U,V × α) = [int]̺1 ,̺2 × t(α, α)
so by adding the constraints
  α =̂ t(α, α)    and    t(α, α) =̂ [int]̺1,̺2 × t(α, α)   (5.11)
we complete the definition for α. There is a fundamental difference between the constraint
(5.10) and those in (5.11) in that the former admits far more solutions than those admitted
by (5.11). For example, the constraint (5.10) can be satisfied by a solution that contains the
assignment α ↦ t where t = [int]1,0 × [int]0,1 × t, but the constraint (5.11) cannot. The
problem of a constraint like (5.10) is that, when we only have structural information about
a type variable, we have no clue about the uses in its definition, if they follow a pattern,
and what the pattern is. In principle, in order to account for all the possibilities, we should
generate fresh use variables in place of any use slot in the possibly infinite type. In practice,
however, we want completion to eventually terminate, and the definition of instance given
by (5.9) is one easy way to ensure this: what we are saying there is that each type variable
β that contributes to the definition of a (=)-undefined type variable α is instantiated only
once. This trivially guarantees completion termination, for there is only a finite number of
distinct variables to be instantiated. The price we pay with this definition of instance is a
potential loss of precision in the solution of use constraints. We say “potential” because we
have been unable to identify a concrete example that exhibits such loss of precision. Part of
the difficulty of this exercise is due to the fact that the effects of the approximation on the
solution of use constraints may depend on the particular choice of canonical representatives,
which is an implementation detail of the constraint solver (see Definition 5.4). In part, the
effects of the approximation are limited to peculiar situations:
(1) There is only a fraction of constraint sets where the same type variable occurring in
several different positions must be instantiated, namely those having as solution types
with infinite branches containing only finitely many channel type constructors. The
constraint (5.10) is one such example. In all the other cases, the given definition of
instance does not involve any approximation.
(2) A significant fraction of the type variables for which only structural information is known
are those generated by the rules [i-fst], [i-snd], and [i-weak]. These type variables
stand for unlimited types, namely for types whose uses are either 0 or ω. In fact, in
most cases all the uses in these unlimited types are 0. Therefore, the fact that only
a handful of fresh use variables is created, instead of infinitely many, does not cause
any approximation at all, since the use variables in these type expressions would all be
instantiated to 0 anyway.
We define the completion of a constraint set C as the least superset of C where all the
(=)-undefined type variables in C have been properly instantiated:
Definition 5.8 (completion). The completion of C, written C̄, is the least set such that:
(1) C ⊆ C̄;
(2) α ∈ undef=(C) implies α =̂ t(α, α) ∈ C̄;
(3) t(α, β) ∈ expr(C̄) implies t(α, β) =̂ instance(α, crep∼(C, β)) ∈ C̄.
The completion C̄ of a finite constraint set C can always be computed in finite time,
as the number of necessary instantiations is bounded by the square of the cardinality of
undef=(C). Because of the approximation of instances for undefined variables, C and C̄ are
not equivalent in general (see Example 5.11 below). However, the introduction of instances
does not affect the satisfiability of the set of constraints.
Proposition 5.9. The following properties hold:
(1) If C is satisfiable, then C̄ is satisfiable.
(2) If σ is a solution for C̄, then σ is also a solution for C.

Proof. Each (∼)-equivalence class in C̄ contains exactly one (∼)-equivalence class in C, since
each new type expression that has been introduced in C̄ is structurally coherent to an
existing type expression in C. Then item (1) is a consequence of Proposition 5.6, while
item (2) follows from the fact that C ⊆ C̄.
Example 5.10. Considering the constraint set C in Example 5.7, we have three type variables requiring instantiation, namely α, β2 , and γ1 . According to Definition 5.8, and using
the same canonical representatives mentioned in Example 5.7, we augment the constraint
set with the constraints
  α  =̂ t(α, α)
  β2 =̂ t(β2, β2)
  γ1 =̂ t(γ1, γ1)

  t(α, α)   =̂ instance(α, β1 × β2)         = t(α, β1) × t(α, β2)
  t(α, β1)  =̂ instance(α, [δ]1+̺1,2̺2)     = [δ]̺5,̺6
  t(α, β2)  =̂ instance(α, [int]2̺3,1+̺4)   = [int]̺7,̺8
  t(β2, β2) =̂ instance(β2, [int]2̺3,1+̺4)  = [int]̺9,̺10
  t(γ1, γ1) =̂ instance(γ1, [δ]1+̺1,2̺2)    = [δ]̺11,̺12
where the ̺i with i ≥ 5 are all fresh.
Observe that the canonical (∼)-representative of β2 is instantiated twice, once for defining α and once for defining β2 itself. We will see in Example 5.13 that this double instantiation is key for inferring that snd(x) in Example 4.5 is used linearly.
Example 5.11. In this example we show the potential effects of instantiation on the ability
of the type reconstruction algorithm to identify linear channels. To this aim, consider the
following constraint set
  α ∼̂ [int]U,V × α
  β =̂ [int]0,1+̺1 × [int]0,2̺2 × β
  γ =̂ [int]0,0 × [int]0,0 × γ
  α =̂ β + γ
where, to limit the number of use variables without defeating the purpose of the example,
we write the constant use 0 in a few use slots. Observe that this constraint set admits
the solution {α ↦ t, β ↦ t, γ ↦ s, ̺1,2 ↦ 0} where t and s are the types that satisfy the
equalities t = [int]0,1 × [int]0,0 × t and s = [int]0,0 × s. Yet, if we instantiate α following
the procedure outlined above we obtain the constraints
  α =̂ t(α, α)    and    t(α, α) =̂ [int]̺3,̺4 × t(α, α)
and now the two constraints below follow by the congruence rule [c-cong *]:
  [int]̺3,̺4 =̂ [int]0,1+̺1 + [int]0,0
  [int]̺3,̺4 =̂ [int]0,2̺2 + [int]0,0
This implies that the use variable ̺4 must simultaneously satisfy the constraints
  ̺4 =̂ 1 + ̺1    and    ̺4 =̂ 2̺2
which is only possible if we assign ̺1 and ̺2 to a use other than 0 and ̺4 to ω. In other
words, after completion the only feasible solutions for the constraint set above have the form
{α ↦ t′, β ↦ t′, γ ↦ s, ̺1,2 ↦ κ, ̺3 ↦ 0, ̺4 ↦ ω} for 1 ≤ κ where t′ = [int]0,ω × t′, which
are less precise than the one that we could figure out before the instantiation: t denotes an
infinite tuple of channels in which those in odd-indexed positions are used for performing
exactly one output operation; t′ denotes an infinite tuple of channels, each being used for
an unspecified number of output operations.
5.4. Solution synthesis. In this phase, substitutions are found for all the use and type
variables that occur in a (completed) constraint set. We have already seen that it is always
possible to consider a trivial use substitution that assigns each use variable to ω. In this
phase, however, we have all the information for finding a use substitution that, albeit not
necessarily optimal because of the approximation during the completion phase, is minimal
according to the ≤ precision order on uses of Definition 5.1.
The first step for computing a use substitution is to collect the whole set of constraints concerning use expressions. This is done by repeatedly applying the rules [c-use 1]
and [c-use 2] shown in Table 8. Note that the set of derivable use constraints is finite and
can be computed in finite time because C is finite. Also, we are sure to derive all possible
use constraints if we apply these two rules to a completed constraint set.
Once use constraints have been determined, any particular substitution for use variables
can be found by means of an exhaustive search over all the possible substitutions: the
number of such substitutions is finite because the number of use variables is finite and so is
the domain {0, 1, ω} on which they range. Clearly this brute force approach is not practical
in general and in Section 6 we will discuss two techniques that reduce the search space
for use substitutions. The main result of this section is independent of the particular use
substitution σuse that has been identified.
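For reference, the brute-force search can be sketched in OCaml as follows (hypothetical code with our own names): use constraints are pairs of use expressions, candidate assignments enumerate {0, 1, ω} for every use variable, and among the satisfying assignments we keep one of minimal total weight, a simple proxy for the precision order of Definition 5.1.

    (* Hypothetical brute-force solver for use constraints U =̂ V.
       Impractical in general, but a correct baseline. *)
    type use_const = Zero | One | Omega
    type use_expr = UVar of string | UConst of use_const | USum of use_expr * use_expr

    (* Combination of uses: 0 is neutral, anything else yields ω. *)
    let comb u v = match u, v with Zero, w | w, Zero -> w | _ -> Omega

    let rec eval env = function
      | UVar r      -> List.assoc r env      (* env must bind every use variable *)
      | UConst k    -> k
      | USum (u, v) -> comb (eval env u) (eval env v)

    let satisfies env constraints =
      List.for_all (fun (u, v) -> eval env u = eval env v) constraints

    (* All assignments of {0,1,ω} to the given use variables. *)
    let rec assignments = function
      | [] -> [ [] ]
      | r :: rest ->
          let tails = assignments rest in
          List.concat_map (fun k -> List.map (fun env -> (r, k) :: env) tails)
            [ Zero; One; Omega ]

    let rank = function Zero -> 0 | One -> 1 | Omega -> 2
    let weight env = List.fold_left (fun acc (_, k) -> acc + rank k) 0 env

    (* Return a satisfying assignment of minimal total weight, if any;
       vars is assumed to list every use variable occurring in constraints. *)
    let solve vars constraints =
      match List.filter (fun env -> satisfies env constraints) (assignments vars) with
      | [] -> None
      | s :: rest ->
          Some (List.fold_left (fun best e -> if weight e < weight best then e else best) s rest)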
Theorem 5.12 (correctness of the constraint solving algorithm). Let P ◮ ∆; C. If
(1) C ⊩ T ∼̂ S where T and S are proper type expressions implies that T and S have the
    same topmost constructor, and
(2) σuse is a solution of the use constraints of C, and
(3) σtype is the solution of the system Σ ≝ {α = σuse crep=(C, α) | α ∈ expr(C)},
then σuse ∪ σtype is a solution for C.
def
Proof. Let σ = σuse ∪ σtype . We have to prove the implications of Definition 4.1 for C. We
focus on constraints of the form T =
ˆ S1 + S2 , the other constraints being simpler and/or
handled in a similar way.
def
ˆ S1 + S2 }. It is enough to show that R satisfies
Let R = {((σS1 , σS2 ), σT) | C T =
the conditions of Definition 3.5, since type combination is the largest relation that satisfies
those same conditions. Suppose ((s1 , s2 ), t) ∈ R. Then there exist T, S1 , and S2 such that
C T=
ˆ S1 +S2 and t = σT and si = σSi for i = 1, 2. Without loss of generality, we may also
assume that T, S1 , and S2 are proper type expressions. Indeed, suppose that this is not the
ˆ S1 + S2
case and, for instance, T = α. Then, from [c-subst] we have that C crep= (C, α) =
and, since σ is a solution of Σ, we know that σ(α) = σcrep= (C, α). Therefore, the same
pair ((s1 , s2 ), t) ∈ R can also be obtained from the triple (crep= (C, α), S1 , S2 ) whose first
component is proper. The same argument applies for S1 and S2 .
Now we reason by cases on the structure of T, S1 , and S2 , knowing that all these type
expressions have the same topmost constructor from hypothesis (1) and [c-coh 2]:
• If T = S1 = S2 = int, then condition (1) of Definition 3.5 is satisfied.
• If T = [T′ ]U1 ,U2 and Si = [S′i ]V2i−1 ,V2i for i = 1, 2, then from [c-coh 2] and [c-cong 1]
we deduce C  T′ =̂ S′i and from [c-use 2] we deduce C  Ui =̂ Vi + Vi+2 for i = 1, 2.
Since σ is a solution for the equality constraints in C, we deduce σT′ = σS1 = σS2 . Since
σ is a solution for the use constraints in C, we conclude σUi = σVi + σVi+2 for i = 1, 2.
Hence, condition (2) of Definition 3.5 is satisfied.
• If T = T1 ⊙ T2 and Si = Si1 ⊙ Si2 , then from [c-cong 3] we deduce C  Ti =̂ Si1 + Si2 for
i = 1, 2. We conclude ((σSi1 , σSi2 ), σTi ) ∈ R by definition of R, hence condition (3) of
Definition 3.5 is satisfied.
Note that the statement of Theorem 5.12 embeds the constraint solving algorithm, which
includes a verification phase (item (1)), a constraint completion phase along with an (unspecified, but effective) computation of a solution for the use constraints (item (2)), and
the computation of a solution for the original constraint set in the form of a finite system
of equations (item (3)). The conclusion of the theorem states that the algorithm is correct.
Example 5.13. There are three combination constraints in the set C obtained in Example 5.10, namely α =̂ α1 + α2 , β2 =̂ β2 + β2 , and γ1 =̂ γ1 + γ1 . By performing suitable substitutions with [c-subst] we obtain

    C  α =̂ α1 + α2                                       [c-axiom]
    ==============================================        [c-subst] (multiple applications)
    C  t(α, β1 ) × t(α, β2 ) =̂ β1 × β2 + γ1 × γ2
    ----------------------------------------------        [c-cong 3]
    C  t(α, βi ) =̂ βi + γi
from which we can further derive
    ⋮
    C  t(α, β1 ) =̂ β1 + γ1
    ==============================================        [c-subst] (multiple applications)
    C  [δ]̺5,̺6 =̂ [δ]1+̺1,2̺2 + [δ]̺11,̺12
as well as
    ⋮
    C  t(α, β2 ) =̂ β2 + γ2
    ==============================================        [c-subst] (multiple applications)
    C  [δ]̺7,̺8 =̂ [δ]̺9,̺10 + [δ]2̺3,1+̺4
Analogous derivations can be found starting from β2 =̂ β2 + β2 and γ1 =̂ γ1 + γ1 . At this point, using [c-use 2], we derive the following set of use constraints:

    ̺5 =̂ 1 + ̺1 + ̺11        ̺11 =̂ 2̺11
    ̺6 =̂ 2̺2 + ̺12           ̺12 =̂ 2̺12
    ̺7 =̂ ̺9 + 2̺3            ̺9  =̂ 2̺9
    ̺8 =̂ ̺10 + 1 + ̺4        ̺10 =̂ 2̺10

for which we find the most precise solution {̺1..4,6,7,9..12 ↦ 0, ̺5,8 ↦ 1}.
From this set of use constraints we can also appreciate the increased accuracy deriving
from distinguishing the instance t(α, β2 ) of the type variable β2 used for defining α from
the instance t(β2 , β2 ) of the same type variable β2 for defining β2 itself. Had we chosen to
generate a unique instance of β2 , which is equivalent to saying that ̺8 and ̺10 are the same
use variable, we would be required to satisfy the use constraint
̺10 + 1 + ̺4 =̂ 2̺10
which is only possible if we take ̺8 = ̺10 = ω. But this assignment fails to recognize that
snd(x) is used linearly in the process of Example 4.5.
6. Implementation
In this section we cover a few practical aspects concerning the implementation of the type
reconstruction algorithm.
6.1. Derived constraints. The verification phase of the solver algorithm requires finding
all the constraints of the form T ∼̂ S that are derivable from a given constraint set C. Doing
so allows the algorithm to determine whether C is satisfiable or not (Proposition 5.6). In
principle, then, one should compute the whole set of constraints derivable from C. The
particular nature of the ∼ relation enables a more efficient way of handling this phase.
The key observation is that there is no need to ever perform substitutions (with the rule
[c-subst]) in order to find all the ∼̂ constraints. This is because [c-coh 2] allows one to
relate the type expressions in a combination, since they must all be structurally coherent
and ∼ is insensitive to the actual content of the use slots in channel types. This means
that all ∼̂ constraints can be computed efficiently using conventional unification techniques
(ignoring the content of use slots). In fact, the implementation uses unification also for
the constraints of the form T =̂ S. Once all the =̂ constraints have been found and the
constraint set has been completed, substitutions in constraints expressing combinations can
be performed efficiently by mapping each type variable to its canonical representative.
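To make this observation concrete, here is a rough sketch, under simplifying assumptions, of how structural coherence can be checked: ordinary first-order unification over finite type expressions in which the use slots of channel types are simply ignored. The type representation and function names below are illustrative only and are not taken from the prototype, which in particular also handles regular (possibly infinite) types.

```python
class Var:
    """A type variable with an optional link to the type it has been unified with."""
    def __init__(self, name: str):
        self.name = name
        self.link = None

def find(t):
    # follow variable links to the current representative
    while isinstance(t, Var) and t.link is not None:
        t = t.link
    return t

def coherent(t, s) -> bool:
    """Unify t and s structurally, ignoring the use slots of channel types."""
    t, s = find(t), find(s)
    if t is s:
        return True
    if isinstance(t, Var):
        t.link = s
        return True
    if isinstance(s, Var):
        s.link = t
        return True
    if t == "int" or s == "int":
        return t == s
    if t[0] != s[0]:
        return False                      # different topmost constructors
    if t[0] == "chan":                    # ("chan", payload, use_in, use_out)
        return coherent(t[1], s[1])       # use slots deliberately ignored
    return coherent(t[1], s[1]) and coherent(t[2], s[2])   # "pair" or "sum"

alpha = Var("alpha")
print(coherent(("chan", "int", 0, 1), ("chan", alpha, 1, 0)))   # True, alpha bound to int
print(coherent(("pair", "int", "int"), ("chan", "int", 0, 0)))  # False
```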
6.2. Use constraints resolution. In Section 5 we have refrained from providing any detail
about how use constraints are solved and argued that a particular use substitution can
always be found given that both the set of constraints and the domain of use variables are
finite. While this argument suffices for establishing the decidability of this crucial phase of
the reconstruction algorithm, a naïve solver based on an exhaustive search of all the use
substitutions would be unusable, since the number of use variables is typically large, even in
small processes. Incidentally, note that completion contributes significantly to this number,
since it generates fresh use variables for all the instantiated channel types.
There are two simple yet effective strategies that can be used for speeding up the search
of a particular use substitution (both have been implemented in the prototype). The first
strategy is based on the observation that, although the set of use variables can be large, it
can often be partitioned into many independent subsets. Finding partitions is easy: two
variables ̺1 and ̺2 are related in C if C  U =̂ V and ̺1 , ̺2 occur in U =̂ V (regardless of
where ̺1 and ̺2 occur exactly). The dependencies between variables induce a partitioning
of the use constraints such that the use variables occurring in the constraints of a partition
are all related among them, and are not related with any other use variable occurring in
a use constraint outside the partition. Once the partitioning of use constraints has been
determined, each partition can be solved independently of the others.
The second strategy is based on the observation that many use constraints have the form
̺ =̂ U where ̺ does not occur in U. In this case, the value of ̺ is in fact determined by U.
So, U can be substituted for all the occurrences of ̺ in the set of use constraints; once a substitution has been found for the remaining use variables, the substitution for ̺ is determined by simply evaluating U under that substitution.
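The following sketch illustrates both strategies on the use constraints of Example 5.13. It is not the prototype's code: the constraint representation, the encoding of ω, and the ranking used to pick a "most precise" assignment (preferring as few ω's as possible) are assumptions made for the sake of the example.

```python
from itertools import product

OMEGA = 2                      # uses are encoded as 0, 1 and 2 (standing for ω)

def use_add(a, b):
    # assumed use arithmetic: addition saturates at ω, e.g. 1 + 1 = ω
    s = a + b
    return s if s <= 1 else OMEGA

def eval_sum(terms, env):
    # a right-hand side is a list of constants and use-variable names, summed up
    total = 0
    for t in terms:
        total = use_add(total, t if isinstance(t, int) else env[t])
    return total

def partitions(constraints):
    # two use variables are related if they occur in the same constraint
    groups = []
    for lhs, rhs in constraints:
        vs = {lhs} | {t for t in rhs if isinstance(t, str)}
        for g in [g for g in groups if g & vs]:
            groups.remove(g)
            vs |= g
        groups.append(vs)
    return groups

def solve(constraints):
    env = {}
    for group in partitions(constraints):          # solve each partition independently
        vs = sorted(group)
        local = [(l, r) for l, r in constraints if l in group]
        best = None
        for values in product((0, 1, OMEGA), repeat=len(vs)):
            cand = dict(zip(vs, values))
            if all(cand[l] == eval_sum(r, cand) for l, r in local):
                key = (values.count(OMEGA), sum(values))   # assumed precision ranking
                if best is None or key < best[0]:
                    best = (key, cand)
        if best is None:
            raise ValueError("unsatisfiable use constraints")
        env.update(best[1])
    return env

# The use constraints of Example 5.13, e.g. ρ5 =̂ 1 + ρ1 + ρ11 and ρ11 =̂ 2ρ11:
C = [("r5", [1, "r1", "r11"]), ("r6", ["r2", "r2", "r12"]),
     ("r7", ["r9", "r3", "r3"]), ("r8", ["r10", 1, "r4"]),
     ("r11", ["r11", "r11"]),   ("r12", ["r12", "r12"]),
     ("r9", ["r9", "r9"]),      ("r10", ["r10", "r10"])]
print(solve(C))                 # expected: r5 = r8 = 1, every other variable 0
```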
6.3. Pair splitting versus pair projection. It is usually the case that linearly typed
languages provide a dedicated construct for splitting pairs (a notable exception is [14]).
The language introduced in [23, Chapter 1], for example, has an expression form
split e as x,y in f
that evaluates e to a pair, binds the first and second component of the pair respectively
to the variables x and y, and then evaluates f. At the same time, no pair projection
primitives are usually provided. This is because in most linear type systems linear values
“contaminate” with linearity the composite data structures in which they occur: for example,
a pair containing linear values is itself a linear value and can only be used once, whereas
for extracting both components of a pair using the projections one would have to project
the pair twice, once using fst and one more time using snd. For this reason, the split
construct becomes the only way to use linear pairs without violating linearity, as it grants
access to both components of a pair but accessing the pair only once.
The process language we used in an early version of this article [20] provided a split
construct for splitting pairs and did not have the projections fst and snd. In fact, the
ability to use fst and snd without violating linearity constraints in our type system was
pointed out by a reviewer of [20] and in this article we have decided to promote projections
as the sole mechanism for accessing pair components. Notwithstanding this, there is a
practical point in favor of split when considering an actual implementation of the type
system. Indeed, the pair projection rules [i-fst] and [i-snd] are among the few that generate
constraints of the form un(α) for some type variable α. In the case of [i-fst] and [i-snd],
the unlimited type variable stands for the component of the pair that is discarded by the
projection. For instance, we can derive

    x : β1 ◮ x : β1 ; ∅
    fst(x) : α1 ◮ x : β1 ; {β1 =̂ α1 × γ1 , un(γ1 )}

    x : β2 ◮ x : β2 ; ∅
    snd(x) : α2 ◮ x : β2 ; {β2 =̂ γ2 × α2 , un(γ2 )}

    (fst(x),snd(x)) : α1 × α2 ◮ x : α; {α =̂ β1 + β2 , β1 =̂ α1 × γ1 , β2 =̂ γ2 × α2 , un(γ1 ), un(γ2 )}
and we observe that γ1 and γ2 are examples of those type variables for which only structural
information is known, but no definition is present in the constraint set. Compare this with
a hypothetical derivation concerning a splitting construct (for expressions)

    x1 : α1 ◮ x1 : α1 ; ∅        x2 : α2 ◮ x2 : α2 ; ∅
    (x1 ,x2 ) : α1 × α2 ◮ x1 : α1 , x2 : α2 ; ∅
    x : α ◮ x : α; ∅
    split x as x1 ,x2 in (x1 ,x2 ) : α1 × α2 ◮ x : α; {α =̂ α1 × α2 }
producing a much smaller constraint set which, in addition, is free from un(·) constraints
and includes a definition for α. The constraint set obtained from the second derivation
is somewhat easier to solve, if only because it requires no completion, meaning fewer use
variables to generate and fewer chances of stumbling on the approximated solution of use
constraints (Example 5.11).
Incidentally we observe, somehow surprisingly, that the two constraint sets are not
exactly equivalent. In particular, the constraint set obtained from the first derivation admits
a solution containing the substitutions
{α ↦ [int]ω,0 × [int]0,ω , α1 ↦ [int]1,0 , α2 ↦ [int]0,1 }
whereas in the second derivation, if we fix α as in the substitution above, we can only have
{α ↦ [int]ω,0 × [int]0,ω , α1 ↦ [int]ω,0 , α2 ↦ [int]0,ω }
meaning that, using projections, it is possible to extract from a pair only the needed capabilities, provided that what remains unused has an unlimited type. On the contrary, split
always extracts the full set of capabilities from each component of the pair.
In conclusion, in spite of the features of the type system we argue that it is a good
idea to provide both pair projections and pair splitting, and that pair splitting should be
preferred whenever convenient to use.
7. Examples
In this section we discuss three more elaborate examples that highlight the features of
our type reconstruction algorithm. For better clarity, in these examples we extend the
language with triples, boolean values, conditional branching, arithmetic and relational operators, OCaml-like polymorphic variants [16, Chapter 4], and a more general form of
pattern matching. All these extensions can be easily accommodated or encoded in the language presented in Section 2 and are supported by the prototype implementation of the
reconstruction algorithm.
Figure 1: Regions of a complete binary tree used by take.
Example 7.1. The purpose of this example is to show the reconstruction algorithm at
work on a fairly complex traversal of a binary tree. The traversal is realized by the two
processes take and skip below
*take?(x).case x of
    Leaf        ⇒ idle
    Node(c,y,z) ⇒ c!3 | take!y | skip!z
| *skip?(x).case x of
    Leaf        ⇒ idle
    Node(_,y,z) ⇒ skip!y | take!z
where, as customary, we identify the name of a process with the replicated channel on which
the process waits for invocations.
Both take and skip receive as argument a binary tree x and analyze its structure
by means of pattern matching. If the tree is empty, no further operation is performed.
When take receives a non-empty tree, it uses the channel c found at the root of the tree, it
recursively visits the left branch y and passes the right branch z to skip. The process skip
does not use the channel found at the root of the tree, but visits the left branch recursively
and passes the right branch to take.
The types inferred for take and skip are
take : [t]ω,ω
and
skip : [s]ω,ω
where t and s are the types that satisfy the equalities
t = Leaf ⊕ Node([int]0,1 × t × s)
s = Leaf ⊕ Node([int]0,0 × s × t)
In words, take uses every channel that is found after an even number of right traversals,
whereas skip uses every channel that is found after an odd number of right traversals.
Figure 1 depicts the regions of a (complete) binary tree of depth 4 that are used by take,
while the unmarked regions are those used by skip. Overall, the invocation
take!tree | skip!tree
allows the reconstruction algorithm to infer that all the channels in tree are used, namely
that tree has type ttree = Leaf ⊕ Node([int]0,1 × ttree × ttree ) = t + s.
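As a tiny illustration of the inferred usage pattern (not taken from the paper), the following sketch computes, for a complete tree of a given depth, the paths of the nodes whose channel take outputs on, i.e., the nodes reached after an even number of right traversals; the remaining nodes are those handled by skip.

```python
def take_regions(depth, right_turns=0, path=""):
    """Paths (L/R strings from the root) of the nodes whose channel take outputs on."""
    if depth == 0:
        return []
    used = [path] if right_turns % 2 == 0 else []     # even number of right traversals
    return (used
            + take_regions(depth - 1, right_turns, path + "L")       # left child: same visitor
            + take_regions(depth - 1, right_turns + 1, path + "R"))  # right child: visitor swaps

print(take_regions(3))   # ['', 'L', 'LL', 'RR'] — the remaining nodes belong to skip
```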
Example 7.2. In this example we show how our type reconstruction algorithm can be
used for inferring session types. Some familiarity with the related literature and particularly with [3, 2] is assumed. Session types [6, 7, 5] are protocol specifications describing
the sequence of input/output operations that are meant to be performed on a (private)
communication channel. In most presentations, session types T , . . . include constructs like
?t.T (input a message of type t, then use the channel according to T ) or !t.T (output a
message of type t, then use the channel according to T ) and possibly others for describing
terminated protocols and protocols with branching structure. By considering also recursive
session types (as done, e.g., in [5]), or by taking the regular trees over such constructors
(as we have done for our type language in this paper), it is possible to describe potentially
infinite protocols. For instance, the infinite regular tree T satisfying the equality
T = !int.?bool.T
describes the protocol followed by a process that alternates outputs of integers and inputs
of booleans on a session channel, whereas the infinite regular tree S satisfying the equality
S = ?int.!bool.S
describes the protocol followed by a process that alternates inputs of integers and outputs
of booleans. According to the conventional terminology, T and S above are dual of each
other: each action described in T (like the output of a message of type int) is matched by
a corresponding co-action in S (like the input of a message of type int). This implies that
two processes that respectively follow the protocols T and S when using the same session
channel can interact without errors: when one process sends a message of type t on the
channel, the other process is ready to receive a message of the same type from the channel.
Two such processes are those yielded by the outputs foo!c and bar!c below:
*foo?(x).x!random.x?(_).foo!x
| *bar?(y).y?(n).y!(n mod 2).bar!y
| new c in (foo!c | bar!c)
It is easy to trace a correspondence of the actions described by T with the operations
performed on x, and of the actions described by S with the operations performed on y.
Given that x and y are instantiated with the same channel c, and given the duality that
relates T and S, this process exhibits no communication errors even if the same channel c
is exchanging messages of different types (int or bool). For this reason, c is not a linear
channel and the above process is ill typed according to our typing discipline. However, as
discussed in [13, 3, 2], binary sessions and binary session types can be encoded in the linear
π-calculus using a continuation passing style. The key idea of the encoding is that each
communication in a session is performed on a distinct linear channel, and the exchanged
message carries, along with the actual payload, a continuation channel on which the rest of
the conversation takes place. According to this intuition, the process above is encoded in
the linear π-calculus as the term:
*foo?(x).new a in (x!(random,a) | a?(_,x′ ).foo!x′ )
| *bar?(y).y?(n,y ′ ).new b in (y ′ !(n mod 2,b) | bar!b)
| new c in (foo!c | bar!c)
where a, b, and c are all linear channels (with possibly different types) used for exactly one
communication. The encoding of processes using (binary) sessions into the linear π-calculus
induces a corresponding encoding of session types into linear channel types. In particular,
input and output session types are encoded according to the laws
⟦?t.T⟧ = [t × ⟦T⟧]1,0        ⟦!t.T⟧ = [t × ⟦T̄⟧]0,1        (7.1)
where we use T̄ to denote the dual protocol of T. Such encoding is nothing but the coinductive extension of the one described in [3] to infinite protocols. Note that in ⟦!t.T⟧, the type
of the continuation channel is the encoding of the dual of T. This is because the transmitted
continuation will be used by the receiver process in a complementary fashion with respect
to T , which instead describes the continuation of the protocol from the viewpoint of the
sender. As an example, the protocols T and S above can be respectively encoded as the
types t and s that satisfy the equalities
t = [int × [bool × s]0,1 ]0,1
s = [int × [bool × s]0,1 ]1,0
It turns out that these are the types that our type reconstruction algorithm associates with
x and y. This is not a coincidence, for essentially three reasons: (1) the encoding of a
well-typed process making use of binary sessions is always a well-typed process in the linear
π-calculus [3, 2], (2) our type reconstruction algorithm is complete (Theorem 4.3), and (3)
it can identify a channel as linear when it is used for one communication only (Section 5.4).
The upshot is that, once the types t and s have been reconstructed, the protocols T and S
can be obtained by a straightforward procedure that “decodes” t and s using the inverse
of the transformation sketched by the equations (7.1). There is a technical advantage of
such rather indirect way of performing session type reconstruction. Duality accounts for
a good share of the complexity of algorithms that reconstruct session types directly [17].
However, as the authors of [3] point out, the notion of duality that relates T and S – and
that globally affects their structure – boils down to a local swapping of uses in the topmost
channel types in t and s. This is a general property of the encoding that has important
practical implications: the hard work is carried over during type reconstruction for the
linear π-calculus, where there is no duality to worry about; once such phase is completed,
session types can be obtained from linear channel types with little effort.
We have equipped the prototype implementation of the type reconstruction algorithm
with a flag that decodes linear channel types into session types (the decoding procedure
accounts for a handful of lines of code). In this way, the tool can be used for inferring
the communication protocol of processes encoded in the linear π-calculus. Since the type
reconstruction algorithm supports infinite and disjoint sum types, both infinite protocols
and protocols with branches can be inferred. Examples of such processes, like for instance
the server for mathematical operations described in [5], are illustrated on the home page of
the tool and in its source archive.
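The decoding step mentioned above is easy to sketch. The snippet below illustrates the inverse of the equations (7.1); it is not the prototype's code: the concrete type representation is made up for the example, and treating a channel with uses 0,0 as the terminated protocol end is an assumption, since the excerpt above does not spell out how terminated protocols are encoded. Finite types only; the actual tool works on regular (possibly infinite) types.

```python
def dual(s):
    # swap inputs and outputs at every step of a session type
    if isinstance(s, tuple) and s[0] in ("?", "!"):
        op, payload, cont = s
        return ("!" if op == "?" else "?", payload, dual(cont))
    return s                                  # "end" or a base type

def decode(t):
    # inverse of (7.1): [t' × C]^{1,0} gives ?t'.decode(C),
    #                   [t' × C]^{0,1} gives !t'.dual(decode(C))
    if isinstance(t, tuple) and t[0] == "chan":     # ("chan", payload, use_in, use_out)
        _, payload, i, o = t
        if (i, o) == (0, 0):
            return "end"                            # assumption: unused channel = terminated protocol
        _, msg, cont = payload                      # payload is ("pair", message type, continuation)
        if (i, o) == (1, 0):
            return ("?", msg, decode(cont))
        if (i, o) == (0, 1):
            return ("!", msg, dual(decode(cont)))
    return t                                        # not an encoded session channel

# [int × [bool × [...]^{0,0}]^{0,1}]^{0,1} decodes to the protocol !int.?bool.end
t = ("chan", ("pair", "int",
     ("chan", ("pair", "bool", ("chan", "unit", 0, 0)), 0, 1)), 0, 1)
print(decode(t))    # ('!', 'int', ('?', 'bool', 'end'))
```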
Example 7.3. In this example we motivate the requirement expressed in the rules [t-new]
and [i-new] imposing that the type of restricted channels should have the same use in its
input/output use slots. To this aim, consider the process below
*filter?(a,b).a?(n,c).if n ≥ 0 then new d in (b!(n,d1 ) | filter!(c,d2 ))
else filter!(c,b)
which filters numbers received from channel a and forwards the non-negative ones on channel
b. Each number n comes along with a continuation channel c from which the next number
in the stream will be received. Symmetrically, any message sent on b includes a continuation
d on which the next non-negative number will be sent. For convenience, we distinguish d
bound by new from the two rightmost occurrences d1 and d2 of d.
For this process the reconstruction algorithm infers the type
filter : [t × [int × t]0,1 ]ω,ω        (7.2)
where t is the type that satisfies the equality t = [int × t]1,0 meaning that d1 and d2 are
respectively assigned the types t and [int × t]0,1 and overall d has type t + [int × t]0,1 =
[int × t]1,0 + [int × t]0,1 = [int × t]1,1 . The reason why d2 has type [int × t]0,1 , namely
that d2 is used for an output operation, is clear, since d2 must have the same type as b
and b is indeed used for an output operation in the body of filter. However, in the whole
process there is no explicit evidence that d1 will be used for an input operation, and the
input use 1 in its type t = [int × t]1,0 is deduced “by subtraction”, as we have discussed
in the informal overview at the beginning of Section 5.
If we do not impose the constraint that restricted (linear) channels should have the same
input/output use, we can find a more precise solution that determines for filter the type
filter : [t × [int × s]0,1 ]ω,ω        (7.3)
where s is the type that satisfies the equality s = [int × s]0,0 . According to (7.3), d1 is
assigned the type s saying that no operation will ever be performed on it. This phenomenon
is a consequence of the fact that, when we apply the type reconstruction algorithm on an
isolated process, like filter above, which is never invoked, the reconstruction algorithm
has only a partial view of the behavior of the process on the channel it creates. For extruded
channels like d, in particular, the algorithm is unable to infer any direct use. We argue that
the typing (7.3) renders filter a useless process from which it is not possible to receive any
message, unless filter is typed along with the rest of the program that invokes it. But this
latter strategy prevents de facto the modular application of the reconstruction algorithm
to the separate constituents of a program.
The typing (7.2) is made possible by the completion phase (Section 5), which is an
original feature of our type reconstruction algorithm. The prototype implementation of
the algorithm provides a flag that disables the constraint on equal uses in [i-new] allowing
experimentation of the behavior of the algorithm on examples like this one.
8. Concluding Remarks
Previous works on the linear π-calculus either do not treat composite types [15, 10] or
are based on an interpretation of linearity that limits data sharing and parallelism [8, 9].
Type reconstruction for recursive or, somewhat equivalently, infinite types has also been
neglected, despite the key role played by these types for describing structured data (lists,
trees, etc.) and structured interactions [2]. In this work we have extended the linear π-calculus with both composite and infinite types and have adopted a more relaxed attitude
towards linearity that fosters data sharing and parallelism while maintaining the availability
of a type reconstruction algorithm. The extension is a very natural one, as witnessed by
the fact that our type system uses essentially the same rules of previous works, the main
novelty being a different type combination operator. This small change has nonetheless nontrivial consequences on the reconstruction algorithm, which must reconcile the propagation
of constraints across composite types and the impossibility to rely on plain type unification:
different occurrences of the same identifier may be assigned different types and types may
be infinite. Our extension also gives renewed relevance to types like [t]0,0 . In previous
works these types were admitted but essentially useless: channels with such types could
only be passed around in messages without actually ever being used. That is, they could
be erased without affecting processes. In our type system, it is the existence of these types
that enables the sharing of structured data (see the decomposition of tlist into teven and todd
in Section 1).
Binary sessions [6, 7] can be encoded into the linear π-calculus [13, 3]. Thus, we indirectly provide a complete reconstruction algorithm for possibly infinite, higher-order, binary
session types. As shown in [17], direct session type reconstruction poses two major technical
challenges: on the one hand, the necessity to deal with dual types; on the other hand, the
fact that subtyping must be taken into account for that is the only way to properly handle
selections in conditionals. Interestingly, both complications disappear when session types
are encoded in the linear π-calculus: duality simply turns into swapping the input/output
use annotations in channel types [3], whereas selections become outputs of variant data
types which can be dealt with using conventional techniques based on unification [16].
To assess the feasibility of the approach, we have implemented the type reconstruction
algorithm in a tool for the static analysis of π-calculus processes. Given that even simple
processes generate large constraint sets, the prototype has been invaluable for testing the
algorithm at work on non-trivial examples. The reconstruction described in this article
is only the first step for more advanced forms of analysis, such as those for reasoning on
deadlocks and locks [19]. We have extended the tool in such a way that subsequent analyses
can be plugged on top of the reconstruction algorithm for linear channels [21].
Structural subtyping and polymorphism are two natural developments of our work. The
former has already been considered in [9], but it is necessary to understand how it integrates
with our notion of type combination and how it affects constraint generation and resolution.
Polymorphism makes sense for unlimited channels only (there is little point in having polymorphic linear channels, since they can only be used once anyway). Nevertheless, support
for polymorphism is not entirely trivial, since some type variables may need to be restricted
to unlimited types. For example, the channel first in the process *first?(x,y).y!fst(x)
would have type ∀α.∀β.un(β) ⇒ [(α × β) × [α]0,1 ]ω,0 .
Acknowledgements. The author is grateful to the anonymous reviewers whose numerous
questions, detailed comments and suggestions have significantly contributed to improving
both content and presentation of this article. The author is also grateful to Naoki Kobayashi
for his comments on an earlier version of the article.
References
[1] B. Courcelle. Fundamental properties of infinite trees. Theor. Comp. Sci., 25:95–169, 1983.
[2] O. Dardha. Recursive session types revisited. In BEAT’14, 2014.
[3] O. Dardha, E. Giachino, and D. Sangiorgi. Session types revisited. In PPDP’12, pages 139–150. ACM,
2012.
[4] R. Demangeon and K. Honda. Full abstraction in a subtyped pi-calculus with linear types. In CONCUR’11, LNCS 6901, pages 280–296. Springer, 2011.
[5] S. J. Gay and M. Hole. Subtyping for session types in the pi calculus. Acta Informatica, 42(2-3):191–225,
2005.
[6] K. Honda. Types for dyadic interaction. In CONCUR’93, LNCS 715, pages 509–523. Springer, 1993.
[7] K. Honda, V. T. Vasconcelos, and M. Kubo. Language primitives and type disciplines for structured
communication-based programming. In ESOP’98, LNCS 1381, pages 122–138. Springer, 1998.
[8] A. Igarashi. Type-based analysis of usage of values for concurrent programming languages, 1997. Available at http://www.sato.kuis.kyoto-u.ac.jp/~igarashi/papers/.
[9] A. Igarashi and N. Kobayashi. Type-based analysis of communication for concurrent programming
languages. In SAS’97, LNCS 1302, pages 187–201. Springer, 1997.
[10] A. Igarashi and N. Kobayashi. Type Reconstruction for Linear π-Calculus with I/O Subtyping. Inf. and
Comp., 161(1):1–44, 2000.
[11] N. Kobayashi. Quasi-linear types. In POPL’99, pages 29–42. ACM, 1999.
[12] N. Kobayashi. A type system for lock-free processes. Inf. and Comp., 177(2):122–159, 2002.
[13] N. Kobayashi. Type systems for concurrent programs. In 10th Anniversary Colloquium of UNU/IIST, LNCS 2757, pages 439–453. Springer, 2002. Extended version at
http://www.kb.ecei.tohoku.ac.jp/~koba/papers/tutorial-type-extended.pdf.
[14] N. Kobayashi. A new type system for deadlock-free processes. In CONCUR’06, LNCS 4137, pages
233–247. Springer, 2006.
[15] N. Kobayashi, B. C. Pierce, and D. N. Turner. Linearity and the pi-calculus. ACM Trans. Program.
Lang. Syst., 21(5):914–947, 1999.
[16] X. Leroy, D. Doligez, A. Frisch, J. Garrigue, D. Rémy, and J. Vouillon. The OCaml system release 4.01,
2013. Available at http://caml.inria.fr/pub/docs/manual-ocaml-4.01/index.html.
[17] L. G. Mezzina. How to infer finite session types in a calculus of services and sessions. In COORDINATION’08, LNCS 5052, pages 216–231. Springer, 2008.
[18] U. Nestmann and M. Steffen. Typing confluence. In FMICS’97, pages 77–101, 1997. Also available as
report ERCIM-10/97-R052, European Research Consortium for Informatics and Mathematics, 1997.
[19] L. Padovani. Deadlock and Lock Freedom in the Linear π-Calculus. In CSL-LICS’14, pages 72:1–72:10.
ACM, 2014.
[20] L. Padovani. Type reconstruction for the linear π-calculus with composite and equi-recursive types. In
FoSSaCS’14, LNCS 8412, pages 88–102. Springer, 2014.
[21] L. Padovani, T.-C. Chen, and A. Tosatto. Type Reconstruction Algorithms for Deadlock-Free and
Lock-Free Linear π-Calculi. In COORDINATION’15, LNCS 9037, pages 83–98. Springer, 2015.
[22] B. C. Pierce. Types and Programming Languages. The MIT Press, 2002.
[23] B. C. Pierce. Advanced Topics in Types and Programming Languages. The MIT Press, 2004.
[24] D. Sangiorgi and D. Walker. The Pi-Calculus - A theory of mobile processes. Cambridge University
Press, 2001.
[25] D. N. Turner, P. Wadler, and C. Mossin. Once upon a type. In FPCA’95, pages 1–11, 1995.
Appendix A. Supplement to Section 3
To prove Theorem 3.8 we need a series of standard auxiliary results, including weakening
(Lemma A.1) and substitution (Lemma A.2) for both expressions and processes.
Lemma A.1 (weakening). The following properties hold:
(1) If Γ ⊢ e : t and un(Γ ′ ) and Γ + Γ ′ is defined, then Γ + Γ ′ ⊢ e : t.
(2) If Γ ⊢ P and un(Γ ′ ) and Γ + Γ ′ is defined, then Γ + Γ ′ ⊢ P .
Proof. Both items are proved by a standard induction on the typing derivation. In case (2)
we assume, without loss of generality, that bn(P ) ∩ dom(Γ ) = ∅ (recall that we identify
processes modulo renaming of bound names).
Lemma A.2 (substitution). Let Γ1 ⊢ v : t. The following properties hold:
(1) If Γ2 , x : t ⊢ e : s and Γ1 + Γ2 is defined, then Γ1 + Γ2 ⊢ e{v/x} : s.
(2) If Γ2 , x : t ⊢ P and Γ1 + Γ2 is defined, then Γ1 + Γ2 ⊢ P {v/x}.
Proof. The proofs are standard, except for the following property of the type system: un(t)
implies un(Γ1 ), which can be easily proved by induction on the derivation of Γ1 ⊢ v : t.
Next is type preservation under structural pre-congruence.
Lemma A.3. If Γ ⊢ P and P 4 Q, then Γ ⊢ Q.
Proof. We only show the case in which a replicated process is expanded. Assume P =
*P ′ 4 *P ′ | P ′ = Q. From the hypothesis Γ ⊢ P and [t-rep] we deduce Γ ⊢ P ′ and un(Γ ).
By definition of unlimited environment (see Definition 3.7) we have Γ = Γ + Γ . We conclude
Γ ⊢ Q with an application of [t-par].
Lemma A.4. If Γ −ℓ→ Γ′ and Γ + Γ′′ is defined, then Γ + Γ′′ −ℓ→ Γ′ + Γ′′ .
Proof. Easy consequences of the definition of −ℓ→ on type environments.
We conclude with type preservation for expressions and subject reduction for processes.
Lemma A.5. Let Γ ⊢ e : t and e ↓ v. Then Γ ⊢ v : t.
Proof. By induction on e ↓ v using the hypothesis that e is well typed.
Theorem 3.8. Let Γ ⊢ P and P −ℓ→ Q. Then Γ′ ⊢ Q for some Γ′ such that Γ −ℓ→ Γ′ .
Proof. By induction on the derivation of P −ℓ→ Q and by cases on the last rule applied. We
only show a few interesting cases; the others are either similar or simpler.
[r-comm] Then P = e1 !f | e2 ?(x).R and ei ↓ a for every i = 1, 2 and f ↓ v and ℓ = a and
Q = R{v/x}. From [t-par] we deduce Γ = Γ1 + Γ2 where Γ1 ⊢ e1 !f and Γ2 ⊢ e2 ?(x).R. From
[t-out] we deduce Γ1 = Γ11 + Γ12 and Γ11 ⊢ e1 : [t]2κ1 ,1+κ2 and Γ12 ⊢ f : t. From [t-in] we
deduce Γ2 = Γ21 +Γ22 and Γ21 ⊢ e2 : [s]1+κ3 ,2κ4 and Γ22 , x : s ⊢ R. From Lemma A.5 we have
Γ11 ⊢ a : [t]2κ1 ,1+κ2 and Γ12 ⊢ v : t and Γ21 ⊢ a : [s]1+κ3 ,2κ4 . Also, since Γ11 + Γ21 is defined,
it must be the case that t = s. Note that 1 + κ2 = 1 + 2κ2 and 1 + κ3 = 1 + 2κ3 . Hence,
from [t-name] we deduce that Γ11 = Γ′11 , a : [t]2κ1,1+κ2 = (Γ′11 , a : [t]2κ1,2κ2 ) + a : [t]0,1 and Γ21 = Γ′21 , a : [t]1+κ3,2κ4 = (Γ′21 , a : [t]2κ3,2κ4 ) + a : [t]1,0 for some unlimited Γ′11 and Γ′21 . Let Γ′′11 ≝ Γ′11 , a : [t]2κ1,2κ2 and Γ′′21 ≝ Γ′21 , a : [t]2κ3,2κ4 and observe that Γ′′11 and Γ′′21 are also unlimited. From Lemma A.2 we deduce Γ12 + Γ22 ⊢ R{v/x}. Take Γ′ = Γ′′11 + Γ22 + Γ′′21 + Γ12 . From Lemma A.1 we deduce Γ′ ⊢ Q and we conclude by observing that Γ −a→ Γ′ thanks to Lemma A.4.
[r-case] Then P = case e {i(xi ) ⇒ Pi }i=inl,inr and e ↓ k(v) for some k ∈ {inl, inr} and
ℓ = τ and Q = Pk {v/xk }. From [t-case] we deduce that Γ = Γ1 + Γ2 and Γ1 ⊢ e : tinl ⊕ tinr
and Γ2 , x : tk ⊢ Pk . From Lemma A.5 and either [t-inl] or [t-inr] we deduce Γ1 ⊢ v : tk . We
conclude Γ ⊢ Pk {v/xk } by Lemma A.2.
[r-par] Then P = P1 | P2 and P1 −ℓ→ P′1 and Q = P′1 | P2 . From [t-par] we deduce Γ = Γ1 + Γ2 and Γi ⊢ Pi . By induction hypothesis we deduce Γ′1 ⊢ P′1 for some Γ′1 such that Γ1 −ℓ→ Γ′1 . By Lemma A.4 we deduce that Γ −ℓ→ Γ′1 + Γ2 . We conclude Γ′ ⊢ Q by taking Γ′ = Γ′1 + Γ2 .
Appendix B. Supplement to Section 4
First of all we prove two technical lemmas that explain the relationship between the operators ⊔ and ⊓ used by the constraint generation rules (Table 7) and type environment
combination + and equality used in the type rules (Table 4).
Lemma B.1. If ∆1 ⊔ ∆2
∆; C and σ is a solution for C covering ∆, then σ∆ = σ∆1 + σ∆2 .
Proof. By induction on the derivation of ∆1 ⊔ ∆2  ∆; C and by cases on the last rule applied. We have two cases:
dom(∆1 ) ∩ dom(∆2 ) = ∅  Then ∆ = ∆1 , ∆2 and we conclude σ∆ = σ∆1 , σ∆2 = σ∆1 + σ∆2 .
∆1 = ∆′1 , u : T and ∆2 = ∆′2 , u : S  Then ∆′1 ⊔ ∆′2  ∆′ ; C′ and ∆ = ∆′ , u : α and C = C′ ∪ {α =̂ T + S} for some α. Since σ is a solution for C, we deduce σ(α) = σT + σS. By
induction hypothesis we deduce σ∆′ = σ∆′1 + σ∆′2 . We conclude σ∆ = σ∆′ , u : σ(α) =
σ∆′ , u : σT + σS = (σ∆′1 + σ∆′2 ), u : σT + σS = σ∆1 + σ∆2 .
Lemma B.2. If ∆1 ⊓∆2
∆; C and σ is a solution for C covering ∆, then σ∆ = σ∆1 = σ∆2 .
Proof. Straightforward consequence of the definition of ∆1 ⊓ ∆2
∆; C.
The correctness of constraint generation is proved by the next two results.
Lemma B.3. If e : T ◮ ∆; C and σ is a solution for C covering ∆, then σ∆ ⊢ e : σT.
Proof. By induction on the derivation of e : T ◮ ∆; C and by cases on the last rule applied.
We only show two significant cases.
[i-name] Then e = u and T = α fresh and ∆ = u : α and C = ∅. We have σ∆ = u : σ(α)
and σT = σ(α), hence we conclude σ∆ ⊢ e : σT.
[i-pair] Then e = (e1 ,e2 ) and T = T1 × T2 and C = C1 ∪ C2 ∪ C3 where ∆1 ⊔ ∆2
∆; C3
and ei : Ti ◮ ∆i ; Ci for i = 1, 2. We know that σ is a solution for Ci for all i = 1, 2, 3. By
induction hypothesis we deduce σ∆i ⊢ e : σTi for i = 1, 2. From Lemma B.1 we obtain
σ∆ = σ∆1 + σ∆2 . We conclude with an application of [t-pair].
Theorem 4.2. If P ◮ ∆; C and σ is a solution for C that covers ∆, then σ∆ ⊢ P .
Proof. By induction on the derivation of P ◮ ∆; C and by cases on the last rule applied.
[i-idle] Then P = idle and ∆ = ∅ and C = ∅. We conclude with an application of [t-idle].
[i-in] Then P = e?(x).Q and e : T ◮ ∆1 ; C1 and Q ◮ ∆2 , x : S; C2 and ∆1 ⊔ ∆2
∆; C3 and C = C1 ∪ C2 ∪ C3 ∪ {T =̂ [S]1+̺1,2̺2 }. By Lemma B.3 we deduce σ∆1 ⊢ e : σT. By induction
hypothesis we deduce σ∆2 , x : σS ⊢ Q. By Lemma B.1 we deduce σ∆ = σ∆1 + σ∆2 . From
the hypothesis that σ is a solution for C we know σT = [σS]1+σ(̺1 ),2σ(̺2 ) . We conclude
with an application of [t-in].
[i-out] Then P = e!f and e : T ◮ ∆1 ; C1 and f : S ◮ ∆2 ; C2 and ∆1 ⊔ ∆2
∆; C3 and C = C1 ∪ C2 ∪ C3 ∪ {T =̂ [S]2̺1,1+̺2 }. By Lemma B.3 we deduce σ∆1 ⊢ e : σT and
σ∆2 ⊢ f : σS. By Lemma B.1 we deduce σ∆ = σ∆1 + σ∆2 . From the hypothesis that σ
is a solution for C we know σT = [σS]2σ(̺1 ),1+σ(̺2 ) . We conclude with an application of
[t-out].
[i-par] Then P = P1 |P2 and Pi ◮ ∆i ; Ci for i = 1, 2 and ∆1 ⊔∆2
∆; C3 and C = C1 ∪C2 ∪C3 .
By induction hypothesis we deduce σ∆i ⊢ Pi for i = 1, 2. By Lemma B.1 we deduce
σ∆ = σ∆1 + σ∆2 . We conclude with an application of [t-par].
[i-rep] Then P = *Q and Q ◮ ∆′ ; C1 and ∆′ ⊔ ∆′
∆; C2 and C = C1 ∪ C2 . By induction
hypothesis we deduce σ∆′ ⊢ Q. By Lemma B.1 we deduce σ∆ = σ∆′ +σ∆′ . By Definition 3.7
we know that un(σ∆) holds. Furthermore, σ∆′ + σ∆ is defined. By Lemma A.1 and
Definition 3.5 we deduce σ∆ ⊢ Q. We conclude with an application of [t-rep].
[i-new] Then P = new a in Q and Q ◮ ∆, a : T; C′ and C = C′ ∪ {T =̂ [α]̺,̺ }. By
induction hypothesis we deduce σ∆, a : σT ⊢ Q. Since σ is a solution for C ′ we know that
σT = [σ(α)]σ(̺),σ(̺) . We conclude with an application of [t-new].
[i-case] Then P = case e {i(xi ) ⇒ Pi }i=inl,inr and e : t ◮ ∆1 ; C1 and Pi ◮ ∆i , xi : Ti ; Ci
for i = inl, inr and ∆inl ⊓ ∆inr
∆2 ; C2 and ∆1 ⊔ ∆2
∆; C3 and C = C1 ∪ C2 ∪ C3 ∪ Cinl ∪ Cinr ∪ {T =̂ Tinl ⊕ Tinr }. By Lemma B.3 we deduce σ∆1 ⊢ e : σT. By
induction hypothesis we deduce σ∆i ⊢ Pi for i = inl, inr. By Lemma B.2 we deduce
σ∆inl = σ∆inr = σ∆2 . By Lemma B.1 we deduce σ∆ = σ∆1 + σ∆2 . Since σ is a solution
for C, we have σT = σTinl ⊕ σTinr . We conclude with an application of [t-case].
[i-weak] Then ∆ = ∆′ , u : α and C = C ′ ∪ {un(α)} where α is fresh and P ◮ ∆′ ; C ′ . By
induction hypothesis we deduce σ∆′ ⊢ P . Since σ is a solution for C ′ we know that un(σ(α))
holds. Since u ∉ dom(∆′ ) we know that σ∆′ + u : σ(α) is defined. By Lemma A.1(2) we
conclude σ∆′ , u : σ(α) ⊢ P .
The next lemma relates once more ⊔ and type environment combination +. It is, in a
sense, the inverse of Lemma B.1.
Lemma B.4. If σ∆1 +σ∆2 is defined, then there exist ∆, C, and σ ′ ⊇ σ such that ∆1 ⊔∆2
∆; C and σ ′ is a solution for C that covers ∆.
Proof. By induction on the maximum size of ∆1 and ∆2 . We distinguish two cases.
dom(∆1 ) ∩ dom(∆2 ) = ∅  We conclude by taking ∆ ≝ ∆1 , ∆2 and C ≝ ∅ and σ′ ≝ σ and observing that ∆1 ⊔ ∆2  ∆; ∅.
∆1 = ∆′1 , u : T and ∆2 = ∆′2 , u : S  Since σ∆1 + σ∆2 is defined, we know that σ∆′1 + σ∆′2 is defined as well and furthermore that (σ∆1 + σ∆2 )(u) = σT + σS. By induction hypothesis we deduce that there exist ∆′ , C′ , and σ′′ ⊇ σ such that ∆′1 ⊔ ∆′2  ∆′ ; C′ and σ′′ is a solution for C′ that covers ∆′ . Take ∆ ≝ ∆′ , u : α where α is fresh, C ≝ C′ ∪ {α =̂ T + S} and σ′ ≝ σ′′ ∪ {α ↦ σT + σS}. We conclude observing that σ′ is a solution for C that covers ∆.
In order to prove the completeness of type reconstruction for expressions, we extend
the reconstruction algorithm with one more weakening rule for expressions:
[i-weak expr]
    e : T ◮ ∆; C
    ────────────────────────────────
    e : T ◮ ∆, u : α; C ∪ un(α)
This rule is unnecessary as far as completeness is concerned, because there is already a weakening rule [i-weak] for processes that can be used to subsume it. However, [i-weak expr]
simplifies both the proofs and the statements of the results that follow.
Lemma B.5. If Γ ⊢ e : t, then there exist T, ∆, C, and σ such that e : T ◮ ∆; C and σ is a
solution for C and Γ = σ∆ and t = σT.
Proof. By induction on the derivation of Γ ⊢ e : t and by cases on the last rule applied. We
only show two representative cases.
[t-name] Then e = u and Γ = Γ′ , u : t and un(Γ′ ). Let Γ′ = {ui : ti }i∈I . Take T ≝ α and ∆ ≝ {ui : αi }i∈I , u : α and C ≝ {un(αi ) | i ∈ I} and σ ≝ {αi ↦ ti }i∈I ∪ {α ↦ t}
where α and the αi ’s are all fresh type variables. Observe that e : T ◮ ∆; C by means of
one application of [i-name] and as many applications of [i-weak expr] as the cardinality of
I. We conclude observing that σ is a solution for C and Γ = σ∆ and t = σT by definition
of σ.
[t-pair] Then e = (e1 ,e2 ) and Γ = Γ1 + Γ2 and t = t1 × t2 and Γi ⊢ ei : ti for i = 1, 2.
By induction hypothesis we deduce that there exist Ti , ∆i , Ci , and σi solution for Ci such
that ei : Ti ◮ ∆i ; Ci and Γi = σi ∆i and ti = σi Ti for i = 1, 2. Since the reconstruction
algorithm always chooses fresh type variables, we also know that dom(σ1 ) ∩ dom(σ2 ) = ∅.
Take σ′ ≝ σ1 ∪ σ2 . We have that σ′ ∆1 + σ′ ∆2 = Γ1 + Γ2 is defined. Therefore, by Lemma B.4, we deduce that there exist ∆, C3 , and σ ⊇ σ′ such that ∆1 ⊔ ∆2  ∆; C3 and σ is a solution for C that covers ∆. We conclude with an application of [i-pair] and taking T ≝ T1 × T2 and C ≝ C1 ∪ C2 ∪ C3 .
Theorem 4.3. If Γ ⊢ P , then there exist ∆, C, and σ such that P ◮ ∆; C and σ is a solution
for C that covers ∆ and Γ = σ∆.
Proof. By induction on the derivation of Γ ⊢ P and by cases on the last rule applied. We
only show a few cases, the others being analogous.
[t-idle] Then P = idle and un(Γ ). Let Γ = {ui : ti }i∈I . Take ∆ ≝ {ui : αi }i∈I and C ≝ {un(αi )}i∈I and σ ≝ {αi ↦ ti }i∈I where the αi ’s are all fresh type variables. By
repeated applications of [i-weak] and one application of [i-idle] we derive idle ◮ ∆; C. We
conclude observing that σ is a solution for C and Γ = σ∆.
[t-in] Then P = e?(x).Q and Γ = Γ1 + Γ2 and Γ1 ⊢ e : [t]1+κ1 ,2κ2 and Γ2 , x : t ⊢ Q. By
Lemma B.5 we deduce that there exist T, ∆1 , C1 , and σ1 solution for C1 such that e : T ◮
∆1 ; C1 and Γ1 = σ1 ∆1 and [t]1+κ1 ,2κ2 = σ1 T. By induction hypothesis we deduce that there
exist ∆′2 , C2 , and σ2 solution for C2 such that Γ2 , x : t = σ2 ∆′2 . Then it must be the case that
∆′2 = ∆2 , x : S for some ∆2 and S such that Γ2 = σ2 ∆2 and t = σ2 S. Since all type variables
chosen by the type reconstruction algorithm are fresh, we know that dom(σ1 )∩ dom(σ2 ) = ∅.
Take σ′ ≝ σ1 ∪ σ2 ∪ {̺1 ↦ κ1 , ̺2 ↦ κ2 }. Observe that σ′ ∆1 + σ′ ∆2 = Γ1 + Γ2 which is defined. By Lemma B.4 we deduce that there exist ∆, C3 , and σ ⊇ σ′ such that ∆1 ⊔ ∆2  ∆; C3 and σ is a solution for C3 that covers ∆. Take C ≝ C1 ∪ C2 ∪ C3 ∪ {T =̂ [S]1+̺1,2̺2 }. Then
σ is a solution for C, because σT = [t]1+κ1 ,2κ2 = [σS]1+σ(̺1 ),2σ(̺2 ) = σ[S]1+̺1 ,2̺2 . Also,
by Lemma B.1 we have σ∆ = σ∆1 + σ∆2 = Γ1 + Γ2 = Γ . We conclude P ◮ ∆; C with an
application of [i-in].
[t-par] Then P = P1 | P2 and Γ = Γ1 + Γ2 and Γi ⊢ Pi for i = 1, 2. By induction hypothesis
we deduce that, for every i = 1, 2, there exist ∆i , Ci , and σi solution for Ci such that
Pi ◮ ∆i ; Ci and Γi = σi ∆i . We also know that dom(σ1 ) ∩ dom(σ2 ) = ∅ because type/use
variables are always chosen fresh. Take σ′ ≝ σ1 ∪ σ2 . By Lemma B.4 we deduce that there exist ∆, C3 , and σ ⊇ σ′ such that ∆1 ⊔ ∆2  ∆; C3 and σ is a solution for C3 that covers ∆. By Lemma B.1 we also deduce that σ∆ = σ∆1 + σ∆2 = Γ1 + Γ2 = Γ . We conclude by taking C ≝ C1 ∪ C2 ∪ C3 with an application of [i-par].
[t-rep] Then P = *Q and Γ ⊢ Q and un(Γ ). By induction hypothesis we deduce that there
exist ∆′ , C ′ , and σ ′ solution for C ′ such that Q ◮ ∆′ ; C ′ and Γ = σ ′ ∆′ . Obviously σ ′ ∆′ + σ ′ ∆′
is defined, hence by Lemma B.4 we deduce that there exist ∆, C ′′ , and σ ⊇ σ ′ such that
∆′ ⊔ ∆′
∆; C ′′ and σ is a solution for C ′′ . By Lemma B.1 we deduce σ∆ = σ∆′ + σ∆′ =
Γ + Γ = Γ , where the last equality follows from the hypothesis un(Γ ) and Definition 3.7. We
conclude P ◮ ∆; C with an application of [i-rep] by taking C ≝ C′ ∪ C′′ .
Appendix C. Supplement to Section 5
Below is the derivation showing the reconstruction algorithm at work on the process (5.6).
From a : α1 ◮ a : α1 ; ∅ and 3 : int ◮ ∅; ∅, rule [i-out] gives
    a!3 ◮ a : α1 ; {α1 =̂ [int]2̺1,1+̺2 }.
From b : β ◮ b : β; ∅ and a : α2 ◮ a : α2 ; ∅, rule [i-out] gives
    b!a ◮ a : α2 , b : β; {β =̂ [α2 ]2̺3,1+̺4 }.
Rule [i-par] then gives
    a!3 | b!a ◮ a : α, b : β; {α =̂ α1 + α2 , α1 =̂ [int]2̺1,1+̺2 , β =̂ [α2 ]2̺3,1+̺4 }
and rule [i-new] concludes
    new a in (a!3 | b!a) ◮ b : β; {α =̂ [δ]̺5,̺5 , α =̂ α1 + α2 , . . . }.
Below is the derivation showing the reconstruction algorithm at work on the process (5.7).
Only the relevant differences with respect to the derivation above are shown.
From b!a ◮ a : α2 , b : β; {β =̂ [α2 ]2̺3,1+̺4 } and c!a ◮ a : α3 , c : γ; {γ =̂ [α3 ]2̺5,1+̺6 }, rule [i-par] gives
    b!a | c!a ◮ a : α23 , b : β, c : γ; {α23 =̂ α2 + α3 , . . . }.
A second application of [i-par] gives
    a!3 | b!a | c!a ◮ a : α, b : β, c : γ; {α =̂ α1 + α23 , . . . }
and rule [i-new] concludes
    new a in (a!3 | b!a | c!a) ◮ b : β, c : γ; {α =̂ [δ]̺5,̺5 , α =̂ α1 + α23 , . . . }.
Below is the derivation showing the reconstruction algorithm at work on the process (5.8).
From b : β ◮ b : β; ∅ and x : γ ◮ x : γ; ∅, rule [i-out] gives
    b!x ◮ b : β, x : γ; {β =̂ [γ]2̺1,1+̺2 }
and rule [i-in] concludes
    a?(x).b!x ◮ a : α, b : β; {α =̂ [γ]1+̺3,2̺4 , β =̂ [γ]2̺1,1+̺2 }.
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view
a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a
letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or
Eisenacher Strasse 2, 10777 Berlin, Germany
Ensemble of Heterogeneous Flexible Neural Trees Using Multiobjective
Genetic Programming
Varun Kumar Ojhaa,∗, Ajith Abrahamb , Václav Snášela
a IT4Innovations, VŠB-Technical University of Ostrava, Ostrava, Czech Republic
b Machine Intelligence Research Labs (MIR Labs), Auburn, WA, USA
arXiv:1705.05592v1 [cs.NE] 16 May 2017
Abstract
Machine learning algorithms are inherently multiobjective in nature, where approximation error minimization and model complexity simplification are two conflicting objectives. We proposed a multiobjective genetic programming (MOGP) approach for creating a heterogeneous flexible neural tree (HFNT), a tree-like flexible feedforward neural network model. The functional heterogeneity in neural tree nodes was introduced to capture better insight into the data during learning, because each input in a dataset possesses different features.
MOGP guided an initial HFNT population towards Pareto-optimal solutions, where the final population
was used for making an ensemble system. A diversity index measure along with approximation error and
complexity was introduced to maintain diversity among the candidates in the population. Hence, the
ensemble was created by using accurate, structurally simple, and diverse candidates from the MOGP final population. The differential evolution algorithm was applied to fine-tune the underlying parameters of the selected candidates. A comprehensive test over classification, regression, and time-series datasets proved the efficiency of the proposed algorithm over other available prediction methods. Moreover, the heterogeneous creation of HFNT proved to be efficient in making an ensemble system from the final population.
Keywords: Pareto-based multiobjectives, flexible neural tree, ensemble, approximation, feature
selection;
1. Introduction
Structure optimization of a feedforward neural network (FNN) and its impact on FNN’s generalization
ability inspired the flexible neural tree (FNT) [1]. FNN components such as weights, structure, and
activation function are the potential candidates for the optimization, which improves FNN’s generalization
ability to a great extent [2]. These efforts are notable because of FNN’s ability to solve a large range of real-world problems [3, 4, 5, 6]. The following are significant structure optimization methods: constructive
and pruning algorithms [7, 8], EPNet [2], NeuroEvolution of Augmenting Topologies [9], sparse neural
∗ Varun Kumar Ojha
Email addresses: [email protected] (Varun Kumar Ojha), [email protected] (Ajith Abraham), [email protected] (Václav Snášel)
Preprint submitted to Applied Soft Computing, Volume 52, Pages 909–924. May 17, 2017
trees [10], Cooperative co-evolution approach [11], etc. Similarly, many efforts focus on the optimization
of hybrid training of FNN such as [12, 13, 14]. FNT was an additional step into this series of efforts,
which was proposed to evolve as a tree-like feed-forward neural network model, where the probabilistic
incremental program evolution (PIPE) [15] was applied optimize the tree structure [1]. The underlying
parameter vector of the developed FNT (weights associated with the edges and arguments of the activation
functions) was optimized by metaheuristic algorithms, which are nature-inspired parameter optimization
algorithms [16]. The evolutionary process allowed FNT to select significant input features from an input
feature set.
In the design of an FNT, the non-leaf nodes are computational nodes, each of which takes an activation function. Hence, rather than relying on a fixed activation function, the activation function at each computational node can itself be selected by the evolutionary process. This produces heterogeneous FNTs (HFNTs), with heterogeneity in their structure, computational nodes, and input set. In addition, the heterogeneous functions allow an HFNT to capture different features of the dataset efficiently, since each input in the dataset possesses different features. The evolutionary process provides adaptation
in structure, weights, activation functions, and input features. Therefore, an optimum HFNT is the
one that offers the lowest approximation error with the simplest tree structure and the smallest input
feature set. However, approximation error minimization and structure simplification are two conflicting
objectives [17]. Hence, a multiobjective evolutionary approach [18] may offer an optimal solution(s) by
maintaining a balance between these objectives.
Moreover, in the proposed work, an evolutionary process guides a population of HFNTs towards
Pareto-optimum solutions. Hence, the final population may contain several solutions that are close to
the best solution. Therefore, an ensemble system was constructed by exploiting many candidates of the
population (candidate, solution, and model are synonymous in this article). Such ensemble system takes
advantage of many solutions including the best solution [19]. Diversity among the chosen candidates holds
the key in making a good ensemble system [20]. Therefore, the solutions in a final population should fulfill
the following objectives: low approximation error, structural simplicity, and high diversity. However, these
objectives conflict with each other. A fast elitist nondominated sorting genetic algorithm (NSGA-II)-based multiobjective genetic programming (MOGP) was employed to guide a population of HFNTs [21]. The underlying parameters of selected models were further optimized by using the differential evolution (DE) algorithm [22]. Therefore, the key contributions of this work may be summarized as follows:
1) A heterogeneous flexible neural tree (HFNT) for function approximation and feature selection was
proposed.
2) HFNT was studied under an NSGA-II-based multiobjective genetic programming framework. Thus,
it was termed HFNTM .
3) Alongside approximation error and tree size (complexity), a diversity index was introduced to maintain
diversity among the candidates in the population.
4) HFNTM was found competitive with other algorithms when compared and cross-validated over classification, regression, and time-series datasets.
5) The proposed evolutionary weighted ensemble of HFNTs final population further improved its performance.
A detailed literature review provides an overview of FNT usage over the past few years (Section
2). Conclusions derived from the literature survey support our HFNTM approach, where a Pareto-based
multiobjective genetic programming was used for HFNT optimization (Section 3.1). Section 3.2 provides
a detailed discussion on the basics of HFNT: MOGP for HFNT structure optimization, and DE for HFNT
parameter optimization. The efficiency of the above-mentioned hybrid and complex multiobjective FNT
algorithm (HFNTM ) was tested over various prediction problems using a comprehensive experimental setup (Section 4). The experimental results support the merits of proposed approach (Section 5). Finally,
we provide a discussion of experimental outcomes in Section 6 followed by conclusions in Section 7.
2. Literature Review
The literature survey describes the following points: basics of FNT, approaches that improvised FNT,
and FNTs successful application to various real-life problems. Subsequently, the shortcomings of basic
FNT version are concluded that inspired us to propose HFNTM .
FNT was first proposed by Chen et al. [1], where a tree-like-structure was optimized by using PIPE.
Then, its approximation ability was tested for time-series forecasting [1] and intrusion detection [23],
where a variant of simulated annealing (called degraded ceiling) [24], and particle swarm optimization
(PSO) [25], respectively, were used for FNT parameter optimization. Since FNT is capable of input feature
selection, in [26], FNT was applied for selecting input features in several classification tasks, in which
FNT structure was optimized by using genetic programming (GP) [27], and the parameter optimization
was accomplished by using memetic algorithm [28]. Additionally, they defined five different mutation
operators, namely, changing one terminal node, all terminal nodes, growing a randomly selected sub-tree,
pruning a randomly selected sub-tree, and pruning redundant terminals. Li et al. [29] proposed an FNT-based construction of decision trees whose nodes were conditionally replaced by neural nodes (activation
node) to deal with continuous attributes when solving classification tasks. In many other FNT based
approaches, like in [30], GP was applied to evolve hierarchical radial-basis-function network model, and
in [31] a multi-input-multi-output FNT model was evolved. Wu et al. [32] proposed to use grammar guided
GP [33] for FNT structure optimization. Similarly, in [34], authors proposed to apply multi-expression
programming (MEP) [35] for FNT structure optimization and immune programming algorithm [36] for
the parameter vector optimization. To improve classification accuracy of FNT, Yang et al. [37] proposed
a hybridization of FNT with a further-division-of-partition-space method. In [38], authors illustrated
crossover and mutation operators for evolving FNT using GP and optimized the tree parameters using
PSO algorithm.
A model is considered efficient if it has generalization ability. We know that a consensus decision
is better than an individual decision. Hence, an ensemble of FNTs may lead to a better-generalized
performance than a single FNT. To address this, in [39], authors proposed to make an ensemble of FNTs
to predict the chaotic behavior of stock market indices. Similarly, in [40], the proposed FNTs ensemble
predicted the breast cancer and network traffic better than individual FNT. In [41], protein dissolution
prediction was easier using ensemble than the individual FNT.
To improve the efficiency in terms of computation, Peng et al. [42] proposed a parallel evolving algorithm for FNT, where the parallelization took place in both tree-structure and parameter vector populations. In another parallel approach, Wang et al. [43] used gene expression programming (GEP) [44] for
evolving FNT and used PSO for parameter optimization.
A multi-agent system [45] based FNT (MAS-FNT) algorithm was proposed in [46], which used GEP
and PSO for the structure and parameter optimization, respectively. The MAS-FNT algorithm relied on
the division of the main population into sub-population, where each sub-population offered local solutions
and the best local solution was picked-up by analyzing tree complexity and accuracy.
Chen et al. [1, 26] referred the arbitrary choice of activation function at non-leaf nodes. However, they
were restricted to use only Gaussian functions. A performance analysis of various activation function is
available in [47]. Bouaziz et al. [48, 49] proposed to use beta-basis function at non-leaf nodes of an FNT.
Since beta-basis function has several controlling parameters such as shape, size, and center, they claimed
that the beta-basis function has advantages over other two parametric activation functions. Similarly,
many other forms of neural tree formation such as balanced neural tree [50], generalized neural tree [51],
and convex objective function neural tree [52], were focused on the tree improvement of neural nodes.
FNT was chosen over the conventional neural network based models for various real-world applications
related to prediction modeling, pattern recognition, feature selection, etc. Some examples of such applications are cement-decomposing-furnace production-process modeling [53], time-series prediction from
gene expression profiling [54]. stock-index modeling [39], anomaly detection in peer-to-peer traffic [55],
intrusion detection [56], face identification [57], gesture recognition [58], shareholder’s management risk
prediction [59], cancer classification [60], somatic mutation, risk prediction in grid computing [61], etc.
The following conclusions can be drawn from the literature survey. First, FNT was successfully used
in various real-world applications with better performance than other existing function approximation
models; however, it was mostly used in time-series analysis. Second, the lowest approximation error
obtained by an individual FNT during the evolutionary phase was considered the best structure and was
propagated to the parameter optimization phase; hence, structural simplicity and generalization ability
were not taken into consideration. Third, the computational nodes of the FNT were fixed initially, and
little effort was made to allow their automatic adaptation. Fourth, little attention was paid to the
statistical validation of the FNT model, e.g., mostly a single best model was presented as the experimental
outcome. However, since the evolutionary process and the meta-heuristics are stochastic in nature,
statistical validation is crucial for performance comparisons. Finally, to create a generalized model, an
ensemble of FNTs was used; however, the FNTs were created separately for making the ensemble. Due
to the stochastic nature of the evolutionary process, FNTs can be structurally distinct when created at
different instances. Therefore, no explicit attention was paid to creating diverse FNTs within a population
itself for making an ensemble. In this article, a heterogeneous FNT called HFNT was proposed to improve
the basic FNT model and its performance by addressing the above-mentioned shortcomings.
3. Multi-objectives and Flexible Neural Tree
In this section, first, the Pareto-based multiobjective approach is discussed. Second, we offer a detailed
discussion of FNT and its structure and parameter optimization using NSGA-II-based MOGP and DE,
respectively. This is followed by a discussion of how an evolutionary weighted ensemble is built from the
candidates of the final population.
3.1. Pareto-Based Multi-objectives
Usually, learning algorithms have a single objective, i.e., approximation error minimization, which
is often achieved by minimizing the mean squared error (MSE) on the learning data. The MSE E on a
learning dataset is computed as:
E = (1/N) ∑_{i=1}^{N} (d_i − y_i)²,   (1)
where d_i and y_i are the desired output and the model's output, respectively, and N indicates the total
number of data pairs in the learning set. Additionally, a statistical goodness measure, the correlation
coefficient r, which tells the relationship between two variables (here, between the desired output d and
the model's output y), may also be used as an objective. The correlation coefficient r is computed as:
r = ∑_{i=1}^{N} (d_i − d̄)(y_i − ȳ) / √( ∑_{i=1}^{N} (d_i − d̄)² · ∑_{i=1}^{N} (y_i − ȳ)² ),   (2)
where d̄ and ȳ are means of the desired output d and the model’s output y, respectively.
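For concreteness, both objectives can be computed in a few lines; the sketch below is our own illustrative code (not the tool of [75]) implementing (1) and (2) with NumPy.

import numpy as np

def mse(d, y):
    """Mean squared error E between desired outputs d and model outputs y, as in (1)."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return np.mean((d - y) ** 2)

def correlation_coefficient(d, y):
    """Correlation coefficient r between desired outputs d and model outputs y, as in (2)."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    dc, yc = d - d.mean(), y - y.mean()
    return np.sum(dc * yc) / np.sqrt(np.sum(dc ** 2) * np.sum(yc ** 2))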
However, a single objective comes at the expense of the model's complexity or generalization ability on
unseen data, where generalization ability broadly depends on the model's complexity [62]. A common
model-complexity indicator is the number of free parameters in the model. Minimizing the approximation
error (1) and minimizing the number of free parameters are two conflicting objectives. One approach is to
combine these two objectives as:
f = αE + (1 − α)D,   (3)
where 0 ≤ α ≤ 1 is a constant, E is the MSE (1), and D is the total number of free parameters in a model.
The scalarized objective f in (3), however, has two disadvantages. First, an appropriate α that controls
the conflicting objectives must be determined; hence, the generalization ability of the produced model
remains a mystery [63]. Second, the scalarized objective f in (3) leads to a single best model that tells
nothing about how the conflicting objectives were traded off. In other words, no single solution exists
that satisfies both objectives simultaneously.
We study a multiobjective optimization problem of the form:
minimize {f1 (w), f2 (w), . . . , fm (w)}
subject to w ∈ W
where we have m ≥ 2 objective functions f_i : R^n → R. We denote the vector of objective functions by
f(w) = ⟨f_1(w), f_2(w), . . . , f_m(w)⟩. The decision (variable) vectors w = ⟨w_1, w_2, . . . , w_n⟩ belong to the
set W ⊂ R^n, which is a subset of the decision variable space R^n. The word 'minimize' means that we
want to minimize all the objective functions simultaneously.
A nondominated solution is one in which no one objective function can be improved without a simultaneous detriment to at least one of the other objectives of the solution [21]. The nondominated solution
is also known as a Pareto-optimal solution.
Definition 1. Pareto-dominance - A solution w1 is said to dominate a solution w2 if ∀i = 1, 2, . . . , m,
fi (w1 ) ≤ fi (w2 ), and there exists j ∈ {1, 2, . . . , m} such that fj (w1 ) < fj (w2 ) holds.
Definition 2. Pareto-optimal - A solution w1 is called Pareto-optimal if there does not exist any other
solution that dominates it. The set of Pareto-optimal solutions is called the Pareto-front.
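Definitions 1 and 2 translate directly into code. The following sketch (illustrative naming; all objectives assumed to be minimized) tests dominance and extracts the rank-1 Pareto-front from a list of objective vectors.

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (Definition 1, minimization)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def pareto_front(objectives):
    """Return the indices of nondominated solutions (the rank-1 Pareto-front, Definition 2)."""
    front = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            front.append(i)
    return front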
Algorithm 1 is a basic framework of NSGA-II-based MOGP, which was used for computing Pareto-optimal
solutions from an initial HFNT population. The individuals in MOGP were sorted according to
their dominance in the population. Note that the function size(·) returns the total number of rows
(population size) for a 2-D matrix and the total number of elements for a vector. Moreover, individuals
were sorted according to their rank/Pareto-front. MOGP is an elitist algorithm that allows the best
individuals to propagate into the next generation. Diversity in the population was maintained by
measuring the crowding distance among the individuals [21].
Data: Problem and Objectives
Result: A bag M of solutions selected from Pareto-fronts
initialization: HFNT population P;
evaluation: nondominated sorting of P;
while termination criteria not satisfied do
    selection: binary tournament selection;
    generation: a new population Q;
    recombination: R = P + Q;
    evaluation: nondominated sorting of R;
    elitism: P = size(P) best individuals from R;
end
Algorithm 1: NSGA-II based multiobjective genetic programming
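A minimal Python skeleton of Algorithm 1 is given below. The helpers evaluate, nondominated_sort, binary_tournament, crossover, and mutate are assumed to exist (they correspond to the MOGP components described in Section 3.3); the sketch only mirrors the control flow of the listing above.

import random

def mogp(init_population, evaluate, nondominated_sort, binary_tournament,
         crossover, mutate, generations, pc=0.7, pm=0.3):
    """Skeleton of Algorithm 1. `evaluate` returns a tree's objective vector;
    `nondominated_sort` sorts (tree, objectives) pairs by rank and crowding distance."""
    P = nondominated_sort([(t, evaluate(t)) for t in init_population])
    for _ in range(generations):
        Q = []
        while len(Q) < len(P):                      # generate offspring population Q
            t1, _ = binary_tournament(P)
            t2, _ = binary_tournament(P)
            child = crossover(t1, t2) if random.random() < pc else t1
            if random.random() < pm:
                child = mutate(child)
            Q.append((child, evaluate(child)))
        R = nondominated_sort(P + Q)                # recombination + nondominated sorting
        P = R[:len(P)]                              # elitism: keep size(P) best individuals
    return P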
3.2. Heterogeneous Flexible Neural Tree
HFNT is analogous to a multi-layer feedforward neural network that has over-layer connections and
activation functions at the nodes. HFNT construction has two phases [1]: 1) the tree construction phase,
in which evolutionary algorithms are applied to construct a tree-like structure; and 2) the parameter-tuning
phase, in which the genotype of the HFNT (the underlying parameters of the tree structure) is optimized
by using parameter optimization algorithms.
To create a near-optimum model, phase one starts with random tree-like structures (population of
initial solutions), where parameters of each tree are fixed by a random guess. Once a near-optimum
tree structure is obtained, parameter-tuning phase optimizes its parameter. The phases are repeated
until a satisfactory solution is obtained. Figure 1 is a lucid illustration of these two phases that work
in a co-evolutionary manner. From Figure 1, it may be observed that the two global search algorithms
MOGP (for structure optimization) and DE (for parameter optimization) work in a nested manner to
obtain a near-optimum tree that may have a less complex tree structure and better parameters. Moreover,
the evolutionary algorithm allowed HFNT to select activation functions and input features at the nodes
from sets of activation functions and input features, respectively. Thus, HFNT possesses automatic feature
selection ability.
3.2.1. Basic Idea of HFNT
An HFNT S is a collection of function set F and instruction set T :
S = F ∪ T = { +_2^{U(k)}, +_3^{U(k)}, . . . , +_tn^{U(k)} } ∪ { x_1, x_2, . . . , x_d },   (4)
where +_j^{k} (j = 2, 3, . . . , tn) denotes a non-leaf instruction (a computational node). It receives 2 ≤ j ≤ tn
arguments, and U(k) is a function that randomly draws an activation function from a set of k activation
functions. The maximum number of arguments tn of a computational node is predefined. A set of seven activation
Figure 1: Co-evolutionary construction of the heterogeneous flexible neural tree. (Flowchart: the training data and parameter settings initialize an HFNT population; MOGP (NSGA-II-based nondominated sorting) or SOGP (fitness-based sorting) generates new populations via selection, crossover, and mutation until the maximum number of iterations; DE then tunes the parameters of a selected fixed HFNT structure until its maximum number of iterations; the two phases repeat until a satisfactory solution is found.)
Table 1: Set of activation functions used in neural tree construction
k   Activation function          Expression for ϕ_i^k(a, b, x)
1   Gaussian function            f(x, a, b) = exp(−((x − a)²)/(b²))
2   Tangent-hyperbolic           f(x) = (e^x − e^(−x))/(e^x + e^(−x))
3   Fermi function               f(x) = 1/(1 + e^(−x))
4   Linear Fermi                 f(x, a, b) = a · 1/(1 + e^(−x)) + b
5   Linear tangent-hyperbolic    f(x, a, b) = a · (e^x − e^(−x))/(e^x + e^(−x)) + b
6   Bipolar sigmoid              f(x, a) = (1 − e^(−2xa))/(a(1 + e^(−2xa)))
7   Unipolar sigmoid             f(x, a) = (2|a|)/(1 + e^(−2|a|x))
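The seven functions of Table 1 can be transcribed as follows (an illustrative mapping of our own; every entry takes (x, a, b) for a uniform interface even when a or b is unused).

import numpy as np

# Illustrative transcription of Table 1; the key k indexes the activation function.
ACTIVATIONS = {
    1: lambda x, a, b: np.exp(-((x - a) ** 2) / (b ** 2)),                          # Gaussian
    2: lambda x, a, b: np.tanh(x),                                                   # tangent-hyperbolic
    3: lambda x, a, b: 1.0 / (1.0 + np.exp(-x)),                                     # Fermi
    4: lambda x, a, b: a * (1.0 / (1.0 + np.exp(-x))) + b,                           # linear Fermi
    5: lambda x, a, b: a * np.tanh(x) + b,                                           # linear tangent-hyperbolic
    6: lambda x, a, b: (1 - np.exp(-2 * x * a)) / (a * (1 + np.exp(-2 * x * a))),    # bipolar sigmoid
    7: lambda x, a, b: (2 * abs(a)) / (1 + np.exp(-2 * abs(a) * x)),                 # unipolar sigmoid
}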
functions is shown in Table 1. The leaf-node instructions x_1, x_2, . . . , x_d denote input variables. Figure 2 is
an illustration of a typical HFNT. Similarly, Figure 3 is an illustration of a typical node in an HFNT.
The i-th computational node (Figure 3) of a tree (say the i-th node in Figure 2) receives n_i inputs (denoted
z_j^i) through n_i connection weights (denoted w_j^i) and takes two adjustable parameters a_i and b_i that
represent the arguments of the activation function ϕ_i^k(.) at that node. The purpose of using an activation
function at a computational node is to limit the output of the computational node within a certain range.
For example, if the i-th node contains the Gaussian function (k = 1 in Table 1), then its output y_i is
computed as:
y_i = ϕ_i^k(a_i, b_i, o_i) = exp( −((o_i − a_i)²)/(b_i²) ),   (5)
where o_i is the weighted summation of the inputs z_j^i and weights w_j^i (j = 1 to n_i) at the i-th computational
node (Figure 3), also known as the excitation of the node. The net excitation o_i of the i-th node is computed
as:
o_i = ∑_{j=1}^{n_i} w_j^i z_j^i,   (6)
where z_j^i ∈ {x_1, x_2, . . . , x_d} or z_j^i ∈ {y_1, y_2, . . . , y_m}, i.e., z_j^i can be either an input feature (leaf-node value)
or the output of another node (a computational-node output) in the tree. The weight w_j^i is a real-valued
connection weight in the range [w_l, w_u]. Similarly, the output y of a tree is computed from the root node
of the tree by recursively computing each node's output using (5) from right to left
in a depth-first manner.
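To make (5) and (6) concrete, the sketch below evaluates a tree by depth-first recursion over a minimal node representation; the class and field names are ours, and ACTIVATIONS refers to the mapping sketched after Table 1.

import numpy as np

class LeafNode:
    def __init__(self, feature_index):
        self.feature_index = feature_index           # selects x_j from the input vector

class FunctionNode:
    def __init__(self, k, a, b, weights, children):
        self.k, self.a, self.b = k, a, b              # activation index and its arguments
        self.weights = weights                        # one weight w_j^i per child
        self.children = children                      # leaf nodes or other function nodes

def evaluate(node, x):
    """Depth-first evaluation of an HFNT on one input vector x, following (5) and (6)."""
    if isinstance(node, LeafNode):
        return x[node.feature_index]
    z = np.array([evaluate(c, x) for c in node.children])   # child outputs z_j^i
    o = np.dot(node.weights, z)                              # net excitation o_i, eq. (6)
    return ACTIVATIONS[node.k](o, node.a, node.b)            # node output y_i, eq. (5)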
The fitness of a tree depends on the problem. Usually, the learning algorithm uses the approximation error,
i.e., MSE (1). Other fitness measures associated with the tree are tree size and diversity index. The tree
size is the number of nodes (excluding the root node) in a tree; e.g., the number of computational nodes and
leaf nodes in the tree in Figure 2 is 11 (three computational nodes and eight leaf nodes). The number
of distinct activation functions (including the root-node function) randomly selected from the set of activation
functions gives the diversity index of a tree. The total number of activation functions (denoted as k in +_j^k) selected by
the tree in Figure 2 is three (+_3^1, +_3^4, and +_3^5). Hence, its diversity index is three.
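Under the same toy representation, the two structural fitness measures can be computed as below (a sketch; the root-exclusion and root-inclusion conventions follow the text).

def tree_size(root):
    """Number of nodes in the tree excluding the root, as defined above."""
    def count(node):
        if isinstance(node, LeafNode):
            return 1
        return 1 + sum(count(c) for c in node.children)
    return count(root) - 1                      # exclude the root node itself

def diversity_index(root):
    """Number of distinct activation functions used in the tree (root included)."""
    ks = set()
    def collect(node):
        if isinstance(node, FunctionNode):
            ks.add(node.k)
            for c in node.children:
                collect(c)
    collect(root)
    return len(ks)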
Figure 2: Typical representation of a neural tree S = F ∪ T whose function instruction set is F = {+_3^1, +_2^4, +_3^5} and terminal instruction set is T = {x_1, x_2, x_3, x_4}. The output y is computed from the root node by depth-first search.
Figure 3: Illustration of a computational node. The variable n_i indicates the number of inputs z_j^i and weights w_j^i received at the i-th node, and the variable y_i is the output of the i-th node.
3.3. Structure and Parameter Learning (Near optimal Tree)
A tree that offers the lowest approximation error and the simplest structure is a near optimal tree,
which can be obtained by using an evolutionary algorithm such as GP [27], PIPE [15], GEP [44], MEP [35],
and so on. To optimize tree parameters, algorithms such as genetic algorithm [64], evolution strategy [64],
artificial bee colony [65], PSO [25, 66], DE [22], and any hybrid algorithm such as GA and PSO [67] can
be used.
3.3.1. Tree-construction
The proposed multiobjective optimization of FNT has three fitness measures: approximation error (1)
minimization, tree size minimization, and diversity index maximization. These objectives are simultaneously optimized during the tree construction phase using MOGP, which guides an initial population
P of random tree structures according to Algorithm 1. The detailed descriptions of the components of
Algorithm 1 are as follows:
Selection. In selection operation, a mating pool of size size(P )r is created using binary tournament
selection, where two candidates are randomly selected from a population and the best (according to rank
and crowding distance) among them is placed into the mating pool. This process is continued until the
mating pool is full. An offspring population Q is generated by using the individuals of mating pool.
Two distinct individuals (parents) are randomly selected from the mating pool to create new individuals
using genetic operators: crossover and mutation. The crossover and mutation operators are applied with
probabilities pc and pm, respectively.
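A sketch of this selection step is given below; better(a, b) is an assumed comparator implementing the NSGA-II crowded comparison (rank first, then crowding distance).

import random

def binary_tournament(population, better):
    """Pick two random candidates and return the better one."""
    a, b = random.sample(population, 2)
    return a if better(a, b) else b

def fill_mating_pool(population, better, r=0.5):
    """Create a mating pool of size size(P)*r by repeated binary tournaments."""
    pool_size = int(len(population) * r)
    return [binary_tournament(population, better) for _ in range(pool_size)]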
Crossover. In crossover operation, randomly selected sub-trees of two parent trees were swapped. The
swapping includes the exchange of activation-nodes, weights, and inputs as it is described in [38, 64, 68].
Mutation. The mutation of a selected individual from mating pool took place in the following manner [38,
64, 68]:
1) A randomly selected terminal node is replaced by a newly generated terminal node.
2) All terminal nodes of the selected tree were replaced by randomly generated new terminal nodes.
3) A randomly selected terminal node or a computational node is replaced by a randomly generated
sub-tree.
4) A randomly selected terminal node is replaced by a randomly generated computational node.
In the proposed MOGP, during the each mutation operation event, one of the above-mentioned four
mutation operators was randomly selected for mutation of the tree.
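In code, this amounts to a uniform random dispatch over the four operators; the operator bodies themselves are assumed helpers implementing items 1)-4) above.

import random

def mutate(tree, replace_terminal, replace_all_terminals, grow_subtree, replace_with_node):
    """Apply one of the four mutation operators, chosen uniformly at random.
    The four operator functions are assumed helpers implementing items 1)-4)."""
    operator = random.choice(
        [replace_terminal, replace_all_terminals, grow_subtree, replace_with_node]
    )
    return operator(tree)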
Recombination. The offspring population Q and the main population P , are merged to make a combined
population R.
Elitism. In this step, size(Q) worst individuals are weeded out. In other words, size(P ) best individuals
are propagated to a new generation as main population P .
3.3.2. Parameter-tuning
In parameter-tuning phase, a single objective, i.e., approximation error was used in optimization of
HFNT parameter by DE. The tree parameters such as weights of tree edges and arguments of activation
functions were encoded into a vector w = hw1 , w2 , . . . , wn i for the optimization. In addition, a crossvalidation (CV) phase was used for statistical validation of HFNTs.
The basics of DE are as follows. For an initial population H of parameter vectors w_i, i = 1 to size(H),
DE repeats its mutation, recombination, and selection steps until an optimum parameter vector w∗ is
obtained. DE updates each parameter vector w_i ∈ H by selecting the best vector w^g and three random
vectors r^0_i, r^1_i, and r^2_i from H such that r^0_i ≠ r^1_i ≠ r^2_i holds. The random vector r^0_i is taken as the
base of a trial vector w_i^t. Hence, for all i = 1, 2, . . . , size(H) and j = 1, 2, . . . , n, the j-th variable w_ij^t
of the i-th trial vector w_i^t is generated using crossover, mutation, and recombination as:
w_ij^t = r_ij^0 + F (w_ij^g − r_ij^0) + F (r_ij^1 − r_ij^2)   if u_ij < cr or j = k
w_ij^t = r_ij^0                                              if u_ij ≥ cr and j ≠ k        (7)
where k is a random index in {1, 2, . . . , n}, u_ij is a uniform random number in [0, 1], cr is the crossover
probability, and F ∈ [0, 2] is the mutation factor. The trial vector w_i^t is selected according to:
w_i = w_i^t   if f(w_i^t) < f(w_i)
w_i = w_i     if f(w_i^t) ≥ f(w_i)        (8)
where f(.) returns the fitness of a vector as per (1). Hence, the process of crossover, mutation, recombination,
and selection is repeated until an optimal parameter vector solution w∗ is found.
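The sketch below follows (7) and (8) with a rand-to-best/1/bin-style update; it is an illustration of the scheme rather than the exact implementation (the experiments use cr = 0.9 and F = 0.7, see Section 4).

import numpy as np

def differential_evolution(fitness, pop, cr=0.9, F=0.7, iterations=1000, rng=None):
    """Sketch of DE parameter tuning following (7) and (8).
    `fitness(w)` returns the MSE of a parameter vector; `pop` is an (N, n) array."""
    rng = rng or np.random.default_rng()
    pop = np.asarray(pop, dtype=float)
    fit = np.array([fitness(w) for w in pop])
    n = pop.shape[1]
    for _ in range(iterations):
        best = pop[np.argmin(fit)]                           # current best vector w^g
        for i in range(len(pop)):
            r0, r1, r2 = pop[rng.choice(len(pop), 3, replace=False)]
            k = rng.integers(n)                              # index that always crosses over
            u = rng.random(n)
            trial = np.where((u < cr) | (np.arange(n) == k),
                             r0 + F * (best - r0) + F * (r1 - r2),   # eq. (7)
                             r0)
            ft = fitness(trial)
            if ft < fit[i]:                                  # greedy selection, eq. (8)
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)]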
3.4. Ensemble: Making use of MOGP Final Population
In tree construction phase, MOGP provides a population from which we can select tree models for
making the ensemble. Three conflicting objectives such as approximation error, tree size, and diversity
index allows the creation of Pareto-optimal solutions, where solutions are distributed on various Paretooptimal fronts according to their rank in population. Ensemble candidates can be selected from the first
line of solutions (Front 1), or they can be chosen by examining the three objectives depending on the user’s
need and preference. Accuracy and diversity among the ensemble candidate are important [20]. Hence, in
this work, approximation error, and diversity among the candidates were given preference over tree size.
Not to confuse “diversity index ” with “diversity”. The diversity index is an objective in MOGP, and the
diversity is the number of distinct candidates in an ensemble. A collection M of the diverse candidate
is called a bag of candidates [69]. In this work, any two trees were considered diverse (distinct) if the
following hold: 1) the two trees are of different size; 2) the number of function nodes or leaf nodes in the two
trees differs; 3) the two models use a different set of input features; 4) the two models use a different
set of activation functions. Hence, the diversity div of an ensemble M (a bag of solutions) was computed as:
div = distinct(M) / size(M),   (9)
where distinct(M ) is a function that returns total distinct models in an ensemble M and size(M ) is a
total number of models in the bag.
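Given a predicate implementing the four distinctness criteria above, (9) can be computed as in the sketch below (is_distinct is an assumed helper; the greedy de-duplication is one possible reading of distinct(M)).

def ensemble_diversity(bag, is_distinct):
    """Diversity div of an ensemble bag M, eq. (9): fraction of mutually distinct models.
    `is_distinct(m1, m2)` is an assumed predicate implementing criteria 1)-4) above."""
    distinct_models = []
    for m in bag:
        if all(is_distinct(m, d) for d in distinct_models):
            distinct_models.append(m)
    return len(distinct_models) / len(bag)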
Now, for a classification problem, to compute the combined vote of the respective candidates' outputs m_1,
m_2, . . . , m_size(M) of bag M over classes ω_1, ω_2, . . . , ω_C, we used an indicator function I(.) that takes the
value 1 if '.' is true and 0 if '.' is false. Thus, the ensemble decision by weighted majority voting is computed
as [70, 71]:
y = arg max_{j=1,...,C} ∑_{t=1}^{size(M)} w_t I(m_t = ω_j),   (10)
where wt is weight associated with the t-th candidate mt in an ensemble M and y is set to class ωj if the
total weighted vote received by ωj is higher than the total vote received by any other class. Similarly, the
ensemble of regression methods was computed by weighted arithmetic mean as [70]:
y = ∑_{t=1}^{size(M)} w_t m_t,   (11)
where wt and mt are weight and output of t-th candidate in a bag M , respectively, and y is the ensemble
output, which is then used for computing MSE (1) and correlation coefficient (2). The weights may be
computed according to fitness of the models, or by using a metaheuristic algorithm. In this work, DE
was applied to compute the ensemble weights wt , where population size was set to 100 and number of
function evaluation was set to 300,000.
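The two combination rules (10) and (11) are easy to state in code; the sketch below assumes the candidate weights w_t are already available (in this work they were tuned by DE).

import numpy as np

def weighted_majority_vote(predictions, weights, classes):
    """Ensemble decision by weighted majority voting, eq. (10).
    `predictions[t]` is the class predicted by candidate t for one sample."""
    votes = {c: 0.0 for c in classes}
    for pred, w in zip(predictions, weights):
        votes[pred] += w                          # accumulate w_t * I(m_t = ω_j)
    return max(votes, key=votes.get)

def weighted_mean(outputs, weights):
    """Regression ensemble output by weighted arithmetic mean, eq. (11)."""
    return float(np.dot(weights, outputs))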
3.5. Multiobjective: A General Optimization Strategy
A summary of general HFNT learning algorithm is as follows:
Step 1. Initializing HFNT training parameters.
Step 2. Apply tree construction phase to guide initial HFNT population towards Pareto-optimal solutions.
Step 3. Select tree-model(s) from MOGP final population according to their approximation error, tree
size, and diversity index from the Pareto front.
Step 4. Apply parameter-tuning phase to optimize the selected tree-model(s).
Step 5. Go to Step 2, if no satisfactory solution found. Else go to Step 6.
Step 6. Use a cross-validation (CV) method to validate the chosen model(s).
Step 7. Use the chosen tree-model(s) for making ensemble (recommended).
Step 8. Compute ensemble results of the ensemble model (recommended).
Table 2: Multiobjective flexible neural tree parameter set-up for the experiments
Parameter      Definition                                    Default range                 Value
Scaling        Input-features scaling range.                 [dl, du], dl ∈ R, du ∈ R      [0, 1]
Tree height    Maximum depth (layers) of a tree model.       td ∈ Z+, td > 1               4
Tree arity     Maximum arguments of a node +_tn^k.           tn ∈ Z+, tn ≥ 2               5
Node range     Search space of function arguments.           [nl, nu], nl ∈ R, nu ∈ R      [0, 1]
Edge range     Search space for edges (weights) of tree.     [wl, wu], wl ∈ R, wu ∈ R      [−1, 1]
P              MOGP population.                              size(P) > 20                  30
Mutation       Mutation probability.                         pm                            0.3
Crossover      Crossover probability.                        pc = 1 − pm                   0.7
Mating pool    Size of the pool of selected candidates.      size(P)r, 0 ≤ r ≤ 1           0.5
Tournament     Tournament selection size.                    2 ≤ bt ≤ size(P)              2
H              DE population.                                size(H) ≥ 50                  50
General ig     Maximum number of trials.                     ig ∈ Z+, ig > 1               3
Structure is   MOGP iterations.                              is ∈ Z+, is ≥ 50              30
Parameter ip   DE iterations.                                ip ∈ Z+, ip ≥ 100             1000
4. Experimental Set-Up
Several experiments were designed for evaluating the proposed HFNTM . A careful parameter-setting
was used for testing its efficiency. A detailed description of the parameter-setting is given in Table 2,
which includes definitions, default ranges, and selected values. The phases of the algorithm were repeated
until the stopping criteria were met, i.e., either the lowest predefined approximation error was achieved or the
maximum number of function evaluations was reached. The repetition holds the key to obtaining a good solution: a
carefully designed repetition of these two phases may offer a good solution in fewer function evaluations.
In this experiment, three general repetitions ig were used with 30 tree construction iterations is , and
1000 parameter-tuning iterations ip (Figure 1). Hence, the maximum function evaluation1 [size(P ) +
i_g {i_s (size(P) + size(P)r) + i_p size(H)}] was 154,080. The DE version DE/rand-to-best/1/bin [22]
with cr equal to 0.9 and F equal to 0.7 was used in the parameter-tuning phase.
The experiments were conducted over classification, regression, and time-series datasets. A detailed
description of the chosen datasets from the UCI machine learning [72] and KEEL [73] repositories is available
in Table A.17. The parameter setting mentioned in Table 2 was used for the experiments over each dataset.
Since stochastic algorithms depend on random initialization, the Mersenne Twister pseudorandom number
generator, which draws random values from a probability distribution in a pseudo-random manner, was used
for the initialization of HFNTs [74]. Hence, each run of the experiment was conducted
with a random seed drawn from the system. We compared HFNTM performance with various other
1. Initial GP population + three repetitions × ((GP population + mating pool size) × MOGP iterations + MH population × MH iterations) = 30 + 3 × [(30 + 15) × 30 + 50 × 1000] = 154,080.
Figure 4: Pareto-front of a final population of 50 individuals generated from the training dataset of the time-series problem MGS. (a) Error versus tree size versus diversity index: 3-D plot of the solutions, where the Pareto-front is a surface. (b) Error versus tree size and diversity index: 2-D plot of error versus complexity (blue dots) and error versus diversity (red squares).
approximation models collected from literature. A list of such models is provided in Table B.18. A
developed software tool based on the proposed HFNTM algorithm for predictive modeling is available
in [75].
To construct good ensemble systems, highly diverse and accurate candidates were selected in the
ensemble bag M . To increase diversity (9) among the candidates, the Pareto-optimal solutions were
examined by giving preference to candidates with low approximation error, small tree size, and distinctness
from the other selected candidates. Hence, size(M) candidates were selected from a population P. An
illustration of such selection method is shown in Figure 4, which represents an MOGP final population of
50 candidate solutions computed over dataset MGS.
MOGP simultaneously optimized three objectives. Hence, the solutions were arranged on a three-dimensional
map (Figure 4(a)), in which error was plotted along the x-axis, tree size along the y-axis, and diversity
index along the z-axis. However, for simplicity, we also arranged the solutions in 2-D plots (Figure 4(b)),
in which the computed error was plotted along the x-axis, and tree size (blue dots) and diversity index
(red squares) along the y-axis. From Figure 4(b), it is evident that a clear choice is difficult, since decreasing
approximation error increases the model's tree size (blue dots in Figure 4(b)). Similarly, decreasing
approximation error increases tree size and diversity (red squares in Figure 4(b)). Hence, solutions along the
Pareto-front (rank-1), i.e., the Pareto surface indicated in the 3-D map of the solutions in Figure 4(a), were chosen for
the ensemble. For all datasets, ensemble candidates were selected by examining Pareto-fronts in a similar
fashion as described for the dataset MGS in Figure 4.
Figure 5: Comparison of single and multiobjective optimization course. (a) Single objective optimization; (b) multiobjective optimization.
The purpose of our experiment was to obtain sufficiently good prediction models by enhancing predictability and lowering complexity. We used MOGP for the optimization of HFNTs; hence, we compromised on fitness by lowering model complexity. In single objective optimization, we only looked at a model's
fitness and therefore did not possess control over the model's complexity. Figure 5 illustrates eight runs each of
the single and multiobjective optimization course of HFNT, where the models' tree size (complexity) is indicated
along the y-axis and the x-axis indicates the fitness value of the HFNT models. The results shown in Figure 5
were obtained on the MGS dataset. For both single objective GP and multiobjective GP, the optimization
course was recorded, i.e., the successive fitness reduction and tree size were noted for 1000 iterations.
It is evident from Figure 5 that the HFNTM approach leads the HFNT optimization by lowering the model's
complexity, whereas in the single objective case the model's complexity was unbounded and increased abruptly.
The average tree sizes over eight runs of single and eight runs of multiobjective optimization were 39.265 and
10.25, respectively, whereas the average fitness values were 0.1423 and 0.1393, respectively. However, in single
objective optimization, given that the tree size is unbounded, the fitness of a model may improve
at the expense of the model's complexity. Hence, the experiments were set up for multiobjective optimization,
which provides a balance between both objectives as described in Figure 4.
5. Results
Experimental results were classified into three categories: classification, regression, and time-series.
Each category has two parts: 1) First part describes the best and average results obtained from the
experiments; 2) Second part describes ensemble results using tabular and graphical form.
5.1. Classification dataset
We chose five classification datasets for evaluating HFNTM , and the classification accuracy was computed as:
f_a = (tp + tn) / (tp + fn + fp + tn),   (12)
where tp is the total positive samples correctly classified as positive samples, tn is the total negative
samples correctly classified as negative samples, f p is the total negative samples incorrectly classified as
positive samples, and f n is the total positive samples incorrectly classified as negative samples. Here, for
a binary class classification problem, the positive sample indicates the class labeled with ‘1’ and negative
sample indicates class labeled with ‘0’. Similarly, for a three-class ( ω1 , ω2 , and ω3 ) classification problem,
the samples which are labeled as a class ω1 are set to 1, 0, 0, i.e., set to positive for class ω1 and negative
for ω2 , and ω3 . The samples which are labeled as a class ω2 are set to 0, 1, 0, and the samples which are
labeled as a class ω3 are set to 0, 0, 1.
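The per-class 0/1 encoding and the accuracy (12) can be written as below (an illustrative sketch with our own function names).

import numpy as np

def one_hot(labels, n_classes):
    """Encode integer class labels 0..C-1 as 0/1 indicator vectors, as described above."""
    y = np.zeros((len(labels), n_classes))
    y[np.arange(len(labels)), labels] = 1
    return y

def accuracy(tp, tn, fp, fn):
    """Classification accuracy f_a, eq. (12)."""
    return (tp + tn) / (tp + fn + fp + tn)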
5.1.1. 10-Fold CV
The experiments on classification dataset were conducted in three batches that produced 30 models,
and each model was cross-validated using 10-fold CV, in which a dataset is equally divided into 10 sets
and the training of a model was repeated 10 times. Each time a distinct set was picked for the testing
the models, and the rest of nine set was picked for the training of the model. Accordingly, the obtained
results are summarized in Table 3. Each batch of experiment produced an ensemble system of 10 models
whose results are shown in Table 7.
The results presented in Table 3 describe the best and mean results of the 30 models. We
present a comparative study of the best 10-fold CV results of HFNTM and the results reported in
the literature in Table 4. In Table 4, the results of HDT and FNT [29] are 10-fold CV results on the
test dataset, whereas the result of FNT [76] is the best test accuracy and not a CV result. The
results summarized in Table 4 suggest a comparatively better performance of the proposed HFNTM over
the previous approaches. To illustrate a model created by the HFNTM approach, we chose the best
model of dataset WDB, which has a test accuracy of 97.02% (shown in Table 3). A pictorial representation
of the WDB model is shown in Figure 6; the model's tree size is 7, it uses five input features
(x_3, x_4, x_12, x_17, and x_22), and the selected activation function is tangent-hyperbolic (k = 2) at both
non-leaf nodes. Similarly, we may represent the models of all other datasets.
Table 3: Best and mean results of 30 10-fold CV models (300 runs) of HFNTM
                Best of 30 models                              Mean of 30 models
Data   train fa   test fa   tree size   Features     train fa   test fa   avg. tree size   diversity
AUS    87.41%     87.39%    4           3            86.59%     85.73%    5.07             0.73
HRT    87.41%     87.04%    8           5            82.40%     80.28%    7.50             0.70
ION    90.92%     90.29%    5           3            87.54%     86.14%    6.70             0.83
PIM    78.67%     78.03%    10          5            71.12%     70.30%    6.33             8.67
WDB    97.02%     96.96%    6           5            94.51%     93.67%    7.97             0.73
In this work, the Friedman test was conducted to examine the significance of the algorithms. For this
purpose, the classification accuracy (test results) was considered (Table 4). The average ranks obtained
by each method in the Friedman test are shown in Table 5. The Friedman statistic at α = 0.05 (distributed
according to chi-square with 2 degrees of freedom) is 5.991, i.e., χ²(α,2) = 5.991. The obtained test value Q
Table 4: Comparative results: 10-fold CV test accuracy fa and variance σ of algorithms
             AUS                HRT                ION                PIM                WDB
Algorithm    test fa    σ       test fa    σ       test fa    σ       test fa    σ       test fa    σ
HDT [29]     86.96%     2.058   76.86%     2.086   89.65%     1.624   73.95%     2.374   –          –
FNT [29]     83.88%     4.083   83.82%     3.934   88.03%     0.953   77.05%     2.747   –          –
FNT [76]     –          –       –          –       –          –       –          –       93.66%     n/a
HFNTM        87.39%     0.029   87.04%     0.053   90.29%     0.044   78.03%     0.013   96.96%     0.005
according to the Friedman statistic is 6. Since Q > χ²(α,2), the null hypothesis that "there is no difference
between the algorithms" is rejected. In other words, the p-value computed by the Friedman test is 0.049787,
which is less than or equal to 0.05, i.e., p-value ≤ α-value. Hence, we reject the null hypothesis.
Table 5 describes the significance of the differences between the algorithms. To compare the differences
between the best-ranked algorithm in the Friedman test, i.e., the proposed algorithm HFNTM, and the
other two algorithms, Holm's method [77] was used. Holm's method rejects the hypothesis of equality
between the best algorithm (HFNTM) and another algorithm if the p-value is less than α/i, where i is
the position of the algorithm in a list sorted in ascending order of z-value (Table 6). From the post hoc
analysis, it was observed that the proposed algorithm HFNTM outperformed both the HDT [29] and FNT [29]
algorithms.
Table 5: Average rankings of the algorithms
Algorithm   Ranking
HFNTM       1.0
HDT         2.5
FNT         2.5
Table 6: Post hoc comparison between HFNTM and other algorithms for α = 0.1
i   Algorithm   z         p          α/i    Hypothesis
2   HDT         2.12132   0.033895   0.05   rejected
1   FNT         2.12132   0.033895   0.1    rejected
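The statistical procedure used here (Friedman test followed by Holm's step-down correction against the best-ranked algorithm) can be reproduced with standard tools; the sketch below uses SciPy's Friedman test and a hand-rolled Holm step with the usual rank-based z statistic, and is illustrative rather than the exact script behind Tables 4-6.

import numpy as np
from scipy.stats import friedmanchisquare, norm

def holm_post_hoc(avg_ranks, control, n_datasets, alpha=0.1):
    """Holm's step-down comparison of every algorithm against the control (best-ranked).
    `avg_ranks` maps algorithm name -> average Friedman rank."""
    k = len(avg_ranks)                                     # number of algorithms
    se = np.sqrt(k * (k + 1) / (6.0 * n_datasets))         # standard error of rank differences
    others = [a for a in avg_ranks if a != control]
    z = {a: abs(avg_ranks[a] - avg_ranks[control]) / se for a in others}
    p = {a: 2 * (1 - norm.cdf(z[a])) for a in others}
    decisions = {}
    for i, a in enumerate(sorted(others, key=lambda a: p[a]), start=1):
        threshold = alpha / (len(others) - i + 1)          # smallest p faces the strictest bound
        decisions[a] = "rejected" if p[a] <= threshold else "not rejected"
    return z, p, decisions

# Example usage (hypothetical per-dataset accuracies, one array per algorithm):
# stat, p_value = friedmanchisquare(acc_hfnt, acc_hdt, acc_fnt)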
5.1.2. Ensembles
The best accuracy and the average accuracy of 30 models presented in Table 3 are the evidence of
HFNTM efficiency. However, as mentioned earlier, a generalized solution may be obtained by using an
ensemble. All 30 models were created in three batches. Hence, three ensemble systems were obtained. The
results of those ensemble systems are presented in Table 7, where ensemble results are the accuracies fa
obtained by weighted majority voting (10). In Table 7, the classification accuracies fa were computed over
Figure 6: HFNT model of classification dataset WDB (test fa = 97.02%).
CV test dataset. From Table 7, it may be observed that high diversity among the ensemble candidates
offered comparatively higher accuracy. Hence, an ensemble model may be adopted by examining the
performance of an ensemble system, i.e., average tree size (complexity) of the candidates within the
ensemble and the selected input features.
An ensemble system created from genetic evolution and adaptation is crucial for feature selection
and analysis. The summarized ensemble results in Table 7 give the following useful information about the
HFNTM feature selection ability: 1) TSF - total selected features; 2) MSF - most significant (frequently
selected) features; and 3) MIF - most infrequently selected features. Table 7 illustrates the feature-selection
results.
Table 7: Ensemble results (10-fold CV) of each classification dataset
Data   Batch   test fa   avg. D   div (9)   TSF   MSF                                           MIF
AUS    1       86.96%    5        0.7       4     x6, x8, x10, x12                              x1, x2, x3, x11, x14
       2       85.51%    6        0.7       5
       3       86.81%    4.2      0.8       5
HRT    1       77.41%    6.8      0.5       6     x3, x4, x12, x13                              x6
       2       70.37%    7.6      0.6       9
       3       87.04%    8.1      1         10
ION    1       82.86%    7.2      0.9       15    x15, x16, x18, x19, x21, x23, x25, x30, x32   x2, x4, x5, x27
       2       90.29%    7.3      1         16
       3       86.57%    5.6      0.6       6
PIM    1       76.32%    6.9      1         8     x1, x3, x4, x5, x6, x7                        x2
       2       64.74%    5.6      0.7       7
       3       64.21%    7.4      0.9       8
WDB    1       94.29%    8.2      0.7       15    x21, x22, x24, x25                            x1, x5, x6, x8, x14, x20, x30
       2       93.75%    5        1         15
       3       94.29%    10.7     0.5       19
5.2. Regression dataset
5.2.1. 5-Fold CV
For the regression datasets, the performance of HFNTM was examined by using the 5-fold CV method, in which
the dataset was divided into 5 sets, each 20% in size, and the process was repeated five times. Each
time, four sets were used for training and one set for testing. Hence, a total of 5 runs were used for each model.
As described in [78], the halved MSE, i.e., 0.5 × E with E computed as per (1), was used for evaluating
HFNTM. The training MSE is denoted En and the test MSE is denoted Et. This setting of MSE
computation and cross-validation was adopted to allow comparison with the results collected from [78]. Table 8 presents
results of 5-fold CV of each dataset for 30 models. Hence, each presented result is averaged over a total 150
runs of experiments. Similarly, in Table 9, a comparison between HFNTM and other collected algorithms
from literature is shown. It is evident from comparative results that HFNTM performs very competitive
to other algorithms. The literature results were averaged over 30 runs of experiments; whereas, HFNTM
results were averaged of 150 runs of experiments. Hence, a competitive result of HFNTM is evidence of
its efficiency.
Moreover, HFNTM is distinct from the other algorithms mentioned in Table 9 because it performs
feature selection and model-complexity minimization simultaneously. The other algorithms, on the other
hand, used all available features. Therefore, the comparison was limited to assessing the average MSE,
where HFNTM, which gives simpler models than the others, stands firmly competitive. An illustration of
the best model of the regression dataset DEE is provided in Figure 7; the model offered a test MSE Et of
0.077, a tree size equal to 10, and four selected input features (x_1, x_3, x_4, and x_5). The selected activation
functions were unipolar sigmoid (+_2^7), bipolar sigmoid (+_2^6), tangent-hyperbolic (+_2^2), and Gaussian
(+_2^1). Note that while creating HFNT models, the datasets were normalized as described in Table 2 and
the outputs of the models were denormalized accordingly. Therefore, normalized inputs should be presented
to the tree (Figure 7), and the output y of the tree (Figure 7) should be denormalized.
Table 8: Best and mean results of 30 5-fold CV models (150 runs) of HFNTM
                Best of 30 models                                Mean of 30 models
Data   train En   test Et   tree size   #Features      train En   test Et    tree size   diversity
ABL    2.228      2.256     14          5              2.578      2.511      11.23       0.7
BAS    198250     209582    11          5              261811     288688.6   7.69        0.6
DEE    0.076      0.077     10          4              0.0807     0.086      11.7        0.7
ELV*   8.33       8.36      11          7              1.35       1.35       7.63        0.5
FRD    2.342      2.425     6           5              3.218      3.293      6.98        0.34
Note: * Results of ELV should be multiplied by 10^-5.
Table 9: Comparative results: 5-fold CV training MSE En and test MSE Et of algorithms
                ABL                BAS                    DEE                 ELV*               FRD
Algorithm       En       Et        En        Et           En       Et         En        Et       En       Et
MLP             –        2.694     –         540302       –        0.101      –         2.04     –        0.085
ANFIS-SUB       2.008    2.733     119561    1089824      3087     2083       61.417    61.35    3.158    0.433
TSK-IRL         2.581    2.642     –         –            0.545    882.016    –         –        –        –
LINEAR-LMS      2.413    2.472     224684    269123       0.081    0.085      4.254     4.288    1.419    3.612
LEL-TSK         2.04     2.412     9607      461402       0.662    0.682      0.322     1.07     3.653    3.194
METSK-HDe       2.205    2.392     47900     368820       0.03     0.103      6.75      7.02     1.075    1.887
HFNTM**         2.578    2.511     261811    288688.6     0.0807   0.086      1.35      1.35     3.218    3.293
Note: * ELV results should be multiplied by 10^-5. ** HFNTM results were averaged over 150 runs, compared to MLP,
ANFIS-SUB, TSK-IRL, LINEAR-LMS, LEL-TSK, and METSK-HDe, which were averaged over 30 runs.
For the regression datasets, the Friedman test was conducted to examine the significance of the algorithms.
For this purpose, the best test MSEs of the algorithms MLP, ANFIS-SUB, TSK-IRL,
LINEAR-LMS, LEL-TSK, and METSK-HDe were taken from Table 9, and the best test MSE of the algorithm HFNTM
was taken from Table 8. The average ranks obtained by each method in the Friedman test are shown
in Table 10. The Friedman statistic at α = 0.05 (distributed according to chi-square with 5 degrees of
freedom) is 11, i.e., χ2(α,5) = 11. The obtained test value Q according to Friedman statistic is 11. Since
Q > χ2(α,5) , then the null hypothesis that “there is no difference between the algorithms” is rejected. In
other words, the computed p-value by Friedman test is 0.05 which is less than or equal to 0.05, i.e., p-value
≤ α-value. Hence, we reject the null hypothesis.
Table 10: Average rankings of the algorithms
Algorithm     Ranking
HFNTM         1.5
METSK-HDe     2.75
LEL-TSK       3.25
LINEAR-LMS    3.5
MLP           4.5
ANFIS-SUB     5.5
From the Friedman test, it is clear that the proposed algorithm HFNTM performed best among all
the algorithms. The post-hoc analysis presented in Table 11 describes the significance of the differences
between the algorithms. For this purpose, we applied Holm's method [77], which rejects the hypothesis of
equality between the best algorithm (HFNTM) and another algorithm if the p-value is less
than α/i, where i is the position of the algorithm in a list sorted in ascending order of z-value (Table 11).
In the obtained results, the equality between ANFIS-SUB, MLP, and HFNTM was rejected, whereas
the HFNTM equality with the other algorithms cannot be rejected at α = 0.1, i.e., with 90% confidence.
However, the p-values shown in Table 11 indicate the quality of their performance and their statistical
closeness to the algorithm HFNTM. It can be observed that the algorithm METSK-HDe performed closest
to the algorithm HFNTM, followed by LEL-TSK and LINEAR-LMS.
Table 11: Post hoc comparison between HFNTM and other algorithms for α = 0.1
i   Algorithm    z          p          α/i     Hypothesis
5   ANFIS-SUB    3.023716   0.002497   0.02    rejected
4   MLP          2.267787   0.023342   0.025   rejected
3   LINEAR-LMS   1.511858   0.13057    0.033   –
2   LEL-TSK      1.322876   0.185877   0.05    –
1   METSK-HDe    0.944911   0.344704   0.1     –
Figure 7: HFNT model of regression dataset DEE (test MSE Et = 0.077).
5.2.2. Ensembles
For each dataset, we constructed five ensemble systems, each using the 10 models of one batch. In each
batch, 10 models were created and cross-validated using 5 × 2-fold CV. In 5 × 2-fold CV, the dataset is
randomly divided into two equal sets, A and B. This partition of the dataset was repeated five times,
and each time set A was used for training, set B was used for testing, and vice versa. Hence, a total of
10 runs of experiments was performed for each model. The collected ensemble results
are presented in Table 12, where ensemble outputs were obtained by using weighted arithmetic mean as
mentioned in (11).
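The 5 × 2-fold protocol corresponds to five random 50/50 splits with the roles of the halves swapped; a sketch using scikit-learn's RepeatedKFold (an assumption about tooling, not the authors' code; X and y are NumPy arrays and model is any estimator with fit/predict) is shown below.

from sklearn.model_selection import RepeatedKFold

def five_by_two_cv(model, X, y, score):
    """5 x 2-fold CV: five random partitions into halves A and B; train on one half,
    test on the other, and vice versa, giving 10 runs per model."""
    scores = []
    rkf = RepeatedKFold(n_splits=2, n_repeats=5, random_state=0)
    for train_idx, test_idx in rkf.split(X):
        model.fit(X[train_idx], y[train_idx])
        scores.append(score(y[test_idx], model.predict(X[test_idx])))
    return scores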
The weights of models were computed by using DE algorithm, where the parameter setting was
similar to the one mentioned in classification dataset. Ensemble results shown in Table 12 are MSE
Table 12: Ensemble test MSE Et computed for 5 × 2-fold CV of 10 models in each batch
Data    Batch   MSE Et   rt     avg. D   div (9)   TSF   MSF                          MIF
ABL     1       3.004    0.65   5        0.1       3     x2, x3, x5, x6               x1
        2       2.537    0.72   8.3      1         7
        3       3.042    0.65   8.5      0.5       5
        4       2.294    0.75   10.7     1         7
        5       2.412    0.73   11.2     0.7       7
BAS*    1       2.932    0.79   5.6      0.3       5     x3, x7, x8, x9, x11, x13     x1, x2, x5, x6, x10
        2       3.275    0.76   8.2      0.3       6
        3       3.178    0.77   5        0.2       7
        4       3.051    0.78   5.7      0.3       5
        5       2.707    0.81   7.3      0.7       9
DEE     1       0.112    0.88   4.3      0.2       4     x1, x3, x4, x5, x6           x2
        2       0.115    0.88   8.9      0.6       6
        3       0.108    0.88   5.4      0.5       3
        4       0.123    0.87   10.8     0.9       5
        5       0.111    0.88   5.2      0.6       4
EVL**   1       1.126    0.71   9.3      0.1       12    x1, x3, x4, x6, x17          x7, x8, x12
        2       1.265    0.67   9.6      0.1       12
        3       1.124    0.71   10.4     0.1       15
        4       1.097    0.72   9.2      0.2       10
        5       2.047    0.31   3.8      0.4       3
FRD     1       3.987    0.86   6.2      0.2       4     x1, x2, x4, x5               x3
        2       4.154    0.83   8        0.2       4
        3       4.306    0.83   5.2      0.4       5
        4       3.809    0.86   7.8      0.5       4
        5       2.395    0.91   7.7      0.4       5
Note: * BAS results should be multiplied by 10^5; ** ELV results should be multiplied by 10^-5.
and correlation coefficient computed on CV test dataset. From ensemble results, it can be said that the
ensemble with higher diversity offered better results than the ensemble with lower diversity. The models
of the ensemble were examined to evaluate the MSF and MIF presented in Table 12. A graphical illustration
of the ensemble results is shown in Figure 8 using scatter (regression) plots; a scatter plot shows
how much one variable is affected by another (in this case the model's and desired outputs), i.e., it
tells the relationship between two variables, namely their correlation. The plots in Figure 8 represent the
best ensemble batches (numbers indicated in bold in Table 12), namely four, five, three, four, and five, whose MSEs
are 2.2938, 270706, 0.1085, 1.10E−05, and 2.3956, respectively. The values of r² in the plots indicate the
regression-curve fit over the CV test datasets. In other words, it can be said that the ensemble models
were obtained with generalization ability.
5.3. Time-series dataset
5.3.1. 2-Fold CV
In the literature survey, it was found that the efficiency of most FNT-based models was evaluated over
time-series datasets; mostly, the Mackey-Glass (MGS) dataset was used for this purpose. However, only the
best-obtained results were reported. For time-series prediction problems, the performance was computed
Figure 8: Regression plots (scatter plots of prediction versus target on the CV test sets) of the best ensemble batches: (a) dataset ABL, rt = 0.75, R² = 0.573; (b) dataset BAS, rt = 0.81, R² = 0.6272; (c) dataset DEE, rt = 0.88, R² = 0.7682; (d) dataset EVL, rt = 0.72, R² = 0.5481; (e) dataset FRD, rt = 0.91, R² = 0.8228.
using the root of mean squared error (RMSE), i.e., we took the square root of E given in (1). Additionally,
correlation coefficient (2) was also used for evaluating algorithms performance.
For the experiments, the first 50% of the dataset was taken for training and the remaining 50% was used for
testing. Table 13 describes the results obtained by HFNTM, where En is the RMSE on the training set and Et
is the RMSE on the test set. The best test RMSE obtained by HFNTM was Et = 0.00859 and Et = 0.06349
on datasets MGS and WWR, respectively. HFNTM results are competitive with most of the algorithms
listed in Table 14. Only a few algorithms such as LNF and FWNN-M reported better results than the one
obtained by HFNTM . FNT based algorithms such as FNT [1] and FBBFNT-EGP&PSO reported RMSEs
close to the results obtained by HFNTM . The average RMSEs and its variance over test-set of 70 models
were 0.10568 and 0.00283, and 0.097783 and 0.00015 on dataset MGS and WWR, respectively. The low
variance indicates that most models were able to produce results around the average RMSE value. The
results reported by other function approximation algorithms (Table 13) were merely the best RMSEs.
Hence, the robustness of other reported algorithm cannot be compared with the HFNTM . However, the
advantage of using HFNTM over other algorithms is evident from the fact that the average complexity of
the predictive models were 8.15 and 8.05 for datasets MGA and WWR, respectively.
The best model obtained for dataset WWR is shown in Figure 9, where the tree size is equal to 17 and
the selected activation functions are tangent-hyperbolic, Gaussian, unipolar sigmoid, bipolar sigmoid, and
linear tangent-hyperbolic. The selected input features in the tree (Figure 9) are x1, x2, x3,
and x4. Since the time-series category has only two datasets, and for each dataset HFNTM
was compared with different models from the literature, a statistical test was not conducted in this
category; the differences between the algorithms are easy to determine from Table 14.
Table 13: Best and mean results of 2-fold CV training RMSE En and test RMSE Et
               Best of 70 models                     Mean of 70 models
Data   En        Et        D    Features      En        Et        D
MGS    0.00859   0.00798   21   4             0.10385   0.10568   8.15
WWR    0.06437   0.06349   17   4             0.10246   0.09778   8.05
5.3.2. Ensembles
The ensemble results of time-series datasets are presented in Table 15, where the best ensemble system
of dataset MGS (marked bold in Table 15) offered a test RMSE Et = 0.018151 with a test correlation
coefficient rt = 0.99. Similarly, the best ensemble system of dataset WWR (marked bold in Table 15)
offered a test RMSE Et = 0.063286 with a test correlation coefficient rt = 0.953. However, apart from
the best results, most of the ensemble produced low RMSEs, i.e., high correlation coefficients. The best
ensemble batches (marked bold in Table 15) of dataset MGS and WWR were used for graphical plots in
Figure 10. A one-to-one fitting of target and prediction values is the evidence of a high correlation between
model’s output and desired output, which is a significant indicator of model’s efficient performance.
Table 14: Comparative results: training RMSE En and test RMSE Et for 2-fold CV
                        MGS                    WWR
Algorithm               En        Et           En         Et
CPSO                    0.0199    0.0322       –          –
PSO-BBFN                –         0.027        –          –
HCMSPSO                 0.0095    0.0208       –          –
HMDDE-BBFNN             0.0094    0.017        –          –
G-BBFNN                 –         0.013        –          –
Classical RBF           0.0096    0.0114       –          –
FNT [1]                 0.0071    0.0069       –          –
FBBFNT-EGP&PSO          0.0053    0.0054       –          –
FWNN-M                  0.0013    0.00114      –          –
LNF                     0.0007    0.00079      –          –
BPNN                    –         –            –          0.200
EFuNNs                  –         –            0.1063     0.0824
HFNTM                   0.00859   0.00798      0.064377   0.063489
Figure 9: HFNT model of time-series dataset WWR (RMSE = 0.063489).
6. Discussions
HFNTM was examined over three categories of datasets: classification, regression, and time-series.
The results presented in Section 5 clearly suggest a superior performance of the HFNTM approach. In
the HFNTM approach, MOGP guided an initial HFNT population towards Pareto-optimal solutions, where the
final HFNT population was a mixture of heterogeneous HFNTs. Alongside accuracy and simplicity, the
Pareto-based multiobjective approach ensured diversity among the candidates in the final population. Hence,
HFNTs in the final population were fairly accurate, simple, and diverse. Moreover, HFNTs in the final
Table 15: Ensemble results computed for 50% test samples of time-series datasets
Data   Batch   Et      rt     avg. tree size   div (9)   TSF   MSF           MIF
MGS    1       0.018   0.99   9.4              0.6       4     x1, x3, x4    –
       2       0.045   0.98   5.8              0.2       3
       3       0.026   0.99   15.2             0.5       3
       4       0.109   0.92   5.1              0.4       3
       5       0.156   0.89   7                0.2       3
       6       0.059   0.97   8.2              0.5       3
       7       0.054   0.98   6.4              0.4       4
WWR    1       0.073   0.94   5                0.1       3     x1, x2        –
       2       0.112   0.85   6                0.2       2
       3       0.097   0.91   10.6             0.3       4
       4       0.113   0.84   5                0.1       2
       5       0.063   0.96   14.4             0.9       4
       6       0.099   0.89   8.5              0.7       3
       7       0.101   0.88   6.9              0.4       3
Note: Et, rt, and div indicate test RMSE, test correlation coefficient, and diversity, respectively.
population were diverse according to structure, parameters, activation function, and input feature. Hence,
the model’s selection from Pareto-fronts, as indicated in Section 4, led to a good ensemble system.
Table 16: Performance of activation functions during the best performing ensembles
         Activation function (k)
Data     1     2     3     4     5     6     7
AUS      10    –     –     2     –     –     –
HRT      10    –     9     4     –     5     3
ION      6     5     –     –     2     4     4
PIM      3     8     2     5     2     1     –
WDB      –     3     –     7     8     10    8
ABL      2     10    –     –     –     10    –
BAS      2     5     –     –     2     10    –
DEE      –     6     6     4     4     10    –
EVL      10    5     –     3     –     –     6
FRD      10    10    –     –     –     –     –
MGS      4     1     –     2     1     10    10
WWR      10    –     4     –     4     7     –
Total    67    53    21    27    23    67    31
Note: 67 is the best and 21 is the worst.
HFNTM was applied to solve classification, regression, and time-series problems. Since HFNTM is
stochastic in nature, its performance was affected by several factors: random generator algorithm, random
seed, the efficiency of the meta-heuristic algorithm used in parameter-tuning phase, the activation function
Figure 10: Target versus prediction plots obtained for the time-series datasets: (a) dataset MGS, Et = 0.01815; (b) dataset WWR, Et = 0.06328.
selected at the nodes, etc. Therefore, to examine the performance of HFNTM , several HFNT-models
were created using different random seeds and the best and average approximation error of all created
models were examined. In Section 5, as far as the best model is concerned, the performance of HFNTM
surpass other approximation models mentioned from literature. Additionally, in the case of each dataset,
a very low average value (high accuracy in the case of classification and low approximation errors in
case of regression and time-series) were obtained, which significantly suggests that HFNTM often led
to good solutions. Similarly, in the case of the ensembles, it is clear from the result that combined
output of diverse and accurate candidates offered high quality (in terms of generalization ability and
accuracy) approximation/prediction model. From the results, it is clear that the final population of
HFNTM offered the best ensemble when the models were carefully examined based on approximation
error, average complexity (tree size), and selected features.
Moreover, the performance of the best-performing activation functions was examined. For this purpose,
the best ensemble system obtained for each dataset was considered. Accordingly, the performance
of the activation functions was evaluated as follows. The best ensemble system of each dataset had 10 models;
therefore, the number of models (among 10) in which an activation function k appeared was counted. Hence, for a
dataset, if an activation function appeared in all models of an ensemble system, then the total count was
10. Subsequently, this counting was performed for all the activation functions over the best ensemble systems
of all the datasets. Table 16 shows the performance of the activation functions. It can be observed that
the Gaussian (k = 1) and bipolar sigmoid (k = 6) activation functions performed the best among all the
activation functions, followed by the tangent-hyperbolic (k = 2) function. Hence, no single activation
function performed exceptionally well; therefore, the adaptive selection of activation functions
by MOGP was essential to HFNT's performance.
In this work, we were limited to examining the performance of our approach on benchmark problems only.
Therefore, in the presence of the no-free-lunch theorem [79, 80] and the algorithm's dependence on the random
number generator, which is platform, programming-language, and implementation sensitive [81], it is
clear that the performance of the presented approach is subject to a careful choice of training conditions and
parameter settings when it comes to dealing with other real-world problems.
7. Conclusion
Effective use of the final population of the heterogeneous flexible neural trees (HFNTs) evolved using
Pareto-based multiobjective genetic programming (MOGP) and the subsequent parameter tuning by differential evolution led to the formation of high-quality ensemble systems. The simultaneous optimization
of accuracy, complexity, and diversity solved the problem of structural complexity that was inevitably
imposed when a single objective was used. MOGP used in the tree construction phase often guided an
initial HFNT population towards a population in which the candidates were highly accurate, structurally
simple, and diverse. Therefore, the selected candidates helped in the formation of a good ensemble system.
The results obtained by the HFNTM approach support its superior performance over the algorithms collected
for the comparison. In addition, HFNTM provides adaptation in structure, computational nodes, and
input feature space. Hence, HFNT is an effective algorithm for automatic feature selection, data analysis,
and modeling.
Acknowledgment
This work was supported by the IPROCOM Marie Curie Initial Training Network, funded through
the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme
FP7/2007-2013, under REA grant agreement number 316555.
References
[1] Y. Chen, B. Yang, J. Dong, A. Abraham, Time-series forecasting using flexible neural tree model,
Information Sciences 174 (3) (2005) 219–235.
[2] X. Yao, Y. Liu, A new evolutionary system for evolving artificial neural networks, IEEE Transactions
on Neural Networks 8 (3) (1997) 694–713.
[3] I. Basheer, M. Hajmeer, Artificial neural networks: Fundamentals, computing, design, and application, Journal of Microbiological Methods 43 (1) (2000) 3–31.
[4] A. J. Maren, C. T. Harston, R. M. Pap, Handbook of neural computing applications, Academic Press,
2014.
[5] I. K. Sethi, A. K. Jain, Artificial neural networks and statistical pattern recognition: Old and new
connections, Vol. 1, Elsevier, 2014.
[6] M. Tkáč, R. Verner, Artificial neural networks in business: Two decades of research, Applied Soft
Computing 38 (2016) 788–804.
[7] S. E. Fahlman, C. Lebière, The cascade-correlation learning architecture, in: D. S. Touretzky (Ed.),
Advances in Neural Information Processing Systems 2, Morgan Kaufmann Publishers Inc., 1990, pp.
524–532.
[8] J.-P. Nadal, Study of a growth algorithm for a feedforward network, International Journal of Neural
Systems 1 (1) (1989) 55–59.
[9] K. O. Stanley, R. Miikkulainen, Evolving neural networks through augmenting topologies, Evolutionary Computation 10 (2) (2002) 99–127.
[10] B.-T. Zhang, P. Ohm, H. Mühlenbein, Evolutionary induction of sparse neural trees, Evolutionary
Computation 5 (2) (1997) 213–236.
[11] M. A. Potter, K. A. De Jong, Cooperative coevolution: An architecture for evolving coadapted
subcomponents, Evolutionary computation 8 (1) (2000) 1–29.
[12] M. Yaghini, M. M. Khoshraftar, M. Fallahi, A hybrid algorithm for artificial neural network training,
Engineering Applications of Artificial Intelligence 26 (1) (2013) 293–301.
[13] S. Wang, Y. Zhang, Z. Dong, S. Du, G. Ji, J. Yan, J. Yang, Q. Wang, C. Feng, P. Phillips, Feedforward neural network optimized by hybridization of PSO and ABC for abnormal brain detection,
International Journal of Imaging Systems and Technology 25 (2) (2015) 153–164.
[14] S. Wang, Y. Zhang, G. Ji, J. Yang, J. Wu, L. Wei, Fruit classification by wavelet-entropy and feedforward neural network trained by fitness-scaled chaotic abc and biogeography-based optimization,
Entropy 17 (8) (2015) 5711–5728.
[15] R. Salustowicz, J. Schmidhuber, Probabilistic incremental program evolution, Evolutionary Computation 5 (2) (1997) 123–141.
[16] A. K. Kar, Bio inspired computing–a review of algorithms and scope of applications, Expert Systems
with Applications 59 (2016) 20–32.
[17] Y. Jin, B. Sendhoff, Pareto-based multiobjective machine learning: An overview and case studies,
IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 38 (3)
(2008) 397–415.
[18] K. Deb, Multi-objective optimization using evolutionary algorithms, Vol. 16, John Wiley & Sons,
2001.
[19] X. Yao, Y. Liu, Making use of population information in evolutionary artificial neural networks, IEEE
Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 28 (3) (1998) 417–425.
[20] L. I. Kuncheva, C. J. Whitaker, Measures of diversity in classifier ensembles and their relationship
with the ensemble accuracy, Machine Learning 51 (2) (2003) 181–207.
[21] K. Deb, S. Agrawal, A. Pratap, T. Meyarivan, A fast elitist non-dominated sorting genetic algorithm
for multi-objective optimization: NSGA-II, in: Parallel Problem Solving from Nature PPSN VI, Vol.
1917 of Lecture Notes in Computer Science, Springer, 2000, pp. 849–858.
[22] S. Das, S. S. Mullick, P. Suganthan, Recent advances in differential evolution–an updated survey,
Swarm and Evolutionary Computation 27 (2016) 1–30.
[23] Y. Chen, A. Abraham, J. Yang, Feature selection and intrusion detection using hybrid flexible neural
tree, in: Advances in Neural Networks–ISNN, Vol. 3498 of Lecture Notes in Computer Science,
Springer, 2005, pp. 439–444.
[24] L. Sánchez, I. Couso, J. A. Corrales, Combining GP operators with SA search to evolve fuzzy rule
based classifiers, Information Sciences 136 (1) (2001) 175–191.
[25] J. Kennedy, R. C. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann, 2001.
[26] Y. Chen, A. Abraham, B. Yang, Feature selection and classification using flexible neural tree, Neurocomputing 70 (1) (2006) 305–313.
[27] R. Riolo, J. H. Moore, M. Kotanchek, Genetic programming theory and practice XI, Springer, 2014.
[28] X. Chen, Y.-S. Ong, M.-H. Lim, K. C. Tan, A multi-facet survey on memetic computation, IEEE
Transactions on Evolutionary Computation 15 (5) (2011) 591–607.
[29] H.-J. Li, Z.-X. Wang, L.-M. Wang, S.-M. Yuan, Flexible neural tree for pattern recognition, in:
Advances in Neural Networks–ISNN, Vol. 3971 of Lecture Notes in Computer Science, Springer
Berlin Heidelberg, 2006, pp. 903–908.
[30] Y. Chen, Y. Wang, B. Yang, Evolving hierarchical RBF neural networks for breast cancer detection,
in: Neural Information Processing, Vol. 4234 of Lecture Notes in Computer Science, Springer, 2006,
pp. 137–144.
[31] Y. Chen, F. Chen, J. Yang, Evolving MIMO flexible neural trees for nonlinear system identification,
in: International Conference on Artificial Intelligence, Vol. 1, 2007, pp. 373–377.
[32] P. Wu, Y. Chen, Grammar guided genetic programming for flexible neural trees optimization, in:
Advances in Knowledge Discovery and Data Mining, Springer, 2007, pp. 964–971.
[33] Y. Shan, R. McKay, R. Baxter, H. Abbass, D. Essam, H. Nguyen, Grammar model-based program
evolution, in: Congress on Evolutionary Computation, Vol. 1, 2004, pp. 478–485.
[34] G. Jia, Y. Chen, Q. Wu, A MEP and IP based flexible neural tree model for exchange rate forecasting,
in: Fourth International Conference on Natural Computation, Vol. 5, IEEE, 2008, pp. 299–303.
[35] M. Oltean, C. Groşan, Evolving evolutionary algorithms using multi expression programming, in:
Advances in Artificial Life, Springer, 2003, pp. 651–658.
[36] P. Musilek, A. Lau, M. Reformat, L. Wyard-Scott, Immune programming, Information Sciences
176 (8) (2006) 972–1002.
[37] B. Yang, L. Wang, Z. Chen, Y. Chen, R. Sun, A novel classification method using the combination
of FDPS and flexible neural tree, Neurocomputing 73 (46) (2010) 690 – 699.
[38] S. Bouaziz, H. Dhahri, A. M. Alimi, A. Abraham, Evolving flexible beta basis function neural tree
using extended genetic programming & hybrid artificial bee colony, Applied Soft Computing.
[39] Y. Chen, B. Yang, A. Abraham, Flexible neural trees ensemble for stock index modeling, Neurocomputing 70 (46) (2007) 697 – 703.
[40] B. Yang, M. Jiang, Y. Chen, Q. Meng, A. Abraham, Ensemble of flexible neural tree and ordinary
differential equations for small-time scale network traffic prediction, Journal of Computers 8 (12)
(2013) 3039–3046.
[41] V. K. Ojha, A. Abraham, V. Snasel, Ensemble of heterogeneous flexible neural tree for the approximation and feature-selection of Poly (Lactic-co-glycolic Acid) micro-and nanoparticle, in: Proceedings
of the Second International Afro-European Conference for Industrial Advancement AECIA 2015,
Springer, 2016, pp. 155–165.
[42] L. Peng, B. Yang, L. Zhang, Y. Chen, A parallel evolving algorithm for flexible neural tree, Parallel
Computing 37 (10–11) (2011) 653–666.
[43] L. Wang, B. Yang, Y. Chen, X. Zhao, J. Chang, H. Wang, Modeling early-age hydration kinetics
of portland cement using flexible neural tree, Neural Computing and Applications 21 (5) (2012)
877–889.
[44] C. Ferreira, Gene expression programming: mathematical modeling by an artificial intelligence,
Vol. 21, Springer, 2006.
[45] G. Weiss, Multiagent systems: A modern approach to distributed artificial intelligence, MIT Press,
1999.
[46] M. Ammar, S. Bouaziz, A. M. Alimi, A. Abraham, Negotiation process for bi-objective multi-agent
flexible neural tree model, in: International Joint Conference on Neural Networks (IJCNN), 2015,
IEEE, 2015, pp. 1–9.
[47] T. Burianek, S. Basterrech, Performance analysis of the activation neuron function in the flexible
neural tree model, in: Proceedings of the Dateso 2014 Annual International Workshop on DAtabases,
TExts, Specifications and Objects, 2014, pp. 35–46.
[48] S. Bouaziz, H. Dhahri, A. M. Alimi, A. Abraham, A hybrid learning algorithm for evolving flexible
beta basis function neural tree model, Neurocomputing 117 (2013) 107–117.
[49] S. Bouaziz, A. M. Alimi, A. Abraham, Universal approximation propriety of flexible beta basis
function neural tree, in: International Joint Conference on Neural Networks, IEEE, 2014, pp. 573–
580.
[50] C. Micheloni, A. Rani, S. Kumar, G. L. Foresti, A balanced neural tree for pattern classification,
Neural Networks 27 (2012) 81–90.
[51] G. L. Foresti, C. Micheloni, Generalized neural trees for pattern classification, IEEE Transactions on
Neural Networks 13 (6) (2002) 1540–1547.
[52] A. Rani, G. L. Foresti, C. Micheloni, A neural tree for classification using convex objective function,
Pattern Recognition Letters 68 (2015) 41–47.
[53] Q. Shou-ning, L. Zhao-lian, C. Guang-qiang, Z. Bing, W. Su-juan, Modeling of cement decomposing
furnace production process based on flexible neural tree, in: Information Management, Innovation
Management and Industrial Engineering, Vol. 3, IEEE, 2008, pp. 128–133.
[54] B. Yang, Y. Chen, M. Jiang, Reverse engineering of gene regulatory networks using flexible neural
tree models, Neurocomputing 99 (2013) 458–466.
[55] Z. Chen, B. Yang, Y. Chen, A. Abraham, C. Grosan, L. Peng, Online hybrid traffic classifier for
peer-to-peer systems based on network processors, Applied Soft Computing 9 (2) (2009) 685–694.
[56] T. Novosad, J. Platos, V. Snásel, A. Abraham, Fast intrusion detection system based on flexible
neural tree, in: International Conference on Information Assurance and Security, IEEE, 2010, pp.
106–111.
[57] Y.-Q. Pan, Y. Liu, Y.-W. Zheng, Face recognition using kernel PCA and hybrid flexible neural tree,
in: International Conference on Wavelet Analysis and Pattern Recognition, 2007. ICWAPR’07, Vol. 3,
IEEE, 2007, pp. 1361–1366.
[58] Y. Guo, Q. Wang, S. Huang, A. Abraham, Flexible neural trees for online hand gesture recognition
using surface electromyography, Journal of Computers 7 (5) (2012) 1099–1103.
[59] S. Qu, A. Fu, W. Xu, Controlling shareholders management risk warning based on flexible neural
tree, Journal of Computers 6 (11) (2011) 2440–2445.
[60] A. Rajini, V. K. David, Swarm optimization and flexible neural tree for microarray data classification,
in: International Conference on Computational Science, Engineering and Information Technology,
ACM, 2012, pp. 261–268.
[61] S. Abdelwahab, V. K. Ojha, A. Abraham, Ensemble of flexible neural trees for predicting risk in
grid computing environment, in: Innovations in Bio-Inspired Computing and Applications, Springer,
2016, pp. 151–161.
[62] Y. Jin, B. Sendhoff, E. Körner, Evolutionary multi-objective optimization for simultaneous generation
of signal-type and symbol-type representations, in: Evolutionary Multi-Criterion Optimization, Vol.
3410 of Lecture Notes in Computer Science, Springer, 2005, pp. 752–766.
[63] I. Das, J. E. Dennis, A closer look at drawbacks of minimizing weighted sums of objectives for pareto
set generation in multicriteria optimization problems, Structural optimization 14 (1) (1997) 63–69.
[64] A. E. Eiben, J. E. Smith, Introduction to Evolutionary Computing, Springer, 2015.
[65] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization:
Artificial bee colony (ABC) algorithm, Journal of Global Optimization 39 (3) (2007) 459–471.
[66] Y. Zhang, S. Wang, G. Ji, A comprehensive survey on particle swarm optimization algorithm and its
applications, Mathematical Problems in Engineering 2015 (2015) 1–38.
[67] C.-F. Juang, A hybrid of genetic algorithm and particle swarm optimization for recurrent network
design, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 34 (2) (2004)
997–1006.
[68] W. Wongseree, N. Chaiyaratana, K. Vichittumaros, P. Winichagoon, S. Fucharoen, Thalassaemia
classification by neural networks and genetic programming, Information Sciences 177 (3) (2007) 771
– 786.
[69] T. Hastie, R. Tibshirani, J. Friedman, T. Hastie, J. Friedman, R. Tibshirani, The elements of statistical learning, Vol. 2, Springer, 2009.
[70] R. Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems Magazine 6 (3)
(2006) 21–45.
[71] Z.-H. Zhou, Ensemble methods: Foundations and algorithms, CRC Press, 2012.
[72] M. Lichman, UCI machine learning repository, http://archive.ics.uci.edu/ml Accessed on:
01.05.2016 (2013).
[73] J. Alcala-Fdez, L. Sanchez, S. Garcia, M. J. del Jesus, S. Ventura, J. Garrell, J. Otero, C. Romero,
J. Bacardit, V. M. Rivas, et al., Keel: a software tool to assess evolutionary algorithms for data
mining problems, Soft Computing 13 (3) (2009) 307–318.
[74] M. Matsumoto, T. Nishimura, Mersenne twister: A 623-dimensionally equidistributed uniform
pseudo-random number generator, ACM Transactions on Modeling and Computer Simulation 8 (1)
(1998) 3–30.
[75] V. K. Ojha, MOGP-FNT multiobjective flexible neural tree tool, http://dap.vsb.cz/aat/ Accessed
on: 01.05.2016 (May 2016).
[76] Y. Chen, A. Abraham, Y. Zhang, et al., Ensemble of flexible neural trees for breast cancer detection,
The International Journal of Information Technology and Intelligent Computing 1 (1) (2006) 187–201.
[77] S. Holm, A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics
(1979) 65–70.
[78] M. J. Gacto, M. Galende, R. Alcalá, F. Herrera, METSK-HDe : A multiobjective evolutionary algorithm to learn accurate TSK-fuzzy systems in high-dimensional and large-scale regression problems,
Information Sciences 276 (2014) 63–79.
[79] D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, IEEE Transactions on
Evolutionary Computation 1 (1) (1997) 67–82.
[80] M. Koppen, D. H. Wolpert, W. G. Macready, Remarks on a recent paper on the "no free lunch"
theorems, IEEE Transactions on Evolutionary Computation 5 (3) (2001) 295–296.
[81] P. L’Ecuyer, F. Panneton, Fast random number generators based on linear recurrences modulo 2:
Overview and comparison, in: Proceedings of the 2005 Winter Simulation Conference, IEEE, 2005,
pp. 10–pp.
[82] S. Haykin, Neural networks and learning machines, Vol. 3, Pearson Education Upper Saddle River,
2009.
[83] Z.-H. Zhou, Z.-Q. Chen, Hybrid decision tree, Knowledge-Based Systems 15 (8) (2002) 515–528.
[84] J.-S. R. Jang, ANFIS: adaptive-network-based fuzzy inference system, IEEE Transactions on Systems,
Man and Cybernetics 23 (3) (1993) 665–685.
[85] O. Cordón, F. Herrera, A two-stage evolutionary process for designing TSK fuzzy rule-based systems,
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 29 (6) (1999) 703–715.
[86] J. S. Rustagi, Optimization techniques in statistics, Academic Press, 1994.
[87] R. Alcalá, J. Alcalá-Fdez, J. Casillas, O. Cordón, F. Herrera, Local identification of prototypes for
genetic learning of accurate tsk fuzzy rule-based systems, International Journal of Intelligent Systems
22 (9) (2007) 909–941.
[88] K. B. Cho, B. H. Wang, Radial basis function based adaptive fuzzy systems and their applications
to system identification and prediction, Fuzzy Sets and Systems 83 (3) (1996) 325–339.
[89] F. Van den Bergh, A. P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE
Transactions on Evolutionary Computation 8 (3) (2004) 225–239.
[90] A. M. A. H. Dhahri, F. Karray, Designing beta basis function neural network for optimization using
particle swarm optimization, in: IEEE Joint Conference on Neural Network, 2008, pp. 2564–2571.
[91] C. Aouiti, A. M. Alimi, A. Maalej, A genetic designed beta basis function neural network for approximating multi-variables functions, in: International Conference on Artificial Neural Nets and Genetic
Algorithms, Springer, 2001, pp. 383–386.
[92] C.-F. Juang, C.-M. Hsiao, C.-H. Hsu, Hierarchical cluster-based multispecies particle-swarm optimization for fuzzy-system optimization, IEEE Transactions on Fuzzy Systems 18 (1) (2010) 14–26.
[93] S. Yilmaz, Y. Oysal, Fuzzy wavelet neural network models for prediction and identification of dynamical systems, IEEE Transactions on Neural Networks 21 (10) (2010) 1599–1609.
[94] H. Dhahri, A. M. Alimi, A. Abraham, Hierarchical multi-dimensional differential evolution for the
design of beta basis function neural network, Neurocomputing 97 (2012) 131–140.
[95] A. Miranian, M. Abdollahzade, Developing a local least-squares support vector machines-based neurofuzzy model for nonlinear and chaotic time series prediction, IEEE Transactions on Neural Networks
and Learning Systems 24 (2) (2013) 207–218.
[96] N. K. Kasabov, Foundations of neural networks, fuzzy systems, and knowledge engineering, Marcel
Alencar, 1996.
[97] N. Kasabov, Evolving fuzzy neural networks for adaptive, on-line intelligent agents and systems, in:
Recent Advances in Mechatronics, Springer, Berlin, 1999.
[98] S. Bouaziz, A. M. Alimi, A. Abraham, Extended immune programming and opposite-based PSO for
evolving flexible beta basis function neural tree, in: IEEE International Conference on Cybernetics,
IEEE, 2013, pp. 13–18.
Appendix A. Dataset Description
Table A.17: Collected datasets for testing HFNTM

Index  Name           Features  Samples  Output  Type
AUS    Australia      14        691      2       Classification
HRT    Heart          13        270      2       Classification
ION    Ionshpere      33        351      2       Classification
PIM    Pima           8         768      2       Classification
WDB    Wdbc           30        569      2       Classification
ABL    Abalone        8         4177     1       Regression
BAS    Baseball       16        337      1       Regression
DEE    DEE            6         365      1       Regression
EVL    Elevators      18        16599    1       Regression
FRD    Fridman        5         1200     1       Regression
MGS    Mackey-Glass   4         1000     1       Time-series
WWR    Waste Water    4         475      1       Time-series
Appendix B. Algorithms from literature
Table B.18: Algorithms from literature for the comparative study with HFNTM

Ref.  Algorithm         Definition
[82]  MLP               Multi-layer Perceptron
[83]  HDT               Hybrid Decision Tree
[76]  FNT               Flexible Neural Tree
[84]  ANFIS-SUB         Adaptive Neuro-Fuzzy Inference System Using Subtractive Clustering
[85]  TSK-IRL           Genetic Learning of TSK-rules Under Iterative Rule Learning
[86]  LINEAR-LMS        Least Mean Squares Linear Regression
[87]  LEL-TSK           Local Evolutionary Learning of TSK-rules
[88]  RBF               Classical Radial Basis Function
[89]  CPSO              Cooperative Particle Swarm Optimization (PSO)
[90]  PSO-BBFN          PSO-based Beta Basis Function Neural Network
[91]  G-BBFNN           GA-based BBFNN
[92]  HCMSPSO           Hierarchical Cluster-Based Multispecies PSO
[93]  FWNN-M            Fuzzy Wavelet Neural Network Models
[94]  HMDDE-BBFNN       Hierarchical Multidimensional DE-Based BBFNN
[95]  LNF               Local Least-Squares Support Vector Machines-Based Neuro-Fuzzy Model
[96]  BPNN              Back-propagation Neural Network
[97]  EFuNNs            Evolving Fuzzy Neural Networks
[98]  FBBFNT-EGP&PSO    Extended Immune Programming and Opposite-PSO for Flexible BBFNN
[78]  METSK-HDe         Multiobjective Evolutionary Learning of TSK-rules for High-Dimensional Problems
| 9 |
Feedback Control of Real-Time Display Advertising
Weinan Zhang (1), Yifei Rong (2,1), Jun Wang (1), Tianchi Zhu (3), Xiaofan Wang (4)
(1) University College London, (2) YOYI Inc., (3) Big Tree Times Co., (4) Shanghai Jiao Tong University
(1) {w.zhang, j.wang}@cs.ucl.ac.uk, (2) [email protected], (3) [email protected], (4) [email protected]
arXiv:1603.01055v1 [cs.GT] 3 Mar 2016
ABSTRACT
Real-Time Bidding (RTB) is revolutionising display advertising by facilitating per-impression auctions to buy ad impressions as they are being generated. Being able to use
impression-level data, such as user cookies, encourages user
behaviour targeting, and hence has significantly improved
the effectiveness of ad campaigns. However, a fundamental
drawback of RTB is its instability because the bid decision
is made per impression and there are enormous fluctuations
in campaigns’ key performance indicators (KPIs). As such,
advertisers face great difficulty in controlling their campaign
performance against the associated costs. In this paper, we
propose a feedback control mechanism for RTB which helps
advertisers dynamically adjust the bids to effectively control
the KPIs, e.g., the auction winning ratio and the effective
cost per click. We further formulate an optimisation framework to show that the proposed feedback control mechanism
also has the ability of optimising campaign performance. By
settling the effective cost per click at an optimal reference
value, the number of campaign’s ad clicks can be maximised
with the budget constraint. Our empirical study based on
real-world data verifies the effectiveness and robustness of
our RTB control system in various situations. The proposed
feedback control mechanism has also been deployed on a
commercial RTB platform and the online test has shown its
success in generating controllable advertising performance.
Keywords
Feedback Control, Demand-Side Platform, Real-Time Bidding, Display Advertising
1. INTRODUCTION
Having emerged in 2009, Real-Time Bidding (RTB) has become a
new paradigm in display advertising [22, 12]. Different from
the conventional human negotiation or pre-setting a fixed
price for impressions, RTB creates an impression-level auction and enables advertisers to bid for individual impression
through computer algorithms served by demand-side platforms (DSPs) [33]. The bid decision could depend on the
evaluation of both the utility (e.g., the likelihood and economic value of an impression for generating click or conversion) and the cost (e.g., the actual paid price) of each ad
WSDM 2016, February 22-25, 2016, San Francisco, CA, USA.
arXiv version.
impression. More importantly, real-time information such as
the specific user demographics, interest segments and various context information is leveraged to help the bidding algorithms evaluate each ad impression. With the real-time
decision making mechanism, it is reported that RTB yields
significantly higher return-on-investment (ROI) than other
online advertising forms [31].
Despite the ability of delivering performance-driven advertising, RTB, unfortunately, results in high volatilities, measured by major Key Performance Indicators (KPIs), such as
CPM (cost per mille), AWR (auction winning ratio), eCPC
(effective cost per click) and CTR (click-through rate). To
illustrate this, Figure 1 plots the four major KPIs over time
for two example campaigns in a real-world RTB dataset. All
four KPIs fluctuate heavily across the time under a widelyused bidding strategy [25]. Such instability causes advertisers ample difficulty in optimising and controlling the KPIs
against their cost.
In this paper, we propose to employ feedback control theory [2] to solve the instability problem in RTB. Feedback
controllers are widely used in various applications for maintaining dynamically changing variables at the predefined reference values. The application scenarios range from the
plane direction control [23] to the robot artificial intelligence
[26]. In our RTB scenario, the specific KPI value, depending on the requirements from the advertisers, is regarded as
the variable we want to control with a pre-specified reference value. Our study focuses on two use cases. (i) For
performance-driven advertising, we are concerned with the feedback control of the average cost of acquiring a click, measured by effective cost per click (eCPC). (ii) For branding
based advertising, to ensure a certain high exposure of a
campaign, we focus on the control of the ratio of winning
the auctions for the targeted impressions, measured by auction winning ratio (AWR). More specifically, we take each
of them as the control input signal and consider the gain
(the adjustment value) of bid price as the control output
signal for each incoming ad display opportunity (the bid request). We develop two controllers to test: the widely used
proportional-integral-derivative (PID) controller [6] and the
waterlevel-based (WL) controller [10]. We conduct large-scale experiments to test the feedback control performance
with different settings of reference value and reference dynamics. Through the empirical study, we find that the PID
and WL controllers are capable of controlling eCPC and
AWR, while PID further provides a better control accuracy
and robustness than WL.
Furthermore, we investigate whether the proposed feedback control can be employed for controllable bid optimisation. It is common that the performance of an ad campaign (e.g., eCPC) varies across different channels (e.g., ad
exchanges, user geographic regions and PC/mobile devices)
Figure 1: The instability of CPM (cost per mille), AWR (auction winning ratio), eCPC (effective cost per click), and CTR (click-through rate) for two sample campaigns without a controller. Dataset: iPinYou.
[34]. If one can reallocate some budget from less cost-effective
channels to more cost-effective ones, the campaign-level performance would improve [35]. In this paper, we formulate
the multi-channel bid optimisation problem and propose a
model to calculate the optimal reference eCPC for each channel. Our experiments show that the campaign-level click
number and eCPC achieve significant improvements with
the same budget.
Moreover, the proposed feedback control mechanism has
been implemented and integrated in a commercial DSP. The
conducted live test shows that in a real and noisy setting
the proposed feedback mechanism has the ability to produce
controllable advertising performance.
To sum up, the contributions of our work are as follows.
(i) We study the instability problem in RTB and investigate
its solution by leveraging the feedback control mechanism.
(ii) Comprehensive offline and online experiments show that
PID controller is better than other alternatives and finds
the optimal way to settle the variable in almost all studied
cases. (iii) We further discover that feedback controllers
are of great potential to perform bid optimisation through
settling the eCPC at the reference value calculated by our
proposed mathematical click maximisation framework.
The rest of this paper is organised as follows. Section 2
provides preliminaries for RTB and feedback control. Our
solution is formally presented in Section 3. The empirical
study is reported in Section 4, while the online deployment
and live test are given in Section 5. Section 6 discusses the
related work and we finally conclude this paper in Section 7.
2. PRELIMINARIES
To make the paper self-contained, in this section, we take
a brief review on the RTB eco-system, bidding strategies,
and some basics of feedback control theory.
2.1 RTB Flow Steps
The interaction process among the main components of the RTB eco-system is summarised into the following steps: (0) when a user visits an ad-supported site (e.g., web pages, streaming videos and mobile apps), each ad placement will trigger a call for ad (ad request) to the ad exchange. (1) The ad exchange sends the bid requests for this particular ad impression to each advertiser's DSP bidding agent, along with the available information such as the user and context information. (2) With the information of the bid request and each of its qualified ads, the bidding agent calculates a bid price. Then the bid response (ad, bid price) is sent back to the exchange. (3) Having received the bid responses from the advertisers, the ad exchange hosts an auction and picks the ad with the highest bid as the auction winner. (4) Then the winner is notified of the auction winning from the ad exchange. (5) Finally, the winner's ad will be shown to the visitor along with the regular content of the publisher's site. It is commonly known that a long page-loading time would greatly reduce user satisfaction [22]. Thus, advertiser bidding agents are usually required to return a bid in a very short time frame (e.g., 100 ms). (6) The user's feedback (e.g., click and conversion) on the displayed ad is tracked and finally sent back to the winner advertiser. For a detailed discussion about RTB eco-systems, we refer to [34, 31]. The above interaction steps have the corresponding positions in Figure 2, as we will discuss later.
2.2 Bidding Strategies
A basic problem for DSP bidding agents is to figure out how much to bid for an incoming bid request. The bid decision depends on two factors for each ad impression: the utility (e.g., CTR, expected revenue) and the cost (i.e., expected charged price) [33]. In a widely adopted bidding strategy [25], the utility is evaluated by CTR estimation while the base bid price is tuned based on the bid landscape [9] for the cost evaluation. The generalised bidding strategy in [25] is
b(t) = b_0 θ_t / θ_0,   (1)
where θ_t is the estimated CTR for the bid request at moment t; θ_0 is the average CTR under a target condition (e.g., a user interest segment); and b_0 is the tuned base bid price for the target condition. In this work, we adopt this widely used bidding strategy and adopt a logistic CTR estimator [27].
2.3 Feedback Control Theory
Feedback control theory deals with the reaction and control of dynamic systems from feedback and outside noise [2]. The usual objective of feedback control theory is to control a dynamic system so that the system output follows a desired control signal, called the reference, which may be a fixed or changing value. To attain this objective, a controller is designed to monitor the output and compare it with the reference. The difference between the actual and desired output, called the error factor, is applied as feedback from the dynamic system to the control system. With the specific control function, the controller outputs the control signal, which is then transformed by the actuator into the system input signal sent back to the dynamic system. These processes form a feedback control loop between the dynamic system and the controller. Control techniques are widely used in various engineering applications for maintaining some signals at the predefined or changing reference values, such as plane navigation [23] and water distribution control [10].
3. RTB FEEDBACK CONTROL SYSTEM
Figure 2 presents the diagram of the proposed RTB feedback control system. The traditional bidding strategy is represented as the bid calculator module in the DSP bidding agent. The controller plays the role of adjusting the bid price from the bid calculator.
Specifically, the monitor receives the auction win notice from the ad exchange and the user click feedback from the ad tracking system, which as a whole we regard as the dynamic system. Then the current KPI values, such as AWR and
eCPC can be calculated. If the task is to control the eCPC
with the reference value, the error factor between the reference eCPC and the measured eCPC is calculated then sent
into the control function. The output control signal is sent
to the actuator, which uses the control signal to adjust the
original bid price from the bid calculator. The adjusted bid
price is packaged with the qualified ad into the bid response
and sent back to the ad exchange for auction.
3.1 Actuator
For the bid request at the moment t, the actuator takes in the current control signal φ(t) to adjust the bid price from b(t) (Eq. (1)) to a new value b_a(t). In our model,
the control signal, which will be mathematically defined in
the next subsections, is a gain on the bid price. Generally,
when the control signal φ(t) is zero, there should be no bid
adjustment. There could be different actuator models, and
in our work we choose to use
b_a(t) = b(t) exp{φ(t)},   (2)
where the model satisfies ba (t) = b(t) when φ(t) = 0. Other
models such as the linear model ba (t) ≡ b(t)(1 + φ(t)) are
also investigated in our study but it performs poorly in the
situations when a big negative control signal is sent to the
actuator, where the linear actuator will usually respond with a
negative or a zero bid, which is meaningless in our scenario.
By contrast, the exponential model is a suitable solution to
addressing the above drawback because it naturally avoids
generating a negative bid. In the later empirical study we
mainly report the analysis based on the exponential-form
actuator model.
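As a minimal sketch of the two actuator models above (our own function names, not the paper's released code), the exponential form in Eq. (2) never produces a non-positive bid, whereas the linear form can:

```python
import math

def exponential_actuator(base_bid, control_signal):
    # Eq. (2): b_a(t) = b(t) * exp(phi(t)); phi = 0 leaves the bid unchanged,
    # and a large negative phi shrinks the bid towards 0 without going negative.
    return base_bid * math.exp(control_signal)

def linear_actuator(base_bid, control_signal):
    # b_a(t) = b(t) * (1 + phi(t)); can produce a zero or negative bid
    # when phi(t) <= -1, which is why the exponential form is preferred.
    return base_bid * (1.0 + control_signal)
```

Here control_signal plays the role of φ(t); with φ(t) = 0 both actuators return the base bid unchanged.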
3.2 PID Controller
The first controller we investigate is the classic PID controller [6]. As its name implies, a PID controller produces
the control signal from a linear combination of the proportional factor, the integral factor and the derivative factor
based on the error factor:
e(t_k) = x_r − x(t_k),   (3)
φ(t_{k+1}) ← λ_P e(t_k) + λ_I Σ_{j=1}^{k} e(t_j) Δt_j + λ_D Δe(t_k)/Δt_k,   (4)
Figure 2: Feedback controller integrated in the RTB system.
where the error factor e(tk ) is the reference value xr minus the current controlled variable value x(tk ), the update
time interval is given as △tj = tj − tj−1 , the change of error factors is △e(tk ) = e(tk ) − e(tk−1 ), and λP , λI , λD are
the weight parameters for each control factor. Note that
here the control factors are all in discrete time (t1 , t2 , . . .)
because bidding events are discrete and it is practical to
periodically update the control factors. All control factors
(φ(t), e(tk ), λP , λI , λD ) remain the same between two updates. Thus for all time t between tk and tk+1 , the control
signal φ(t) in Eq. (2) equals φ(t_k). We see that the P factor tends to push the current variable value to the reference value; the I factor reduces the accumulative error from the beginning to the current time; and the D factor controls the fluctuation of the variable.
Figure 3: Different eCPCs across different ad exchanges. Dataset: iPinYou.
3.3 Waterlevel-based Controller
The Waterlevel-based (WL) controller is another feedback
control model which was originally used to switching devices
controlled by water level [10]:
φ(t_{k+1}) ← φ(t_k) + γ(x_r − x(t_k)),   (5)
where γ is the step size parameter for φ(tk ) update in exponential scale.
Compared to PID, the WL controller only takes the difference between the variable value and the reference value
into consideration. Moreover, it provides a sequential control signal. That is, the next control signal is an adjustment
based on the previous one.
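The two update rules in Eqs. (3)-(5) can be sketched in a few lines; this is an illustrative implementation under the discrete-round update described above (class and attribute names are ours, not the authors' code):

```python
class PIDController:
    def __init__(self, lambda_p, lambda_i, lambda_d, reference):
        self.lp, self.li, self.ld = lambda_p, lambda_i, lambda_d
        self.x_r = reference          # reference KPI value
        self.integral = 0.0           # running sum of e(t_j) * dt_j
        self.prev_error = None
        self.phi = 0.0                # current control signal

    def update(self, measured_kpi, dt=1.0):
        # Eq. (3): error factor; Eq. (4): P + I + D combination.
        error = self.x_r - measured_kpi
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        self.phi = self.lp * error + self.li * self.integral + self.ld * derivative
        return self.phi

class WLController:
    def __init__(self, gamma, reference):
        self.gamma = gamma
        self.x_r = reference
        self.phi = 0.0

    def update(self, measured_kpi, dt=1.0):
        # Eq. (5): sequential adjustment of the previous control signal.
        self.phi += self.gamma * (self.x_r - measured_kpi)
        return self.phi
```

Between two updates the stored phi is held fixed and fed to the actuator of Section 3.1 for every incoming bid request.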
3.4 Setting References for Click Maximisation
Given that the feedback controller is an effective tool to
deliver advertisers’ KPI goal, in this subsection, we demonstrate that the feedback control mechanism can be leveraged as a model-free click maximisation framework embedded with any bidding strategies [25, 33] and performs automatic budget allocation [17] across different channels via
setting smart reference values.
When an advertiser specifies the targeted audience (usually also combined with ad impression contextual categories)
for their specific campaign, the impressions that fit the target rules may come from separate channels such as different
ad exchanges, user regions, users’ PC/mobile devices etc.
It is common that the DSP integrates with several ad exchanges and delivers the required ad impressions from all
those ad exchanges (as long as the impressions fit the target rule), although the market prices [1] may be significantly
different. Figure 3 illustrates that, for the same campaign,
there is a difference in terms of eCPC across different ad
exchanges. As pointed out in [34], the differences are also
found in other channels such as user regions and devices.
The cost differences provide advertisers a further opportunity to optimise their campaign performance based on
eCPCs. To see this, suppose a DSP is integrated to two
ad exchanges A and B. For a campaign in this DSP, if its
eCPC from exchange A is higher than that from exchange B,
which means the inventories from exchange B is more cost
effective than those from exchange A, then by reallocating
some budget from exchange A to B will potentially reduce
the overall eCPC of this campaign. Practically the budget
reallocation can be done by reducing the bids for exchange
A while increasing the bids for exchange B. Here we formally propose a model of calculating the equilibrium eCPC
of each ad exchange, which will be used as the optimal reference eCPC for the feedback control that leads to a maximum
number of clicks given the budget constraint.
Mathematically, suppose for a given ad campaign, there
are n ad exchanges (could be other channels), i.e., 1, 2, . . . , n,
that have the ad volume for a target rule. In our formula-
Figure 4: Number of Clicks against eCPC. Clicks and eCPC are calculated across the whole iPinYou training dataset of each campaign by tuning b0 in Eq. (1).
tion we focus on optimising clicks, while the formulation of
conversions can be obtained similarly. Let ξi be the eCPC
on ad exchange i, and ci (ξi ) be the click number that the
campaign acquires in the campaign’s lifetime if we tune the
bid price to make its eCPC be ξi for ad exchange i. For
advertisers, they want to maximise the campaign-level click
number given the campaign budget B [33]:
max_{ξ_1,...,ξ_n}  Σ_i c_i(ξ_i)   (6)
s.t.  Σ_i c_i(ξ_i) ξ_i = B.   (7)
Its Lagrangian is
L(ξ_1, . . . , ξ_n, α) = Σ_i c_i(ξ_i) − α(Σ_i c_i(ξ_i) ξ_i − B),   (8)
where α is the Lagrangian multiplier. Then we take its gradient on ξ_i and let it be 0:
∂L(ξ_1, . . . , ξ_n, α)/∂ξ_i = c′_i(ξ_i) − α(c′_i(ξ_i) ξ_i + c_i(ξ_i)) = 0,   (9)
1/α = (c′_i(ξ_i) ξ_i + c_i(ξ_i)) / c′_i(ξ_i) = ξ_i + c_i(ξ_i)/c′_i(ξ_i),   (10)
where the equation holds for each ad exchange i. As such, we
can use α to bridge the equations for any two ad exchanges
i and j:
1/α = ξ_i + c_i(ξ_i)/c′_i(ξ_i) = ξ_j + c_j(ξ_j)/c′_j(ξ_j).   (11)
So the optimal solution condition is given as follows:
1/α = ξ_1 + c_1(ξ_1)/c′_1(ξ_1) = ξ_2 + c_2(ξ_2)/c′_2(ξ_2) = · · · = ξ_n + c_n(ξ_n)/c′_n(ξ_n),   (12)
Σ_i c_i(ξ_i) ξ_i = B.   (13)
With sufficient data instances, we find that ci (ξi ) is usually
a concave and smooth function. Some examples are given
in Figure 4. Based on the observation, it is reasonable to
define a general polynomial form of the ci (ξi ) functions:
c_i(ξ_i) = c*_i a_i (ξ_i / ξ*_i)^{b_i},   (14)
where ξi∗ is the campaign’s historic average eCPC on the ad
inventories from ad exchange i during the training data period, and c∗i is the corresponding click number. These two
factors are directly obtained from the training data. Parameters ai and bi are to be tuned to fit the training data.
Substituting Eq. (14) into Eq. (12) gives
1/α = ξ_i + c_i(ξ_i)/c′_i(ξ_i) = ξ_i + (c*_i a_i ξ_i^{b_i} / ξ*_i^{b_i}) / (c*_i a_i b_i ξ_i^{b_i−1} / ξ*_i^{b_i}) = (1 + 1/b_i) ξ_i.   (15)
We can then rewrite Eq. (12) as
1/α = (1 + 1/b_1) ξ_1 = (1 + 1/b_2) ξ_2 = · · · = (1 + 1/b_n) ξ_n.   (16)
Thus ξ_i = b_i / (α(b_i + 1)).   (17)
Interestingly, from Eq. (17) we find that the equilibrium is
not in the state that the eCPCs from the exchanges are the
same. Instead, it is when any amount of budget reallocated
among the exchanges does not make any more total clicks;
for instance, in a two-exchange case, the equilibrium reaches
when the increase of the clicks from one exchange equals
the decrease from the other (Eq. (9)). More specifically,
from Eq. (17) we observe that for ad exchange i, if its click
function ci (ξi ) is quite flat, i.e., the click number increases
only slowly as its eCPC increases in a certain area, then
its learned b_i should be small. This means the factor b_i/(b_i + 1) is small as well; then from Eq. (17) we can see that the optimal eCPC in ad exchange i should be relatively small.
Substituting Eqs. (14) and (17) into Eq. (7) gives
Σ_i (c*_i a_i / ξ*_i^{b_i}) (b_i/(b_i + 1))^{b_i+1} (1/α)^{b_i+1} = B,   (18)
where for simplicity, we denote for each ad exchange i its parameter (c*_i a_i / ξ*_i^{b_i}) (b_i/(b_i + 1))^{b_i+1} as δ_i. This gives us a simpler form:
Σ_i δ_i (1/α)^{b_i+1} = B.   (19)
There is no closed form to solve Eq. (19) for α. However, as b_i cannot be negative and Σ_i δ_i (1/α)^{b_i+1} monotonically increases with 1/α, one can easily obtain the solution for α by using a numeric method such as stochastic gradient descent or the Newton method [5]. Finally, based on the solved
α, we can find the optimal eCPC ξi for each ad exchange i
using Eq. (17). In fact, these eCPCs are the reference value
we want the campaign to achieve for the corresponding ad
exchanges. We can use PID controllers, by setting xr in
Eq. (3) as ξi for each ad exchange i, to achieve these reference eCPCs so as to achieve the maximum number of clicks
on the campaign level.
As a special case, if we regard the whole volume of the
campaign as one channel, this method can be directly used
as a general bid optimisation tool. It makes use of the campaign’s historic data to decide the optimal eCPC and then
the click optimisation is performed by control the eCPC to
settle at the optimal eCPC as reference. Note that this
multi-channel click maximisation framework is flexible to incorporate any bidding strategies.
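One possible numeric recipe for this framework (a sketch under our own assumptions about the curve-fitting step, not the authors' implementation): fit a_i and b_i of Eq. (14) per channel in log space, then solve Eq. (19) for α by bisection, which works because the left-hand side is monotone in 1/α, and finally read off the reference eCPCs from Eq. (17):

```python
import numpy as np

def fit_click_curve(ecpc_samples, click_samples, ecpc_star, clicks_star):
    # Fit c_i(xi) = c*_i * a_i * (xi / xi*_i)^{b_i} (Eq. (14)) in log space:
    # log c = log(c*_i a_i) + b_i * log(xi / xi*_i).
    x = np.log(np.asarray(ecpc_samples) / ecpc_star)
    y = np.log(np.asarray(click_samples))
    b_i, intercept = np.polyfit(x, y, 1)
    a_i = np.exp(intercept) / clicks_star
    return a_i, b_i

def optimal_reference_ecpcs(params, budget, alpha_inv_hi=1e6, tol=1e-9):
    # params: list of (c_star, a, b, ecpc_star) per ad exchange (or channel).
    # delta_i and the budget equation follow Eqs. (18)-(19); Eq. (17) gives xi_i.
    deltas = [(c * a / (xs ** b)) * (b / (b + 1.0)) ** (b + 1.0)
              for (c, a, b, xs) in params]
    bs = [b for (_, _, b, _) in params]

    def spend(alpha_inv):                      # left-hand side of Eq. (19)
        return sum(d * alpha_inv ** (b + 1.0) for d, b in zip(deltas, bs))

    lo, hi = 0.0, alpha_inv_hi                 # bisection on 1/alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spend(mid) < budget else (lo, mid)
    alpha_inv = 0.5 * (lo + hi)
    return [b / (b + 1.0) * alpha_inv for b in bs]   # Eq. (17): xi_i = b_i/(alpha(b_i+1))
```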
4. EMPIRICAL STUDY
We conduct comprehensive experiments to study the proposed RTB feedback control mechanism. Our focus in this
section is on offline evaluation using a publicly-available real-world dataset. To make our experiment repeatable, we have published the experiment code (https://github.com/wnzhang/rtbcontrol). The online deployment and
test on a commercial DSP will be reported in Section 5.
4.1 Evaluation Setup
Dataset. We test our system on a publicly available
dataset collected from iPinYou DSP [19]. It contains the
ad log data from 9 campaigns during 10 days in 2013, which
consists of 64.75M bid records, 19.50M impressions, 14.79K clicks and 16K Chinese Yuan (CNY) expense. According to
the data publisher [19], the last three-day data of each campaign is split as the test data and the rest as the training
data. The dataset disk size is 35GB. More statistics and
analysis of the dataset is available in [34]. The dataset is
in a record-per-row format, where each row consists of three
parts: (i) The features for this auction, e.g., the time, location, IP address, the URL/domain of the publisher, ad slot
size, user interest segments etc. The features of each record
are indexed as a 0.7M-dimension sparse binary vector which
is fed into a logistic regression CTR estimator of the bidding
strategy in Eq. (1); (ii) The auction winning price, which is
the threshold of the bid to win this auction; (iii) The user
feedback on the ad impression, i.e., click or not.
Evaluation Protocol. We follow the evaluation protocol from previous studies on bid optimisation [33, 34] and
an RTB contest [19] to run our experiment. Specifically, for
each data record, we pass the feature information to our bidding agent. Our bidding agent generates a new bid based
on the CTR prediction and other parameters in Eq. (1). We
then compare the generated bid with the logged actual auction winning price. If the bid is higher than the auction
winning price, we know the bidding agent has won this auction, paid the winning price, and obtained the ad impression.
If from the ad impression record there is a click, then the
placement has generated a positive outcome (one click) with
a cost equal to the winning price. If there is no click, the
placement has resulted in a negative outcome and wasted the
money. The control parameters are updated every 2 hours
(as one round).
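A simplified replay harness following this protocol might look as below (our own sketch reusing the controller and actuator sketches from Section 3; the per-record fields are assumptions for illustration, not the released evaluation code):

```python
def replay(records, controller, actuator, base_bid, avg_ctr, budget):
    """Replay logged auctions: bid, compare with the logged winning price,
    and update the controller once per round (e.g., every 2 hours)."""
    spend, clicks, wins = 0.0, 0, 0
    for round_records in records:               # records grouped into rounds
        for pctr, winning_price, clicked in round_records:
            bid = actuator(base_bid * pctr / avg_ctr,   # Eq. (1) linear bidding
                           controller.phi)              # current control signal
            if bid > winning_price and spend + winning_price <= budget:
                spend += winning_price          # pay the logged market price
                wins += 1
                clicks += int(clicked)          # user feedback is replayed as-is
        if clicks:                              # feedback update once per round,
            controller.update(spend / clicks)   # here with eCPC as the controlled KPI
    return clicks, spend, wins
```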
It is worth mentioning that historical user feedback has
been widely used for evaluating information retrieval systems [29] and recommender systems [13]. All of them used
historic clicks as a proxy for relevancy to train the prediction model as well as to form the ground truth. Similarly,
our evaluation protocol keeps the user contexts, displayed
ads (creatives etc.), bid requests, and auction environment
unchanged. We intend to answer that under the same context if the advertiser were given a different or better bidding
strategy or employed a feedback loop, whether they would
be able to get more clicks with the budget limitation. The
click would stay the same as nothing has been changed for
the users. This methodology works well for evaluating bid
optimisation [1, 33] and has been adopted in the display
advertising industry [19].
Evaluation Measures. We adopt several commonly used
measures in feedback control systems [3]. We define the error
band as the ±10% interval around the reference value. If the
controlled variable settles within this area, we consider that
the variable is successfully controlled. The speed of convergence (to the reference value) is also important. Specifically,
we evaluate the rise time to check how fast the controlled
variable will get into the error band. We also use the settling time to evaluate how fast the controlled variable will
be successfully restricted into the error band. However, fast
convergence may bring the problem of inaccurate control.
Thus, two control accuracy measures are introduced. We
use the overshoot to measure the percentage of value that the
controlled variable passes over the reference value. After the
settling (called the steady state), we use the RMSE-SS to
evaluate the root mean square error between the controlled
variable value and the reference value. At last, we measure
the control stability by calculating the standard deviation
of the variable value after the settling, named as SD-SS.
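Under the ±10% error-band definition above, these measures could be computed from a per-round KPI series roughly as follows (an illustrative sketch; the exact overshoot convention is ours):

```python
def control_measures(kpi_series, reference, band=0.10):
    """Rise time, settling time, overshoot, RMSE-SS and SD-SS for one run."""
    lo, hi = reference * (1 - band), reference * (1 + band)
    in_band = [lo <= v <= hi for v in kpi_series]
    rise = next((t for t, ok in enumerate(in_band) if ok), None)
    # Settling time: first round after which the KPI never leaves the error band.
    settle = next((t for t in range(len(kpi_series)) if all(in_band[t:])), None)
    # Overshoot: how far the KPI passes over the reference, as a percentage,
    # measured on the far side of the reference from the starting value.
    if kpi_series[0] >= reference:
        overshoot = max(0.0, (reference - min(kpi_series)) / reference) * 100
    else:
        overshoot = max(0.0, (max(kpi_series) - reference) / reference) * 100
    steady = kpi_series[settle:] if settle is not None else []
    rmse_ss = sd_ss = None
    if steady:
        rmse_ss = (sum((v - reference) ** 2 for v in steady) / len(steady)) ** 0.5
        mean_ss = sum(steady) / len(steady)
        sd_ss = (sum((v - mean_ss) ** 2 for v in steady) / len(steady)) ** 0.5
    return dict(rise=rise, settling=settle, overshoot=overshoot,
                rmse_ss=rmse_ss, sd_ss=sd_ss)
```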
For bid optimisation performance, we use the campaign’s
total achieved click number and eCPC as the prime evaluation measures. We also monitor the impression related
performance such as impression number, AWR and CPM.
Table 1: Overall control performance on eCPC.
Cpg.
1458
2259
2261
2821
2997
3358
3386
3427
3476
Cntr
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
Rise
1
6
7
6
3
5
17
17
3
9
1
1
1
Settling
5
36
7
23
22
17
7
13
12
5
-
Overshoot
7.73
0
8.03
0
17.66
0
14.47
0
0.75
0
23.89
0
7.90
0
29.03
0
7.64
17.11
RMSE-SS
0.0325
0.0845
0.0449
0.0299
0.0242
0.0361
0.0337
0.0341
0.0396
0.0327
-
SD-SS
0.0313
0.0103
0.0411
0.0294
0.0216
0.026
0.0287
0.0341
0.0332
0.031
-
Table 2: Overall control performance on AWR.
Cpg.
1458
2259
2261
2821
2997
3358
3386
3427
3476
Cntr
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
PID
WL
Rise
4
3
4
1
1
1
6
1
1
1
2
1
4
1
2
3
2
1
Settling
10
7
6
13
4
8
3
8
8
7
8
5
6
13
6
7
Overshoot
16.86
0.00
17.08
3.91
16.39
2.02
16.44
5.77
13.68
0.00
22.08
0.13
18.85
2.95
26.63
0.24
27.15
1.49
RMSE-SS
0.0153
0.0448
0.0076
0.0833
0.0205
0.0086
0.0501
0.0151
0.0250
0.0332
0.0133
0.0300
0.0200
0.0482
0.0175
0.0308
SD-SS
0.0093
0.0231
0.0072
0.0113
0.0203
0.0086
0.0332
0.0151
0.0213
0.0211
0.0118
0.0291
0.0179
0.0257
0.0161
0.0271
Empirical Study Organisation. Our empirical study
consists of five parts with the focus on controlling two KPIs:
eCPC and AWR. (i) In Section 4.2, we answer whether the
proposed feedback control systems are practically capable
of controlling the KPIs. (ii) In Section 4.3, we study the
control difficulty with different reference value settings. (iii)
In Section 4.4, we focus on the PID controller and investigate its attributes on settling the target variable. (iv) In
Section 4.5, we leverage the PID controllers as a bid optimisation tool and study their performance on optimising the
campaign’s clicks and eCPC across multiple ad exchanges.
(v) Finally, more discussions about PID parameter tuning
and online update will be given in Section 4.6.
4.2 Control Capability
For each campaign, we check the performance of the two
controllers on two KPIs. We first tune the control parameters on the training data to minimise the settling time.
Then we adopt the controllers over the test data and observe the performance. The detailed control performance on
each campaign is provided in Table 1 for eCPC2 and Table 2 for AWR. Figure 5 shows the controlled KPI curves
against the timesteps (i.e., round). The dashed horizontal
line means the reference.
We see from the results that (i) all the PID controllers
can settle both KPIs within the error band (with the settling time less than 40 rounds), which indicates that the
PID control is capable of settling both KPIs at the given
reference value. (ii) The WL controller on eCPC does not
work that well on test data, even though we could find good
parameters on training data. This is due to the fact that
WL controller tries to affect the average system behaviour
through transient performance feedbacks while facing the
2
“-” cells mean invalid because of the failure to rise or settle.
Figure 5: Control performance on eCPC and AWR.
Figure 6: Control difficulty comparison with PID. (a) PID on eCPC; (b) PID on AWR.
Figure 7: Control performance for campaign 3386 on eCPC and AWR with different reference values.
huge dynamics of RTB. (iii) For WL on AWR, most campaigns are controllable while there are still two campaigns
that fail to settle at the reference value. (iv) Compared
to PID on AWR, WL always results in higher RMSE-SS
and SD-SS values but lower overshoot percentage. Those
control settings with a fairly short rise time usually face
a higher overshoot. (v) In addition, we observe that the
campaigns with higher CTR estimator AUC performance
(referring [34]) normally get shorter settling time.
According to the above results, the PID controller outperforms
the WL controller in the tested RTB cases. We believe this
is due to the fact that the integral factor in PID controller
helps reduce the accumulative error (i.e., RMSE-SS) and the
derivative factor helps reduce the variable fluctuation (i.e.,
SD-SS). And it is easier to settle the AWR than the eCPC.
This is mainly because AWR only depends on the market
price distribution while eCPC additionally involves the user
feedback, i.e., CTR, where the prediction is associated with
significant uncertainty.
4.3 Control Difficulty
In this section, we extend our control capability experiments further by adding higher and lower reference values in
comparison. Our goal is to investigate the impact of different
levels of reference values on control difficulty. We follow the
same scheme to train and test the controllers as Section 4.2.
However, instead of showing the exact performance value,
our focus here is on the performance comparison with different reference settings.
The distribution of achieved settling time, RMSE-SS and
SD-SS, with the setting of three reference levels, i.e., low,
middle and high, are shown in the form of box plot [20] in
the Figure 6(a) and 6(b) for the eCPC and AWR control with
PID. We observe that the average settling time, RMSE-SS
and SD-SS, are reduced as the reference values get higher.
This shows that generally the control tasks with higher reference eCPC and AWR are easier to achieve because one can
simply bid higher to win more and spend more. Also as the
higher reference is closer to the initial performance value, the
control signal does not bring serious bias or volatility, which
leads to the lower RMSE-SS and SD-SS. Due to the page limit, the control performance with WL is not presented here. The results are similar to those with PID.
Figure 7 gives the specific control curves of the two controllers with three reference levels on a sample campaign
3386. We find that the reference value which is farthest
away from the initial value of the controlled variable brings
the largest difficulty for settling, both on eCPC and AWR.
This suggests that advertisers setting an ambitious control
target will introduce the risk of unsettling or large volatility.
The advertisers should try to find a best trade-off between
the target value and the practical control performance.
4.4 PID Settling: Static vs. Dynamic References
The combination of proportional, integral and derivative
factors enables the PID feedback to automatically adjust the
settling progress during the control lifetime with high efficiency [7]. Alternatively, one can empirically adjust the reference value in order to achieve the desired reference value.
For example of eCPC control, if the campaign’s achieved
eCPC is higher than the initial reference value right after
exhausting the first half budget, the advertiser might want
to lower the reference value in order to accelerate the downward adjustment and finally reach its initial eCPC target
before running out of the budget. PID feedback controller
implicitly handles such problem via its integration factor [7,
28]. In this section, we investigate with our RTB feedback
control mechanism whether it is still necessary for advertisers to intentionally adjust the reference value according to
the campaign’s real-time performance.
Dynamic Reference Adjustment Model. To simulate the advertisers’ strategies to adaptively change the reference value of eCPC and AWR under the budget constraint,
we propose a dynamic reference adjustment model to calculate the new reference xr (tk+1 ) after tk :
x_r(t_{k+1}) = (B − s(t_k)) x_r x(t_k) / (B x(t_k) − s(t_k) x_r),   (20)
Figure 8: Dynamic reference control with PID.
Table 3: Control performance on multi-exchanges with the reference eCPC set for click maximisation.
Rise
13
15
13
10
3
3
3
7
0
6
3
15
4
Settling
26
18
13
38
14
29
30
38
35
17
10
15
38
Cpg.
3358
3386
3427
3476
AdEx
1
2
3
1
2
3
1
2
3
1
2
3
Rise
9
14
26
6
12
1
16
35
23
18
22
19
Settling
20
39
26
18
12
1
16
35
23
29
28
22
Furthermore, we directly compare the quantitative control
performance between dynamic-reference controllers (dyn) with
the standard static-reference ones (st) using PID. Besides the
settling time, we also compare the settling cost, which is the
spent budget before settling. The overall performance across
all the campaigns is shown in Figure 9(a) for eCPC control
and Figure 9(b) for AWR control. The results show that (i)
for eCPC control, the dynamic-reference controllers do not
perform better than the static-reference ones; (ii) for AWR
control, the dynamic-reference controllers could reduce the
settling time and cost, but the accuracy (RMSE-SS) and stability (SD-SS) are much worse than with the static-reference controllers. This is because the dynamic reference itself brings
volatility (see Figure 8). These results demonstrate that
the PID controller already provides a good enough way of settling
the variable towards the pre-specified reference, without the
need to dynamically adjust the reference to accelerate it using our methods. Other dynamic reference models might be
somewhat effective, but this is not the focus of this paper.
Figure 9: Dynamic v.s. static reference with PID. (a) PID on eCPC; (b) PID on AWR.
where xr is the initial reference value, x(tk ) is the achieved
KPI (eCPC or AWR) at timestep tk , B is the campaign
budget, s(tk ) is the cost so far. We can see from Eq. (20)
that when xr (tk ) = xr , xr (tk+1 ) will be set the same as xr ;
when xr (tk ) > xr , xr (tk+1 ) will be set lower than xr and
vice versa. For readability, we leave the detailed derivation
in appendix. Using Eq. (20) we calculate the new reference
eCPC/AWR xr (tk+1 ) and use it to substitute xr in Eq. (3)
to calculated the error factor so as to make the dynamicreference control.
Results and Discussions. Figure 8 shows the PID control performance with dynamic reference calculated based
on Eq. (20). The campaign performance gets stopped at the
point where the budget is exhausted. From the figure, we see
that for both eCPC and AWR control, the dynamic reference
takes an aggressive approach and pushes the eCPC or AWR
across the original reference value (dashed line). This actually simulates some advertisers’ strategy: when the performance is lower than the reference, then higher the dynamic
reference to push the total performance to the initial reference more quickly. Furthermore, for AWR control, we can
see the dynamic reference fluctuates seriously when the budget is to be exhausted soon. This is because when there is
insufficient budget left, the reference value will be set much
high or low by Eq. (20) in order to push the performance
back to the initial target. Apparently this is an ineffective
solution.
4.5 Reference Setting for Click Maximisation
We now study how the proposed feedback control could
be used for click optimisation purpose. As we have discussed in Section 3.4, bid requests usually come from different ad exchanges where the market power and thus the
CPM prices are disparate. We have shown that given a budget constraint, the number of clicks is maximised if one can
control the eCPC in each ad exchange by settling it at an
optimal eCPC reference for each of them, respectively.
In this experiment, we build a PID feedback controller
for each of its integrated ad exchanges, where their reference eCPCs are calculated via Eqs. (17) and (19). We train
the PID parameters on the training data of each campaign,
and then test the bidding performance on the test data.
As shown in Table 3, the eCPC on all the ad exchanges
for all tested campaigns get settled at the reference values3
(settling time less than 40). We denote our multi-exchange
eCPC feedback control method as multiple. Besides multiple,
we also test a baseline method which assigns a single optimal
uniform eCPC reference across all the ad exchanges, denoted
as uniform. We also use the linear bidding strategy without
feedback control [25] as a baseline, denoted as none4 .
The comparisons over various evaluation measures are reported in Figure 10. We observe that (i) the feedback-control-enabled bidding strategies uniform and multiple significantly outperform the non-controlled bidding strategy none in terms of the number of achieved clicks and eCPC. This suggests that properly controlling eCPCs would lead to an optimal solution for maximising clicks.
3 Campaign 2997 is only integrated with one ad exchange, thus not compared here.
4 Other bidding strategies [33, 17] are also investigated; producing similar results, they are omitted here for clarity.
Figure 10: Bid optimisation performance (clicks, CTR, eCPC, impressions, AWR and CPM of none, uniform and multiple, plus the click improvement of multiple v.s. none and multiple v.s. uniform per campaign).

Figure 11: Settlement of multi-exchange feedback control.

(ii) By reallocating the budget via setting different reference eCPCs on
different ad exchanges, multiple further outperforms uniform
on 7 out of 8 tested campaigns. (iii) On the impression-related measures, the feedback-control-enabled bidding strategies earn more impressions than the non-controlled bidding strategy by actively lowering their bids (CPM) and thus their AWR, while achieving larger bid volumes. This suggests that by allocating more budget to lower-valued impressions, one could potentially generate more clicks. As a by-product, this confirms the theoretical finding reported in [33].
As a case study, Figure 11 plots the settling performance of
the three methods on campaign 1458. The three dashed horizontal lines are the reference eCPCs on three ad exchanges.
We see that the eCPCs on the three ad exchanges successfully settle at the reference eCPCs. At the same time, the
campaign-level eCPC (multiple) settles at a lower value than
uniform and none.
4.6 PID Parameter Tuning
In this subsection, we share some lessons learned about
PID controller parameter tuning and online update.
Parameter Search. Empirically, λD does not change the control performance significantly. A small value of λD, e.g., 1 × 10−5, will reduce the overshoot and slightly shorten the settling time. Thus the parameter search is focused on
λP and λI. Instead of using the computationally expensive grid search, we perform an adaptive coordinate search. For every update, we fix one parameter and search over the other, seeking the value that yields the shortest settling time; the line-search step length shrinks exponentially with each shot. Normally a local optimum is reached after 3 or 4 iterations, and we find such solutions highly comparable with those of the expensive grid search.

Figure 12: Control with online/offline parameter updating.
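The following is a minimal sketch of the adaptive coordinate search described above. The objective function settling_time, the initial step size and the number of shots per coordinate are our own assumptions for illustration, not values given in the paper.

```python
def adaptive_coordinate_search(settling_time, lam_p, lam_i,
                               init_step=0.5, shrink=0.5,
                               n_rounds=4, n_shots=8):
    """Alternating 1-D search over (lambda_P, lambda_I) minimising settling time.

    `settling_time(lam_p, lam_i)` is assumed to replay the controller on
    training data and return the measured settling time (in rounds).
    """
    best = settling_time(lam_p, lam_i)
    for _ in range(n_rounds):
        for coord in (0, 1):                      # 0: lambda_P, 1: lambda_I
            step = init_step
            for _ in range(n_shots):
                for direction in (+1.0, -1.0):
                    cand = [lam_p, lam_i]
                    cand[coord] += direction * step
                    score = settling_time(*cand)
                    if score < best:
                        best, (lam_p, lam_i) = score, tuple(cand)
                step *= shrink                    # exponentially shrinking step
    return lam_p, lam_i, best
```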
Setting φ(t) Bounds. We also find that setting upper/lower bounds on the control signal φ(t) is important to keep the KPIs controllable. Due to the dynamics in RTB, it is common that the user CTR drops for a period, which makes the eCPC much higher. The corresponding feedback would probably produce a large negative gain on the bids, leading to extremely low bid prices and thus no wins, no clicks and no additional cost at all for the remaining rounds. In such a case, a proper lower bound (-2) on φ(t) eliminates this extreme effect by preventing a severely negative control signal. In addition, an upper bound (5) is used to avoid excessive variable growth beyond the reference value.
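A small sketch of applying these bounds is shown below. Only the bound values (-2 and 5) come from the text above; the textbook PID form stands in for Eqs. (3)-(4), whose exact error-factor definition is not repeated here, so treat the formula and names as illustrative assumptions.

```python
import numpy as np

PHI_MIN, PHI_MAX = -2.0, 5.0  # lower/upper bounds on the control signal

def pid_control_signal(errors, dt, lambda_p, lambda_i, lambda_d):
    """Compute a bounded PID control signal phi(t_k).

    `errors` is the history of error factors e(t_1..t_k); the standard
    proportional + integral + derivative form is assumed here.
    """
    e = errors[-1]
    integral = np.sum(errors) * dt
    derivative = (errors[-1] - errors[-2]) / dt if len(errors) > 1 else 0.0
    phi = lambda_p * e + lambda_i * integral + lambda_d * derivative
    return float(np.clip(phi, PHI_MIN, PHI_MAX))  # keep phi within [-2, 5]
```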
Online Parameter Updating. As the DSP runs with feedback control, the collected data can be immediately utilised to train a new PID controller and update the old one. We investigate the possibility of the online
updating of PID parameters with the recent data. Specifically, after initialising the PID parameters using training
data, we re-train the controller for every 10 rounds (i.e., before round 10, 20 and 30) in the test stage using all previous
data with the same parameter searching method as in the
training stage. The parameter searching in re-training takes
about 10 minutes for each controller, which is far shorter
than the round period (2 hours). Figure 12 shows the control
performance with PID parameters tuned online and offline
respectively. As we can see after the 10th round (i.e., the
first online tuning point), the online-tuned PIDs manage to
control the eCPC around the reference value more effectively than the offline-tuned ones, resulting in a shorter settling time and lower overshoot. In addition, no obvious disturbance
or instability occurs when we switch parameters. With the
online parameter updating, we can start to train the controllers based on several-hour training data and adaptively
update the parameters from the new data to improve the
control performance.
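A minimal sketch of this online re-tuning loop follows. The controller and retrain interfaces are assumptions standing in for the pipeline described above (one control round lasts 2 hours; re-training reuses the same parameter search as in the training stage).

```python
def run_with_online_updates(controller, rounds, retrain, retrain_every=10):
    """Re-tune the PID parameters before rounds 10, 20, 30, ... using all data so far.

    `controller.step(r)` runs one control round and returns its log;
    `retrain(history)` re-runs the parameter search on the accumulated data
    and returns new (lambda_p, lambda_i, lambda_d).
    """
    history = []
    for r in range(1, rounds + 1):
        if r % retrain_every == 0 and r < rounds:   # re-train before rounds 10, 20, 30
            controller.set_params(*retrain(history))
        history.append(controller.step(r))
    return history
```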
5. ONLINE DEPLOYMENT AND TEST
The proposed RTB feedback control system has been deployed and tested live on BigTree DSP5, a performance-driven mobile advertising DSP in China. BigTree DSP focuses on programmatic buying for optimal advertising performance on mobile devices, which makes it an ideal place to test our proposed solution.
The deployment environment is based on Aliyun elastic
cloud computing servers. A three-node cluster is deployed
for the DSP bidding agent, where each node is in Ubuntu
12.04, with 8 core Intel Xeon CPU E5-2630 (2.30GHz) and
8GB RAM. The controller module is implemented in Python with uWSGI and Nginx.
5 http://www.bigtree.mobi/

Figure 13: The online eCPC control performance and the accumulative click numbers of a mobile game campaign on BigTree DSP.

Figure 14: Relative performance for online test (bids, impressions, clicks and CTR of the PID-controlled agent relative to the non-controlled one).
For the BigTree DSP controller module, we deploy the PID control function and tune its parameters. Specifically, we
use the last 6-week bidding log data in 2014 as the training data for tuning PID parameters. A three-fold validation
process is performed to evaluate the generalisation of the
PID control performance, where the previous week data is
used as the training data while the later week data is used
for validation. The control factors (φ(t), e(tk ) in Eq. (4)) are
updated for every 90 minutes. After acquiring a set of robust and effective PID parameters, we launch the controller
module, including the monitor and actuator submodules, on
BigTree DSP.
Figure 13 shows the online eCPC control performance on
one of the iOS mobile game campaigns during 84 hours from
7 Jan. 2015 to 10 Jan. 2015. The reference eCPC is set as
28 RMB cent by the advertiser, which is about 0.8 times the
average eCPC value of the previous week where there was
no control. Following the same training process described
in the previous section, we update the online control factors
for every 90 minutes. From the result we can see the eCPC
value dropped from the beginning 79 cent to 30 during the
first day and then settled closed to the reference afterwards.
In the meantime, A/B testing is used to compare with
the non-controlled bidding agent (with the same sampling
rate but disjoint bid requests). Figure 14 shows the corresponding advertising performance comparison between a
non-controlled bidding agent and the PID-control bidding
agent during the test period with the same budget. As we
can see, by settling the eCPC value around the lower reference eCPC, the PID-control bidding agent acquires more
bid volume and win more (higher-CTR) impressions and
clicks, which demonstrates its ability of optimising the performance.
Compared with the offline empirical study, the online running is more challenging: (i) all pipeline steps including the
update of the CTR estimator, the KPI monitor linked to the
database and the PID controller should operate smoothly
against the market turbulence; (ii) the real market competition is highly dynamic during the new year period when we
launched our test; (iii) other competitors might tune their
bidding strategies independently or according to any changes of their performance after we employed the controlled bidding strategy. In sum, the successful eCPC control on an online commercial DSP verifies the effectiveness of our proposed feedback control RTB system.
6. RELATED WORK
Since RTB enables impression-level evaluation and bidding, much research has been done on RTB display advertising, including bidding strategy optimisation [25, 33], reserve price optimisation [30], ad exchange auction design [4], and ad tracking [11].
In order to perform optimal bidding, the DSP bidding agent should estimate both the utility and the cost of a given ad impression. The impression-level utility evaluation, including CTR and conversion rate (CVR) estimation, is an essential part of each bidding agent in DSPs. In [18] the
sparsity problem of CVR estimation is handled by modelling the conversions at different hierarchical levels. The
user click behaviour on mobile RTB ads is studied in [24].
On the cost evaluation side, bid landscape modelling and forecasting is very important for informing the bidding agent about the competitiveness of the market. The authors in
[9] break down the campaign-level bid landscape forecasting problem into “samples” by targeting rules and then employ a mixture model of log-normal distributions to build
the campaign-level bid landscape. The authors in [16] try
to reduce the bid landscape forecasting error through frequently re-building the landscape models. Based on the
utility and cost evaluation of the ad inventory, bid optimisation is performed to improve the advertising performance
under the campaign budget constraint. Given the estimated
CTR/CVR, the authors in [25, 18] employ linear bidding
functions based on truth-telling attributes of second price
auctions. However, given the budget constraint, the advertisers’ bidding behaviour is not truth-telling. The authors in
[33] propose a general bid optimisation framework to maximise the desired advertising KPI (e.g., total click number)
under the budget constraint. Besides the general bid optimisation, the explicit bidding rules such as frequency and
recency capping are studied in [31]. Moreover, the budget
pacing [17] which refers to smoothly delivering the campaign
budget is another important problem for DSPs.
There are a few research papers on recommender systems
leveraging feedback controllers for performance maintenance
and improvement. In [21], a rating updating algorithm based
on the PID controller is developed to exclude unfair ratings
in order to build a robust reputation system. The authors
in [32] apply a self-monitoring and self-adaptive approach
to perform a dynamic update of the training data fed into
the recommender system to automatically balance the computational cost and the prediction accuracy. Furthermore,
the authors in [14] adopt the more effective and well-studied
PID controller to the data-feeding scheme of recommender
systems, which is proved to be practically effective in their
studied training task.
Compared to the work of controlling the recommender system performance by changing the number of training cases, our control task in RTB is more challenging, with far more varied dynamics from the advertising environment, such as fluctuations in market price, auction volume and user behaviour. In [8], the authors discuss multiple aspects of a performance-driven RTB system, where impression volume control is one of the discussed aspects. Specifically, WL and a model-based controller are implemented to control the impression volume during each time interval. In [15], feedback control is used to perform budget pacing in order to stabilise the conversion volume. Compared to [8, 15], our work is a more comprehensive study focused on the feedback control techniques to address the instability problem in RTB. Besides
WL, we intensively investigate the PID controller, which
takes more factors into consideration than WL. For the controlled KPIs, we look into the control tasks on both eCPC
and AWR, which are crucial KPIs for performance-driven
campaigns and branding-based campaigns, respectively. In
addition, we proposed an effective model to calculate the
optimal eCPC reference to maximise the campaign’s clicks
using feedback controllers.
7. CONCLUSIONS
In this paper, we have proposed a feedback control mechanism for RTB display advertising, with the aim of making the achievement of the advertiser's KPI goal more robust. We mainly studied PID and WL controllers for controlling the eCPC and AWR KPIs. Through our comprehensive empirical study, we have the following findings. (i) Despite the high dynamics in RTB, the KPI variables are controllable using our feedback control mechanism. (ii) Different reference values bring different control difficulties, which are reflected in the control speed, accuracy and stability. (iii) The PID controller naturally finds its best way to settle the variable, and there is no necessity to adjust the reference value to accelerate the PID settling. (iv) By settling the eCPCs to the optimised reference values, the feedback controller is capable of performing bid optimisation. Deployed on a commercial DSP, the online test demonstrates the effectiveness of the feedback control mechanism in generating controllable advertising performance. In future work, we will further study applications based on feedback controllers in RTB, such as budget pacing and retargeting frequency capping.
8. REFERENCES
[1] K. Amin, M. Kearns, P. Key, and A. Schwaighofer. Budget
optimization for sponsored search: Censored learning in MDPs.
In UAI, 2012.
[2] K. J. Åström and P. Kumar. Control: A perspective.
Automatica, 50(1):3–43, 2014.
[3] K. J. Åström and R. M. Murray. Feedback systems: an
introduction for scientists and engineers. Princeton university
press, 2010.
[4] S. Balseiro, O. Besbes, and G. Y. Weintraub. Repeated auctions
with budgets in ad exchanges: Approximations and design.
Columbia Business School Research Paper, (12/55), 2014.
[5] R. Battiti. First-and second-order methods for learning:
between steepest descent and newton’s method. Neural
computation, 4(2):141–166, 1992.
[6] S. Bennett. Development of the PID controller. Control
Systems, 13(6):58–62, 1993.
[7] S. P. Bhattacharyya, H. Chapellat, and L. H. Keel. Robust
control. The Parametric Approach, by Prentice Hall PTR,
1995.
[8] Y. Chen, P. Berkhin, B. Anderson, and N. R. Devanur.
Real-time bidding algorithms for performance-based display ad
allocation. KDD, 2011.
[9] Y. Cui, R. Zhang, W. Li, and J. Mao. Bid landscape forecasting
in online ad exchange marketplace. In KDD, pages 265–273.
ACM, 2011.
[10] B. W. Dezotell. Water level controller, June 9 1936. US Patent
2,043,530.
[11] R. Gomer, E. M. Rodrigues, N. Milic-Frayling, and
M. Schraefel. Network analysis of third party tracking: User
exposure to tracking cookies through search. In WI, 2013.
[12] Google. The arrival of real-time bidding. Technical report, 2011.
[13] Y. Hu, Y. Koren, and C. Volinsky. Collaborative filtering for
implicit feedback datasets. In ICDM, 2008.
[14] T. Jambor, J. Wang, and N. Lathia. Using control theory for
stable and efficient recommender systems. In WWW, 2012.
[15] N. Karlsson and J. Zhang. Applications of feedback control in
online advertising. In American Control Conference, 2013.
[16] K. J. Lang, B. Moseley, and S. Vassilvitskii. Handling forecast
errors while bidding for display advertising. In WWW, 2012.
[17] K.-C. Lee, A. Jalali, and A. Dasdan. Real time bid optimization
with smooth budget delivery in online advertising. In ADKDD,
2013.
[18] K.-C. Lee, B. Orten, A. Dasdan, and W. Li. Estimating
conversion rate in display advertising from past performance
data. In KDD, 2012.
[19] H. Liao, L. Peng, Z. Liu, and X. Shen. iPinYou global RTB
bidding algorithm competition dataset. In ADKDD, 2014.
[20] R. McGill, J. W. Tukey, and W. A. Larsen. Variations of box
plots. The American Statistician, 32(1):12–16, 1978.
[21] K. Meng, Y. Wang, X. Zhang, X.-c. Xiao, et al. Control theory
based rating recommendation for reputation systems. In
ICNSC, 2006.
[22] S. Muthukrishnan. Ad exchanges: Research issues. In Internet
and network economics, pages 1–12. Springer, 2009.
[23] R. C. Nelson. Flight stability and automatic control, volume 2.
WCB/McGraw Hill, 1998.
[24] R. J. Oentaryo, E. P. Lim, D. J. W. Low, D. Lo, and
M. Finegold. Predicting response in mobile advertising with
hierarchical importance-aware factorization machine. In
WSDM, 2014.
[25] C. Perlich, B. Dalessandro, R. Hook, O. Stitelman, T. Raeder,
and F. Provost. Bid optimizing and inventory scoring in
targeted online advertising. In KDD, pages 804–812, 2012.
[26] G. Peterson and D. J. Cook. Decision-theoretic layered robotic
control architecture. In AAAI/IAAI, page 976, 1999.
[27] M. Richardson, E. Dominowska, and R. Ragno. Predicting
clicks: estimating the click-through rate for new ads. In WWW,
pages 521–530. ACM, 2007.
[28] D. E. Rivera, M. Morari, and S. Skogestad. Internal model
control: PID controller design. Industrial & Engineering
Chemistry Process Design and Development, 25(1), 1986.
[29] J. Xu, C. Chen, G. Xu, H. Li, and E. R. T. Abib. Improving
quality of training data for learning to rank using click-through
data. In WSDM, 2010.
[30] S. Yuan, B. Chen, J. Wang, P. Mason, and S. Seljan. An
Empirical Study of Reserve Price Optimisation in Real-Time
Bidding. In KDD, 2014.
[31] S. Yuan, J. Wang, and X. Zhao. Real-time bidding for online
advertising: measurement and analysis. In ADKDD, 2013.
[32] V. Zanardi and L. Capra. Dynamic updating of online
recommender systems via feed-forward controllers. In SEAMS,
2011.
[33] W. Zhang, S. Yuan, and J. Wang. Optimal real-time bidding for
display advertising. In KDD, 2014.
[34] W. Zhang, S. Yuan, J. Wang, and X. Shen. Real-time bidding
benchmarking with ipinyou dataset. arXiv, 2014.
[35] W. Zhang, Y. Zhang, B. Gao, Y. Yu, X. Yuan, and T.-Y. Liu.
Joint optimization of bid and budget allocation in sponsored
search. In KDD, 2012.
APPENDIX
Reference Adjustment Models. Here we provide the detailed derivation of the proposed dynamic-reference model, Eq. (20), in Section 4.4. We mainly introduce the derivation of the reference eCPC adjustment, while the derivation of the reference
AWR adjustment can be obtained similarly.
Let ξr be the initial eCPC target, ξ(tk ) be the achieved
eCPC before the moment tk , s(tk ) be the total cost so far,
and B be the campaign budget. In such setting, the current achieved click number is s(tk )/ξ(tk ) and the target click
number is B/ξr . In order to push the overall eCPC to ξr ,
i.e., push the total click number B/ξr with the budget B,
the reference eCPC for the remaining time ξr (tk+1 ) should
satisfy
s(tk)/ξ(tk) + (B − s(tk))/ξr(tk+1) = B/ξr.    (21)

Solving the equation, we have

ξr(tk+1) = (B − s(tk)) ξr ξ(tk) / (B ξ(tk) − s(tk) ξr).    (22)
The derivation of the reference AWR adjustment is similar, but involves an extra winning function which links the bid price and the winning probability [33]. The resulting formula is the same as Eq. (22). Using x as a general notation for the eCPC and AWR variables yields Eq. (20) in Section 4.4.
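For concreteness, the reference update of Eq. (22) can be computed as follows. The variable names are our own, and the guard against a non-positive denominator (which occurs when the target click count is already unreachable) is an illustrative assumption rather than part of the derivation.

```python
def next_reference_ecpc(budget, spend, achieved_ecpc, target_ecpc):
    """Dynamic reference eCPC for the remaining budget, following Eq. (22).

    budget:        campaign budget B
    spend:         cost so far s(t_k)
    achieved_ecpc: achieved eCPC xi(t_k)
    target_ecpc:   initial eCPC target xi_r
    """
    numerator = (budget - spend) * target_ecpc * achieved_ecpc
    denominator = budget * achieved_ecpc - spend * target_ecpc
    if denominator <= 0:  # assumed fallback when the remaining target is unreachable
        return target_ecpc
    return numerator / denominator
```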
| 3 |
Dual Recurrent Attention Units for Visual Question Answering
Ahmed Osman 1,2 , Wojciech Samek 1
1
Fraunhofer Heinrich Hertz Institute, Berlin, Germany
2
University of Freiburg, Freiburg, Germany
arXiv:1802.00209v1 [cs.AI] 1 Feb 2018
{ahmed.osman, wojciech.samek}@hhi.fraunhofer.de
Abstract
We propose an architecture for VQA which utilizes recurrent layers to generate visual and textual attention. The
memory characteristic of the proposed recurrent attention
units offers a rich joint embedding of visual and textual features and enables the model to reason relations between
several parts of the image and question. Our single model outperforms the first-place winner on the VQA 1.0 dataset and performs within a small margin of the current state-of-the-art ensemble model. We also experiment with replacing the attention mechanisms in other state-of-the-art models with our implementation and show increased accuracy. In both cases,
our recurrent attention mechanism improves performance
in tasks requiring sequential or relational reasoning on the
VQA dataset.
Figure 1. Diagram of the DRAU network.

1. Introduction

Although convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been successfully applied to various image and natural language processing tasks (cf. [1, 2, 3, 4]), these breakthroughs only slowly translate to multimodal tasks such as visual question answering (VQA), where the model needs to create a joint understanding of the image and question. Such multimodal tasks require joint visual and textual representations.

Since global features can hardly answer questions about certain local parts of the input, attention mechanisms have been used extensively in VQA recently [5, 6, 7, 8, 9, 10, 11, 12]. These mechanisms attempt to make the model predict based on spatial or lingual context. However, most attention mechanisms used in VQA models are rather simple, consisting of two convolutional layers followed by a softmax that generates the attention weights, which are summed over the image features. These shallow attention mechanisms may fail to select the relevant information from the joint representation of the question and image. Creating attention for complex questions, particularly sequential or relational reasoning questions, requires processing information in a sequential manner, for which recurrent layers are better suited due to their ability to capture relevant information over an input sequence.

In this paper, we propose an RNN-based joint representation to generate visual and textual attention. We argue that embedding an RNN in the joint representation helps the model process information in a sequential manner and determine what is relevant to solve the task. We refer to the combination of RNN embedding and attention as the Recurrent Textual Attention Unit (RTAU) and Recurrent Visual Attention Unit (RVAU), respective of their purpose. Furthermore, we employ these units in a fairly simple network, referred to as the Dual Recurrent Attention Units (DRAU) network, and show improved results over several baselines. Finally, we enhance state-of-the-art models by replacing the model's default attention mechanism with RVAU.

Our main contributions are the following:
• We introduce a novel approach to generate soft attention. To the best of our knowledge, this is the first attempt to generate attention maps using recurrent neural
networks. We provide quantitative and qualitative results showing performance improvements over the default attention used in most VQA models.
• Our attention modules are modular; thus, they can substitute existing attention mechanisms in most models fairly easily. We show that state-of-the-art models with
RVAU “plugged-in” perform consistently better than
their vanilla counterparts.
2. Related Work
This section discusses common methods that have been
explored in the past for VQA.
Bilinear representations Fukui et al. [7] use compact bilinear pooling to attend over the image features and combine it with the language representation. The basic concept
behind compact bilinear pooling is approximating the outer
product by randomly projecting the embeddings to a higher
dimensional space using Count Sketch projection [13] and
then exploiting Fast Fourier Transforms to compute an efficient convolution. An ensemble model using MCB won first
place in the VQA (1.0) 2016 challenge. Kim et al. [5] argue that compact bilinear pooling is still expensive to compute and show that it can be replaced by an element-wise product (Hadamard product) and a linear mapping (i.e. a fully-connected layer), which gives a lower-dimensional representation and also improves the model accuracy. Recently,
Ben-younes et al. [14] proposed using Tucker decomposition [15] with a low-rank matrix constraint as a bilinear
representation. They propose this fusion scheme in an architecture they refer to as MUTAN which as of this writing
is the current state-of-the-art on the VQA 1.0 dataset.
Attention-based Closely related to our work, Lu et al. [9]
were the first to feature a co-attention mechanism that applies attention to both the question and image. Nam et al.
[6] use a Dual Attention Network (DAN) that employs attention on both text and visual features iteratively to predict
the result. The goal behind this is to allow the image and
question attentions to iteratively guide each other in a synergistic manner.
RNNs for VQA Using recurrent neural networks (RNNs)
for VQA has been explored in the past. Xiong et al. [16]
build upon the dynamic memory network from Kumar and
Varaiya [17] and propose DMN+. DMN+ uses episodic
modules which contain attention-based Gated Recurrent
Units (GRUs). Note that this is not the same as what we
propose; Xiong et al. generate soft attention using convolutional layers and then use it to substitute the update gate of
the GRU. In contrast, our approach uses the recurrent layers
to generate the attention. Noh and Han [8] propose recurrent answering units in which each unit is a complete module that can answer a question about an image. They use
joint loss minimization to train the units. However during
testing, they use the first answering unit which was trained
from other units through backpropagation.
Notable mentions Kazemi and Elqursh [18] show that a
simple model can get state-of-the-art results with proper
training parameters. Wu et al. [19] construct a textual representation of the semantic content of an image and merges
it with textual information sourced from a knowledge base.
Ray et al. [20] introduce a task of identifying relevant questions for VQA. Kim et al. [21] apply residual learning techniques to VQA and propose a novel image attention visualization method using backpropagation.
3. Dual Recurrent Attention in VQA
We propose our method in this section. Figure 1 illustrates the flow of information in the DRAU model. Given
an image and question, we create the input representations
v and q. Next, these features are combined by 1 × 1 convolutions into two separate branches. Then, the branches
are passed to an RTAU and RVAU. Finally, the branches are
combined using a fusion operation and fed to the final classifier. The full architecture of the network is depicted in
Figure 2.
3.1. Input Representation
Image representation We use the 152-layer “ResNet”
pretrained CNN from He et al. [1] to extract image features.
Similar to [7, 6], we resize the images to 448 × 448 and
extract the last layer before the final pooling layer (res5c)
with size 2048 × 14 × 14. Finally, we use l2 normalization on all dimensions. Recently, Anderson et al. [22] have
shown that object-level features can provide a significant
performance uplift compared to global-level features from
pretrained CNNs. Therefore, we experiment with replacing
the ResNet features with FRCNN [23] features with a fixed
number of proposals per image (K = 36).
Question representation We use a fairly similar representation to [7]. In short, the question is tokenized and encoded using an embedding layer followed by a tanh activation. We also exploit pretrained GloVe vectors [24] and
concatenate them with the output of the embedding layer.
The concatenated vector is fed to a two-layer unidirectional
LSTM that contains 1024 hidden states each. In contrast
to Fukui et al., we use all the hidden states of both LSTMs
rather than concatenating the final states to represent the final question representation.
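A minimal PyTorch-style sketch of this question encoder is given below. The vocabulary size, GloVe dimensionality (300) and module names are our own assumptions; only the tanh embedding, the GloVe concatenation, the two 1024-unit LSTMs and the use of all hidden states come from the description above.

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Learned embedding (tanh) + GloVe concatenation, fed to two stacked
    unidirectional LSTMs; all hidden states of both LSTMs are returned."""
    def __init__(self, vocab_size, embed_dim=300, glove_dim=300, hidden=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm1 = nn.LSTM(embed_dim + glove_dim, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, token_ids, glove_vectors):
        # token_ids: (batch, seq); glove_vectors: (batch, seq, glove_dim)
        learned = torch.tanh(self.embed(token_ids))
        x = torch.cat([learned, glove_vectors], dim=-1)
        h1, _ = self.lstm1(x)
        h2, _ = self.lstm2(h1)
        return torch.cat([h1, h2], dim=-1)   # (batch, seq, 2 * hidden)
```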
3.2. 1 × 1 Convolution and PReLU
Figure 2. The proposed network. ⊕ denotes concatenation.

We apply multiple 1 × 1 convolution layers in the network, mainly for two reasons. First, they learn weights from the image and question representations in the early layers.
This is important especially for the image representation,
since it was originally trained for a different task. Second,
they are used to generate a common representation size.
To obtain a joint representation, we apply 1 × 1 convolutions followed by PReLU activations [1] on both the image
and question representations. Through empirical evidence,
PReLU activations were found to reduce training time significantly and improve performance compared to ReLU and
tanh activations. We provide these results in Section 4.
3.3. Recurrent Attention Units
The result from the above-mentioned layers is concatenated and fed to two separate recurrent attention units (RAU). Each RAU starts with another 1 × 1 convolution and PReLU activation:

c_a = PReLU(W_a x),    (1)

where W_a is the 1 × 1 convolution weights, x is the input to the RAU, and c_a is the output of the first PReLU. Furthermore, we feed the previous output into an unidirectional LSTM:

h_{a,n} = LSTM(c_{a,n}),    (2)

where h_{a,n} is the hidden state at time n.

To generate the attention weights, we feed all the hidden states of the previous LSTM to a 1 × 1 convolution layer followed by a softmax function. The 1 × 1 convolution layer could be interpreted as the number of glimpses the model sees:

W_{att,n} = softmax(PReLU(W_g h_{a,n})),    (3)

where W_g is the glimpses' weights and W_{att,n} is the attention weight vector.

Next, we use the attention weights to compute a weighted average of the image and question features:

att_{a,n} = Σ_{n=1}^{N} W_{att,n} f_n,    (4)

where f_n is the input representation and att_{a,n} is the attention applied on the input. Finally, the attention maps are fed into a fully-connected layer followed by a PReLU activation. Figure 3 illustrates the structure of a RAU.

y_{att,n} = PReLU(W_out att_{a,n}),    (5)

where W_out is a weight vector of the fully connected layer and y_{att,n} is the output of each RAU.

3.4. Reasoning layer

A fusion operation is used to merge the textual and visual branches. For DRAU, we experiment with using element-wise multiplication (Hadamard product) and MCB [7, 25]. The result of the fusion is given to a many-class classifier using the top 3000 frequent answers. We use a single-layer softmax with cross-entropy loss. This can be written as:

P_a = softmax(fusion_op(y_text, y_vis) W_ans),    (6)

where y_text and y_vis are the outputs of the RAUs, W_ans represents the weights of the multi-way classifier, and P_a is the probability of the top 3000 frequent answers. The final answer â is chosen according to the following:

â = argmax P_a.    (7)
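To make the unit concrete, here is a minimal PyTorch-style sketch of one recurrent attention unit implementing Eqs. (1)-(5). The tensor layout (features as a sequence of N positions), the default number of glimpses and all module names are our own assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAttentionUnit(nn.Module):
    """One RAU: 1x1 conv + PReLU -> LSTM -> attention -> weighted sum -> FC.

    Input features are a sequence of N positions (image regions or question
    tokens), each with `in_dim` channels.
    """
    def __init__(self, in_dim, hidden=1024, glimpses=2, out_dim=1024):
        super().__init__()
        self.conv = nn.Linear(in_dim, hidden)      # acts as a 1x1 convolution
        self.act1 = nn.PReLU()
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.att = nn.Linear(hidden, glimpses)     # glimpse logits, Eq. (3)
        self.act2 = nn.PReLU()
        self.fc = nn.Linear(glimpses * in_dim, out_dim)
        self.act3 = nn.PReLU()

    def forward(self, feats):
        # feats: (batch, N, in_dim), the concatenated joint representation
        c = self.act1(self.conv(feats))                  # Eq. (1)
        h, _ = self.lstm(c)                              # Eq. (2)
        w = F.softmax(self.act2(self.att(h)), dim=1)     # Eq. (3), softmax over N
        att = torch.einsum('bng,bnc->bgc', w, feats)     # Eq. (4), one sum per glimpse
        return self.act3(self.fc(att.flatten(1)))        # Eq. (5)
```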
4. Experiments and Results
Figure 3. Recurrent Attention Unit.

Experiments are performed on the VQA 1.0 and 2.0 datasets [26, 27]. These datasets use images from the
MS-COCO dataset [28] and generate questions and labels
(10 labels per question) using Amazon’s Mechanical Turk
(AMT). Compared to VQA 1.0, VQA 2.0 adds more imagequestion pairs to balance the language prior present in the
VQA 1.0 dataset. The ground truth answers in the VQA
dataset are evaluated using human consensus.
Acc(a) = min( Σ 1[a is in human annotation] / 3, 1 ).    (8)
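A small helper computing this consensus metric is shown below; the 10-annotation convention comes from the text above, while the function name and Counter-based counting are merely illustrative.

```python
from collections import Counter

def vqa_accuracy(predicted, human_answers):
    """VQA consensus accuracy of Eq. (8): min(#matching annotators / 3, 1)."""
    matches = Counter(human_answers)[predicted]
    return min(matches / 3.0, 1.0)

# Example: 10 annotators, 4 of them said "owl"
print(vqa_accuracy("owl", ["owl"] * 4 + ["bird"] * 6))  # -> 1.0
```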
We evaluate our results on the validation, test-dev and test-std splits of each dataset. Models evaluated on the validation set use train and Visual Genome for training. For
the other splits, we include the validation set in the training
data. However, the models using FRCNN features do not
use data augmentation with Visual Genome.
To train our model, we use Adam [29] for optimization
with β1 = 0.9, β2 = 0.999, and an initial learning rate
of 7 × 10−4. The final model is trained with a small
batch size of 32 for 400K iterations. We did not fully explore tuning the batch size which explains the relatively
high number of training iterations. Dropout (p = 0.3) is
applied after each LSTM and after the fusion operation. All
weights are initialized as described in [30] except LSTM
layers which use an uniform weight distribution. The pretrained ResNet was fixed during training due to the massive
computational overhead of fine-tuning the network for the
VQA task. While VQA datasets provide 10 answers per
image-question pair, we sample one answer randomly for
each training iteration.
4.1. VQA 1.0 Experiments
During early experiments, the VQA 2.0 dataset was not
yet released. Thus, the baselines and early models were
evaluated on the VQA 1.0 dataset. While building the final model, several parameters were changed, mainly, the
learning rate, activation functions, dropout value, and other
modifications which we discuss in this section.
Baselines We started by designing three baseline architectures. The first baseline produced predictions solely
from the question while totally ignoring the image. The
Baselines        Y/N    Num.   Other  All
Language only    78.56  27.98  30.76  48.3
Simple MCB       78.64  32.98  39.79  54.82
Joint LSTM       79.90  36.96  49.58  59.34

Table 1. Evaluation of the baseline models on the VQA 1.0 validation split (Open Ended task).
model used the same question representation described in
[7] and passed the output to a softmax 3000-way classification layer. The goal of this architecture was to assess the
extent of the language bias present in VQA.
The second baseline is a simple joint representation of
the image features and the language representation. The
representations were combined using the compact bilinear
pooling from [25]. We chose this method specifically because it was shown to be effective by Fukui et al. [7]. The
main objective of this model is to measure how a robust
pooling method of multimodal features would perform on
its own without a deep architecture or attention. We refer to
this model as Simple MCB.
For the last baseline, we substituted the compact bilinear pooling from Simple MCB with an LSTM consisting
of hidden states equal to the image size. A 1 × 1 convolutional layer followed by a tanh activation was used on the image features prior to the LSTM, while the question representation was replicated to obtain a common embedding size for both representations. This model is referred to as Joint LSTM.
We begin by testing our baseline models on the VQA
1.0 validation set. As shown in Table 1, the language-only
baseline model managed to get 48.3% overall. More impressively, it scored 78.56% on Yes/No questions. The Simple MCB model further improves the overall performance,
although little improvement is gained in the binary Yes/No
tasks. Replacing MCB with our basic Joint LSTM embedding improves performance across the board.
Modifications to the Joint LSTM Model We test several variations of the Joint LSTM baseline which are highlighted in Table 2. Using PReLU activations has helped
in two ways. First, it reduced time for convergence from
240K iterations to 120K. Second, the overall accuracy has
improved, especially in the Other category. The next modifications were inspired by the results from [18]. We experimented with appending positional features which can be described as the coordinates of each pixel to the depth/feature
dimension of the image representation. When unnormalized
with respect to the other features, it worsened results significantly, dropping the overall accuracy by over 2 points.
Model                    Y/N    Num.   Other  All
Joint LSTM baseline      79.90  36.96  49.58  59.34
PReLU                    79.61  36.21  50.77  59.74
Pos. features            79.68  36.52  46.59  57.71
Pos. features (norm.)    79.69  36.36  50.69  59.75
High dropout             79.03  34.84  47.25  57.59
Extra FC                 78.86  33.51  45.57  56.51

Table 2. Evaluation of the Joint LSTM model and its modifications on the VQA 1.0 validation split (Open Ended task).
Normalizing positional features did not have enough of a
noticeable improvement (0.01 points overall) to warrant its
effectiveness. Next, increasing all dropout values from 0.3 to 0.5 deteriorated the network's accuracy, particularly
in the Number and Other categories. The final modification was inserting a fully connected layer with 1024 hidden
units before the classifier, which surprisingly dropped the
accuracy massively.
4.2. VQA 2.0 Experiments
After the release of VQA 2.0, we shifted our empirical
evaluation towards the newer dataset. First, we retrain and
retest our best performing VQA 1.0 model Joint LSTM as
well as several improvements and modifications.
Since VQA 2.0 was built to reduce the language prior
and bias inherent in VQA, the accuracy of Joint LSTM drops
significantly as shown in Table 3. Note that all the models that were trained so far do not have explicit visual or
textual attention implemented. Our first network with explicit visual attention, RVAU, shows an accuracy jump by
almost 3 points compared to the Joint LSTM model. This
result highlights the importance of attention for good performance in VQA. Training the RVAU network as a multilabel task (RVAUmultilabel ), i.e. using all available annotations
at each training iteration, drops the accuracy horribly. This
is the biggest drop in performance so far. This might be
caused by the variety of annotations in VQA for each question, which makes the task of optimizing all answers at once
much harder.
DRAU Evaluation The addition of RTAU marks the creation of our DRAU network. The DRAU model shows favorable improvements over the RVAU model. Adding textual attention improves overall accuracy by 0.56 points.
Substituting the PReLU activations with ReLU (DRAUReLU )
massively drops performance. While further training might
have helped the model improve, PReLU offers much faster
convergence.

Model                        Y/N    Num.   Other  All
Joint LSTM w/PReLU           72.04  37.95  48.58  56.00
RVAU                         74.59  37.75  52.81  59.02
RVAU (multilabel)            77.53  36.05  40.18  53.67
DRAU (Hadamard fusion)       76.62  38.92  52.09  59.58
DRAU (answer vocab = 5k)     76.33  38.21  51.85  59.27
DRAU (ReLU)                  72.69  34.92  45.05  54.11
DRAU (no final dropout)      77.02  38.26  50.17  58.69
DRAU (high final dropout)    76.47  38.71  52.52  59.71
MCB [26]                     -      -      -      59.14
Kazemi and Elqursh [18] (1)  -      -      -      59.67

(1) Concurrent work.

Table 3. Evaluation of RVAU and DRAU-based models on the VQA 2.0 validation split (Open Ended task).

Increasing the value of the dropout layer after
the fusion operation (DRAUhigh final dropout ) improves performance by 0.13 points, in contrast to the results of the Joint
LSTM model on VQA 1.0. Note that on the VQA 1.0 tests,
we changed the values of all layers that we apply dropout
on, but here we only change the last one after the fusion
operation. Totally removing this dropout layer worsens accuracy. This suggests that the optimal dropout value should
be tuned per-layer.
We test a few variations of DRAU on the test-dev set. We
can observe that VQA benefits from more training data; the
same DRAU network performs better (62.24% vs. 59.58%)
thanks to the additional data. Most of the literature resizes the input images from 224 × 224 to 448 × 448 when extracting ResNet features.
To test the effect of this scaling, we train a DRAU variant
with the original ResNet size (DRAUsmall ). Reducing the
image feature size from 2048 × 14 × 14 to 2048 × 7 × 7
adversely affects accuracy as shown in Table 4. Adding
more glimpses significantly reduces the model’s accuracy
(DRAUglimpses = 4 ). A cause of this performance drop could
be related to the fact that LSTMs process the input in a one-dimensional fashion and thus decide that each input is either relevant or non-relevant. This might explain why the attention maps of DRAU separate the objects from the background in two glimpses, as we will mention in Section 5.
2D Grid LSTMs [31] might help remove this limitation.
Removing the extra data from Visual Genome hurts the
model’s accuracy. That supports the fact that VQA is very
diverse and that extra data helps the model perform better.
Finally, substituting Hadamard product of MCB in the final
fusion operation boosts the network’s accuracy significantly
by 1.17 points (DRAUMCB fusion ).
As mentioned in Section 3.1, we experiment replacing
the global ResNet features with object-level features as suggested by [22]. This change provides a significant performance increase of 3.04 points (DRAU (FRCNN features)).

Model                     Y/N    Num.   Other  All
DRAU (Hadamard fusion)    78.27  40.31  53.57  62.24
DRAU (small)              77.53  38.78  49.93  60.03
DRAU (glimpses = 4)       76.82  39.15  51.07  60.32
DRAU (no genome)          79.63  39.55  51.81  61.88
DRAU (MCB fusion)         78.97  40.06  55.47  63.41
DRAU (FRCNN features)     82.85  44.78  57.4   66.45

Table 4. Evaluation of later DRAU-based models on the VQA 2.0 test-dev split (Open Ended task).
Model            Y/N    Num.   Other  All
MCB [7]^3        78.41  38.81  53.23  61.96
MCB w/RVAU       77.31  40.12  54.64  62.33
MUTAN [14]       79.06  38.95  53.46  62.36
MUTAN w/RVAU     79.33  39.48  53.28  62.45

Table 5. Results of state-of-the-art models with RVAU on the VQA 2.0 test-dev split (Open Ended task).
4.3. Transplanting RVAU in other models
To verify the effectiveness of the recurrent attention
units, we replace the attention layers in MCB and MUTAN
[14] with RVAU.
For MCB [7] we remove all the layers after the first MCB
operation until the first 2048-d output and replace them with
RVAU. Due to GPU memory constraints, we reduced the
size of each hidden unit in RVAU’s LSTM from 2048 to
1024. In the same setting, RVAU significantly helps improve the original MCB model’s accuracy as shown in Table 5. The most noticeable performance boost can be seen
in the number category, which supports our hypothesis that
recurrent layers are more suited for sequential reasoning.
Furthermore, we test RVAU in the MUTAN model [14].
The authors use a multimodal vector with dimension size of
510 for the joint representations. For coherence, we change
the usual dimension size in RVAU to 510. At the time of this
writing, the authors have not released results on VQA 2.0
using a single model rather than a model ensemble. Therefore, we train a single-model MUTAN using the authors’
implementation.2 The story does not change here: RVAU
improves the model’s overall accuracy.
2 https://github.com/Cadene/vqa.pytorch
3 http://www.visualqa.org/roe_2017.html
4.4. DRAU versus the state-of-the-art

VQA 1.0 Table 6 shows a comparison between DRAU
and other state-of-the-art models. Excluding model ensembles, DRAU performs favorably against other models. To
the best of our knowledge, [5] has the best single model
performance of 65.07% on the test-std split which is very
close to our best model (65.03%). Small modifications or hyperparameter tuning could push our model further. Finally,
the FRCNN image features boost the model's performance
close to the state-of-the-art ensemble model.
VQA 2.0 Our model DRAU (MCB fusion) landed in 8th place
in the VQA 2.0 Test-standard task.4 . Currently, all reported
submissions that outperform our single model use model
ensembles. Using FRCNN features boosted the model’s
performance to outperform some of the ensemble models
(66.85%). The first place submission [22] reports using
an ensemble of 30 models. In their report, the best single
model that uses FRCNN features achieves 65.67% on the
test-standard split which is outperformed by our best single
model DRAUFRCNN features .
5. DRAU versus MCB
In this section, we provide qualitative results that highlight the effect of the recurrent layers compared to the MCB
model.
The strength of RAUs is notable in tasks that require sequentially processing the image or relational/multi-step reasoning. In the same setting, DRAU outperforms MCB in
counting questions. This is validated in a subset of the validation split questions in the VQA 2.0 dataset as shown in
Figure 4. Figure 5 shows some qualitative results between
DRAU and MCB. For fair comparison we compare the first
attention map of MCB with the second attention map of our
model. We do so because the authors of MCB [7] visualize
the first map in their work5 . Furthermore, the first glimpse
of our model seems to be the complement of the second
attention, i.e. the model separates the background and the
target object(s) into separate attention maps. We have not
tested the visual effect of more than two glimpses on our
model.
In Figure 5, it is clear that the recurrence helps the model
attend to multiple targets as apparent in the difference of the
attention maps between the two models. DRAU seems to
also know how to count the right object(s). The top right example in Figure 5 illustrates that DRAU is not easily fooled
by counting whatever object is present in the image but
rather the object that is needed to answer the question. This
4 https://evalai.cloudcv.org/web/challenges/
challenge-page/1/leaderboard
5 https://github.com/akirafukui/vqa-mcb/blob/
master/server/server.py#L185
                         Test-dev                      Test-standard
Model                    Y/N    Num.   Other  All      Y/N    Num.   Other  All
SAN [10]                 79.3   36.6   46.1   58.7     -      -      -      58.9
DMN+ [16]                80.5   36.8   48.3   60.3     -      -      -      60.4
MRN [21]                 82.28  38.82  49.25  61.68    82.39  38.23  49.41  61.84
HieCoAtt [9]             79.7   38.7   51.7   61.8     -      -      -      62.1
RAU [8]                  81.9   39.0   53.0   63.3     81.7   38.2   52.8   63.2
DAN [6]                  83.0   39.1   53.9   64.3     82.8   38.1   54.0   64.2
MCB [7] (e = 7)          83.4   39.8   58.5   66.7     83.24  39.47  58.00  66.47
MLB [5] (1 model)        -      -      -      -        84.02  37.90  54.77  65.07
MLB [5] (e = 7)          84.57  39.21  57.81  66.77    84.61  39.07  57.79  66.89
MUTAN [14] (e = 5)       85.14  39.81  58.52  67.42    84.91  39.79  58.35  67.36
DRAU (Hadamard fusion)   82.73  38.18  54.43  64.3     -      -      -      -
DRAU (MCB fusion)        82.44  38.22  56.30  65.1     82.41  38.33  55.97  65.03
DRAU (FRCNN features)    84.92  39.16  57.70  66.86    84.87  40.02  57.91  67.16

Table 6. DRAU compared to the state-of-the-art on the VQA 1.0 dataset (Open Ended task); e = n corresponds to a model ensemble of size n. "-" indicates results not reported.
                              Test-dev                      Test-standard
Model                         Y/N    Num.   Other  All      Y/N    Num.   Other  All
UPC                           67.1   31.54  25.46  43.3     66.97  31.38  25.81  43.48
MIC TJ                        69.02  34.52  35.76  49.32    69.22  34.16  35.97  49.56
neural-vqa-attention [10]     70.1   35.39  47.32  55.35    69.77  35.65  47.18  55.28
CRCV REU                      73.91  36.82  54.85  60.65    74.08  36.43  54.84  60.81
VQATeam MCB [26]              78.41  38.81  53.23  61.96    78.82  38.28  53.36  62.27
DCD ZJU [32]                  79.84  38.72  53.08  62.47    79.85  38.64  52.95  62.54
VQAMachine [33]               79.4   40.95  53.24  62.62    79.82  40.91  53.35  62.97
POSTECH                       78.98  40.9   55.35  63.45    79.32  40.67  55.3   63.66
UPMC-LIP6 [14]                81.96  41.62  57.07  65.57    82.07  41.06  57.12  65.71
LV NUS [34]                   81.95  48.31  59.99  67.71    81.92  48.38  59.63  67.64
DLAIT                         82.94  47.08  59.94  67.95    83.17  46.66  60.15  68.22
HDU-USYD-UNCC [35]            84.39  45.76  59.14  68.02    84.5   45.39  59.01  68.09
Adelaide-Teney ACRV MSR [36]  85.24  48.19  59.88  69.00    85.54  47.45  59.82  69.13
DRAU (Hadamard fusion)        78.27  40.31  53.58  62.24    78.86  39.91  53.76  62.66
DRAU (MCB fusion)             78.97  40.06  55.48  63.41    79.27  40.15  55.55  63.71
DRAU (FRCNN features)         82.85  44.78  57.4   66.45    83.35  44.37  57.63  66.85

Table 7. DRAU compared to the current submissions on the VQA 2.0 dataset (Open Ended task).
property also translates to questions that require relational
reasoning. The second column in Figure 5 demonstrates
how DRAU can attend to the location required to answer the
question based on the textual and visual attention maps.
6. Conclusion
We proposed an architecture for VQA with a novel attention unit, termed the Recurrent Attention Unit (RAU).
The recurrent layers help guide the textual and visual atten-
tion, since the network can reason about relations between several
parts of the image and question. We provided quantitative
and qualitative results indicating the usefulness of a recurrent attention mechanism. Our DRAU model showed improved performance in tasks requiring sequential/complex
reasoning such as counting or relational reasoning over [7],
the winners of the VQA 2016 challenge. In VQA 1.0, we
achieved near state-of-the-art results for single model performance with our DRAU network (65.03 vs. 65.07 [5]).
Adding the FRCNN features gets the model within margin
Figure 4. Results on questions that require counting in the VQA 2.0 validation set. The compared question types are 'how many people are in' (905 occurrences), 'how many people are' (2,005) and 'how many' (20,462); DRAU outperforms MCB on all three (47.28 vs. 46.31, 43.98 vs. 42.06 and 44.30 vs. 42.57).
of the state-of-the-art 5-model ensemble MUTAN [14]. Finally, we demonstrated that substituting the visual attention
mechanism in other networks, MCB [7] and MUTAN [14],
consistently improves their performance. In future work we
will investigate implicit recurrent attention mechanisms using recently proposed explanation methods [37, 38].
References
[1] K. He, X. Zhang, S. Ren, and J. Sun, "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034.
[2] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2018.
[3] D. Bahdanau, K. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," in International Conference on Representation Learning (ICLR), 2015.
[4] R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang et al., "Abstractive text summarization using sequence-to-sequence RNNs and beyond," arXiv:1602.06023, 2016.
[5] J.-H. Kim, K.-W. On, J. Kim, J.-W. Ha, and B.-T. Zhang, "Hadamard Product for Low-rank Bilinear Pooling," in International Conference on Representation Learning (ICLR), 2017.
[6] H. Nam, J.-W. Ha, and J. Kim, "Dual Attention Networks for Multimodal Reasoning and Matching," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 299–307.
[7] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach, "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding," in Empirical Methods in Natural Language Processing (EMNLP), 2016, pp. 457–468.
[8] H. Noh and B. Han, "Training Recurrent Answering Units with Joint Loss Minimization for VQA," arXiv:1606.03647, 2016.
[9] J. Lu, J. Yang, D. Batra, and D. Parikh, "Hierarchical Question-Image Co-Attention for Visual Question Answering," in Advances in Neural Information Processing Systems (NIPS), 2016, pp. 289–297.
[10] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola, "Stacked Attention Networks for Image Question Answering," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 21–29.
[11] K. Chen, J. Wang, L.-C. Chen, H. Gao, W. Xu, and R. Nevatia, "ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering," arXiv:1511.05960, 2015.
[12] H. Xu and K. Saenko, "Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering," in European Conference on Computer Vision (ECCV), 2016, pp. 451–466.
[13] M. Charikar, K. Chen, and M. Farach-Colton, "Finding frequent items in data streams," Theoretical Computer Science, vol. 312, no. 1, pp. 3–15, 2004.
[14] H. Ben-younes, R. Cadene, M. Cord, and N. Thome, "MUTAN: Multimodal Tucker Fusion for Visual Question Answering," arXiv:1705.06676, 2017.
[15] L. R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
[16] C. Xiong, S. Merity, and R. Socher, "Dynamic Memory Networks for Visual and Textual Question Answering," in International Conference on Machine Learning (ICML), 2016, pp. 2397–2406.
[17] P. R. Kumar and P. Varaiya, Stochastic systems: Estimation, identification, and adaptive control. SIAM, 2015.
[18] V. Kazemi and A. Elqursh, "Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering," arXiv:1704.03162, 2017.
[19] Q. Wu, P. Wang, C. Shen, A. Dick, and A. van den Hengel, "Ask Me Anything: Free-form Visual Question Answering Based on Knowledge from External Sources," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4622–4630.
[20] A. Ray, G. Christie, M. Bansal, D. Batra, and D. Parikh, "Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions," in Empirical Methods in Natural Language Processing (EMNLP), 2016, pp. 919–924.
Figure 5. DRAU vs. MCB qualitative examples; attention maps for both models shown, only DRAU has textual attention. The examples ask 'How many camels are in the photo?' (GT: 0), 'How many lanterns hang off the clock tower?' (GT: 4), 'How many people are shown?' (GT: 3), 'What is on the floor leaning on the bench in between the people?' (GT: racket), 'What animal does the vase depict?' (GT: owl) and 'How many animals are there?' (GT: 2); DRAU answers all of them correctly while MCB answers none of them correctly.
Learning of Human-like Algebraic Reasoning Using Deep Feedforward Neural Networks
arXiv:1704.07503v1 [cs.AI] 25 Apr 2017
Cheng-Hao Cai¹ Dengfeng Ke¹* Yanyan Xu² Kaile Su³
¹ National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
² School of Information Science and Technology, Beijing Forestry University
³ Institute for Integrated and Intelligent Systems, Griffith University
[email protected], [email protected], [email protected], [email protected]
Abstract
There is a wide gap between symbolic reasoning
and deep learning. In this research, we explore
the possibility of using deep learning to improve
symbolic reasoning. Briefly, in a reasoning system,
a deep feedforward neural network is used to guide
rewriting processes after learning from algebraic
reasoning examples produced by humans. To
enable the neural network to recognise patterns of
algebraic expressions with non-deterministic sizes,
reduced partial trees are used to represent the
expressions. Also, to represent both top-down
and bottom-up information of the expressions,
a centralisation technique is used to improve
the reduced partial trees.
Besides, symbolic
association vectors and rule application records
are used to improve the rewriting processes.
Experimental results reveal that the algebraic
reasoning examples can be accurately learnt only
if the feedforward neural network has enough
hidden layers. Also, the centralisation technique,
the symbolic association vectors and the rule
application records can reduce error rates of
reasoning. In particular, the above approaches have
led to a 4.6% error rate of reasoning on a dataset of
linear equations, differentials and integrals.
1 Introduction
It is challenging to integrate symbolic reasoning and deep
learning in effective ways [Garcez et al., 2015]. In the
field of symbolic reasoning, much work has been done on
using formal methods to model reliable reasoning processes
[Chang and Lee, 1973]. For instance, algebraic reasoning
can be modelled by using first-order predicate logics or even
higher-order logics, but these logics are usually designed by
experienced experts, because it is challenging for machines
to learn these logics from data automatically [Bundy and
Welham, 1981; Nipkow et al., 2002]. On the other hand,
recent approaches on deep learning have revealed that deep
neural networks are powerful tools for learning from data
[Lecun et al., 2015], especially for learning speech features
[Mohamed et al., 2012] and image features [Sun et al., 2015].
(* Corresponding Author.)
However, not much work has been done on using deep neural
networks to learn formal symbolic logics. To close the gap
between symbolic reasoning and deep learning, this research
explores the possibility of using deep feedforward neural
networks to learn logics of rewriting in algebraic reasoning.
In other words, we try to teach neural networks to solve
mathematical problems, such as finding the solution of an
equation and calculating the differential or integral of an
expression, by using a rewriting system.
Rewriting is an important technique in symbolic reasoning.
Its core concept is to simplify reasoning processes by using
equivalence relations between different expressions [Bundy,
1983]. Usually, rewriting is based on a tree-manipulating
system, as many algebraic expressions can be represented by
using tree structures, and the manipulation of symbols in the
expressions is equivalent to the manipulation of nodes, leaves
and sub-trees on the trees [Rosen, 1973]. To manipulate
symbols, a rewriting system usually uses one way matching,
which is a restricted application of unification, to find a
desired pattern from an expression and then replaces the
pattern with another equivalent pattern [Bundy, 1983]. In
order to reduce the search space, rewriting systems are
expected to be Church-Rosser, which means that they should
be terminating and locally confluent [Rosen, 1973; Huet,
1980]. Thus, very careful designs and analyses are needed:
A design can start from small systems, because proving
termination and local confluence of a smaller system is
usually easier than proving those of a larger system [Bundy
and Welham, 1981]. Some previous work has focused on this
aspect: The Knuth-Bendix completion algorithm can be used
to solve the problem of local confluence [Knuth and Bendix,
1983], and Huet [1981] has provided a proof of correctness
for this algorithm. Also, dependency pairs [Arts and Giesl,
2000] and semantic labelling [Zantema, 1995] can solve the
problem of termination for some systems. After multiple
small systems have been designed, they can be combined into
a whole system, because the direct sum of two Church-Rosser
systems holds the same property [Toyama, 1987].
Deep neural networks have been used in many fields
of artificial intelligence, including speech recognition
[Mohamed et al., 2012], human face recognition [Sun et
al., 2015], natural language understanding [Sarikaya et al.,
2014], reinforcement learning for playing video games [Mnih
et al., 2015] and Monte Carlo tree search for playing Go
[Silver et al., 2016]. Recently, some researchers have been trying
to extend them to reasoning tasks. For instance, Irving et
al. [2016] have proposed DeepMath for automated theorem
proving with deep neural networks. Also, Serafini and Garcez
[2016] have proposed logic tensor networks to combine deep
learning with logical reasoning. In addition, Garnelo et al.
[2016] have explored deep symbolic reinforcement learning.
In this research, we use deep feedforward neural networks
[Lecun et al., 2015] to guide rewriting processes. This
technique is called human-like rewriting, as it is adapted from
standard rewriting and can simulate human’s behaviours of
using rewrite rules after learning from algebraic reasoning
schemes. The following sections provide detailed discussions
about this technique: Section 2 introduces the core method
of human-like rewriting. Section 3 discusses algebraic
reasoning schemes briefly. Section 4 provides three methods
for system improvement. Section 5 provides experimental
results of the core method and the improvement methods.
Section 6 concludes the paper.
2 Human-like Rewriting

Rewriting is an inference technique for replacing expressions or subexpressions with equivalent ones [Bundy, 1983]. For instance,¹ given two rules of the Peano axioms:

x + 0 ⇒ x   (1)
x + S(y) ⇒ S(x + y)   (2)

S(0) + S(S(0)) can be rewritten via:

S(0) + S(S(0)) ⇒ S(S(0) + S(0))     by (2)
               ⇒ S(S(S(0) + 0))     by (2)
               ⇒ S(S(S(0)))         by (1)   (3)

More detailed discussions about the Peano axioms can be found in [Pillay, 1981]. Generally, rewriting requires a source expression s and a set of rewrite rules τ. Let l ⇒ r denote a rewrite rule in τ, t a subexpression of s, and θ the most general unifier of one way matching from l and t. A single rewriting step of inference can be formed as:

  (l ⇒ r) ∈ τ    l[θ] ≡ t    s(t)
  --------------------------------   (4)
             s(r[θ])

It is noticeable that θ is only applied to l, but not to t. The reason is that one way matching, which is a restricted application of unification, requires that all substitutions in a unifier are only applied to the left-hand side of a unification pair. Standard rewriting repeats the above step until
no rule can be applied to the expression further. It requires the set of rewrite rules τ to be Church-Rosser, which means that τ should be terminating and locally confluent. This requirement restricts the application of rewriting in many fields. For instance, the chain rule in calculus D(f)/D(x) ⇒ D(f)/D(u) · D(u)/D(x), which is very important for computing derivatives, will result in non-termination:

D(Sin(X))/D(X) ⇒ D(Sin(X))/D(u1) · D(u1)/D(X)
               ⇒ D(Sin(X))/D(u2) · D(u2)/D(u1) · D(u1)/D(X)
               ⇒ D(Sin(X))/D(u3) · D(u3)/D(u2) · D(u2)/D(u1) · D(u1)/D(X)
               ⇒ ···   (5)

¹ We use the mathematical convention that a word is a constant if its first letter is in upper case, and it is a variable if its first letter is in lower case.
The above process means that it is challenging to use the
chain rule in standard rewriting. Similarly, a commutativity
rule x ◦ y ⇒ y ◦ x, where ◦ is an addition, a
multiplication, a logical conjunction, a logical disjunction or
another binary operation satisfying commutativity, is difficult
to use in standard rewriting. If termination is not
guaranteed, it will be difficult to check local confluence, as
local confluence requires a completely developed search tree,
but non-termination means that the search tree is infinite and
cannot be completely developed. More detailed discussion
about standard rewriting and the Church-Rosser property can be found
in [Bundy, 1983].
Human-like rewriting is adapted from standard rewriting.
It uses a deep feedforward neural network [Lecun et al.,
2015] to guide rewriting processes. The neural network has
learnt from some rewriting examples produced by humans,
so that it can, to some extent, simulate human’s ways of
using rewrite rules: Firstly, non-terminating rules are used
to rewrite expressions. Secondly, local confluence is not
checked. Lastly, experiences of rewriting can be learnt and
can guide future rewriting processes.
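To make the control flow concrete, the following Python sketch shows the shape of such a guided rewriting loop. The predict and apply_rule callables are hypothetical stand-ins for the trained network and the rewriting system described in the remainder of this section; the sketch only illustrates the loop structure, not the authors' implementation.

# Sketch of the guided rewriting loop: the trained network proposes a
# (rule, position) pair for the current expression, the pair is applied,
# and the process repeats. All helpers are supplied by the caller, since
# their concrete implementations are specific to the system.
def guided_rewrite(expr, predict, apply_rule, max_steps=50):
    """predict(expr) -> (rule, position) or None;
    apply_rule(expr, rule, position) -> new expression or None."""
    for _ in range(max_steps):
        proposal = predict(expr)
        if proposal is None:
            break
        rule, position = proposal
        next_expr = apply_rule(expr, rule, position)
        if next_expr is None:          # the proposed rule was not applicable
            break
        expr = next_expr
    return expr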
To train the feedforward neural network, input data and
target data are required. An input can be generated via the
following steps: Firstly, an expression is transformed to a
parsing tree [Huth and Ryan, 2004] with position annotations.
A position annotation is a unique label < p1 , p2 , · · · , pN >
indicating a position on a tree, where each pi is the order
of a branch. Then the tree is reduced to a set of partial
trees with a predefined maximum depth d. Next, the partial
trees are expanded to perfect k-ary trees with the depth d
and a predefined breadth k. In particular, empty positions
on the prefect k-ary trees are filled by Empty. After that,
the perfect k-ary trees are transformed to lists via in-order
traversal. Detailed discussions about perfect k-ary trees
and in-order traversal can be found from [Cormen et al.,
2001]. Finally, the lists with their position annotations are
transformed to a set of one-hot representations [Turian et
al., 2010]. In particular, Empty is transformed to a zero block. Figure 1 provides an example for the above procedure. This representation is called a reduced partial tree (RPT) representation of the expression. A target is the one-hot representation [Turian et al., 2010] of a rewrite rule name with a position annotation for applying the rule.

Figure 1: An expression 6 × Y = Y + Ln(3) is transformed to the RPT representation. The maximum depth and the breadth of a tree are defined as 2. "E" is an abbreviation of "Empty".

It is noticeable that the input of the neural network is a set of vectors, and the number of vectors is non-deterministic, as it depends on the structure of the expression. However, the target is a single vector. Thus, the dimension of the input will disagree with the dimension of the target if a conventional feedforward neural network structure is used. To solve this problem, we replace its Softmax layer with an averaged Softmax layer. Let x_{j,i} denote the ith element of the jth input vector, P the number of the input vectors, u an averaged input vector, u_i the ith element of u, W a weight matrix, b a bias vector, Softmax the standard Softmax function [Bishop, 2006], and y the output vector. The averaged Softmax layer is defined as:

u_i = (1/P) Σ_{j=1}^{P} x_{j,i}   (6)

y = Softmax(W · u + b)   (7)

It is noticeable that the output is a single vector regardless of the number of the input vectors.

The feedforward neural network is trained by using the back-propagation algorithm with the cross-entropy error function [Hecht-Nielsen, 1988; Bishop, 2006]. After training, the neural network can be used to guide a rewriting procedure: Given the RPT representation of an expression, the neural network uses forward computation to get an output vector, and the position of the maximum element indicates the name of a rewrite rule and a possible position for the application of the rule.
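To make Eqs. (6) and (7) concrete, here is a minimal NumPy sketch of the averaged Softmax layer. The variable names and shapes (X for the stacked input vectors, W and b for the affine output layer) are assumptions made for this illustration, not the authors' code.

import numpy as np

# Sketch of the averaged Softmax layer of Eqs. (6)-(7): the P input vectors
# are averaged element-wise, then passed through one affine layer and a
# standard Softmax, so the output is a single vector regardless of P.
def averaged_softmax(X, W, b):
    """X: (P, d) stacked input vectors; W: (m, d) weight matrix; b: (m,) bias."""
    u = X.mean(axis=0)            # u_i = (1/P) * sum_j x_{j,i}          (Eq. 6)
    z = W @ u + b
    z = z - z.max()               # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()            # y = Softmax(W u + b)                 (Eq. 7)

Averaging before the affine layer is what removes the dependence on the number of input vectors, which is the property the text emphasises.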
3 Algebraic Reasoning Schemes
The learning of the neural network is based on a set
of algebraic reasoning schemes. Generally, an algebraic
reasoning scheme consists of a question, an answer and
some intermediate reasoning steps. The question is an
expression indicating the starting point of reasoning. The
answer is an expression indicating the goal of reasoning.
Each intermediate reasoning step is a record consisting of:
• A source expression;
• The name of a rewrite rule;
• A position annotation for applying the rewrite rule;
• A target expression.
In particular, the source expression of the first reasoning
step is the question, and the target expression of the final
reasoning step is the answer. Also, for each reasoning step,
the target expression will be the source expression of the next
step if the “next step” exists. By applying all intermediate
reasoning steps, the question can be rewritten to the answer
deterministically.
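For illustration, the record structure described above can be written down as a small Python data model; the field names are assumptions made for this sketch rather than the authors' storage format.

from dataclasses import dataclass
from typing import List, Tuple

# Illustrative data structures for an algebraic reasoning scheme: a scheme
# is a question, an answer, and a sequence of intermediate rewriting steps.
@dataclass
class ReasoningStep:
    source: str                   # source expression of this step
    rule: str                     # name of the rewrite rule applied
    position: Tuple[int, ...]     # position annotation, e.g. (1, 2, 2)
    target: str                   # resulting (target) expression

@dataclass
class ReasoningScheme:
    question: str                 # starting expression (source of the first step)
    answer: str                   # goal expression (target of the final step)
    steps: List[ReasoningStep]    # question -> ... -> answer, applied in order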
In this research, algebraic reasoning schemes are
developed via a rewriting system in SWI-Prolog [Wielemaker
et al., 2012]. The rewriting system is based on Rule (4),
and it uses breadth-first search to find intermediate reasoning
steps from a question to an answer. Like most rewriting
systems and automated theorem proving systems2 , its ability
of reasoning is restricted by the problem of combinatorial
explosion: The number of possible ways of reasoning can
grow rapidly when the question becomes more complex
[Bundy, 1983]. Therefore, a full algebraic reasoning scheme
of a complex question is usually difficult to be generated
automatically, and guidance from humans is required. In
other words, if the system fails to develop the scheme, we
will apply rewrite rules manually until the remaining part
of the scheme can be developed automatically, or we will provide some subgoals for the system to reduce the search space. After algebraic reasoning schemes are developed, their intermediate reasoning steps are used to train the neural network: For each step, the RPT representation of the source expression is the input of the neural network, and the one-hot representation of the rewrite rule name and the position annotation is the target of the neural network, as discussed in Section 2.

² A practical example is the "by auto" function of Isabelle/HOL [Nipkow et al., 2002]. It is often difficult to prove a complex theorem automatically, so that experts' guidance is often required.
4 Methods for System Improvement

4.1 Centralised RPT Representation
The RPT representation discussed before is a top-down
representation of an expression: A functor in the expression
is a node, and arguments dominated by the functor are child
nodes or leaves of the node. However, it does not record
bottom-up information about the expression. For instance,
in Figure 1, the partial tree labelled < 1, 1 > does not record
any information about its parent node “=”.
A centralised RPT (C-RPT) representation can represent both top-down and bottom-up information of an expression: Firstly, every node on a tree considers itself as the centre of the tree and grows an additional branch to its parent node (if it exists), so that the tree becomes a directed graph. This step is called "centralisation". Then the graph is reduced to a set of partial trees and expanded to a set of perfect k-ary trees. In particular, each additional branch is defined as the kth branch of its parent node, and all empty positions dominated by the parent node are filled by Empty. Detailed discussions about perfect k-ary trees and directed graphs can be found in [Cormen et al., 2001]. Figure 2 provides an example of the above steps. Finally, these perfect k-ary trees are transformed to lists and further represented as a set of vectors, as discussed in Section 2.

Figure 2: The parsing tree of 6 × Y = Y + Ln(3) is centralised, reduced and expanded to a set of perfect k-ary trees. Dashed arrows denote additional branches generated by centralisation.

4.2 Symbolic Association Vector

Consider the following rewrite rule:

x × x ⇒ x²   (8)

The application of this rule requires that two arguments of "×" are the same. If this pattern exists in an expression, it will be a useful hint for selecting rules. In such a case, the use of a symbolic association vector (SAV) can provide useful information for the neural network: Assume that H is the list representation of a perfect k-ary tree (which has been discussed in Section 2) with a length L. S is defined as an L × L matrix which satisfies:

S_{i,j} = 1 if H_i = H_j and i ≠ j, and S_{i,j} = 0 otherwise.   (9)

After the matrix is produced, it can be reshaped to a vector and be a part of an input vector of the neural network.

4.3 Rule Application Record

Previous applications of rewrite rules can provide hints for current and future applications. In this research, we use rule application records (RAR) to record the previous applications of rewrite rules: Let Q_i denote the ith element of an RAR Q, rule_i the name of the previous ith rewrite rule, and pos_i the position annotation for applying the rule. Q_i is defined as:

Q_i ≡ ⟨rule_i, pos_i⟩   (10)

Usually, the RAR only records the last N applications of rewrite rules, where N is a predefined length of Q. To enable the neural network to read the RAR, it needs to be transformed to a one-hot representation [Turian et al., 2010]. A drawback of RARs is that they cannot be used in the first N steps of rewriting, as they record exactly N previous applications of rewrite rules.
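A minimal NumPy sketch of Eq. (9) may help; the list H of symbols and the final flattening step mirror the description in Section 4.2, but the function name and representation are illustrative assumptions, not the authors' code.

import numpy as np

# Sketch of Eq. (9): build the symbolic association matrix S for the list
# representation H of a perfect k-ary tree, then flatten it so that it can
# be appended to the network's input vector.
def symbolic_association_vector(H):
    """H: list of symbols, e.g. ['=', '×', '6', 'Y', '+', 'Y', 'Ln'].
    Returns a flat 0/1 vector of length len(H)**2."""
    L = len(H)
    S = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            if i != j and H[i] == H[j]:
                S[i, j] = 1.0      # S_ij = 1 iff H_i = H_j and i != j
    return S.reshape(-1)           # reshape the matrix to a vector (Section 4.2)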
5 Experiments

5.1 Datasets and Evaluation Metrics
A dataset of algebraic reasoning schemes is used to train and
test models. This dataset contains 400 schemes about linear
equations, differentials and integrals and 80 rewrite rules, and
these schemes consist of 6,067 intermediate reasoning steps
in total. We shuffle the intermediate steps and then divide
them into a training set and a test set randomly: The training
set contains 5,067 examples, and the test set contains 1,000
examples. After training a model with the training set, an
error rate of reasoning on the test set is used to evaluate the
model, and it can be computed by:
Error Rate = (NError / NTotal) × 100%   (11)
where NError is the number of cases when the model fails to
indicate an expected application of rewrite rules, and NTotal
is the number of examples in the test set.
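For completeness, Eq. (11) amounts to a one-line computation; the following helper is purely illustrative.

# Eq. (11): the percentage of test steps where the model fails to propose
# the expected (rule, position) application.
def error_rate(n_error, n_total):
    return 100.0 * n_error / n_total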
5.2 Using RPT Representations and Neural Networks

In this part, we evaluate the core method of human-like rewriting: All expressions in the dataset are represented by using the RPT representations. The breadth of an RPT is set to 2, because the expressions in the dataset are unary or binary. The depth of an RPT is set to 1, 2 or 3. Also, feedforward neural networks [Lecun et al., 2015] with 1, 3 and 5 hidden layers are used to learn from the dataset. The number of units in each hidden layer is set to 1,024, and their activation functions are rectified linear units (ReLU) [Glorot et al., 2011]. The output layer of each neural network is an averaged Softmax layer. The neural networks are trained via the back-propagation algorithm with the cross-entropy error function [Hecht-Nielsen, 1988; Bishop, 2006]. When training models, learning rates are decided by the Newbob+/Train strategy [Wiesler et al., 2014]: The initial learning rate is set to 0.01, and the learning rate is halved when the average improvement of the cross-entropy loss on the training set is smaller than 0.1. The training process stops when the improvement is smaller than 0.01.

Figure 3: Learning Curves of FNN + RPT Models. Nine curves are shown, one per FNNn + RPTm combination with n ∈ {1, 3, 5} and m ∈ {1, 2, 3}; the x-axis is the epoch (1–12) and the y-axis is the cross-entropy loss.

Figure 3 provides learning curves of the models, where "FNNn" means that the neural network has n hidden layers, and "RPTm" means that the depth of RPTs is m. To aid readability, the curves of "FNN1", "FNN3" and "FNN5" are in blue, red and green respectively, and the curves of "RPT1", "RPT2" and "RPT3" are displayed by using dotted lines, dashed lines and solid lines respectively. By comparing the curves with the same colour, it is noticeable that more hidden layers can bring about significantly better performance of learning. On the other hand, if the neural network only has a single hidden layer, the learning will stop early, while the cross-entropy loss is very high. Also, by comparing the curves with the same type of line, it is noticeable that a deeper RPT often brings about better performance of learning, but an exception is the curve of the "FNN5 + RPT2" model.

Table 1: Error Rates (%) of Reasoning on the Test Set.

         RPT, m=1   m=2    m=3
FNN1     80.4       79.9   76.1
FNN3     27.4       20.4   18.8
FNN5     16.5       16.6   12.9
Table 1 reveals performance of the trained models on the
test set. In this table, results in “FNNn” rows and “RPTm”
columns correspond to the “FNNn+RPTm” models in Figure
3. It is noticeable that the error rates of reasoning decrease
significantly when the numbers of hidden layers increase.
Also, the error rates of reasoning often decrease when the
depths of RPTs increase, but an exception occurs in the case
of “FNN5 + RPT2”. We believe that the reason why the
exception occurs is that the learning rate strategy results in
early stop of training. In addition, the error rate of the FNN5
+ RPT3 model is the best among all results.
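The learning-rate strategy used above can be sketched as follows; this is a simplified, assumption-level reading of the Newbob+/Train rule (it halves on the latest epoch's improvement rather than a windowed average), not the authors' implementation.

# Sketch of the Newbob+/Train-style schedule described in Section 5.2:
# start from 0.01, halve the rate once the improvement of the training
# cross-entropy falls below 0.1, and stop once it falls below 0.01.
def newbob_train_schedule(losses, lr, halve_below=0.1, stop_below=0.01):
    """losses: training losses per epoch so far. Returns (new_lr, stop_flag)."""
    if len(losses) < 2:
        return lr, False
    improvement = losses[-2] - losses[-1]
    if improvement < stop_below:
        return lr, True                    # stop training
    if improvement < halve_below:
        lr = lr / 2.0                      # halve the learning rate
    return lr, False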
5.3 Using Improvement Methods
In Section 5.2, we have found that the neural networks with 5
hidden layers have better performance than those with 1 or 3
hidden layers on the task of human-like rewriting. Based on
the neural networks with 5 hidden layers, we apply the three
improvement methods to these models.
Figure 4 shows learning curves of models improved by C-RPTs, SAVs and RARs. Also, learning curves of the baseline
RPTm models are displayed by using dashed lines, where m
is the depth of RPTs. Learning curves of the C-RPT models
are displayed by Figure 4(a). A comparison between two
lines in the same colour reveals that the C-RPT representation
can improve the model when m is fixed. Also, the C-RPT2
curve is very close to the RPT3 curve during the last 6
epochs, which reveals that there might be a trade-off between
using C-RPTs and increasing the depth of RPTs. The best
learning curve is the C-RPT3 curve, as its cross-entropy loss
is always the lowest during all epochs. Figure 4(b) provides
learning curves of the RPT models with the SAV method.
It is noticeable that SAVs have two effects: The first is that
they can bring about lower cross-entropy losses. The second
is that they can reduce the costs of learning time, as each
RPTm + SAV model uses fewer epochs to finish learning
than its counterpart. Figure 4(c) shows learning curves of
the RPT models with the RAR method. This figure reveals
that RARs always improve the models. In particular, even the
RPT1 + RAR model has better learning performance than the
RPT3 model. Also, the RPT1 + RAR model and the RPT3 + RAR model require fewer epochs of training, which means that RARs may reduce the time consumption of learning.
Figure 4(d) provides learning curves of the models with all
improvement methods.

Figure 4: Learning Curves of Models Improved by C-RPTs, SAVs and RARs. Panels: (a) C-RPT Models, (b) SAV Models, (c) RAR Models, (d) C-RPT + SAV + RAR Models; in each panel the x-axis is the epoch and the y-axis is the cross-entropy loss, with the corresponding baseline RPTm curves shown for comparison.

A glance at the figure reveals that
these models have better performance of learning than the
baseline models. Also, they require fewer epochs to be trained
than their counterparts. In addition, the final cross-entropy
loss of the C-RPT2 + SAV + RAR model is the lowest among
all results.
Table 2: Error Rates (%) After Improvement.

Model                    m=1    m=2    m=3
RPTm (Baseline)          16.5   16.6   12.9
RPTm + SAV               16.8   15.8   11.5
RPTm + RAR                6.7    6.0    5.4
RPTm + SAV + RAR          6.8    5.2    5.4
C-RPTm                   16.1   12.9   11.6
C-RPTm + SAV             11.9   15.1   11.8
C-RPTm + RAR              6.3    5.1    5.1
C-RPTm + SAV + RAR        5.4    4.6    5.3
Table 2 shows error rates of reasoning on the test set after using the improvement methods. It is noticeable that: Firstly, the C-RPTm models have lower error rates than the baseline RPTm models, especially when m = 2. Secondly, the RPTm + SAV models have lower error rates than the baseline RPTm model when m is 2 or 3, but this is not the case for the RPT1 + SAV model. Thirdly, the RARs can reduce the error rates significantly. Finally, the error rates can be reduced further when the three improvement methods are used together. In particular, the C-RPT2 + SAV + RAR model reaches the best error rate (4.6%) among all models.

6 Conclusions and Future Work

Deep feedforward neural networks are able to guide rewriting processes after learning from algebraic reasoning schemes. The use of deep structures is necessary, because the behaviours of rewriting can be accurately modelled only if the neural networks have enough hidden layers. Also, it has been shown that the RPT representation is effective for the neural networks to model algebraic expressions, and it can be improved by using the C-RPT representation, the SAV method and the RAR method. Based on these techniques, human-like rewriting can solve many problems about linear equations, differentials and integrals. In the future, we will try to use human-like rewriting to deal with more complex tasks of mathematical reasoning and extend it to more general first-order logics and higher-order logics.

Acknowledgments

This work is supported by the Fundamental Research Funds for the Central Universities (No. 2016JX06) and the National Natural Science Foundation of China (No. 61472369).
References
[Arts and Giesl, 2000] Thomas Arts and Jürgen Giesl.
Termination of term rewriting using dependency pairs.
Theor. Comput. Sci., 236(1-2):133–178, 2000.
[Bishop, 2006] Christopher M Bishop. Pattern Recognition
and Machine Learning (Information Science and
Statistics). Springer-Verlag New York, Inc., 2006.
[Bundy and Welham, 1981] Alan Bundy and Bob Welham.
Using meta-level inference for selective application of
multiple rewrite rule sets in algebraic manipulation. Artif.
Intell., 16(2):189–212, 1981.
[Bundy, 1983] Alan Bundy. The computer modelling of
mathematical reasoning. Academic Press, 1983.
[Chang and Lee, 1973] Chin-Liang Chang and Richard C. T.
Lee. Symbolic logic and mechanical theorem proving.
Computer science classics. Academic Press, 1973.
[Cormen et al., 2001] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to algorithms (second edition), pages 1297–1305, 2001.
[Garcez et al., 2015] Artur d'Avila Garcez, Tarek R. Besold, Luc De Raedt, Peter Földiák, Pascal Hitzler, Thomas Icard, Kai-Uwe Kühnberger, Luis C. Lamb, Risto Miikkulainen,
and Daniel L Silver. Neural-symbolic learning and
reasoning: Contributions and challenges. In AAAI Spring
Symposium - Knowledge Representation and Reasoning:
Integrating Symbolic and Neural Approaches, 2015.
[Garnelo et al., 2016] Marta Garnelo, Kai Arulkumaran, and
Murray Shanahan. Towards deep symbolic reinforcement
learning. CoRR, abs/1609.05518, 2016.
[Glorot et al., 2011] Xavier Glorot, Antoine Bordes, and
Yoshua Bengio. Deep sparse rectifier neural networks. In
Proceedings of the Fourteenth International Conference
on Artificial Intelligence and Statistics, AISTATS 2011,
Fort Lauderdale, USA, April 11-13, 2011, pages 315–323,
2011.
[Hecht-Nielsen, 1988] Robert Hecht-Nielsen. Theory of
the backpropagation neural network. Neural Networks,
1(Supplement-1):445–448, 1988.
[Huet, 1980] Gérard Huet. Confluent reductions: Abstract properties and applications to term rewriting systems. Journal of the ACM, 27(4):797–821, 1980.
[Huet, 1981] Gérard P. Huet.
A complete proof of
correctness of the knuth-bendix completion algorithm. J.
Comput. Syst. Sci., 23(1):11–21, 1981.
[Huth and Ryan, 2004] Michael Huth and Mark Dermot
Ryan.
Logic in computer science - modelling and
reasoning about systems (2. ed.). Cambridge University
Press, 2004.
[Irving et al., 2016] Geoffrey Irving, Christian Szegedy,
Alexander A. Alemi, Niklas Eén, François Chollet, and
Josef Urban. Deepmath - deep sequence models for
premise selection. In Advances in Neural Information
Processing Systems 29: Annual Conference on Neural
Information Processing Systems 2016, December 5-10,
2016, Barcelona, Spain, pages 2235–2243, 2016.
[Knuth and Bendix, 1983] Donald E. Knuth and Peter B.
Bendix. Simple word problems in universal algebras.
Computational Problems in Abstract Algebra, pages 263–
297, 1983.
[Lecun et al., 2015] Yann Lecun, Yoshua Bengio, and
Geoffrey Hinton. Deep learning. Nature, 521(7553):436–
444, 2015.
[Mnih et al., 2015] Volodymyr Mnih, Koray Kavukcuoglu,
David Silver, Andrei A Rusu, Joel Veness, Marc G
Bellemare, Alex Graves, Martin Riedmiller, Andreas K
Fidjeland, Georg Ostrovski, et al.
Human-level
control through deep reinforcement learning. Nature,
518(7540):529–533, 2015.
[Mohamed et al., 2012] Abdel-rahman
Mohamed,
George E. Dahl, and Geoffrey E. Hinton. Acoustic
modeling using deep belief networks. IEEE Trans. Audio,
Speech & Language Processing, 20(1):14–22, 2012.
[Nipkow et al., 2002] Tobias Nipkow, Lawrence C. Paulson,
and Markus Wenzel. Isabelle/HOL - A Proof Assistant
for Higher-Order Logic, volume 2283 of Lecture Notes in
Computer Science. Springer, 2002.
[Pillay, 1981] Anand Pillay. Models of peano arithmetic.
Journal of Symbolic Logic, 67(3):1265–1273, 1981.
[Rosen, 1973] Barry K. Rosen. Tree-manipulating systems
and church-rosser theorems. In Acm Symposium on Theory
of Computing, pages 117–127, 1973.
[Sarikaya et al., 2014] Ruhi Sarikaya, Geoffrey E. Hinton,
and Anoop Deoras. Application of deep belief networks
for natural language understanding. IEEE/ACM Trans.
Audio, Speech & Language Processing, 22(4):778–784,
2014.
[Serafini and d’Avila Garcez, 2016] Luciano Serafini and
Artur S. d’Avila Garcez. Logic tensor networks: Deep
learning and logical reasoning from data and knowledge.
CoRR, abs/1606.04422, 2016.
[Silver et al., 2016] David Silver, Aja Huang, Chris J.
Maddison, Arthur Guez, Laurent Sifre, George van den
Driessche, Julian Schrittwieser, Ioannis Antonoglou,
Veda Panneershelvam, Marc Lanctot, Sander Dieleman,
Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya
Sutskever, Timothy Lillicrap, Madeleine Leach, Koray
Kavukcuoglu, Thore Graepel, and Demis Hassabis.
Mastering the game of Go with deep neural networks and
tree search. Nature, 529(7587):484–489, January 2016.
[Sun et al., 2015] Yi Sun, Ding Liang, Xiaogang Wang, and
Xiaoou Tang. Deepid3: Face recognition with very deep
neural networks. CoRR, abs/1502.00873, 2015.
[Toyama, 1987] Yoshihito Toyama. On the church-rosser
property for the direct sum of term rewriting systems. J.
ACM, 34(1):128–143, 1987.
[Turian et al., 2010] Joseph P. Turian, Lev-Arie Ratinov, and
Yoshua Bengio. Word representations: A simple and
general method for semi-supervised learning. In ACL
2010, Proceedings of the 48th Annual Meeting of the
Association for Computational Linguistics, July 11-16,
2010, Uppsala, Sweden, pages 384–394, 2010.
[Wielemaker et al., 2012] Jan Wielemaker, Tom Schrijvers,
Markus Triska, and Torbjörn Lager. Swi-prolog. TPLP,
12(1-2):67–96, 2012.
[Wiesler et al., 2014] Simon Wiesler, Alexander Richard,
Ralf Schlüter, and Hermann Ney. Mean-normalized
stochastic gradient for large-scale deep learning. In IEEE
International Conference on Acoustics, Speech and Signal
Processing, ICASSP 2014, Florence, Italy, May 4-9, 2014,
pages 180–184, 2014.
[Zantema, 1995] Hans Zantema.
Termination of term
rewriting by semantic labelling.
Fundam. Inform.,
24(1/2):89–105, 1995.
arXiv:1603.08578v2 [math.ST] 21 Jul 2016
Analysis of k-Nearest Neighbor Distances
with Application to Entropy Estimation
Shashank Singh (sss1@andrew.cmu.edu)
Barnabás Póczos (bapoczos@cs.cmu.edu)
Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213 USA
Abstract
Estimating entropy and mutual information consistently is important for many machine learning
applications. The Kozachenko-Leonenko (KL)
estimator (Kozachenko & Leonenko, 1987) is a
widely used nonparametric estimator for the entropy of multivariate continuous random variables, as well as the basis of the mutual information estimator of Kraskov et al. (2004), perhaps the most widely used estimator of mutual
information in this setting. Despite the practical
importance of these estimators, major theoretical questions regarding their finite-sample behavior remain open. This paper proves finite-sample
bounds on the bias and variance of the KL estimator, showing that it achieves the minimax convergence rate for certain classes of smooth functions. In proving these bounds, we analyze finitesample behavior of k-nearest neighbors (k-NN)
distance statistics (on which the KL estimator is
based). We derive concentration inequalities for
k-NN distances and a general expectation bound
for statistics of k-NN distances, which may be
useful for other analyses of k-NN methods.
1. Introduction
Estimating entropy and mutual information in a consistent manner is of importance in a number of problems
in machine learning. For example, entropy estimators
have applications in goodness-of-fit testing (Goria et al.,
2005), parameter estimation in semi-parametric models (Wolsztynski et al., 2005), studying fractal random
walks (Alemany & Zanette, 1994), and texture classification (Hero et al., 2002a;b). Mutual information estimators have applications in feature selection (Peng & Dind,
2005), clustering (Aghagolzadeh et al., 2007), causality
detection (Hlaváckova-Schindler et al., 2007), optimal experimental design (Lewi et al., 2007; Póczos & Lőrincz,
2009), fMRI data processing (Chai et al., 2009), prediction
of protein structures (Adami, 2004), and boosting and facial expression recognition (Shan et al., 2005). Both entropy estimators and mutual information estimators have
been used for independent component and subspace analysis (Learned-Miller & Fisher, 2003; Szabó et al., 2007;
Póczos & Lőrincz, 2005; Hulle, 2008), as well as for image registration (Kybic, 2006; Hero et al., 2002a;b). For
further applications, see (Leonenko et al., 2008).
In this paper, we focus on the problem of estimating the
Shannon entropy of a continuous random variable given
samples from its distribution. All of our results extend to
the estimation of mutual information, since the latter can
be written as a sum of entropies. 1 In our setting, we assume we are given n IID samples from an unknown probability measure P . Under nonparametric assumptions (on
the smoothness and tail behavior of P ), our task is then to
estimate the differential Shannon entropy of P .
Estimators of entropy and mutual information come in
many forms (as reviewed in Section 2), but one common
approach is based on statistics of k-nearest neighbor (kNN) distances (i.e., the distance from a sample to its k th
nearest neighbor amongst the samples, in some metric on
the space). These nearest-neighbor estimates are largely
based on initial work by Kozachenko & Leonenko (1987),
who proposed an estimate for differential Shannon entropy
and showed its weak consistency. Henceforth, we refer to
this historic estimator as the ‘KL estimator’, after its discoverers. Although there has been much work on the problem of entropy estimation in the nearly three decades since
the KL estimator was proposed, there are still major open
questions about the finite-sample behavior of the KL estimator. The goal of this paper is to address some of these
questions in the form of finite-sample bounds on the bias
and variance of the estimator.

¹ Specifically, for random variables X and Y, I(X; Y) = H(X) + H(Y) − H(X, Y).

Specifically, our main contributions are the following:

1. We derive O((k/n)^{β/D}) bounds on the bias of the KL estimate, where β is a measure of the smoothness (i.e., Hölder continuity) of the sampling density, D is the intrinsic dimension of the support of the distribution, and n is the sample size.

2. We derive O(n^{−1}) bounds on the variance of the KL estimator.

3. We derive concentration inequalities for k-NN distances, as well as general bounds on expectations of k-NN distance statistics, with important special cases:

(a) We bound the moments of k-NN distances, which play a role in analysis of many applications of k-NN methods, including both the bias and variance of the KL estimator. In particular, we significantly relax strong assumptions underlying previous results by Evans et al. (2002), such as compact support and smoothness of the sampling density. Our results are also the first which apply to negative moments (i.e., E[X^α] with α < 0); these are important for bounding the variance of the KL estimator.

(b) We give upper and lower bounds on the logarithms of k-NN distances. These are important for bounding the variance of the KL estimator, as well as k-NN estimators for divergences and mutual informations.

We present our results in the general setting of a set equipped with a metric, a base measure, a probability density, and an appropriate definition of dimension. This setting subsumes Euclidean spaces, in which k-NN methods have traditionally been analyzed,² but also includes, for instance, Riemannian manifolds, and perhaps other spaces of interest. We also strive to weaken some of the restrictive assumptions, such as compact support and boundedness of the density, on which most related work depends.

² A recent exception, in the context of classification, is Chaudhuri & Dasgupta (2014), which considers general metric spaces.

We anticipate that some of the tools developed here may be used to derive error bounds for k-NN estimators of mutual information, divergences (Wang et al., 2009), their generalizations (e.g., Rényi and Tsallis quantities (Leonenko et al., 2008)), norms, and other functionals of probability densities. We leave such bounds to future work.

Organization
Section 2 discusses related work. Section 3 gives theoretical context and assumptions underlying our work. In Section 4, we prove concentration bounds for k-NN distances, and we use these in Section 5 to derive bounds on the expectations of k-NN distance statistics. Section 6 describes the KL estimator, for which we prove bounds on the bias and variance in Sections 7 and 8, respectively.

2. Related Work
Here, we review previous work on the analysis of k-nearest neighbor statistics and their role in estimating information theoretic functionals, as well as other approaches to estimating information theoretic functionals.

2.1. The Kozachenko-Leonenko Estimator of Entropy
In general contexts, only weak consistency of the KL
estimator is known (Kozachenko & Leonenko, 1987).
Biau & Devroye (2015) recently reviewed finite-sample results known for the KL estimator. They show (Theorem
7.1) that, if the density p has compact support, then the
variance of the KL estimator decays as O(n−1 ). They
also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n−β ), under the assumptions that p is βHölder continuous (β ∈ (0, 1]), bounded away from 0, and
supported on the interval [0, 1]. However, in their proof
Biau & Devroye (2015) neglect the additional bias incurred
at the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact,
because the KL estimator does not attempt to correct for
boundary bias, for densities bounded away from 0, the estimator may suffer bias worse than O(n−β ).
The KL estimator is also important for its role in the mutual
information estimator proposed by Kraskov et al. (2004),
which we refer to as the KSG estimator. The KSG estimator expands the mutual information as a sum of entropies, which it estimates via the KL estimator with a particular random (i.e., data-dependent) choice of the nearestneighbor parameter k. The KSG estimator is perhaps the
most widely used estimator for the mutual information between continuous random variables, despite the fact that it
currently appears to have no theoretical guarantees, even
asymptotically. In fact, one of the few theoretical results,
due to Gao et al. (2015b), concerning the KSG estimator
is a negative result: when estimating the mutual information between strongly dependent variables, the KSG estimator tends to systematically underestimate mutual information, due to increased boundary bias. 3 Nevertheless, the
widespread use of the KSG estimator motivates study of its behavior. We hope that our analysis of the KL estimator, in terms of which the KSG estimator can be written, will lead to a better understanding of the latter.

³ To alleviate this, Gao et al. (2015b) provide a heuristic correction based on using local PCA to estimate the support of the distribution. Gao et al. (2015a) provide and prove asymptotic unbiasedness of another estimator, based on local Gaussian density estimation, that directly adapts to the boundary.
2.2. Analysis of nearest-neighbor distance statistics
Evans (2008) derives a law of large numbers for k-NN
statistics with uniformly bounded (central) kurtosis as the
sample size n → ∞. Although it is not obvious that
the kurtosis of log-k-NN distances is uniformly bounded
(indeed, each log-k-NN distance approaches −∞ almost
surely), we show in Section 8 that this is indeed the case,
and we apply the results of Evans (2008) to bound the variance of the KL estimator.
Evans et al. (2002) derives asymptotic limits and convergence rates for moments of k-NN distances, for sampling
densities with bounded derivatives and compact domain.
In contrast, we use weaker assumptions to simply prove
bounds on the moments of k-NN distances. Importantly,
whereas the results of Evans et al. (2002) apply only to
non-negative moments (i.e., E [|X|α ] with α ≥ 0), our results also hold for certain negative moments, which is crucial for our bounds on the variance of the KL estimator.
2.3. Other Approaches to Estimating Information
Theoretic Functionals
Analysis of convergence rates: For densities over RD
satisfying a Hölder smoothness condition parametrized
by β ∈ (0, ∞), the minimax rate for estimating entropy has been known since Birge & Massart (1995) to be O(n^{−min{8β/(4β+D), 1}}) in mean squared error, where n is the sample size.
Quite recently, there has been much work on analyzing new
estimators for entropy, mutual information, divergences,
and other functionals of densities. Most of this work
has been along one of three approaches. One series of
papers (Liu et al., 2012; Singh & Poczos, 2014b;a) studied boundary-corrected plug-in approach based on undersmoothed kernel density estimation. This approach has
strong finite sample guarantees, but requires prior knowledge of the support of the density and can necessitate computationally demanding numerical integration. A second
approach (Krishnamurthy et al., 2014; Kandasamy et al.,
2015) uses von Mises expansion to correct the bias of
optimally smoothed density estimates. This approach
shares the difficulties of the previous approach, but is
statistically more efficient. Finally, a long line of work
(Pérez-Cruz, 2008; Pál et al., 2010; Sricharan et al., 2012;
Sricharan et al., 2010; Moon & Hero, 2014) has studied entropy estimation based on continuum limits of certain properties of graphs (including k-NN graphs, spanning trees,
and other sample-based graphs).
Most of these estimators achieve rates of O(n^{−min{2β/(β+D), 1}}) or O(n^{−min{4β/(2β+D), 1}}). Only the von Mises approach of Krishnamurthy et al. (2014) is known to achieve the minimax rate for general β and D, but due to its high computational demand (O(2^D n^3)), the authors suggest the use of other statistically less efficient estimators for moderately sized datasets. In this paper, we prove that, for β ∈ (0, 2], the KL estimator converges at the rate O(n^{−min{4β/(2β+D), 1}}). It is also worth noting the relative computational efficiency of the KL estimator (O(D n^2), or O(2^D n log n) using k-d trees for small D).
Boundedness of the density: For all of the above approaches, theoretical finite-sample results known so far assume that the sampling density is lower and upper bounded
by positive constants. This also excludes most distributions
with unbounded support, and hence, many distributions of
practical relevance. A distinctive feature of our results is
that they hold for a variety of densities that approach 0 and
∞ on their domain, which may be unbounded. Our bias
bounds apply, for example, to densities that decay exponentially, such as Gaussian distributions. To our knowledge, the only previous results that apply to unbounded
densities are those of Tsybakov & van der Meulen (1996), who show √n-consistency of a truncated modification of
the KL estimate for a class of functions with exponentially
decaying tails. In fact, components of our analysis are inspired by Tsybakov & van der Meulen (1996), and some of
our assumptions are closely related. Their analysis only applies to the case β = 2 and D = 1, for which our results also imply √n-consistency, so our results can be seen in
some respects as a generalization of this work.
3. Setup and Assumptions
While most prior work on k-NN estimators has been restricted to RD , we present our results in a more general
setting. This includes, for example, Riemannian manifolds
embedded in higher dimensional spaces, in which case we
note that our results depend on the intrinsic, rather than extrinsic, dimension. Such data can be better behaved in their
native space than when embedded in a lower dimensional
Euclidean space (e.g., working directly on the unit circle
avoids boundary bias caused by mapping data to the interval [0, 2π]).
Definition 1. (Metric Measure Space): A quadruple
(X, d, Σ, µ) is called a metric measure space if X is a set,
d : X × X → [0, ∞) is a metric on X, Σ is a σ-algebra
on X containing the Borel σ-algebra induced by d, and
µ : Σ → [0, ∞] is a σ-finite measure on the measurable
space (X, Σ).
Definition 2. (Dimension): A metric measure space
(X, d, Σ, µ) is said to have dimension D ∈ [0, ∞) if there
exist constants cD , ρ > 0 such that, ∀r ∈ [0, ρ], x ∈ X ,
µ(B(x, r)) = cD rD . 4
Definition 3. (Full Dimension): Given a metric measure
space (X, d, Σ, µ) of dimension D, a measure P on (X, Σ)
is said to have full dimension on a set X ⊆ X if there exist
functions γ∗ , γ ∗ : X → (0, ∞) such that, for all r ∈ [0, ρ]
and µ-almost all x ∈ X ,
γ∗ (x)rD ≤ P (B(x, r)) ≤ γ ∗ (x)rD .
Remark 4. If X = RD , d is the Euclidean metric, and µ
is the Lebesgue measure, then the dimension of the metric
measure space is D. However, if X is a lower dimensional
subspace of RD , then the dimension may be less than D.
For example, if X = SD−1 := {x ∈ RD : kxk2 = 1}),
d is the geodesic distance on SD−1 , and µ is the (D − 1)dimensional surface measure, then the dimension is D − 1.
Remark 5. In previous work on k-NN statistics
(Evans et al., 2002; Biau & Devroye, 2015) and estimation of information theoretic functionals (Sricharan et al.,
2010; Krishnamurthy et al., 2014; Singh & Poczos, 2014b;
Moon & Hero, 2014), it has been common to make the assumption that the sampling distribution has full dimension
with constant γ∗ and γ ∗ (or, equivalently, that the density
is lower and upper bounded by positive constants). This
excludes distributions with densities approaching 0 or ∞
on their domain, and hence also densities with unbounded
support. By letting γ∗ and γ ∗ be functions, our results extend to unbounded densities that instead satisfy certain tail
bounds.
In order to ensure that entropy is well defined, we assume
that P is a probability measure absolutely continuous with
respect to µ, and that its probability density function p :
X → [0, ∞) satisfies 5
H(p) := − E_{X∼P}[log p(X)] = − ∫_X p(x) log p(x) dµ(x) ∈ R.   (1)
Finally, we assume we have n + 1 samples X, X1 , ..., Xn
drawn IID from P . We would like to use these samples to
estimate the entropy H(p) as defined in Equation (1).
Our analysis and methods relate to the k-nearest neighbor distance εk (x), defined for any x ∈ X by εk (x) =
d(x, X_i), where X_i is the k-th nearest neighbor of x in the set {X_1, ..., X_n}. Note that, since the definition of dimension used precludes the existence of atoms (i.e., for all x ∈ X, p(x) = µ({x}) = 0), ε_k(x) > 0 µ-almost everywhere. This is important, since we will study log ε_k(x).

⁴ Here and in what follows, B(x, r) := {y ∈ X : d(x, y) < r} denotes the open ball of radius r centered at x.
⁵ See (Baccetti & Visser, 2013) for discussion of sufficient conditions for H(p) < ∞.
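For readers who prefer code, a brute-force NumPy sketch of ε_k is given below. It is an illustration of the definition only (an O(n²) Euclidean computation with names chosen for this sketch), not part of the paper's analysis.

import numpy as np

# Sketch: compute the k-NN distance eps_k(X_i) for every sample, i.e. the
# distance from X_i to its k-th nearest neighbour among the other samples.
def knn_distances(X, k):
    """X: (n, d) array of samples. Returns an (n,) array of eps_k values."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))     # pairwise Euclidean distances
    np.fill_diagonal(dists, np.inf)           # exclude each point itself
    return np.sort(dists, axis=1)[:, k - 1]   # k-th smallest distance per row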
Initially (i.e., in Sections 4 and 5), we will study log εk (x)
with fixed x ∈ X , for which we will derive bounds in terms
of γ∗ (x) and γ ∗ (x). When we apply these results to analyze the KL estimator in Section 7 and 8, we will need to
take expectations such as E [log εk (X)] (for which we reserve the extra sample X), leading to ‘tail bounds’ on p in
terms of the functions γ∗ and γ ∗ .
4. Concentration of k-NN Distances
We begin with a consequence of the multiplicative Chernoff bound, asserting a sort of concentration of the distance of any point in X from its k th -nearest neighbor in
{X1 , . . . , Xn }. Since the results of this section are concerned with fixed x ∈ X , for notational simplicity, we suppress the dependence of γ∗ and γ ∗ on x.
Lemma 6. Let (X, d, Σ, µ) be a metric measure space of dimension D. Suppose P is an absolutely continuous probability measure with full dimension on X ⊆ X and density function p : X → [0, ∞). For x ∈ X, if r ∈ [(k/(γ_* n))^{1/D}, ρ], then

P[ε_k(x) > r] ≤ e^{−γ_* r^D n} (e γ_* r^D n / k)^k,

and, if r ∈ (0, min{(k/(γ_* n))^{1/D}, ρ}], then

P[ε_k(x) ≤ r] ≤ (e γ^* r^D n / k)^{k γ_*/γ^*}.
5. Bounds on Expectations of KNN Statistics
Here, we use the concentration bounds of Section 4 to
bound expectations of functions of k-nearest neighbor distances. Specifically, we give a simple formula for deriving
bounds that applies to many functions of interest, including
logarithms and (positive and negative) moments. As in the
previous section, the results apply to a fixed x ∈ X , and we
continue to suppress the dependence of γ∗ and γ ∗ on x.
Theorem 7. Let (X, d, Σ, µ) be a metric measure space of dimension D. Suppose P is an absolutely continuous probability measure with full dimension and density function p : X → [0, ∞) that satisfies the tail condition⁶

E_{X∼P}[ ∫_ρ^∞ (1 − P(B(X, f^{−1}(r))))^n dr ] ≤ C_T / n   (2)

for some constant C_T > 0. Suppose f : (0, ∞) → R is continuously differentiable, with f′ > 0. Fix x ∈ X. Then, we have the upper bound

E[f_+(ε_k(x))] ≤ f_+( (k/(γ_* n))^{1/D} ) + C_T/n + ( (e/k)^k / (D (nγ_*)^{1/D}) ) ∫_k^∞ e^{−y} y^{k + 1/D − 1} f′( (y/(nγ_*))^{1/D} ) dy   (3)

and the lower bound

E[f_−(ε_k(x))] ≤ f_−( (k/(γ_* n))^{1/D} ) + C_T/n + ( e n γ^*/k )^{k γ_*/γ^*} ∫_0^{(k/(γ^* n))^{1/D}} y^{D k γ_*/γ^*} f′(y) dy   (4)

(f_+(x) = max{0, f(x)} and f_−(x) = −min{0, f(x)} denote the positive and negative parts of f, respectively).

⁶ Since f need not be surjective, we use the generalized inverse f^{−1} : R → [0, ∞] defined by f^{−1}(ε) := inf{x ∈ (0, ∞) : f(x) ≥ ε}.

Remark 8. If f : (0, ∞) → R is continuously differentiable with f′ < 0, we can apply Theorem 7 to −f. Also, similar techniques can be used to prove analogous lower bounds (i.e., lower bounds on the positive part and upper bounds on the negative part).

Remark 9. The tail condition (2) is difficult to validate directly for many distributions. Clearly, it is satisfied when the support of p is bounded. However, (Tsybakov & van der Meulen, 1996) show that, for the functions f we are interested in (i.e., logarithms and power functions), when X = R^D, d is the Euclidean metric, and µ is the Lebesgue measure, (2) is also satisfied by upper-bounded densities with exponentially decreasing tails. More precisely, that is when there exist a, b, α, δ > 0 and β > 1 such that, whenever ‖x‖₂ > δ,

a e^{−α‖x‖^β} ≤ p(x) ≤ b e^{−α‖x‖^β},

which permits, for example, Gaussian distributions. It should be noted that the constant C_T depends only on the metric measure space, the distribution P, and the function f, and, in particular, not on k.

5.1. Applications of Theorem 7

We can apply Theorem 7 to several functions f of interest. Here, we demonstrate the cases f(x) = log x and f(x) = x^α for certain α, as we will use these bounds when analyzing the KL estimator. When f(x) = log(x), (3) gives

E[log_+(ε_k(x))] ≤ (1/D) log_+( k/(γ_* n) ) + e^k Γ(k, k)/(D k^k) ≤ (1/D)( 1 + log_+( k/(γ_* n) ) )   (5)

(where Γ(s, x) := ∫_x^∞ t^{s−1} e^{−t} dt denotes the upper incomplete Gamma function, and we used the bound Γ(s, x) ≤ x^{s−1} e^{−x}), and (4) gives

E[log_−(ε_k(x))] ≤ (1/D) log_−( k/(γ_* n) ) + C_1,   (6)

for C_1 = γ^* e^{kγ_*/γ^*}/(D k γ_*). For α > 0, f(x) = x^α, (3) gives

E[ε_k^α(x)] ≤ ( k/(γ_* n) )^{α/D} + ( e^k α Γ(k + α/D, k) )/( D k^k (nγ_*)^{α/D} ) ≤ C_2 ( k/(γ_* n) )^{α/D},   (7)

where C_2 = 1 + 2α/D. For any α ∈ [−Dkγ_*/γ^*, 0], when f(x) = −x^α, (4) gives

E[ε_k^α(x)] ≤ C_3 ( k/(γ_* n) )^{α/D},   (8)

where C_3 = 1 + αγ^* e^{kγ_*/γ^*}/(D k γ_* + αγ^*).

6. The KL Estimator for Entropy

Recall that, for a random variable X sampled from a probability density p with respect to a base measure µ, the Shannon entropy is defined as

H(X) = − ∫_X p(x) log p(x) dx.

As discussed in Section 1, many applications call for an estimate of H(X) given n IID samples X_1, . . . , X_n ∼ p. For a positive integer k, the KL estimator is typically written as

Ĥ_k(X) = ψ(n) − ψ(k) + log c_D + (D/n) Σ_{i=1}^n log ε_k(X_i),

where ψ : N → R denotes the digamma function.
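As a concrete illustration of the estimator just written (not the authors' implementation), the following Python sketch computes Ĥ_k for Euclidean data with the Lebesgue base measure, so that c_D is the volume of the unit ball; the brute-force distance computation and the choice k = 3 are assumptions made for this sketch.

import numpy as np
from scipy.special import digamma, gamma

# Sketch of the KL entropy estimate above for data in R^D (Euclidean metric,
# Lebesgue base measure, c_D = unit-ball volume).
def kl_entropy_estimate(X, k=3):
    n, D = X.shape
    c_D = np.pi ** (D / 2) / gamma(D / 2 + 1)       # volume of the unit ball in R^D
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    eps_k = np.sort(dists, axis=1)[:, k - 1]        # k-NN distance of each sample
    return digamma(n) - digamma(k) + np.log(c_D) + D * np.mean(np.log(eps_k))

# Example: for X = np.random.randn(2000, 2) the true entropy is log(2*pi*e) ≈ 2.84.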
Analysis of k-Nearest Neighbor Statistics with Application to Entropy Estimation
Hence,
h
i
E Ĥk (X)
"
= E − log P (B(Xi , εk (Xi ))) + log cD +
Definition 11. Given a constant β > 0 and an open set
X ⊆ RD , a function f : X → R is called β-Hölder continuous if f is ℓ times differentiable and there exists L > 0
#such that, for any multi-index α ∈ ND with |α| < β,
n
DX
log εk (Xi )
n i=1
n
1X
P (B(xi , εk (Xi )))
= −E
log
n i=1
cD ε D
k (Xi )
#
" n
1X
log pεk (i) (Xi ) = − E log pεk (X1 ) (X1 ) ,
= −E
n i=1
"
#
where, for any x ∈ X , ε > 0,
Z
1
P (B(x, ε))
pε (x) =
p(y) dµ(y) =
D
cD ε
cD ε D
B(x,ε)
sup
x6=y∈X
where ℓ := ⌊β⌋ is the greatest integer strictly less than β.
Definition 12. Given an open set X ⊆ RD and a function
f : X → R, f is said to vanish on the boundary ∂X of X if,
′
for any sequence {xi }∞
i=1 in X with inf x′ ∈∂X kx − x k2 →
0 as i → ∞, f (x) → 0 as i → ∞. Here,
∂X := {x ∈ RD : ∀δ > 0, B(x, δ) 6⊆ X and B(x, δ) 6⊆ X c },
denotes the local average of p in a ball of radius ε around x.
Since pε is a smoothed approximation of p (with smoothness increasing with ε), the KL estimate can be intuitively
thought of as a plug-in estimator for H(X), using a density
estimate with an adaptive smoothing parameter.
In the next two sections, we utilize the bounds derived in Section 5 to bound the bias and variance of the KL estimator. We note that, for densities in the β-Hölder smoothness class (β ∈ (0, 2]), our results imply a mean-squared error of O(n^{−2β/D}) when β < D/2 and O(n^{−1}) when β ≥ D/2.

7. Bias Bound

In this section, we prove bounds on the bias of the KL estimator, first in a relatively general setting, and then, as a corollary, in a more specific but better understood setting.

Theorem 10. Suppose (X, d, Σ, µ) and P satisfy the conditions of Theorem 7, and there exist C_β, β ∈ (0, ∞) with

\[
\sup_{x\in\mathcal X} |p(x) - p_\varepsilon(x)| \le C_\beta\,\varepsilon^{\beta},
\tag{9}
\]

and suppose p satisfies a `tail bound'

\[
\Gamma_B := \mathbb{E}_{X\sim P}\Big[(\gamma_*(X))^{-\frac{\beta+D}{D}}\Big] < \infty.
\]

Then,

\[
\Big|\,\mathbb{E}\big[\hat H_k(X)\big] - H(X)\,\Big| \le C_B\Big(\frac{k}{n}\Big)^{\beta/D},
\]

where C_B = (1 + c_D) C_2 C_β Γ_B.

We now show that the conditions of Theorem 10 are satisfied by densities in the commonly used nonparametric class of β-Hölder continuous densities on R^D.

Definition 11. Given a constant β > 0 and an open set X ⊆ R^D, a function f : X → R is called β-Hölder continuous if f is ℓ times differentiable and there exists L > 0 such that, for any multi-index α ∈ N^D with |α| < β,

\[
\sup_{x\neq y\in\mathcal X} \frac{|D^{\alpha}f(x) - D^{\alpha}f(y)|}{\|x-y\|^{\beta-\ell}} \le L,
\]

where ℓ := ⌊β⌋ is the greatest integer strictly less than β.

Definition 12. Given an open set X ⊆ R^D and a function f : X → R, f is said to vanish on the boundary ∂X of X if, for any sequence {x_i}_{i=1}^∞ in X with inf_{x'∈∂X} ‖x_i − x'‖_2 → 0 as i → ∞, we have f(x_i) → 0 as i → ∞. Here,

\[
\partial\mathcal X := \{x\in\mathbb R^D : \forall\,\delta>0,\ B(x,\delta)\not\subseteq\mathcal X \text{ and } B(x,\delta)\not\subseteq\mathcal X^{c}\}
\]

denotes the boundary of X.

Corollary 13. Consider the metric measure space (R^D, d, Σ, µ), where d is Euclidean and µ is the Lebesgue measure. Let P be an absolutely continuous probability measure with full dimension and density p supported on an open set X ⊆ R^D. Suppose p satisfies (9) and the conditions of Theorem 7 and is β-Hölder continuous (β ∈ (0, 2]) with constant L. Assume p vanishes on ∂X; if β > 1, assume ‖∇p‖_2 also vanishes on ∂X. Then,

\[
\Big|\,\mathbb{E}\big[\hat H_k(X)\big] - H(X)\,\Big| \le C_H\Big(\frac{k}{n}\Big)^{\beta/D},
\]

where C_H = (1 + c_D)\,C_2\,\Gamma\big(\tfrac{D+\beta}{D}\big)\,\tfrac{L}{D}.

Remark 14. The assumption that p (and perhaps ‖∇p‖) vanish on the boundary of X can be thought of as ensuring that the trivial continuation q : R^D → [0, ∞),

\[
q(x) = \begin{cases} p(x), & x\in\mathcal X\\ 0, & x\in\mathbb R^D\setminus\mathcal X,\end{cases}
\]

of p to R^D is β-Hölder continuous. This reduces boundary bias, for which the KL estimator does not correct.⁸

⁸ Several estimators controlling for boundary bias have been proposed; e.g., Sricharan et al. (2010) give a modified k-NN estimator that accomplishes this without prior knowledge of X.

8. Variance Bound

We first use the bounds proven in Section 5 to prove uniform (in n) bounds on the central moments of log ε_k(X). We show that, for any fixed x ∈ X, although log ε_k(x) → −∞ almost surely as n → ∞, V[log ε_k(x)], and indeed all higher central moments of log ε_k(x), are bounded uniformly in n. In fact, there exist exponential bounds, independent of n, on the density of log ε_k(x) − E[log ε_k(x)].
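A small simulation sketch (ours, with arbitrary illustrative parameters) of the phenomenon described above: the mean of log ε_k(x) drifts to −∞ as n grows, while its variance stays roughly constant.

```python
# Sketch: for X_1..X_n ~ Uniform([0,1]^2) and a fixed query point x,
# E[log eps_k(x)] decreases with n, but V[log eps_k(x)] stays roughly constant.
import numpy as np

rng = np.random.default_rng(1)
k, D, reps = 3, 2, 500
x0 = np.full(D, 0.5)
for n in (100, 1000, 10000):
    log_eps = np.empty(reps)
    for r in range(reps):
        pts = rng.random((n, D))
        dists = np.linalg.norm(pts - x0, axis=1)
        log_eps[r] = np.log(np.partition(dists, k - 1)[k - 1])  # k-th nearest distance
    print(n, round(log_eps.mean(), 3), round(log_eps.var(), 3))
```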
8.1. Moment Bounds on Logarithmic k-NN Distances

Lemma 15. Suppose (X, d, Σ, µ) and P satisfy the conditions of Theorem 7. Suppose also that Γ_0 := sup_{x∈X} γ^*(x)/γ_*(x) < ∞. Let λ ∈ (0, Dk/Γ_0) and assume the following expectations are finite:

\[
\Gamma := \mathbb{E}_{X\sim P}\Big[\frac{\gamma^*(X)}{\gamma_*(X)}\Big] < \infty.
\tag{10}
\]
\[
\Gamma_*(\lambda) := \mathbb{E}_{X\sim P}\Big[(\gamma_*(X))^{-\lambda/D}\Big] < \infty.
\tag{11}
\]
\[
\Gamma^*(\lambda) := \mathbb{E}_{X\sim P}\Big[(\gamma^*(X))^{\lambda/D}\Big] < \infty.
\tag{12}
\]

Then, for any integer ℓ > 1, the ℓth central moment

\[
M_\ell := \mathbb{E}\Big[\big(\log\varepsilon_k(X) - \mathbb{E}[\log\varepsilon_k(X)]\big)^{\ell}\Big]
\]

satisfies

\[
M_\ell \le C_M\,\ell!/\lambda^{\ell},
\tag{13}
\]

where C_M > 0 is a constant independent of n, ℓ, and λ.

Remark 16. The conditions (10), (11), and (12) are mild. For example, when X = R^D, d is the Euclidean metric, and µ is the Lebesgue measure, it suffices that p is Lipschitz continuous⁹ and there exist c, r > 0, p > D²/(D−α) such that p(x) ≤ c‖x‖^{−p} whenever ‖x‖_2 > r. The condition Γ_0 < ∞ is more prohibitive, but still permits many (possibly unbounded) distributions of interest.

Remark 17. If the terms log ε_k(X_i) were independent, a Bernstein inequality, together with the moment bound (13), would imply a sub-Gaussian concentration bound on the KL estimator about its expectation. This may follow from one of several more refined concentration results relaxing the independence assumption that have been proposed.

8.2. Bound on the Variance of the KL Estimate

Bounds on the variance of the KL estimator now follow from the law of large numbers in Evans (2008) (itself an application of the Efron–Stein inequality to k-NN statistics).

Theorem 18. Suppose (X, d, Σ, µ) and P satisfy the conditions of Lemma 15, and that there exists a constant N_k ∈ N such that, for any finite F ⊆ X, any x ∈ F can be among the k-NN of at most N_k other points in that set. Then, Ĥ_k(X) → E[Ĥ_k(X)] almost surely (as n → ∞), and, for n ≥ 16k and M_4 satisfying (13),

\[
\mathbb{V}\big[\hat H_k(X)\big] \le \frac{5(3+kN_k)(3+64k)\,M_4}{n} \;\in\; O\Big(\frac{1}{nk}\Big).
\]

Remark 19. N_k depends only on k and the geometry of the metric space (X, d). For example, Corollary A.2 of Evans (2008) shows that, when X = R^D and d is the Euclidean metric, then N_k ≤ kK(D), where K(D) is the kissing number of R^D.

9. Bounds on the Mean Squared Error

The bias and variance bounds (Theorems 10 and 18) imply a bound on the mean squared error of the KL estimator:

Corollary 20. Suppose p

1. is β-Hölder continuous with β ∈ (0, 2];
2. vanishes on ∂X (if β > 1, then also suppose ‖∇p‖_2 vanishes on ∂X);
3. satisfies the remaining assumptions of Theorems 10 and 18.

Then,

\[
\mathbb{E}\Big[\big(\hat H_k(X) - H(X)\big)^{2}\Big] \le C_B^{2}\Big(\frac{k}{n}\Big)^{2\beta/D} + \frac{C_V}{nk}.
\tag{14}
\]

If we let k scale as k ≍ n^{max{0, (2β−D)/(2β+D)}}, this gives an overall convergence rate of

\[
\mathbb{E}\Big[\big(\hat H_k(X) - H(X)\big)^{2}\Big] \in O\big(n^{-2\beta/D}\big) \text{ when } \beta < D/2, \quad\text{and}\quad O\big(n^{-1}\big) \text{ when } \beta \ge D/2.
\tag{15}
\]

⁹ Significantly milder conditions than Lipschitz continuity suffice, but are difficult to state here due to space limitations.
10. Conclusions and Future Work
This paper derives finite sample bounds on the bias and
variance of the KL estimator under general conditions,
including for certain classes of unbounded distributions.
As intermediate results, we proved concentration inequalities for k-NN distances and bounds on the expectations
of statistics of k-NN distances. We hope these results and methods may lead to convergence rates for the widely used KSG mutual information estimator, or help to generalize convergence rates for other estimators of entropy and related functionals to unbounded distributions.
Acknowledgements
This material is based upon work supported by a National
Science Foundation Graduate Research Fellowship to the
first author under Grant No. DGE-1252522.
References
Adami, C. Information theory in molecular biology.
Physics of Life Reviews, 1:3–22, 2004.
Aghagolzadeh, M., Soltanian-Zadeh, H., Araabi, B., and
Aghagolzadeh, A. A hierarchical clustering based on
mutual information maximization. In Proc. of IEEE International Conference on Image Processing, pp. 277–280, 2007.
Alemany, P. A. and Zanette, D. H. Fractal random walks
from a variational formalism for Tsallis entropies. Phys.
Rev. E, 49(2):R956–R958, Feb 1994. doi: 10.1103/
PhysRevE.49.R956.
Baccetti, Valentina and Visser, Matt. Infinite Shannon entropy. Journal of Statistical Mechanics: Theory and Experiment, 2013(04):P04010, 2013.
Biau, Gérard and Devroye, Luc. Entropy estimation. In
Lectures on the Nearest Neighbor Method, pp. 75–91.
Springer, 2015.
Birgé, L. and Massart, P. Estimation of integral functionals of a density. Ann. Statistics, 23:11–29, 1995.
Chai, B., Walther, D. B., Beck, D. M., and Fei-Fei, L. Exploring functional connectivity of the human brain using
multivariate information analysis. In NIPS, 2009.
Chaudhuri, Kamalika and Dasgupta, Sanjoy. Rates of convergence for nearest neighbor classification. In Advances
in Neural Information Processing Systems, pp. 3437–
3445, 2014.
Evans, D. A law of large numbers for nearest neighbor
statistics. In Proceedings of the Royal Society, volume
464, pp. 3175–3192, 2008.
Evans, Dafydd, Jones, Antonia J, and Schmidt, Wolfgang M. Asymptotic moments of near–neighbour distance distributions. In Proceedings of the Royal Society
of London A: Mathematical, Physical and Engineering
Sciences, volume 458, pp. 2839–2849. The Royal Society, 2002.
Gao, Shuyang, Steeg, Greg Ver, and Galstyan, Aram. Estimating mutual information by local gaussian approximation. arXiv preprint arXiv:1508.00536, 2015a.
Gao, Shuyang, Ver Steeg, Greg, and Galstyan, Aram. Efficient estimation of mutual information for strongly dependent variables. In Proceedings of the Eighteenth
International Conference on Artificial Intelligence and
Statistics, pp. 277–286, 2015b.
Goria, M. N., Leonenko, N. N., Mergel, V. V., and Inverardi, P. L. Novi. A new class of random vector entropy estimators and its applications in testing statistical
hypotheses. J. Nonparametric Statistics, 17:277–297,
2005.
Hero, A. O., Ma, B., Michel, O., and Gorman, J. Alphadivergence for classification, indexing and retrieval,
2002a. Communications and Signal Processing Laboratory Technical Report CSPL-328.
Hero, A. O., Ma, B., Michel, O. J. J., and Gorman, J. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85–95, 2002b.
Hlaváčková-Schindler, K., Paluš, M., Vejmelka, M., and Bhattacharya, J. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441:1–46, 2007.
Hulle, M. M. Van. Constrained subspace ICA based on
mutual information optimization directly. Neural Computation, 20:964–973, 2008.
Kandasamy, Kirthevasan, Krishnamurthy, Akshay, Poczos,
Barnabas, Wasserman, Larry, et al. Nonparametric von
mises estimators for entropies, divergences and mutual
informations. In Advances in Neural Information Processing Systems, pp. 397–405, 2015.
Kozachenko, L. F. and Leonenko, N. N. A statistical estimate for the entropy of a random vector. Problems of
Information Transmission, 23:9–16, 1987.
Kraskov, A., Stögbauer, H., and Grassberger, P. Estimating
mutual information. Phys. Rev. E, 69:066138, 2004.
Krishnamurthy, A., Kandasamy, K., Poczos, B., and
Wasserman, L. Nonparametric estimation of renyi divergence and friends. In International Conference on
Machine Learning (ICML), 2014.
Kybic, J. Incremental updating of nearest neighbor-based
high-dimensional entropy estimation. In Proc. Acoustics, Speech and Signal Processing, 2006.
Learned-Miller, E. G. and Fisher, J. W. ICA using spacings
estimates of entropy. J. Machine Learning Research, 4:
1271–1295, 2003.
Leonenko, N., Pronzato, L., and Savani, V. A class of
Rényi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153–2182, 2008.
Lewi, J., Butera, R., and Paninski, L. Real-time adaptive
information-theoretic optimization of neurophysiology
experiments. In Advances in Neural Information Processing Systems, volume 19, 2007.
Liu, H., Lafferty, J., and Wasserman, L. Exponential concentration inequality for mutual information estimation.
In Neural Information Processing Systems (NIPS), 2012.
Moon, Kevin R and Hero, Alfred O. Ensemble estimation of multivariate f-divergence. In Information Theory (ISIT), 2014 IEEE International Symposium on, pp.
356–360. IEEE, 2014.
Pál, D., Póczos, B., and Szepesvári, Cs. Estimation of
Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Proceedings of the
Neural Information Processing Systems, 2010.
Peng, H. and Ding, C. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. on Pattern Analysis and Machine Intelligence, 27, 2005.
Pérez-Cruz, F. Estimation of information theoretic measures for continuous random variables. In Advances in
Neural Information Processing Systems 21, 2008.
Póczos, B. and Lőrincz, A. Independent subspace analysis
using geodesic spanning trees. In ICML, pp. 673–680,
2005.
Póczos, B. and Lőrincz, A. Identification of recurrent neural networks by Bayesian interrogation techniques. J.
Machine Learning Research, 10:515–554, 2009.
Shan, C., Gong, S., and Mcowan, P. W. Conditional mutual
information based boosting for facial expression recognition. In British Machine Vision Conference (BMVC),
2005.
Singh, S. and Poczos, B. Exponential concentration of a
density functional estimator. In Neural Information Processing Systems (NIPS), 2014a.
Singh, S. and Poczos, B. Generalized exponential concentration inequality for Rényi divergence estimation. In
International Conference on Machine Learning (ICML),
2014b.
Sricharan, K., Raich, R., and Hero, A. Empirical estimation of entropy functionals with confidence. Technical
Report, http://arxiv.org/abs/1012.4188,
2010.
Sricharan, K., Wei, D., and Hero, A. Ensemble estimators for multivariate entropy estimation, 2012.
http://arxiv.org/abs/1203.5829.
Szabó, Z., Póczos, B., and Lőrincz, A. Undercomplete
blind subspace deconvolution. J. Machine Learning Research, 8:1063–1095, 2007.
Tsybakov, A. B. and van der Meulen, E. C. Root-n consistent estimators of entropy for densities with unbounded
support. Scandinavian J. Statistics, 23:75–83, 1996.
Tsybakov, A.B. Introduction to Nonparametric Estimation.
Springer Publishing Company, Incorporated, 1st edition,
2008. ISBN 0387790519, 9780387790510.
Wang, Q., Kulkarni, S.R., and Verdú, S. Divergence estimation for multidimensional densities via k-nearestneighbor distances. IEEE Transactions on Information
Theory, 55(5), 2009.
Wolsztynski, E., Thierry, E., and Pronzato, L. Minimumentropy estimation in semi-parametric models. Signal
Process., 85(5):937–949, 2005. ISSN 0165-1684. doi:
http://dx.doi.org/10.1016/j.sigpro.2004.11.028.
| 2 |
arXiv:1405.4672v1 [math.AT] 19 May 2014
Homology of torus spaces with acyclic proper faces
of the orbit space
Anton Ayzenberg
Abstract. Let X be a 2n-dimensional compact manifold and T^n ↷ X a locally standard action of a compact torus. The orbit space X/T is a manifold with
corners. Suppose that all proper faces of X{T are acyclic. In the paper we study
˚
the homological spectral sequence XE ˚,˚ ñ H˚ pXq corresponding to the filtration
of X by orbit types. When the free part of the action is not twisted, we describe
˚
the whole spectral sequence XE ˚,˚ in terms of homology and combinatorial structure of the orbit space X{T . In this case we describe the kernel and the cokernel
of the natural map krX{T s{pl.s.o.p.q Ñ H ˚ pXq, where krX{T s is a face ring of
X{T and pl.s.o.p.q is the ideal generated by a linear system of parameters (this
ideal appears as the image of H ą0 pBT q in HT˚ pXq). There exists a natural double
grading on H˚ pXq, which satisfies bigraded Poincare duality. This general theory
is applied to compute homology groups of origami toric manifolds with acyclic
proper faces of the orbit space. A number of natural generalizations is considered.
These include Buchsbaum simplicial complexes and posets. h1 - and h2 -numbers
of simplicial posets appear as the ranks of certain terms in the spectral sequence
X ˚
E ˚,˚ . In particular, using topological argument we show that Buchsbaum posets
have nonnegative h2 -vectors. The proofs of this paper rely on the theory of cellular sheaves. We associate to a torus space certain sheaves and cosheaves on
the underlying simplicial poset, and observe an interesting duality between these
objects. This duality seems to be a version of Poincare–Verdier duality between
cellular sheaves and cosheaves.
Contents
1. Introduction
2. Preliminary constructions
3. Buchsbaum pseudo-cell complexes
4. Torus spaces over Buchsbaum pseudo-cell complexes
5. Main results
6. Duality between certain cellular sheaves and cosheaves
7. Face vectors and ranks of border components
8. Geometry of equivariant cycles
9. Examples and calculations
10. Concluding remarks
Acknowledgements
References

The author is supported by the JSPS postdoctoral fellowship program.
1. Introduction
Let M be a 2n-dimensional compact manifold with a locally standard action of
a compact torus T n . This means, by definition, that the action of T n on M 2n is
locally modeled by a standard coordinate-wise action of T n on Cn . Since Cn {T n
can be identified with a nonnegative cone Rně , the quotient space Q “ M{T has a
natural structure of a compact manifold with corners. The general problem is the
following:
Problem 1. Describe the (co)homology of M in terms of combinatorics and
topology of the orbit space Q and the local data of the action.
The answer is known in the case when Q and all its faces are acyclic (so called
homology polytope) [9]. In this case the equivariant cohomology ring of M coincides
with the face ring of simplicial poset SQ dual to Q, and the ordinary cohomology
has description similar to that of toric varieties or quasitoric manifolds:
H ˚ pM; kq – krSQ s{pθ1 , . . . , θn q,
deg θi “ 2.
In this case krSQ s is Cohen–Macaulay and θ1 , . . . , θn is a linear regular sequence
determined by the characteristic map on Q. In particular, cohomology vanishes in
odd degree, and dim H 2i pMq “ hi pSQ q.
In general, there is a topological model of a manifold M, called canonical model.
The construction is the following. Start with a nice manifold with corners Q, consider
a principal T n -bundle Y over Q, and then consider the quotient space X “ Y {„
determined by a characteristic map [17, def. 4.2]. It is known that X is a manifold
with locally standard torus action, and every manifold with l.s.t.a. is equivariantly
homeomorphic to such canonical model. Thus it is sufficient to work with canonical
models to answer Problem 1.
In this paper we study the case when all proper faces of Q are acyclic, but Q
itself may be arbitrary. Homology of X can be described by the spectral sequence
X r
E ˚,˚ associated to the filtration of X by orbit types:
(1.1)
X0 Ă X1 Ă . . . Ă Xn´1 Ă Xn “ X,
dim Xi “ 2i.
This filtration is covered by the filtration of Y :
(1.2)
Y0 Ă Y1 Ă . . . Ă Yn´1 Ă Yn “ Y,
Xi “ Yi {„ .
We prove that most entries of the second page X E^2_{˚,˚} coincide with the corresponding entries of Y E^2_{˚,˚} (Theorem 1). When Y is a trivial T n -bundle, Y “ Q ˆ T n , this
˚
observation allows to describe XE ˚,˚ completely in terms of topology and combinatorics of Q (Theorem 2, statement 5.3 and Theorem 3). This answers Problem 1
additively. From this description it follows, in particular, that the Betti numbers of X do not depend on the choice of characteristic map. We hope that this technique will lead to the description of the cohomology multiplication in H ˚ pXq as well.
Another motivation for this paper comes from a theory of Buchsbaum simplicial
complexes and posets. The notions of h1 - and h2 -vectors of simplicial poset S first
appeared in combinatorial commutative algebra [15, 13]. These invariants emerge
naturally in the description of homology of X (Theorems 3 and 4). The space
X “ Y { „ can be constructed not only in the case when Q is a manifold with
corners (“manifold case”), but also in the case when Q is a cone over geometric
realization of simplicial poset S (“cone case”). In the cone case, surely, X may not
be a manifold. But there still exists filtration (1.1), and homology groups of X
can be calculated by the same method as for manifolds, when S is Buchsbaum. In
8
the cone case we prove that dim XE i,i “ h2 pSq (Theorem 5). Thus, in particular,
h2 -vector of Buchsbaum simplicial poset is nonnegative. This result is proved in
commutative algebra by completely different methods [13].
The exposition of the paper is built in such way that both situations: manifolds
with acyclic faces, and cones over Buchsbaum posets are treated in a common context. In order to do this we introduce the notion of Buchsbaum pseudo-cell complex
which is very natural and includes both motivating examples. A theory of cellular
sheaves over simplicial posets is used to prove basic theorems. The coincidence of
r
r
most parts of XE ˚,˚ and Y E ˚,˚ follows from the Key lemma (lemma 5.1) which is
an instance of general duality between certain sheaves and cosheaves (Theorem 6).
In the manifold case this duality can be deduced from Verdier duality for cellular
sheaves, described in [6].
The paper is organized as follows. Section 2 contains preliminaries on simplicial
posets and cellular sheaves. In section 3 we introduce the notion of simple pseudocell complex and describe spectral sequences associated to filtrations by pseudo-cell
skeleta. Section 4 is devoted to torus spaces over pseudo-cell complexes. The main
results (Theorems 1–5) are stated in section 5. The rest of section 5 contains the
description of homology of X. There is an additional grading on homology groups,
and in the manifold case there is a bigraded Poincare duality. Section 6 contains a
sheaf-theoretic discussion of the subject. In this section we prove Theorem 6 which
can be considered as a version of cellular Verdier duality. This proves the Key lemma,
from which follow Theorems 1 and 2. Section 7 is devoted to the combinatorics of
simplicial posets. In this section we recall combinatorial definitions of f -, h-, h1 and h2 -vectors and prove Theorems 3–5. The structure of equivariant cycles and
cocycles of a manifold X with locally standard torus action is the subject of section
8. There exists a natural map krSs{pθ1 , . . . , θn q Ñ H ˚pXq, where pθ1 , . . . , θn q is a
linear system of parameters, associated to a characteristic map. In general (i.e. when
Q is not a homology polytope) this map may be neither injective nor surjective. The
kernel of this map is described by corollary 8.8. The calculations for some particular
examples are gathered in section 9. The main family of nontrivial examples is the
family of origami toric manifolds with acyclic proper faces of the orbit space.
2. Preliminary constructions
2.1. Preliminaries on simplicial posets. First, recall several standard definitions. A partially ordered set (poset in the following) is called simplicial if it has a
minimal element ∅ P S, and for any I P S, the lower order ideal SďI “ tJ | J ď Iu is
isomorphic to the boolean lattice 2rks (the poset of faces of a simplex). The number
k is called the rank of I P S and is denoted |I|. Also set dim I “ |I| ´ 1. A vertex is
a simplex of rank 1 (i.e. the atom of the poset); the set of all vertices is denoted by
VertpSq. A subset L Ă S, for which I ă J, J P L implies I P L is called a simplicial
subposet.
The notation I ăi J is used whenever I ă J and |J| ´ |I| “ i. If S is a simplicial
poset, then for each I ă2 J P SQ there exist exactly two intermediate simplices
J 1 , J 2:
(2.1)
I ă1 J 1 , J 2 ă1 J.
For simplicial poset S a “sign convention” can be chosen. It means that we can
associate an incidence number rJ : Is “ ˘1 to any I ă1 J P S in such way that for
(2.1) holds
(2.2)
rJ : J 1 s ¨ rJ 1 : Is ` rJ : J 2 s ¨ rJ 2 : Is “ 0.
The choice of a sign convention is equivalent to the choice of orientations of all
simplices.
For I P S consider the link:
lkS I “ tJ P S | J ě Iu.
It is a simplicial poset with minimal element I. On the other hand, lkS I can also
be considered as a subset of S. It can be seen that Sz lkS I is a simplicial subposet.
Note, that lkS ∅ “ S.
Let S 1 be the barycentric subdivision of S. By definition, S 1 is a simplicial complex on the set Sz∅ whose simplices are the chains of elements of S. By definition,
the geometric realization of S is defined to be the geometric realization of its barycentric subdivision, |S| “ |S 1 |. One can also think about |S| as a CW-complex with simplicial cells
[2]. A poset S is called pure if all its maximal elements have equal dimensions. A
poset S is pure whenever S 1 is pure.
Definition 2.1. A simplicial complex K of dimension n ´ 1 is called Buchsbaum if H̃i plkK Iq “ 0 for all ∅ ‰ I P K and i ‰ n ´ 1 ´ |I|. If K is Buchsbaum and, moreover, H̃i pKq “ 0 for i ‰ n ´ 1, then K is called Cohen–Macaulay. A simplicial poset S is called Buchsbaum (Cohen–Macaulay) if S 1 is a Buchsbaum (resp. Cohen–Macaulay) simplicial complex.
Remark 2.2. Whenever the coefficient ring in the notation of (co)homology is
omitted it is supposed to be the ground ring k, which is either a field or the ring of
integers.
Remark 2.3. By [13, Sec.6], S is Buchsbaum whenever H̃i plkS Iq “ 0 for all ∅ ‰ I P S and i ‰ n ´ 1 ´ |I|. Similarly, S is Cohen–Macaulay if H̃i plkS Iq “ 0 for all I P S and i ‰ n ´ 1 ´ |I|. A poset S is Buchsbaum whenever all its proper links are Cohen–Macaulay.
One easily checks that Buchsbaum property implies purity.
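A minimal illustration (ours, not from the paper) of the difference between the two notions: take K to be the disjoint union of two triangle boundaries, so dim K = 1 and n = 2. Then

\[
\operatorname{lk}_K v \cong S^0 \ \text{for every vertex } v,\qquad
\tilde H_i(\operatorname{lk}_K v)=0 \ \text{for } i\neq 0=n-1-|v|,\qquad\text{but}\qquad
\tilde H_0(K)=\Bbbk\neq 0,
\]

so K is Buchsbaum but not Cohen–Macaulay: the extra condition fails only for the empty simplex. Triangulations of closed manifolds give further examples of Buchsbaum complexes.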
2.2. Cellular sheaves. Let MODk be the category of k-modules. The notation
dim V is used for the rank of a k-module V .
Each simplicial poset S defines a small category CATpSq whose objects are the
elements of S and morphisms — the inequalities I ď J. A cellular sheaf [6] (or a stack
[12], or a local coefficient system elsewhere) is a covariant functor A : CATpSq Ñ
MODk . We simply call A a sheaf on S and hope that this will not lead to a confusion,
since different meanings of this word do not appear in the paper. The maps ApJ1 ď
J2 q are called the restriction maps. The cochain complex pC ˚ pS; Aq, dq is defined as
follows:
\[
C^*(S;\mathcal A)=\bigoplus_{i\ge -1} C^i(S;\mathcal A),\qquad C^i(S;\mathcal A)=\bigoplus_{\dim I=i}\mathcal A(I),
\]
\[
d\colon C^i(S;\mathcal A)\to C^{i+1}(S;\mathcal A),\qquad d=\bigoplus_{\substack{I<_1 I',\ \dim I=i}}[I':I]\,\mathcal A(I\le I').
\]
By the standard argument involving sign convention (2.2), d² = 0, thus (C^*(S;\mathcal A), d) is a differential complex. Define the cohomology of \mathcal A as the cohomology of this complex:
\[
H^*(S;\mathcal A)\stackrel{\mathrm{def}}{=}H^*(C^*(S;\mathcal A),d).
\tag{2.3}
\]
Remark 2.4. Cohomology of A defined this way coincides with any other meaningful definition of cohomology. E.g. the derived functors of the functor of global
sections are isomorphic to (2.3) (see [6] for the vast exposition of this subject).
A sheaf A on S can be restricted to a simplicial subposet L Ă S. The complexes
pC ˚ pL, Aq, dq and pC ˚ pS; Aq{C ˚pL; Aq, dq are defined in a usual manner. The latter
complex gives rise to a relative version of sheaf cohomology: H ˚pS, L; Aq.
Remark 2.5. It is standard in topological literature to consider cellular sheaves
which do not take values on ∅ P S, since in general this element has no geometrical
meaning. However, this extra value Ap∅q is very important in the considerations of
this paper. Thus the cohomology group may be nontrivial in degree dim ∅ “ ´1.
If a sheaf A is defined on S, then we often consider its truncated version A which
coincides with A on Szt∅u and vanishes on ∅.
Example 2.6. Let W be a k-module. By abuse of notation let W denote the
(globally) constant sheaf on S. It takes constant value W on ∅ ‰ I P S and
vanishes on ∅; all nontrivial restriction maps are identity isomorphisms. In this
case H ˚pS; W q – H ˚pSq b W .
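A quick computational sketch (ours, not from the paper) of the cochain complex (2.3) for the truncated constant sheaf k: for S the poset of proper faces of a triangle, the computation below recovers dim H^0 = dim H^1 = 1, i.e. the cohomology of the circle |S|, in agreement with Example 2.6.

```python
# Cochain complex of the truncated constant sheaf k (A(empty)=0) on the
# poset of proper faces of a triangle; Betti numbers over Q from matrix ranks.
import numpy as np

vertices = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]

def incidence(v, e):
    """Sign [e : v] for v <_1 e: (-1)^(position in e of the vertex added to v)."""
    added = [u for u in e if u not in v][0]
    return (-1) ** e.index(added)

# Coboundary d0 : C^0 -> C^1, with entries [e : v] * A(v <= e) = [e : v].
d0 = np.zeros((len(edges), len(vertices)))
for j, e in enumerate(edges):
    for i, v in enumerate(vertices):
        if set(v) <= set(e):
            d0[j, i] = incidence(v, e)

rank = np.linalg.matrix_rank(d0)
b0 = len(vertices) - rank   # dim H^0(S; k)
b1 = len(edges) - rank      # dim H^1(S; k); S has no 2-dimensional faces
print(b0, b1)               # expected: 1 1
```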
Example 2.7. A locally constant sheaf valued by W P MODk is a sheaf W which
satisfies Wp∅q “ 0, WpIq – W for I ‰ ∅ and all nontrivial restriction maps are
isomorphisms.
Example 2.8. Let I P S and W P MODk . Consider the sheaf t I u
#
W, if J ě I
W
t u
(2.4)
I pJq “
0, otherwise,
W
defined by
W
with the restriction maps t I u pJ1 ď J2 q either identity on W (when I ď J1 ), or 0
W
k
W
k
(otherwise). Then t I u “ t I u b W and H ˚ pS; t I u q – H ˚pS; t I u q b W . We have
k
H ˚pS; t I u q – H ˚´|I|plkS Iq,
since corresponding differential complexes coincide.
In the following if A and B are two sheaves on S we denote by A b B their
componentwise tensor product: pA b BqpIq “ ApIq b BpIq with restriction maps
defined in the obvious way.
Example 2.9. As a generalization of the previous example consider the sheaf
I b A. Then
k
H ˚ pS; t I u b Aq – H ˚´|I| plkS I; A|lkS I q.
t uk
Example 2.10. Following [12], define i-th local homology sheaf Ui on S by
setting Ui p∅q “ 0 and
(2.5)
Ui pJq “ Hi pS, Sz lkS Jq
for J ‰ ∅. The restriction maps Ui pJ1 ă J2 q are induced by inclusions lkS J2 ãÑ
lkS J1 . A poset S is Buchsbaum if and only if Ui “ 0 for i ă n ´ 1.
Definition 2.11. Buchsbaum poset S is called homology manifold (orientable
over k) if its local homology sheaf Un´1 is isomorphic to the constant sheaf k.
If |S| is a compact closed orientable topological manifold then S is a homology
manifold.
2.3. Cosheaves. A cellular cosheaf (see [6]) is a contravariant functor \hat{\mathcal A} : CAT^{op}(S) → MOD_k. The homology of a cosheaf is defined similarly to the cohomology of sheaves:
\[
C_*(S;\hat{\mathcal A})=\bigoplus_{i\ge -1} C_i(S;\hat{\mathcal A}),\qquad C_i(S;\hat{\mathcal A})=\bigoplus_{\dim I=i}\hat{\mathcal A}(I),
\]
\[
d\colon C_i(S;\hat{\mathcal A})\to C_{i-1}(S;\hat{\mathcal A}),\qquad d=\bigoplus_{\substack{I>_1 I',\ \dim I=i}}[I:I']\,\hat{\mathcal A}(I\ge I'),
\]
\[
H_*(S;\hat{\mathcal A})\stackrel{\mathrm{def}}{=}H_*(C_*(S;\hat{\mathcal A}),d).
\]
Example 2.12. Each locally constant sheaf W on S defines the locally constant
x by inverting arrows, i.e. WpIq
x – WpIq and WpI
x ą Jq “ pWpJ ă Iqq´1 .
cosheaf W
3. Buchsbaum pseudo-cell complexes
3.1. Simple pseudo-cell complexes.
Definition 3.1 (Pseudo-cell complex). A CW-pair pF, BF q will be called kdimensional pseudo-cell, if F is compact and connected, dim F “ k, dim BF ď k ´ 1.
A (regular finite) pseudo-cell complex Q is a space which is a union of an expanding
sequence of subspaces Qk such that Q´1 is empty and Qk is the pushout obtained
from Qk´1 by attaching finite number of k-dimensional pseudo-cells pF, BF q along
injective attaching maps BF Ñ Qk´1 . The images of pF, BF q in Q will be also called
pseudo-cells and denoted by the same letters.
Remark 3.2. In general, situations when BF “ ∅ or Q0 “ ∅ are allowed by
this definition. Thus the construction of pseudo-cell complex may actually start not
from Q0 but from higher dimensions.
Let F ˝ “ F zBF denote open cells. In the following we assume that the boundary
of each cell is a union of lower dimensional cells. Thus all pseudo-cells of Q are
partially ordered by inclusion. We denote by SQ the poset of faces with the reversed
order, i.e. F ăSQ G iff G Ď BF Ă F . To distinguish abstract elements of poset SQ
from faces of Q the former are denoted by I, J, . . . P SQ , and corresponding faces —
FI , FJ , . . . Ď Q.
Definition 3.3. A pseudo-cell complex Q, dim Q “ n is called simple if SQ is
a simplicial poset of dimension n ´ 1 and dim FI “ n ´ 1 ´ dim I for all I P SQ .
Thus for every face F , the upper interval tG | G Ě F u is isomorphic to a
boolean lattice 2rcodim F s . In particular, there exists a unique maximal pseudo-cell
F∅ of dimension n, i.e. Q itself. In case of simple pseudo-cell complexes we adopt
the following naming convention: pseudo-cells different from F∅ “ Q are called
faces, and faces of codimension 1 — facets. Facets correspond to vertices (atoms) of
SQ . Each face F is contained in exactly codim F facets. In this paper only simple
pseudo-cell complexes are considered.
Example 3.4. Nice (compact connected) manifolds with corners as defined in
[9] are examples of simple pseudo-cell complexes. Each face F is itself a manifold
and BF is the boundary in a common sense.
Example 3.5. Each pure simplicial poset S determines a simple pseudo-cell
complex P pSq such that SP pSq “ S by the following standard construction. Consider the barycentric subdivision S 1 and construct the cone P pSq “ | Cone S 1 |. By
definition, Cone S 1 is a simplicial complex on the set S and k-simplices of Cone S 1
have the form pI0 ă I1 ă . . . ă Ik q, where Ii P S. For each I P S consider the
pseudo-cell:
FI “ |tpI0 ă I1 ă . . .q P Cone S 1 such that I0 ě Iu| Ă | Cone S 1 |
BFI “ |tpI0 ă I1 ă . . .q P Cone S 1 such that I0 ą Iu| Ă | Cone S 1 |
Since S is pure, dim FI “ n ´ dim I ´ 1. These sets define a pseudo-cell structure
on P pSq. One shows that FI Ă FJ whenever J ă I. Thus SP pSq “ S. Face FI is
called dual to I P S. The filtration by pseudo-cell skeleta
(3.1)
∅ “ Q´1 Ă Q0 Ă Q1 Ă . . . Ă Qn´1 “ BQ “ |S|,
is called the coskeleton filtration of |S| (see [12]).
The maximal pseudo-cell F∅ of P pSq is P pSq – Cone |S|, and BF∅ “ |S|. Note
that BFI can be identified with the barycentric subdivision of lkS I. Face FI is the
cone over BFI .
If S is non-pure, this construction makes sense as well, but the dimension of FI
may not be equal to n ´ dim I ´ 1. So P pSq is not a simple pseudo-cell complex if
S is not pure.
For a general pseudo-cell complex Q there is a skeleton filtration
(3.2)
Q0 Ă Q1 Ă . . . Ă Qn´1 “ BQ Ă Qn “ Q
and the corresponding spectral sequences in homology and cohomology are:
\[
{}^{Q}\!E^{1}_{p,q}=H_{p+q}(Q_p,Q_{p-1})\;\Rightarrow\;H_{p+q}(Q),\qquad d^{r}_{Q}\colon {}^{Q}\!E^{r}_{*,*}\to {}^{Q}\!E^{r}_{*-r,\,*+r-1},
\tag{3.3}
\]
\[
{}^{Q}\!E_{1}^{p,q}=H^{p+q}(Q_p,Q_{p-1})\;\Rightarrow\;H^{p+q}(Q),\qquad (d_{Q})_{r}\colon {}^{Q}\!E_{r}^{*,*}\to {}^{Q}\!E_{r}^{*+r,\,*-r+1}.
\tag{3.4}
\]
In the following only homological case is considered; the cohomological case being
completely parallel.
Similar to ordinary cell complexes the first term of the spectral sequence is
described as a sum:
\[
H_{p+q}(Q_p,Q_{p-1})\;\cong\bigoplus_{\dim F=p}H_{p+q}(F,\partial F).
\]
The differential d1Q is the sum over all pairs I ă1 J P S of the maps:
(3.5) mqI,J : Hq`dim FI pFI , BFI q Ñ Hq`dim FI ´1 pBFI q Ñ
Ñ Hq`dim FI ´1 pBFI , BFI zFJ˝ q – Hq`dim FJ pFJ , BFJ q,
where the last isomorphism is due to excision. Also consider the truncated spectral
sequence
BQ 1
E p,q “ Hp`q pQp , Qp´1q, p ă n ñ Hp`q pBQq.
Construction 3.6. Given a sign convention on SQ , for each q consider the
sheaf Hq on SQ given by
Hq pIq “ Hq`dim FI pFI , BFI q
with restriction maps Hq pI ă1 Jq “ rJ : IsmqI,J . For general I ăk J consider any
saturated chain
I ă1 J1 ă1 . . . ă1 Jk´1 ă1 J
(3.6)
and set Hq pI ăk Jq to be equal to the composition
Hq pJk´1 ă1 Jq ˝ . . . ˝ Hq pI ă1 J1 q.
Lemma 3.7. The map Hq pI ăk Jq does not depend on a saturated chain (3.6).
Proof. The differential d1Q satisfies pd1Q q2 “ 0, thus mqJ 1 ,J ˝mqI,J 1 `mqJ 2 ,J ˝mqI,J 2 “
0. By combining this with (2.2) we prove that Hq pI ă2 Jq is independent of a
chain. In general, since tT | I ď T ď Ju is a boolean lattice, any two saturated chains between I and J are connected by a sequence of elementary flips
rJk ă1 T1 ă1 Jk`2 s ù rJk ă1 T2 ă1 Jk`2s.
Thus the sheaves Hq are well defined. These sheaves will be called the structure
sheaves of Q. Consider also the truncated structure sheaves
#
Hq pIq if I ‰ ∅,
Hq pIq “
0, if I “ ∅.
1
Corollary 3.8. The cochain complexes of structure sheaves coincide with QE ˚,˚
up to change of indices:
1
pQE ˚,q , d1Q q – pC n´1´˚ pHq q, dq,
1
pBQE ˚,q , d1Q q – pC n´1´˚ pHq q, dq.
Proof. Follows from the definition of the cochain complex of a sheaf.
Remark 3.9. Let S be a pure simplicial poset of dimension n ´ 1 and P pSq
— its dual simple pseudo-cell complex. In this case there exists an isomorphism of
sheaves
(3.7)
Hq – Uq`n´1 ,
where U˚ are the sheaves of local homology defined in example 2.10. Indeed, it
can be shown that Hi pS, Sz lkS Iq – Hi´dim I pFI , BFI q and these isomorphisms can
be chosen compatible with restriction maps. For simplicial complexes this fact is
proved in [12, Sec.6.1]; the case of simplicial posets is rather similar. Note that Hq
depends on the sign convention while U does not. There is a simple explanation:
the isomorphism (3.7) itself depends on the orientations of simplices.
3.2. Buchsbaum pseudo-cell complexes.
Definition 3.10. A simple pseudo-cell complex Q of dimension n is called
Buchsbaum if for any face FI Ă Q, I ‰ ∅ the following conditions hold:
(1) FI is acyclic over Z;
(2) Hi pFI , BFI q “ 0 if i ‰ dim FI .
Buchsbaum complex Q is called Cohen–Macaulay if these two conditions also hold
for I “ ∅.
The second condition in Buchsbaum case is equivalent to Hq “ 0 for q ‰ 0.
Cohen–Macaulay case is equivalent to Hq “ 0 for q ‰ 0. Obviously, Q is Buchsbaum
if and only if all its proper faces are Cohen–Macaulay. Thus any face of dimension
p ě 1 has nonempty boundary of dimension p ´ 1. In particular, this implies SQ is
pure.
Definition 3.11. Buchsbaum pseudo-cell complex Q is called (k-orientable)
Buchsbaum manifold if H0 is isomorphic to a constant sheaf k.
Note that this definition actually describes only the property of BQ not Q itself.
Example 3.12. If Q is a nice compact manifold with corners in which every
proper face is acyclic and orientable, then Q is a Buchsbaum pseudo-cell complex.
Indeed, the second condition of 3.10 follows by Poincare–Lefschetz duality. If, moreover, Q is orientable itself then Q is a Buchsbaum manifold (over all k). Indeed,
the restriction maps H0 p∅ Ă Iq send the fundamental cycle rQs P Hn pQ, BQq – k
to fundamental cycles of proper faces, thus identifying H0 with the constant sheaf
k. The choice of orientations establishing this identification is described in details
in section 8.
Example 3.13. Simplicial poset S is Buchsbaum (resp. Cohen–Macaulay) whenever P pSq is a Buchsbaum (resp. Cohen–Macaulay) simple pseudo-cell complex. Indeed, any face of P pSq is a cone, thus contractible. On the other hand, Hi pFI , BFI q –
r i´1 p| lkS I|q. Thus condition 2 in definition 3.10 is satisHi pCone | lkS I|, | lkS I|q – H
r i plkS Iq “ 0 for i ‰ n ´ 1 ´ |I|. This is equivalent to Buchsbaumness
fied whenever H
(resp. Cohen–Macaulayness) of S by remark 2.3.
Poset S is a homology manifold if and only if P pSq is a Buchsbaum manifold.
This follows from remark 3.9. In particular, if |S| is a closed orientable manifold
then P pSq is a Buchsbaum manifold.
In general, if Q is Buchsbaum, then its underlying poset SQ is also Buchsbaum,
see lemma 3.14 below.
In Buchsbaum (resp. Cohen–Macaulay) case the spectral sequence BQE (resp.
Q
E) collapses at the second page, thus
2
–
H n´1´p pSQ ; H0 q – BQE p,0 ñ Hp pBQq,
if Q is Buchsbaum
–
2
H n´1´p pSQ ; H0 q – QE p,0 ñ Hp pQq,
if Q is Cohen–Macaulay
In particular, if Q is a Buchsbaum manifold, then
H n´1´p pSQ q – Hp pBQq
(3.8)
Let Q be a simple pseudo-cell complex, and ∅ ‰ I P SQ . The face FI is a simple
pseudo-cell complex itself, and SFI “ lkS I. The structure sheaves of FI are the
restrictions of Hq to lkSQ I Ă SQ . If Q is Buchsbaum, then FI is Cohen–Macaulay,
thus
–
H k plkS I; H0 q ñ Hdim FI ´1´k pFI q,
(3.9)
which is either k (in case k “ dim FI ´ 1) or 0 (otherwise), since FI is acyclic.
3.3. Universality of posets. The aim of this subsection is to show that Buchsbaum pseudo-cell complex coincides up to homology with the underlying simplicial
poset away from maximal cells. This was proved for nice manifolds with corners in
[9] and essentially we follow the proof given there.
Lemma 3.14.
p1qn Let Q be Buchsbaum pseudo-cell complex of dimension n, SQ — its underlying poset, and P “ P pSQ q — simple pseudo-cell complex associated to SQ (example
3.5), BP “ |SQ |. Then there exists a face-preserving map ϕ : Q Ñ P which induces
the identity isomorphism of posets and the isomorphism of the truncated spectral
r
r
–
sequences ϕ˚ : BQE ˚,˚ Ñ BPE ˚,˚ for r ě 1.
p2qn If Q is Cohen–Macaulay of dimension n, then ϕ induces the isomorphism
r
r
–
of non-truncated spectral sequences ϕ˚ : QE ˚,˚ Ñ PE ˚,˚ .
Proof. The map ϕ is constructed inductively. 0-skeleta of Q and P are naturally identified. There always exists an extension of ϕ to higher-dimensional faces
since all pseudo-cells of P are cones. The lemma is proved by the following scheme
of induction: p2qďn´1 ñ p1qn ñ p2qn . The case n “ 0 is clear. Let us prove
p1qn ñ p2qn . The map ϕ induces the homomorphism of the long exact sequences:
r ˚ pBQq
H
r ˚ pQq
H
/
r ˚ pBP q
H
H˚ pQ, BQq
/
/
r ˚ pP q
H
/
H˚ pP, BP q
r ˚´1 pBQq
H
/
r ˚´1 pQq
H
/
/
r ˚´1 pBP q
H
/
r ˚´1 pP q
H
r ˚ pQq Ñ H
r ˚ pP q are isomorphisms since both groups are trivial. The
The maps H
–
r ˚ pBQq Ñ H
r ˚ pBP q are isomorphisms by p1qn , since BQE ñ
maps H
H˚ pBQq and
–
BP
Q 1
P 1
E ñ H˚ pBP q. Five lemma shows that ϕ˚ : E n,˚ Ñ E n,˚ is an isomorphism as
well. This implies p2qn .
Now we prove p2qďn´1 ñ p1qn . Let FI be faces of Q and FrI — faces of P . All
proper faces of Q are Cohen–Macaulay of dimension ď n ´ 1. Thus p2qďn´1 implies
isomorphisms H˚ pFI , BFI q Ñ H˚ pFrI , B FrI q which sum together to the isomorphism
1
1
–
ϕ˚ : BQE ˚,˚ Ñ BPE ˚,˚ .
Corollary 3.15. If Q is a Buchsbaum (resp. Cohen–Macaulay) pseudo-cell
complex, then SQ is a Buchsbaum (resp. Cohen–Macaulay) simplicial poset. If Q is
a Buchsbaum manifold, then SQ is a homology manifold.
In particular, according to lemma 3.14, if Q is Buchsbaum, then BQ is homologous to |SQ | “ BP pSQ q. So in the following we may not distinguish between their
homology. If Q is Buchsbaum manifold, then (3.8) implies Poincare duality for BQ:
H n´1´p pBQq – Hp pBQq.
(3.10)
r
3.4. Structure of QE ˚,˚ in Buchsbaum case. Let δi : Hi pQ, BQq Ñ Hi´1 pBQq
be the connecting homomorphisms in the long exact sequence of the pair pQ, BQq.
˚
Lemma 3.16. The second term of QE ˚,˚ for Buchsbaum pseudo-cell complex Q
is described as follows:
$
Hp pBQq, if p ď n ´ 2, q “ 0,
’
’
’
’
’
&Coker δn , if p “ n ´ 1, q “ 0,
2
Q
(3.11)
E p,q – Ker δn , if p “ n, q “ 0,
’
’
’Hn`q pQ, BQq, if p “ n, q ă 0,
’
’
%
0, otherwise.
Proof. The first page of the non-truncated spectral sequence has the form
Q 1
E p,q
n´2
0
C
0
n´1
1
...
..
.
0
´n
q
0
pS; H0 q . . . C pS; H0 q C pS; H0 q C
0
´1
n
n´1
...
0
0
..
.
..
.
0
0
´1
pS; H0 q “ Hn pQ, BQq
C ´1 pS; H´1 q “ Hn´1 pQ, BQq
..
.
C ´1 pS; H´n q “ H0 pQ, BQq
p
2
By the definition of Buchsbaum complex, QE p,q “ 0 if p ă n and q ‰ 0. Terms
of the second page with p ď n ´ 2 coincide with their non-truncated versions:
2
Q 2
E p,0 “ BQE p,0 – H n´1´p pSQ ; H0 q – Hp pBQq. For p “ n, q ă 0 the first differential
2
1
vanishes, thus QE n,q “ QE n,q – Hn`q pQ, BQq. The only two cases that require further
investigation are pp, qq “ pn ´ 1, 0q and pn, 0q. To describe these cases consider the
short exact sequence of sheaves
0 Ñ H0 Ñ H0 Ñ H0 {H0 Ñ 0.
(3.12)
The quotient sheaf H0 {H0 is concentrated in degree ´1 and its value on ∅ is
Hn pQ, BQq. Sequence (3.12) induces the long exact sequence in cohomology (middle
row):
δn
Hn pQ, BQq
/
O
Hn´1 pBQq
O
–
0
/
H ´1 pSQ ; H0 q
/
–
H ´1 pSQ ; H0 {H0 q
/
H 0 pSQ ; H0 q
–
/
H 0 pSQ ; H0 q
–
Q 2
E n,0
BQ
–
2
E n´1,0
2
/
/
Q 2
E n´1,0
2
Thus QE n,0 – Ker δn and QE n´1,0 – Coker δn .
In the situations like this, we call a spectral sequence G-shaped. The only
r
r
r
non-vanishing differentials in QE for r ě 2 are dr : QE n,1´r Ñ QE n´r,0 . They have
r
pairwise different domains and targets, thus QE p,q ñ Hp`q pQq folds in a long exact
sequence, which is isomorphic to the long exact sequence of the pair pQ, BQq:
(3.13)
...
/
Hi pQq
/
dn`1´i
Q n`1´i Q
/ QE n`1´i
E n,i´n
i´1,0
/
O
O
Hi´1 pQq
/
...
Hi´1 pQq
/
...
–
–
Q 2
E i´1,0
Q 1
E n,i´n
–
...
/
Hi pQq
/
Hi pQ, BQq
δi
/
Hi´1 pBQq
/
This gives a complete characterization of QE in terms of the homological long
exact sequence of the pair pQ, BQq.
1`
3.5. Artificial page QE ˚,˚ . In this subsection we formally introduce an addi˚
tional term in the spectral sequence to make description of QE ˚,˚ more convenient
0
and uniform. The goal is to carry away δn (which appears in (3.11)) from the
description of the page and treat it as one of higher differentials.
1`
Let XE ˚,˚ be the collection of k-modules defined by
Q 1`
E p,q
$
BQ 2
’
& E p,q , if p ď n ´ 1,
def
1
“ QE p,q , if p “ n,
’
%0, otherwise.
Let d1´
Q be the differential of degree p´1, 0q operating on
d1´
Q
ÀQ 1
E p,q by:
#
1
1
d1Q : QE p,q Ñ QE p´1,q , if p ď n ´ 1,
“
0, otherwise
1
1`
Q
It is easily seen that HpQE ˚,˚ ; d1´
Q q is isomorphic to E ˚,˚ . Now consider the differÀ
1`
Q
ential d1`
E p,q :
Q of degree p´1, 0q operating on
d1`
Q “
2
Then QE – HpQE
1`
#
0, if p ď n ´ 1;
Q 1
E n,q
d1Q
2
1
ÝÑ QE n´1,q ÝÑ QE n´1,q , if p “ n.
, d1`
Q q. These considerations are shown on the diagram:
Q 1`
d1´
Q
Q 1
E
<E
d1`
Q
"
d1Q
/
Q 2
E
d2Q
/
Q 3
E
d3Q
/
...
in which the dotted arrows represent passing to homology. To summarize:
1`
Claim 3.17. There is a spectral sequence whose first page is pQE , d1`
Q q and
Q r
subsequent terms coincide with E for r ě 2. Its nontrivial differentials for r ě 1
are the maps
r
r
drQ : QE n,1´r Ñ QE n´r,0
which coincide up to isomorphism with
δn`1´r : Hn`1´r pQ, BQq Ñ Hn´r pBQq.
r
Thus the spectral sequence QE ˚,˚ for r ě 1` up to isomorphism has the form
Q 1`
E p,q
H0 pBQq
...
Hn´2 pBQq
Hn´1 pBQq
d1`
Q “ δn
d2Q “ δn´1
Hn pQ, BQq
Hn´1 pQ, BQq
..
.
dn
Q “ δ1
H1 pQ, BQq
4. Torus spaces over Buchsbaum pseudo-cell complexes
4.1. Preliminaries on torus maps. Let N be a nonnegative integer. Consider
a compact torus T N “ pS 1 qN . The homology algebra H˚ pT N ; kq is the exterior algebra
Λ “ Λk rH1pT N qs. Let Λpqq denote the
`N˘graded component of Λ of degree q, Λ “
À
N
pqq
pqq
N
pqq
– Hq pT q, dim Λ “ q .
q“0 Λ , Λ
N
If T acts on a space Z, then H˚ pZq obtains the structure of Λ-module (i.e.
two-sided Λ-module with property a ¨ x “ p´1qdeg a deg x x ¨ a for a P Λ, x P H˚ pZq);
T N -equivariant maps f : Z1 Ñ Z2 induce module homomorphisms f˚ : H˚ pZ1 q Ñ
H˚ pZ2 q; and equivariant filtrations induce spectral sequences with Λ-module structures.
Construction 4.1. Let TN be the set of all 1-dimensional toric subgroups of T N .
Let M be a finite subset of TN , i.e. a collection of subgroups M “ tTs1 , is : Ts1 ãÑ T N u.
Consider the homomorphism
ź
ź
def
def
iM : T M Ñ T N , T M “
Ts1 , iM “
is .
M
M
Definition 4.2. We say that the collection M of 1-dimensional subgroups satisfies p˚k q-condition if the map piM q˚ : H1 pT M ; kq Ñ H1 pT N ; kq is injective and splits.
If iM itself is injective, then M satisfies p˚k q for k “ Z and all fields. Moreover,
p˚Z q is equivalent to injectivity of iM . Generally, p˚k q implies that Γ “ ker iM is a
finite subgroup of T M .
For a set M satisfying p˚k q consider the exact sequence
i
ρ
M
0 Ñ Γ Ñ T M ÝÑ
T N ÝÑ G Ñ 0,
where G “ T N {iM pT M q is isomorphic to a torus T N´|M | .
Lemma 4.3. Let IM be the ideal of Λ generated by iM pH1 pT M qq. Then there
exists a unique map β which encloses the diagram
ρ˚
H˚ pT N q
H˚ pGq
/
O
–
β
q
Λ
Λ{IM
/
and β is an isomorphism.
Proof. We have ρ˚ : H˚ pT N q Ñ H˚ pGq – Λ˚ rH1 pGqs. Map ρ is T N -equivariant,
thus ρ˚ is a map of Λ-modules. Since ρ˚ ppiM q˚ H1 pT M qq “ 0, we have ρ˚ pIM q “ 0,
thus ρ˚ factors through the quotient module, ρ˚ “ β ˝ q. Since ρ˚ is surjective so is
β. By p˚k q-condition we have a split exact sequence
0 Ñ H1 pT M q Ñ H1 pT N q Ñ H1 pGq Ñ 0,
So far there is a section α : H1 pGq Ñ H1 pT N q of the map ρ˚ in degree 1. This
section extends to α
r : H˚ pGq “ Λ˚ rH1 pGqs Ñ Λ, which is a section of ρ˚ . Thus β is
injective.
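A minimal illustration of Lemma 4.3 (ours, not from the paper), for N = 2 and M consisting of the first coordinate circle:

\[
\Lambda=\Lambda_{\Bbbk}[t_1,t_2],\qquad I_M=(t_1)=\langle t_1,\ t_1t_2\rangle,\qquad
\Lambda/I_M\cong\langle 1,\ t_2\rangle\cong H_*(G),\qquad G=T^2/(T^1\times 1)\cong S^1,
\]

so the quotient by the ideal generated by i_M(H_1(T^M)) indeed recovers the homology of the quotient torus G.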
4.2. Principal torus bundles. Let ρ : Y Ñ Q be a principal T N -bundle over
a simple pseudo-cell complex Q.
Lemma 4.4. If Q is Cohen–Macaulay, then Y is trivial. More precisely, there
exists an isomorphism ξ:
ξ
Y❃
❃
❃❃ ρ
❃❃
❃❃
Q
/
Q ˆ TN
①①
①①
①
①①
{①
①
The induced isomorphism ξ˚ identifies H˚ pY, BY q with H˚ pQ, BQq b Λ and H˚ pY q
with Λ.
Proof. Principal T N -bundles are classified by their Euler classes, sitting in
H pQ; ZN q “ 0 (recall that Q is acyclic over Z). The second statement follows
from the Künneth isomorphism.
2
For a general principal T N -bundle ρ : Y Ñ Q consider the filtration
∅ “ Y´1 Ă Y0 Ă Y1 Ă . . . Ă Yn´1 Ă Yn “ Y,
(4.1)
where Yi “ ρ´1 pQi q. For each I P SQ consider the subsets YI “ ρ´1 pFI q and
BYI “ ρ´1 pBFI q. In particular, Y∅ “ Y , BY “ Yn´1.
˚
Let Y E ˚,˚ be the spectral sequence associated with filtration (4.1), i.e.:
Y
1
E p,q – Hp`q pYp , Yp´1q ñ Hp`q pY q,
r
r
drY : Y E ˚,˚ Ñ Y E ˚´r,˚`r´1
À
and Hp`q pYp , Yp´1q – |I|“n´p Hp`q pYI , BYI q.
Similar to construction 3.6 we define the sheaf HqY on SQ by setting
HqY pIq “ Hq`n´|I|pYI , BYI q.
(4.2)
The restriction maps coincide with the differential d1Y up to incidence signs. Note
def À
Y
that HY “
q Hq has a natural Λ-module structure induced by the torus action.
˚
The cochain complex of HY coincides with the first page of Y E ˚,˚ up to change of
indices. As before, consider also the truncated spectral sequence:
BY
1
E p,q – Hp`q pYp , Yp´1q, p ă n ñ Hp`q pBY q,
and the truncated sheaf: HYq p∅q “ 0, HYq pIq “ HY pIq for I ‰ ∅.
Lemma 4.5. If Q is Buchsbaum, then HYq – H0 b Lpqq , where Lpqq is a locally
constant sheaf on SQ valued by Λpqq .
Proof. All proper faces of Q are Cohen–Macaulay, thus lemma 4.4 applies. We
have Hq pYI , BYI q – H0 pFI , BFI q b Λpqq . For any I ă J there are two trivializations
of YJ : the restriction of ξI , and ξJ itself:
FJ
YJ
q
q
ξJ qqq
q
ξI |YJ
q
q
x qq
q
/ FJ ˆ T N
ˆ TN
/
YI
/
ξI
FI ˆ T N
Transition maps ξI |YJ ˝ pξJ q´1 induce the isomorphisms in homology Λpqq “ Hq pFJ ˆ
T N q Ñ Hq pFJ ˆ T N q “ Λpqq which determine the restriction maps Lpqq pI Ă Jq. The
locally constant sheaf Lpqq is thus defined, and the statement follows.
À
À
Denote L “ q Lpqq — the graded sheaf on SQ valued by Λ “ q Λpqq
Remark 4.6. Our main example is the trivial bundle: Y “ Q ˆ T N . In this
˚
˚
case the whole spectral sequence Y E ˚,˚ is isomorphic to QE ˚,˚ b Λ. For the structure
sheaves we also have H˚Y “ H˚ b Λ˚ . In particular the sheaf L constructed in lemma
˚
4.5 is globally trivial. By results of subsection 3.4, all terms and differentials of Y E ˚,˚
are described explicitly. Nevertheless, several results of this paper remain valid in a
general setting, thus are stated in full generality where it is possible.
Remark 4.7. This construction is very similar to the construction of the sheaf
of local fibers which appears in the Leray–Serre spectral sequence. But contrary to
this general situation, here we construct not just a sheaf in a common topological
sense, but a cellular sheaf supported on the given simplicial poset SQ . Thus we
prefer to provide all the details, even if they seem obvious to the specialists.
4.3. Torus spaces over simple pseudo-cell complexes. Recall, that TN denotes the set of all 1-dimensional toric subgroups of T N . Let Q be a simple pseudo-cell
complex of dimension n, SQ — its underlying simplicial poset and ρ : Y Ñ Q — a
principal T N -bundle. There exists a general definition of a characteristic pair in the
case of manifolds with locally standard actions, see [17, Def.4.2]. We do not review
this definition here due to its complexity, but prefer to work in Buchsbaum setting,
in which case many things simplify. If Q is Buchsbaum, then its proper faces FI are
Cohen–Macaulay, and according to lemma 4.4, there exist trivializations ξI which
identify orbits over x P FI with T N . If x belongs to several faces, then different
trivializations give rise to the transition homeomorphisms trIăJ : T N Ñ T N , and at
the global level some nontrivial twisting may occur. To give the definition of characteristic map, we need to distinguish between these different trivializations. Denote
by T N pIq the torus sitting over the face FI (via trivialization of lemma 4.4) and let
TN pIq be the set of 1-dimensional subtori of T N pIq. The map trIăJ sends elements
of TN pIq to TN pJq in an obvious way. One can think of TN p´q as a locally constant
sheaf of sets on SQ zt∅u.
Definition 4.8. A characteristic map λ is a collection of elements λpiq P TN piq
defined for each vertex i P VertpSQ q. This collection should satisfy the following
condition: for any simplex I P SQ , I ‰ ∅ with vertices i1 , . . . , ik the set
(4.3)
ttri1 ăI λpi1 q, . . . , trik ăI λpik qu
satisfies p˚k q condition in T N pIq.
Clearly, a characteristic map exists only if N ě n. Let T λpIq denote the subtorus
of T N pIq generated by 1-dimensional subgroups (4.3).
Construction 4.9 (Quotient construction). Consider the identification space:
(4.4)
X “ Y {„,
where y1 „ y2 if ρpy1 q “ ρpy2 q P FI˝ for some ∅ ‰ I P SQ , and y1 , y2 lie in the same
T λpIq -orbit.
There is a natural action of T N on X coming from Y . The map µ : X Ñ Q
is a projection to the orbit space X{T N – Q. The orbit µ´1 pbq over the point
b P FI˝ Ă BQ is identified (via the trivializing homeomorphism) with T N pIq{T λpIq .
This orbit has dimension N ´ dim T λpIq “ N ´ |I| “ dim FI ` pN ´ nq. The preimages
of points b P QzBQ are the full-dimensional orbits.
Filtration (4.1) descends to the filtration on X:
(4.5)
∅ “ X´1 Ă X0 Ă X1 Ă . . . Ă Xn´1 Ă Xn “ X,
where Xi “ Yi { „ for i ď n. In other words, Xi is the union of pď i ` N ´ nqdimensional orbits of the T N -action. Thus dim Xi “ 2i ` N ´ n for i ď n.
˚
Let XE ˚,˚ be the spectral sequence associated with filtration (4.5):
X
1
E p,q “ Hp`q pXp , Xp´1 q ñ Hp`q pXq,
r
r
dX : XE ˚,˚ Ñ XE ˚´r,˚`r´1 .
r
The quotient map f : Y Ñ X induces a morphism of spectral sequences f˚r : Y E ˚,˚ Ñ
X r
E ˚,˚ , which is a Λ-module homomorphism for each r ě 1.
1
4.4. Structure of XE ˚,˚ . For each I P SQ consider the subsets XI “ YI {„ and
BXI “ BYI {„. As before, define the family of sheaves associated with filtration 4.5:
HqX pIq “ Hq`n´|I|pXI , BXI q,
with the restriction maps equal to d1Y up to incidence signs. These sheaves can be
1
considered as a single sheaf HX graded by q. We have pXE ˚,q , dX q – pC n´1´˚ pSQ , HqX q, dq.
There are natural morphisms of sheaves f˚ : HqY Ñ HqX induced by the quotient
map f : Y Ñ X, and the corresponding map of cochain complexes coincides with
À
1
1
f˚1 : Y E ˚,q Ñ XE ˚,q . Also consider the truncated versions: HX “ q HX
q for which
X
H p∅q “ 0.
Remark 4.10. The map f˚1 : H˚ pY, BY q Ñ H˚ pX, BXq is an isomorphism by
excision since X{BX – Y {BY .
Now we describe the truncated part of the sheaf HY in algebraic terms. Let
I P SQ be a simplex and i ď I its vertex. Consider the element of exterior algebra
ωi P LpIqp1q – Λp1q which is the image of the fundamental cycle of λpiq – T 1 under
the transition map triďI :
(4.6)
ωi “ ptriďI q˚ rλpiqs P LpIqp1q
Consider the subsheaf I of L whose value on a simplex I with vertices ti1 , . . . , ik u ‰
∅ is:
(4.7)
IpIq “ pωi1 , . . . , ωik q Ă LpIq,
— the ideal of the exterior algebra LpIq – Λ generated by linear forms. Also set
Ip∅q “ 0. It is easily checked that LpI ă JqIpIq Ă IpJq, so I is a well-defined
subsheaf of L.
Lemma 4.11. The map of sheaves f˚ : HYq Ñ HX
q is isomorphic to the quotient
pqq
pqq
map of sheaves H0 b L Ñ H0 b pL{Iq .
Proof. By lemma 4.4, pYI , BYI q Ñ pFI , BFI q is equivalent to the trivial T N bundle ξI : pYI , BYI q – pFI , BFI q ˆ T N pIq. By construction of X, we have identifications
“
‰
ξI1 : pXI , BXI q – pFI , BFI q ˆ T N pIq {„ .
By excision, the group H˚ prFI ˆ T N pIqs{„, rBFI ˆ T N pIqs{„q coincides with
H˚ pFI ˆ T N pIq{T λpIq , BFI ˆ T N pIq{T λpIq q “ H˚ pFI , BFI q b H˚ pT N pIq{T λpIq q.
The rest follows from lemma 4.3.
There is a short exact sequence of graded sheaves
0 ÝÑ I ÝÑ L ÝÑ L{I ÝÑ 0
Tensoring it with H0 produces the short exact sequence
0 ÝÑ H0 b I pqq ÝÑ HYq ÝÑ HX
q ÝÑ 0
according to lemma 4.11. The sheaf H0 b I can also be considered as a subsheaf of
non-truncated sheaf HY .
Lemma 4.12. There is a short exact sequence of graded sheaves
0 Ñ H0 b I Ñ HY Ñ HX Ñ 0.
Proof. Follows from the diagram
0
0
0
0
/
/
H0 b I
/
H0 b I
/
Y
//
Y
//
H _
H
Y
H {H
0
Y
–
/
HX _
HX
/
0
/
0
H {HX
X
0
The lower sheaves are concentrated in ∅ P SQ and the graded isomorphism between
them is due to remark 4.10.
4.5. Extra pages of Y E and XE. To simplify further discussion we briefly
sketch the formalism of additional pages of spectral sequences Y E and XE, which
extends considerations of subsection 3.5. Consider the following bigraded module:
#
BY 2
E , if p ă n;
1`
Y
E p,q “ Y 1p,q
E n,q , if p “ n.
1
1`
1`
Y
Y
and define the differentials d1´
by
Y on E and dY on E
#
#
0, if p ă n;
d1Y , if p ă n;
1`
d1´
“
d
“
d1Y Y 1
Y
Y
2
Y 1
0, if p “ n.
E n´1,q ÝÑ Y E n´1,q if p “ n.
E n,q ÝÑ
It is easily checked that Y E^{1+} ≅ H(Y E^1, d^{1−}_Y) and Y E^2 ≅ H(Y E^{1+}, d^{1+}_Y). The page X E^{1+} and the differentials d^{1−}_X, d^{1+}_X are defined similarly. The map f^1_* : Y E^1 → X E^1 induces the map between the extra pages: f^{1+}_* : Y E^{1+} → X E^{1+}.
1`
induces the map between the extra pages: f˚1` : Y E Ñ XE .
5. Main results
r
5.1. Structure of XE ˚,˚ . The short exact sequence of lemma 4.12 generates
the long exact sequence in sheaf cohomology:
(5.1)
f˚2
Ñ H i´1 pSQ ; H0 bI pqq q Ñ H i´1 pSQ ; HqY q ÝÑ H i´1 pSQ ; HqX q ÝÑ H i pSQ ; H0 bI pqq q Ñ
The following lemma is the cornerstone of the whole work.
Lemma 5.1 (Key Lemma). H i pSQ ; H0 b I pqq q “ 0 if i ď n ´ 1 ´ q.
The proof follows from a more general sheaf-theoretical fact and is postponed
to section 6. In the following we simply write S instead of SQ . By construction,
2
Y 2
E p,q – H n´1´p pS; HqY q and XE p,q – H n´1´p pS; HqX q. The Key lemma 5.1 and exact
sequence (5.1) imply
Lemma 5.2.
2
2
f˚2 : Y E p,q Ñ XE p,q is an isomorphism if p ą q,
2
2
f˚2 : Y E p,q Ñ XE p,q is injective if p “ q.
2
2
In case N “ n this observation immediately describes XE ˚,˚ in terms of Y E ˚,˚ .
Under the notation
r
r
pdY qrp,q : Y E p,q Ñ Y E p´r,q`r´1,
r
r
pdX qrp,q : XE p,q Ñ XE p´r,q`r´1
there holds
Theorem 1. Let Q be Buchsbaum pseudo-cell complex of dimension n, Y be a
principal T n -bundle over Q, f : Y Ñ X “ Y {„ — the quotient construction, and
r
r
f˚r : Y E ˚,˚ Ñ XE ˚,˚ — the induced map of homological spectral sequences associated
with filtrations 4.1, 4.5. Then
#
an isomorphism if q ă p or q “ p “ n,
2 Y 2
X 2
f˚ : E p,q Ñ E p,q is
injective if q “ p ă n,
2
˚
and XE p,q “ 0 if q ą p. Higher differentials of XE ˚,˚ thus have the form
#
f˚r ˝ pdY qrp,q ˝ pf˚r q´1 , if p ´ r ě q ` r ´ 1,
pdX qrp,q “
0 otherwise,
for r ě 2.
If Y is a trivial T n -bundle, then the structure of Y E^*_{˚,˚} ≅ Q E^*_{˚,˚} ⊗ Λ is described completely by subsection 3.4. In this case almost all the terms of X E^*_{˚,˚} are described explicitly.
Theorem 2. In the notation of Theorem 1 suppose $Y = Q \times T^n$. Let $\Lambda^{(q)} = H_q(T^n)$ and let $\delta_i\colon H_i(Q, \partial Q) \to H_{i-1}(\partial Q)$ be the connecting homomorphisms. Then
$$(5.2)\quad {}^X\!E^{2}_{p,q} \cong \begin{cases} H_p(\partial Q) \otimes \Lambda^{(q)}, & \text{if } q < p \le n-2;\\ \operatorname{Coker} \delta_n \otimes \Lambda^{(q)}, & \text{if } q < p = n-1;\\ \operatorname{Ker} \delta_n \otimes \Lambda^{(q)} \oplus \Bigl(\bigoplus\limits_{\substack{q_1+q_2 = n+q\\ q_1 < n}} H_{q_1}(Q, \partial Q) \otimes \Lambda^{(q_2)}\Bigr), & \text{if } q < p = n;\\ H_n(Q, \partial Q) \otimes \Lambda^{(n)}, & \text{if } q = p = n;\\ 0, & \text{if } q > p.\end{cases}$$
The maps $f^{2}_*\colon H_q(\partial Q) \otimes \Lambda^{(q)} \hookrightarrow {}^X\!E^{2}_{q,q}$ are injective for $q < n-1$. Higher differentials for $r \ge 2$ are the following:
$$d^{r}_X \cong \begin{cases} \delta_{q_1} \otimes \mathrm{id}_\Lambda\colon H_{q_1}(Q, \partial Q) \otimes \Lambda^{(q_2)} \to H_{q_1-1}(\partial Q) \otimes \Lambda^{(q_2)}\\ \qquad\text{(from } {}^X\!E^{*}_{n,\,q_1+q_2-n} \text{ to } {}^X\!E^{*}_{q_1-1,\,q_2}\text{)}, & \text{if } r = n - q_1 + 1,\ q_1 - 1 > q_2;\\ f^{2}_* \circ (\delta_{q_1} \otimes \mathrm{id}_\Lambda)\colon H_{q_1}(Q, \partial Q) \otimes \Lambda^{(q_2)} \to H_{q_1-1}(\partial Q) \otimes \Lambda^{(q_2)} \hookrightarrow {}^X\!E^{2}_{q_1-1,\,q_1-1}, & \text{if } r = n - q_1 + 1,\ q_1 - 1 = q_2;\\ 0, & \text{otherwise}.\end{cases}$$
Using the formalism of extra pages introduced in subsection 4.5, Theorem 2 can be restated in a more convenient and concise form.

Statement 5.3. There exists a spectral sequence whose first term is
$$(5.3)\quad {}^X\!E^{1+}_{p,q} \cong \begin{cases} H_p(\partial Q) \otimes \Lambda^{(q)}, & \text{if } q < p < n;\\ \bigoplus\limits_{q_1+q_2 = q+n} H_{q_1}(Q, \partial Q) \otimes \Lambda^{(q_2)}, & \text{if } p = n;\\ 0, & \text{if } q > p;\end{cases}$$
and whose subsequent terms coincide with ${}^X\!E^{r}_{*,*}$ for $r \ge 2$. There exist injective maps $f^{1+}_*\colon H_q(\partial Q) \otimes \Lambda^{(q)} \hookrightarrow {}^X\!E^{2}_{q,q}$ for $q < n$. Differentials for $r \ge 1$ have the form
$$d^{r}_X \cong \begin{cases} \delta_{q_1} \otimes \mathrm{id}_\Lambda\colon H_{q_1}(Q, \partial Q) \otimes \Lambda^{(q_2)} \to H_{q_1-1}(\partial Q) \otimes \Lambda^{(q_2)}\\ \qquad\text{(from } {}^X\!E^{*}_{n,\,q_1+q_2-n} \text{ to } {}^X\!E^{*}_{q_1-1,\,q_2}\text{)}, & \text{if } r = n - q_1 + 1,\ q_1 - 1 > q_2;\\ f^{1+}_* \circ (\delta_{q_1} \otimes \mathrm{id}_\Lambda)\colon H_{q_1}(Q, \partial Q) \otimes \Lambda^{(q_2)} \to H_{q_1-1}(\partial Q) \otimes \Lambda^{(q_2)} \hookrightarrow {}^X\!E^{2}_{q_1-1,\,q_1-1}, & \text{if } r = n - q_1 + 1,\ q_1 - 1 = q_2;\\ 0, & \text{otherwise}.\end{cases}$$
Note that the terms ${}^X\!E^{*}_{q,q}$ for $q < n$ are not mentioned in the lists (5.2), (5.3). Let us call $\bigoplus_{q<n} {}^X\!E^{1+}_{q,q}$ the border of ${}^X\!E^{*}_{*,*}$. This name is due to the fact that all entries above the border vanish: ${}^X\!E^{*}_{p,q} = 0$ for $q > p$.

Denote $\dim_{\Bbbk} \widetilde{H}_p(S) = \dim_{\Bbbk} \widetilde{H}_p(\partial Q)$ by $\widetilde{b}_p(S)$ for $p < n$. The ranks of the border components are described as follows:

Theorem 3. In the notation of Theorem 2 and statement 5.3
$$\dim {}^X\!E^{1+}_{q,q} = h_q(S) + \binom{n}{q} \sum_{p=0}^{q} (-1)^{p+q}\, \widetilde{b}_p(S)$$
for $q \le n-1$, where $h_q(S)$ are the $h$-numbers of the simplicial poset $S$.
Theorem 4.
(1) Let $Q$ be a Buchsbaum manifold over $\Bbbk$. Then $\dim {}^X\!E^{1+}_{q,q} = h'_{n-q}(S)$ for $q \le n-2$ and $\dim {}^X\!E^{1+}_{n-1,n-1} = h'_1(S) + n$.
(2) Let $Q$ be a Buchsbaum manifold such that $H_n(Q, \partial Q) \cong \Bbbk$ and $\delta_n\colon H_n(Q, \partial Q) \to H_{n-1}(\partial Q)$ is injective. Then $\dim {}^X\!E^{2}_{q,q} = h'_{n-q}(S)$ for $0 \le q \le n$.

The definitions of $h$-, $h'$- and $h''$-vectors and the proofs of Theorems 3, 4 and 5 are gathered in section 7. Note that it is sufficient to prove Theorem 3 and the first part of Theorem 4 in the case $Q = P(S)$. Indeed, by definition, ${}^X\!E^{1+}_{q,q} = {}^{\partial X}\!E^{2}_{q,q}$ for $q \le n-1$, and there exists a map $(Q \times T^n)/\!\sim\ \to (P(S_Q) \times T^n)/\!\sim$ which covers the map $\varphi$ of lemma 3.14 and induces the isomorphism of the corresponding truncated spectral sequences.
In the cone case the border components can be described explicitly up to the $\infty$-term.

Theorem 5. Let $S$ be a Buchsbaum poset, $Q = P(S)$, $Y = Q \times T^n$, $X = Y/\!\sim$, and let ${}^X\!E^{r}_{p,q} \Rightarrow H_{p+q}(X)$ be the homological spectral sequence associated with filtration (4.5). Then
$$\dim {}^X\!E^{\infty}_{q,q} = h''_q(S)$$
for $0 \le q \le n$.
Corollary 5.4. If $S$ is Buchsbaum, then $h''_i(S) \ge 0$.

Proof. For any $S$ there exists a characteristic map over $\mathbb{Q}$. Thus there exists a space $X = (P(S) \times T^n)/\!\sim$ and Theorem 5 applies.
5.2. Homology of X. Theorem 2 implies the additional grading on H˚ pSq —
the one given by the degrees of exterior forms. It is convenient to work with this
double grading.
Construction 5.5. Suppose Y “ QˆT n . For j P r0, ns consider the G-shaped
spectral sequence
Q r
pjq
Y r
j E ˚,˚ “ E ˚,˚ b Λ .
Àn Y r
r
def
pjq
Y ˚
Clearly, Y E ˚,˚ “
j“0 j E ˚,˚ and j E p,q ñ Hp`q´j,j pY q “ Hp`q´j pQq b Λ . In
particular,
¯ à´
¯
à ´Q 1`
pjq
Q 1`
pjq
Y 1`
“
‘
E
b
Λ
E
b
Λ
E
p,0
n,q
j ˚,˚
păn
q
˚
Consider the corresponding G-shaped spectral subsequences in XE ˚,˚ . Start with
the k-modules:
¯
à X 1` à 1` ´Q 1`
X 1`
pqq
“
E
‘
f
.
E
E
b
Λ
p,j
˚
j
n,q
˚,˚
păn
q
˚
˚
By statement 5.3, all the differentials of XE ˚,˚ preserve X
E , thus the spectral
Àn X j r ˚,˚
X r
X r
subsequences j E ˚,˚ are well defined, and E ˚,˚ “ j“0 j E ˚,˚ . Let Hi,j pXq be the
r
family of subgroups of H˚ pXq such that X
j E p,q ñ Hp`q´j,j pXq. Then
à
Hk pXq “
Hi,j pXq
i`j“k
and the map f˚ : H˚ pY q Ñ H˚ pXq sends Hi,j pY q – Hi pQqbΛpjq to Hi,j pXq. The map
r
r
r
r
f˚r : Y E Ñ XE sends Yj E to X
j E for each j P t0, . . . , nu and we have commutative
squares:
Y r
j E p,q
+3
Hp`q´j,j pY q
f˚r
f˚
X r
j E p,q
+3
Hp`q´j,j pXq
Proposition 5.6.
(1) If $i > j$, then $f_*\colon H_{i,j}(Y) \to H_{i,j}(X)$ is an isomorphism. In particular, $H_{i,j}(X) \cong H_i(Q) \otimes \Lambda^{(j)}$.
(2) If $i < j$, then there exists an isomorphism $H_{i,j}(X) \cong H_i(Q, \partial Q) \otimes \Lambda^{(j)}$.
(3) In case $i = j < n$, the module $H_{i,i}(X)$ fits in the exact sequence
$$0 \to {}^X\!E^{\infty}_{i,i} \to H_{i,i}(X) \to H_i(Q, \partial Q) \otimes \Lambda^{(i)} \to 0,$$
or, equivalently,
$$0 \to \operatorname{Im} \delta_{i+1} \otimes \Lambda^{(i)} \to {}^X\!E^{1+}_{i,i} \to H_{i,i}(X) \to H_i(Q, \partial Q) \otimes \Lambda^{(i)} \to 0.$$
(4) If $i = j = n$, then $H_{n,n}(X) = {}^X\!E^{\infty}_{n,n} = {}^X\!E^{1}_{n,n}$.
Proof. According to statement 5.3
#
the isomorphism if i ą j or i “ j “ n;
1`
1`
(5.4)
f˚1` : Yj E i,q Ñ X
j E i,q is
injective if i “ j.
For each j both spectral sequences Yj E and X
j E are G-shaped, thus fold in the long
exact sequences:
(5.5)
...
/
n´i`1
Y 1`
j E i,j
/
Hi,j pY q
f˚
/
X 1`
j E i,j
/
Hi,j pXq
dY
Y 1`
E
j n,i´n`j
/
Y 1`
j E i´1,j
– f˚
f˚
...
/
/
...
/
f˚
n´i`1
dX
X 1`
j E n,i´n`j
/
X 1`
j E i´1,j
/
...
1`
Applying five lemma in the case i ą j proves (1). For i ă j, the groups X
j E i,j ,
X 1`
j E i´1,j
1`
1`
Y
vanish by dimensional reasons thus Hi,j pXq – X
j E n,i´n`j – j E n,i´n`j –
Hi pQ, BQq b Λpjq . Case i “ j also follows from (5.5) by a simple diagram chase.
In the manifold case proposition 5.6 reveals a bigraded duality. If Q is a nice
manifold with corners, Y “ Q ˆ T n and λ is a characteristic map over Z, then X is
a manifold with locally standard torus action. In this case Poincare duality respects
the double grading.
Proposition 5.7. If X “ pQ ˆ T n q{„ is a manifold with locally standard torus
action and k is a field, then Hi,j pXq – Hn´i,n´j pXq.
Proof. If i ă j, then Hi,j pXq – Hi pQ, BQq b Λpjq – Hn´i pQq b Λpn´jq –
Hn´i,n´j pXq, since Hi pQ, BQq – Hn´i pQq by the Poincare–Lefschetz duality and
Hj pT n q – Hn´j pT n q by the Poincare duality for the torus. The remaining isomorphism Hi,i pXq – Hn´i,n´i pXq now follows from the ordinary Poincare duality for
X.
Remark 5.8. If the space X “ pQ ˆ T n q{„ is constructed from a manifold with
corners pQ, BQq using characteristic map over Q (i.e. X is a toric orbifold), then
proposition 5.7 still holds over Q.
6. Duality between certain cellular sheaves and cosheaves
6.1. Proof of the Key lemma. In this section we prove lemma 5.1. First
recall the setting.
‚ Q : a Buchsbaum pseudo-cell complex with the underlying simplicial poset
S “ SQ (this poset is Buchsbaum itself by corollary 3.15).
‚ H0 : the structure sheaf on S; H0 pJq “ Hdim FJ pFJ , BFJ q.
‚ L : a locally constant graded sheaf on S valued by exterior algebra Λ “
H˚ pT N q. This sheaf is associated in a natural way to a principal T N -bundle
over Q, LpJq “ H˚ pT N pJqq for J ‰ ∅ and Lp∅q “ 0. By inverting all
p
restriction maps we obtain the cosheaf L.
‚ λ : a characteristic map over S. It determines T 1 -subgroup triďJ pλpiqq Ă
T N pJq for each simplex J with vertex i. The homology class of this subgroup
is denoted ωi P LpIq (see (4.6)). Note that the restriction isomorphism
LpJ1 ă J2 q sends ωi P LpJ1 q to ωi P LpJ2 q. Thus we simply write ωi for
all such elements since the ambient exterior algebra will be clear from the
context.
‚ I : the sheaf of ideals, associated to λ. The value of I on a simplex J ‰ ∅
with vertices ti1 , . . . , ik u is the ideal IpJq “ pωi1 , . . . , ωik q Ă LpJq. Clearly,
I is a graded subsheaf of L.
We now introduce another type of ideals.
Construction 6.1. Let J “ ti1 , . . . , ik u be a nonempty
subset of vertices of
Ź
simplex I P S. Consider the element πJ P LpIq, πJ “ iPJ ωi . By the definition of
characteristic map, the elements ωi are linearly independent, thus πJ is a non-zero
|J|-form. Let ΠJ Ă LpIq be the principal ideal generated by πJ . The restriction
maps LpI ă I 1 q identify ΠJ Ă LpIq with ΠJ Ă LpI 1 q.
In particular, when J is the whole set of vertices of a simplex I ‰ ∅ we define
def
p
p
p ą I 1 q “ LpI 1 ă Iq´1
ΠpIq “ ΠI Ă LpIq.
If I 1 ă I, then the corestriction map LpI
p is a well-defined
p
p 1 q, since LpI
p ą I 1 qπI is divisible by πI 1 . Thus Π
injects ΠpIq
into ΠpI
p Formally set Πp∅q
p
graded subcosheaf of L.
“ 0.
Theorem 6. For a Buchsbaum pseudo-cell complex $Q$ and $S = S_Q$ there exists an isomorphism $H^{k}(S; \mathcal{H}_0 \otimes \mathcal{I}) \cong H_{n-1-k}(S; \widehat{\Pi})$ which respects the gradings of $\mathcal{I}$ and $\widehat{\Pi}$.
Before giving a proof let us deduce the Key lemma. We need to show that $H^{i}(S; \mathcal{H}_0 \otimes \mathcal{I}^{(q)}) = 0$ for $i \le n - 1 - q$. According to Theorem 6 this is equivalent to

Lemma 6.2. $H_i(S; \widehat{\Pi}^{(q)}) = 0$ for $i \ge q$.

Proof. The ideal $\widehat{\Pi}(I) = \Pi_I$ is generated by the element $\pi_I$ of degree $|I| = \dim I + 1$. Thus $\Pi_I^{(q)} = 0$ for $q \le \dim I$. Hence the corresponding part of the chain complex vanishes.
Proof of Theorem 6. The idea of proof is the following. First we construct
a resolution of sheaf H0 b I whose terms are “almost acyclic”. By passing to
cochain complexes this resolution generates a bicomplex C‚‚ . By considering two
standard spectral sequences for this bicomplex we prove that both H k pS; H0 b Iq
p are isomorphic to the cohomology of the totalization C ‚ .
and Hn´1´k pS; Πq
Tot
t u ΠI
For each ∅ ‰ I P S consider the sheaf RI “ H0 b I
(see examples 2.8 and
2.10), i.e.:
#
H0 pJq b ΠI , if I ď J;
RI pJq “
0 otherwise.
À pqq
The sheaf RI is graded by degrees of exterior forms: RI “ q RI . Since I ą I 1
implies ΠI Ă ΠI 1 , and i P VertpSq, i ď J implies Πi Ă IpJq, there exist natural
injective maps of sheaves:
θIąI 1 : RI ãÑ RI 1 ,
and
ηi : Ri ãÑ H0 b I.
For each k ě 0 consider the sheaf
à
R´k “
RI ,
dim I“k
These sheaves can be arranged in the sequence
d
d
η
H
H
. . . ÝÑ R´2 ÝÑ
R´1 ÝÑ
R0 ÝÑ H0 b I ÝÑ 0,
À
1
where dH “
Ią1 I 1 rI : I sθIąI 1 and η “
iPVertpSq ηi . By the standard argument
1
involving incidence numbers rI : I s one shows that (6.1) is a differential complex of
sheaves. Moreover,
(6.1)
R‚ :
À
Lemma 6.3. The sequence R‚ is exact.
Proof. We should prove that the value of R‚ at each J P S is exact. Since
RI pJq ‰ 0 only if I ď J the complex R‚ pJq has the form
à
à
(6.2)
. . . ÝÑ
ΠI ÝÑ
ΠI ÝÑ IpJq ÝÑ 0,
IďJ,|I|“2
IďJ,|I|“1
tensored with $\mathcal{H}_0(J)$. Without loss of generality we forget about $\mathcal{H}_0(J)$. Maps in
(6.2) are given by inclusions of sub-ideals (rectified by incidence signs). This looks
very similar to the Taylor resolution of monomial ideal in commutative polynomial
ring, but our situation is a bit different, since ΠI are not free modules over Λ.
Anyway, the proof is similar to commutative case: exactness of (6.2) follows from
inclusion-exclusion principle. To make things precise (and also to tackle the case
k “ Z) we proceed as follows.
By p˚k q-condition, the subspace xωj | j P Jy is a direct summand in Lp1q pJq – kn .
Let tν1 , . . . , νn u be such a basis of Lp1q pJq, that its first |J| vectors are identified
with ωj , j P J. We simply write J for t1, . . . , |J|u Ď rns
Àby abuse of notation. The
module Λ splits in the multidegree components: Λ “ AĎrns ΛA , where ΛA is a 1Ź
dimensional k-module generated by iPA νi . All modules and maps in (6.2) respect
this splitting. Thus (6.2) can be written as
à à
à à
à
. . . ÝÑ
ΛA ÝÑ
ΛA ÝÑ
ΛA ÝÑ 0,
IĎJ,|I|“2 AĚI
à
A,AXJ‰∅
˜
IĎJ,|I|“1 AĚI
à
. . . ÝÑ
ΛA ÝÑ
IĎAXJ,|I|“2
AXJ‰∅
à
ΛA ÝÑ ΛA ÝÑ 0
IĎAXJ,|I|“1
¸
r ˚ p∆AXJ ; ΛA q –
For each A the cohomology of the complex in brackets coincides with H
r ˚ p∆AXJ ; kq, the reduced simplicial homology of a simplex on the set A X J ‰ ∅.
H
Thus homology vanishes.
By passing to cochains and forgetting the last term we get a complex of complexes
(6.3)
dH
dH
C‚‚ “ CpS, R‚ q :
. . . ÝÑ C ‚ pS; R´2 q ÝÑ
C ‚ pS; R´1 q ÝÑ
C ‚ pS; R0 q ÝÑ 0,
whose horizontal cohomology vanishes except for the upmost right position. Let
pqq
dV be the “vertical” cohomology differential operating in each C ‚ pS; Rk q. Then
‚
‚
dH dV “ dV dH . Thus C‚ can be considered as a bicomplex pCTot , Dq:
à i
à l
‚
i
Ck , D “ dH ` p´1qk dV .
CTot
“
CTot , CTot
“
i
k`l“i
‚
There are two standard spectral sequences converging to H ˚ pCTot
, Dq [11, Ch.2.4].
The first one, horizontal:
H
˚,˚
Er ,
k,l
k`1´r,l`r
H
H
dH
r : Er Ñ Er
computes horizontal cohomology first, then vertical cohomology. The second, vertical,
k,l
k`r,l`1´r
V ˚,˚
Er ,
dVr : V E r Ñ VE r
computes vertical cohomology first, then horizontal.
‚
Lemma 6.4. H k pCTot
, Dq – H k pS; H0 b Iq.
Proof. Consider the horizontal spectral sequence:
#
C l pS; H0 b Iq, k “ 0;
k,l
H
E 1 “ H k pC l pS, Rk q, dH q “
0, o.w.
#
H l pS; H0 b Iq, if k “ 0;
H k,l
E2 “
0, o.w.
˚,˚
Spectral sequence HE ˚ collapses at the second term and the statement follows.
‚
p
Lemma 6.5. H k pCTot
, Dq – Hn´1´k pS; Πq.
Proof. Consider the vertical spectral sequence. It starts with
à
V k,l
E 1 – H l pS; Rk q “
H l pS; RI q.
dim I“´k
Similar to example 2.9 we get
ΠI
H l pS; RI q “ H l pS; H0 b t I u q “ H l´|I| plkS I; H0 |lk I b ΠI q
The restriction of H0 to lkS I Ă S coincides with the structure sheaf of FI and by
(3.9) we have a collapsing
–
H l´|I| plkS I; H0 |lk I b ΠI q ñ Hn´1´l pFI q b ΠI
A proper face FI is acyclic, thus
H l pS; RI q –
#
ΠI , if l “ n ´ 1,
0, if l ‰ n ´ 1.
The maps θIąI 1 induce the isomorphisms H˚ pFI q Ñ H˚ pFI 1 q which assemble in
commutative squares
H n´1 pS; RI q
–
+3
ΠI _
˚
θIąI
1
H n´1 pS; RI 1 q
–
+3
ΠI 1
Thus the first term of vertical spectral sequence is identified with the chain complex
p
of cosheaf Π:
#
#
p if l “ n ´ 1;
p if l “ n ´ 1;
C
pS;
Πq,
H´k pS; Πq,
k,l
k,l
´k
V
V
E1 “
E2 “
0, if l ‰ n ´ 1.
0, if l ‰ n ´ 1.
k,l
The spectral sequence VE ˚ ñ H k`l pq C ‚Tot , Dq collapses at the second page. Lemma
proved.
Theorem 6 follows from lemmas 6.4 and 6.5.
6.2. Extending duality to exact sequences. Theorem 6 can be refined:
Statement 6.6. The short exact sequence of sheaves
(6.4)
0 Ñ H0 b I Ñ H0 b L Ñ H0 b pL{Iq Ñ 0
and the short exact sequence of cosheaves
p Ñ Lp Ñ L{
pΠ
pÑ0
0ÑΠ
induce isomorphic long exact sequences in (co)homology:
(6.5)
/ H i pS; H0 b pL{Iqq
/ H i pS; H0 b Lq
H i pS; H0 b Iq
O
O
–
O
–
p
Hn´1´i pS; Πq
H i`1 pS; H0 b Iq
O
–
/
/
–
p
Hn´1´i pS; Lq
/
p Πqq
p
Hn´1´i pS; pL{
/
p
H n´2´i pS; Πq
Proof. The proof goes essentially the same as in Theorem 6. Denote sequence
(6.4) by seqI. For each ∅ ‰ I P S consider the short exact sequence of sheaves:
¯
´
t u ΠI
Ñ0
R
:
0
Ñ
R
Ñ
H
b
L
Ñ
H
b
L{
I
seq I
I
0
0
and define
seqR´k
“
à
seqRI
dim I“k
One can view seqI, seqRI and seqR´k as the objects in a category of complexes. As
before, we can form the sequence
(6.6)
seqR‚ :
d
d
η
H
H
. . . ÝÑ seqR´2 ÝÑ
seqR´1 ÝÑ seqR0 ÝÑ seqI ÝÑ 0,
which happens to be exact in all positions. This long exact sequence (after forgetting
the last term) generates the bicomplex of short exact sequences (or the short exact
sequence of bicomplexes) seqC ‚‚ . By taking totalization and considering standard
spectral sequences we check that both rows in (6.5) are isomorphic to the long exact
sequence of cohomology associated to pseqC ‚Tot , Dq.
6.3. Remark on duality. In the manifold case (i.e. sheaf H0 is isomorphic to
k), the proof of Theorem 6 can be restated in more conceptual terms. In this case
the cellular version of Verdier duality for manifolds [6, Th.12.3] asserts:
H i pS; Iq “ Hn´1´i pS; Iq,
where the homology groups of a cellular sheaf are defined as homology of global
Π
sections of projective sheaf resolution [6, Def.11.29]. The sheaf RI “ H0 b t I u I –
t u ΠI
I
is projective ([6, Sec.11.1.1]), thus (6.1) is actually a projective resolution. Due
p
to the specific structure of this resolution, we have H˚ pS; Iq – H˚ pS; Πq.
7. Face vectors and ranks of border components
In this section we prove Theorems 3, 4 and 5.
7.1. Preliminaries on face vectors. First recall several standard definitions
from combinatorial theory of simplicial complexes and posets.
Construction 7.1. Let $S$ be a pure simplicial poset, $\dim S = n-1$. Let $f_i(S) = |\{I \in S \mid \dim I = i\}|$ and $f_{-1}(S) = 1$. The array $(f_{-1}, f_0, \dots, f_{n-1})$ is called the $f$-vector of $S$. We write $f_i$ instead of $f_i(S)$ since $S$ is clear from the context. Let $f_S(t)$ be the generating polynomial: $f_S(t) = \sum_{i \ge 0} f_{i-1} t^i$.

Define the $h$-numbers by the relation:
$$(7.1)\quad \sum_{i=0}^{n} h_i(S)\, t^i = \sum_{i=0}^{n} f_{i-1}\, t^i (1-t)^{n-i} = (1-t)^n f_S\!\left(\frac{t}{1-t}\right).$$
Let $b_i(S) = \dim H_i(S)$, $\widetilde{b}_i(S) = \dim \widetilde{H}_i(S)$, $\chi(S) = \sum_{i=0}^{n-1} (-1)^i b_i(S) = \sum_{i=0}^{n-1} (-1)^i f_i(S)$ and $\widetilde{\chi}(S) = \sum_{i=0}^{n-1} (-1)^i \widetilde{b}_i(S) = \chi(S) - 1$. Thus $f_S(-1) = 1 - \chi(S)$. Also note that $h_n(S) = (-1)^{n-1} \widetilde{\chi}(S)$.

Define $h'$- and $h''$-vectors by
$$(7.2)\quad h'_i = h_i + \binom{n}{i} \left( \sum_{j=1}^{i-1} (-1)^{i-j-1}\, \widetilde{b}_{j-1}(S) \right) \quad \text{for } 0 \le i \le n;$$
$$(7.3)\quad h''_i = h'_i - \binom{n}{i}\, \widetilde{b}_{i-1}(S) = h_i + \binom{n}{i} \left( \sum_{j=1}^{i} (-1)^{i-j-1}\, \widetilde{b}_{j-1}(S) \right) \quad \text{for } 0 \le i \le n-1,$$
and $h''_n = h'_n$. Note that
$$(7.4)\quad h'_n = h_n + \sum_{j=0}^{n-1} (-1)^{n-j-1}\, \widetilde{b}_{j-1}(S) = \widetilde{b}_{n-1}(S).$$
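To make the bookkeeping in (7.1)-(7.3) concrete, here is a minimal Python sketch (not part of the paper): it computes the $h$-, $h'$- and $h''$-numbers from an $f$-vector and a user-supplied list of reduced Betti numbers; the boundary of a triangle serves as a sanity check.

from math import comb

def h_vector(f, n):
    # f = [f_{-1}, f_0, ..., f_{n-1}];  h_i = sum_j (-1)^(i-j) C(n-j, i-j) f_{j-1}, from (7.1)
    return [sum((-1) ** (i - j) * comb(n - j, i - j) * f[j] for j in range(i + 1))
            for i in range(n + 1)]

def h_prime(h, bt, n):
    # h'_i = h_i + C(n, i) * sum_{j=1}^{i-1} (-1)^(i-j-1) * bt[j-1], from (7.2); bt[k] = reduced b_k(S)
    return [h[i] + comb(n, i) * sum((-1) ** (i - j - 1) * bt[j - 1] for j in range(1, i))
            for i in range(n + 1)]

def h_second(h1, bt, n):
    # h''_i = h'_i - C(n, i) * bt[i-1] for i <= n-1, and h''_n = h'_n, from (7.3)
    return [h1[i] - (comb(n, i) * bt[i - 1] if 1 <= i <= n - 1 else 0)
            for i in range(n + 1)]

# boundary of a triangle: n = 2, f = (1, 3, 3), reduced Betti numbers (b_0, b_1) = (0, 1)
f, bt, n = [1, 3, 3], [0, 1], 2
h = h_vector(f, n)                                        # [1, 1, 1]
print(h, h_prime(h, bt, n), h_second(h_prime(h, bt, n), bt, n))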
Statement 7.2 (Dehn–Sommerville relations). If S is Buchsbaum and dim H0 pIq “ 1
for each I ‰ ∅, then
ˆ ˙
i n
p1 ´ p´1qn ´ χpSqq,
(7.5)
hi “ hn´i ` p´1q
i
or, equivalently:
(7.6)
ˆ ˙
n
p1 ` p´1qn χ
rpSqq,
hi “ hn´i ` p´1q
i
i
If, moreover, S is a homology manifold, then h2i “ h2n´i .
Proof. The first statement can be found e.g. in [16] or [4, Thm.3.8.2]. Also
see remark 7.5 below. The second then follows from the definition of h2 -vector and
Poincare duality (3.8) bi pSq “ bn´1´i pSq.
Definition 7.3. Let S be Buchsbaum. For i ě 0 consider
ÿ
ÿ
r n´1´|I| plkS Iq “
fpi pSq “
dim H
dim H0 pIq.
IPS,dim I“i
IPS,dim I“i
If S is a homology manifold, then fpi pSq “ fi pSq. For general Buchsbaum complexes there is another formula connecting these quantities.
ř
Proposition 7.4. fS ptq “ p1 ´ χpSqq ` p´1qn kě0 fpk pSq ¨ p´t ´ 1qk`1.
Proof. This follows from the general statement [8, Th.9.1],[4, Th.3.8.1], but we
provide an independent proof for completeness. As stated in [1, Lm.3.7,3.8]
for simř
d
plicial complexes (and also not difficult to prove for posets) dt fS ptq “ vPVertpSq flk v ptq,
and, more generally,
ˆ ˙k
ÿ
d
fS ptq “ k!
flk I ptq.
dt
IPS,|I|“k
Thus for k ě 1:
pkq
fS p´1q “ k!
ÿ
p1 ´ χplkS Iqq “
IPS,|I|“k
“ k!
ÿ
IPS,|I|“k
r n´|I|´1plk Iq “ p´1qn´k k!fpk´1 pSq.
p´1qn´|I| dim H
Considering the Taylor expansion of fS ptq at ´1:
ÿ
ÿ 1 pkq
fS p´1qpt` 1qk “ p1 ´ χpSqq ` p´1qn´k´1fpk pSq ¨pt` 1qk`1,
fS ptq “ fS p´1q `
k!
kě0
kě1
finishes the proof.
Remark 7.5. If S is a manifold, then proposition 7.4 implies
fS ptq “ p1 ´ p´1qn ´ χpSqq ` p´1qn fS p´t ´ 1q,
which is an equivalent form of Dehn–Sommerville relations (7.5).
Lemma 7.6. For Buchsbaum poset S there holds
n
ÿ
ÿ
fpk pSq ¨ pt ´ 1qn´k´1.
hi ti “ p1 ´ tqn p1 ´ χpSqq `
i“0
kě0
Proof. Substitute t{p1 ´ tq in proposition 7.4 and use (7.1).
The coefficients of ti in lemma 7.6 give the relations
˙
ˆ
ˆ ˙ ÿ
n´k´i´1 n ´ k ´ 1 p
i n
fk pSq.
p´1q
`
(7.7)
hi pSq “ p1 ´ χpSqqp´1q
i
i
kě0
1
7.2. Ranks of XE ˚,˚ . Our goal is to compute the ranks of border groups
1`
dim XE q,q . The idea is very straightforward: statement 5.3 describes the ranks
1`
1
of all groups XE p,q except for p “ q; and the terms XE p,q are known as well; thus
1`
dim XE q,q can be found by comparing Euler characteristics. Note that the terms
1
1`
with p “ n do not change when passing from XE to XE . Thus it is sufficient to perform calculations with the truncated sequence BXE. By construction,
1
X 1
E p,q – BXE p,q – C n´p´1 pS; HqX q for p ă n. Thus lemma 4.11 implies for p ă n:
dim
1
E p,q
X
1
E p,q
BX
“ dim
ÿ
“
dim H0 pIq ¨ dimpΛ{IpIqq
pqq
|I|“n´p
Let χ1q be the Euler characteristic of q-th row of
χ1q
(7.8)
“
ÿ
p
p´1q dim
pďn´1
1
E p,q
BX
BX
ˆ ˙
p
¨ fpn´p´1pSq.
“
q
1
E ˚,˚ :
ˆ ˙
p p
“
p´1q
fn´p´1
q
pďn´1
ÿ
p
Lemma 7.7. For q ď n ´ 1 there holds χ1q “ pχpSq ´ 1q
`n˘
q
` p´1qq hq pSq.
Proof. Substitute i “ q and k “ n ´ p ´ 1 in (7.7) and combine with (7.8).
1`
1`
7.3. Ranks of XE ˚,˚ . By construction of the extra page, XE p,q –
2
p ă n. Let χ2q be the Euler characteristic of q-th row of BXE ˚,˚ :
χ2q “
(7.9)
BX
2
E p,q for
ÿ
2
p´1qp dim BXE p,q .
p
Euler characteristics of first and second terms coincide: χ2q “ χ1q . By statement 5.3,
` ˘
1`
dim XE p,q “ nq bp pSq for q ă p ă n. Lemma 7.7 yields
q
p´1q dim
1`
E q,q
X
ˆ ˙
ˆ ˙
n
n
` p´1qq hq .
bp pSq “ pχpSq ´ 1q
p´1q
`
q
q
p“q`1
n´1
ÿ
p
By taking into account obvious
relations between reduced and non-reduced Betti
řn´1
numbers and equality χpSq “ p“0 bp pSq, this proves Theorem 3.
7.4. Manifold case. If X is a homology manifold, then Poincare duality bi pSq “
bn´i pSq and Dehn–Sommerville relations (7.6) imply
ˆ ˙ÿ
q
n
X 1`
dim E q,q “ hq `
p´1qp`qrbp “
q p“0
ˆ ˙ ˆ ˙ÿ
q
n
q n
`
p´1qp`q bp “
“ hq ´ p´1q
q p“0
q
ˆ ˙ ˆ ˙ n´1
ÿ
n
q n
`
p´1qn´1´p`q bp “
“ hq ´ p´1q
q p“n´1´q
q
ff
ˆ ˙«
n´1
ÿ
n
´p´1qn ` p´1qn χ `
p´1qn´1´p bp “
“ hn´q ` p´1qq
q
p“n´1´q
ff
ˆ ˙«
n´q´2
ÿ
n
p´1qp`n bp .
´p´1qn `
“ hn´q ` p´1qq
q
p“0
ř
p`nr
bp pSq whenever the
The final expression in brackets coincides with n´q´2
p“´1 p´1q
1`
summation is taken over nonempty set, i.e. for q ă n ´ 1. Thus dim XE q,q “ h1n´q
`
˘
1`
n
for q ă n ´ 1. In the case q “ n ´ 1 we get dim XE n´1,n´1 “ h1 ` n´1
“ h11 ` n.
This proves part (1) of Theorem 4.
Part (2) follows easily. Indeed, for q “ n:
ˆ ˙
n
X 1
X 2
dim Hn pQ, BQq “ 1 “ h10
dim E n,n “ dim E n,n “
n
For q “ n ´ 1:
dim
2
E n´1,n´1
X
2
“ dim
1`
E n´1,n´1
X
˙
n
dim Im δn “ h11 .
´
n´1
ˆ
1`
If q ď n ´ 2, then XE q,q “ XE q,q , and the statement follows from part (1).
7.5. Cone case. If Q “ P pSq – Cone |S|, then the map δi : Hi pQ, BQq Ñ
r
Hi´1 pBQq is an isomorphism. Thus for q ď n ´ 1 statement 5.3 implies
ˆ ˙
ˆ ˙
n
n r
X 8
X 1`
X 1`
dim E q,q “ dim E q,q ´
dim Hq`1 pQ, BQq “ dim E q,q ´
bq pSq.
q
q
By Theorem 3 this expression is equal to
ff ˆ ˙
ˆ ˙ «ÿ
ˆ ˙ q´1
q
n
n ÿ
n r
p`qr
hq pSq`
bq pSq “ hq pSq`
p´1q bp pSq ´
p´1qp`qrbp pSq “ h2q pSq.
q
q
q
p“0
p“0
1`
The case q “ n follows from (7.4). Indeed, the term XE n,n survives, thus:
ˆ ˙
n
X 8
dim Hn pQ, BQq “ bn´1 pSq “ h1n pSq “ h2n pSq.
dim E n,n “
n
This proves Theorem 5.
8. Geometry of equivariant cycles
8.1. Orientations. In this section we restrict to the case when Q is a nice
manifold with corners, X “ pQ ˆ T n q{„ is a manifold with locally standard torus
action, λ — a characteristic map over Z defined on the poset S “ SQ . As before,
suppose that all proper faces of Q are acyclic and orientable and Q itself is orientable.
The subset XI , I ‰ ∅ is a submanifold of X, preserved by the torus action; XI
is called a face manifold, codim XI “ 2|I|. Submanifolds Xtiu , corresponding to
vertices i P VertpSq are called characteristic submanifolds, codim Xtiu “ 2.
Fix arbitrary orientations of the orbit space Q and the torus T n . This defines
the orientation of Y “ Q ˆ T n and X “ Y {„. Also choose an omniorientation, i.e.
orientations of all characteristic submanifolds Xtiu . The choice of omniorientation
defines characteristic values ωi P H1 pT n ; Zq without ambiguity of sign (recall that
previously they were defined only up to units of k). To perform calculations with
the spectral sequences XE and Y E we also need to orient faces of Q.
Lemma 8.1 (Convention). The orientation of each simplex of S (i.e. the sign
convention on S) defines the orientation of each face FI Ă Q.
Proof. Suppose that I P S is oriented. Let i1 , . . . , in´q be the vertices of I,
listed in a positive order (this is where the orientation of I comes in play). The
corresponding face $F_I$ lies in the intersection of facets $F_{i_1}, \dots, F_{i_{n-q}}$. The normal
bundles νi to facets Fi have natural orientations, in which inward normal vectors
are positive. Orient FI in such way that Tx FI ‘νii ‘. . .‘νin´q – Tx Q is positive.
Thus there are distinguished elements rFI s P Hdim FI pFI , BFI q. One checks that
for I ă1 J the maps
m0I,J : Hdim FI pFI , BFI q Ñ Hdim FI ´1 pBFI q Ñ Hdim FJ pFJ , BFJ q
(see (3.5)) send rFI s to rJ : Is ¨ rFJ s. Thus the restriction maps H0 pI ă Jq send rFI s
to rFJ s by the definition of H0 .
The choice of omniorientation and orientations of I P S determines the orientation of each orbit T n {T λpIq by the following convention.
Construction 8.2. Let i1 , . . . , in´q be the vertices of I, listed in a positive
order. Recall that H1 pT n {T λpIq q is naturally identified with H1 pT n q{IpIqp1q . The
basis rγ1s, . . . , rγq s P H1 pT n {T λpIq q, rγl s “ γl ` IpIqp1q is defined to be positive if the
basis pωi1 , . . . , ωin´q , γ1, . . . , γq q is positive in H1 pT n q. The orientation of T n {T λpIq
Ź
determines a distinguished “volume form” ΩI “ l rγl s P Hq pT n {T λpIq ; Zq.
The omniorientation and the orientation of S also determine the orientation of
each manifold XI in a similar way. All orientations are compatible: rXI s “ rFI sbΩI .
8.2. Arithmetics of torus quotients. Let us fix a positive basis e1 , . . . , en
of the lattice H1 pT n ; Zq. Let pλi,1 , . . . , λi,n q be the coordinates of ωi in this basis
for each i P VertpSq. The following technical lemma will be used in subsequent
computations.
Lemma 8.3. Let I P S, I ‰ ∅ be a simplex with vertices ti1 , . . . , in´q u listed
in a positive order. Let A “ tj1 ă . . . ă jq u Ă rns be a subset of indices and
eA “ ej1 ^. . .^ejq the corresponding element of Hq pT n q. Consider the map ρ : T n Ñ
T n {T λpIq . Then ρ˚ peA q “ CA,I ΩI P Hq pT n {T λpIq q with the constant:
CA,I “ sgnA det pλi,j qiPti1 ,...,in´q u
jPrnszA
where sgnA “ ˘1 depends only on A Ă rns.
Proof. Let pbl q “ pωi1 , . . . , ωin´q , γ1, . . . , γq q be a positive basis of lattice H1 pT n , Zq.
Thus bl “ Uel , where the matrix U has the form
˛
¨
λi1 ,1 . . . λin´q ,1 ˚ ˚
˚ λi1 ,2 . . . λin´q ,2 ˚ ˚‹
‹
˚
U “˚ .
.. .. ‹
..
.
.
.
˝ .
.
. .‚
.
λi1 ,n . . . λin´q ,n ˚ ˚
We have det U “ 1 since both bases are positive. Consider the inverse matrix
V “ U ´1 . Thus
ÿ
det pVj,αqjPA bα1 ^ . . . ^ bαq .
eA “ ej1 ^ . . . ^ ejq “
M “tα1 ă...ăαq uĂrns
αPM
After passing to quotient Λ Ñ Λ{IpIq all summands with M ‰ tn ´ q ` 1, . . . , nu
vanish. When M “ tn ´ q ` 1, . . . , nu, the element bn´q´1 ^ . . . ^ bn “ γ1 ^ . . . ^ γq
goes to ΩI . Thus
.
CA,I “ det pVj,αqjPA
αPtn´q`1,...,nu
Now apply Jacobi's identity, which states the following (see e.g. [3, Sect.4]). Let $U$ be an invertible $n \times n$-matrix, $V = U^{-1}$, and $M, N \subset [n]$ subsets of indices, $|M| = |N| = q$. Then
$$\det\,(V_{r,s})_{\substack{r \in M\\ s \in N}} = \frac{\operatorname{sgn}_{M,N}}{\det U}\, \det\,(U_{r,s})_{\substack{r \in [n]\setminus N\\ s \in [n]\setminus M}},$$
where $\operatorname{sgn}_{M,N} = (-1)^{\sum_{r \in [n]\setminus N} r + \sum_{s \in [n]\setminus M} s}$. In our case $N = \{n-q+1, \dots, n\}$; thus the sign depends only on $A \subset [n]$.
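A quick numerical sanity check of this identity (not from the paper; indices are 0-based in the code and converted to the 1-based sums appearing in the sign):

import numpy as np
from itertools import combinations

def jacobi_check(U, M, N):
    n = U.shape[0]
    V = np.linalg.inv(U)
    Mc = [r for r in range(n) if r not in M]          # complement of M
    Nc = [s for s in range(n) if s not in N]          # complement of N
    lhs = np.linalg.det(V[np.ix_(M, N)])
    sign = (-1) ** (sum(r + 1 for r in Nc) + sum(s + 1 for s in Mc))
    rhs = sign * np.linalg.det(U[np.ix_(Nc, Mc)]) / np.linalg.det(U)
    return np.isclose(lhs, rhs)

U = np.random.default_rng(0).normal(size=(5, 5))
assert all(jacobi_check(U, list(M), list(N))
           for M in combinations(range(5), 2) for N in combinations(range(5), 2))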
8.3. Face ring and linear system of parameters. Recall the definition of a
face ring of a simplicial poset S. For I1 , I2 P S let I1 _ I2 Ă S denote the set of least
upper bounds, and I1 X I2 P S — the intersection of simplices (it is well-defined and
unique if I1 _ I2 ‰ ∅).
Definition 8.4. The face ring krSs is the quotient ring of krvI | I P Ss, deg vI “
2|I| by the relations
ÿ
vJ ,
v∅ “ 1,
vI1 ¨ vI2 “ vI1 XI2 ¨
JPI1 _I2
where the sum over an empty set is assumed to be 0.
Characteristic
map λ determines the set of linear forms tθ1 , . . . , θn u Ă krSs,
ř
θj “ iPVertpSq λi,j vi . If J P S is a maximal simplex, |J| “ n, then
(8.1)
the matrix pλi,j qiďJ is invertible over k
jPrns
by the p˚k q-condition. Thus the sequence tθ1 , . . . , θn u Ă krSs is a linear system of
parameters in krSs (see, e.g.,[4, lemma 3.5.8]). It generates an ideal pθ1 , . . . , θn q Ă
krSs which we denote by Θ.
The face ring krSs is an algebra with straightening law (see, e.g. [4, §.3.5]).
Additively it is freely generated by the elements
Pσ “ vI1 ¨ vI2 ¨ . . . ¨ vIt ,
σ “ pI1 ď I2 ď . . . ď It q.
Lemma 8.5. The elements rvI s “ vI ` Θ additively generate krSs{Θ.
Proof. Consider an element Pσ with |σ| ě 2. Using relations in the face ring,
we express Pσ “ vI1 ¨ . . . ¨ vIt asřvi ¨ vI1 zi ¨ . . . ¨ vIt , for some vertex i ď I1 . The
element vi can be expressed as i1 ęIt ai1 vi1 modulo Θ according to (8.1) (we can
exclude all vi corresponding to the vertices of some maximal simplex J Ě It ). Thus
vi vIt is expressed as a combination of vIt1 for It1 ą1 It . Therefore, up to ideal Θ, the
element Pσ is expressed as a linear combination of elements Pσ1 which have either
smaller length t (in case |I1 | “ 1) or smaller I1 (in case |I1 | ą 1). By iterating this
descending process, the element Pσ `Θ P krSs{Θ is expressed as a linear combination
of rvI s.
Note that the proof works for k “ Z as well.
8.4. Linear relations on equivariant (co)cycles. Let HT˚ pXq be a T n -equivariant cohomology ring of X. Any proper face of Q is acyclic, thus has a vertex.
Therefore, there is the injective homomorphism
krSs ãÑ HT˚ pXq,
which sends vI to the cohomology class, equivariant Poincare dual to rXI s (see [9,
Lemma 6.4]). The inclusion of a fiber in the Borel construction, X Ñ X ˆT ET n ,
induces the map HT˚ pXq Ñ H ˚ pXq. The subspace V of H˚ pXq, Poincare dual to
the image of
g : krSs ãÑ HT˚ pXq Ñ H ˚ pXq
(8.2)
À
8
is generated by the elements rXI s, thus coincides with the 8-border: V “ q XE q,q Ă
H˚ pXq. Now let us describe explicitly the linear relations on rXI s in H˚ pXq. Note
that the elements rXI s “ rFI s b ΩI can also be considered as the free generators of
the k-module
àX 1
à à
E q,q “
Hq pFI , BFI q b Hq pT n {T λpIq q.
q
q
|I|“n´q
The free k-module on generators rXI s is denoted by xrXI sy.
Proposition 8.6. Let CA,J be the constants defined in lemma 8.3. There are
only two types of linear relations on classes rXI s in H˚ pXq:
(1) For each J P S, |J| “ n ´ q ´ 1, and A Ă rns, |A| “ q there is a relation
ÿ
RJ,A “
rI : JsCA,I rXI s “ 0;
Ią1 J
8
(2) Let β be a homology class from Impδq`1 : Hq`1 pQ, BQq Ñ Hq pBQqq Ď BQE q,0
ř
1
for q ď n ´ 2, and let |I|“n´q BI rFI s P BQE q,0 be a chain representing β.
Then
ÿ
1
Rβ,A
“
BI CA,I rXI s “ 0.
|I|“n´q
˚
˚
Proof. This follows from the structure of the map f˚ : QE ˚,˚ ˆH˚ pT n q Ñ XE ˚,˚ ,
lemma 8.3 and Theorem 1. Relations on rXI s appear as the images of the differr
entials hitting XE q,q , r ě 1. Relations of the first type, RJ,A , are the images of
À
1
1
2
d1X : XE q`1,q Ñ XE q,q . In particular, q XE q,q is identified with xrXI sy{xRJ,A y. Relations of the second type are the images of higher differentials drX , r ě 2.
Now we check that relations of the first type are exactly the relations in the
quotient ring krSs{Θ.
Proposition 8.7. Let ϕ : xrXI sy Ñ krSs be the degree reversing linear map,
which sends rXI s to vI . Then ϕ descends to the isomorphism
ϕ̃ : xrXI sy{xRJ,A y Ñ krSs{Θ.
Proof. (1) First we prove that ϕ̃ is well defined. The image of RJ,A is the
element
ÿ
ϕpRJ,A q “
rI : JsCA,I vI P krSs.
Ią1 J
Let us show that ϕpRJ,A q P Θ. Let s “ |J|, and consequently, |I| “ s ` 1, |A| “
n ´ s ´ 1. Let rnszA “ tα1 ă . . . ă αs`1 u and let tj1 , . . . , js u be the vertices of J
listed in a positive order. Consider s ˆ ps ` 1q matrix:
˛
¨
λj1 ,α1 . . . λj1 ,αs`1
˚
.. ‹
..
D “ ˝ ...
.
. ‚
λjs ,α1 . . . λjs ,αs`1
Denote by Dl the square submatrix obtained from D by deleting i-th column and
let al “ p´1ql`1 det Dl . We claim that
ϕpRJ,A q “ ˘vJ ¨ pa1 θα1 ` . . . ` as`1 θαs`1 q
ř
Indeed, after expanding each θl as iPVertpSq λi,l vi , all elements of the form vJ vi with
i ă J cancel; others give rI : JsCA,I vI for I ą1 J according to lemma 8.3 and cofactor
expansions of determinants (the incidence sign arise from shuffling columns). Thus
ϕ̃ is well defined.
(2) ϕ̃ is surjective by lemma 8.5.
(3) The dimensions of both spaces are equal. Indeed, dimxrXI s | |I| “ n ´
2
qy{xRJ,A y “ dim XE q,q “ h1n´q pSq by Theorem 4. But dimpkrSs{Θqpn´qq “ h1n´q pSq
by Schenzel’s theorem [15], [16, Ch.II,§8.2], (or [13, Prop.6.3] for simplicial posets)
since S is Buchsbaum.
(4) If k is a field, then we are done. This implies the case k “ Z as well.
In particular, this proposition describes the additive structure of krSs{Θ in terms
of the natural additive generators vI . Poincare duality in X yields
Corollary 8.8. The map g : krSs Ñ H ˚ pXq factors through krSs{Θ and the
kernel of g̃ : krSs{Θ Ñ H ˚ pXq is additively generated by the elements
ÿ
L1β,A “
BI CA,I vI
|I|“n´q
where q ď n´2, β P Impδq`1 : Hq`1 pQ, BQq Ñ Hq pBQqq,
chain representing β, and A Ă rns, |A| “ q.
ř
|I|“n´q
BI rFI s is a cellular
Remark 8.9. The ideal Θ Ă krSs coincides with the image of the natural map
H ą0 pBT n q Ñ HT˚ pXq. So the fact that Θ vanishes in H ˚ pXq is not surprising. The
interesting thing is that Θ vanishes by geometrical reasons already in the second
term of the spectral sequence, while other relations in H ˚ pXq are the consequences
of higher differentials.
Remark 8.10. From the spectral sequence follows that the element L1β,A P
krSs{Θ does not depend on the cellular chain, representing β. All such chains
À
2
produce the same element in q XE q,q “ xrXI sy{xRJ,A y – krSs{Θ. Theorem 2 also
implies that the relations tL1β,A u are linearly independent in krSs{Θ when β runs
over some basis of Im δq`1 and A runs over all subsets of rns of cardinality q.
9. Examples and calculations
9.1. Quasitoric manifolds. Let Q be n-dimensional simple polytope. Then
S “ SQ “ BQ˚ is the boundary of the polar dual polytope. In this case Q – Cone |S|.
Given a characteristic map λ : VertpKq Ñ Tn we construct a space X “ pQ ˆ T n q{„
which is a model of quasitoric manifold [7]. Poset S is a sphere thus h2i pSq “ h1i pSq “
hi pSq. Since δn : Hn pQ, BQq Ñ Hn´1pBQq is an isomorphism, Theorem 2 implies
2
X 2
E p,q “ 0 for p ‰ q. By Theorems 3 and 5, dim XE q,q “ hq pSq “ hn´q pSq. Spectral
˚
sequence XE ˚,˚ collapses at its second term, thus dim H2q pXq “ hq pSq “ hn´q pSq for
0 ď q ď n and dim H2q`1 pXq “ 0 which is well known. For bigraded Betti numbers
proposition 5.6 implies Hi,j pXq “ 0 if i ‰ j, and dim Hi,i pXq “ hi pSq.
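As a quick illustration of this computation (not from the paper), the following sketch evaluates the $h$-vector for the boundary of the square's dual polytope and recovers the even-degree Betti numbers $1, 2, 1$ of the quasitoric manifold $\mathbb{C}P^1 \times \mathbb{C}P^1$ over the square.

from math import comb

def h_vector(f, n):
    # h_i = sum_j (-1)^(i-j) C(n-j, i-j) f_{j-1}, with f = [f_{-1}, f_0, ..., f_{n-1}]
    return [sum((-1) ** (i - j) * comb(n - j, i - j) * f[j] for j in range(i + 1))
            for i in range(n + 1)]

print(h_vector([1, 4, 4], 2))   # [1, 2, 1] = dim H_0, H_2, H_4 of CP^1 x CP^1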
9.2. Homology polytopes. Let Q be a manifold with corners such that all
its proper faces as well as Q itself are acyclic. Such objects were called homology
polytopes in [9]. In this case everything stated in the previous paragraph remains
valid, thus dim H2q pXq “ hq pSq “ hn´q pSq for 0 ď q ď n, and dim H2q`1 pXq “ 0
(see [9]).
9.3. Origami toric manifolds. Origami toric manifolds appeared in differential geometry as generalizations of symplectic toric manifolds (see [5],[10]). The
original definition contains a lot of subtle geometrical details and in most part is
irrelevant to this paper. Here we prefer to work with the ad hoc model, which
captures most essential topological properties of origami manifolds.
Definition 9.1. Topological toric origami manifold X 2n is a manifold with locally standard action T n ñ X such that all faces of the orbit space including X{T
itself are either contractible or homotopy equivalent to wedges of circles.
As before consider the canonical model. Let Qn be a nice manifold with corners
in which every face is contractible or homotopy equivalent to a wedge of b1 circles.
Every principal T n -bundle Y over Q is trivial (because H 2 pQq “ 0), thus Y “ QˆT n .
Consider the manifold X “ Y {„ associated to some characteristic map over Z. Then
X is a topological origami toric manifold.
To apply the theory developed in this paper we also assume that all proper faces
of Q are acyclic (in origami case this implies contractible) and Q itself is orientable.
Thus, in particular, Q is a Buchsbaum manifold. First, describe the exact sequence
of the pair $(Q, \partial Q)$. By Poincaré-Lefschetz duality:
$
’
&k, ifŽq “ n;
n´q
Hq pQ, BQq – H pQq – H 1 p b1 S 1 q – kb1 , if q “ n ´ 1;
’
%0, otherwise.
In the following let m denote the number of vertices of S (the number of facets of
Q). Thus h11 pSq “ h1 pSq “ m ´ n. Consider separately three cases:
(1) n “ 2. In this case Q is an orientable 2-dimensional surface of genus 0 with
b1 ` 1 boundary components. Thus BQ is a disjoint union of b1 ` 1 circles and long
exact sequence in homology has the form:
0
k
k
k
kb1 `1
k
δ
kb1
k
0
2
H2 pQq ÝÑ H2 pQ, BQq ÝÑ
H1 pBQq ÝÑ H1 pQq ÝÑ
δ
0
1
ÝÑ H1 pQ, BQq ÝÑ
H0 pBQq ÝÑ H0 pQq ÝÑ H0 pQ, BQq
k
kb1
k
kb1 `1
k
k
k
0
2
The second term XE ˚,˚ of spectral sequence for X is given by Theorem 2. It is
shown on a figure below (only ranks are written to save space).
(ranks of ${}^X\!E^{2}_{p,q}$; columns $p = 0, 1, 2$, rows $q = 2, 1, 0, -1$; omitted entries are zero)

  q =  2:      .          .          1
  q =  1:      .        m - 2       b_1
  q =  0:   b_1 + 1      b_1        2 b_1
  q = -1:      .          .         b_1
            p = 0       p = 1      p = 2
2
2
The only nontrivial higher differential is d2 : XE 2,´1 Ñ XE 0,0 ; it coincides with
2
the composition of δ1 b idH0 pT 2 q and injective map f˚2 : H0 pP q b H0 pT 2 q Ñ XE 0,0 .
8
8
8
8
Thus d2 is injective, and dim XE 2,2 “ dim XE 0,0 “ 1; dim XE 2,1 “ dim XE 1,0 “ b1 ;
8
8
dim XE 1,1 “ m ´ 2; dim XE 2,0 “ 2b1 . Finally,
$
’
&1, if i “ 0, 4;
dim Hi pXq “ b1 , if i “ 1, 3;
’
%m ´ 2 ` 2b , if i “ 2.
1
This coincides with the result of computations in [14], concerning the same object.
This result can be obtained simply by proposition 5.6: dim H0,0 pXq “ dim H2,2 pXq “
1, dim H1,0 pXq “ dim H1,2 pXq “ b1 , dim H2,2 pXq “ m ´ 2 ` 2b1 .
(2) n “ 3. In this case the exact sequence of pQ, BQq splits in three essential
parts:
δ
3
H3 pQq ÝÑ H3 pQ, BQq ÝÑ
H2 pBQq ÝÑ H2 pQq
k
k
k
0
k
0
δ
2
H1 pBQq ÝÑ H1 pQq ÝÑ H1 pQ, BQq
H2 pQq ÝÑ H2 pQ, BQq ÝÑ
k
kb1
k
0
k
kb1
k
0
δ
1
H0 pBQq ÝÑ H0 pQq ÝÑ H0 pQ, BQq
H1 pQ, BQq ÝÑ
k
k
k
0
k
0
2
By Theorems 2, 4, XE p,q has the form
1
3
h11
b1
h12
0
3b1
2b1
0
3b1
2
1
0
h13
´1
b1
0
1
2
3
2
2
2
There are two nontrivial higher differentials: d2 : XE 3,0 Ñ XE 1,1 and d2 : XE 3,´1 Ñ
8
8
8
8
X 2
E 1,0 ; both are injective. Thus dim XE 3,3 “ dim XE 0,0 “ 1; dim XE 3,1 “ dim XE 1,0 “
8
8
8
b1 ; dim XE 2,2 “ h11 ; dim XE 3,1 “ 3b1 ; dim XE 1,1 “ h12 ´ 3b1 . Therefore,
$
1, if i “ 0, 6;
’
’
’
’
’
&b1 , if i “ 1, 5;
dim Hi pXq “ h11 ` 3b1 , if i “ 4;
’
’
’
h12 ´ 3b1 , if i “ 2;
’
’
%
0, if i “ 3.
(3) n ě 4. In this case lacunas in the exact sequence for pQ, BQq imply that
$\delta_i\colon H_i(Q, \partial Q) \to H_{i-1}(\partial Q)$ is an isomorphism for $i = n-1, n$, and is trivial otherwise.
We have
$
’
’Hn pQ, BQq – k, if i “ n ´ 1;
’
’
b
’
&Hn´1 pQ, BQq – k 1 , if i “ n ´ 2;
Hi pBQq – H1 pQq – kb1 if i “ 1;
’
’
’
H0 pQq – k, if i “ 0;
’
’
%
0, o.w.
2
By Theorems 2 and 4, XE p,q has the form
1
h11
h12
h13
..
h1n
´1
.
h1n´1
0
0
`n˘
0
0
0
b1
`
`n˘
1
`n˘
0
n
`
`
b1
0
..
.
b1
0
`n˘
n
n´2
˘
b1
n
n´3
1
`n˘
0
0
˘
b1
˘
b1
0
`
b1
n
n´1
˘
b1
n
n´3
..
.
0
`n˘
b1
b1
` ˘
8
8
Thus we get: dim XE q,q “ h1n´q , if q ‰ n ´ 2; dim XE n´2,n´2 “ h12 ´ n2 b1
8
8
8
if q “ n ´ 2; dim XE n,n´1 “ dim XE 1,0 “ b1 ; dim XE n,n´2 “ nb1 . Finally, by
proposition 5.6, dim H1,0 pXq
“ dim Hn´1,n pXq “ b1 , dim Hn´1,n´1 pXq “ h11 ` nb1 ,
`
˘
dim Hn´2,n´2 pXq “ h12 ´ n2 b1 , and dim Hi,i pXq “ h1n´i for i ‰ n ´ 1, n ´ 2.
The differential hitting the marked position produces additional relations (of the
second type) on the cycles rXI s P H 2n´4 pXq. These relations are described explicitly
by proposition 8.6. Dually, this consideration shows that the map krSs{Θ Ñ H ˚ pXq
has a nontrivial kernel only in degree 4. The generators of this kernel are described
by corollary 8.8.
10. Concluding remarks
Several questions concerning the subject of this paper are yet to be answered.
(1) Of course, the main question which remains open is the structure
À ofX multi8
˚
plication in the cohomology ring H pXq. The border module q E q,q Ă
H˚ pXq represents an essential part of homology; the structure of multiplication on the corresponding subspace in cohomology can be extracted
from the ring homomorphism krSs{Θ Ñ H ˚ pXq. Still there are cocycles
which do not come from krSs and their products should be described separately. Proposition 5.6 suggests, that some products can be described via
the multiplication in H ˚ pQ ˆ T n q – H ˚ pQq b H ˚ pT n q. This requires further
investigation.
À X 8
(2) It is not clear yet, if there is a torsion in the border module
q E q,q in
case k “ Z. Theorems 3,4,5 describe only the rank of the free part of
this group, but the structure (and existence) of torsion remains open. Note
that the homology of X itself can have a torsion. Indeed, the groups H˚ pQq,
H˚ pQ, BQq can contain arbitrary torsion, and these groups appear in the
description of H˚ pXq by proposition 5.6.
(3) Corollary 8.8 describes the kernel of the map krSs{Θ Ñ H ˚ pXq. It seems
that the elements of this kernel lie in a socle of krSs{Θ, i.e. in a submodule tx P krSs{Θ | pkrSs{Θq` x “ 0u. The existence of such elements is
guaranteed in general by the Novik–Swartz theorem [13]. If the relations
L1β,A do not lie in a socle, their existence would give refined inequalities on
h-numbers of Buchsbaum posets.
(4) Theorem 6 establishes a certain connection between the sheaf of ideals generated by linear elements and the cosheaf of ideals generated by exterior
products. This connection should be clarified and investigated further. In
particular, statement 6.6 can probably lead to the description of homology
for the analogues of moment-angle complexes, i.e. the spaces of the form
X “ Y {„, where Y is an arbitrary principal T N -bundle over Q.
(5) There is a hope, that the argument of section 6 involving two spectral
sequences for a sheaf resolution can be generalized to non-Buchsbaum case.
(6) The real case, when T N is replaced by ZN2 , can, probably, fit in the same
framework.
Acknowledgements
I am grateful to prof. Mikiya Masuda for his hospitality and for the wonderful
environment with which he provided me in Osaka City University. The problem
of computing the cohomology ring of toric origami manifolds, which he posed in
2013, was a great motivation for this work (and served as a good setting to test
working hypotheses). Also I thank Shintaro Kuroki from whom I knew about h1 and h2 -vectors and their possible connection to torus manifolds.
References
[1] A. A. Ayzenberg, V. M. Buchstaber, Nerve complexes and moment-angle spaces of convex
polytopes, Proc. of the Steklov Institute of Mathematics, Vol.275, Issue 1, pp. 15–46, 2011.
[2] A. Björner, Posets, regular CW complexes and Bruhat order, European J. Combin. 5 (1984),
7–16.
[3] Richard A. Brualdi, Hans Schneider, Determinantal Identities: Gauss, Schur, Cauchy,
Sylvester, Kronecker, Jacobi, Binet, Laplace, Muir, and Cayley, Linear Algebra and its Applications, 52/53:769–791 (1983).
[4] Victor Buchstaber, Taras Panov, Toric Topology, preprint arXiv:1210.2368
[5] A. Cannas da Silva, V. Guillemin and A. R. Pires, Symplectic Origami, IMRN 2011 (2011),
4252–4293, arXiv:0909.4065.
[6] Justin M. Curry, Sheaves, Cosheaves and Applications, arXiv:1303.3255v1
[7] M. Davis, T. Januszkiewicz, Convex polytopes, Coxeter orbifolds and torus actions, Duke Math.
J., 62:2 (1991), 417–451.
[8] Hiroshi Maeda, Mikiya Masuda, Taras Panov, Torus graphs and simplicial posets, Adv. Math.
212 (2007), no. 2, 458–483.
[9] Mikiya Masuda, Taras Panov, On the cohomology of torus manifolds, Osaka J. Math. 43
(2006), 711–746.
[10] Mikiya Masuda, Seonjeong Park, Toric origami manifolds and multi-fans, Preprint
arXiv:1305.6347
[11] John McCleary, A User’s Guide to Spectral Sequences, second edition, Cambridge studies in
advanced mathematics; 58.
[12] Clint McCrory, Zeeman’s filtration on homology, Trans.of the AMS, Vol.250, 1979.
[13] Isabella Novik, Ed Swartz, Socles of Buchsbaum modules, complexes and posets, Adv. Math.,
222 (2009), 2059–2084.
[14] Mainak Poddar, Soumen Sarkar, A class of torus manifolds with nonconvex orbit space,
Preprint arXiv:1109.0798
[15] Peter Schenzel, On the Number of Faces of Simplicial Complexes and the Purity of Frobenius,
Math. Zeitschrift 178, 125–142 (1981).
[16] R. Stanley, Combinatorics and Commutative Algebra. Boston, MA: Birkhäuser Boston Inc.,
1996. (Progress in Mathematics V. 41).
[17] Takahiko Yoshida, Local torus actions modeled on the standard representation, Advances in
Mathematics 227 (2011), pp. 1914–1955.
Osaka City University
E-mail address: [email protected]
ZEILBERGER’S KOH THEOREM AND THE STRICT
UNIMODALITY OF q-BINOMIAL COEFFICIENTS
arXiv:1311.4480v2 [math.CO] 1 Apr 2014
FABRIZIO ZANELLO
Abstract. A recent nice result due to I. Pak and G. Panova is the strict unimodality
of the q-binomial coefficients a+b
b q (see [2] and also [3] for a slightly revised version of
their theorem). Since their proof used representation theory and Kronecker coefficients, the
authors also asked for an argument that would employ Zeilberger’s KOH theorem. In this
note, we give such a proof. Then, as a further application of our method, we also provide a
short proof of their conjecture that the difference between consecutive coefficients of a+b
b q
can get arbitrarily large, when we assume that b is fixed and a is large enough.
A sequence c1 , c2 , . . . , ct is unimodal if it does not increase strictly after a strict decrease.
It is symmetric if ci = ct−i for all i. The unimodality of the q-binomial coefficient
$$\binom{a+b}{b}_q = \frac{(1-q)(1-q^2)\cdots(1-q^{a+b})}{(1-q)(1-q^2)\cdots(1-q^a)\,\cdot\,(1-q)(1-q^2)\cdots(1-q^b)}\,,$$
which is easily proven to be a symmetric polynomial in q, is a classical and highly nontrivial
result in combinatorics. It was first shown in 1878 by J.J. Sylvester, and has since received
a number of other interesting proofs (see e.g. [4, 5, 7]). In particular, a celebrated paper
of K. O’Hara [1] provided a combinatorial proof for the unimodality of a+b
. O’Hara’s
b q
argument was subsequently expressed in algebraic terms by D. Zeilberger [8] by means of
the beautiful KOH identity. This identity decomposes a+b
into a finite sum of polynomials
b q
with nonnegative integer coefficients, which are all unimodal and symmetric about ab/2.
More precisely, fix integers $a \ge b \ge 2$. For any given partition $\lambda = (\lambda_1, \lambda_2, \dots)$ of $b$, set $Y_i = \sum_{j=1}^{i} \lambda_j$ for all $i \ge 1$, and $Y_0 = 0$. Then the KOH theorem can be stated as follows:

Lemma 1 (KOH). $\binom{a+b}{b}_q = \sum_{\lambda \vdash b} F_\lambda(q)$, where
$$F_\lambda(q) = q^{2 \sum_{i \ge 1} \binom{\lambda_i}{2}} \prod_{j \ge 1} \binom{j(a+2) - Y_{j-1} - Y_{j+1}}{\lambda_j - \lambda_{j+1}}_q.$$
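The identity is easy to check by brute force for small parameters. The following Python sketch (not part of the paper) stores polynomials as coefficient lists and verifies Lemma 1 for a = 7, b = 5.

from functools import lru_cache

def padd(p, r):
    out = [0] * max(len(p), len(r))
    for i, c in enumerate(p): out[i] += c
    for i, c in enumerate(r): out[i] += c
    return out

def pshift(p, k):                      # multiply by q^k
    return [0] * k + list(p)

def pmul(p, r):
    out = [0] * (len(p) + len(r) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(r):
            out[i + j] += a * b
    return out

@lru_cache(maxsize=None)
def gauss(m, k):                       # q-binomial coefficient [m choose k]_q
    if k < 0 or k > m: return (0,)
    if k == 0 or k == m: return (1,)
    return tuple(padd(gauss(m - 1, k - 1), pshift(gauss(m - 1, k), k)))

def partitions(b, largest=None):       # partitions of b in weakly decreasing order
    if b == 0:
        yield ()
        return
    for first in range(min(b, largest or b), 0, -1):
        for rest in partitions(b - first, first):
            yield (first,) + rest

def koh_term(a, lam):
    Y = [0]
    for part in lam: Y.append(Y[-1] + part)
    Y += [Y[-1]] * (len(lam) + 2)      # Y_j stabilises at b
    lam = list(lam) + [0]
    F = [1]
    for j in range(1, len(lam)):
        F = pmul(F, list(gauss(j * (a + 2) - Y[j - 1] - Y[j + 1], lam[j - 1] - lam[j])))
    return pshift(F, 2 * sum(l * (l - 1) // 2 for l in lam))

a, b = 7, 5
lhs = list(gauss(a + b, b))
rhs = [0]
for lam in partitions(b): rhs = padd(rhs, koh_term(a, lam))
assert padd(lhs, [-c for c in rhs]) == [0] * len(lhs)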
A recent nice result shown by I. Pak and G. Panova is a characterization of the strict
unimodality of q-binomial coefficients; i.e., they determined when a+b
strictly increases
b q
2010 Mathematics Subject Classification. Primary: 05A15; Secondary: 05A17.
Key words and phrases. q-binomial coefficient; Gaussian polynomial; unimodality.
from degree 1 to degree ⌊ab/2⌋ (see [2], and also [3] for a slightly revised version of the
theorem). Since their argument employed the algebraic machinery of Kronecker coefficients,
the authors asked whether a proof could also be given that uses Zeilberger’s KOH identity.
We do this in the present note. Then, as a further pithy application of this method, using
the KOH theorem we also give a very short proof of a conjecture stated in the same papers,
on the unbounded growth of the difference between consecutive coefficients of a+b
.
b q
The next lemma is a trivial and probably well-known fact of which we omit the proof.
Lemma 2. Let $c$ and $d$ be positive integers such that the q-binomial coefficient $\binom{c+d}{d}_q$ is strictly unimodal. Then, for any positive integer $t \le cd$ such that $t \ne cd - 2$, the product $\binom{t+1}{1}_q \binom{c+d}{d}_q$ is strictly unimodal (in all nonnegative degrees).
Theorem 3 ([2, 3]). The q-binomial coefficient $\binom{a+b}{b}_q$ is strictly unimodal if and only if $a = b = 2$ or $b \ge 5$, with the exception of
$$(a, b) = (6, 5), (10, 5), (14, 5), (6, 6), (7, 6), (9, 6), (11, 6), (13, 6), (10, 7).$$
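For small parameters the characterization can be checked directly. The following sympy-based sketch (not part of the paper; the helper names are ours) expands the q-binomial coefficient, tests strict increase from degree 1 up to the middle degree, and is expected to recover the exceptional pairs listed above with b ≤ 7 and a ≤ 14.

from functools import reduce
from sympy import symbols, cancel, Poly

q = symbols('q')

def qbinom_coeffs(a, b):
    num = reduce(lambda acc, i: acc * (1 - q**i), range(b + 1, a + b + 1), 1)
    den = reduce(lambda acc, i: acc * (1 - q**i), range(1, a + 1), 1)
    return Poly(cancel(num / den), q).all_coeffs()[::-1]   # coefficients, ascending in q

def strictly_unimodal(c):
    mid = (len(c) - 1) // 2                                # floor(ab/2)
    return all(c[i] < c[i + 1] for i in range(1, mid))

print(sorted((a, b) for b in range(5, 8) for a in range(b, 15)
             if not strictly_unimodal(qbinom_coeffs(a, b))))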
Proof. We can assume that $b \ge 5$, otherwise, as it is also noted in [2, 3], the result is easy to show. By Lemma 1, since all terms in the KOH decomposition of $\binom{a+b}{b}_q$ are unimodal and symmetric with respect to $ab/2$, in order to show that $\binom{a+b}{b}_q$ is strictly unimodal, it clearly suffices to determine, for each positive degree up to $ab/2$, some suitable KOH term that is strictly increasing in that degree. We begin by showing that, for any $a \ge b \ge 2$, $\binom{a+b}{b}_q$ strictly increases up to degree $ab/2 - a$ for $b$ even, and up to degree $ab/2 - a/2$ for $b$ odd.
Let $b = 2m$ be even. Then the KOH term contributed by the partition $\lambda = (\lambda_1 = 2, \dots, \lambda_{m-1} = 2, \lambda_m = 1, \lambda_{m+1} = 1)$ of $b$ is given by:
$$F_\lambda(q) = q^{2(m-1)} \binom{(m+1)(a+2) - (2m-1) - 2m}{1}_q \binom{(m-1)(a+2) - 2(m-2) - (2m-1)}{1}_q = q^{b-2} \binom{ab/2 + a - b + 3}{1}_q \binom{ab/2 - a - b + 3}{1}_q.$$
Notice that the product of the last two q-binomial coefficients is strictly increasing (by 1) from degree 0 to degree $ab/2 - a - b + 2$. Also, $\binom{a+b}{b}_q$ is clearly strictly increasing from degree 1 to degree $b-2$, since so is the usual partition function $p(n)$ (see e.g. [6], Chapter 1). From this, we easily have that $\binom{a+b}{b}_q$ strictly increases from degree 1 to degree $(ab/2 - a - b + 2) + (b - 2) = ab/2 - a$.

The proof for $b = 2m+1$ odd, giving us that $\binom{a+b}{b}_q$ is strictly increasing up to degree $ab/2 - a/2$, is similar (using $\lambda = (\lambda_1 = 2, \dots, \lambda_m = 2, \lambda_{m+1} = 1)$) and thus will be omitted.
Now, in order to show that for the desired values of $a$ and $b$, $\binom{a+b}{b}_q$ strictly increases in each of the remaining degrees up to $ab/2$, we consider three cases depending on the residue of $b$ modulo 3. We start with $b \equiv 0$ modulo 3, and assume that $b \ge 15$. The KOH term corresponding to the partition $\lambda = (b/3, b/3, b/3)$ of $b$ is given by:
$$F_\lambda(q) = q^{6\binom{b/3}{2}} \binom{3(a+2) - 2b/3 - b}{b/3}_q = q^{b(b-3)/3} \binom{(3a - 2b + 6) + b/3}{b/3}_q.$$
Notice that $b(b-3)/3 < ab/2 - a$, and $3a - 2b + 6 \ge 15$. Thus, it easily follows by induction that, for $b \ge 15$, the strict unimodality of $\binom{(3a-2b+6)+b/3}{b/3}_q$ implies that of $\binom{a+b}{b}_q$, as desired.
Let now $b \equiv 1$ modulo 3, and assume $b \ge 19$. By considering the partition $\lambda = ((b-1)/3, (b-1)/3, (b-1)/3, 1)$ of $b$, we get:
$$F_\lambda(q) = q^{(b-1)(b-4)/3} \binom{4a - 2b + 9}{1}_q \binom{3a - 2b + 8 + (b-4)/3}{(b-4)/3}_q.$$
It is easy to check that, under the current assumptions on $a$ and $b$, we have $(b-1)(b-4)/3 < ab/2 - a$ and $(3a - 2b + 8)(b-4)/3 \ge (4a - 2b + 8) + 3$. In particular, we are under the hypotheses of Lemma 2. Thus, since $3a - 2b + 8 \ge 15$, the strict unimodality of $\binom{a+b}{b}_q$ follows by induction from that of $\binom{3a-2b+8+(b-4)/3}{(b-4)/3}_q$, for all $b \ge 19$, as we wanted to show.
The treatment of the case $b \equiv 2$ modulo 3, $b \ge 20$, is analogous so we will omit the details. We only remark here that one considers the partition $\lambda = ((b-2)/3, (b-2)/3, (b-2)/3, 1, 1)$ of $b$, whose contribution to the KOH expansion of $\binom{a+b}{b}_q$ is:
$$F_\lambda(q) = q^{(b-2)(b-5)/3} \binom{5a - 2b + 11}{1}_q \binom{3a - 2b + 10 + (b-5)/3}{(b-5)/3}_q.$$
The strict unimodality of $\binom{a+b}{b}_q$, for $b \ge 20$, then follows in a similar fashion from that of $\binom{3a-2b+10+(b-5)/3}{(b-5)/3}_q$, by employing Lemma 2 and induction.
Therefore, it remains to show the theorem for $5 \le b \le 17$ ($b \ne 15$). We will assume for simplicity that $a \ge 2b + 13$, the result being easy to verify directly for the remaining values of $a$. The KOH term contributed by the partition $(b)$ of $b$ in the expansion of $\binom{a+b}{b}_q$ is:
$$F_{(b)}(q) = q^{2\binom{b}{2}} \binom{a+2-b}{b}_q = q^{b(b-1)} \binom{(a-2b+2)+b}{b}_q.$$
Clearly, since $a \ge 2b + 13$, we have $b(b-1) < ab/2 - a$ and $a - 2b + 2 \ge 15$. Thus, by induction, the strict unimodality of $\binom{(a-2b+2)+b}{b}_q$ implies that of $\binom{a+b}{b}_q$, as desired.
In Remark 3.6 of [2, 3], the authors also conjectured that, roughly speaking, the difference
between consecutive coefficients of a q-binomial coefficient is eventually larger than any
fixed integer. As a further nice, and very brief, application of our method, we answer this
conjecture in the positive using the KOH identity. (Just notice that unlike in the original
formulation of the conjecture, our proof will assume that b is fixed and only a is large enough.)
Proposition 4. Fix any integer $d \ge 1$. Then there exist integers $a_0$, $b$ and $L$ such that, if $\binom{a+b}{b}_q = \sum_{i=0}^{ab} c_i q^i$, then $c_i - c_{i-1} \ge d$, for all indices $L \le i \le ab/2$ and for all $a \ge a_0$.

Proof. Consider the partition $\lambda^{[k]} = (\lambda^{[k]}_1 = b-k, \lambda^{[k]}_2 = 1, \dots, \lambda^{[k]}_{k+1} = 1)$ of $b$, where $k \ge 1$. It is easy to see that its contribution to the KOH identity for $\binom{a+b}{b}_q$ is given by:
$$F_{\lambda^{[k]}}(q) = q^{(b-k)(b-k-1)} \binom{(k+1)(a+2) - 2b + 1}{1}_q \binom{a - 2b + 2k + 2 + (b-k-1)}{b-k-1}_q.$$
Set for instance $b = 2d+4$ and $a_0 = (d+2)(d+3) + 6$, where we can assume $d \ge 2$. A standard computation gives that, for any $a \ge a_0$ and $k \le b/2 - 2 = d$, we are under the hypotheses of Lemma 2. Hence, by Theorem 3, each polynomial $F_{\lambda^{[k]}}(q)$ is strictly unimodal from degree $(b-k)(b-k-1)$ on, and the theorem now immediately follows by choosing $L = (b-1)(b-2) + 1 = 4d^2 + 10d + 7$ and considering the coefficients of $\sum_{k=1}^{d} F_{\lambda^{[k]}}(q)$.
1. Acknowledgements
The idea of this paper originated during a visit to UCLA in October 2013, for whose
invitation we warmly thank Igor Pak. We wish to thank the referee for a careful reading of
our manuscript and for helpful suggestions that improved the presentation, and Igor Pak,
Greta Panova, and Richard Stanley for several helpful discussions. We are also very grateful
to Doron Zeilberger for comments (and for calling this a “proof from the book”). This work
was done while the author was partially supported by a Simons Foundation grant (#274577).
References
[1] K. O’Hara: Unimodality of Gaussian coefficients: a constructive proof, J. Combin. Theory Ser. A 53
(1990), no. 1, 29–52.
[2] I. Pak and G. Panova: Strict unimodality of q-binomial coefficients, C. R. Math. Acad. Sci. Paris 351
(2013), no. 11-12, 415–418.
[3] I. Pak and G. Panova: Strict unimodality of q-binomial coefficients (new version), preprint. Available
on the arXiv.
[4] R. Proctor: Solution of two difficult combinatorial problems using linear algebra, Amer. Math. Monthly
89 (1982), no. 10, 721–734.
[5] R. Stanley: Weyl groups, the hard Lefschetz theorem, and the Sperner property, SIAM J. Algebraic
Discrete Methods 1 (1980), no. 2, 168–184.
[6] R. Stanley: “Enumerative Combinatorics”, Vol. I, Second Ed., Cambridge University Press, Cambridge,
U.K. (2012).
[7] J.J. Sylvester: Proof of the hitherto undemonstrated fundamental theorem of invariants, Collect. Math.
papers, Vol. 3, Chelsea, New York (1973), 117–126.
[8] D. Zeilberger: Kathy O’Hara’s constructive proof of the unimodality of the Gaussian polynomials, Amer.
Math. Monthly 96 (1989), no. 7, 590–602.
Department of Mathematics, MIT, Cambridge, MA 02139-4307 and Department of Mathematical Sciences, Michigan Tech, Houghton, MI 49931-1295
E-mail address: [email protected], [email protected]
MNRAS 000, 1–4 (2017)
Preprint 15 June 2017
Compiled using MNRAS LATEX style file v3.0
An unbiased estimator for the ellipticity from image moments
Nicolas Tessore⋆
arXiv:1705.01109v2 [astro-ph.CO] 14 Jun 2017
Jodrell Bank Centre for Astrophysics, University of Manchester, Alan Turing Building, Oxford Road, Manchester, M13 9PL, UK
Accepted 2017 June 14. Received 2017 May 31; in original form 2017 May 07
ABSTRACT
An unbiased estimator for the ellipticity of an object in a noisy image is given in terms of
the image moments. Three assumptions are made: i) the pixel noise is normally distributed,
although with arbitrary covariance matrix, ii) the image moments are taken about a fixed centre,
and iii) the point-spread function is known. The relevant combinations of image moments are
then jointly normal and their covariance matrix can be computed. A particular estimator for the
ratio of the means of jointly normal variates is constructed and used to provide the unbiased
estimator for the ellipticity. Furthermore, an unbiased estimate of the covariance of the new
estimator is also given.
Key words: gravitational lensing: weak – methods: statistical – techniques: image processing
1 INTRODUCTION
A number of applications in astronomy require the measurement of
shapes of objects from observed images. A commonly used shape
descriptor is the ellipticity χ, which is defined in terms of the central
image moments m pq as the complex number
$$\chi = \chi_1 + \mathrm{i}\,\chi_2 = \frac{m_{20} - m_{02} + 2\,\mathrm{i}\,m_{11}}{m_{20} + m_{02}}\,. \tag{1}$$
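As an illustration (not the paper's code), the ellipticity (1) can be computed from a pixelised image with the moments taken about a fixed centre as follows; the image array and the centre (in x, y pixel coordinates) are assumed to be given.

import numpy as np

def ellipticity(image, centre):
    y, x = np.indices(image.shape)
    dx, dy = x - centre[0], y - centre[1]
    m20 = np.sum(image * dx**2)
    m02 = np.sum(image * dy**2)
    m11 = np.sum(image * dx * dy)
    return (m20 - m02 + 2j * m11) / (m20 + m02)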
For example, in weak gravitational lensing, the gravitational field
distorts the observed shapes of background galaxies, and this shear
can be detected in ellipticity measurements. This is possible because
the observed ellipticity of a source affected by gravitational lensing
is (see e.g. Bartelmann & Schneider 2001)
$$\chi = \frac{\chi^{\mathrm{s}} + 2g + g^2 \chi^{\mathrm{s}*}}{1 + |g|^2 + 2\,\Re[g\,\chi^{\mathrm{s}*}]}\,, \tag{2}$$
where χs is the intrinsic ellipticity of the source, and g = g1 + i g2
is the so-called reduced shear of the gravitational lens. In the limit
of weak lensing, this can be approximated to linear order as
χ ≈ χs + 2g .
(3)
Averaging over an ensemble of randomly-oriented background
sources, i.e. $\langle \chi^{\mathrm{s}} \rangle = 0$, the weak lensing equation (3) yields
$$\langle \chi \rangle = 2 \langle g \rangle\,. \tag{4}$$
The observed ellipticities are thus a direct estimator for the shear g
from gravitational lensing. Similar ideas are used in Cosmology (for
a recent review, see Kilbinger 2015). Here, cosmic shear from the
large-scale structure of the universe imprints a specific signature
onto the ellipticity two-point correlation functions,
ξij (r) = ⟨ χi (x) χj (y) ⟩|x−y|=r ,   (5)
⋆ Email: [email protected]
where the average is taken over pairs of sources with the given
separation r on the sky. Note that both the one-point function (4)
and the two-point function (5) depend on the mean ellipticity over
a potentially large sample of sources.
In practice, an estimator χ̂ is used to measure the ellipticities
of observed sources. In order to not introduce systematic errors into
applications such as the above, the ellipticity estimator χ̂ must be
unbiased, i.e. E[ χ̂] = χ. One of the biggest problems for estimators
that directly work on the data is the noise bias (Refregier et al.
2012) arising from pixel noise in the observations. For example,
the standard approach to moment-based shape measurement is to
obtain estimates m̂ pq of the second-order moments from the (noisy)
data and use (1) directly as an ellipticity estimate,
ê = ( m̂20 − m̂02 + 2 i m̂11 ) / ( m̂20 + m̂02 ) .   (6)
The statistical properties of this estimator have been studied by
Melchior & Viola (2012) and Viola et al. (2014), who assumed that
the image moments are jointly normal with some given variance
and correlation. The estimator (6) then follows the distribution of
Marsaglia (1965, 2006) for the ratio of jointly normal variates. None
of the moments of this distribution exist, and even for a finite sample, small values in the denominator can quickly cause significant
biases and large variances. The estimator (6) is thus generally poorly
behaved, unless the signal-to-noise ratio of the data is very high.
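The severity of this problem is easy to reproduce numerically. The following short Python snippet (an illustration added here, not part of the original text) draws jointly normal numerator and denominator values and shows that the sample mean of their ratio does not settle on the ratio of the means; the chosen numbers are arbitrary.

import numpy as np

# Toy illustration of the noise bias of a ratio estimate such as (6):
# x/y with normally distributed y is biased and heavy-tailed, because y can
# come arbitrarily close to zero.
rng = np.random.default_rng(1)
mu_x, mu_y, sigma = 0.2, 1.0, 0.5           # arbitrary illustrative values
x = mu_x + sigma * rng.standard_normal(1_000_000)
y = mu_y + sigma * rng.standard_normal(1_000_000)
print(np.mean(x / y), mu_x / mu_y)          # the sample mean is far from 0.2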
Here, a new unbiased estimator for the ellipticity χ from the
second-order image moments is proposed. First, it is shown that
for normally-distributed pixel noise with known covariance matrix
and a fixed centre, the relevant combinations of image moments are
indeed jointly normal and that their covariance matrix can easily
be computed. In the appendix, an unbiased estimator for the ratio
of the means of jointly normal variates is constructed, which can
subsequently be applied to the image moments. This produces the
ellipticity estimate, as well as unbiased estimates of its variance and
covariance.
2 AN UNBIASED ESTIMATOR FOR THE ELLIPTICITY
It is assumed that the data is a random vector d = (d1, d2, . . . ) of
pixels following a multivariate normal distribution centred on the
unknown true signal µ,
d ∼ N(µ, Σ) ,   (7)
where Σ is the covariance matrix for the noise, which is assumed to be known but not restricted to a particular diagonal shape. The observed signal usually involves a point-spread function (PSF), and it is further assumed that this effect can be approximated as a linear convolution of the true signal and the discretised PSF. In this case, definition (1) can be extended to obtain the true ellipticity of the object before convolution,
χ = [ m20 − m02 − m00 (π20 − π02) + 2 i m11 − 2 i m00 π11 ] / [ m20 + m02 − m00 (π20 + π02) ] ,   (8)
from the central moments πpq of the (normalised) PSF. Fixing a centre (x̄, ȳ), the relevant combinations of moments to compute the ellipticity from data d are thus
u = ∑i αi di ,   αi = wi [ (xi − x̄)² − (yi − ȳ)² − π20 + π02 ] ,
v = ∑i βi di ,   βi = wi [ 2 (xi − x̄)(yi − ȳ) − 2 π11 ] ,
s = ∑i γi di ,   γi = wi [ (xi − x̄)² + (yi − ȳ)² − π20 − π02 ] ,   (9)
where wi is the window function of the observation. To obtain the true ellipticity estimate of the signal, and for the PSF correction (8) to remain valid, the window function must be unity over the support of the signal.¹
¹ This is in contrast to the weight functions that are sometimes used in moment-based methods to reduce the influence of noise far from the centre.
Due to the linearity in the pixel values di, the vector (u, v, s) can be written in matrix form,
(u, v, s) = M d ,   (10)
where the three rows αi, βi, γi of matrix M are defined by (9). The random vector (u, v, s) is hence normally distributed,
(u, v, s) ∼ N(µuvs, Σuvs) ,   (11)
with unknown mean µuvs = (µu, µv, µs) = M µ and known 3 × 3 covariance matrix Σuvs = M Σ M^T with entries
Σuu = ∑i,j αi αj Σij ,   Σuv = Σvu = ∑i,j αi βj Σij ,   Σus = Σsu = ∑i,j αi γj Σij ,
Σvv = ∑i,j βi βj Σij ,   Σvs = Σsv = ∑i,j βi γj Σij ,   Σss = ∑i,j γi γj Σij ,   (12)
where Σij are the entries of the covariance matrix Σ of the pixel noise. Hence the covariance matrix Σuvs of the moments can be computed if the pixel noise statistics are known.
The true ellipticity (8) of the signal can be written in terms of the mean values of the variates u, v and s defined in (9) as
χ = χ1 + i χ2 = ( µu + i µv ) / µs .   (13)
The problem is to find an unbiased estimate of χ1 and χ2 from the observed values of u, v and s. In appendix A, an unbiased estimator is given for the ratio of the means of two jointly normal random variables. It can be applied directly to the ellipticity (13). First, two new variates p and q are introduced,
p = u − a s ,   q = v − b s ,   a = Σus / Σss ,   b = Σvs / Σss ,   (14)
where a, b are constants. This definition corresponds to (A2) in
the univariate case, and therefore both (p, s) and (q, s) are pairs of
independent normal variates. The desired estimator for the ellipticity
is then
χ̂1 = a + p ĝ(s) ,
χ̂2 = b + q ĝ(s) .
(15)
Because the mean µs is always positive for a realistic signal, the
function ĝ(s) is given by (A3). From the expectation
E[ χ̂1 ] = a + E[p] E[ĝ(s)] = a + (µu − a µs )/µs = χ1 ,
E[ χ̂2 ] = b + E[q] E[ĝ(s)] = b + (µv − b µs )/µs = χ2 ,
(16)
it follows that χ̂ = χ̂1 + i χ̂2 is indeed an unbiased estimator for the
true ellipticity χ of the signal.
In addition, an unbiased estimate of the variance of χ̂1 and χ̂2 is provided by (A9),
V̂ar[χ̂1] = p² ĝ(s)² − [ (p² − Σuu)/Σss + a² ] [ 1 − s ĝ(s) ] ,
V̂ar[χ̂2] = q² ĝ(s)² − [ (q² − Σvv)/Σss + b² ] [ 1 − s ĝ(s) ] .   (17)
Similarly, there is an unbiased estimator for the covariance,
Ĉov[χ̂1, χ̂2] = p q ĝ(s)² − [ (p q − Σuv)/Σss + a b ] [ 1 − s ĝ(s) ] .   (18)
It follows that the individual estimates of the ellipticity components
are in general not independent. However, for realistic pixel noise,
window functions and PSFs, the correlations between u, v and s,
and hence χ̂1 and χ̂2 , can become very small.
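As an illustration of how eqs. (9)-(18) can be combined in practice, the following Python sketch (added here for concreteness; it is not part of the original text) computes the estimates for a single image under the simplifying assumption of uncorrelated pixel noise with known standard deviation. All names, and the use of scipy.special.erfcx for the function of eq. (A3), are choices of this sketch.

import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

def ellipticity_estimate(image, sigma_pix, centre, window, psf_moments=(0.0, 0.0, 0.0)):
    # Sketch of eqs. (9)-(18) for uncorrelated Gaussian pixel noise of standard
    # deviation sigma_pix, a fixed centre (x0, y0), a window that is unity over
    # the support of the signal, and PSF central moments (pi20, pi02, pi11).
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x0, y0 = centre
    pi20, pi02, pi11 = psf_moments

    # per-pixel weights alpha_i, beta_i, gamma_i of eq. (9)
    alpha = window * ((x - x0)**2 - (y - y0)**2 - pi20 + pi02)
    beta = window * (2.0 * (x - x0) * (y - y0) - 2.0 * pi11)
    gamma = window * ((x - x0)**2 + (y - y0)**2 - pi20 - pi02)
    u, v, s = (np.sum(w * image) for w in (alpha, beta, gamma))

    # covariance entries of eq. (12) for white noise: Sigma_ab = sigma_pix**2 * sum_i a_i b_i
    var = sigma_pix**2
    Suu, Svv, Sss = var * (alpha**2).sum(), var * (beta**2).sum(), var * (gamma**2).sum()
    Suv, Sus, Svs = var * (alpha * beta).sum(), var * (alpha * gamma).sum(), var * (beta * gamma).sum()

    # decorrelated variates of eq. (14) and the function g_hat(s) of eq. (A3)
    a, b = Sus / Sss, Svs / Sss
    p, q = u - a * s, v - b * s
    sig_s = np.sqrt(Sss)
    g = np.sqrt(np.pi / 2.0) / sig_s * erfcx(s / (np.sqrt(2.0) * sig_s))

    chi1, chi2 = a + p * g, b + q * g                              # eq. (15)
    k = 1.0 - s * g
    var1 = p**2 * g**2 - ((p**2 - Suu) / Sss + a**2) * k           # eq. (17)
    var2 = q**2 * g**2 - ((q**2 - Svv) / Sss + b**2) * k
    cov12 = p * q * g**2 - ((p * q - Suv) / Sss + a * b) * k       # eq. (18)
    return chi1, chi2, var1, var2, cov12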
To demonstrate that the proposed estimator is in fact unbiased
under the given assumptions of i) normal pixel noise with known
covariance, ii) a fixed centre for the image moments, and iii) the
discrete convolution with a known PSF, a Monte Carlo simulation
was performed using mock observations of an astronomical object.
The images are postage stamps of 49×49 pixels, containing a centred
source that is truncated near the image borders. A circular aperture
is used as window function. The source is elliptical and follows
the light profile of de Vaucouleurs (1948). The ellipse containing
half the total light has a 10 pixel semi-major axis and ellipticity as
specified. Where indicated, the signal is convolved with a Gaussian
PSF with 5 pixel FWHM. The pixel noise is uncorrelated with unit
variance. The normalisation N of the object (i.e. the total number of
counts) varies to show the effect of the signal-to-noise ratio on the
variance of the estimator.2 The signal ellipticity χ is computed from
the image before the PSF is applied and noise is added. The mean
of the estimator χ̂ is computed from 106 realisations of noise for
the same signal. Also computed are the square root of the sample
variance Var[ χ̂] and the mean of the estimated variance V̂ar[ χ̂],
respectively. The results shown in Table 1 indicate that the estimator
performs as expected.
² To compare the results to a given data set, it is then possible to scale the
data so that the noise has unit variance, and compare the number of counts.
For example, the control-ground-constant data set of the GREAT3 challenge
(Mandelbaum et al. 2014) has mostly N = 500–1000.
Table 1. Monte Carlo results for the unbiased ellipticity estimator

χ1       χ2        PSF   counts   χ̂1             χ̂2             Var[χ̂1]^1/2   Var[χ̂2]^1/2   V̂ar[χ̂1]^1/2   V̂ar[χ̂2]^1/2
0.1107   0.0000    no    500      0.1113 (08)     0.0006 (11)     0.762         1.112         0.762         1.111
0.1107   0.0000    no    1000     0.1105 (02)    −0.0002 (02)     0.216         0.214         0.216         0.214
0.1107   0.0000    no    2000     0.1108 (01)     0.0000 (01)     0.104         0.103         0.104         0.103
0.1107   0.0000    yes   500      0.1115 (07)     0.0006 (06)     0.717         0.598         0.717         0.599
0.1107   0.0000    yes   1000     0.1111 (02)    −0.0001 (02)     0.216         0.214         0.215         0.214
0.1107   0.0000    yes   2000     0.1108 (01)     0.0000 (01)     0.104         0.103         0.104         0.103
0.1820   −0.1842   no    500      0.1820 (07)    −0.1840 (09)     0.704         0.853         0.704         0.853
0.1820   −0.1842   no    1000     0.1817 (02)    −0.1842 (02)     0.225         0.226         0.225         0.226
0.1820   −0.1842   no    2000     0.1822 (01)    −0.1842 (01)     0.108         0.108         0.108         0.108
0.1820   −0.1842   yes   500      0.1802 (07)    −0.1840 (06)     0.658         0.616         0.658         0.616
0.1820   −0.1842   yes   1000     0.1819 (02)    −0.1845 (02)     0.223         0.224         0.224         0.225
0.1820   −0.1842   yes   2000     0.1819 (01)    −0.1841 (01)     0.108         0.108         0.108         0.108
0.0000   0.5492    no    500     −0.0005 (08)     0.5489 (15)     0.793         1.451         0.794         1.450
0.0000   0.5492    no    1000    −0.0001 (02)     0.5494 (03)     0.248         0.323         0.248         0.322
0.0000   0.5492    no    2000    −0.0002 (01)     0.5493 (01)     0.117         0.149         0.117         0.149
0.0000   0.5492    yes   500      0.0001 (07)     0.5473 (10)     0.720         0.957         0.720         0.960
0.0000   0.5492    yes   1000     0.0002 (02)     0.5488 (03)     0.244         0.315         0.244         0.316
0.0000   0.5492    yes   2000     0.0001 (01)     0.5491 (01)     0.116         0.147         0.116         0.147
3 CONCLUSION & DISCUSSION
The unbiased estimator (15) provides a new method for ellipticity measurement from noisy images. Its simple and analytic form
allows quick implementation and fast evaluation, and statistics for
the results can be obtained directly with unbiased estimates of the
variance (17) and covariance (18).
Using an unbiased estimator for the ellipticity of the signal µ
eliminates the influence of noise from a subsequent analysis of the
results (the so-called “noise bias” in weak lensing, Refregier et al.
2012). However, depending on the application, other kinds of biases
might exist even for a noise-free image. For example, due to the
discretisation of the image, the signal ellipticity can differ from the
intrinsic ellipticity of the observed object. This “pixellation bias”
(Simon & Schneider 2016) remains an issue in applications such as
weak lensing, where the relevant effects must be measured from the
intrinsic ellipticity of the objects.
Furthermore, a fixed centre ( x̄, ȳ) for the moments has been
assumed throughout. For a correct ellipticity estimate, this must
be the centroid of the signal, which is usually estimated from the
data itself. Centroid errors (Melchior & Viola 2012) might therefore
ultimately bias the ellipticity estimator or increase its variance,
although this currently does not seem to be a significant effect.
In practice, additional biases might arise when the assumed
requirements for the window function and PSF are not fulfilled by
the data. The estimator should therefore always be carefully tested
for the application at hand.
Lastly, the ellipticity estimate might be improved by suitable
filtering of the observed image. A linear filter with matrix A can
be applied to the image before estimating the ellipticity, since the
transformed pixels remain multivariate normal with mean µ ′ = A µ
and covariance matrix Σ ′ = A Σ AT . Examples of viable filters are
nearest-neighbour or bilinear interpolation, as well as convolution.
A combination of these filters could be used to perform PSF deconvolution on the observed image, as an alternative to the algebraic
PSF correction (8).
ACKNOWLEDGEMENTS
NT would like to thank S. Bridle for encouragement and many
conversations about shape measurement. The author acknowledges
support from the European Research Council in the form of a Consolidator Grant with number 681431.
REFERENCES
Bartelmann M., Schneider P., 2001, Phys. Rep., 340, 291
Kilbinger M., 2015, Rep. Prog. Phys., 78, 086901
Mandelbaum R., et al., 2014, ApJS, 212, 5
Marsaglia G., 1965, J. Amer. Statist. Assoc., 60, 193
Marsaglia G., 2006, J. Stat. Softw., 16, 1
Melchior P., Viola M., 2012, MNRAS, 424, 2757
Refregier A., Kacprzak T., Amara A., Bridle S., Rowe B., 2012, MNRAS,
425, 1951
Simon P., Schneider P., 2016, preprint, (arXiv:1609.07937)
Viola M., Kitching T. D., Joachimi B., 2014, MNRAS, 439, 1909
Voinov V. G., 1985, Sankhya B, 47, 354
de Vaucouleurs G., 1948, Ann. Astrophys., 11, 247
APPENDIX A: A RATIO ESTIMATOR FOR NORMAL
VARIATES
Let x and y be jointly normal variates with unknown means µx
and µy , and known variances σx2 and σy2 and correlation ρ. The
goal here is to find an unbiased estimate of the ratio r of their
means,
r = µx /µy .
(A1)
Under the additional assumption that the sign of µy is known, an
unbiased estimator for r can be found in two short steps.
First, the transformation of Marsaglia (1965, 2006) is used to
construct a new variate,
w = x − c y ,   c = ρ σx / σy .   (A2)
The constant c is chosen so that E[wy] = E[w] E[y]. It is clear that w
is normal, and that variates w and y are jointly normal, uncorrelated,
and thus independent. Note that c = 0 and w = x for independent x
and y.
Secondly, Voinov (1985) derived an unbiased estimator for
the inverse mean of the normal variate y, i.e. a function ĝ(y)
with E[ĝ(y)] = 1/µy . For the relevant case of an unknown but
positive mean µy > 0, this function is given by
ĝ(y) = √π / (√2 σy) · exp[ y² / (2 σy²) ] · erfc[ y / (√2 σy) ] .   (A3)
It is then straightforward to construct an estimator for the ratio (A1),
r̂ = c + w ĝ(y) .   (A4)
Since w and y are independent, the expectation is
E[r̂] = c + E[w] E[ĝ(y)] = c + (µx − c µy )/µy = µx /µy , (A5)
which shows that r̂ is in fact an unbiased estimator for r.
The variance of the ratio estimator r̂ is formally given by
Var[r̂] = ( E[w]² + Var[w] ) Var[ĝ(y)] + Var[w] E[ĝ(y)]² .   (A6)
As pointed out by Voinov (1985), the variance Var[ĝ(y)] does not
exist for function (A3) due to a divergence at infinity. The confidence
interval h with probability p,
Pr[ | ĝ(y) − 1/µy | < h ] = p ,   (A7)
however, is well-defined, and the variance of ĝ(y) remains finite in
applications where infinite values of y are not observed. In this case,
an unbiased estimator
V̂ar[ĝ(y)] = ĝ(y)² − [ 1 − y ĝ(y) ] / σy²   (A8)
exists and, together with Voinov’s estimator for 1/µy², yields
V̂ar[r̂] = w² ĝ(y)² − [ (w² − σx²)/σy² + c² ] [ 1 − y ĝ(y) ]   (A9)
as an unbiased estimate of the variance of the estimator r̂.
When y is significantly larger than its standard deviation,
e.g. y/σy > 10, the function ĝ(y) given by (A3) is susceptible
to numerical overflow. However, in this regime, it is also very well
approximated by its series expansion,
ĝ(y) ≈ 1/y − σy²/y³ + 3 σy⁴/y⁵ ,   y/σy > 10 .   (A10)
For even larger values y ≫ σy , this approaches 1/y, as expected.
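For reference, a compact Python rendering of the estimator of this appendix is given below (again an added sketch, not part of the original text); scipy.special.erfcx evaluates exp(z²) erfc(z) directly and therefore sidesteps the overflow just discussed.

import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

def g_hat(y, sigma_y):
    # Eq. (A3); erfcx keeps the product finite even when y >> sigma_y.
    return np.sqrt(np.pi / 2.0) / sigma_y * erfcx(y / (np.sqrt(2.0) * sigma_y))

def ratio_estimate(x, y, sigma_x, sigma_y, rho):
    # Unbiased ratio estimator (A4) with its variance estimate (A9),
    # assuming the mean of y is positive.
    c = rho * sigma_x / sigma_y                 # eq. (A2)
    w = x - c * y
    g = g_hat(y, sigma_y)
    r = c + w * g                               # eq. (A4)
    var_r = w**2 * g**2 - ((w**2 - sigma_x**2) / sigma_y**2 + c**2) * (1.0 - y * g)  # eq. (A9)
    return r, var_r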
This paper has been typeset from a TEX/LATEX file prepared by the author.
| 1 |
Parallel Pricing Algorithms
for Multi–Dimensional Bermudan/American Options
using Monte Carlo methods
Viet Dung Doan — Abhijeet Gaikwad — Mireille Bossy — Françoise Baude — Ian
Stokes-Rees
arXiv:0805.1827v1 [cs.DC] 13 May 2008
Parallel Pricing Algorithms
for Multi–Dimensional Bermudan/American Options
using Monte Carlo methods
Viet Dung Doan∗ , Abhijeet Gaikwad∗ , Mireille Bossy† , Françoise Baude∗ ,
Ian Stokes-Rees‡
Thèmes COM et NUM — Systèmes communicants et Systèmes numériques
Projets OASIS et TOSCA
Rapport de recherche n° 6530 — Mai 2008 — 16 pages
Abstract: In this paper we present two parallel Monte Carlo based algorithms for pricing
multi–dimensional Bermudan/American options. The first approach relies on computation of
the optimal exercise boundary while the second relies on classification of continuation and
exercise values. We also evaluate the performance of both the algorithms in a desktop
grid environment. We show the effectiveness of the proposed approaches in a heterogeneous
computing environment, and identify scalability constraints due to the algorithmic structure.
Key-words: Multi–dimensional Bermudan/American option, Parallel Distributed Monte
Carlo methods, Grid computing.
∗ INRIA, OASIS   † INRIA, TOSCA   ‡ Dept. Biological Chemistry & Molecular Pharmacology, Harvard Medical School
Parallel Pricing Algorithms for Multi-Dimensional Bermudan/American Options using a Monte Carlo Method
Résumé : In this paper, we present two Monte Carlo algorithms for pricing multi-dimensional Bermudan/American options. The first approach relies on the computation of the exercise boundary, while the second relies on the classification of exercise and continuation values. We evaluate the performance of the algorithms in a grid environment. We show the efficiency of the proposed approaches in a heterogeneous environment. We identify the scalability constraints due to the algorithmic structure.
Mots-clés : multi-dimensional Bermudan/American options, parallel Monte Carlo methods, grid computing.
1 Introduction
Options are derivative financial products which allow buying and selling of risks related to
future price variations. The option buyer has the right (but not obligation) to purchase
(for a call option) or sell (for a put option) any asset in the future (at its exercise date) at
a fixed price. Estimates of the option price are based on the well known arbitrage pricing
theory: the option price is given by the expected value of the option payoff at its exercise
date. For example, the price of a call option is the expected value of the positive part of
the difference between the market value of the underlying asset and the asset fixed price at
the exercise date. The main challenge in this situation is modelling the future asset price
and then estimating the payoff expectation, which is typically done using statistical Monte
Carlo (MC) simulations and careful selection of the static and dynamic parameters which
describe the market and assets.
Some of the widely used options include American option, where the exercise date is variable, and its slight variation Bermudan/American (BA) option, with the fairly discretized
variable exercise date. Pricing these options with a large number of underlying assets is
computationally intensive and requires several days of serial computational time (i.e. on a
single processor system). For instance, Ibanez and Zapatero (2004) [10] state that pricing
the option with five assets takes two days, which is not desirable in modern time critical
financial markets. Typical approaches for pricing options include the binomial method [5]
and MC simulations [6]. Since binomial methods are not suitable for high dimensional options, MC simulations have become the cornerstone for simulation of financial models in
the industry. Such simulations have several advantages, including ease of implementation
and applicability to multi–dimensional options. Although MC simulations are popular due
to their “embarrassingly parallel” nature, which, for simple simulations, allows an almost arbitrary
degree of near-perfect parallel speed-up, their applicability to pricing American options is
complex [10], [4], [12]. Researchers have proposed several approximation methods to improve
the tractability of MC simulations. Recent advances in parallel computing hardware such
as multi-core processors, clusters, compute “clouds”, and large scale computational grids
have also attracted the interest of the computational finance community. In literature, there
exist a few parallel BA option pricing techniques. Examples include Huang (2005) [9] or
Thulasiram (2002) [15] which are based on the binomial lattice model. However, a very
few studies have focused on parallelizing MC methods for BA pricing [16]. In this paper,
we aim to parallelize two American option pricing methods: the first approach proposed in
Ibanez and Zapatero (2004) [10] (I&Z) which computes the optimal exercise boundary and
the second proposed by Picazo (2002) [8] (CMC) which uses the classification of continuation
values. These two methods in their sequential form are similar to recursive programming
so that at a given exercise opportunity they trigger many small independent MC simulations to compute the continuation values. The optimal strategy of an American option is
to compare the exercise value (intrinsic value) with the continuation value (the expected
cash flow from continuing the option contract), then exercise if the exercise value is more
valuable. In the case of I&Z Algorithm the continuation values are used to parameterize the
exercise boundary whereas CMC Algorithm classifies them to provide a characterization of
the optimal exercise boundary. Later, both approaches compute the option price using MC
simulations based on the computed exercise boundaries.
Our roadmap is to study both the algorithms to highlight their potential for parallelization: for the different phases, our aim is to identify where and how the computation could be
split into independent parallel tasks. We assume a master-worker grid programming model,
where the master node splits the computation in such tasks and assigns them to a set of
worker nodes. Later, the master also collects the partial results produced by these workers.
In particular, we investigate parallel BA options pricing to significantly reduce the pricing
time by harnessing the computational power provided by the computational grid.
The paper is organized as follows. Sections 2 and 3 focus on two pricing methods and are
structured in a similar way: a brief introduction to present the method, sequential followed
by parallel algorithm and performance evaluation concludes each section. In section 4 we
present our conclusions.
2 Computing optimal exercise boundary algorithm
2.1 Introduction
In [10], the authors propose an option pricing method that builds a full exercise boundary
as a polynomial curve whose dimension depends on the number of underlying assets. This
algorithm consists of two phases. In the first phase the exercise boundary is parameterized.
For parameterization, the algorithm uses linear interpolation or regression of a quadratic
or cubic function at a given exercise opportunity. In the second phase, the option price
is computed using MC simulations. These simulations are run until the price trajectory
reaches the dynamic boundary computed in the first phase. The main advantage of this
method is that it provides a full parameterization of the exercise boundary and the exercise
rule. For American options, a buyer is mainly concerned in these values as he can decide
at ease whether or not to exercise the option. At each exercise date t, the optimal exercise
point St∗ is defined by the following implicit condition,
Pt (St∗ ) = I(St∗ )
(1)
where Pt (x) is the price of the American option on the period [t, T ], I(x) is the exercise value
(intrinsic value) of the option and x is the asset value at opportunity date t. As explained
in [10], these optimal values stem from the monotonicity and convexity of the price function
P (·) in (1). These are general properties satisfied by most of the derivative securities such
as maximum, minimum or geometric average basket options. However, for the problems
where the monotonicity and convexity of the price function can not be easily established,
this algorithm has to be revisited. In the following section we briefly discuss the sequential
algorithm followed by a proposed parallel solution.
2.2 Sequential Boundary Computation
The algorithm proposed in [10] is used to compute the exercise boundary. To illustrate this
approach, we consider a call BA option on the maximum of d assets modeled by Geometric
Brownian Motion (GBM). It is a standard example for the multi–dimensional BA option
with maturity date T , constant interest rate r and the price of this option at t0 is given as
Pt0 = E (exp (−rτ )Φ(Sτ , τ )|St0 )
where τ is the optimal stopping time ∈ {t1 , .., T }, defined as the first time ti such that
the underlying value Sτ surpasses the optimal exercise boundary at the opportunity τ
otherwise the option is held until τ = ∞. The payoff at time τ is defined as follows:
Φ(Sτ , τ ) = (maxi (Sτi ) − K)+ , where i = 1,..,d, S is the underlying asset price vector and K
is the strike price. The parameter d has a strong impact on the complexity of the algorithm,
except in some cases as the average options where the number of dimensions d can be easily
reduced to one. For an option on the maximum of d assets there are d separate exercise
regions which are characterized by d boundaries [10]. These boundaries are monotonic and
smooth curves in Rd−1 . The algorithm uses backward recursive time programming, with
a finite number of exercise opportunities m = 1, ..., NT . Each boundary is computed by
regression on J numbers of boundary points in Rd−1 . At each given opportunity, these
optimal boundary points are computed using N1 MC simulations. Further in case of an
option on the maximum of d underlying assets, for each asset the boundary points are computed.
It takes n iterations to converge each individual point. The complexity of this step is
O( ∑_{m=1}^{NT} d × J × m × N1 × (NT − m) × n ). After estimating J optimal boundary points
for each asset, d regressions are performed over these points to get d execution boundaries.
Let us assume that the complexity of this step is O(NT × regression(J)), where the complexity of the regression is assumed to be constant. After computing the boundaries at
all m exercise opportunities, in the second phase, the price of the option is computed using a standard MC simulation of N paths in Rd . The complexity of the pricing phase is
O(d × NT × N ). Thus the total complexity of the algorithm is as follows,
O( ∑_{m=1}^{NT} d × J × m × N1 × (NT − m) × n + NT × regression(J) + d × NT × N )
≈ O( NT² × J × d × N1 × n + NT × (J + d × N) )
For the performance benchmarks, we use the same simulation parameters as given in [10].
Consider a call BA option on the maximum of d assets with the following configuration.
K = 100, interest rate r = 0.05, volatility rate σ = 0.2,
dividend δ = 0.1, J = 128, N1 = 5e3, N = 1e6, d = 3,
NT = 9 and T = 3 years.
(2)
The sequential pricing of this option (2) takes 40 minutes. The distribution of the total time
for the different phases is shown in Figure 1. As can be seen, the data generation phase, which
simulates and calculates J optimal boundary points, consumes most of the computational
time. We believe that a parallel approach to this and other phases could dramatically
reduce the computational time. This inspires us to investigate a parallel approach for I&Z
Algorithm which is presented in the following section. The numerical results that we shall
provide indicate that the proposed parallel solution is more efficient compared with the serial
algorithm.
2.3 Parallel approach
In this section, a simple parallel approach for I&Z Algorithm is presented and the pseudocode for the same is given in Algorithm 1. This approach is inspired by the solution proposed by Muni Toke [16], though he presents it in the context of a low–order homogeneous parallel cluster. The algorithm consists of two phases. The first parallel phase is based
proposed by Muni Toke [16], though he presents it in the context of a low–order homogeneous parallel cluster. The algorithm consists of two phases. The first parallel phase is based
on the following observation: for each of the d boundaries, the computation of J optimal
boundary points at a given exercise date can be simulated independently. The optimal exerAlgorithm 1 Parallel Ibanez and Zapatero Algorithm
1: [glp] Generation of the J “Good Lattice Points”
2: for t = tNT to t1 do
3:
for di = 1 to d do
4:
for j = 1 to J in parallel do
5:
[calc] Computation of a boundary point with N1 Monte Carlo simulations
6:
end for
7:
[reg] Regression of the exercise boundary .
8:
end for
9: end for
10: for i = 1 to N in parallel do
11:
[mc] Computation of the partial option price.
12: end for
13: Estimation of the final option price by merging the partial prices.
The optimal exercise boundaries from opportunity date m back to m − 1 are computed as follows. Note that
at m = NT , the boundary is known (e.g. for a call option the boundary at NT is defined
as the strike value). Backward to m = NT − 1, we have to estimate J optimal points from
J initial good lattice points [10], [7] to regress the boundary to this time. The regression
of Rd → Rd is difficult to achieve in a reasonable amount of time in case of large number
of training points. To decrease the size of the training set we utilize “Good Lattice Points”
(GLPs) as described in [7],[14], and [3]. In particular case of a call on the maximum of d
assets, only a regression of Rd−1 → R is needed, but we repeat it d times.
Algorithm 1 computes GLPs using either the SSJ library [11] or the quantification
number sequences as presented in [13]. SSJ is a Java library for stochastic simulation and it
computes GLPs as a Quasi Monte Carlo sequence such as Sobol or Halton sequences. The
algorithm can also use the number sequences readily available at [1]. These sequences are
INRIA
Parallel Pricing Algorithms for Multi–Dimensional Bermudan/American Options
7
generated using an optimal quadratic quantizer of the Gaussian distribution in more than
one dimension. The [calc] phase of Algorithm 1 is embarrassingly parallel and the
J boundary points are equally distributed among the computing nodes. At each node, the
algorithm simulates N1 paths to compute the approximate points. Then Newton’s iterations
method is used to converge an individual approximated point to the optimal boundary point.
After computing J optimal boundary points, these points are collected by the master node,
for sequential regression of the exercise boundary. This node then repeats the same procedure
for every date t, in a recursive way, until t = t1 in the [reg] phase.
The [calc] phase provides the exact optimal exercise boundary at every opportunity
date. After computation of the boundary, in the last [mc] phase, the option is priced using
parallel MC simulations as shown in Algorithm 1.
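As an illustration of the final [mc] phase, the following Python sketch (not taken from the report) distributes the N pricing paths over a pool of worker processes, using the option parameters of configuration (2). The exercise_boundary function is a stub that stands in for the boundary regressed in the [reg] phase; the true I&Z boundary for a maximum option is a function of the remaining assets, which is simplified away here.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Illustrative parameters taken from configuration (2); the boundary is a stub.
r, sigma, delta, K, T, NT, d = 0.05, 0.2, 0.1, 100.0, 3.0, 9, 3
dt = T / NT

def exercise_boundary(m, s):
    # Stub for the regressed boundary at opportunity m: here, exercise when the
    # maximum asset price exceeds a fixed level; a real implementation would
    # evaluate the polynomial fitted in the [reg] phase.
    return s.max(axis=1) >= 1.2 * K

def price_chunk(args):
    n_paths, s0, seed = args
    rng = np.random.default_rng(seed)
    s = np.full((n_paths, d), s0)
    payoff = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for m in range(1, NT + 1):
        z = rng.standard_normal((n_paths, d))
        s = s * np.exp((r - delta - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        stop = alive & (exercise_boundary(m, s) | (m == NT))
        payoff[stop] = np.exp(-r * m * dt) * np.maximum(s[stop].max(axis=1) - K, 0.0)
        alive &= ~stop
    return payoff.sum(), n_paths

def parallel_price(N=1_000_000, s0=100.0, workers=16):
    # [mc] phase: each worker prices an independent chunk of paths and the
    # master merges the partial sums into the final estimate.
    chunks = [(N // workers, s0, seed) for seed in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(price_chunk, chunks))
    total, n = (sum(v) for v in zip(*parts))
    return total / n

if __name__ == "__main__":
    print(parallel_price(N=100_000, workers=4))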
2.4 Numerical results and performance
In this section we present performance and accuracy results due to the parallel I&Z Algorithm described in Algorithm 1. We price a basket BA call option on the maximum of 3
assets as given in (2). The start prices for the assets are varied as 90, 100, and 110. The
prices estimated by the algorithm are presented in Table 1. To validate our results we compare the estimated prices with the prices mentioned in Andersen and Broadies (1997) [2].
Their results are reproduced in the column labeled as “Binomial”. The last column of the
table indicates the errors in the estimated prices. As we can see, the estimated option prices
are close to the desired prices by acceptable marginal error and this error is represented by
a desirable 95% confidence interval.
As mentioned earlier, the algorithm relies on J number of GLPs to effectively compute
the optimal boundary points. Later the parameterized boundary is regressed over these
points. For the BA option on the maximum described in (2), Muni Toke [16] notes that J
smaller than 128 is not sufficient and prejudices the option price. To observe the effect of
the number of optimal boundary points on the accuracy of the estimated price, the number
of GLPs is varied as shown in Table 2. For this experiment, we set the start price of the
option as S0 = 90. The table indicates that increasing number of GLPs has negligible
impact on the accuracy of the estimated price. However, we observe the linear increase in
the computational time with the increase in the number of GLPs.
[INSERT TABLE 1 HERE]
To evaluate the accuracy of the computed prices by the parallel algorithm, we obtained
the numerical results with 16 processors. First, let us observe the effect of N1 , the number
of simulations required in the first phase of the algorithm, on the computed option price. In
[16], the author comments that the large values of N1 do not affect the accuracy of option
price. For these experiments, we set the number of GLPs, J, as 128 and vary N1 as shown
in Table 3. We can clearly observe that N1 in fact has a strong impact on the accuracy of
the computed option prices: the computational error decreases with the increased N1 . A
large value of N1 results in more accurate boundary points, hence more accurate exercise
boundary. Further, if the exercise boundary is accurately computed, the resulting option
prices are much closer to the true price. However this, as we can see in the third column,
S0i    Option Price    Variance (95% CI)    Binomial    Error
90     11.254          153.857 (0.024)      11.29       0.036
100    18.378          192.540 (0.031)      18.69       0.312
110    27.512          226.332 (0.035)      27.58       0.068
Table 1: Price of the call BA on the maximum of three assets (d = 3, with the spot price
S0i for i = 1, .., 3) using I&Z Algorithm. (r = 0.05, δ = 0.1, σ = 0.2, ρ = 0.0, T = 3 and
N = 9). The binomial values are referred to as the true values.
J       Price     Time (in minute)    Error
128     11.254    4.6                 0.036
256     11.258    8.1                 0.032
1024    11.263    29.5                0.027
Table 2: Impact of the value of J on the results of the maximum on three assets option
(S0 = 90). The binomial price is 11.29. Running time on 16 processors.
N1        Price     Time (in minute)    Error
5000      11.254    4.6                 0.036
10000     11.262    6.9                 0.028
100000    11.276    35.7                0.014
Table 3: Impact of the value of N1 on the results of the maximum on three assets option
(S0 = 90). The binomial price is 11.29. Running time on 16 processors.
comes at a cost of increased computational time. The I&Z algorithm highly relies on the
accuracy and the convergence rate of the optimal boundary points. While the former affects
the accuracy of the option price, the latter affects the speed up of the algorithm. In each
iteration, to converge to the optimal boundary point, the algorithm starts with an arbitrary
point with the strike price K often as its initial value. The algorithm then uses N1 random
MC paths to simulate the approximated point. A convergence criterion is used to optimize
this approximated point. The simulated point is assumed to be optimal when it satisfies the following condition, |Stn^{i,(initial)} − Stn^{i,(simulated)}| < ǫ = 0.01, where Stn^{i,(initial)} is the initial point at a given opportunity tn, i = 1..J, and Stn^{i,(simulated)} is the point simulated by using N1 MC simulations. In case the condition is not satisfied, this procedure is repeated, now with the initial point set to the newly simulated point Stn^{i,(simulated)}. Note that the number of iterations n required to reach the optimal value varies depending on the fixed precision in the Newton procedure (for instance, with a precision ǫ = 0.01, n varies from 30
to 60). We observed that not all boundary points take the same time for the convergence.
Some points converge faster to the optimal boundary points while some take longer than
usual. Since the algorithm has to wait until all the points are optimized, the slower points
increase the computational time, thus reducing the efficiency of the parallel algorithm, see
Figure 2.
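The convergence loop just described can be summarised by the following Python sketch (illustrative only; simulate_point stands in for the N1-path Monte Carlo estimate of a boundary point).

def converge_boundary_point(simulate_point, s_init, eps=0.01, max_iter=100):
    # Fixed-point iteration for one optimal boundary point: stop when two
    # successive Monte Carlo estimates differ by less than eps.
    s_old = s_init
    for n in range(1, max_iter + 1):
        s_new = simulate_point(s_old)
        if abs(s_new - s_old) < eps:
            return s_new, n          # converged after n iterations
        s_old = s_new
    return s_old, max_iter           # not converged within max_iter iterations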
Figure 1: The time distribution for the sequential optimal exercise boundary computation algorithm (data generation 77.90%, Monte Carlo simulation 22.09%, regression 0.01%). The total time is about 40 minutes.
3 The Classification and Monte Carlo algorithm
3.1 Introduction
The Monte Carlo approaches for BA option pricing are usually based on continuation value
computation [12] or continuation region estimation [8], [10]. The option holder decides either
to execute or to continue with the current option contract based on the computed asset value.
If the asset value is in the exercise region, he executes the option otherwise he continues to
hold the option. Denote that the asset values which belong to the exercise region will form
the exercise values and rest will belong to the continuation region. In [8] Picazo et al. propose an algorithm based on the observation that at a given exercise opportunity the option
holder makes his decision based on whether the sign of (exercise value−continuation value)
is positive or negative. The author focuses on estimating the continuation region and the
exercise region by characterizing the exercise boundary based on these signs. The classification algorithm is used to evaluate such sign values at each opportunity. In this section
we briefly describe the sequential algorithm described in [8] and propose a parallel approach
followed by performance benchmarks.
Figure 2: Speedup of the parallel I&Z Algorithm.
3.2 Sequential algorithm
For illustration let us consider a BA option on d underlying assets modeled by Geometric
Brownian Motion (GBM). St = (Sti ) with i = 1, .., d. The option price at time t0 is defined
as follows:
Pt0 (St0 ) = E (exp (−rτ )Φ(Sτ , τ )|St0 )
where τ is the optimal stopping time ∈ {t1 , .., T }, T is the maturity date, r is the constant
interest rate and Φ(Sτ , τ ) is the payoff value at time τ . In case of I&Z Algorithm, the optimal
stopping time is defined when the underlying asset value crosses the exercise boundary. The
CMC algorithm defines the stopping time whenever the underlying asset value makes the
sign of (exercise value − continuation value) positive. Without loss of generality, at a given
time t the BA option price on the period [t, T ] is given by:
Pt (St ) = E (exp (−r(τ − t))Φ(Sτ , τ )|St )
where τ is the optimal stopping time ∈ {1, .., T }. Let us define the difference between the
payoff value and the option price at time tm as,
β(tm , Stm ) = Φ(Stm , tm ) − Ptm (Stm )
where m ∈ {1, .., T }. The option is exercised when Stm ∈ {x|β(tm , x) > 0} which is
the exercise region, and x is the simulated underlying asset value, otherwise the option is
continued. The goal of the algorithm is to determinate the function β(·) for every opportunity
date. However, we do not need to fully parameterize this function. It is enough to find a
function Ft (·) such that signFt (·) = signβ(t, ·).
The algorithm consists of two phases. In the first phase, it aims to find a function Ft (·)
having the same sign as the function β(t, ·). The AdaBoost or LogitBoost algorithm is used
to characterize these functions. In the second phase the option is priced by a standard MC
simulation by taking the advantage of the characterization of Ftm (·), so for the (i)th MC
simulation we get the optimal stopping time τ(i) = min{tm ∈ {t1 , t2 , ..., T } | Ft (St^(i)) > 0}.
The (i) is not the index of the number of assets.
Now, consider a call BA option on the maximum of d underlying assets where the payoff
at time τ is defined as Φ(Sτ , τ ) = (maxi (Sτi ) − K)+ with i = 1, .., d. During the first
phase of the algorithm, at a given opportunity date tm with m ∈ 1, ..., NT , N1 underlying
price vectors sized d are simulated. The simulations are performed recursively in backward
from m = T to m = 1. From each price point, another N2 paths are simulated from a
given opportunity date to the maturity date to compute the “small” BA option price at this
opportunity (i.e. Ptm (Stm )). At this step, N1 option prices related to the opportunity date
are computed. The time step complexity of this step is O(N1 × d × m × N2 × (NT − m)).
In the classification phase, we use a training set of N1 underlying price points and their
corresponding option prices at a given opportunity date. In this step, a non–parametric
regression is done on N1 points to characterize the exercise boundary. This first phase is
repeated for each opportunity date. In the second phase, the option value is computed by
simulating a large number, N , of standard MC simulations with NT exercise opportunities.
The complexity of this phase is O(d × NT × N ). Thus, the total time steps required for the
algorithm can be given by the following formula,
O( ∑_{m=1}^{NT} N1 × d × m × N2 × (NT − m) + NT × classification(N1) + d × NT × N )
≈ O( NT² × N1 × d × N2 + NT × (N1 + d × N) )
where O(classif ication(·)) is the complexity of the classification phase and the details of
which can be found in [8]. For the simulations, we use the same option parameters as
described in (2), taken from [10], and the parameters for the classification can be found in
[8].
K = 100, interest rate r = 0.05, volatility rate σ = 0.2,
dividend δ = 0.1, N1 = 5e3, N2 = 500, N = 1e6, d = 3
NT = 9 and T = 3 years.
(3)
Each of the N1 points of the training set acts as a seed which is further used to simulate
N2 simulation paths. From the exercise opportunity m backward to m − 1, a Brownian
motion bridge is used to simulate the price of the underlying asset. The time distribution
of each phase of the sequential algorithm for pricing the option (3) is shown in Figure 3. As
we can see from the figure, the most computationally intensive part is the data generation
phase which is used to compute the option prices required for classification. In the following
section we present a parallel approach for this and the rest of the phases of the algorithm.
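To make the classification step more concrete, the following Python sketch (not taken from the report) builds the training set of signs at one opportunity date and fits a boosting classifier. Scikit-learn's AdaBoostClassifier is used here only as a stand-in for the AdaBoost/LogitBoost step of [8], and simulate_state, intrinsic and continuation are stubs for the corresponding Monte Carlo routines.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def fit_exercise_classifier(simulate_state, intrinsic, continuation, n_train=5000):
    # Training set at a fixed opportunity date t_m: one label per simulated
    # state, positive when (exercise value - continuation value) > 0.
    X = np.array([simulate_state() for _ in range(n_train)])
    y = np.array([1 if intrinsic(s) - continuation(s) > 0 else 0 for s in X])
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
    return clf   # clf.predict(S) characterises the exercise region F_{t_m}(S) > 0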
3.3 Parallel approach
Algorithm 2 illustrates the parallel approach based on CMC Algorithm. At tm = T
we generate N1 points of the price of the underlying assets, Stm^(i) , i = 1, .., N1 , then apply
the Brownian bridge simulation process to get the price at the backward date, tm−1 . For
simplicity we assume a master–worker programming model for the parallel implementation:
the master is responsible for allocating independent tasks to workers and for collecting
the results. The master divides N1 simulations into nb tasks then distributes them to a
number of workers. Thus each worker has N1 /nb points to simulate in the [calc] phase.
Each worker, further, simulates N2 paths for each point from tm to tNT starting at Stm^(i) to
compute the option price related to the opportunity date. Next the worker calculates the
value yj = (exercise value − continuation value), j = 1, .., N1 /nb. The master collects
the yj of these nb tasks from the workers and then classifies them in order to return the
characterization model of the associated exercise boundary in the [class] phase. For the
Algorithm 2 Parallel Classification and Monte Carlo Algorithm
1: for t = tNT to t1 do
2:
for i = 1 to N1 in parallel do
3:
[calc] Computation of training points.
4:
end for
5:
[class] Classification using boosting.
6: end for
7: for i = 1 to N in parallel do
8:
[mc] The partial option price computation.
9: end for
10: Estimation of the final option price by merging the partial prices.
classification phase, the master does a non-parametric regression with the set (x(i) , y(i) ), i = 1, .., N1 , where x(i) = Stm^(i) , to get the function Ftm (x) described above in Section 3.2. The algorithm
recursively repeats the same procedure for earlier time intervals [m − 1, 1]. As a result we
obtain the characterization of the boundaries, Ftm (x), at every opportunity tm . Using
these boundaries, a standard MC simulation, [mc], is used to estimate the option price.
The MC simulations are distributed among workers such that each worker has the entire
characterization boundary information (Ftm (x), m = 1, .., NT ) to compute the partial option
price. The master later merges the partially computed prices and estimates the final option
price.
3.4 Numerical results and performance
In this section we present the numerical and performance results of the parallel CMC Algorithm. We focus on the standard example of a call option on the maximum of 3 assets as
given in (3). As it can be seen, the estimated prices are equivalent to the reference prices
S0     Price     Variance (95% CI)    Binomial    Error
90     11.295    190.786 (0.027)      11.290      0.005
100    18.706    286.679 (0.033)      18.690      0.016
110    27.604    378.713 (0.038)      27.580      0.024
Table 4: Price of the call BA on the maximum of three assets using CMC Algorithm.
(r = 0.05, δ = 0.1, σ = 0.2, ρ = 0.0, T = 3, N = 9 opportunities)
presented in Andersen and Broadies [2], which are represented in the “Binomial” column in
Table 4. For pricing this option, the sequential execution takes up to 30 minutes and the
time distribution for the different phases can be seen in Figure 3. The speed up achieved by
the parallel algorithm is presented in Figure 4. We can observe from the figure that the parallel algorithm achieves linear scalability with a fewer number of processors. The different
phases of the algorithm scale differently. The MC phase being embarrassingly parallel scales
linearly, while, the number of processors has no impact on the scalability of the classification
phase. The classification phase is sequential and takes a constant amount of time for the
same option. This affects the overall speedup of the algorithm as shown in Figure 4.
Figure 3: The time distribution for different phases of the sequential Classification–Monte Carlo algorithm (data generation 60.49%, Monte Carlo simulation 38.40%, boosting classification 1.11%). The total time is about 30 minutes.
4 Conclusion
The aim of the study is to develop and implement parallel Monte Carlo based Bermudan/American option pricing algorithms. In this paper, we particularly focused on multi–
Figure 4: Speedup of the parallel CMC Algorithm.
dimensional options. We evaluated the scalability of the proposed parallel algorithms in a
computational grid environment. We also analyzed the performance and the accuracy of
both algorithms. While I&Z Algorithm computes the exact exercise boundary, CMC Algorithm estimates the characterization of the boundary. The results obtained clearly indicate
that the scalability of I&Z Algorithm is limited by the boundary points computation. The
Table 2 showed that there is no effective advantage in increasing the number of such points
at will, just to take advantage of a greater number of available CPUs. Moreover, the time
required for computing individual boundary points varies and the points with slower convergence rate often haul the performance of the algorithm. However, in the case of CMC
Algorithm, the sequential classification phase tends to dominate the total parallel computational time. Nevertheless, CMC Algorithm can be used for pricing different option types
such as maximum, minimum or geometric average basket options using a generic classification configuration. In contrast, the optimal exercise boundary structure in I&Z Algorithm needs
to be tailored as per the option type. Parallelizing the classification phase presents
us several challenges due to its dependency on inherently sequential non–parametric regression. Hence, we direct our future research to investigate efficient parallel algorithms for
computing exercise boundary points, in case of I&Z Algorithm, and the classification phase,
in case of CMC Algorithm.
5 Acknowledgments
This research is supported by the French “ANR-CIGC GCPMF” project and Grid5000 has
been funded by ACI-GRID.
References
[1] http://perso-math.univ-mlv.fr/users/printems.jacques/.
[2] L. Andersen and M. Broadie. Primal-Dual Simulation Algorithm for Pricing Multidimensional American Options. Management Science, 50(9):1222–1234, 2004.
[3] P.P. Boyle, A. Kolkiewicz, and K.S. Tan. Pricing American style options using low
discrepancy mesh methods. Forthcoming, Mathematics and Computers in Simulation.
[4] M. Broadie and J. Detemple. The Valuation of American Options on Multiple Assets.
Mathematical Finance, 7(3):241–286, 1997.
[5] J.C. Cox, S.A. Ross, and M. Rubinstein. Option Pricing: A Simplified Approach.
Journal of Financial Economics, 7(3):229–263, 1979.
[6] P. Glasserman. Monte Carlo Methods in Financial Engineering. Springer, 2004.
[7] S. Haber. Parameters for Integrating Periodic Functions of Several Variables. Mathematics of Computation, 41(163):115–129, 1983.
[8] F.J. Hickernell, H. Niederreiter, and K. Fang. Monte Carlo and Quasi-Monte Carlo
Methods 2000: Proceedings of a Conference Held at Hong Kong Baptist University,
Hong Kong SAR, China. Springer, 2002.
[9] K. Huang and R.K. Thulasiram. Parallel Algorithm for Pricing American Asian Options
with Multi-Dimensional Assets. Proceedings of the 19th International Symposium on
High Performance Computing Systems and Applications, pages 177–185, 2005.
[10] A. Ibanez and F. Zapatero. Monte Carlo Valuation of American Options through
Computation of the Optimal Exercise Frontier. Journal of Financial and Quantitative
Analysis, 39(2):239–273, 2004.
[11] P. L’Ecuyer, L. Meliani, and J. Vaucher. SSJ: a framework for stochastic simulation in Java. Proceedings of the 34th conference on Winter simulation: exploring new
frontiers, pages 234–242, 2002.
[12] F.A. Longstaff and E.S. Schwartz. Valuing American options by simulation: a simple
least-squares approach. Review of Financial Studies, 2001.
[13] G. Pagès and J. Printems. Optimal quadratic quantization for numerics: the Gaussian
case. Monte Carlo Methods and Applications, 9(2):135–165, 2003.
[14] I.H. Sloan and S. Joe. Lattice Methods for Multiple Integration. Oxford University
Press, 1994.
[15] R.K. Thulasiram and D.A. Bondarenko. Performance Evaluation of Parallel Algorithms
for Pricing Multidimensional Financial Derivatives. The International Conference on
Parallel Processing Workshops (ICPPW 02), 1530, 2002.
[16] I.M. Toke. Monte Carlo Valuation of Multidimensional American Options Through Grid
Computing. Lecture notes in computer science, Springer-Verlag, Volume 3743:page 462,
2006.
| 5 |
Reversible Computation in Term Rewriting✩
Naoki Nishidaa , Adrián Palaciosb , Germán Vidalb,∗
arXiv:1710.02804v1 [cs.PL] 8 Oct 2017
a Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, 4648603 Nagoya, Japan
b MiST, DSIC, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain
Abstract
Essentially, in a reversible programming language, for each forward computation from state S to state S ′ , there exists a constructive method to go
backwards from state S ′ to state S. Besides its theoretical interest, reversible
computation is a fundamental concept which is relevant in many different areas like cellular automata, bidirectional program transformation, or quantum
computing, to name a few.
In this work, we focus on term rewriting, a computation model that underlies most rule-based programming languages. In general, term rewriting
is not reversible, even for injective functions; namely, given a rewrite step
t1 → t2 , we do not always have a decidable method to get t1 from t2 . Here, we
introduce a conservative extension of term rewriting that becomes reversible.
Furthermore, we also define two transformations, injectivization and inversion, to make a rewrite system reversible using standard term rewriting. We
illustrate the usefulness of our transformations in the context of bidirectional
program transformation.
This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economı́a y Competitividad (MINECO) under grants TIN2013-44742-C4-1-R
and TIN2016-76843-C4-1-R, by the Generalitat Valenciana under grant PROMETEOII/2015/013 (SmartLogic), and by the COST Action IC1405 on Reversible Computation
- extending horizons of computing. Adrián Palacios was partially supported by the EU
(FEDER) and the Spanish Ayudas para contratos predoctorales para la formación de doctores and Ayudas a la movilidad predoctoral para la realización de estancias breves en
centros de I+D, MINECO (SEIDI), under FPI grants BES-2014-069749 and EEBB-I-1611469. Part of this research was done while the second and third authors were visiting
Nagoya University; they gratefully acknowledge their hospitality.
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license
http://creativecommons.org/licenses/by-nc-nd/4.0/
∗
Corresponding author.
Email addresses: [email protected] (Naoki Nishida),
[email protected] (Adrián Palacios), [email protected] (Germán Vidal)
✩
Preprint submitted to Elsevier
February 20, 2018
To appear in the Journal of Logical and Algebraic Methods in Programming.
Keywords: term rewriting, reversible computation, program
transformation
1. Introduction
The notion of reversible computation can be traced back to Landauer’s
pioneering work [22]. Although Landauer was mainly concerned with the energy consumption of erasing data in irreversible computing, he also claimed
that every computer can be made reversible by saving the history of the
computation. However, as Landauer himself pointed out, this would only
postpone the problem of erasing the tape of a reversible Turing machine before it could be reused. Bennett [6] improved the original proposal so that the
computation now ends with a tape that only contains the output of a computation and the initial source, thus deleting all remaining “garbage” data,
though it performs twice the usual computation steps. More recently, Bennett’s result is extended in [9] to nondeterministic Turing machines, where
it is also proved that transforming an irreversible Turing machine into a
reversible one can be done with a quadratic loss of space. We refer the interested reader to, e.g., [7, 14, 40] for a high level account of the principles
of reversible computation.
In the last decades, reversible computing and reversibilization (transforming an irreversible computation device into a reversible one) have been the
subject of intense research, giving rise to successful applications in many different fields, e.g., cellular automata [28], where reversibility is an essential
property, bidirectional program transformation [24], where reversibility helps
to automate the generation of inverse functions (see Section 6), reversible debugging [17], where one can go both forward and backward when seeking the
cause of an error, parallel discrete event simulation [34], where reversible
computation is used to undo the effects of speculative computations made
on a wrong assumption, quantum computing [39], where all computations
should be reversible, and so forth. The interested reader can find detailed
surveys in the state of the art reports of the different working groups of
COST Action IC1405 on Reversible Computation [20].
In this work, we introduce reversibility in the context of term rewriting
[4, 36], a computation model that underlies most rule-based programming
languages. In contrast to other, more ad-hoc approaches, we consider that
term rewriting is an excellent framework to rigorously define reversible computation in a functional context and formally prove its main properties. We
2
expect our work to be useful in different (sequential) contexts, like reversible
debugging, parallel discrete event simulation or bidirectional program transformation, to name a few. In particular, Section 6 presents a first approach
to formalize bidirectional program transformation in our setting.
To be more precise, we present a general and intuitive notion of reversible
term rewriting by defining a Landauer embedding. Given a rewrite system
R and its associated (standard) rewrite relation →R , we define a reversible
extension of rewriting with two components: a forward relation ⇀R and a
backward relation ↽R , such that ⇀R is a conservative extension of →R and,
moreover, (⇀R )−1 = ↽R. We note that the inverse rewrite relation, (→R )−1 ,
is not an appropriate basis for “reversible” rewriting since we aim at defining
a technique to undo a particular reduction. In other words, given a rewriting
reduction s →∗R t, our reversible relation aims at computing the term s
from t and R in a decidable and deterministic way, which is not possible
using (→R )−1 since it is generally non-deterministic, non-confluent, and nonterminating, even for systems defining injective functions (see Example 6).
In contrast, our backward relation ↽R is deterministic (thus confluent) and
terminating. Moreover, our relation proceeds backwards step by step, i.e.,
the number of reduction steps in s ⇀∗R t and t ↽∗R s is the same.
In order to introduce a reversibilization transformation for rewrite systems, we use a flattening transformation so that the reduction at top positions of terms suffices to get a normal form in the transformed systems. For
instance, given the following rewrite system:
add(0, y) → y,
add(s(x), y) → s(add(x, y))
defining the addition on natural numbers built from constructors 0 and s( ),
we produce the following flattened (conditional) system:
R={
add(0, y) → y,
add(s(x), y) → s(z) ⇐ add(x, y) ։ z }
(see Example 29 for more details). This allows us to provide an improved notion of reversible rewriting in which some information (namely, the positions
where reduction takes place) is not required anymore. This opens the door to
compiling the reversible extension of rewriting into the system rules. Loosely
speaking, given a system R, we produce new systems Rf and Rb such that
standard rewriting in Rf , i.e., →Rf , coincides with the forward reversible
extension ⇀R in the original system, and analogously →Rb is equivalent to
↽R . E.g., for the system R above, we would produce
Rf = { addi (0, y) → hy, β1 i,
       addi (s(x), y) → hs(z), β2 (w)i ⇐ addi (x, y) ։ hz, wi }
Rb = { add−1 (y, β1 ) → h0, yi,
       add−1 (s(z), β2 (w)) → hs(x), yi ⇐ add−1 (z, w) ։ hx, yi }
where addi is an injective version of function add, add−1 is the inverse of
addi , and β1 , β2 are fresh symbols introduced to label the rules of R.
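To give this a concrete functional reading, the following Haskell sketch (our own illustration; the names Nat, Trace, addI and addInv are not part of the paper) implements the injectivized addition and its inverse, with the trace built from the labels β1 and β2:

data Nat   = Z | S Nat      deriving (Show, Eq)
data Trace = B1 | B2 Trace  deriving (Show, Eq)   -- β1 and β2(w)

addI :: Nat -> Nat -> (Nat, Trace)                -- mirrors addi in Rf
addI Z     y = (y, B1)
addI (S x) y = let (z, w) = addI x y in (S z, B2 w)

addInv :: (Nat, Trace) -> (Nat, Nat)              -- mirrors add−1 in Rb
addInv (y, B1)     = (Z, y)
addInv (S z, B2 w) = let (x, y) = addInv (z, w) in (S x, y)
addInv _           = error "ill-formed trace"

-- addInv (addI m n) == (m, n) for all naturals m and n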
In this work, we will mostly consider conditional rewrite systems, not
only to have a more general notion of reversible rewriting, but also to define
a reversibilization technique for unconditional rewrite systems, since the application of flattening (cf. Section 4) may introduce conditions in a system
that is originally unconditional, as illustrated above.
This paper is an extended version of [31]. In contrast to [31], the current paper includes the proofs of the technical results, introduces the reversible extension of term rewriting first in the unconditional case (which is simpler and more intuitive), and presents an improved injectivization transformation for systems that include injective functions. Furthermore, a prototype
implementation of the reversibilization technique is publicly available from
http://kaz.dsic.upv.es/rev-rewriting.html.
The paper is organized as follows. After introducing some preliminaries
in Section 2, we present our approach to reversible term rewriting in Section 3. Section 4 introduces the class of pure constructor systems where all
reductions take place at topmost positions, so that storing this information
in reversible rewrite steps becomes unnecessary. Then, Section 5 presents injectivization and inversion transformations in order to make a rewrite system
reversible with standard rewriting. Here, we also present an improvement of
the transformation for injective functions. The usefulness of these transformations is illustrated in Section 6. Finally, Section 7 discusses some related
work and Section 8 concludes and points out some ideas for future research.
2. Preliminaries
We assume familiarity with basic concepts of term rewriting. We refer
the reader to, e.g., [4] and [36] for further details.
2.1. Terms and Substitutions
A signature F is a set of ranked function symbols. Given a set of variables
V with F ∩V = ∅, we denote the domain of terms by T (F , V). We use f, g, . . .
to denote functions and x, y, . . . to denote variables. Positions are used to
address the nodes of a term viewed as a tree. A position p in a term t, in
symbols p ∈ Pos(t), is represented by a finite sequence of natural numbers,
where ǫ denotes the root position. We let t|p denote the subterm of t at
position p and t[s]p the result of replacing the subterm t|p by the term s.
Var(t) denotes the set of variables appearing in t. We also let Var(t1 , . . . , tn )
denote Var(t1 ) ∪ · · · ∪ Var(tn ). A term t is ground if Var(t) = ∅.
A substitution σ : V 7→ T (F , V) is a mapping from variables to terms such
that Dom(σ) = {x ∈ V | x 6= σ(x)} is its domain. A substitution σ is ground
if xσ is ground for all x ∈ Dom(σ). Substitutions are extended to morphisms
from T (F , V) to T (F , V) in the natural way. We denote the application of a
substitution σ to a term t by tσ rather than σ(t). The identity substitution
is denoted by id. We let “◦” denote the composition of substitutions, i.e.,
σ ◦ θ(x) = (xθ)σ = xθσ. The restriction θ |`V of a substitution θ to a set of
variables V is defined as follows: xθ |`V = xθ if x ∈ V and xθ |`V = x otherwise.
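As a side illustration (ours, not the paper's), these notions can be transcribed almost literally into Haskell; the names Term, Pos and Subst below are our own, and positions are 1-indexed sequences as above:

import qualified Data.Map as M

data Term = Var String | Fun String [Term] deriving (Show, Eq)
type Pos   = [Int]                  -- ǫ is the empty list
type Subst = M.Map String Term

-- t|p : the subterm of t at position p
subtermAt :: Term -> Pos -> Maybe Term
subtermAt t []           = Just t
subtermAt (Fun _ ts) (i : p)
  | i >= 1 && i <= length ts = subtermAt (ts !! (i - 1)) p
subtermAt _ _            = Nothing

-- t[s]p : replace the subterm of t at position p by s
replaceAt :: Term -> Pos -> Term -> Maybe Term
replaceAt _ [] s = Just s
replaceAt (Fun f ts) (i : p) s
  | i >= 1 && i <= length ts = do
      u <- replaceAt (ts !! (i - 1)) p s
      return (Fun f (take (i - 1) ts ++ u : drop i ts))
replaceAt _ _ _ = Nothing

vars :: Term -> [String]            -- Var(t)
vars (Var x)    = [x]
vars (Fun _ ts) = concatMap vars ts

applySubst :: Subst -> Term -> Term -- tσ
applySubst s (Var x)    = M.findWithDefault (Var x) x s
applySubst s (Fun f ts) = Fun f (map (applySubst s) ts)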
2.2. Term Rewriting Systems
A set of rewrite rules l → r such that l is a nonvariable term and r is a term
whose variables appear in l is called a term rewriting system (TRS for short);
terms l and r are called the left-hand side and the right-hand side of the rule,
respectively. We restrict ourselves to finite signatures and TRSs. Given a
TRS R over a signature F , the defined symbols DR are the root symbols
of the left-hand sides of the rules and the constructors are CR = F \ DR .
Constructor terms of R are terms over CR and V, denoted by T (CR , V).
We sometimes omit R from DR and CR if it is clear from the context. A
substitution σ is a constructor substitution (of R) if xσ ∈ T (CR , V) for all
variables x.
For a TRS R, we define the associated rewrite relation →R as the smallest
binary relation on terms satisfying the following: given terms s, t ∈ T (F , V),
we have s →R t iff there exist a position p in s, a rewrite rule l → r ∈ R,
and a substitution σ such that s|p = lσ and t = s[rσ]p ; the rewrite step is
sometimes denoted by s →p,l→r t to make explicit the position and rule used
in this step. The instantiated left-hand side lσ is called a redex. A term s is
called irreducible or in normal form with respect to a TRS R if there is no
term t with s →R t. A substitution is called normalized with respect to R
if every variable in the domain is replaced by a normal form with respect to
R. We sometimes omit “with respect to R” if it is clear from the context.
A derivation is a (possibly empty) sequence of rewrite steps. Given a binary
relation →, we denote by →∗ its reflexive and transitive closure, i.e., s →∗R t
means that s can be reduced to t in R in zero or more steps; we also use
s →nR t to denote that s can be reduced to t in exactly n steps.
We further assume that rewrite rules are labelled, i.e., given a TRS R,
we denote by β : l → r a rewrite rule with label β. Labels are unique in a
TRS. Also, to relate label β to fixed variables, we consider that the variables
of the rewrite rules are not renamed1 and that the reduced terms are always
ground. Equivalently, one could require terms to be variable disjoint with
the variables of the rewrite system, but we require groundness for simplicity.
We often write s →p,β t instead of s →p,l→r t if rule l → r is labeled with β.
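The sketch below (again ours, self-contained and with a Term type like the one shown after Section 2.1) computes a matching substitution σ with lσ = t and performs a labelled rewrite step at the root; combined with subtermAt and replaceAt it gives s →p,β t at an arbitrary position p:

import qualified Data.Map as M
import Control.Monad (foldM)

data Term = Var String | Fun String [Term] deriving (Show, Eq)
type Subst = M.Map String Term
data Rule  = Rule { label :: String, lhs :: Term, rhs :: Term }  -- β : l → r

applySubst :: Subst -> Term -> Term
applySubst s (Var x)    = M.findWithDefault (Var x) x s
applySubst s (Fun f ts) = Fun f (map (applySubst s) ts)

-- matching: find σ with (lhs)σ = t, keeping repeated variables consistent
match :: Term -> Term -> Maybe Subst
match pat t0 = go pat t0 M.empty
  where
    go (Var x) u sub = case M.lookup x sub of
      Nothing           -> Just (M.insert x u sub)
      Just u' | u' == u -> Just sub
      _                 -> Nothing
    go (Fun f ps) (Fun g us) sub
      | f == g && length ps == length us =
          foldM (\s (p, u) -> go p u s) sub (zip ps us)
    go _ _ _ = Nothing

-- one rewrite step at the root with rule β : l → r, also returning β and σ
rootStep :: Rule -> Term -> Maybe (Term, String, Subst)
rootStep (Rule beta l r) t = do
  sigma <- match l t
  return (applySubst sigma r, beta, sigma)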
2.3. Conditional Term Rewrite Systems
In this paper, we also consider conditional term rewrite systems (CTRSs);
namely oriented 3-CTRSs, i.e., CTRSs where extra variables are allowed
as long as Var(r) ⊆ Var(l) ∪ Var(C) for any rule l → r ⇐ C [26]. In
oriented CTRSs, a conditional rule l → r ⇐ C has the form l → r ⇐
s1 ։ t1 , . . . , sn ։ tn , where each oriented equation si ։ ti is interpreted as
reachability (→∗R ). In the following, we denote by on a sequence of elements
o1 , . . . , on for some n. We also write oi,j for the sequence oi , . . . , oj when i ≤ j
(and the empty sequence otherwise). We write o when the number of elements
is not relevant. In addition, we denote a condition o1 ։ o′1 , . . . , on ։ o′n by
on ։ o′n .
As in the unconditional case, we consider that rules are labelled and that
labels are unique in a CTRS. And, again, to relate label β to fixed variables,
we consider that the variables of the conditional rewrite rules are not renamed
and that the reduced terms are always ground.
For a CTRS R, the associated rewrite relation →R is defined as the
smallest binary relation satisfying the following: given ground terms s, t ∈
T (F ), we have s →R t iff there exist a position p in s, a rewrite rule l → r ⇐
sn ։ tn ∈ R, and a ground substitution σ such that s|p = lσ, si σ →∗R ti σ
for all i = 1, . . . , n, and t = s[rσ]p .
In order to simplify the presentation, we only consider deterministic
CTRSs (DCTRSs), i.e., oriented 3-CTRSs where, for each rule l → r ⇐
sn ։ tn , we have Var(si ) ⊆ Var(l, ti−1 ) for all i = 1, . . . , n (see Section 3.2
for a justification of this requirement and how it could be relaxed to arbitrary
3-CTRSs). Intuitively speaking, the use of DCTRs allows us to compute the
bindings for the variables in the condition of a rule in a deterministic way.
E.g., given a ground term s and a rule β : l → r ⇐ sn ։ tn with s|p = lθ, we
have that s1 θ is ground. Therefore, one can reduce s1 θ to some term s′1 such
that s′1 is an instance of t1 θ with some ground substitution θ1 . Now, we have
1 This will become useful in the next section where the reversible extension of rewriting keeps a “history” of a computation in the form of a list of terms β(p, σ), and we want the domain of σ to be a subset of the variables of the left-hand side of the rule labelled with β.
that s2 θθ1 is ground and we can reduce s2 θθ1 to some term s′2 such that s′2
is an instance of t2 θθ1 with some ground substitution θ2 , and so forth. If all
equations in the condition hold using θ1 , . . . , θn , we have that s →p,β s[rσ]p
with σ = θθ1 . . . θn .
Example 1. Consider the following DCTRS R that defines the function double that doubles the value of its argument when it is an even natural number:
β1 : add(0, y) → y
β2 : add(s(x), y) → s(add(x, y))
β3 : double(x) → add(x, x) ⇐ even(x) ։ true
β4 : even(0) → true
β5 : even(s(s(x))) → even(x)
Given the term double(s(s(0))) we have, for instance, the following derivation:
double(s(s(0))) →ǫ,β3 add(s(s(0)), s(s(0))) since even(s(s(0))) →∗R true, with σ = {x 7→ s(s(0))}
→ǫ,β2 s(add(s(0), s(s(0)))) with σ = {x 7→ s(0), y 7→ s(s(0))}
→1,β2 s(s(add(0, s(s(0))))) with σ = {x 7→ 0, y 7→ s(s(0))}
→1.1,β1 s(s(s(s(0)))) with σ = {y 7→ s(s(0))}
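Read as a functional program (a sketch of ours, not the paper's notation), R behaves as follows; Maybe models the fact that the conditional rule β3 only fires when even(x) reduces to true, and that even(s(0)) is simply a normal form:

data Nat = Z | S Nat deriving (Show, Eq)

add :: Nat -> Nat -> Nat
add Z     y = y                -- β1
add (S x) y = S (add x y)      -- β2

evenN :: Nat -> Maybe Bool     -- β4, β5; even(s(0)) has no result
evenN Z         = Just True
evenN (S (S x)) = evenN x
evenN _         = Nothing

double :: Nat -> Maybe Nat     -- β3 : double(x) → add(x, x) ⇐ even(x) ։ true
double x = case evenN x of
             Just True -> Just (add x x)
             _         -> Nothing

-- double (S (S Z)) == Just (S (S (S (S Z)))), matching the derivation above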
3. Reversible Term Rewriting
In this section, we present a conservative extension of the rewrite relation
which becomes reversible. In the following, we use ⇀R to denote our reversible (forward) term rewrite relation, and ↽R to denote its application in
the reverse (backward) direction. Note that, in principle, we do not require ↽R = ⇀−1R , i.e., we provide independent (constructive) definitions for each relation. Nonetheless, we will prove that ↽R = ⇀−1R indeed holds (cf. Theorems 9 and 20). In some approaches to reversible computing, both forward
and backward relations should be deterministic. Here, we will only require
deterministic backward steps, while forward steps might be non-deterministic,
as it is often the case in term rewriting.
3.1. Unconditional Term Rewrite Systems
We start with unconditional TRSs since it is conceptually simpler and
thus will help the reader to better understand the key ingredients of our
approach. In the next section, we will consider the more general case of
DCTRSs.
Given a TRS R, reversible rewriting is defined on pairs ht, πi, where t is
a ground term and π is a trace (the “history” of the computation so far).
Here, a trace in R is a list of trace terms of the form β(p, σ) such that β is
a label for some rule l → r ∈ R, p is a position, and σ is a substitution with
Dom(σ) = Var(l)\Var(r) which will record the bindings of erased variables
when Var(l)\Var(r) 6= ∅ (and σ = id if Var(l)\Var(r) = ∅).2 Our trace
terms have some similarities with proof terms [36]. However, proof terms do
not store the bindings of erased variables (and, to the best of our knowledge,
they are only defined for unconditional TRSs, while we use trace terms both
for unconditional and conditional TRSs).
Our reversible term rewriting relation is only defined on safe pairs:
Definition 2. Let R be a TRS. The pair hs, πi is safe in R iff, for all
β(p, σ) in π, σ is a ground substitution with Dom(σ) = Var(l)\Var(r) and
β : l → r ∈ R.
In the following, we often omit R when referring to traces and safe pairs if
the underlying TRS is clear from the context.
Safety is not necessary when applying a forward reduction step, but will
become essential for the backward relation ↽R to be correct. E.g., all traces
that come from the forward reduction of some initial pair with an empty
trace will be safe (see below). Reversible rewriting is then introduced as
follows:
Definition 3. Let R be a TRS. A reversible rewrite relation ⇀R is defined
on safe pairs ht, πi, where t is a ground term and π is a trace in R. The
reversible rewrite relation extends standard rewriting as follows:3
hs, πi ⇀R ht, β(p, σ ′ ) : πi
iff there exist a position p ∈ Pos(s), a rewrite rule β : l → r ∈ R, and a
ground substitution σ such that s|p = lσ, t = s[rσ]p , and σ ′ = σ|`Var(l)\Var(r) .
The reverse relation, ↽R , is then defined as follows:
ht, β(p, σ ′) : πi ↽R hs, πi
iff ht, β(p, σ ′) : πi is a safe pair in R and there exist a ground substitution
θ and a rule β : l → r ∈ R such that Dom(θ) = Var(r), t|p = rθ and s =
t[lθσ ′ ]p . Note that θσ ′ = σ ′ θ = θ ∪ σ ′ , where ∪ is the union of substitutions,
since Dom(θ) = Var(r), Dom(σ ′ ) = (Var(l)\Var(r)) and both substitutions
are ground, so Dom(θ) ∩ Dom(σ ′ ) = ∅.
2 Note that if a rule l → r is non-erasing, i.e., Var(l) = Var(r), then σ = id.
3 In the following, we consider the usual infix notation for lists where [ ] is the empty list and x : xs is a list with head x and tail xs.
We denote the union of both relations ⇀R ∪ ↽R by ⇋R .
Example 4. Let us consider the following TRS R defining the addition on
natural numbers built from 0 and s( ), and the function fst that returns its
first argument:
β1 : add(0, y) → y
β2 : add(s(x), y) → s(add(x, y))
β3 : fst(x, y) → x
Given the term fst(add(s(0), 0), 0), we have, for instance, the following reversible (forward) derivation:
hfst(add(s(0), 0), 0), [ ]i ⇀R hfst(s(add(0, 0)), 0), [β2(1, id)]i
⇀R hs(add(0, 0)), [β3(ǫ, {y 7→ 0}), β2 (1, id)]i
⇀R hs(0), [β1 (1, id), β3 (ǫ, {y 7→ 0}), β2(1, id)]i
The reader can easily check that hs(0), [β1 (1, id), β3(ǫ, {y 7→ 0}), β2 (1, id)]i
is reducible to hfst(add(s(0), 0), 0), [ ]i using the backward relation ↽R by
performing exactly the same steps but in the backward direction.
An easy but essential property of ⇀R is that it is a conservative extension
of standard rewriting in the following sense (we omit its proof since it is
straightforward):
Theorem 5. Let R be a TRS. Given terms s, t, if s →∗R t, then for any
trace π there exists a trace π ′ such that hs, πi ⇀∗R ht, π ′ i.
Here, and in the following, we assume that ←R = (→R )−1 , i.e., s →−1R t is
denoted by s ←R t. Observe that the backward relation is not a conservative
extension of ←R : in general, t ←R s does not imply ht, π ′ i ↽R hs, πi for any
arbitrary trace π ′ . This is actually the purpose of our notion of reversible
rewriting: ↽R should not extend ←R but is only aimed at performing exactly
the same steps of the forward computation whose trace was stored, but in
the reverse order. Nevertheless, one can still ensure that for all steps t ←R
s, there exists some trace π ′ such that ht, π ′i ↽R hs, πi (which is an easy
consequence of the above result and Theorem 9 below).
Example 6. Consider again the following TRS R = {β : snd(x, y) → y}.
Given the reduction snd(1, 2) →R 2, there are infinitely many reductions for
2 using ←R , e.g., 2 ←R snd(1, 2), 2 ←R snd(2, 2), 2 ←R snd(3, 2), etc. The
relation is also non-terminating: 2 ←R snd(1, 2) ←R snd(1, snd(1, 2)) ←R
· · · . In contrast, given a pair h2, πi, we can only perform a single deterministic and finite reduction (as proved below). For instance, if π =
[β(ǫ, {x 7→ 1}), β(2, {x 7→ 1})], then the only possible reduction is h2, πi ↽R
hsnd(1, 2), [β(2, {x 7→ 1})]i ↽R hsnd(1, snd(1, 2)), [ ]i 6↽R .
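A tiny Haskell sketch (ours) of the same point: inverting snd as a relation loses the erased first argument, whereas a trace that stores the binding for x makes the backward step a function (positions are left implicit here, unlike in the trace π above):

data T = Lit Int | Snd T T deriving (Show, Eq)

-- forward root step for  β : snd(x, y) → y, recording the erased binding for x
fwd :: T -> Maybe (T, T)       -- (result, value stored in β(p, {x 7→ ...}))
fwd (Snd x y) = Just (y, x)
fwd _         = Nothing

-- backward step: deterministic thanks to the stored binding
bwd :: (T, T) -> T
bwd (y, x) = Snd x y

-- bwd <$> fwd (Snd (Lit 1) (Lit 2)) == Just (Snd (Lit 1) (Lit 2))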
Now, we state a lemma which shows that safe pairs are preserved through
reversible term rewriting (both in the forward and backward directions):
Lemma 7. Let R be a TRS. Let hs, πi be a safe pair. If hs, πi ⇋∗R ht, π ′ i,
then ht, π ′ i is also safe.
Proof. We prove the claim by induction on the length k of the derivation.
Since the base case k = 0 is trivial, consider the inductive case k > 0. Assume
a derivation of the form hs, πi ⇋∗R hs0 , π0 i ⇋R ht, π ′ i. By the induction
hypothesis, we have that hs0 , π0 i is a safe pair. Now, we distinguish two
cases depending on the last step. If we have hs0 , π0 i ⇀R ht, π ′i, then there
exist a position p ∈ Pos(s0 ), a rewrite rule β : l → r ∈ R, and a ground
substitution σ such that s0 |p = lσ, t = s0 [rσ]p , σ ′ = σ |`Var(l)\Var(r) , and
π ′ = β(p, σ ′) : π0 . Then, since σ ′ is ground and Dom(σ ′ ) = Var(l)\Var(r)
by construction, the claim follows straightforwardly. If the last step has the
form hs0 , π0 i ↽R ht, π ′ i, then the claim follows trivially since each step with
↽R only removes trace terms from π0 .
✷
Hence, since any pair with an empty trace is safe, the following result, which
states that every pair that is reachable from an initial pair with an empty
trace is safe, straightforwardly follows from Lemma 7:
Proposition 8. Let R be a TRS. If hs, [ ]i ⇋∗R ht, πi, then ht, πi is safe.
Now, we state the reversibility of ⇀R , i.e., the fact that (⇀R)−1 = ↽R (and
thus the reversibility of ↽R and ⇋R , too).
Theorem 9. Let R be a TRS. Given the safe pairs hs, πi and ht, π ′ i, for all
n ≥ 0, hs, πi ⇀nR ht, π ′ i iff ht, π ′ i ↽nR hs, πi.
Proof. (⇒) We prove the claim by induction on the length n of the derivation hs, πi ⇀nR ht, π ′i. Since the base case n = 0 is trivial, let us consider the inductive case n > 0. Consider a derivation hs, πi ⇀n−1R hs0 , π0 i ⇀R ht, π ′i. By Lemma 7, both hs0 , π0 i and ht, π ′i are safe. By the induction hypothesis, we have hs0 , π0 i ↽n−1R hs, πi. Consider now the step hs0 , π0 i ⇀R ht, π ′i. Then, there is a position p ∈ Pos(s0 ), a rule β : l → r ∈ R and a ground substitution σ such that s0 |p = lσ, t = s0 [rσ]p , σ ′ = σ|`Var(l)\Var(r) , and π ′ = β(p, σ ′ ) : π0 . Let θ = σ|`Var(r) . Then, we have ht, π ′i ↽R hs′0 , π0 i with t|p = rθ, β : l → r ∈ R and s′0 = t[lθσ ′ ]p . Moreover, since σ = θσ ′ , we have s′0 = t[lθσ ′ ]p = t[lσ]p = s0 , and the claim follows.
(⇐) This direction proceeds in a similar way. We prove the claim by induction on the length n of the derivation ht, π ′i ↽nR hs, πi. As before, we only consider the inductive case n > 0. Let us consider a derivation ht, π ′i ↽n−1R hs0 , π0 i ↽R hs, πi. By Lemma 7, both hs0 , π0 i and hs, πi are safe. By the induction hypothesis, we have hs0 , π0 i ⇀n−1R ht, π ′i. Consider now the reduction step hs0 , π0 i ↽R hs, πi. Then, we have π0 = β(p, σ ′ ) : π, β : l → r ∈ R, and there exists a ground substitution θ with Dom(θ) = Var(r) such that s0 |p = rθ and s = s0 [lθσ ′ ]p . Moreover, since hs0 , π0 i is safe, we have that Dom(σ ′ ) = Var(l)\Var(r) and, thus, Dom(θ) ∩ Dom(σ ′ ) = ∅. Let σ = θσ ′ . Then, since s|p = lσ and Dom(σ ′ ) = Var(l)\Var(r), we can perform the step hs, πi ⇀R hs′0 , β(p, σ ′ ) : πi with s′0 = s[rσ]p = s[rθσ ′ ]p = s[rθ]p = s0 [rθ]p = s0 , and the claim follows.
✷
The next corollary is then immediate:
Corollary 10. Let R be a TRS. Given the safe pairs hs, πi and ht, π ′ i, for
all n ≥ 0, hs, πi ⇋nR ht, π ′ i iff ht, π ′ i ⇋nR hs, πi.
A key property of our notion of reversible rewriting is that the backward rewrite
relation ↽R is deterministic (thus confluent), terminating, and has a constructive definition:
Theorem 11. Let R be a TRS. Given a safe pair ht, π ′ i, there exists at most
one pair hs, πi such that ht, π ′ i ↽R hs, πi.
Proof. First, if there is no step using ↽R from ht, π ′ i, the claim follows
trivially. Now, assume there is at least one step ht, π ′ i ↽R hs, πi. We prove
that this is the only possible step. By definition, we have π ′ = β(p, σ ′) : π,
p ∈ Pos(t), β : l → r ∈ R, and there exists a ground substitution θ with
Dom(θ) = Var(r) such that t|p = rθ and s = t[lθσ ′ ]p . The only source of
nondeterminism may come from choosing a rule labeled with β and from the
computation of the substitution θ. The claim follows trivially from the fact
that labels are unique in R and that, if there is some ground substitution θ′
with Dom(θ′ ) = Var(r) and t|p = rθ′ , then θ = θ′ .
✷
Therefore, ↽R is clearly deterministic and confluent. Termination holds
straightforwardly for pairs with finite traces since its length strictly decreases
with every backward step. Note however that even when ⇀R and ↽R are
terminating, the relation ⇋R is always non-terminating since one can keep
going back and forth.
3.2. Conditional Term Rewrite Systems
In this section, we extend the previous notions and results to DCTRSs.
We note that considering DCTRSs is not enough to make conditional rewriting deterministic. In general, given a rewrite step s →p,β t with p a position
of s, β : l → r ⇐ sn ։ tn a rule, and σ a substitution such that s|p = lσ
and si σ →∗R ti σ for all i = 1, . . . , n, there are three potential sources of nondeterminism: the selected position p, the selected rule β, and the substitution
σ. The use of DCTRSs can only make deterministic the last one, but the
choice of a position and the selection of a rule may still be non-deterministic.
For DCTRSs, the notion of a trace term used for TRSs is not sufficient
since we also need to store the traces of the subderivations associated to the
condition of the applied rule (if any). Therefore, we generalize the notion of
a trace as follows:
Definition 12 (trace). Given a CTRS R, a trace in R is recursively defined
as follows:
• the empty list is a trace;
• if π, π1 , . . . , πn are traces in R, n ≥ 0, β : l → r ⇐ sn ։ tn ∈
R is a rule, p is a position, and σ is a ground substitution, then
β(p, σ, π1 , . . . , πn ) : π is a trace in R.
We refer to each component β(p, σ, π1 , . . . , πn ) in a trace as a trace term.
Intuitively speaking, a trace term β(p, σ, π1 , . . . , πn ) stores the position of a
reduction step, a substitution with the bindings that are required for the step
to be reversible (e.g., the bindings for the erased variables, but not only; see
below) and the traces associated to the subcomputations in the condition.
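Definition 12 can be transcribed directly into a datatype; the following Haskell sketch (our own names) makes explicit that every trace term carries one subtrace per condition of the applied rule:

import qualified Data.Map as M

data Term = Var String | Fun String [Term] deriving (Show, Eq)
type Pos   = [Int]
type Subst = M.Map String Term

type Trace = [TraceTerm]          -- the empty list is a trace

data TraceTerm = TraceTerm
  { ruleLabel :: String           -- β
  , position  :: Pos              -- p
  , stored    :: Subst            -- σ
  , subtraces :: [Trace]          -- π1, ..., πn, one per condition of the rule
  } deriving (Show, Eq)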
The notion of a safe pair is now more involved in order to deal with
conditional rules. The motivation for this definition will be explained below,
after introducing reversible rewriting for DCTRSs.
Definition 13 (safe pair). Let R be a DCTRS. A trace π is safe in R iff,
for all trace terms β(p, σ, πn ) in π, σ is a ground substitution with Dom(σ) = (Var(l)\Var(r, sn , tn )) ∪ ⋃ni=1 Var(ti )\Var(r, si+1,n ), β : l → r ⇐ sn ։ tn ∈ R,
and πn are safe too. The pair hs, πi is safe in R iff π is safe.
Reversible (conditional) rewriting can now be introduced as follows:
Definition 14 (reversible rewriting). Let R be a DCTRS. The reversible
rewrite relation ⇀R is defined on safe pairs ht, πi, where t is a ground term
and π is a trace in R. The reversible rewrite relation extends standard conditional rewriting as follows:
hs, πi ⇀R ht, β(p, σ ′, π1 , . . . , πn ) : πi
iff there exist a position p ∈ Pos(s), a rewrite rule β : l → r ⇐ sn ։ tn ∈ R,
and a ground substitution σ such that s|p = lσ, hsi σ, [ ]i ⇀∗R hti σ, πi i for
all i = 1, . . . , n, t = s[rσ]p , and σ ′ = σ|`((Var(l)\Var(r,sn ,tn )) ∪ ⋃ni=1 Var(ti )\Var(r,si+1,n )).
The reverse relation, ↽R , is then defined as follows:
ht, β(p, σ ′ , π1 , . . . , πn ) : πi ↽R hs, πi
iff ht, β(p, σ ′, πn ) : πi is a safe pair in R, β : l → r ⇐ sn ։ tn ∈ R and there
is a ground substitution θ such that Dom(θ) = Var(r, sn )\Dom(σ ′ ), t|p = rθ,
hti θσ ′ , πi i ↽∗R hsi θσ ′ , [ ]i for all i = 1, . . . , n, and s = t[lθσ ′ ]p . Note that
θσ ′ = σ ′ θ = θ ∪ σ ′ since Dom(θ) ∩ Dom(σ ′ ) = ∅ and both substitutions are
ground.
As in the unconditional case, we denote the union of both relations ⇀R
∪ ↽R by ⇋R .
Example 15. Consider again the DCTRS R from Example 1:
β1 : add(0, y) → y
β2 : add(s(x), y) → s(add(x, y))
β3 : double(x) → add(x, x) ⇐ even(x) ։ true
β4 : even(0) → true
β5 : even(s(s(x))) → even(x)
Given the term double(s(s(0))), we have, for instance, the following forward
derivation:
hdouble(s(s(0))), [ ]i
⇀R hadd(s(s(0)), s(s(0))), [β3 (ǫ, id, π)]i
⇀R · · ·
⇀R hs(s(s(s(0)))), [β1 (1.1, id), β2(1, id), β2 (ǫ, id), β3 (ǫ, id, π)]i
where π = [β4 (ǫ, id), β5 (ǫ, id)] since we have the following reduction:
heven(s(s(0))), [ ]i ⇀R heven(0), [β5 (ǫ, id)]i ⇀R htrue, [β4 (ǫ, id), β5 (ǫ, id)]i
The reader can easily construct the associated backward derivation:
hs(s(s(s(0)))), [β1 (1.1, id), β2 (1, id), . . .]i ↽∗R hdouble(s(s(0))), [ ]i
Let us now explain why we need to store σ ′ in a step of the form hs, πi ⇀R
ht, β(p, σ ′ , πn ) : πi. Given a DCTRS, for each rule l → r ⇐ sn ։ tn , the
following conditions hold:
• 3-CTRS: Var(r) ⊆ Var(l, sn , tn ).
• Determinism: for all i = 1, . . . , n, we have Var(si ) ⊆ Var(l, ti−1 ).
Intuitively, the backward relation ↽R can be seen as equivalent to the forward relation ⇀R but using a reverse rule of the form r → l ⇐ tn ։
sn , . . . , t1 ։ s1 . Therefore, in order to ensure that backward reduction is
deterministic, we need the same conditions as above but on the reverse rule:4
• 3-CTRS: Var(l) ⊆ Var(r, sn , tn ).
• Determinism: for all i = 1, . . . , n, Var(ti ) ⊆ Var(r, si+1,n ).
Since these conditions cannot be guaranteed in general, we store
σ ′ = σ|`((Var(l)\Var(r,sn ,tn )) ∪ ⋃ni=1 Var(ti )\Var(r,si+1,n ))
in the trace term so that (r → l ⇐ tn ։ sn , . . . , t1 ։ s1 )σ ′ is deterministic and fulfills the conditions of a 3-CTRS by construction, i.e., Var(lσ ′ ) ⊆
Var(rσ ′ , sn σ ′ , tn σ ′ ) and for all i = 1, . . . , n, Var(ti σ ′ ) ⊆ Var(rσ ′ , si+1,n σ ′ ); see
the proof of Theorem 21 for more details.
Example 16. Consider the following DCTRS:
β1 : f(x, y, m) → s(w) ⇐ h(x) ։ x, g(y, 4) ։ w
β2 : h(0) → 0
β3 : h(1) → 1
β4 : g(x, y) → x
and the step hf(0, 2, 4), [ ]i ⇀R hs(2), [β1 (ǫ, σ ′ , π1 , π2 )]i with σ ′ = {m 7→
4, x 7→ 0}, π1 = [β2 (ǫ, id)] and π2 = [β4 (ǫ, {y 7→ 4})]. The binding of variable
m is required to recover the value of the erased variable m, but the binding of
variable x is also needed to perform the subderivation hx, π1 i ↽R hh(x), [ ]i
when applying a backward step from hs(2), [β1 (ǫ, σ ′ , π1 , π2 )]i. If the binding
for x were unknown, this step would not be deterministic. As mentioned
above, an instantiated reverse rule (s(w) → f(x, y, m) ⇐ w ։ g(y, 4), x ։
h(x))σ ′ = s(w) → f(0, y, 4) ⇐ w ։ g(y, 4), 0 ։ h(0) would be a legal
DCTRS rule thanks to σ ′ .
We note that similar conditions could be defined for arbitrary 3-CTRSs.
However, the conditions would be much more involved; e.g., one would first have to compute the variable dependencies between the equations in the conditions.
4 We note that the notion of a non-erasing rule is extended to the DCTRSs in [32], which results in a similar condition.
Therefore, we prefer to keep the simpler conditions for DCTRSs (where these
dependencies are fixed), which is still quite a general class of CTRSs.
Reversible rewriting is also a conservative extension of rewriting for DCTRSs (we omit the proof since it is straightforward):
Theorem 17. Let R be a DCTRS. Given ground terms s, t, if s →∗R t, then
for any trace π there exists a trace π ′ such that hs, πi ⇀∗R ht, π ′ i.
For the following result, we need some preliminary notions (see, e.g., [36]).
For every oriented CTRS R, we inductively define the TRSs Rk , k ≥ 0, as
follows:
R0 = ∅
Rk+1 = {lσ → rσ | l → r ⇐ sn ։ tn ∈ R, si σ →∗Rk ti σ for all i = 1, . . . , n}
Observe that Rk ⊆ Rk+1 for all k ≥ 0. We have →R = ⋃i≥0 →Ri . We also have s →R t iff s →Rk t for some k ≥ 0. The minimum such k is called the depth of s →R t, and the maximum depth k of s = s0 →Rk1 · · · →Rkm sm = t (i.e., k is the maximum of depths k1 , . . . , km ) is called the depth of the derivation. If a derivation has depth k and length m, we write s →mRk t.
Analogous notions can naturally be defined for ⇀R , ↽R , and ⇋R .
The next result shows that safe pairs are also preserved through reversible
rewriting with DCTRSs:
Lemma 18. Let R be a DCTRS and hs, πi a safe pair. If hs, πi ⇋∗R ht, π ′ i,
then ht, π ′ i is also safe.
Proof. We prove the claim by induction on the lexicographic product (k, m) of the depth k and the length m of the derivation hs, πi ⇋mRk ht, π ′i. Since the base case is trivial, we consider the inductive case (k, m) > (0, 0). Consider a derivation hs, πi ⇋m−1Rk hs0 , π0 i ⇋Rk ht, π ′i. By the induction hypothesis, we have that hs0 , π0 i is a safe pair. Now, we distinguish two cases depending on the last step. If the last step is hs0 , π0 i ⇀Rk ht, π ′i, then there exist a position p ∈ Pos(s0 ), a rewrite rule β : l → r ⇐ sn ։ tn ∈ R, and a ground substitution σ such that s0 |p = lσ, hsi σ, [ ]i ⇀∗Rki hti σ, πi i for all i = 1, . . . , n, t = s0 [rσ]p , σ ′ = σ|`((Var(l)\Var(r,sn ,tn )) ∪ ⋃ni=1 Var(ti )\Var(r,si+1,n )), and π ′ = β(p, σ ′ , π1 , . . . , πn ) : π0 . Then, since ki < k, i = 1, . . . , n, σ ′ is ground and Dom(σ ′ ) = (Var(l)\Var(r, sn , tn )) ∪ ⋃ni=1 Var(ti )\Var(r, si+1,n ) by construction, the claim follows by induction. Finally, if the last step has the form hs0 , π0 i ↽Rk ht, π ′i, then the claim follows trivially since a step with ↽R only removes trace terms from π0 .
✷
As in the unconditional case, the following proposition follows straightforwardly from the previous lemma since any pair with an empty trace is safe.
Proposition 19. Let R be a DCTRS. If hs, [ ]i ⇋∗R ht, πi, then ht, πi is safe
in R.
Now, we can already state the reversibility of ⇀R for DCTRSs:
Theorem 20. Let R be a DCTRS. Given the safe pairs hs, πi and ht, π ′i, for all k, m ≥ 0, hs, πi ⇀mRk ht, π ′i iff ht, π ′i ↽mRk hs, πi.
Proof. (⇒) We prove the claim by induction on the lexicographic product (k, m) of the depth k and the length m of the derivation hs, πi ⇀mRk ht, π ′i. Since the base case is trivial, we consider the inductive case (k, m) > (0, 0). Consider a derivation hs, πi ⇀m−1Rk hs0 , π0 i ⇀Rk ht, π ′i whose associated product is (k, m). By Proposition 19, both hs0 , π0 i and ht, π ′i are safe. By the induction hypothesis, since (k, m − 1) < (k, m), we have hs0 , π0 i ↽m−1Rk hs, πi. Consider now the step hs0 , π0 i ⇀Rk ht, π ′i. Thus, there exist a position p ∈ Pos(s0 ), a rule β : l → r ⇐ sn ։ tn ∈ R, and a ground substitution σ such that s0 |p = lσ, hsi σ, [ ]i ⇀∗Rki hti σ, πi i for all i = 1, . . . , n, t = s0 [rσ]p , σ ′ = σ|`((Var(l)\Var(r,sn ,tn )) ∪ ⋃ni=1 Var(ti )\Var(r,si+1,n )), and π ′ = β(p, σ ′ , π1 , . . . , πn ) : π0 . By definition of ⇀Rk , we have that ki < k and, thus, (ki , m1 ) < (k, m2 ) for all i = 1, . . . , n and for all m1 , m2 . Hence, by the induction hypothesis, we have hti σ, πi i ↽∗Rki hsi σ, [ ]i for all i = 1, . . . , n. Let θ = σ|`Var(r,sn )\Dom(σ′ ) , so that σ = θσ ′ and Dom(θ) ∩ Dom(σ ′ ) = ∅. Therefore, we have ht, π ′i ↽Rk hs′0 , π0 i with t|p = rθ, β : l → r ⇐ sn ։ tn ∈ R and s′0 = t[lθσ ′ ]p = t[lσ]p = s0 , and the claim follows.
(⇐) This direction proceeds in a similar way. We prove the claim by induction on the lexicographic product (k, m) of the depth k and the length m of the considered derivation. Since the base case is trivial, let us consider the inductive case (k, m) > (0, 0). Consider a derivation ht, π ′i ↽m−1Rk hs0 , π0 i ↽Rk hs, πi whose associated product is (k, m). By Proposition 19, both hs0 , π0 i and hs, πi are safe. By the induction hypothesis, since (k, m − 1) < (k, m), we have hs0 , π0 i ⇀m−1Rk ht, π ′i. Consider now the step hs0 , π0 i ↽Rk hs, πi. Then, we have π0 = β(p, σ ′ , π1 , . . . , πn ) : π, β : l → r ⇐ sn ։ tn ∈ R, and there exists a ground substitution θ with Dom(θ) = Var(r, sn )\Dom(σ ′ ) such that s0 |p = rθ, hti θσ ′ , πi i ↽∗Rki hsi θσ ′ , [ ]i for all i = 1, . . . , n, and s = s0 [lθσ ′ ]p . Moreover, since hs0 , π0 i is safe, we have that Dom(σ ′ ) = (Var(l)\Var(r, sn , tn )) ∪ ⋃ni=1 Var(ti )\Var(r, si+1,n ). By definition of ↽Rk , we have that ki < k and, thus, (ki , m1 ) < (k, m2 ) for all i = 1, . . . , n and for all m1 , m2 . By the induction hypothesis, we have hsi θσ ′ , [ ]i ⇀∗Rki hti θσ ′ , πi i for all i = 1, . . . , n. Let σ = θσ ′ , with Dom(θ) ∩ Dom(σ ′ ) = ∅. Then, since s|p = lσ, we can perform the step hs, πi ⇀Rk hs′0 , β(p, σ ′ , π1 , . . . , πn ) : πi with s′0 = s[rσ]p = s[rθσ ′ ]p ; moreover, s[rθσ ′ ]p = s[rθ]p = s0 [rθ]p = s0 since Dom(σ ′ ) ∩ Var(r) = ∅, which concludes the proof.
✷
In the following, we say that ht, π ′ i ↽R hs, πi is a deterministic step if there
is no other, different pair hs′′ , π ′′ i with ht, π ′ i ↽R hs′′ , π ′′ i and, moreover,
the subderivations for the equations in the condition of the applied rule (if
any) are deterministic, too. We say that a derivation ht, π ′ i ↽∗R hs, πi is
deterministic if each reduction step in the derivation is deterministic.
Now, we can already prove that backward reversible rewriting is also
deterministic, as in the unconditional case:
Theorem 21. Let R be a DCTRS. Let ht, π ′ i be a safe pair with ht, π ′ i ↽∗R
hs, πi for some term s and trace π. Then ht, π ′i ↽∗R hs, πi is deterministic.
Proof. We prove the claim by induction on the lexicographic product (k, m)
of the depth k and the length m of the steps. The case m = 0 is trivial, and
thus we let m > 0. Assume ht, π ′i ↽m−1Rk hu, π ′′i ↽Rk hs, πi. For the base
case k = 1, the applied rule is unconditional and the proof is analogous to
that of Theorem 11.
Let us now consider k > 1. By definition, if hu, π ′′ i ↽Rk hs, πi, we
have π ′′ = β(p, σ ′ , π1 , . . . , πn ) : π, β : l → r ⇐ sn ։ tn ∈ R and there exists a ground substitution θ with Dom(θ) = Var(r, sn )\Dom(σ ′ ) such that u|p = rθ, hti θσ ′ , πi i ↽∗Rj hsi θσ ′ , [ ]i, j < k, for all i = 1, . . . , n, and s = u[lθσ ′ ]p . By
the induction hypothesis, the subderivations hti θσ ′ , πi i ↽∗Rj hsi θσ ′ , [ ]i are
deterministic, i.e., hsi θσ ′ , [ ]i is a unique resulting term obtained by reducing hti θσ ′ , πi i. Therefore, the only remaining source of nondeterminism can
come from choosing a rule labeled with β and from the computed substitution θ. On the one hand, the labels are unique in R. As for θ, we prove that
this is indeed the only possible substitution for the reduction step. Consider
the instance of rule l → r ⇐ sn ։ tn with σ ′ : lσ ′ → rσ ′ ⇐ sn σ ′ ։ tn σ ′ .
Since hu, π ′′i is safe, we have that σ ′ is a ground substitution and Dom(σ ′ ) = (Var(l)\Var(r, sn , tn )) ∪ ⋃ni=1 Var(ti )\Var(r, si+1,n ). Then, the following properties hold:
• Var(lσ ′ ) ⊆ Var(rσ ′ , sn σ ′ , tn σ ′ ), since σ ′ is ground and it covers all the
variables in Var(l)\Var(r, sn , tn ).
• Var(ti σ ′ ) ⊆ Var(rσ ′ , si+1,n σ ′ ) for all i = 1, . . . , n, since σ ′ is ground and it covers all variables in ⋃ni=1 Var(ti )\Var(r, si+1,n ).
The above properties guarantee that a rule of the form rσ ′ → lσ ′ ⇐ tn σ ′ ։
sn σ ′ , . . . , t1 σ ′ ։ s1 σ ′ can be seen as a rule of a DCTRS and, thus, there
exists a deterministic procedure to compute θ, which completes the proof. ✷
Therefore, ↽R is deterministic and confluent. Termination is trivially guaranteed for pairs with a finite trace since the trace’s length strictly decreases
with every backward step.
4. Removing Positions from Traces
Once we have a feasible definition of reversible rewriting, there are two
refinements that can be considered: i) reducing the size of the traces and
ii) defining a reversibilization transformation so that standard rewriting becomes reversible in the transformed system. In this section, we consider the
first problem, leaving the second one for the next section.
In principle, one could remove information from the traces by requiring certain conditions on the considered systems. For instance, requiring
injective functions may help to remove rule labels from trace terms. Also,
requiring non-erasing rules may help to remove the second component of
trace terms (i.e., the substitutions). In this section, however, we deal with
a more challenging topic: removing positions from traces. This is useful not
only to reduce the size of the traces but it is also essential to define a reversibilization technique for DCTRSs in the next section.5 In particular, we
aim at transforming a given DCTRS into one that fulfills some conditions
that make storing positions unnecessary.
In the following, given a CTRS R, we say that a term t is basic [18] if it
has the form f (tn ) with f ∈ DR a defined function symbol and tn ∈ T (CR , V)
constructor terms. Furthermore, in the remainder of this paper, we assume
that the right-hand sides of the equations in the conditions of the rules of
a DCTRS are constructor terms. This is not a significant restriction since
these terms cannot be reduced anyway (since we consider oriented equations
in this paper), and still covers most practical examples.
Now, we introduce the following subclass of DCTRSs:
Definition 22 (pcDCTRS [30]). We say that a DCTRS R is a pcDCTRS
(“pc” stands for pure constructor ) if, for each rule l → r ⇐ sn ։ tn ∈ R, we
have that l and sn are basic terms and r and tn are constructor terms.
5 We note that defining a transformation with traces that include positions would be a rather difficult task because positions are dynamic (i.e., they depend on the term being reduced) and thus would require a complex (and inefficient) system instrumentation.
Pure constructor systems are called normalized systems in [3]. Also, they
are mostly equivalent to the class IIIn of conditional systems in [8], where
t1 , . . . , tn are required to be ground unconditional normal forms instead.6
In principle, any DCTRS with basic terms in the left-hand sides (i.e., a
constructor DCTRS) and constructor terms in the right-hand sides of the
equations of the rules can be transformed into a pcDCTRS by applying
a few simple transformations: flattening and simplification of constructor
conditions. Let us now consider each of these transformations separately.
Roughly speaking, flattening involves transforming a term (occurring, e.g.,
in the right-hand side of a DCTRS or in the condition) with nested defined
functions like f(g(x)) into a term f(y) and an (oriented) equation g(x) ։ y,
where y is a fresh variable. Formally,
Definition 23 (flattening). Let R be a CTRS, R = (l → r ⇐ sn ։ tn ) ∈
R be a rule and R′ be a new rule either of the form l → r ⇐ s1 ։
t1 , . . . , si |p ։ w, si [w]p ։ ti , . . . , sn ։ tn , for some p ∈ Pos(si ), 1 6 i 6 n,
or l → r[w]q ⇐ sn ։ tn , r|q ։ w, for some q ∈ Pos(r), where w is a fresh
variable.7 Then, a CTRS R′ is obtained from R by a flattening step if
R′ = (R\{R}) ∪ {R′ }.
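As an illustration of a flattening step on the right-hand side of a rule (a sketch of ours; Definition 23 is more general, since it also covers conditions and arbitrary subterm choices), the following Haskell function extracts one basic subterm of r, replaces it by a fresh variable w and adds the condition r|q ։ w; the set of defined symbols and the fresh name are parameters we assume given:

data Term = Var String | Fun String [Term] deriving (Show, Eq)
data Rule = Rule { lhs :: Term, rhs :: Term, conds :: [(Term, Term)] } deriving Show

isConstr :: [String] -> Term -> Bool           -- constructor term w.r.t. defined symbols ds
isConstr _  (Var _)    = True
isConstr ds (Fun f ts) = f `notElem` ds && all (isConstr ds) ts

isBasic :: [String] -> Term -> Bool            -- f(tn) with f defined and tn constructor terms
isBasic ds (Fun f ts) = f `elem` ds && all (isConstr ds) ts
isBasic _  _          = False

-- replace the leftmost basic subterm of t by w; also return that subterm
extractBasic :: [String] -> Term -> Term -> Maybe (Term, Term)
extractBasic ds w t
  | isBasic ds t = Just (w, t)
extractBasic ds w (Fun f ts) = go [] ts
  where
    go _   []       = Nothing
    go pre (u : us) = case extractBasic ds w u of
      Just (u', e) -> Just (Fun f (reverse pre ++ u' : us), e)
      Nothing      -> go (u : pre) us
extractBasic _ _ _ = Nothing

-- one flattening step on the right-hand side: l → r ⇐ C  becomes  l → r[w]q ⇐ C, r|q ։ w
flattenRhs :: [String] -> String -> Rule -> Maybe Rule
flattenRhs ds fresh (Rule l r cs) = do
  (r', e) <- extractBasic ds (Var fresh) r
  return (Rule l r' (cs ++ [(e, Var fresh)]))

-- e.g., with ds = ["add","mult"], the rule  mult(s(x),y) → add(mult(x,y),y)  is flattened to
-- mult(s(x),y) → add(z,y) ⇐ mult(x,y) ։ z; a second step yields the rule of Example 29 below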
Note that, if an unconditional rule is non-erasing (i.e., Var(l) ⊆ Var(r) for
a rule l → r), any conditional rule obtained by flattening is trivially nonerasing too, according to the notion of non-erasingness for DCTRSs in [32].8
Flattening is trivially complete since any flattening step can be undone by
binding the fresh variable again to the selected subterm and, then, proceeding
as in the original system. Soundness is more subtle though. In this work,
we prove the correctness of flattening for arbitrary DCTRSs with respect to
innermost rewriting. As usual, the innermost rewrite relation, in symbols i→R , is defined as the smallest binary relation satisfying the following: given ground terms s, t ∈ T (F ), we have s i→R t iff there exist a position p in s such that no proper subterms of s|p are reducible, a rewrite rule l → r ⇐ sn ։ tn ∈ R, and a normalized ground substitution σ such that s|p = lσ, si σ i→∗R ti σ, for all i = 1, . . . , n, and t = s[rσ]p .
In order to prove the correctness of flattening, we state the following
auxiliary lemma:
6 Given a CTRS R, we define Ru = {l → r | l → r ⇐ sn ։ tn ∈ R}. A term is an unconditional normal form in R, if it is a normal form in Ru .
7 The positions p, q can be required to be different from ǫ, but this is not strictly necessary.
8 Roughly, a DCTRS is considered non-erasing in [32] if its transformation into an unconditional TRS by an unraveling transformation gives rise to a non-erasing TRS.
Lemma 24. Let R be a DCTRS. Given terms s and t, with t a normal form, and a position p ∈ Pos(s), we have s i→∗R t iff s|p i→∗R wσ and s[wσ]p i→∗R t, for some fresh variable w and normalized substitution σ.
Proof. (⇒) Let us consider an arbitrary position p ∈ Pos(s). If s|p is
normalized, the proof is straightforward. Otherwise, since we use innermost
reduction (leftmost innermost, for simplicity), we can represent the derivation
i
s →∗R t as follows:
i
i
i
s[s|p ]p →∗R s′ [s|p ]p →∗R s′ [s′′ ]p →∗R t
i
where s′′ is a normal form and the subderivation s[s|p ]p → ∗R s′ [s|p ]p reduces the leftmost innermost subterms that are to the left of s|p (if any).
i
Then, by choosing σ = {w 7→ s′′ } we have s|p → ∗R wσ (by mimicking the
i
i
steps of s′ [s|p ]p → ∗R s′ [s′′ ]p ), s[wσ]p →∗R s′ [wσ]p (by mimicking the steps of
i
i
i
s[s|p ]p →∗R s′ [s|p ]p ), and s′ [wσ]p →∗R t (by mimicking the steps of s′ [s′′ ]p →∗R t),
which concludes the proof.
(⇐) This direction is perfectly analogous to the previous case. We consider an arbitrary position p ∈ Pos(s) such that s|p is not normalized (otherwise, the proof is trivial). Now, since derivations are innermost, we can
i
i
i
consider that s[wσ]p → ∗R t is as follows: s[wσ]p → ∗R s′ [wσ]p → ∗R t, where
i
s[wσ]p →∗R s′ [wσ]p reduces the innermost subterms to the left of s|p . Therei
i
fore, we have s[s|p ]p →∗R s′ [s|p ]p (by mimicking the steps of s[wσ]p →∗R s′ [wσ]p ),
i
i
s′ [s|p ]p →∗R s′ [s′′ ]p (by mimicking the steps of s|p →∗R wσ, with σ = {w 7→ s′′ }),
i
i
and s′ [s′′ ]p →∗R t (by mimicking the steps of s′ [wσ]p →∗R t).
✷
The following theorem is an easy consequence of the previous lemma:
Theorem 25. Let R be a DCTRS. If R′ is obtained from R by a flattening step, then R′ is a DCTRS and, for all ground terms s, t, with t a normal form, we have s i→∗R t iff s i→∗R′ t.
form, we have s →∗R t iff s →∗R′ t.
Proof. (⇒) We prove the claim by induction on the lexicographic product
i
(k, m) of the depth k and the length m of the derivation s →∗Rk t. Since the
base case is trivial, we consider the inductive case (k, m) > (0, 0). Assume
i
i
i
that s →∗Rk t has the form s[lσ]u →Rk s[rσ]u →∗Rk t with l → r ⇐ sn ։ tn ∈ R
i
and si σ →∗Rk ti σ, ki < k, i = 1, . . . , n. If l → r ⇐ sn ։ tn ∈ R′ , the claim
i
follows directly by induction. Otherwise, we have that either l → r ⇐ s1 ։
t1 , . . . , si |p ։ w, si [w]p ։ ti , . . . , sn ։ tn ∈ R′ , for some p ∈ Pos(si ), 1 6
i 6 n, or l → r[w]q ⇐ sn ։ tn , r|q ։ w ∈ R′ , for some q ∈ Pos(r), where
w is a fresh variable. Consider first the case l → r ⇐ s1 ։ t1 , . . . , si |p ։
w, si [w]p ։ ti , . . . , sn ։ tn ∈ R′ , for some p ∈ Pos(si ), 1 6 i 6 n. Since
i
si σ → ∗Rk ti σ, ki < k, i = 1, . . . , n, by the induction hypothesis, we have
i
i
si σ → ∗R′ ti σ, i = 1, . . . , n. By Lemma 24, there exists σ ′ = {w 7→ s′ }
i
for some normal form s′ such that si |p σ = si |p σσ ′ → ∗Rk wσσ ′ = wσ ′ and
i
′
si [w]p σσ = si σ[wσ
′
i
]p →∗Rk ti .
i
Moreover, since w is an extra variable, we also
i
→∗R′ tj σ
= tj σσ ′ for j = 1, . . . , i − 1, i + 1, . . . , n. Therefore,
have sj σσ ′ = sj σ
i
since lσσ ′ = lσ and rσσ ′ = rσ, we have s[lσ]u →R s[rσ]u , and the claim
follows by induction. Consider the second case. By the induction hypothesis,
i
i
we have s[rσ]u → ∗R′ t and si σ → ∗R′ ti σ for all i = 1, . . . , n. By Lemma 24,
there exists a substitution σ ′ = {w 7→ s′ } such that s′ is the normal form of
i
i
r|q σ and we have r|q σ →∗R′ wσ ′ and s[rσ[wσ ′ ]q ]u →∗R′ t. Moreover, since w is
i
a fresh variable, we have si σσ ′ →∗R′ ti σσ ′ for all i = 1, . . . , n. Therefore, we
i
have s[lσσ ′ ]u = s[lσ]u →R′ s[rσ[wσ ′ ]q ]u , which concludes the proof.
(⇐) This direction is perfectly analogous to the previous one, and follows
easily by Lemma 24 too.
✷
Let us now consider the second kind of transformations: the simplification
of constructor conditions. Basically, we can drop an equation s ։ t when
the terms s and t are constructor, called a constructor condition, by either
applying the most general unifier (mgu) of s and t (if it exists) to the remaining part of the rule, or by deleting entirely the rule if they do not unify
because (under innermost rewriting) the equation will never be satisfied by
any normalized substitution. Similar transformations can be found in [33].
In order to justify these transformations, we state and prove the following
results. In the following, we let mgu(s, t) denote the most general unifier of
terms s and t if it exists, and fail otherwise.
Theorem 26 (removal of unifiable constructor conditions). Let R be a DCTRS and let R = (l → r ⇐ sn ։ tn ) ∈ R be a rule with mgu(si , ti ) = θ, for some i ∈ {1, . . . , n}, where si and ti are constructor terms. Let R′ be a new rule of the form lθ → rθ ⇐ s1 θ ։ t1 θ, . . . , si−1 θ ։ ti−1 θ, si+1 θ ։ ti+1 θ, . . . , sn θ ։ tn θ.9 Then R′ = (R\{R}) ∪ {R′ } is a DCTRS and, for all ground terms s and t, we have s i→∗R t iff s i→∗R′ t.
9 In [33], the condition Dom(θ) ∩ Var(l, r, s1 , t1 , . . . , sn , tn ) = ∅ is required, but this condition is not really necessary.
Proof. (⇒) First, we prove the following claim by induction on the lexicographic product (k, m) of the depth k and the length m of the steps: if
i ∗
i
s →m
Rk t, then s →R′ t. It suffices to consider the case where R is applied, i.e.,
i
i
s = s[lσ]p →{R} s[rσ]p with sj σ →∗Rk tj σ for all j ∈ {1, . . . , n}. By definition,
j
σ is normalized. Hence, since si and ti are constructor terms, we have that
si σ and ti σ are trivially normal forms since the normalized subterms introduced by σ cannot become reducible in a constructor context. Therefore, we
have si σ = ti σ. Thus, σ is a unifier of si and ti and, hence, θ is more general
than σ. Let δ be a substitution such that σ = θδ. Since σ is normalized, so
is δ. Since kj < k for all j = 1, . . . , n, by the induction hypothesis, we have
i
that sj σ = sj θδ →∗R′ tj θδ = tj σ for j ∈ {1, . . . , i − 1, i + 1, . . . , n}. Therefore,
i
we have that s[lσ]p = s[lθδ]p →{R′ } s[rθδ]p = s[rσ]p .
(⇐) Now, we prove the following claim by induction on the lexicographic
i
product (k, m) of the depth k and the length m of the steps: if s → m
R′ t,
k
i
then s → ∗R t. It suffices
i
s[lθδ]p →{R} s[rθδ]p with
′
to consider the case where R is applied, i.e., s =
i
sj θδ →∗R′ tj θδ for all j ∈ {1, . . . , i − 1, i + 1, . . . , n}.
kj
By the assumption and the definition, θ and δ are normalized, and thus, si θδ
and ti θδ are normal forms (as in the previous case, because the normalized
subterms introduced by θδ cannot become reducible in a constructor context),
i.e., si θδ = ti θδ. Since kj < k for all j ∈ {1, . . . , i − 1, i + 1, . . . , n}, by the
i
induction hypothesis, we have that sj θδ → ∗R tj θδ for j ∈ {1, . . . , i − 1, i +
i
1, . . . , n}. Therefore, we have that s[lσ]p = s[lθδ]p →{R} s[rθδ]p = s[rσ] with
σ = θδ.
✷
Now we consider the case when the terms in the constructor condition do
not unify:
Theorem 27 (removal of infeasible rules). Let R be a DCTRS and let R = (l → r ⇐ sn ։ tn ) ∈ R be a rule with mgu(si , ti ) = fail , for some i ∈ {1, . . . , n}. Then R′ = R\{R} is a DCTRS and, for all ground terms s and t, we have s i→∗R t iff s i→∗R′ t.
Proof. Since R ⊇ R′ , the if part is trivial, and thus, we consider the only-if part. To apply R to a term, there must exist a normalized substitution σ such that si σ i→∗R ti σ. Since si , ti are constructor terms and σ is normalized, si σ is a normal form (because the normalized subterms introduced by σ cannot become reducible in a constructor context). If si σ i→∗R ti σ is satisfied (i.e., si σ = ti σ), then si and ti are unifiable, and thus, this contradicts the assumption. Therefore, R is never applied to any term, and hence, s i→∗R t iff s i→∗R′ t.
✷
Using flattening and the simplification of constructor conditions, any constructor DCTRS with constructor terms in the right-hand sides of the equations of the rules can be transformed into a pcDCTRS. One can use, for
instance, the following simple algorithm. Let R be such a constructor DCTRS. We apply the following transformations as much as possible:
(flattening-rhs) Assume that R contains a rule of the form R = (l → r ⇐
sn ։ tn ) where r is not a constructor term. Let r|q , q ∈ Pos(r), be a
basic subterm of r. Then, we replace rule R by a new rule of the form
l → r[w]q ⇐ sn ։ tn , r|q ։ w, where w is a fresh variable.
(flattening-condition) Assume that R contains a rule of the form R = (l →
r ⇐ sn ։ tn ) where si is neither a constructor term nor a basic term,
i ∈ {1, . . . , n}. Let si |q , q ∈ Pos(si ), be a basic subterm of si . Then, we
replace rule R by a new rule of the form l → r ⇐ s1 ։ t1 , . . . , si |q ։
w, si [w]q ։ ti , . . . , sn ։ tn , where w is a fresh variable.
(removal-unify) Assume that R contains a rule of the form R = (l → r ⇐
sn ։ tn ) where si is a constructor term, i ∈ {1, . . . , n}. If mgu(si , ti ) =
θ 6= fail , then we replace rule R by a new rule of the form lθ → rθ ⇐
s1 θ ։ t1 θ, . . . , si−1 θ ։ ti−1 θ, si+1 θ ։ ti+1 θ, . . . , sn θ ։ tn θ.
(removal-fail) Assume that R contains a rule of the form R = (l → r ⇐
sn ։ tn ) where si is a constructor term, i ∈ {1, . . . , n}. If mgu(si , ti ) =
fail , then we remove rule R from R.
Trivially, by applying rule flattening-rhs as much as possible, we end up with
a DCTRS where all the right-hand sides are constructor terms; analogously,
the exhaustive application of rule flattening-condition allows us to ensure that
the left-hand sides of all equations in the conditions of the rules are either constructor or basic; finally, the application of rules removal-unify and removal-fail
produces a pcDCTRS by removing those equations in which the left-hand
side is a constructor term. Therefore, in the remainder of this paper, we only
consider pcDCTRSs.
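The removal-unify and removal-fail steps rely on syntactic unification. A standard most-general-unifier computation, given here as a Haskell sketch of ours (Nothing plays the role of fail), is:

import qualified Data.Map as M

data Term = Var String | Fun String [Term] deriving (Show, Eq)
type Subst = M.Map String Term

applySubst :: Subst -> Term -> Term
applySubst s (Var x)    = M.findWithDefault (Var x) x s
applySubst s (Fun f ts) = Fun f (map (applySubst s) ts)

mgu :: Term -> Term -> Maybe Subst
mgu s0 t0 = go [(s0, t0)] M.empty
  where
    go [] sub = Just sub
    go ((a, b) : rest) sub =
      case (applySubst sub a, applySubst sub b) of
        (Var x, u) | u == Var x -> go rest sub
        (Var x, u) | occurs x u -> Nothing
                   | otherwise  -> go rest (bind x u sub)
        (u, Var x) | occurs x u -> Nothing
                   | otherwise  -> go rest (bind x u sub)
        (Fun f as, Fun g bs)
          | f == g && length as == length bs -> go (zip as bs ++ rest) sub
        _ -> Nothing
    occurs x (Var y)    = x == y
    occurs x (Fun _ ts) = any (occurs x) ts
    bind x u sub = M.insert x u (M.map (applySubst (M.singleton x u)) sub)

-- e.g., mgu (Fun "s" [Var "x"]) (Fun "s" [Fun "0" []]) yields {x mapped to 0}
-- (removal-unify), while mgu (Fun "0" []) (Fun "s" [Var "x"]) yields Nothing (removal-fail)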
A nice property of pcDCTRSs is that one can consider reductions only at
topmost positions. Formally, given a pcDCTRS R, we say that s →p,l→r⇐sn ։tn t is a top reduction step if p = ǫ, there is a ground substitution σ with s = lσ, si σ →∗R ti σ for all i = 1, . . . , n, t = rσ, and all the steps in si σ →∗R ti σ for i = 1, . . . , n are also top reduction steps. We denote top reductions with ǫ→ for standard rewriting, and ǫ⇀R , ǫ↽R for our reversible rewrite relations.
The following result basically states that i→ and ǫ→ are equivalent for pcDCTRSs:
Theorem 28. Let R be a constructor DCTRS with constructor terms in the
right-hand sides of the equations and R′ be a pcDCTRS obtained from R by
a sequence of transformations of flattening and simplification of constructor conditions. Given ground terms s and t such that s is basic and t is normalized, we have s i→∗R t iff s ǫ→∗R′ t.
Proof. First, it is straightforward to see that an innermost reduction in
R′ can only reduce the topmost positions of terms since defined functions
can only occur at the root of terms and the terms introduced by instantiation are, by definition, irreducible. Therefore, the claim is a consequence of
Theorems 25, 26 and 27, together with the above fact.
✷
Therefore, when considering pcDCTRSs and top reductions, storing the reduced positions in the trace terms becomes redundant since they are always
ǫ. Thus, in practice, one can consider simpler trace terms without positions,
β(σ, π1 , . . . , πn ), that implicitly represent the trace term β(ǫ, σ, π1 , . . . , πn ).
Example 29. Consider the following TRS R defining addition and multiplication on natural numbers, and its associated pcDCTRS R′ :
R = { add(0, y) → y,
      add(s(x), y) → s(add(x, y)),
      mult(0, y) → 0,
      mult(s(x), y) → add(mult(x, y), y) }
R′ = { add(0, y) → y,
       add(s(x), y) → s(z) ⇐ add(x, y) ։ z,
       mult(0, y) → 0,
       mult(s(x), y) → w ⇐ mult(x, y) ։ z, add(z, y) ։ w }
For instance, given the following reduction in R:
mult(s(0), s(0)) i→R add(mult(0, s(0)), s(0)) i→R add(0, s(0)) i→R s(0)
we have the following counterpart in R′ :
mult(s(0), s(0)) ǫ→R′ s(0) with mult(0, s(0)) ǫ→R′ 0 and add(0, s(0)) ǫ→R′ s(0)
Trivially, all results in Section 3 hold for pcDCTRSs and top reductions starting from basic terms. The simpler trace terms without positions will allow
us to introduce appropriate injectivization and inversion transformations in
the next section.
5. Reversibilization
In this section, we aim at compiling the reversible extension of rewriting
into the system rules. Intuitively speaking, given a pure constructor system
R, we aim at producing new systems Rf and Rb such that standard rewriting
in Rf , i.e., →Rf , coincides with the forward reversible extension ⇀R in the
original system, and analogously →Rb is equivalent to ↽R . Therefore, Rf
can be seen as an injectivization of R, and Rb as the inversion of Rf .
In principle, we could easily introduce a transformation for pcDCTRSs
that mimics the behavior of the reversible extension of rewriting. For instance, given the pcDCTRS R of Example 16, we could produce the following
injectivized version Rf :10
hf(x, y, m), wsi → hs(w), β1 (m, x, w1 , w2 ) : wsi
⇐ hh(x), [ ]i ։ hx, w1 i, hg(y, 4), [ ]i ։ hw, w2i
hh(0), wsi → h0, β2 : wsi
hh(1), wsi → h1, β3 : wsi
hg(x, y), wsi → hx, β4 (y) : wsi
For instance, the reversible step hf(0, 2, 4), [ ]i ǫ⇀R hs(2), [β1 (σ ′ , π1 , π2 )]i with σ ′ = {m 7→ 4, x 7→ 0}, π1 = [β2 (id)] and π2 = [β4 ({y 7→ 4})], has the following counterpart in Rf :
hf(0, 2, 4), [ ]i ǫ→Rf hs(2), [β1 (4, 0, [β2 ], [β4 (4)])]i
with hh(0), [ ]i ǫ→Rf h0, [β2 ]i and hg(2, 4), [ ]i ǫ→Rf h2, [β4 (4)]i
The only subtle difference here is that a trace term like
β1 ({m 7→ 4, x 7→ 0}, [β2 (id)], [β4 ({y 7→ 4})])
is now stored in the transformed system as
β1 (4, 0, [β2 ], [β4 (4)])
10 We will write just β instead of β() when no argument is required.
Furthermore, we could produce an inverse Rb of the above system as follows:
hs(w), β1 (m, x, w1 , w2 ) : wsi−1 → hf(x, y, m), wsi−1
⇐ hw, w2i−1 ։ hg(y, 4), [ ]i−1,
hx, w1 i−1 ։ hh(x), [ ]i−1
h0, β2 : wsi−1 → hh(0), wsi−1
h1, β3 : wsi−1 → hh(1), wsi−1
hx, β4 (y) : wsi−1 → hg(x, y), wsi−1
mainly by switching the left- and right-hand sides of each rule and condition.
The correctness of these injectivization and inversion transformations would
be straightforward.
These transformations are only aimed at mimicking, step by step, the
reversible relations ⇀R and ↽R . Roughly speaking, for each step hs, πi ⇀R
ht, π ′ i in a system R, we have hs, πi →Rf ht, π ′i, where Rf is the injectivized version of R, and for each step hs, πi ↽R ht, π ′ i in R, we have
hs, πi →Rb ht, π ′i, where Rb is the inverse of Rf . More details on this approach can be found in [31]. However, it would be much more useful to
produce injective and inverse versions of each function defined in a system
R. Note that, in the above approach, the system Rf only defines a single function h , i and Rb only defines h , i−1 , i.e., we are computing systems
that define the relations ⇀R and ↽R rather than the injectivized and inverse
versions of the functions in R. In the following, we introduce more refined
transformations that can actually produce injective and inverse versions of
the original functions.
5.1. Injectivization
In principle, given a function f, one can consider that the injectivization
of a rule of the form11
β : f(s0 ) → r ⇐ f1 (s1 ) ։ t1 , . . . , fn (sn ) ։ tn
produces the following rule
f i (s0 ) → hr, β(y, wn )i ⇐ f1i (s1 ) ։ ht1 , w1 i, . . . , fni (sn ) ։ htn , wn i
where {y} = (Var(l)\Var(r, sn , tn )) ∪ ⋃ni=1 Var(ti )\Var(r, si+1,n ) and wn are
fresh variables. The following example, though, illustrates that this is not
correct in general.
11 By abuse of notation, here we let s0 , . . . , sn denote sequences of terms of arbitrary length, i.e., s0 = s0,1 , . . . , s0,l0 , s1 = s1,1 , . . . , s1,l1 , etc.
Example 30. Consider the following pcDCTRS R:
β1 : f(x, y) → z ⇐ h(y) ։ w, first(x, w) ։ z
β2 : h(0) → 0
β3 : first(x, y) → x
together with the following top reduction:
f(2, 1) ǫ→R 2 with σ = {x 7→ 2, y 7→ 1, w 7→ h(1), z 7→ 2}
where h(y)σ = h(1) ǫ→∗R h(1) = wσ
and first(x, w)σ = first(2, h(1)) ǫ→R 2 = zσ
Following the scheme above, we would produce the following pcDCTRS
f i (x, y) → hz, β1 (w1 , w2 )i ⇐ hi (y) ։ hw, w1 i, firsti (x, w) ։ hz, w2 i
hi (0) → h0, β2 i
firsti (x, y) → hx, β3 (y)i
Unfortunately, the corresponding reduction for f i (2, 1) above cannot be done
in this system since hi (1) cannot be reduced to hhi (1), [ ]i.
In order to overcome this drawback, one could complete the function definitions with rules that reduce each irreducible term t to a tuple of the form ⟨t, [ ]⟩. Although we find it a promising idea for future work, in this paper we propose a simpler approach. In the following, we consider a refinement of innermost reduction where only constructor substitutions are computed. Formally, the constructor reduction relation, →^c, is defined as follows: given ground terms s, t ∈ T(F), we have s →^c_R t iff there exist a position p in s such that no proper subterms of s|p are reducible, a rewrite rule l → r ⇐ s_n ↠ t_n ∈ R, and a ground constructor substitution σ such that s|p = lσ, s_iσ →^{c*}_R t_iσ for all i = 1, . . . , n, and t = s[rσ]_p. Note that the results in the previous section also hold for →^c.
In the following, given a basic term t = f(s), we denote by t^i the term f^i(s). Now, we introduce our injectivization transformation as follows:
Definition 31 (injectivization). Let R be a pcDCTRS. We produce a new CTRS I(R) by replacing each rule β : l → r ⇐ s_n ↠ t_n of R by a new rule of the form
l^i → ⟨r, β(y, w_n)⟩ ⇐ s_n^i ↠ ⟨t_n, w_n⟩
in I(R), where {y} = (Var(l)\Var(r, s_n, t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r, s_{i+1,n}) and w_n are fresh variables. Here, we assume that the variables of y are in lexicographic order.
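To make the definition concrete, the following small Python sketch mechanizes the transformation on a toy term representation. The representation, the helper names (variables, injectivize, the "tup" pair constructor) and the fresh-variable scheme are our own illustration and not part of the formalism; we assume each condition has the basic form f_i(s_i) ↠ t_i and that variables are plain strings.

```python
# Toy representation: terms are tuples ("f", arg1, ...), variables are plain
# strings, and a rule is (label, lhs, rhs, conds) with conds a list of pairs.

def variables(term):
    """Collect the variables of a term (variables are plain strings)."""
    if isinstance(term, str):
        return {term}
    vs = set()
    for arg in term[1:]:
        vs |= variables(arg)
    return vs

def injectivize(rule):
    """Sketch of Definition 31:  l -> r <= s_n ->> t_n   becomes
       l^i -> <r, beta(y, w_n)> <= s_n^i ->> <t_n, w_n>."""
    label, lhs, rhs, conds = rule
    ws = [f"w{i + 1}" for i in range(len(conds))]     # fresh trace variables
    # y = (Var(l) \ Var(r, s_n, t_n))  union  U_i Var(t_i) \ Var(r, s_{i+1..n})
    y = variables(lhs) - variables(rhs)
    for s, t in conds:
        y -= variables(s) | variables(t)
    for i, (_, t) in enumerate(conds):
        later = variables(rhs)
        for s, _ in conds[i + 1:]:
            later |= variables(s)
        y |= variables(t) - later
    y = tuple(sorted(y))                              # lexicographic order
    new_lhs = (lhs[0] + "^i",) + lhs[1:]
    new_rhs = ("tup", rhs, (label,) + y + tuple(ws))
    new_conds = [((s[0] + "^i",) + s[1:], ("tup", t, w))
                 for (s, t), w in zip(conds, ws)]
    return new_lhs, new_rhs, new_conds

# The first rule of Example 30:
rule = ("b1", ("f", "x", "y"), "z",
        [(("h", "y"), "w"), (("first", "x", "w"), "z")])
print(injectivize(rule))
# -> f^i(x, y) -> <z, b1(w1, w2)> <= h^i(y) ->> <w, w1>, first^i(x, w) ->> <z, w2>
```

On the rule of Example 30 the computed set y is empty, so the trace term only collects the condition traces, exactly as in the injectivized system shown there.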
Observe that now we do not need to keep a trace in each term, but only a single trace term, since all reductions finish in one step in a pcDCTRS. The relation between the original trace terms and the information stored in the injectivized system is formalized as follows:
Definition 32. Given a trace term π = β({y_m ↦ t_m}, π1, . . . , πn), we define π̂ recursively as follows: π̂ = β(t_m, π̂1, . . . , π̂n), where we assume that the variables y_m are in lexicographic order.
Moreover, in order to simplify the notation, we consider that a trace term π and a singleton list of the form [π] denote the same object. The correctness of the injectivization transformation is stated as follows:
Theorem 33. Let R be a pcDCTRS and Rf = I(R) be its injectivization. Then Rf is a pcDCTRS and, given a basic ground term s, we have ⟨s, [ ]⟩ ⇀^c_R ⟨t, π⟩ iff s^i →^c_{Rf} ⟨t, π̂⟩.
Proof. The fact that Rf is a pcDCTRS is trivial. Regarding the second part, we proceed as follows:
(⇒) We proceed by induction on the depth k of the step ⟨s, [ ]⟩ ⇀^c_{R,k} ⟨t, π⟩. Since the depth k = 0 is trivial, we consider the inductive case k > 0. Thus, there is a rule β : l → r ⇐ s_n ↠ t_n ∈ R and a substitution σ such that s = lσ, ⟨s_iσ, [ ]⟩ ⇀^c_{R,k_i} ⟨t_iσ, π_i⟩ for i = 1, . . . , n, t = rσ, σ′ = σ|_{(Var(l)\Var(r,s_n,t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r,s_{i+1,n})}, and π = β(σ′, π1, . . . , πn). By definition of ⇀_{R,k}, we have that k_i < k for all i = 1, . . . , n and, thus, by the induction hypothesis, we have (s_iσ)^i →^c_{Rf} ⟨t_iσ, π̂_i⟩ for all i = 1, . . . , n. Consider now the equivalent rule in Rf: l^i → ⟨r, β(y, w_n)⟩ ⇐ s_1^i ↠ ⟨t_1, w_1⟩, . . . , s_n^i ↠ ⟨t_n, w_n⟩. Therefore, we have s^i →^c_{Rf} ⟨t, β(yσ, π̂1, . . . , π̂n)⟩, where {y} = (Var(l)\Var(r, s_n, t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r, s_{i+1,n}) and, thus, we can conclude that π̂ = β(yσ, π̂1, . . . , π̂n).
(⇐) This direction is analogous. We proceed by induction on the depth k of the step s^i →^c_{Rf,k} ⟨t, π̂⟩. Since the depth k = 0 is trivial, we consider the inductive case k > 0. Thus, there is a rule l^i → ⟨r, β(y, w_n)⟩ ⇐ s_1^i ↠ ⟨t_1, w_1⟩, . . . , s_n^i ↠ ⟨t_n, w_n⟩ in Rf and a substitution θ such that l^iθ = s^i, s_i^iθ →^c_{Rf,k_i} ⟨t_i, w_i⟩θ for i = 1, . . . , n, and ⟨r, β(y, w_n)⟩θ = ⟨t, π̂⟩. Assume that σ is the restriction of θ to the variables of the rule, excluding the fresh variables w_n, and that w_iθ = π̂_i for all i = 1, . . . , n. Therefore, ⟨s_i, [ ]⟩θ = ⟨s_iσ, [ ]⟩ and ⟨t_i, w_i⟩θ = ⟨t_iσ, π̂_i⟩ for i = 1, . . . , n. Then, by definition of Rf,k_i, we have that k_i < k for all i = 1, . . . , n and, thus, by the induction hypothesis, we have ⟨s_iσ, [ ]⟩ ⇀^c_R ⟨t_iσ, π_i⟩ for i = 1, . . . , n. Consider now the equivalent rule in R: β : l → r ⇐ s_n ↠ t_n ∈ R. Therefore, we have ⟨s, [ ]⟩ ⇀^c_R ⟨t, π⟩, σ′ = σ|_{(Var(l)\Var(r,s_n,t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r,s_{i+1,n})}, and π = β(σ′, π1, . . . , πn). Finally, since {y} = (Var(l)\Var(r, s_n, t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r, s_{i+1,n}), we can conclude that π̂ = π. □
5.2. Inversion
Given an injectivized system, inversion basically amounts to switching the
left- and right-hand sides of the rule and of every equation in the condition,
as follows:
Definition 34 (inversion). Let R be a pcDCTRS and Rf = I(R) be its injectivization. The inverse system Rb = I⁻¹(Rf) is obtained from Rf by replacing each rule¹²
f^i(s0) → ⟨r, β(y, w_n)⟩ ⇐ f_1^i(s1) ↠ ⟨t1, w1⟩, . . . , f_n^i(sn) ↠ ⟨tn, wn⟩
of Rf by a new rule of the form
f⁻¹(r, β(y, w_n)) → ⟨s0⟩ ⇐ f_n⁻¹(tn, wn) ↠ ⟨sn⟩, . . . , f_1⁻¹(t1, w1) ↠ ⟨s1⟩
in I⁻¹(Rf), where the variables of y are in lexicographic order.
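Continuing the toy representation used after Definition 31, inversion can be sketched as follows; again, the encoding and the names are our own illustration, not the formal construction itself.

```python
def invert(inj_rule):
    """Sketch of Definition 34:
       f^i(s0) -> <r, beta(y, w_n)> <= f_1^i(s1) ->> <t1, w1>, ...
    becomes
       f^-1(r, beta(y, w_n)) -> <s0> <= f_n^-1(tn, wn) ->> <sn>, ..."""
    lhs, rhs, conds = inj_rule
    f, s0 = lhs[0][:-2], lhs[1:]                     # drop the "^i" suffix
    _, r, trace = rhs                                # rhs is ("tup", r, trace)
    new_lhs = (f + "^-1", r, trace)
    new_rhs = ("tup",) + s0
    new_conds = []
    for si, (_, ti, wi) in reversed(conds):          # conditions are reversed
        new_conds.append(((si[0][:-2] + "^-1", ti, wi), ("tup",) + si[1:]))
    return new_lhs, new_rhs, new_conds

print(invert(injectivize(rule)))                     # rule of the previous sketch
# -> f^-1(z, b1(w1, w2)) -> <x, y>
#      <= first^-1(z, w2) ->> <x, w>, h^-1(w, w1) ->> <y>
```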
Example 35. Consider again the pcDCTRS of Example 16. Here, injectivization returns the following pcDCTRS I(R) = Rf:
f^i(x, y, m) → ⟨s(w), β1(m, x, w1, w2)⟩ ⇐ h^i(x) ↠ ⟨x, w1⟩, g^i(y, 4) ↠ ⟨w, w2⟩
h^i(0) → ⟨0, β2⟩
h^i(1) → ⟨1, β3⟩
g^i(x, y) → ⟨x, β4(y)⟩
Then, inversion with I⁻¹ produces the following pcDCTRS I⁻¹(I(R)) = Rb:
f⁻¹(s(w), β1(m, x, w1, w2)) → ⟨x, y, m⟩ ⇐ g⁻¹(w, w2) ↠ ⟨y, 4⟩, h⁻¹(x, w1) ↠ ⟨x⟩
h⁻¹(0, β2) → ⟨0⟩
h⁻¹(1, β3) → ⟨1⟩
g⁻¹(x, β4(y)) → ⟨x, y⟩
Finally, the correctness of the inversion transformation is stated as follows:
12 Here, we assume that s0, s1, . . . , sn denote arbitrary sequences of terms, i.e., s0 = s0,1, . . . , s0,l0, s1 = s1,1, . . . , s1,l1, etc. We use this notation for clarity.
Theorem 36. Let R be a pcDCTRS, Rf = I(R) its injectivization, and Rb = I⁻¹(Rf) the inversion of Rf. Then, Rb is a basic pcDCTRS and, given a basic ground term f(s) and a constructor ground term t with ⟨t, π⟩ a safe pair, we have ⟨t, π⟩ ↽^c_R ⟨f(s), [ ]⟩ iff f⁻¹(t, π̂) →^c_{Rb} ⟨s⟩.
Proof. The fact that Rb is a pcDCTRS is trivial. Regarding the second part, we proceed as follows.
(⇒) We proceed by induction on the depth k of the step ⟨t, π⟩ ↽^c_{R,k} ⟨f(s), [ ]⟩. Since the depth k = 0 is trivial, we consider the inductive case k > 0. Let π = β(σ′, π_n). Thus, we have that ⟨t, β(σ′, π_n)⟩ is a safe pair, there is a rule β : f(s0) → r ⇐ f1(s1) ↠ t1, . . . , fn(sn) ↠ tn and a substitution θ with Dom(θ) = Var(r, s1, . . . , sn)\Dom(σ′) such that t = rθ, ⟨t_iθσ′, π_i⟩ ↽^c_{R,k_i} ⟨f_i(s_i)θσ′, [ ]⟩ for all i = 1, . . . , n, and f(s) = f(s0)θσ′. Note that s0, . . . , sn denote sequences of terms of arbitrary length, i.e., s0 = s0,1, . . . , s0,l0, s1 = s1,1, . . . , s1,l1, etc. Since ⟨t, π⟩ is a safe pair, we have that Dom(σ′) = (Var(s0)\Var(r, s1, . . . , sn, t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r, s_{i+1}, . . . , s_n). By definition of ↽_{R,k}, we have that k_i < k for all i = 1, . . . , n and, by the induction hypothesis, we have f_i⁻¹(t_iσ, π̂_i) →^c_{Rb} ⟨s_iσ⟩ for all i = 1, . . . , n. Let us now consider the equivalent rule in Rb:
f⁻¹(r, β(y, w_n)) → ⟨s0⟩ ⇐ f_n⁻¹(t_n, w_n) ↠ ⟨s_n⟩, . . . , f_1⁻¹(t_1, w_1) ↠ ⟨s_1⟩
Hence, we have f⁻¹(t, β(yσ, π̂1, . . . , π̂n)) →_{Rb} ⟨s0σ⟩ = ⟨s⟩, where
{y} = (Var(s0)\Var(r, s1, . . . , sn, t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r, s_{i+1}, . . . , s_n)
and, thus, we can conclude that π̂ = β(yσ, π̂1, . . . , π̂n).
(⇐) This direction is analogous. We proceed by induction on the depth k of the step f⁻¹(t, π̂) →^c_{Rb,k} ⟨s⟩. Since the depth k = 0 is trivial, we consider the inductive case k > 0. Thus, there is a rule f⁻¹(r, β(y, w_n)) → ⟨s0⟩ ⇐ f_n⁻¹(t_n, w_n) ↠ ⟨s_n⟩, . . . , f_1⁻¹(t_1, w_1) ↠ ⟨s_1⟩ in Rb and a substitution θ such that f⁻¹(r, β(y, w_n))θ = f⁻¹(t, π̂), f_i⁻¹(t_i, w_i)θ →^c_{Rb,k_i} ⟨s_i⟩θ for i = n, . . . , 1, and ⟨s0⟩θ = ⟨s⟩. Assume that σ is the restriction of θ to the variables of the rule, excluding the fresh variables w_n, and that w_iθ = π̂_i for all i = 1, . . . , n. Therefore, f⁻¹(r, β(y, w_n))θ = f⁻¹(rσ, β(yσ, π̂1, . . . , π̂n)), f_i⁻¹(t_i, w_i)θ = f_i⁻¹(t_iσ, π̂_i) and ⟨s_i⟩θ = ⟨s_iσ⟩ for i = 1, . . . , n. Then, by definition of Rb,k_i, we have that k_i < k for all i = 1, . . . , n and, thus, by the induction hypothesis, we have ⟨t_iσ, π_i⟩ ↽^c_R ⟨f_i(s_iσ), [ ]⟩ for i = 1, . . . , n. Consider now the equivalent rule in R: β : f(s0) → r ⇐ f1(s1) ↠ t1, . . . , fn(sn) ↠ tn. Therefore, we have ⟨t, π⟩ ↽^c_R ⟨f(s), [ ]⟩,
σ′ = σ|_{(Var(s0)\Var(r,s1,...,sn,t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r,s_{i+1},...,s_n)},
and π = β(σ′, π1, . . . , πn). Finally, since {y} = (Var(s0)\Var(r, s1, . . . , sn, t_n)) ∪ ⋃_{i=1}^{n} Var(t_i)\Var(r, s_{i+1}, . . . , s_n), we can conclude that π̂ = π. □
5.3. Improving the transformation for injective functions
When a function is injective, one can expect the injectivization transformation to be unnecessary. This is not generally true, since some additional
syntactic conditions might also be required. Furthermore, depending on the
considered setting, it can be necessary to have an injective system, rather
than an injective function. Consider, e.g., the following simple TRS:
R = { f1 → f2 , f2 → 0, g1 → g2 , g2 → 0 }
Here, all functions are clearly injective. However, given a reduction like
f1 →R f2 →R 0, we do not know which rule should be applied to 0 in order to
go backwards until the initial term (actually, both the second and the fourth
rules are applicable in the reverse direction).
Luckily, in our context, the injectivity of a function suffices since reductions in pcDCTRSs are performed in a single step. Therefore, given a
reduction of the form f i (sn ) →R t, a backward computation will have the
form f −1 (t) →R hsn i, so that we know that only the inverse rules of f are
applicable.
Now, we present an improvement of the injectivization transformation
presented in Section 5.1 which has some similarities with that in [24]. Here,
we consider that the initial system is a TRS R since, to the best of our
knowledge, there is no reachability analysis defined for DCTRSs. In the
following, given a term s, we let
range(s) = {t | sσ →∗R t, σ : V 7→ T (C), and t ∈ T (C)}
i.e., range(s) returns a set with the constructor normal forms of all possible
ground constructor instances of s. Although computing this set is generally undecidable, there are some overapproximations based on the use of
tree automata (see, e.g., [15] and the most recent approach for innermost
rewriting [16]). Let us consider that rangeα (s) is such an approximation,
with rangeα (s) ⊇ range(s) for all terms s. Here, we are interested in determining when the right-hand sides, r1 and r2 , of two rules do not overlap,
i.e., range(r1 ) ∩ range(r2 ) = ∅. For this purpose, we will check whether
rangeα(r1) ∩ rangeα(r2) = ∅. Since finite tree automata are closed under intersection and the emptiness of a finite tree automaton is decidable, checking
the emptiness of rangeα (r1 )∩rangeα (r2 ) is decidable and can be used to safely
identify non-overlapping right-hand sides, i.e., if rangeα (r1 ) ∩ rangeα (r2 ) = ∅,
then r1 and r2 are definitely non-overlapping; otherwise, they may be overlapping or non-overlapping.
Now, we summarize our method to simplify some trace terms. Given
a constructor TRS R and a rule β : l → r ∈ R, we check the following
conditions:
1. the right-hand side r of the rule does not overlap with the right-hand
side of any other rule defining the same function;
2. the rule is non-erasing, i.e., Var(l) = Var(r);
3. the right-hand side r contains a single occurrence of a defined function
symbol, say f ∈ D.
If these conditions hold, then the rule has the form l → r[f(s)]_p with l and f(s) basic terms,¹³ and r[x]_p and s constructor terms, where x is a fresh variable. In this case, we can safely produce the following injective version:¹⁴
l^i → ⟨r[x]_p, w⟩ ⇐ f^i(s) ↠ ⟨x, w⟩
instead of
l^i → ⟨r[x]_p, β(w)⟩ ⇐ f^i(s) ↠ ⟨x, w⟩
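A rough sketch of how the three conditions above could be checked on the toy representation used in the earlier sketches is given below. The overlap test in particular is a crude placeholder of our own (it only compares the root symbols of the right-hand sides) for the range-based analysis described above, so the check is only illustrative.

```python
def defined_symbols(rules):
    """Root symbols of the left-hand sides (the defined functions)."""
    return {lhs[0] for _, lhs, _ in rules}

def count_defined(term, defs):
    if isinstance(term, str):
        return 0
    return (term[0] in defs) + sum(count_defined(a, defs) for a in term[1:])

def label_can_be_dropped(rule, rules):
    """Very rough check of the three conditions for an unconditional rule
    (label, lhs, rhs) with a non-variable right-hand side."""
    defs = defined_symbols(rules)
    _, lhs, rhs = rule
    # 1. rhs should not overlap with that of another rule defining the same f
    #    (crude root comparison; the text uses range over-approximations)
    if any(r[0] == rhs[0] for _, l, r in rules
           if l[0] == lhs[0] and r is not rhs):
        return False
    # 2. non-erasing ('variables' is the helper from the earlier sketch)
    if variables(lhs) != variables(rhs):
        return False
    # 3. exactly one occurrence of a defined symbol in the right-hand side
    return count_defined(rhs, defs) == 1

# The TRS of Example 37 below:
R = [("b1", ("f", ("s", "x")), ("g", "x")),
     ("b2", ("f", ("c", "x")), ("h", "x")),
     ("b3", ("g", "x"), ("s", "x")),
     ("b4", ("h", "x"), ("c", "x"))]
print([label_can_be_dropped(r, R) for r in R])   # [True, True, False, False]
```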
Let us illustrate this improved transformation with a couple of examples.
Example 37. Consider the following TRS:
R = { f(s(x)) → g(x), f(c(x)) → h(x), g(x) → s(x), h(x) → c(x) }
Here, it can easily be shown that rangeα(g(x)) ∩ rangeα(h(x)) = ∅, the two rules defining f are non-erasing, and both contain a single occurrence of a defined function symbol in the right-hand sides. Therefore, our improved injectivization applies and we get the following pcDCTRS Rf:
f^i(s(x)) → ⟨y, w⟩ ⇐ g^i(x) ↠ ⟨y, w⟩
f^i(c(x)) → ⟨y, w⟩ ⇐ h^i(x) ↠ ⟨y, w⟩
g^i(x) → ⟨s(x), β3⟩
h^i(x) → ⟨c(x), β4⟩
In contrast, the original injectivization transformation would return the following system:
f^i(s(x)) → ⟨y, β1(w)⟩ ⇐ g^i(x) ↠ ⟨y, w⟩
f^i(c(x)) → ⟨y, β2(w)⟩ ⇐ h^i(x) ↠ ⟨y, w⟩
g^i(x) → ⟨s(x), β3⟩
h^i(x) → ⟨c(x), β4⟩
13 Note that l is a basic term since we initially consider a constructor TRS and, thus, all left-hand sides are basic terms by definition.
14 Since l → r is non-erasing, the pcDCTRS rule l → r[x]_p ⇐ f(s) ↠ x is trivially non-erasing too (according to [32], i.e., (Var(l)\Var(r[x]_p, f(s), x)) ∪ Var(x)\Var(r[x]_p) = ∅) and, thus, no binding should be stored during the injectivization process.
Finally, the inverse system Rb obtained from Rf using the original transformation has the following form:
f⁻¹(y, w) → ⟨s(x)⟩ ⇐ g⁻¹(y, w) ↠ ⟨x⟩
f⁻¹(y, w) → ⟨c(x)⟩ ⇐ h⁻¹(y, w) ↠ ⟨x⟩
g⁻¹(s(x), β3) → ⟨x⟩
h⁻¹(c(x), β4) → ⟨x⟩
For instance, given the forward reduction f^i(s(0)) →_{Rf} ⟨s(0), β3⟩, we can build the corresponding backward reduction: f⁻¹(s(0), β3) →_{Rb} ⟨s(0)⟩.
Note, however, that the left-hand sides of f −1 overlap and we should
reduce the conditions in order to determine which rule to apply. Therefore,
in some cases, there is a trade-off between the size of the trace terms and the
complexity of the reduction steps.
The example above, though, only produces a rather limited improvement since the considered functions are not recursive. Our next example shows a much more significant improvement. Here, we consider the function zip (also used in [24] to illustrate the benefits of an injectivity analysis).
Example 38. Consider the following TRS R defining the function zip:
zip([ ], ys) → [ ]
zip(xs, [ ]) → [ ]
zip(x : xs, y : ys) → pair(x, y) : zip(xs, ys)
Here, since the third rule is non-erasing, its right-hand side contains a single occurrence of a defined function, zip, and it does not overlap with any other right-hand side, our improved injectivization applies and we get the following pcDCTRS Rf:
zip^i([ ], ys) → ⟨[ ], β1(ys)⟩
zip^i(xs, [ ]) → ⟨[ ], β2(xs)⟩
zip^i(x : xs, y : ys) → ⟨pair(x, y) : zs, w⟩ ⇐ zip^i(xs, ys) ↠ ⟨zs, w⟩
In contrast, the original injectivization transformation would return the following system R′f:
zip^i([ ], ys) → ⟨[ ], β1(ys)⟩
zip^i(xs, [ ]) → ⟨[ ], β2(xs)⟩
zip^i(x : xs, y : ys) → ⟨pair(x, y) : zs, β3(w)⟩ ⇐ zip^i(xs, ys) ↠ ⟨zs, w⟩
It might seem a small difference, but if we call zip^i with two lists of n elements, the system R′f would build a trace term of the form β3(. . . β3(β1(. . .)) . . .) with n nested constructors β3, while Rf would just build the trace term β1(. . .). For large values of n, this is a significant improvement in memory usage.
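The difference can be observed with a small executable sketch. The two Python functions below are our own encodings of the two injectivized systems (they are not the rewrite systems themselves); Python lists play the role of constructor lists and nested tuples play the role of trace terms.

```python
def zip_i(xs, ys):
    """Improved injectivization: the trace only records the base case."""
    if not xs:
        return [], ("b1", ys)
    if not ys:
        return [], ("b2", xs)
    zs, w = zip_i(xs[1:], ys[1:])
    return [(xs[0], ys[0])] + zs, w          # no beta_3 wrapper needed

def zip_i_naive(xs, ys):
    """Original injectivization: one beta_3 per recursive call."""
    if not xs:
        return [], ("b1", ys)
    if not ys:
        return [], ("b2", xs)
    zs, w = zip_i_naive(xs[1:], ys[1:])
    return [(xs[0], ys[0])] + zs, ("b3", w)

_, trace = zip_i([1, 2, 3], ["a", "b", "c"])
print(trace)         # ("b1", [])                       -- constant size
_, trace = zip_i_naive([1, 2, 3], ["a", "b", "c"])
print(trace)         # ("b3", ("b3", ("b3", ("b1", [])))) -- grows with n
```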
6. Bidirectional Program Transformation
We illustrate a practical application of our reversibilization technique in
the context of bidirectional program transformation (see [10] for a survey).
In particular, we consider the so-called view-update problem. Here, we have
a data structure (e.g., a database) called the source, which is transformed to
another data structure, called the view. Typically, we have a view function,
view: Source → View that takes the source and returns the corresponding
view, together with an update function, upd: View × Source → Source that
propagates the changes in a modified view to the original source. Two basic
properties that these functions should satisfy in order to be well-behaved are
the following [13]:
∀s ∈ Source, ∀v ∈ View : view(upd(v, s)) = v
∀s ∈ Source: upd(view(s), s) = s
Bidirectionalization (first proposed in the database community [5]) basically
consists in, given a view function, “bidirectionalize” it in order to derive
an appropriate update function. For this purpose, first, a view complement
function is usually defined, say viewc , so that the tupled function
view △ viewc: Source → View × Comp
becomes injective. Therefore, the update function can be defined as follows:
upd(v, s) = (view △ viewc )−1 (v, viewc (s))
This approach has been applied to bidirectionalize view functions in a functional language in [24].
In the following, we apply our injectivization and inversion transformations in order to produce a bidirectionalization transformation that may be
useful in the context of the view-update problem (with some limitations).
Let us assume that we have a view function, view, that takes a source and returns the corresponding view, and which is defined by means of a pcDCTRS.
Following our approach, given the original program R, we produce an injectivized version Rf and the corresponding inverse Rb . Therefore, in principle,
one can use Rf ∪ Rb , which will include the functions viewi and view−1, to
define an update function as follows:
upd(v, s) → s′ ⇐ viewi (s) ։ hv ′ , πi, view−1(v, π) ։ hs′ i
where s is the original source, v is the updated view, and s′ , the returned
value, is the corresponding updated source. Note that, in our context, the
function viewi is somehow equivalent to view △ viewc above.
Let us now illustrate the bidirectionalization process with an example. Consider a particular data structure, a list of records of the form r(t, v) where t is the type of the record (e.g., book, dvd, pen, etc.) and v is its price tag. The following system defines a view function that takes a type and a list of records, and returns a list with the price tags of the records of the given type:¹⁵
view(t, nil) → nil
view(t, r(t′, v) : rs) → val(r(t′, v)) : view(t, rs) ⇐ eq(t, t′) ↠ true
view(t, r(t′, v) : rs) → view(t, rs) ⇐ eq(t, t′) ↠ false
eq(book, book) → true        eq(dvd, dvd) → true
eq(book, dvd) → false        eq(dvd, book) → false
val(r(t, v)) → v
However, this system is not a pcDCTRS. Here, we use a flattening transformation to produce the following (labeled) pcDCTRS R which is equivalent for constructor derivations:
β1 : view(t, nil) → nil
β2 : view(t, r(t′, v) : rs) → p : r ⇐ eq(t, t′) ↠ true, val(r(t′, v)) ↠ p, view(t, rs) ↠ r
β3 : view(t, r(t′, v) : rs) → r ⇐ eq(t, t′) ↠ false, view(t, rs) ↠ r
β4 : eq(book, book) → true        β5 : eq(dvd, dvd) → true
β6 : eq(book, dvd) → false        β7 : eq(dvd, book) → false
β8 : val(r(t, v)) → v
Now, we can apply our injectivization transformation which returns the following pcDCTRS Rf = I(R):
view^i(t, nil) → ⟨nil, β1(t)⟩
view^i(t, r(t′, v) : rs) → ⟨p : r, β2(w1, w2, w3)⟩
    ⇐ eq^i(t, t′) ↠ ⟨true, w1⟩, val^i(r(t′, v)) ↠ ⟨p, w2⟩, view^i(t, rs) ↠ ⟨r, w3⟩
view^i(t, r(t′, v) : rs) → ⟨r, β3(v, w1, w2)⟩
    ⇐ eq^i(t, t′) ↠ ⟨false, w1⟩, view^i(t, rs) ↠ ⟨r, w2⟩
eq^i(book, book) → ⟨true, β4⟩        eq^i(dvd, dvd) → ⟨true, β5⟩
eq^i(book, dvd) → ⟨false, β6⟩        eq^i(dvd, book) → ⟨false, β7⟩
val^i(r(t, v)) → ⟨v, β8(t)⟩
15 For simplicity, we restrict the record types to only book and dvd.
Finally, inversion returns the following pcDCTRS Rb = I⁻¹(Rf):
view⁻¹(nil, β1(t)) → ⟨t, nil⟩
view⁻¹(p : r, β2(w1, w2, w3)) → ⟨t, r(t′, v) : rs⟩
    ⇐ eq⁻¹(true, w1) ↠ ⟨t, t′⟩, val⁻¹(p, w2) ↠ ⟨r(t′, v)⟩, view⁻¹(r, w3) ↠ ⟨t, rs⟩
view⁻¹(r, β3(v, w1, w2)) → ⟨t, r(t′, v) : rs⟩
    ⇐ eq⁻¹(false, w1) ↠ ⟨t, t′⟩, view⁻¹(r, w2) ↠ ⟨t, rs⟩
eq⁻¹(true, β4) → ⟨book, book⟩        eq⁻¹(true, β5) → ⟨dvd, dvd⟩
eq⁻¹(false, β6) → ⟨book, dvd⟩        eq⁻¹(false, β7) → ⟨dvd, book⟩
val⁻¹(v, β8(t)) → ⟨r(t, v)⟩
For instance, the term view(book, [r(book, 12), r(dvd, 24)]) reduces to [12] in the original system R. Given a modified view, e.g., [15], we can compute the modified source using function upd above:¹⁶
upd([15], [r(book, 12), r(dvd, 24)])
Here, we have the following subcomputations:
view^i(book, [r(book, 12), r(dvd, 24)]) →_{Rf} ⟨[12], β2(β4, β8(book), β3(24, β6, β1(book)))⟩
view⁻¹([15], β2(β4, β8(book), β3(24, β6, β1(book)))) →_{Rb} ⟨book, [r(book, 15), r(dvd, 24)]⟩
Thus upd returns the updated source [r(book, 15), r(dvd, 24)], as expected.
16 Note that, in this case, the function view requires not only the source but also the additional parameter book.
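A small Python sketch of this bidirectionalization follows, assuming records r(t, v) are modeled as Python pairs (t, v) and trace terms as nested tuples. The names view_i, view_inv and upd are ours and only mirror view^i, view⁻¹ and upd above; the sketch is illustrative, not the rewrite systems themselves.

```python
# Inverses of the eq rules: label -> the pair of types that produced it.
EQ_INV = {"b4": ("book", "book"), "b5": ("dvd", "dvd"),
          "b6": ("book", "dvd"),  "b7": ("dvd", "book")}

def view_i(t, rs):
    """Injectivized view: returns the view together with a trace term."""
    if not rs:
        return [], ("b1", t)
    (t2, v), rest = rs[0], rs[1:]
    r, w = view_i(t, rest)
    if t == t2:                                   # beta_2 branch
        eqw = "b4" if t == "book" else "b5"
        return [v] + r, ("b2", eqw, ("b8", t2), w)
    eqw = "b6" if t == "book" else "b7"           # beta_3 branch
    return r, ("b3", v, eqw, w)

def view_inv(view, trace):
    """Inverse view: rebuilds (type, source) from a modified view and a trace."""
    if trace[0] == "b1":
        return trace[1], []
    if trace[0] == "b2":
        _, eqw, valw, w = trace                   # valw also records the type
        _, t2 = EQ_INV[eqw]
        t, rest = view_inv(view[1:], w)
        return t, [(t2, view[0])] + rest
    _, v, eqw, w = trace                          # "b3" case
    _, t2 = EQ_INV[eqw]
    t, rest = view_inv(view, w)
    return t, [(t2, v)] + rest

def upd(v, s, t="book"):
    _, trace = view_i(t, s)
    _, s2 = view_inv(v, trace)
    return s2

print(upd([15], [("book", 12), ("dvd", 24)]))     # [('book', 15), ('dvd', 24)]
```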
We note that the considered example cannot be transformed using the technique in [24], the closest to our approach, since the right-hand sides of some rules contain functions which are not treeless.¹⁷ Nevertheless, one could consider a transformation from pcDCTRSs to functional programs with treeless functions so that the technique in [24] becomes applicable.
17 A call is treeless if it has the form f(x1, . . . , xn) and x1, . . . , xn are different variables.
Our approach can solve a view-update problem as long as the view function can be encoded in a pcDCTRS. When this is the case, the results from
Section 5 guarantee that function upd is well defined. Formally analyzing
the class of view functions that can be represented with a pcDCTRS is an
interesting topic for further research.
7. Related Work
There is no widely accepted notion of reversible computing. In this work,
we have considered one of its most popular definitions, according to which a
computation principle is reversible if there is a method to undo a (forward)
computation. Moreover, we expect to get back to an exact past state of the
computation. This is often referred to as full reversibility.
As we have mentioned in the introduction, some of the most promising
applications of reversibility include cellular automata [28], bidirectional program transformation [24], already discussed in Section 6, reversible debugging
[17], where the ability to go both forward and backward when seeking the
cause of an error can be very useful for the programmer, parallel discrete
event simulation [34], where reversibility is used to undo the effects of speculative computations made on a wrong assumption, quantum computing [39],
where all computations should be reversible, and so forth. The interested
reader can find detailed surveys in the state of the art reports of the different
working groups of COST Action IC1405 on Reversible Computation [20].
Intuitively speaking, there are two broad approaches to reversibility from
a programming language perspective:
Reversible programming languages. In this case, all constructs of the programming language are reversible. One of the most popular languages
within the first approach is the reversible (imperative) language Janus
[23]. The language was recently rediscovered [42, 41, 43] and has since
been formalized and further developed.
Irreversible programming languages and Landauer’s embedding. Alternatively,
one can consider an irreversible programming language, and enhance
the states with some additional information (typically, the history of
the computation so far) so that computations become reversible. This
is called Landauer’s embedding.
In this work, we consider reversibility in the context of term rewriting. To the best of our knowledge, we have presented the first approach to reversibility in term rewriting. The closest related approach was introduced by Abramsky in the context of pattern matching automata [2], though his developments could easily be applied to rewrite systems as well. In Abramsky's approach, biorthogonality was required to ensure reversibility, which would be a very significant restriction for term rewriting systems. Basically, biorthogonality requires that, for every pair of (different) rewrite rules l → r and l′ → r′, l and l′ do not overlap (roughly, they do not unify) and r and r′ do not overlap either. Trivially, the functions of a biorthogonal system are injective and, thus, computations are reversible without the need of a Landauer embedding. Therefore, Abramsky's work is aimed at defining a reversible language, in contrast to our approach that is based on defining a Landauer embedding for standard term rewriting and a general class of rewrite systems.
Defining a Landauer embedding in order to make a computation mechanism reversible has been applied in different contexts and computational
models, e.g., a probabilistic guarded command language [44], a low level
virtual machine [35], the call-by-name lambda calculus [19, 21], cellular automata [38, 27], combinatory logic [11], a flowchart language [41], etc.
In the context of declarative languages, we find the work by Mu et al. [29],
where a relational reversible language is presented (in the context of bidirectional programming). A similar approach was then introduced by Matsuda et
al. [24, 25] in the context of functional programs and bidirectional transformation. The functional programs considered in [24] can be seen as linear and
right-treeless 18 constructor TRSs. The class of functional programs is more
general in [25], which would correspond to left-linear, right-treeless TRSs.
The reversibilization technique of [24, 25] includes both an injectivization
stage (by introducing a view complement function) and an inversion stage.
These methods are closely related to the transformations of injectivization
and inversion that we have presented in Section 5, although we developed
them from a rather different starting point. Moreover, their methods for
injectivization and inversion consider a more restricted class of systems than
those considered in this paper. On the other hand, they apply a number
of analyses to improve the result, which explains the smaller traces in their
approach. All in all, we consider that our approach gives better insights to
understand the need for some of the requirements of the program transformations and the class of considered programs. For instance, most of our
requirements come from the need to remove programs positions from the
traces, as shown in Section 4.
Finally, [37] considers the reversible language RFUN. Similarly to Janus,
computations in RFUN are reversible without the need of a Landauer embedding. The paper also presents a transformation from a simple (irreversible)
functional language, FUN, to RFUN, in order to highlight how irreversibilities are handled in RFUN. The transformation has some similarities with
both the approach of [24] and our improved transformation in Section 5.3;
on the other hand, though, [37] also applies the Bennett trick [6] in order to
avoid some unnecessary information.
18 There are no nested defined symbols in the right-hand sides, and, moreover, any term rooted by a defined function in the right-hand sides can only take different variables as its proper subterms.
8. Discussion and Future Work
In this paper, we have introduced a reversible extension of term rewriting.
In order to keep our approach as general as possible, we have initially considered DCTRSs as input systems, and proved the soundness and reversibility
of our extension of rewriting. Then, in order to introduce a reversibilization
transformation for these systems, we have also presented a transformation
from DCTRSs to pure constructor systems (pcDCTRSs) which is correct
for constructor reduction. A further improvement is presented for injective
functions, which may have a significant impact in memory usage in some
cases. Finally, we have successfully applied our approach in the context of
bidirectional program transformation.
We have developed a prototype implementation of the reversibilization
transformations introduced in Section 5. The tool can read an input TRS
file (format .trs [1]) and then it applies in a sequential way the following
transformations: flattening, simplification of constructor conditions, injectivization, and inversion. The tool prints out the CTRSs obtained at each
transformation step. It is publicly available through a web interface from
http://kaz.dsic.upv.es/rev-rewriting.html, where we have included a
number of examples to easily test the tool.
As for future work, we plan to investigate new methods to further reduce the size of the traces. In particular, we find it interesting to define a
reachability analysis for DCTRSs. A reachability analysis for CTRSs without extra-variables (1-CTRSs) can be found in [12], but the extension to
deal with extra-variables in DCTRSs (since a DCTRS is a particular case
of 3-CTRS) seems challenging. Furthermore, as mentioned in the paper, a
completion procedure to add default cases to some functions (as suggested in
Section 5.1) may help to broaden the applicability of the technique and avoid
the restriction to constructor reduction. Finally, our injectivization and inversion transformations are correct w.r.t. innermost reduction. Extending
our results to a lazy strategy is also an interesting topic for further research.
Acknowledgements
We thank the anonymous reviewers for their useful comments and suggestions to improve this paper.
References
[1] Annual international termination competition. Available from URL:
http://www.termination-portal.org/wiki/Termination Competition.
[2] S. Abramsky. A structural approach to reversible computation. Theoretical
Computer Science, 347(3):441–464, 2005.
[3] J. M. Almendros-Jiménez and G. Vidal. Automatic partial inversion of inductively sequential functions. In Z. Horváth, V. Zsók, and A. Butterfield,
editors, Implementation and Application of Functional Languages, 18th International Symposium (IFL 2006), Revised Selected Papers, volume 4449 of
Lecture Notes in Computer Science, pages 253–270. Springer, 2007.
[4] F. Baader and T. Nipkow. Term Rewriting and All That. Cambridge University Press, 1998.
[5] F. Bancilhon and N. Spyratos. Update semantics of relational views. ACM
Transactions on Database Systems, 6(4):557–575, 1981.
[6] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research
and Development, 17:525–532, 1973.
[7] C. H. Bennett. Notes on the history of reversible computation. IBM Journal
of Research and Development, 44(1):270–278, 2000.
[8] J. Bergstra and J. Klop. Conditional Rewrite Rules: confluence and termination. Journal of Computer and System Sciences, 32:323–362, 1986.
[9] P. Crescenzi and C. H. Papadimitriou. Reversible simulation of spacebounded computations. Theoretical Computer Science, 143(1):159–165, 1995.
[10] K. Czarnecki, J. N. Foster, Z. Hu, R. Lämmel, A. Schürr, and J. F. Terwilliger.
Bidirectional transformations: A cross-discipline perspective. In R. F. Paige,
editor, Proc. of the 2nd Int’l Conf. on Theory and Practice of Model Transformations (ICMT 2009), volume 5563 of Lecture Notes in Computer Science,
pages 260–283. Springer, 2009.
[11] A. Di Pierro, C. Hankin, and H. Wiklicky. Reversible combinatory logic.
Mathematical Structures in Computer Science, 16(4):621–637, 2006.
[12] G. Feuillade and T. Genet. Reachability in Conditional Term Rewriting Systems. Electronic Notes in Theoretical Computer Science, 86(1):133–146, 2003.
[13] J. N. Foster, M. B. Greenwald, J. T. Moore, B. C. Pierce, and A. Schmitt.
Combinators for bidirectional tree transformations: A linguistic approach to
the view-update problem. ACM Transactions on Programming Languages
and Systems, 29(3):17, 2007.
[14] M. P. Frank. Introduction to reversible computing: motivation, progress,
and challenges. In N. Bagherzadeh, M. Valero, and A. Ramı́rez, editors,
Proceedings of the Second Conference on Computing Frontiers, pages 385–
390. ACM, 2005.
[15] T. Genet. Decidable Approximations of Sets of Descendants and Sets of Normal Forms. In T. Nipkow, editor, Proc. of the 9th International Conference
on Rewriting Techniques and Applications (RTA’98), volume 1379 of Lecture
Notes in Computer Science, pages 151–165. Springer, 1998.
[16] T. Genet and Y. Salmon. Reachability Analysis of Innermost Rewriting. In
M. Fernández, editor, Proc. of the 26th International Conference on Rewriting
Techniques and Applications (RTA’15), volume 36 of LIPIcs, pages 177–193.
Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2015.
[17] E. Giachino, I. Lanese, and C. A. Mezzina. Causal-consistent reversible
debugging. In S. Gnesi and A. Rensink, editors, Proc. of the 17th International Conference on Fundamental Approaches to Software Engineering
(FASE 2014), volume 8411 of Lecture Notes in Computer Science, pages 370–
384. Springer, 2014.
[18] N. Hirokawa and G. Moser. Automated Complexity Analysis Based on the
Dependency Pair Method. In A. Armando, P. Baumgartner, and G. Dowek,
editors, Proc. of IJCAR 2008, volume 5195 of Lecture Notes in Computer
Science, pages 364–379. Springer, 2008.
[19] L. Huelsbergen. A logically reversible evaluator for the call-by-name lambda
calculus. In T. Toffoli and M. Biafore, editors, Proc. of PhysComp96, pages
159–167. New England Complex Systems Institute, 1996.
[20] COST Action IC1405 on Reversible Computation - extending horizons of
computing. URL: http://revcomp.eu/.
[21] W. E. Kluge. A reversible SE(M)CD machine. In P. W. M. Koopman and
C. Clack, editors, Proc. of the 11th International Workshop on the Implementation of Functional Languages, IFL’99. Selected Papers, volume 1868 of
Lecture Notes in Computer Science, pages 95–113. Springer, 2000.
[22] R. Landauer. Irreversibility and heat generation in the computing process.
IBM Journal of Research and Development, 5:183–191, 1961.
[23] C. Lutz and H. Derby. Janus: A time-reversible language, 1986. A letter to
R. Landauer. Available from URL http://tetsuo.jp/ref/janus.pdf.
[24] K. Matsuda, Z. Hu, K. Nakano, M. Hamana, and M. Takeichi. Bidirectionalization transformation based on automatic derivation of view complement
functions. In R. Hinze and N. Ramsey, editors, Proc. of the 12th ACM SIGPLAN International Conference on Functional Programming, ICFP 2007,
pages 47–58. ACM, 2007.
[25] K. Matsuda, Z. Hu, K. Nakano, M. Hamana, and M. Takeichi. Bidirectionalizing programs with duplication through complementary function derivation.
Computer Software, 26(2):56–75, 2009. In Japanese.
[26] A. Middeldorp and E. Hamoen. Completeness results for basic narrowing.
Applicable Algebra in Engineering, Communication and Computing, 5:213–
253, 1994.
[27] K. Morita. Reversible simulation of one-dimensional irreversible cellular automata. Theoretical Computer Science, 148(1):157–163, 1995.
[28] K. Morita. Computation in reversible cellular automata. International Journal of General Systems, 41(6):569–581, 2012.
[29] S. Mu, Z. Hu, and M. Takeichi. An injective language for reversible computation. In D. Kozen and C. Shankland, editors, Proc. of the 7th International
Conference on Mathematics of Program Construction (MPC 2004), volume
3125 of Lecture Notes in Computer Science, pages 289–313. Springer, 2004.
[30] M. Nagashima, M. Sakai, and T. Sakabe. Determinization of conditional term
rewriting systems. Theoretical Computer Science, 464:72–89, 2012.
[31] N. Nishida, A. Palacios, and G. Vidal. Reversible term rewriting. In D. Kesner
and B. Pientka, editors, Proc. of the 1st International Conference on Formal
Structures for Computation and Deduction (FSCD’16), volume 52 of LIPIcs,
pages 28:1–28:18. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.
[32] N. Nishida, M. Sakai, and T. Sakabe. Soundness of unravelings for conditional term rewriting systems via ultra-properties related to linearity. Logical
Methods in Computer Science, 8(3-4):1–49, Aug. 2012.
[33] N. Nishida and G. Vidal. Program inversion for tail recursive functions. In
M. Schmidt-Schauß, editor, Proceedings of the 22nd International Conference
on Rewriting Techniques and Applications (RTA 2011), volume 10 of LIPIcs,
pages 283–298. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2011.
[34] M. Schordan, D. R. Jefferson, P. D. B. Jr., T. Oppelstrup, and D. J. Quinlan.
Reverse code generation for parallel discrete event simulation. In J. Krivine
and J. Stefani, editors, Proc. of the 7th International Conference on Reversible
Computation (RC 2015), volume 9138 of Lecture Notes in Computer Science,
pages 95–110. Springer, 2015.
[35] B. Stoddart, R. Lynas, and F. Zeyda. A virtual machine for supporting reversible probabilistic guarded command languages. Electronic Notes in Theoretical Computer Science, 253(6):33–56, 2010.
[36] Terese. Term Rewriting Systems, volume 55 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 2003.
[37] M. K. Thomsen and H. B. Axelsen. Interpretation and programming of the
reversible functional language RFUN. In R. Lämmel, editor, Proc. of the 27th
Symposium on the Implementation and Application of Functional Programming Languages (IFL’15), pages 8:1–8:13. ACM, 2015.
[38] T. Toffoli. Computation and construction universality of reversible cellular
automata. Journal of Computer and System Sciences, 15(2):213–231, 1977.
[39] T. Yamakami. One-way reversible and quantum finite automata with advice.
Information and Computation, 239:122–148, 2014.
[40] T. Yokoyama. Reversible computation and reversible programming languages.
Electronic Notes in Theoretical Computer Science, 253(6):71–81, 2010.
[41] T. Yokoyama, H. Axelsen, and R. Glück. Fundamentals of reversible flowchart
languages. Theoretical Computer Science, 611:87–115, 2016.
[42] T. Yokoyama, H. B. Axelsen, and R. Glück. Reversible flowchart languages
and the structured reversible program theorem. In Proc. of the 35th International Colloquium on Automata, Languages and Programming (ICALP 2008),
volume 5126 of Lecture Notes in Computer Science, pages 258–270. Springer,
2008.
[43] T. Yokoyama and R. Glück. A reversible programming language and its
invertible self-interpreter. In G. Ramalingam and E. Visser, editors, Proc. of
the 2007 ACM SIGPLAN Workshop on Partial Evaluation and Semanticsbased Program Manipulation (PEPM 2007), pages 144–153. ACM, 2007.
[44] P. Zuliani. Logical reversibility. IBM Journal of Research and Development,
45(6):807–818, 2001.
Transitivity of Commutativity for Second-Order Linear Time-Varying Analog Systems
Mehmet Emir KOKSAL
Department of Mathematics, Ondokuz Mayis University, 55139 Atakum, Samsun, Turkey
[email protected]
Abstract: After reviewing commutativity of second-order linear time-varying analog systems, the inverse commutativity conditions are derived for these systems by considering non-zero initial conditions. On the basis of these conditions, the transitivity property is studied for second-order linear time-varying unrelaxed analog systems. It is proven that this property is always valid for such systems when their initial states are zero; when non-zero initial states are present, it is shown that the validity of transitivity does not require any further conditions and it is still valid. Throughout the study it is assumed that the subsystems considered cannot be obtained from each other by any feed-forward and feedback structure. The results are well validated by MATLAB simulations.
Keywords: Differential equations, Initial conditions, Linear time-varying systems,
Commutativity, Transitivity
AMS Subject Classification: 93C05, 93C15, 93A30
I. Introduction
Second-order differential equations originate in electromagnetic, electrodynamics,
transmission lines and communication, circuit and system theory, wave motion and
distribution, and in many fields of electrics-electronics engineering. They play a prominent role for modelling problems occurring in electrical systems, fluid systems, thermal systems
and control systems. Especially, they are used as a powerful tool for modelling, analyzing and solving problems in classical control theory, modern control theory, robust control theory and automatic control, which is essential in any field of engineering and science, and for discussing the results that turn up at the end of the analysis for the resolution of natural problems. For example, they are used in cascade connected and feedback systems to design
higher order composite systems for achieving several beneficial properties such as
controllability, sensitivity, robustness, and design flexibility. When the cascade connection
which is an old but still up-to-date trend in system design [1-4] is considered, the commutativity concept plays an important role in improving different system performances.
On the other hand, since the commutativity of linear time-invariant relaxed systems is
straightforward and time-varying systems have found a great deal of applications recently
[5-10], the scope of this paper is focused on commutativity of linear time-varying systems
only.
When two systems 𝐴 and 𝐵 are interconnected one after the other so that the output of the former acts as the input of the latter, these systems are said to be connected in cascade [11]. If the order of connection in the sequence does not affect the input-output relation of the cascade connection, then these systems are commutative [12].
Figure 1: Cascade connection of the differential system 𝐴 and 𝐵
The tutorial papers [13, 14] cover almost all the subjects scattered in a great deal of
literature about commutativity. Some of the important results about the commutativity are
summarized in the sequel superficially.
J. E. Marshall has proven that "for commutativity, either both systems are time-invariant or both systems are time-varying" [12]. After many contributions appeared as conference presentations and a few short papers focusing on special cases such as first-, second-, third-, and fourth-order systems, the exhaustive journal paper of M. Koksal introduced the basic fundamentals of the subject [13]. Another joint work by the same author has presented explicit commutativity conditions of fifth-order systems in addition to reviews of
commutativity of systems with non-zero initial conditions, commutativity and system
disturbance, commutativity of Euler systems [14].
There is some literature on the commutativity of discrete-time systems as well [15,
16]. And the research of commutativity is continuing on both analog and digital systems in
[17, 18]. In [17], all the second-order commutative pairs of a first-order linear time-varying
analogue systems are derived. The decomposition of a second-order linear time-varying system into its first-order commutative pairs is studied in [18]. This is important for the cascade realization of second-order linear time-varying systems.
In [19], the inverse conditions expressed in terms of the coefficients of the
differential equation describing system 𝑩 have been derived for the case of zero initial
conditions and shown to be of the same form of the original equations appearing in the
literature.
The transitivity property of commutativity is first introduced and some general conditions for it are presented in [20], where transitivity of commutativity is fully investigated for first-order systems. It is shown that commutativity of first-order linear time-varying systems with and without initial conditions always has the transitivity property.
Explicit commutativity conditions for second-order linear time-varying systems have been studied in [21]. On the other hand, no special research, as in [20] for first-order systems, has been done on the transitivity property of commutativity for second-order systems. This paper fills this vacancy in the literature on the basis of the work in [21].
In this paper, the transitivity property of second-order linear time-varying analog systems with and without initial conditions is studied. Section II is devoted to the explicit commutativity conditions for such systems. Section III presents preliminaries, namely the inverse commutativity conditions in the case of non-zero initial states, which are used in the proofs of the subsequent section. Section IV deals with the transitivity property with and without initial conditions. After giving an example in Section V, the paper ends with conclusions, which appear in Section VI.
II. Commutativity Conditions for Second-Order Systems
In this section, the commutativity conditions for two second-order linear time-varying
analog systems are reviewed [21].
Let 𝐴 be the system described by the second-order linear time-varying differential equation
𝑎2 (𝑡)𝑦̈𝐴 (𝑡) + 𝑎1 (𝑡)𝑦̇𝐴 (𝑡) + 𝑎0 (𝑡)𝑦𝐴 (𝑡) = 𝑥𝐴 (𝑡); 𝑡 ≥ 𝑡0
(1a)
with the initial conditions at the initial time 𝑡0 ∈ 𝑅
𝑦𝐴 (𝑡0 ), 𝑦̇𝐴 (𝑡0 ).
(1b)
Where the single (double) dot on the top indicates the first (second) order derivative with
respect to time 𝑡 ∈ 𝑅; 𝑥𝐴 (𝑡) and 𝑦𝐴 (𝑡) are the input and output of the system, respectively.
Since the system is second-order
𝑎2 (𝑡) ≢ 0.
(1c)
Further, ä2(t), ȧ1(t) and a0(t) are well-defined continuous functions, that is, ä2(t), ȧ1(t), a0(t) ∈ C[t0, ∞); hence so are ȧ2(t), a2(t), a1(t).
It is true that System 𝐴 has a unique continuous solution 𝑦𝐴 (𝑡) with its first and secondorder derivatives for any continuous input function 𝑥𝐴 (𝑡) [22].
Let 𝐵 be another second-order linear time varying system described in a similar way to 𝐴.
Hence it is described by
𝑏2 (𝑡)𝑦̈ 𝐵 (𝑡) + 𝑏1 (𝑡)𝑦̇ 𝐵 (𝑡) + 𝑏0 (𝑡)𝑦𝐵 (𝑡) = 𝑥𝐵 (𝑡), 𝑡 ≥ 𝑡0 ,
(2a)
𝑦𝐵 (𝑡0 ), 𝑦̇ 𝐵 (𝑡𝑜 ),
(2b)
𝑏2 (𝑡) ≢ 0.
(2c)
Where the coefficients 𝑏2 , 𝑏1 , 𝑏0 satisfy the same properties satisfied by 𝑎2 , 𝑎1 , 𝑎0 .
For the commutativity of 𝐵 with 𝐴, it is well known that
I) i) the coefficients of B must be expressed in terms of those of A through
[b2(t); b1(t); b0(t)] = [a2(t) 0 0; a1(t) a2^0.5(t) 0; a0(t) fA(t) 1] [k2; k1; k0]   (3a)
(column vectors and matrix rows are written between semicolons), where
fA(t) = a2^-0.5(t)[2a1(t) − ȧ2(t)]/4,   (3b)
and k2, k1, k0 are some constants with k2 ≠ 0 since B is of second order. Further, it is assumed that B cannot be obtained from A by constant feed-forward and feedback path gains; hence k1 ≠ 0 [14, 21].
ii) a0 − fA^2 − a2^0.5 ḟA = A0, ∀t ≥ t0,   (3c)
where A0 is a constant.
When the initial conditions in (1b) and (2b) are zero, conditions i) and ii) are necessary and sufficient for the commutativity of A and B under the mentioned conditions (k2 ≠ 0, k1 ≠ 0).
For commutativity with non-zero initial conditions as well, the following additional conditions are required:
II) i) yB(t0) = yA(t0) ≠ 0,   (4a)
       ẏB(t0) = ẏA(t0);   (4b)
   ii) (k2 + k0 − 1)^2 = k1^2 (1 − A0),   (4c)
  iii) ẏB(t0) = −a2^-0.5(t0) [(k2 + k0 − 1)/k1 + fA(t0)] yB(t0);   (4d)
which are necessary and sufficient together with Eqs. (3a,b,c) for commutativity of A and B under non-zero initial conditions. Note that by the non-zero initial condition it is meant "general values of initial conditions", so one or two of them may be zero in special cases. In fact, if the output y(t0) is zero, its derivative needs to be zero due to Eq. (4d); if not, its derivative may or may not be zero depending on whether the term in brackets in (4d) is zero or not [21].
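As a sanity check, conditions (3a)-(3c) can be verified numerically. The sketch below uses a hypothetical Euler-type system A (a2 = t^2, a1 = t, a0 = 2, so that fA = 0 and A0 = 2) and arbitrary constants k2, k1, k0; the coefficient choices are our own illustration and not taken from the paper.

```python
import numpy as np

t = np.linspace(1.0, 5.0, 401)
a2, a1, a0 = t**2, t, 2.0 * np.ones_like(t)       # assumed example system A
k2, k1, k0 = 3.0, 1.5, -0.5                       # arbitrary constants

fA = a2**-0.5 * (2*a1 - np.gradient(a2, t)) / 4   # Eq. (3b)
A0 = a0 - fA**2 - a2**0.5 * np.gradient(fA, t)    # Eq. (3c)

# Coefficients of a commutative pair B obtained from Eq. (3a)
b2 = k2 * a2
b1 = k2 * a1 + k1 * a2**0.5
b0 = k2 * a0 + k1 * fA + k0
fB = b2**-0.5 * (2*b1 - np.gradient(b2, t)) / 4
B0 = b0 - fB**2 - b2**0.5 * np.gradient(fB, t)

# Away from the grid end points (where np.gradient is less accurate), A0 and
# B0 are constant, and B0 = k2*A0 + k0 - k1**2/(4*k2) as in Eq. (12a) below.
print(np.ptp(A0[2:-2]), np.ptp(B0[2:-2]))
print(B0[200], k2 * A0[200] + k0 - k1**2 / (4 * k2))
```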
III. Inverse Commutativity Conditions for Second Order Unrelaxed Systems
For the proofs of the transitivity theorems of the following section, we need some formulas which express the inverse commutativity conditions. Although these conditions have been partially treated in [21], initial conditions are all assumed to be zero there; for the sake of completeness, we express the general results by Lemma 1 and exhibit the complete inverse commutativity conditions for unrelaxed second-order linear time-varying systems.
In the previous section, the necessary and sufficient conditions for the commutativity of A and B are expressed dominantly by answering the question "what are the conditions that must be satisfied by B to be commutative with A?" Answering the same question with the roles of A and B interchanged constitutes the inverse commutativity conditions. We express the results by the following lemma.
Lemma 1. The necessary and sufficient conditions given in Eqs. (3) and (4) for the commutativity of A and B can be expressed by the following formulas:
I. i) [a2(t); a1(t); a0(t)] = [b2(t) 0 0; b1(t) b2^0.5(t) 0; b0(t) fB(t) 1] [l2; l1; l0],   (5a)
where
fB(t) = b2^-0.5(t)[2b1(t) − ḃ2(t)]/4.   (5b)
   ii) b0 − fB^2 − b2^0.5 ḟB = B0, ∀t ≥ t0.   (5c)
II. i) yA(t0) = yB(t0),   (6a)
       ẏA(t0) = ẏB(t0);   (6b)
    ii) (l2 + l0 − 1)^2 = l1^2 (1 − B0),   (6c)
        ẏA(t0) = −b2^-0.5(t0) [(l2 + l0 − 1)/l1 + fB(t0)] yA(t0).   (6d)
For the proof of the lemma, we solve Eq. (3a) for a2, a1, a0 in terms of b2, b1, b0 as follows:
a2(t) = b2(t)/k2,   (7a)
a1(t) = [b1(t) − k1 a2^0.5(t)]/k2 = b1(t)/k2 − (k1/k2)(b2(t)/k2)^0.5 = b1(t)/k2 − (k1/k2^1.5) b2^0.5(t).   (7b)
Before progressing further, let us compute fA from (3b); by using the above formulas for a2 and a1, we obtain
fA = (1/4)(b2/k2)^-0.5 [2(b1/k2 − k1 b2^0.5/k2^1.5) − ḃ2/k2]
   = (k2^0.5/4) b2^-0.5 [(2b1 − ḃ2)/k2 − 2k1 b2^0.5/k2^1.5]
   = k2^-0.5 b2^-0.5 (2b1 − ḃ2)/4 − k1/(2k2).
Finally, defining b2^-0.5(2b1 − ḃ2)/4 as fB as in (5b), we have
fA = k2^-0.5 fB − k1/(2k2),   (8a)
or equivalently
fB = k2^0.5 fA + k1/(2k2^0.5).   (8b)
We now compute a0 from the last row of (3a) by using (8a):
a0 = [b0 − k1 fA − k0]/k2 = b0/k2 − (k1/k2)[k2^-0.5 fB − k1/(2k2)] − k0/k2
   = b0/k2 − (k1/k2^1.5) fB + k1^2/(2k2^2) − k0/k2.   (9)
Writing (7a), (7b), and (9) in matrix form we obtain
[a2; a1; a0] = [b2 0 0; b1 b2^0.5 0; b0 fB 1] [1/k2; −k1/k2^1.5; k1^2/(2k2^2) − k0/k2].   (10)
Comparing with Eq. (5a), we observe that (5a) is valid with
[l2; l1; l0] = [1/k2; −k1/k2^1.5; k1^2/(2k2^2) − k0/k2].   (11a)
Hence (5a) has been proved.
For use in the sequel, we solve (11a) for the k_i's and obtain
[k2; k1; k0] = [1/l2; −l1/l2^1.5; l1^2/(2l2^2) − l0/l2],   (11b)
which is naturally the dual of Eq. (11a) with k and l interchanged. By using (11b) in (8a) and (8b), or directly interchanging A ↔ B and k_i ↔ l_i in (8a) and (8b), we obtain the following equations:
fB = l2^-0.5 fA − l1/(2l2),   (11c)
fA = l2^0.5 (fB + l1/(2l2)).   (11d)
To show (5c), we substitute the values of b2 and b0 from (3a) and the value of fB from (8b) into the left side of (5c), and obtain
b0 − fB^2 − b2^0.5 ḟB = k2 a0 + k1 fA + k0 − (k2^0.5 fA + k1/(2k2^0.5))^2 − (k2 a2)^0.5 (k2^0.5 ḟA)
   = k2 a0 + k1 fA + k0 − k2 fA^2 − k1 fA − k1^2/(4k2) − k2 a2^0.5 ḟA
   = k2 (a0 − fA^2 − a2^0.5 ḟA) + k0 − k1^2/(4k2).
Finally, using (3c),
b0 − fB^2 − b2^0.5 ḟB = k2 A0 + k0 − k1^2/(4k2),
which is constant since A0 is constant. Hence (5c) is valid with
B0 = k2 A0 + k0 − k1^2/(4k2),   (12a)
or equivalently
A0 = B0/k2 − k0/k2 + k1^2/(4k2^2).   (12b)
The dual equations for (12a) and (12b) can be written by using the constants l_i; this is done by using (11b) in (12a) and (12b), or directly interchanging A ↔ B and k_i ↔ l_i in (12a) and (12b). The results are
A0 = l2 B0 + l0 − l1^2/(4l2),   (12c)
B0 = A0/l2 − l0/l2 + l1^2/(4l2^2).   (12d)
Equations (6a) and (6b) are the same as Eqs. 4a and 4b, respectively, so they do not need to
be proved.
To prove (6c), we start from (4c); inserting the values of the k_i's from (11b) and the value of A0 from (12c), we obtain:
(1/l2 + l1^2/(2l2^2) − l0/l2 − 1)^2 = (l1^2/l2^3)(1 − l2 B0 − l0 + l1^2/(4l2))
((1 − l0 − l2)/l2 + l1^2/(2l2^2))^2 = (l1^2/l2^3)(1 − l2 B0 − l0 + l1^2/(4l2))
(1 − l0 − l2 + l1^2/(2l2))^2 = (l1^2/l2)(1 − l2 B0 − l0 + l1^2/(4l2))
(1 − l0 − l2)^2 + (l1^2/l2)(1 − l0 − l2) + l1^4/(4l2^2) = (l1^2/l2)(1 − l2 B0 − l0) + l1^4/(4l2^2)
(1 − l0 − l2)^2 = (l1^2/l2)(1 − l2 B0 − l0 − 1 + l0 + l2)
(l2 + l0 − 1)^2 = l1^2 (1 − B0)
To prove (6d), we start from (4d) and, using (4a) and (4b), we write
ẏA(t0) = −a2^-0.5(t0) [(k2 + k0 − 1)/k1 + fA(t0)] yA(t0).
Using (5a) for a2, (11b) for the k_i's and (11d) for fA, we proceed
ẏA(t0) = −[l2 b2(t0)]^-0.5 [ (1/l2 + l1^2/(2l2^2) − l0/l2 − 1)/(−l1/l2^1.5) + l2^0.5 fB(t0) + l1/(2l2^0.5) ] yA(t0)
  = −l2^-0.5 b2^-0.5(t0) [ −(l2^1.5/l1)(1/l2 + l1^2/(2l2^2) − l0/l2 − 1) + l2^0.5 fB(t0) + l1/(2l2^0.5) ] yA(t0)
  = −b2^-0.5(t0) [ −1/l1 − l1/(2l2) + l0/l1 + l2/l1 + fB(t0) + l1/(2l2) ] yA(t0)
  = −b2^-0.5(t0) [ (l2 + l0 − 1)/l1 + fB(t0) ] yA(t0)
which is the same equation as (6d). Hence the proof of Lemma 1 is completed.
Fact: Comparing Eqs. (4d) and (6d), together with the equalities yA(t0) = yB(t0), ẏA(t0) = ẏB(t0), we see that the derivatives ẏA(t0) and ẏB(t0) are constant multiples of yA(t0) and yB(t0). The multipliers depend on the initial time, and
−a2^-0.5(t0) [(k2 + k0 − 1)/k1 + fA(t0)] = −b2^-0.5(t0) [(l2 + l0 − 1)/l1 + fB(t0)].
Inserting the value of b2 from (3a) and the value of fB(t0) from (8b) yields
(l2 + l0 − 1)/l1 = k2^0.5 [(k2 + k0 − 1)/k1 − k1/(2k2)].
On the other hand, inserting the value of a2 from (5a) and the value of fA(t0) from (11d) yields
(k2 + k0 − 1)/k1 = l2^0.5 [(l2 + l0 − 1)/l1 − l1/(2l2)].
This is the dual of the previous equation. Using the transformations (11a) and (11b) between the k_i's and l_i's, it is straightforward to show that the above relations between the k_i's and l_i's are valid.
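The parameter transformations (11a)/(11b) and the relations (12a)-(12d) between A0 and B0 can be checked mechanically. The test values below are arbitrary (our own) and only illustrate that the round trip k → l → k and the A0 ↔ B0 relations are consistent.

```python
k2, k1, k0, A0 = 3.0, 1.5, -0.5, 2.0        # arbitrary test values

# Eq. (11a): parameters of the inverse direction
l2 = 1 / k2
l1 = -k1 / k2**1.5
l0 = k1**2 / (2 * k2**2) - k0 / k2

# Eq. (12a): B0 in terms of A0
B0 = k2 * A0 + k0 - k1**2 / (4 * k2)

# Eq. (11b) must take us back to k2, k1, k0 ...
assert abs(1 / l2 - k2) < 1e-12
assert abs(-l1 / l2**1.5 - k1) < 1e-12
assert abs(l1**2 / (2 * l2**2) - l0 / l2 - k0) < 1e-12
# ... and Eq. (12c) must take B0 back to A0
assert abs(l2 * B0 + l0 - l1**2 / (4 * l2) - A0) < 1e-12
print("round trips consistent")
```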
IV. Transitivity Property of Commutativity
To be able to study the transitivity property of commutativity for second-order linear time-varying systems, we should consider a third system C of the same type as A and B considered in the previous sections. So, let C be defined by the following second-order differential equation:
𝑐2 (𝑡)𝑦̈ 𝐶 (𝑡) + 𝑐1 (𝑡)𝑦̇ 𝐶 (𝑡) + 𝑐0 (𝑡)𝑦𝐶 (𝑡) = 𝑥𝐶 (𝑡); 𝑡 ≥ 𝑡0 ,
(13a)
𝑦𝐶 (𝑡0 ), 𝑦̇ 𝐶 (𝑡0 ),
(13b)
𝑐2 (𝑡) ≢ 0,
(13c)
where c̈2(t), ċ1(t), c0(t) ∈ C[t0, ∞). We assume C is commutative with B, so relations similar to Eqs. (3) and (4) can be written as
I. i) [c2; c1; c0] = [b2 0 0; b1 b2^0.5 0; b0 fB 1] [m2; m1; m0],   (14a)
where
fB = b2^-0.5 (2b1 − ḃ2)/4,   (14b)
and m2, m1, m0 are some constants with m2 ≠ 0 since C is of second order. Further, we assume that C cannot be obtained from B by constant feed-forward and feedback gains; hence m1 ≠ 0. Moreover,
   ii) b0 − fB^2 − b2^0.5 ḟB = B0, ∀t ≥ t0,   (14c)
where B0 is a constant. When the initial conditions are non-zero, the following should be satisfied:
II. i) yC(t0) = yB(t0) ≠ 0,   (15a)
       ẏC(t0) = ẏB(t0),   (15b)
    ii) (m2 + m0 − 1)^2 = m1^2 (1 − B0),   (15c)
        ẏC(t0) = −b2^-0.5(t0) [(m2 + m0 − 1)/m1 + fB(t0)] yC(t0).   (15d)
Considering the inverse commutativity conditions derived for A and B, the inverse commutativity conditions for B and C can be written from Eqs. (5) and (6) by changing A → B and B → C, k_i → m_i and l_i → n_i in Eqs. (5) and (6). The results are
I. i) [b2; b1; b0] = [c2 0 0; c1 c2^0.5 0; c0 fC 1] [n2; n1; n0],   (16a)
where
fC = c2^-0.5 (2c1 − ċ2)/4.   (16b)
   ii) c0 − fC^2 − c2^0.5 ḟC = C0, ∀t ≥ t0.   (16c)
II. i) yB(t0) = yC(t0),   (17a)
       ẏB(t0) = ẏC(t0),   (17b)
    ii) (n2 + n0 − 1)^2 = n1^2 (1 − C0),   (17c)
        ẏB(t0) = −c2^-0.5(t0) [(n2 + n0 − 1)/n1 + fC(t0)] yB(t0).   (17d)
Further, Eqs. (8a) and (8b) become
fB = m2^-0.5 fC − m1/(2m2),   (18a)
fC = m2^0.5 fB + m1/(2m2^0.5).   (18b)
The relations between the constants m_i and n_i can be written from Eqs. (11a) and (11b) by the replacements k_i → m_i, l_i → n_i. The results are
[n2; n1; n0] = [1/m2; −m1/m2^1.5; m1^2/(2m2^2) − m0/m2],   (19a)
[m2; m1; m0] = [1/n2; −n1/n2^1.5; n1^2/(2n2^2) − n0/n2].   (19b)
By using the values of the m_i's in Eqs. (18a) and (18b), or directly interchanging B ↔ C, m_i ↔ n_i, we obtain
fC = n2^-0.5 fB − n1/(2n2),   (19c)
fB = n2^0.5 fC + n1/(2n2^0.5).   (19d)
Finally, Eqs. (12a, b, c, d) turn out to be
C0 = m2 B0 + m0 − m1^2/(4m2),   (20a)
B0 = C0/m2 − m0/m2 + m1^2/(4m2^2),   (20b)
B0 = n2 C0 + n0 − n1^2/(4n2),   (20c)
C0 = B0/n2 − n0/n2 + n1^2/(4n2^2),   (20d)
by the replacements A → B, B → C, k_i → m_i, l_i → n_i.
The preliminaries are now ready for studying the transitivity property of commutativity. Assuming B is commutative with A and C is commutative with B, we need to answer whether C is a commutative pair of A. The answer is expressed by the following theorems and their proofs.
Theorem 1: The transitivity property of commutativity for second-order linear time-varying analog systems which cannot be obtained from each other by constant feed-forward and feedback gains is always valid under zero initial conditions.
Proof: Since it is true by hypothesis that B is commutative with A, Eqs. (3a) and (3c) are valid; since it is true by hypothesis that C is commutative with B, Eqs. (14a) and (14c) are also valid.
To prove Theorem 1, it should be proven that C is commutative with A under zero initial conditions. Referring to the commutativity conditions for A and B in Eq. (3a) and replacing B by C, this proof is done by showing the validity of
[c2; c1; c0] = [a2 0 0; a1 a2^0.5 0; a0 fA 1] [p2; p1; p0],   (20)
where 𝑓𝐴 (𝑡) is given as in Eq. (3b), and the coefficients of 𝐴 already satisfy Eq. (3c) due to
the commutativity of 𝐵 with 𝐴; further 𝑝2 , 𝑝1 , 𝑝0 are some constants to be revealed:
Using Eq. (14a) first and then Eq. (3a), as well as Eq. (8b) for computing c0, we can express c2, c1, c0 as follows:
c2 = m2 b2 = m2 k2 a2,
c1 = m2 b1 + m1 b2^0.5 = m2 (k2 a1 + k1 a2^0.5) + m1 (k2 a2)^0.5 = m2 k2 a1 + (m2 k1 + m1 k2^0.5) a2^0.5,
c0 = m2 b0 + m1 fB + m0 = m2 (k2 a0 + k1 fA + k0) + m1 (k2^0.5 fA + k1/(2k2^0.5)) + m0
   = m2 k2 a0 + (m2 k1 + m1 k2^0.5) fA + m2 k0 + m1 k1/(2k2^0.5) + m0.
These results can be written as
[c2; c1; c0] = [a2 0 0; a1 a2^0.5 0; a0 fA 1] [m2 k2; m2 k1 + m1 k2^0.5; m2 k0 + m1 k1/(2k2^0.5) + m0],   (21a)
which is exactly of the same form as Eq. (20) with the constants p2, p1, p0 given by
[p2; p1; p0] = [m2 k2; m2 k1 + m1 k2^0.5; m2 k0 + m1 k1/(2k2^0.5) + m0].   (21b)
So, the proof is completed.
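As an independent check, which is not part of the original derivation, the algebra leading to Eq. (21b) can be verified symbolically. The following SymPy sketch uses assumed symbol names and simply confirms that composing the 𝐴 → 𝐵 relation of Eq. (3a) with the 𝐵 → 𝐶 relation of Eq. (14a) reproduces the form of Eq. (20) with the constants of Eq. (21b):

```python
# Symbolic check of Eq. (21b); all symbol names are assumptions made for this sketch.
import sympy as sp

a2, k2, m2 = sp.symbols('a2 k2 m2', positive=True)   # leading coefficients/constants assumed nonzero
a1, a0, k1, k0, m1, m0 = sp.symbols('a1 a0 k1 k0 m1 m0')
fA = sp.Symbol('f_A')                                 # f_A(t) of Eq. (3b), kept symbolic

MA = sp.Matrix([[a2, 0, 0], [a1, sp.sqrt(a2), 0], [a0, fA, 1]])

# B from A via Eq. (3a) and f_B from Eq. (8b)
bvec = MA * sp.Matrix([k2, k1, k0])
b2, b1, b0 = bvec[0], bvec[1], bvec[2]
fB = sp.sqrt(k2) * fA + k1 / (2 * sp.sqrt(k2))

# C from B via Eq. (14a)
MB = sp.Matrix([[b2, 0, 0], [b1, sp.sqrt(b2), 0], [b0, fB, 1]])
cvec = MB * sp.Matrix([m2, m1, m0])

# Direct form of Eq. (20) with the constants of Eq. (21b)
p2 = m2 * k2
p1 = m2 * k1 + m1 * sp.sqrt(k2)
p0 = m2 * k0 + m1 * k1 / (2 * sp.sqrt(k2)) + m0
cdir = MA * sp.Matrix([p2, p1, p0])

# Each difference should simplify to zero, confirming Eq. (21b).
print([sp.simplify(cvec[i] - cdir[i]) for i in range(3)])
```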
For the validity of transitivity property for second-order linear time-varying analog systems
under non-zero initial conditions, we state the following theorem.
Theorem 2: Transitivity property of commutativity of systems considered in Theorem 1 is
valid for the non-zero initial conditions of the systems as well.
Proof: The proof is done by showing the commutativity of 𝐶 with 𝐴 under non-zero initial
conditions as well. Since 𝐶 is commutative with 𝐴 under zero initial conditions, Eq. (20)
and Eq. (3a) are valid as shown in the proof of Theorem 1. To complete the proof, we
should show that 𝐶 is a commutative pair of 𝐴 under non-zero initial conditions as well, that is, Eqs. (4a-d) are satisfied for systems 𝐶 (instead of 𝐵) and 𝐴. Namely,
𝑦𝐶 (𝑡0 ) = 𝑦𝐴 (𝑡0 ) ≠ 0,
(22a)
𝑦̇ 𝐶 (𝑡0 ) = 𝑦̇𝐴 (𝑡0 ),
(22b)
(𝑝2 + 𝑝0 − 1)2 = 𝑝12 (1 − 𝐴0 ),
(22c)
𝑦̇𝐶(𝑡0) = −𝑎2^−0.5(𝑡0) [(𝑝2 + 𝑝0 − 1)/𝑝1 + 𝑓𝐴(𝑡0)] 𝑦𝐶(𝑡0), (22d)
where 𝑘𝑖 ’s in Eq. (3a) for system 𝐵 are replaced by 𝑝𝑖 ‘s in Eq. (20) for system 𝐶.
Since (𝐴, 𝐵) and (𝐵, 𝐶) are commutative under non-zero initial conditions by hypothesis,
Eqs. (4a, b) and (17a, b) are satisfied; so it follows that Eqs. (22a) and (22b) are valid. Since
𝐵 and 𝐶 are commutative, in the commutativity condition (4c) 𝑦′𝐵(𝑡0) and 𝑦𝐵(𝑡0) can be
replaced by 𝑦′𝐶(𝑡0) and 𝑦𝐶(𝑡0) due to Eqs. (15a, b); the result is
𝑦̇𝐶(𝑡0) = −𝑎2^−0.5(𝑡0) [(𝑘2 + 𝑘0 − 1)/𝑘1 + 𝑓𝐴(𝑡0)] 𝑦𝐶(𝑡0). (23)
On the other hand, 𝑦̇𝐶(𝑡0) and 𝑦𝐶(𝑡0) are related by Eq. (15d). Comparing it with Eq. (23),
we write
−𝑏2^−0.5(𝑡0) [(𝑚2 + 𝑚0 − 1)/𝑚1 + 𝑓𝐵(𝑡0)] 𝑦𝐶(𝑡0) = −𝑎2^−0.5(𝑡0) [(𝑘2 + 𝑘0 − 1)/𝑘1 + 𝑓𝐴(𝑡0)] 𝑦𝐶(𝑡0). (24)
Since (𝐴, 𝐵) is a commutative pair, substituting the values of 𝑎2 from Eq. (5a) and 𝑓𝐴 from
Eq. (8a) into Eq. (24), we obtain
−𝑏2^−0.5(𝑡0) [(𝑚2 + 𝑚0 − 1)/𝑚1 + 𝑓𝐵(𝑡0)] 𝑦𝐶(𝑡0)
= −(𝑏2/𝑘2)^−0.5(𝑡0) [(𝑘2 + 𝑘0 − 1)/𝑘1 + 𝑘2^−0.5 𝑓𝐵(𝑡0) − 𝑘1/(2𝑘2)] 𝑦𝐶(𝑡0). (25)
Since 𝑏2(𝑡) ≠ 0 and 𝑦𝐶(𝑡0) ≠ 0, we can write the above equality as
(𝑚2 + 𝑚0 − 1)/𝑚1 + 𝑓𝐵(𝑡0) = 𝑘2^0.5 [(𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2)] + 𝑓𝐵(𝑡0). (26)
Finally, cancelling 𝑓𝐵(𝑡0), we obtain
(𝑚2 + 𝑚0 − 1)/𝑚1 = 𝑘2^0.5 [(𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2)], (27)
which holds due to the commutativity of (𝐴, 𝐵) and (𝐵, 𝐶) under non-zero initial conditions.
Now, to prove Eq. (22c), we proceed as follows: Using Eq. (21b), we compute
(𝑝2 + 𝑝0 − 1)/𝑝1 = [𝑚2 𝑘2 + 𝑚2 𝑘0 + 𝑚1 𝑘1/(2𝑘2^0.5) + 𝑚0 − 1] / [𝑚2 𝑘1 + 𝑚1 𝑘2^0.5]. (28a)
Solving Eq. (27) for 𝑚1, we have
𝑚1 = (𝑚2 + 𝑚0 − 1) / [𝑘2^0.5 ((𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2))]. (28b)
Substituting Eq. (28b) in (28a), we proceed as
(𝑝2 + 𝑝0 − 1)/𝑝1
= [𝑚2(𝑘2 + 𝑘0) + 𝑚0 − 1 + (𝑘1/(2𝑘2)) (𝑚2 + 𝑚0 − 1)/((𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2))]
  / [𝑚2 𝑘1 + (𝑚2 + 𝑚0 − 1)/((𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2))]
= {[𝑚2(𝑘2 + 𝑘0) + 𝑚0 − 1] ((𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2)) + 𝑘1(𝑚2 + 𝑚0 − 1)/(2𝑘2)}
  / {𝑚2 𝑘1 ((𝑘2 + 𝑘0 − 1)/𝑘1 − 𝑘1/(2𝑘2)) + (𝑚2 + 𝑚0 − 1)}
= {(𝑘1/(2𝑘2))[𝑚2 + 𝑚0 − 1 − 𝑚2(𝑘2 + 𝑘0) − 𝑚0 + 1] + ((𝑘2 + 𝑘0 − 1)/𝑘1)[𝑚2(𝑘2 + 𝑘0) + 𝑚0 − 1]}
  / {𝑚2(𝑘2 + 𝑘0 − 1) + 𝑚2 + 𝑚0 − 1 − 𝑚2 𝑘1^2/(2𝑘2)}
= [(𝑘2 + 𝑘0 − 1)/𝑘1] [𝑚2(𝑘2 + 𝑘0) + 𝑚0 − 1 − 𝑚2 𝑘1^2/(2𝑘2)] / [𝑚2(𝑘2 + 𝑘0) + 𝑚0 − 1 − 𝑚2 𝑘1^2/(2𝑘2)]
= (𝑘2 + 𝑘0 − 1)/𝑘1 . (28c)
Using the equality (28c) in Eq. (4c) directly yields Eq. (22c). On the other hand, using Eq.
(28c) in Eq. (23) yields Eq. (22d), which completes the proof of Theorem 2.
We now introduce an example to illustrate the results obtained in the paper and to validate
the transitivity by computer simulation.
V.
Example
To illustrate the validity of the results obtained in the previous section, consider
the system 𝐴 defined by
𝐴: 𝑦′′𝐴 + (3 + sin 𝑡)𝑦′𝐴 + (3.25 + 0.25𝑠𝑖𝑛2 𝑡 + 1.5 sin 𝑡 + 0.5 cos 𝑡)𝑦𝐴 = 𝑥𝐴 , (29a)
for which Eq. (3b) yields
𝑓𝐴(𝑡) = 𝑎2^−0.5 [2𝑎1 − 𝑎̇2]/4 = 1·[2(3 + sin 𝑡) − 0]/4 = (6 + 2 sin 𝑡)/4 = 1.5 + 0.5 sin 𝑡 , (29b)
𝑓′𝐴(𝑡) = 0.5 cos 𝑡 . (29c)
To check Eq. (3c), we proceed:
𝐴0 = 𝑎0 − 𝑓𝐴^2 − 𝑎2^0.5 𝑓′𝐴
= 3.25 + 0.25 sin^2 𝑡 + 1.5 sin 𝑡 + 0.5 cos 𝑡 − (1.5 + 0.5 sin 𝑡)^2 − 0.5 cos 𝑡
= 3.25 + 0.25 sin^2 𝑡 + 1.5 sin 𝑡 − 2.25 − 1.5 sin 𝑡 − 0.25 sin^2 𝑡
= 1. (29d)
Hence, this expression is constant, that is 𝐴0 = 1. Choosing 𝑘2 = 1, 𝑘1 = −2, 𝑘0 = 0 in Eq.
(3a),
[𝑏2; 𝑏1; 𝑏0] = [𝑎2, 0, 0; 𝑎1, 𝑎2^0.5, 0; 𝑎0, 𝑓𝐴, 1] [1; −2; 0] = [𝑎2; 𝑎1 − 2𝑎2^0.5; 𝑎0 − 2𝑓𝐴]
= [1; 3 + sin 𝑡 − 2; 3.25 + 0.25 sin^2 𝑡 + 1.5 sin 𝑡 + 0.5 cos 𝑡 − 2(1.5 + 0.5 sin 𝑡)]
= [1; 1 + sin 𝑡; 0.25 + 0.25 sin^2 𝑡 + 0.5 sin 𝑡 + 0.5 cos 𝑡]. (30a)
So, 𝐴 and 𝐵 are commutative under zero initial conditions. From Eq. (30a), we compute 𝑓𝐵
and 𝐵0 by using Eqs. (5b) and (5c)
𝑓𝐵 = 𝑏2^−0.5 (2𝑏1 − 𝑏̇2)/4 = (2 + 2 sin 𝑡)/4 = 0.5 + 0.5 sin 𝑡 , (30b)
𝑓′𝐵 = 0.5 cos 𝑡 , (30c)
𝐵0 = 𝑏0 − 𝑓𝐵^2 − 𝑏2^0.5 𝑓′𝐵
= 0.25 + 0.25 sin^2 𝑡 + 0.5 sin 𝑡 + 0.5 cos 𝑡 − (0.5 + 0.5 sin 𝑡)^2 − 0.5 cos 𝑡
= 0.25 + 0.25 sin^2 𝑡 + 0.5 sin 𝑡 − 0.25 − 0.5 sin 𝑡 − 0.25 sin^2 𝑡
= 0. (30d)
We check the validity of Eq. (12a) by using Eqs. (30d) and (29d):
𝐵0 = 𝑘2 𝐴0 + 𝑘0 − 𝑘1^2/(4𝑘2) = 1(1) + 0 − (−2)^2/(4(1)) = 0. (30e)
It can be checked easily by using Eqs. (29b) and (30b) that Eqs. (8a,b) are also correct.
Considering the requirements for the non-zero initial conditions at 𝑡0 = 0, Eq. (4) yields
𝑦̇𝐵(0) = −(1)^−0.5 [(1 + 0 − 1)/(−2) + 1.5 + 0.5 sin 0] 𝑦𝐴(0) = −1.5 𝑦𝐵(0). (31a)
Hence, for the commutativity of 𝐴 and 𝐵 under non-zero initial conditions as well, due to
Eqs. (6a, b) and (31a),
𝑦′𝐴 (0) = 𝑦′𝐵 (0) = −1.5𝑦𝐵 (0) = −1.5𝑦𝐴 (0).
(31b)
We now consider a third system 𝐶 which is commutative with 𝐵. Therefore, using Eq. (14a)
with 𝑚2 = 1, 𝑚1 = 3, 𝑚0 = 3, we have
[𝑐2; 𝑐1; 𝑐0] = [𝑏2, 0, 0; 𝑏1, 𝑏2^0.5, 0; 𝑏0, 𝑓𝐵, 1] [1; 3; 3].
Inserting the values of 𝑏𝑖 from Eq. (30a) and the value of 𝑓𝐵 from Eq. (30b), we have
[𝑐2; 𝑐1; 𝑐0] = [1, 0, 0; 1 + sin 𝑡, 1, 0; 0.25 + 0.25 sin^2 𝑡 + 0.5 sin 𝑡 + 0.5 cos 𝑡, 0.5 + 0.5 sin 𝑡, 1] [1; 3; 3]
= [1; 4 + sin 𝑡; 4.75 + 0.25 sin^2 𝑡 + 2 sin 𝑡 + 0.5 cos 𝑡]. (32a)
Eqs. (16b) and (16c) yield
𝑓𝐶 = (8 + 2 sin 𝑡)/4 = 2 + 0.5 sin 𝑡 , (32b)
𝑓′𝐶 = 0.5 cos 𝑡 , (32c)
𝐶0 = 𝑐0 − 𝑓𝐶^2 − 𝑐2^0.5 𝑓′𝐶
= 4.75 + 0.25 sin^2 𝑡 + 2 sin 𝑡 + 0.5 cos 𝑡 − (2 + 0.5 sin 𝑡)^2 − 0.5 cos 𝑡
= 4.75 + 0.25 sin^2 𝑡 + 2 sin 𝑡 − 4 − 2 sin 𝑡 − 0.25 sin^2 𝑡
= 0.75. (32d)
One can easily check that 𝐶0 and 𝐵0 in Eqs. (32d) and (30d) satisfy relations in Eqs. (20a,b).
For the commutativity of 𝐵 and 𝐶 under non-zero initial conditions at time 𝑡0 = 0 as well,
Eq. (15) together with Eqs. (30a), (30b) and the chosen values of 𝑚𝑖 yields
𝑦̇𝐶(0) = 𝑦̇𝐵(0) = −(1)^−0.5 [(𝑚2 + 𝑚0 − 1)/𝑚1 + 0.5 + 0.5 sin 0] 𝑦𝐶(0)
= −[(1 + 3 − 1)/3 + 0.5] 𝑦𝐶(0) = −1.5 𝑦𝐶(0) = −1.5 𝑦𝐵(0). (33)
Considering the transitivity property under non-zero initial conditions, the
conditions of Theorem 2 are satisfied. Namely, using the chosen values of 𝑚𝑖 ’s and 𝑘𝑖 ’s ,
from Eq. (21b), we have 𝑝2 = 1, 𝑝1 = 1, 𝑝0 = 0. And with 𝐴0 = 1 as computed in Eq.
(29d), Eq. (22c) is satisfied;
(1 + 0 − 1)2 = (1)2 (1 − 1).
So does Eq. (22d):
𝑦̇𝐶(0) = −(1)^−0.5 [(1 + 0 − 1)/1 + 1.5 + 0.5 sin 0] 𝑦𝐶(0) = −1.5 𝑦𝐶(0),
which agrees with Eq. (33).
The simulations are done for the interconnection of the above-mentioned systems
𝐴, 𝐵, 𝐶. The initial conditions are taken as
𝑦𝐴(0) = 𝑦𝐵(0) = 𝑦𝐶(0) = 1, (34a)
𝑦̇𝐴(0) = 𝑦̇𝐵(0) = 𝑦̇𝐶(0) = −1.5 𝑦𝐴(0) = −1.5 𝑦𝐵(0) = −1.5 𝑦𝐶(0) = −1.5. (34b)
The input is taken as 40 sin(10𝜋𝑡). It is observed that 𝐴𝐵 and 𝐵𝐴 yield the same response,
𝐵𝐶 and 𝐶𝐵 yield the same response, and 𝐶𝐴 and 𝐴𝐶 also yield the same response. These
responses are shown in Fig. 2 by 𝐴𝐵 = 𝐵𝐴, 𝐵𝐶 = 𝐶𝐵, 𝐶𝐴 = 𝐴𝐶, respectively. Hence, the
transitivity property shows up: if (𝐴, 𝐵) and (𝐵, 𝐶) are commutative pairs, so is (𝐴, 𝐶).
These simulations and all the subsequent ones are done with the MATLAB 2010 Simulink
Toolbox with a fixed time step of 0.02 using the ode3 (Bogacki-Shampine) solver; the final
time is 𝑡 = 10.
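For readers without Simulink, the cascade experiment can be reproduced approximately with the following Python sketch. This is not the original simulation setup: the adaptive scipy solver replaces the fixed-step ode3 run, and the interpretation of "𝐴𝐵" as the input driving 𝐴 whose output drives 𝐵 is our assumption.

```python
# A rough re-implementation (assumed, not the authors' files) of the cascade comparison.
import numpy as np
from scipy.integrate import solve_ivp

def coeffs_A(t):
    return 3 + np.sin(t), 3.25 + 0.25*np.sin(t)**2 + 1.5*np.sin(t) + 0.5*np.cos(t)

def coeffs_B(t):
    return 1 + np.sin(t), 0.25 + 0.25*np.sin(t)**2 + 0.5*np.sin(t) + 0.5*np.cos(t)

def cascade(first, second, u, t_span, y0):
    """First subsystem driven by u(t); its output y1 drives the second subsystem."""
    def rhs(t, s):
        y1, dy1, y2, dy2 = s
        a1f, a0f = first(t)
        a1s, a0s = second(t)
        return [dy1, u(t) - a1f*dy1 - a0f*y1,
                dy2, y1   - a1s*dy2 - a0s*y2]
    return solve_ivp(rhs, t_span, y0, max_step=0.02, dense_output=True)

u = lambda t: 40*np.sin(10*np.pi*t)
# Non-zero initial conditions of Eq. (34): y(0) = 1 and y'(0) = -1.5 for every subsystem.
y0 = [1.0, -1.5, 1.0, -1.5]
sol_AB = cascade(coeffs_A, coeffs_B, u, (0, 10), y0)
sol_BA = cascade(coeffs_B, coeffs_A, u, (0, 10), y0)
tt = np.linspace(0, 10, 501)
print(np.max(np.abs(sol_AB.sol(tt)[2] - sol_BA.sol(tt)[2])))  # should stay small if AB = BA
```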
Fig. 2: Outputs of commutative cascade connections 𝐴𝐵, 𝐵𝐴, 𝐶𝐵, 𝐵𝐶, 𝐶𝐴, 𝐴𝐶 with
nonzero initial conditions.
The second set of simulations is obtained with zero initial conditions; the conditions
of Theorem 1 are satisfied by choosing 𝑚2 = 1, 𝑚1 = −1, 𝑚0 = 3, so that 𝐶 is obtained
from 𝐵 through Eq. (14) as
[𝑐2; 𝑐1; 𝑐0] = [1, 0, 0; 1 + sin 𝑡, 1, 0; 0.25 + 0.25 sin^2 𝑡 + 0.5 sin 𝑡 + 0.5 cos 𝑡, 0.5 + 0.5 sin 𝑡, 1] [1; −1; 3]
= [1; sin 𝑡; 2.75 + 0.25 sin^2 𝑡 + 0.5 cos 𝑡].
Hence, 𝐶 is commutative with 𝐵, and together with 𝐵 being commutative with 𝐴,
the conditions of Theorem 1 are satisfied so that 𝐶 is commutative with 𝐴. This is observed
in Fig. 3. In this figure, the responses indicated by 𝐴𝐵 = 𝐵𝐴, 𝐵𝐶 = 𝐶𝐵, 𝐶𝐴 = 𝐴𝐶, which
are all obtained with zero initial conditions, validate the transitivity property of
commutativity; that is, Theorem 1 is valid.
Fig. 3: The simulations obtained with zero initial conditions.
Finally, the simulations are performed for arbitrary initial conditions 𝑦𝐴(0) = 0.4,
𝑦̇𝐴(0) = −0.3, 𝑦𝐵(0) = 0.2, 𝑦̇𝐵(0) = −0.4, 𝑦𝐶(0) = −0.5, 𝑦̇𝐶(0) = 0.5. It is observed
that (𝐴, 𝐵), (𝐵, 𝐶), (𝐶, 𝐴) are not commutative pairs at all; the plots AB, BA; BC, CB; CA,
AC are shown in Fig. 4, respectively. However, since all systems (individually and in pairs
as cascade connected) are asymptotically stable and the effects of non-zero initial conditions
die away as time proceeds, and 𝐴, 𝐵, 𝐶 are pairwise commutative with zero initial
conditions, the responses of 𝐴𝐵 and 𝐵𝐴, 𝐵𝐶 and 𝐶𝐵, 𝐶𝐴 and 𝐴𝐶 approach each other with
increasing time. That is, the commutativity property and its transitivity become valid in the
steady-state case.
Fig. 4: Responses of cascade connection of Systems 𝐴, 𝐵, 𝐶 (which are commutative with
zero initial conditions) with arbitrary initial conditions not satisfying commutativity
conditions.
VI.
Conclusions
On the basis of the commutativity conditions for second-order linear time-varying
analog systems with non-zero initial conditions, the inverse commutativity conditions are
reformulated completely in the form of Lemma 1 by considering the case of non-zero initial
conditions. With the obtained results, the transitivity property of commutativity is stated
both for relaxed and unrelaxed cases by Theorems 1 and 2, respectively. Throughout the
study, the subsystems considered are assumed not to be obtainable from each other by any
feed-forward and feedback structure, which is a case that needs special treatment due to special
commutativity requirements in the case of non-zero initial conditions [21].
All the results derived in the paper are verified by simulations performed with the
MATLAB 2010 Simulink Toolbox using the ode3 (Bogacki-Shampine) solver.
Acknowledgments: This study is supported by the Scientific and Technological Research
Council of Turkey (TUBITAK) under the project no. 115E952.
References
[1] M. Borenovic, A. Neskovic, D. Budimir, Space partitioning strategies for indoor WLAN positioning with cascade-connected ANN structures, International Journal of Neural Systems, 21, 1-15, 2011.
[2] D. Antic, Z. Jovanovic, V. Nikolic, M. Milojkovic, S. Nikolic, N. Dankovic, Modelling of cascade-connected systems using quasi-orthogonal functions, Elektronika Ir Elektrotechnika, 18, 3-8, 2012.
[3] K. Nagase, Wave analysis and control of double cascade-connected damped mass-spring systems, Mechanics Research Communications, 70, 49-57, 2015.
[4] B. Samardzic, B. M. Zlatkovic, Analysis of spatial chaos appearance in cascade connected nonlinear electrical circuits, Chaos, Solitons and Fractals, 95, 14-20, 2017.
[5] T. Kaczorek, Positive time-varying continuous-time linear systems and electrical circuits, The Journal of the Polish Academy of Sciences, 63, 1-6, 2015.
[6] B. Basu, A. Staino, Control of a linear time-varying system with a forward Riccati formulation in wavelet domain, Journal of Dynamic Systems Measurement and Control-Transactions of the ASME, 138, 1-6, 2016.
[7] R.K.R. Alla, J.S. Lather, G.L. Pahuja, New delay dependent stability criterion for linear system with time-varying delay using Wirtinger's inequality, Journal of Engineering Research, 4, 103-116, 2016.
[8] J. Wang, C.M. Mak, An active vibration control system with decoupling scheme for linear periodically time-varying systems, Journal of Vibration and Control, 22, 2370-2379, 2016.
[9] W. Guan, C. Wang, D.S. Chen, C.Y. Luo, F.F. Su, Recursive principal component analysis with forgetting factor for operational modal analysis of linear time-varying system, International Journal of Applied Electromagnetics and Mechanics, 52, 999-1006, 2016.
[10] J. Lataire, R. Pintelon, D. Piga, R. Toth, Continuous-time linear time-varying system identification with a frequency-domain kernel-based estimator, IET Control Theory and Applications, 11, 457-465, 2017.
[11] R. Boylestad and L. Nashelsky, Electronic Devices and Circuit Theory, Prentice Hall, New Jersey, 2013.
[12] E. Marshal, Commutativity of time varying systems, Electronics Letters, 13, 539-540, 1977.
[13] M. Koksal, An exhaustive study on the commutativity of time-varying systems, International Journal of Control, 47, 1521-1537, 1988.
[14] M. Koksal and M. E. Koksal, Commutativity of linear time-varying differential systems with non-zero initial conditions: A review and some new extensions, Mathematical Problems in Engineering, 2011, 1-25, 2011.
[15] M. E. Koksal and M. Koksal, Commutativity of cascade connected discrete time linear time-varying systems, 2013 Automatic Control National Meeting TOK'2013, p. 1128-1131, 2013.
[16] M. Koksal and M. E. Koksal, Commutativity of cascade connected discrete-time linear time-varying systems, Transactions of the Institute of Measurement and Control, 37, 615-622, 2015.
[17] M. E. Koksal, The second order commutative pairs of a first order linear time-varying system, Applied Mathematics and Information Sciences, 9 (1), 1-6, 2015.
[18] M. E. Koksal, Decomposition of a second-order linear time-varying differential system as the series connection of two first-order commutative pairs, Open Mathematics, 14, 693-704, 2016.
[19] M. E. Koksal, Inverse commutativity conditions for second-order linear time-varying systems, Journal of Mathematics, 2017, 1-14, 2017.
[20] M. E. Koksal, Transitivity property of commutativity for linear time varying analog systems, Submitted, arXiv: 1709.04477, 1-22, 2017.
[21] M. E. Koksal, Explicit commutativity conditions for second-order linear time-varying systems with non-zero initial conditions, Submitted, arXiv: 1709.04403, 1-20, 2017.
[22] C. A. Desoer, Notes For A Second Course On Linear Systems, Van Nostrand Rheinhold, New York, 1970.
Weakly Supervised Object Detection with Pointwise Mutual Information
Rene Grzeszick, Sebastian Sudholt, Gernot A. Fink
TU Dortmund University, Germany
arXiv:1801.08747v1 [cs.CV] 26 Jan 2018
{rene.grzeszick,sebastian.sudholt,gernot.fink}@tu-dortmund.de
Abstract
In this work a novel approach for weakly supervised object detection that incorporates pointwise mutual information is presented. A fully convolutional neural network architecture is applied in which the network learns one filter
per object class. The resulting feature map indicates the
location of objects in an image, yielding an intuitive representation of a class activation map. While traditionally such
networks are learned by a softmax or binary logistic regression (sigmoid cross-entropy loss), a learning approach
based on a cosine loss is introduced. A pointwise mutual
information layer is incorporated in the network in order
to project predictions and ground truth presence labels in
a non-categorical embedding space. Thus, the cosine loss
can be employed in this non-categorical representation. Besides integrating image level annotations, it is shown how
to integrate point-wise annotations using a Spatial Pyramid
Pooling layer. The approach is evaluated on the VOC2012
dataset for classification, point localization and weakly supervised bounding box localization. It is shown that the
combination of pointwise mutual information and a cosine
loss eases the learning process and thus improves the accuracy. The integration of coarse point-wise localizations
further improves the results at minimal annotation costs.
1. Introduction
The classification and localization of objects is one of the
main tasks for the understanding of images. Much progress
has been made in this field based on the recent developments in Convolutional Neural Networks (CNNs) [12]. The
error rates in prominent tasks like ImageNet competitions
have been reduced by a large margin over the last five years
[16]. While the models become more and more powerful,
the required data can still pose a bottleneck. In many localization tasks very detailed annotations are required in order
to train a visual detector. Typically, an annotation is required that has the same level of detail as the desired output
of the detector, e.g. bounding boxes or even pixel level annotations.
As obtaining these annotations is expensive, weakly supervised learning approaches become of broader interest.
These methods require a lower level of supervision during
training. An object detector can be trained while only labeling images with respect to the presence or absence of
certain objects. Similar to supervised tasks, great progress
has been made in the field of weakly supervised learning by
incorporating CNNs.
State-of-the-art approaches in weakly supervised object
detection use region proposals in order to localize objects.
They evaluate these proposals based on a CNN that is solely
trained on image level annotations. In [2] images are evaluated based on a pre-trained classification network, i.e. a
VGG16 network that is pre-trained on ImageNet. The convolutional part of this network is evaluated on a given image, computing a feature map. Based on heuristic region
proposals, e.g. from selective search or edge boxes, a set of
candidate regions is cropped from the feature maps. These
candidates are then processed by a Spatial Pyramid Pooling
(SPP) [9] layer which is followed by fully connected layers. Only this part is trained in a weakly supervised fashion
based on image level annotations and the relation between
different candidate regions. In [10] a similar approach is
followed. Here, two different loss functions are introduced
which incorporate either an additive or subtractive center
surround criterion for each candidate region. It has been
shown that this allows for improving the results compared
to [2]. While these approaches show state-of-the-art performance, the incorporation of region proposals often comes at
a high computational cost and is based on heuristic design
decisions and expert knowledge (cf. [7]).
It has also been shown that with a deeper understanding
of CNNs and their activations, the visualization of important filters can be leveraged for localizing objects. These approaches do not include additional region proposals so that
they can be learned in an end-to-end fashion. A comparison of recent visualization approaches can be found in [17].
In [14] and [19] CNNs are trained on image level annotations for the task of object detection. The work in [14] applies max pooling for predicting the locations of objects in a
weakly supervised manner. A multi-scale training approach
is employed in order to learn the locations more accurately.
The network training is performed using a binary logistic
loss function (also known as cross-entropy loss) which in
turn allows to predict a binary vector indicating the presence of multiple objects at once. A similar approach is
followed in [19], but in contrast to [14], a global average
pooling followed by a softmax is applied. In [19], it is argued that the global average pooling captures the extent of
an object rather than just a certain part of an object. Based
on the global average of the last filter responses, weights are
computed which calculate the importance of each filter for
a certain class. This can then be used in order to highlight
the presence of certain classes in a so-called class activation map (CAM). These class specific activations show an
object’s extent and can therefore be leveraged in order to
predict objects in a weakly supervised manner.
Besides different approaches for weakly supervised
learning, there are a few methods that deal with learning
from annotations which require a minimal annotation effort. In [11] CAMs are improved by adding micro annotations. Similar regions are grouped and manually labeled in
order to remove false positive detections and obtain a more
accurate localization. For example, trains are consistently
co-occurring with tracks and thus often falsely recognized
in the localization. In [1], point-wise annotations are introduced for the task of semantic segmentation. These provide
a coarse localization of objects that also comes with a low
annotation effort. It has been shown that the additional effort for point wise annotations is as low as approx. 2.1 sec.
per image compared to image level annotations [1]. Such
annotations may also provide an interesting cue of information to boost the performance of weakly supervised object
detectors.
Another interesting aspect of weakly supervised learning with CNNs are the loss functions. For example, in [14]
a binary logistic loss function is used. Most prominently
this loss is also used in tasks where multiple entities are
predicted at once, as, for example, in attribute prediction
[3, 8]. In [8], multiple scene attributes are recognized simultaneously in a CNN. It is shown that this approach outperforms traditional per attribute recognizers which are typically SVMs on top of heuristic feature representations or
later on SVMs trained on CNN features [15, 20]. The simultaneous prediction is important as training multiple deep
networks for each attribute is not suitable. Furthermore, the
larger number of samples is an advantage for training. It can
be assumed that the network also learns which attributes are
typically appearing simultaneously within its feature representations. This idea is followed in [3], where an embedding is computed which encodes the mutual information between two attributes in the data, the pointwise mutual information (PMI) embedding. A CNN is trained based on the
feature vectors of the embedding space. The predictions are
then also made with respect to the embedding space. Given
that this is a continuous, non-categorical, space, traditional
softmax or binary logistic loss functions can no longer be
used. Thus, a cosine loss function is employed for training the CNN. However, since the predictions are made in
the embedding space, the presence of certain attributes can
no longer be predicted in a straightforward manner. They
are thus predicted using the cosine similarity between the
networks output and vectors with a one-hot encoding that
indicate the presence of a certain attribute.
In this work a fully convolutional network that incorporates PMI for weakly supervised object detection is introduced. The network learns exactly one filter per object
class and does not incorporate additional information such
as region proposals. The contributions are as follows: The
network incorporates a PMI embedding layer and is trained
through a cosine loss, yet is still able to predict the presence of objects in a single forward pass. It is furthermore
shown how to integrate either image-level or point-wise annotations using an SPP layer.
2. Method
A fully convolutional network architecture is proposed
which allows for object detection. An overview is given in
Fig. 1. The network is designed to learn exactly one filter
for each object class (see sec. 2.1). These filters are then
followed by a SPP layer which allows for training the network in a weakly supervised fashion (cf. [14, 19]). For example, image level annotations correspond to the first level
of the SPP, whereas coarse localizations can be encoded
by using multiple levels which indicate the presence of an
object in a certain region (as described in sec. 2.2). In order to account for co-occurrences of objects, an integrated
learning step is proposed. A PMI layer, more precisely the
positive pointwise mutual information (PPMI), is included
in the network which projects the object prediction scores
into a de-correlated feature space (see sec. 2.3). As the features to be learned in this feature space are in Rn and noncategorical, the cosine loss function is applied for training.
The error is backpropagated through the network, including
a backprojection of the PMI transformation so that the network still outputs scores for the presence of objects at the
last convolutional feature map.
2.1. Fully Convolutional Network
The proposed fully convolutional network architecture
is similar to many other fully convolutional networks and
based on the VGG networks [18]. Here, the fully connected
layers of the VGG16 architecture are replaced by two additional convolution layers: one with 512 filters and one with
exactly one filter per object class. Thus, instead of learning
a global mapping of filters to object classes as in the CAM
approach (cf. [14, 19]), the network learns exactly one filter
which is responsible for indicating the presence of an object class at a certain location. This behavior is rather similar to networks for semantic segmentation [13]. The per-class activations of the network are, therefore, easily interpretable, i.e., by a human user.
For a classification task this map can be processed by a pooling layer in order to indicate the presence of an object in an image. Here, it is proposed to use an SPP layer that employs average pooling. This allows to compute a presence score for a global presence but also for certain regions of an image. For training the network based on image tags or point-wise annotations, the output is projected into an embedding space using the PPMI layer. The ground truth annotations are projected to the same embedding space and then a cosine loss is computed. For evaluation, a sigmoid layer can be used in order to derive probability scores from the class activations. Weakly supervised object detection can be performed by processing the response of each pixel of the class activation map by a sigmoid function. This results in probability scores which indicate the location of objects.
Figure 1: Overview of the proposed fully convolutional network architecture. During training both ground truth and predictions may be processed on image level or may include coarse localizations which are encoded by a Spatial Pyramid Pooling (SPP) layer. Both vectors are projected into an embedding space using the positive pointwise mutual information (PPMI) transformation which is derived from the training data. A cosine loss is computed for training. During testing, the output of the last convolutional layer can be either used for a presence prediction or a weakly supervised classification based on the class activation maps.
2.2. Integrating coarse point-wise localizations
Incorporating an SPP layer in the network architecture
allows for encoding additional coarse localizations for an
object’s presence within an image. The presence of an object can be encoded for each tile in the pyramid. Such an encoding can be combined with bounding boxes or with even
simpler forms of annotations. Most interestingly, pointwise annotations allow to indicate the presence of an object
in a certain region. A human annotator is asked to click
on an object, therefore, indicating it’s presence by a single
point within the image. These point-wise annotations require a minimal manual effort [1].
As each tile indicates the presence of an object in a certain region, the SPP approach will generate different levels
of granularity. Each tile that contains a point of a certain
object, is labeled with this object class being present. The
feature vector that is used for training is the concatenation of
multiple SPP tiles. An illustration is shown in Fig. 2. Therefore, when encoding the presence with a binary vector that
shall be learned by the network, multiple co-occurrences are
created within this vector. In the given example, the presence of a dog in the upper left, as well as the upper right tile
of the image at the first level of the pyramid co-occurs with
the presence of a dog at image level. This co-occurrence
will always occur for the tiles at the image level and the
finer levels of detail.
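A minimal sketch of this encoding is given below; the tiling, the ordering of the concatenation and the helper name are assumptions made for illustration rather than the authors' implementation.

```python
# Encode point-wise annotations into the concatenated pyramid label vector described above:
# level 1 is the full image, level 2 a 2x2 subdivision; a tile is labeled positive for a class
# if it contains at least one annotated point of that class.
import numpy as np

def pyramid_labels(points, img_w, img_h, num_classes, levels=(1, 2)):
    """points: list of (x, y, class_id); returns the concatenated binary tile labels."""
    tiles = []
    for n in levels:                      # n x n subdivision per pyramid level
        level = np.zeros((n, n, num_classes), dtype=np.float32)
        for x, y, c in points:
            col = min(int(x / img_w * n), n - 1)
            row = min(int(y / img_h * n), n - 1)
            level[row, col, c] = 1.0
        tiles.append(level.reshape(-1))
    return np.concatenate(tiles)          # length num_classes * (1 + 4) for levels (1, 2)

# Two dogs annotated in the upper half of the image (class index 11 is an assumed VOC ordering).
labels = pyramid_labels([(60, 40, 11), (420, 55, 11)], img_w=500, img_h=375, num_classes=20)
print(labels.shape)  # (100,) = 20 classes * 5 tiles
```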
2.3. Encoding co-occurrences with pointwise mutual information
Due to the location encoding by the SPP layer, as well as
the natural co-occurrences of objects in images, the binary
label vectors will exhibit recurring co-occurrences. In order
to take these into account, a feature space is computed that
captures the likelihood that any two labels may co-occur in
a given image. This feature space is then used for training
the network.
Following the idea of [3], the PMI can be computed in
order to measure the mutual information between labels and
find correlations within the data. Here, all object occurrences within the ground truth annotations of the training
[Figure 2 shows an example image together with the binary presence vectors for the Upper Left, Upper Right, Lower Left and Lower Right tiles and for the Image Level.]
Figure 2: Illustration of the point-wise labels and the resulting co-occurrences in the pyramid feature vector. Here, the two dogs are annotated with point-wise annotations (in blue) in the upper left and upper right tiles (indicated in red). The resulting feature vector indicates the presence of a certain class in the respective tiles using a one; a zero indicates its absence. The final representation is the concatenation of all tiles.
data can be used for computing
PMI(i,j) = log [ p(Oi, Oj) / (p(Oi) p(Oj)) ] , (1)
where p(Oi , Oj ) represents the probability of object i and j
to occur together, p(Oi ) and p(Oj ) are the priors for object
i and j respectively. This can either be evaluated on image
level or for the complete pyramid in the SPP. In contrast to
[3], the PPMI, which is defined by
PPMI = max(0, P M I) ,
(2)
is used in the proposed approach. For object detection, the
presence of objects occurring together is most important,
ignoring negative correlations. This matrix can then be expressed by
PPMI = U · Σ · U^T , (3)
where Σ is a diagonal matrix with the eigenvalues in the diagonal. Then, E = U · √Σ is considered as a transformation matrix so that PPMI = E · E^T . In [3] this approach was not only used for computing a transformation,
but also in order to reduce dimensionality. In the presented
approach, the dimensionality is preserved, which is important in the following as it allows for reconstructing the original feature vector. Note that for unobserved co-occurrences
of two object classes P (Oi , Oj ) equals zero. Therefore, the
P M I matrix is not necessarily positive semidefinite so that
the eigenvalues in Σ could become negative. Without reducing the dimensionality, it is therefore imperative to use the PPMI, which yields a positive semidefinite matrix, instead of the PMI, as otherwise √Σ could become complex.
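The following NumPy sketch illustrates Eqs. (1)-(3) on a binary label matrix; the variable names and the eigenvalue clipping for numerical safety are our assumptions, not part of the published method.

```python
# Estimate co-occurrence probabilities, form the PPMI matrix and factor it into E with PPMI = E E^T.
import numpy as np

def ppmi_embedding(Y, eps=1e-12):
    """Y: (num_samples, num_labels) binary presence matrix (image level or SPP tiles)."""
    p = Y.mean(axis=0)                                     # priors p(O_i)
    p_joint = (Y.T @ Y) / Y.shape[0]                       # joint probabilities p(O_i, O_j)
    pmi = np.log((p_joint + eps) / (np.outer(p, p) + eps)) # Eq. (1)
    ppmi = np.maximum(pmi, 0.0)                            # Eq. (2)
    w, U = np.linalg.eigh(ppmi)                            # PPMI = U diag(w) U^T, Eq. (3)
    w = np.clip(w, 0.0, None)                              # guard against tiny negative eigenvalues
    E = U * np.sqrt(w)                                     # E = U sqrt(Sigma), so PPMI ~= E E^T
    return E

# E is kept at full dimensionality so that it can project predictions and ground truth into the
# embedding space (and back) inside the fixed PPMI layer of the network.
```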
For projecting a feature vector x into the embedding
space, the transformation E · x is applied. In order to integrate this into the CNN, an additional layer is introduced
that implements the embedding E (see Fig. 1). The PPMI
transformation is learned from the training samples before
training the CNN. When training the CNN, the embedding
matrix E is encoded in a single fully connected layer for
which the weights are not updated during the stochastic
gradient descent. This layer is used in order to project the
scores as well as the ground truth annotations indicating an
objects presence in the image or a certain region of the SPP
representation into a new embedding space. In contrast to
logistic regression, where a non-continuous space (binary
vectors) is used, the embedding space is continuous. Since
the features in this space are in Rn and non-categorical, the
cosine loss function is computed between the network's output ŷ and a ground truth feature vector y:
loss(ŷ, y) = 1 − (ŷ^T y) / (||ŷ|| · ||y||) . (4)
The cosine loss is chosen over the L2 -loss as it can be expected that distances between high-dimensional target vectors are better represented by an angle than the Euclidean
distance which typically suffers from the curse of dimensionality. Moreover, the cosine loss computes to zero for
an infinite amount of points for a given target while the L2 loss only equals to zero for a single point (the target itself).
It is reasonable to assume that this trait benefits the learning
process.
When training the network with backpropagation, the
PPMI layer computes a backprojection from the embedding
space and reconstructs the original feature vector x. The
error is, therefore, also evaluated with respect to the class
scores.
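A compact sketch of how the fixed embedding and the cosine loss of Eq. (4) interact is shown below; the interface is assumed for illustration, and in practice E is stored as a frozen fully connected layer inside the network so that gradients flow back to the class scores.

```python
# Forward pass of the fixed PPMI layer followed by the cosine loss (illustrative NumPy version).
import numpy as np

def cosine_loss(y_hat, y, eps=1e-12):
    return 1.0 - float(y_hat @ y) / (np.linalg.norm(y_hat) * np.linalg.norm(y) + eps)

def ppmi_cosine_loss(scores, target, E):
    """scores: raw per-class (or per-tile) outputs; target: binary label vector; E from Eq. (3)."""
    return cosine_loss(E @ scores, E @ target)
```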
In [3] the network is trained solely in the embedding
space and thus predicts a vector in the embedding space.
The presence of an attribute had therefore to be predicted
Annotation Detail | Loss Function                         | Loss Layer(s) | mAP image level | mAP 2×2
Global            | Binary logistic                       | –             | 76.9%           | 26.0%
Global            | PPMI + Cosine                         | –             | 80.3%           | 33.1%
Global            | Oquab et al. [14] (full images)       |               | 76.0%           | –
Global (*)        | Oquab et al. [14] (weakly supervised) |               | 81.8%           | –
2×2               | Binary logistic                       | Finest Level  | 75.5%           | 34.2%
2×2               | Binary logistic                       | Pyramid       | 77.1%           | 34.4%
2×2               | PPMI + Cosine                         | Pyramid       | 82.1%           | 35.2%
(*) An additional multi-scale analysis is carried out.
Table 1: Mean average precision for the classification in the VOC2012 dataset.
based on the cosine distance between a binary vector indicating the presence of a single attribute and the PMI output.
In the proposed approach, the class scores are directly obtained by a forward pass through the network.
3. Evaluation
The proposed approach is evaluated on the VOC2012
dataset [5]. Additional coarse localizations are provided by
the point-wise annotations published in [1]. These annotations were created by crowd workers, which were asked to
indicate the location of an object by clicking on it.
The approach is evaluated for three tasks. First, the classification task indicating the presence of an object in an
image. Second, the point-wise localization, following the
setup of [14], where the highest activation in a feature map
is taken in order to predict a single point indicating the location of an object. Third, the weakly supervised localization
is evaluated based on the correct localization (CorLoc) accuracy [4, 10]. While, for the first two tasks, the networks
are trained on the train set and evaluated on the validation
set of the VOC2012 benchmark (the test set is not available
for the point-wise annotations), the CorLoc metric is typically evaluated on a training set and therefore evaluated on
the complete trainval split of the dataset (cf. [10]).
The training images are rescaled so that the shortest side
is 512px in length. All networks are trained with a batch
size of 256 for 2, 000 iterations, which equals 512, 000 images or 90/45 epochs on the train/trainval split. The first
600 iterations are trained with a learning rate of 0.0001
which is then increased to 0.001. Random data augmentations are applied, which include translation (up to 5%), rotation (up to 5 deg), Gaussian noise (σ = 0.02) and vertical
mirroring. The networks that are trained with global, image
level annotations are initialized using the ImageNet weights
for the VGG16 networks. The networks which are trained
with coarse point-wise localization are initialized using the
weights from the image level training.
3.1. Classification
Table 1 reports the classification accuracy on the
VOC2012 dataset as the mean average precision (mAP).
The accuracy is evaluated with respect to the presence of
an object in an image or in any of the tiles of a 2 × 2 subdivision of the image. In the latter case, each tile is evaluated
independently. The prediction scores for each tile are compared to the point-wise annotations and the average over all
tiles is reported.
A CNN using binary logistic loss is compared to one using the proposed combination of PPMI and a cosine loss.
The results show that the performance on image level and
also when using the highly correlated point-wise annotations can be improved by the PPMI embedding. It can also
be seen that incorporating the coarse localizations which
are derived from the point-wise annotations helps improving the image level results as well as the predictions for the
more detailed 2 × 2 regions.
When comparing the image level results to the ones published in [14], similar results are achieved. While outperforming the full image setup of [14] the results are slightly
below the weakly supervised setup which applied an additional multi-scale analysis during training. Note that our
training configuration is more similar to the full image setup
as a fixed image size is used.
3.2. Localization
For evaluating the localization accuracy the protocol designed in [14] is followed. The class-wise activations after
the sigmoid computation are rescaled to the original image
size. Using the maximum activation within a feature map,
one detection point can be reported for each object class.
A point prediction is considered as correct if it is within a
ground truth bounding box (± 18px) of the same class, as
in [14]. Each point is then associated with its respective
probability score and the mAP is computed.
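This evaluation rule can be sketched as follows; the helper name and data layout are assumptions for illustration.

```python
# One predicted point per class activation map; a hit requires the point to lie inside any
# ground-truth box of that class enlarged by 18 pixels.
import numpy as np

def point_hit(cam, gt_boxes, tol=18):
    """cam: 2-D activation map at image resolution; gt_boxes: list of (x1, y1, x2, y2)."""
    y, x = np.unravel_index(np.argmax(cam), cam.shape)
    return any(x1 - tol <= x <= x2 + tol and y1 - tol <= y <= y2 + tol
               for x1, y1, x2, y2 in gt_boxes)
```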
The results are reported in Tab. 2. The PPMI embedding
improves the results significantly compared to the binary
Annotation Detail | Loss Function                         | Loss Layer(s) | mAP localization
Global            | Binary logistic                       | –             | 69.8%
Global            | PPMI + Cosine                         | –             | 76.5%
Global            | Oquab et al. [14] (weakly supervised) |               | 74.5%
2×2               | PPMI + Cosine                         | Pyramid       | 78.1%
BBoxes            | R-CNN [6]; results reported in [14]   |               | 74.8%
Table 2: Results of the point-wise localization, following the setup in [14].
logistic loss. Here, it can be seen that the proposed network provides a better localization result than the approach
in [14]. Similar to the classification results, the additional
coarse localizations allow for further improving the results
at the cost of a minimal annotation effort.
3.3. CorLoc
Last, the correct localization (CorLoc) accuracy has been
evaluated. Here, the results are provided for the VOC2012
trainval set. Given an image and a target class, the CorLoc
describes the percentage of images where a bounding box
has been predicted that correctly localizes an object of the
target class. An intersection over union (IoU) of 50% is
required for a prediction to be considered as correct.
For predicting a localization, the approach of [19] is followed. Note that the network is able to predict multiple
localizations for different object classes at once. Given a
target class, all pixels with an activation of more than 10%
of the maximum activation for the target class are chosen
as foreground pixels. The bounding box around the largest
connected region is chosen as the object prediction.
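A sketch of this box-extraction step is given below; it is an assumed helper built on scipy.ndimage, not the authors' code.

```python
# Threshold the class activation map at 10% of its maximum and return the bounding box of the
# largest connected foreground region.
import numpy as np
from scipy import ndimage

def cam_to_box(cam, rel_thresh=0.1):
    """cam: 2-D class activation map, already resized to the image resolution."""
    mask = cam > rel_thresh * cam.max()
    labeled, num = ndimage.label(mask)
    if num == 0:
        return None
    sizes = ndimage.sum(mask, labeled, index=range(1, num + 1))
    largest = int(np.argmax(sizes)) + 1
    ys, xs = np.where(labeled == largest)
    return xs.min(), ys.min(), xs.max(), ys.max()   # (x1, y1, x2, y2)
```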
The results are shown in Tab. 3. The combination of
PPMI and a cosine loss improves the localization by a margin compared to a binary logistic loss. Again, the coarse
localizations are able to produce more precise results. Note
that recently an approach has been proposed that achieves a
CorLoc of 54.8% by incorporating selective search data and
explicitly optimizing for bounding box predictions [10]. In
contrast, the proposed approach is trained in an end-to-end
fashion without additional selective search data.
3.4. Qualitative results
Exemplary results are shown in Fig. 3. The examples
are taken from the CNN that has been trained for the CorLoc on the VOC2012 trainval set. The network has been
trained using the proposed combination of PPMI and a cosine loss. The annotations are coarse localizations derived
from point-wise annotations for 2 × 2 tiles. The left column shows the input image with the bounding boxes of the
target class shown in green. The middle column shows the
predicted object region after thresholding the class activation map. The right side shows the class activation map as
derived from the network. The class activation map is the
output of a single feature map where each pixel’s intensity
has been processed by a sigmoid function.
It can be observed that the desired objects are nicely localized in the feature maps. Even in the error case, the activations are reasonable as multiple screens are placed close
to each other, making it difficult to distinguish them in a
weakly supervised fashion.
4. Conclusion
In this work a novel approach for weakly supervised
object detection with CNNs has been proposed. The network incorporates the positive pointwise mutual information (PPMI) and a cosine loss function for learning. It is
shown that this approach eases the learning process, improving the results compared to a binary logistic loss based
on categorical feature vectors.
A fully convolutional network architecture has been used
for the weakly supervised detection. A single feature map
is learned for each class, creating an intuitive representation
for the presence of an object class in an image. Furthermore, an SPP layer is incorporated in the network instead
of a global pooling operation. This allows for incorporating
coarse localizations, i.e., in the form of point-wise annotations. These annotations require a minimal manual effort,
but provide additional information that can be leveraged for
weakly supervised localization.
The evaluation on the VOC2012 dataset shows that the
combination of PPMI and a cosine loss improves the results
for classification, point localization as well as the CorLoc.
Furthermore, the additional point-wise annotations helps in
steering the learning process and further improve the results
for all three tasks at a minimal annotation cost.
5. Acknowledgment
This work has been supported by **** an anonymous
institution *** .
Annotation Detail | Loss Function          | Loss Layer(s) | Initialization | CorLoc
Global            | Binary logistic        | –             | ImgNet         | 26.2%
Global            | PPMI + Cosine          | –             | ImgNet         | 39.2%
Global (*)        | Kolesnikov et al. [11] |               |                | 54.8%
2×2               | PPMI + Cosine          | Pyramid       | SPL1           | 43.4%
(*) Requires additional selective search data.
Table 3: CorLoc on the VOC2012 trainval set with an IoU of 50%.
Figure 3: Qualitative results for the best performing network using 2×2 tiles for coarse localizations, derived from point-wise
annotations. (left) Original images with annotated bounding boxes in green, predicted bounding boxes in blue and red for
correct and incorrect predictions respectively. (middle) thresholded activations as used for bounding box computation. (right)
class activation map as computed by the CNN.
References
[1] A. Bearman, O. Russakovsky, V. Ferrari, and L. Fei-Fei.
Whats the point: Semantic segmentation with point supervision. In European Conference on Computer Vision (ECCV),
pages 549–565. Springer, 2016.
[2] H. Bilen and A. Vedaldi. Weakly supervised deep detection
networks. In Proc. IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), pages 2846–2854, 2016.
[3] F. Chollet.
Information-theoretical label embeddings
for large-scale image classification.
arXiv preprint
arXiv:1607.05691, 2016.
[4] T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised
localization and learning with generic knowledge. International Journal of Computer Vision (IJCV), 100(3):275–293,
2012.
[5] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn,
and A. Zisserman. The PASCAL Visual Object Classes
Challenge 2011 (VOC2011) Results. http://www.pascalnetwork.org/challenges/VOC/voc2011/workshop/index.html,
2011.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic
segmentation. In Proceedings of the IEEE conference on
computer vision and pattern recognition, pages 580–587,
2014.
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Regionbased convolutional networks for accurate object detection
and segmentation. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 38(1):142–158, 2016.
[8] R. Grzeszick, S. Sudholt, and G. A. Fink. Optimistic and
pessimistic neural networks for scene and object recognition.
2016.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
IEEE transactions on pattern analysis and machine intelligence, 37(9):1904–1916, 2015.
[10] V. Kantorov, M. Oquab, C. M., and I. Laptev. Contextlocnet:
Context-aware deep network models for weakly supervised
localization. In Proc. European Conference on Computer
Vision (ECCV), 2016.
[11] A. Kolesnikov and C. H. Lampert. Improving weaklysupervised object localization by micro-annotation. Proc.
British Machine Vision Conference (BMVC), 2016.
[12] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature,
521(7553):436–444, 2015.
[13] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional
networks for semantic segmentation. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[14] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free?-weakly-supervised learning with convolutional neural networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 685–
694, 2015.
[15] G. Patterson, C. Xu, H. Su, and J. Hays. The sun attribute
database: Beyond categories for deeper scene understanding.
International Journal of Computer Vision, 108(1-2):59–81, 2014.
[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[17] W. Samek, A. Binder, G. Montavon, S. Lapuschkin, and K.-R. Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 2016.
[18] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR, abs/1409.1, 2014.
[19] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning Deep Features for Discriminative Localization. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
[20] B. Zhou, A. Khosla, A. Lapedriza, A. Torralba, and A. Oliva. Places: An image database for deep scene understanding. CoRR, abs/1610.02055, 2016.
Prediction risk for the horseshoe regression
Anindya Bhadra 1 Jyotishka Datta 2 Yunfan Li 1 Nicholas G. Polson 3 and Brandon Willard 3
Abstract
arXiv:1605.04796v2 [math.ST] 13 Jun 2017
Predictive performance in shrinkage regression suffers from two major difficulties: (i) the amount
of relative shrinkage is monotone in the singular values of the design matrix and (ii) the amount
of shrinkage does not depend on the response variables. Both of these factors can translate to
a poor prediction performance, the risk of which can be estimated unbiasedly using Stein’s
approach. We show that using a component-specific local shrinkage term that can be learned
from the data under a suitable heavy-tailed prior, in combination with a global term providing shrinkage towards zero, can alleviate both these difficulties and consequently, can result
in an improved risk for prediction. Demonstrations of improved prediction performance over
competing approaches in a simulation study and in a pharmacogenomics data set confirm our
theoretical findings.
Keywords: global-local priors; principal components; shrinkage regression; Stein’s unbiased
risk estimate.
1
Introduction
Prediction using shrinkage regression techniques such as ridge regression (Hoerl and Kennard,
1970) and principal components regression or PCR (Jolliffe, 1982) remain popular in high-dimensional
problems. Shrinkage methods enjoy a number of advantages over selection-based methods such
as the lasso (Tibshirani, 1996) and comfortably outperform them in predictive performance in certain situations. Prominent among these is when the predictors are correlated and the resulting
lasso estimate is unstable, but ridge or PCR estimates are not (see, e.g, the discussion in Chapter
3 of Hastie et al., 2009). Polson and Scott (2012a) showed, following a representation originally
devised by Frank and Friedman (1993), that many commonly used high-dimensional shrinkage
regression estimates, such as the estimates of ridge regression, regression with g-prior (Zellner,
1986) and PCR, can be viewed as posterior means under a unified framework of “global” shrinkage prior on the regression coefficients that are suitably orthogonalized. They went on to demonstrate these global shrinkage regression models suffer from two major difficulties: (i) the amount
of relative shrinkage is monotone in the singular values of the design matrix and (ii) the amount
of shrinkage does not depend on the response variables. Both of these factors can contribute to
poor out of sample prediction performance, which they demonstrated numerically.
Polson and Scott (2012a) further provided numerical evidence that both of these difficulties
mentioned above can be resolved by allowing a “local,” component-specific shrinkage term that
1 Address: Department of Statistics, Purdue University, 250 N. University St., West Lafayette, IN 47907, USA.
2 Address: Department of Mathematical Sciences, University of Arkansas, Fayetteville, AR 72701, USA.
3 Address: Booth School of Business, The University of Chicago, 5807 S. Woodlawn Ave., Chicago, IL 60637, USA.
can be learned from the data, in conjunction with a global shrinkage parameter as used in ridge
or PCR, giving rise to the so-called “global-local” shrinkage regression models. Specifically, Polson and Scott (2012a) demonstrated by simulations that using the horseshoe prior of Carvalho
et al. (2010) on the regression coefficients performed well over a variety of competitors in terms
of predictive performance, including the lasso, ridge, PCR and sparse partial least squares (Chun
and Keles, 2010). However, a theoretical investigation of the conditions required for the horseshoe regression model to outperform a global shrinkage regression model such as ridge or PCR
in terms of predictive performance has been lacking. The goal of the current work is to bridge
this methodological and theoretical gap by developing formal tools for comparing the predictive
performances of shrinkage regression methods.
Developing a formal measure to compare predictive performance of competing regression
methods is important in both frequentist and Bayesian settings. This is because the frequentist
tuning parameter or the Bayesian hyper-parameters can then be chosen to minimize the estimated
prediction risk, if prediction of future observations is the main modeling goal. A measure of
quadratic risk for prediction in regression models can be obtained either through model-based
covariance penalties or through nonparametric approaches. Examples of covariance penalties
include Mallow’s C p (Mallows, 1973), Akaike’s information criterion (Akaike, 1974), risk inflation criterion (Foster and George, 1994) and Stein’s unbiased risk estimate or SURE (Stein, 1981).
Nonparametric penalties include the generalized cross validation of Craven and Wahba (1978),
which has the advantage of being model free but usually produces a prediction error estimate
with high variance (Efron, 1983). The relationship between the covariance penalties and nonparametric approaches were further explored by Efron (2004), who showed the covariance penalties
to be a Rao-Blackwellized version of the nonparametric penalties. Thus, Efron (2004) concluded
that model-based penalties such as SURE or Mallow’s C p (the two coincide for models where the
fit is linear in the response variable) offer substantially lower variance in estimating the prediction error, assuming of course the model is true. From a computational perspective, calculating
SURE, when it is explicitly available, is substantially less burdensome than performing cross validation, which usually requires several Monte Carlo replications. Furthermore, SURE, which is a
measure of quadratic risk in prediction, also has connections with the Kullback–Leiber risk for the
predictive density (George et al., 2006).
Given these advantages enjoyed by SURE, we devise a general, explicit and numerically stable technique for computing SURE for regression models that can be employed to compare the
performances of global as well as horseshoe regressions. The key technique to our innovation
is an orthogonalized representation first employed by Frank and Friedman (1993), which results
in particularly simple and numerically stable formulas for SURE. Using the developed tools for
SURE, we demonstrate that the suitable conditions for success of the horseshoe regression model
over global regression models in prediction arise when a certain sparse-robust structure is present
in the orthogonalized regression coefficients. Specifically, our major finding is that when a certain
principal component corresponding to a low singular value of the design matrix is a strong predictor of the outcomes, global shrinkage methods necessarily shrink these components too much,
whereas the horseshoe does not. This results in a substantially increased risk for global regression
over the horseshoe regression, explaining why global-local shrinkage such as the horseshoe can
overcome the two major difficulties encountered by global shrinkage regression methods.
The rest of the article is organized as follows. In Section 2, we demonstrate how several standard shrinkage regression estimates can be reinterpreted as posterior means in an orthogonalized
representation of the design matrix. Using this representation, we derive explicit expressions for
SURE for global shrinkage and horseshoe regressions in Sections 3 and 4 respectively. Section 5
compares the actual prediction risk (as opposed to data-dependent estimates of the risk, such as
SURE) of global and horseshoe regressions and explicitly identifies some situations where the
horseshoe regression can outperform global shrinkage methods. A simulation study is presented
in Section 6, and the prediction performance of several competing approaches is assessed in a pharmacogenomics data set in Section 7. We conclude by pointing out some possible extensions of the current
work in Section 8.
2
Shrinkage regression estimates as posterior means
Consider the high-dimensional regression model
y = Xβ + e,
(1)
where y ∈ Rn , X ∈ Rn× p , β ∈ R p and e ∼ N (0, σ2 In ) with p > n. Let X = UDW T be the singular
value decomposition of the design matrix. Let D = diag(d1 , . . . , dn ) with d1 ≥ . . . ≥ dn > 0 and
Rank( D ) = min(n, p) = n. Define Z = UD and α = W T β. Then the regression problem can be
reformulated as:
y = Zα + e.
(2)
The ordinary least squares (OLS) estimate of α is α̂ = (Z^T Z)^{-1} Z^T y = D^{-1} U^T y. Following the
original results by Frank and Friedman (1993), several authors have used the well-known orthogonalization technique (Clyde et al., 1996; Denison and George, 2012; Polson and Scott, 2012a) to
demonstrate that the estimates of many shrinkage regression methods can be expressed in terms
of the posterior mean of the “orthogonalized” regression coefficients α under the following hierarchical model:
(α̂_i | α_i, σ^2) ∼ N(α_i, σ^2 d_i^{-2}), independently for each i, (3)
(α_i | σ^2, τ^2, λ_i^2) ∼ N(0, σ^2 τ^2 λ_i^2), independently for each i, (4)
with σ2 , τ 2 > 0. The global term τ controls the amount of shrinkage and the fixed λ2i terms
depend on the method at hand. Given λi and τ, the estimate for β under the global shrinkage
prior, denoted by β̃, can be expressed in terms of the posterior mean estimate for α as follows:
α̃_i = [τ^2 λ_i^2 d_i^2 / (1 + τ^2 λ_i^2 d_i^2)] α̂_i ,    β̃ = ∑_{i=1}^n α̃_i w_i , (5)
where α̃i = E(αi | τ, λ2i , X, y); wi is a p × 1 vector and is the ith column of the p × n matrix W and
the term τ 2 λ2i d2i /(1 + τ 2 λ2i d2i ) ∈ (0, 1) is the shrinkage factor. The expression from Equation (5)
makes it clear that it is the orthogonalized OLS estimates α̂i s that are shrunk. We shall show that
this orthogonalized representation is also particularly suitable for calculating the prediction risk
estimate. The reason is tied to the independence assumption that is now feasible in Equations (3)
and (4). To give a few concrete examples, we note below that several popular shrinkage regression
models fall under the framework of Equations (3–4):
1. For ridge regression, λ2i = 1, ∀i, and we have α̃i = τ 2 d2i α̂i /(1 + τ 2 d2i ).
2. For K component PCR, λ2i is infinite for the first K components and then 0. Thus, α̃i = α̂i for
i = 1, . . . , K and α̃i = 0 for i = K + 1, . . . , n.
3. For regression with g-prior, λ2i = di−2 and we have α̃i = τ 2 α̂i /(1 + τ 2 ) for i = 1, . . . , n.
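The three shrinkage rules above can be reproduced with a short NumPy sketch; this is illustrative code with assumed variable names, not taken from the paper. It forms the SVD, shrinks the orthogonalized OLS coefficients as in Equation (5), and maps back to β̃.

```python
# Orthogonalized global shrinkage fit of Equation (5).
import numpy as np

def global_shrinkage_fit(X, y, tau, lam2):
    """lam2: per-component lambda_i^2 (ones for ridge; d_i^{-2} for the g-prior)."""
    U, d, Wt = np.linalg.svd(X, full_matrices=False)     # X = U diag(d) W^T
    alpha_hat = (U.T @ y) / d                            # orthogonalized OLS estimate
    shrink = tau**2 * lam2 * d**2 / (1.0 + tau**2 * lam2 * d**2)
    alpha_tilde = shrink * alpha_hat
    return Wt.T @ alpha_tilde                            # beta_tilde = sum_i alpha_tilde_i w_i

# K-component PCR corresponds to replacing `shrink` by 1 for i <= K and 0 otherwise.
```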
This shows that the amount of relative shrinkage α̃_i/α̂_i is constant in d_i for PCR and the g-prior and is monotone in d_i for ridge regression. In none of these cases does it depend on the OLS estimate α̂_i (and consequently on y). In the next section we quantify the effect of this behavior on the prediction risk estimate.
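For illustration, these shrinkage rules can be computed directly in the orthogonalized coordinates. The following Python sketch is ours and not part of the original development; the function name, the defaults, and the assumption that σ^2 and τ^2 are given are illustrative only.

import numpy as np

def orthogonalized_shrinkage(X, y, tau2, method="ridge", K=None):
    # Posterior-mean shrinkage estimates of Equation (5), written in the
    # orthogonalized form. A minimal sketch under the stated assumptions.
    U, d, Wt = np.linalg.svd(X, full_matrices=False)   # X = U diag(d) W^T
    alpha_hat = (U.T @ y) / d                          # OLS estimate of alpha
    if method == "ridge":          # lambda_i^2 = 1 for all i
        shrink = tau2 * d**2 / (1.0 + tau2 * d**2)
    elif method == "pcr":          # keep the first K components, drop the rest
        shrink = np.array([1.0 if i < K else 0.0 for i in range(len(d))])
    elif method == "gprior":       # lambda_i^2 = d_i^{-2}
        shrink = np.full(len(d), tau2 / (1.0 + tau2))
    alpha_tilde = shrink * alpha_hat
    return Wt.T @ alpha_tilde                          # back to the beta scale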
3 Stein's unbiased risk estimate for global shrinkage regression
Define the fit ỹ = X β̃ = Z α̃, where α̃ is the posterior mean of α. As noted by Stein (1981), the fitted risk is an underestimate of the prediction risk, and SURE for prediction is defined as

SURE = ||y − ỹ||^2 + 2σ^2 Σ_{i=1}^n ∂ỹ_i/∂y_i,

where the Σ_{i=1}^n (∂ỹ_i/∂y_i) term is also known as the "degrees of freedom" (Efron, 2004). By Tweedie's formula (Masreliez, 1975; Pericchi and Smith, 1992), which relates the posterior mean to the marginal, we have for the Gaussian model of Equations (3–4) that α̃ = α̂ + σ^2 D^{−2} ∇_α̂ log m(α̂), where m(α̂) is the marginal for α̂. Noting y = Z α̂ yields ỹ = y + σ^2 U D^{−1} ∇_α̂ log m(α̂). Using the independence of the α_i, the formula for SURE becomes
SURE = σ^4 Σ_{i=1}^n d_i^{−2} { ∂ log m(α̂_i)/∂α̂_i }^2 + 2σ^2 Σ_{i=1}^n { 1 + σ^2 d_i^{−2} ∂^2 log m(α̂_i)/∂α̂_i^2 }.   (6)
Thus, the prediction risk estimate for shrinkage regression can be quantified in terms of the first two derivatives of the log marginal for α̂. Integrating out α_i from Equations (3–4) yields, in all these cases,

(α̂_i | σ^2, τ^2, λ_i^2) ∼ N(0, σ^2 (d_i^{−2} + τ^2 λ_i^2)),  independently.
The marginal of α̂ is given by

m(α̂) ∝ Π_{i=1}^n exp{ − α̂_i^2 / (2σ^2 (d_i^{−2} + τ^2 λ_i^2)) },
which yields

∂ log m(α̂_i)/∂α̂_i = − α̂_i / {σ^2 (d_i^{−2} + τ^2 λ_i^2)};   ∂^2 log m(α̂_i)/∂α̂_i^2 = − 1 / {σ^2 (d_i^{−2} + τ^2 λ_i^2)}.   (7)
Therefore, Equation (6) reduces to the following expression for SURE for global shrinkage regressions: SURE = Σ_{i=1}^n SURE_i, where

SURE_i = α̂_i^2 d_i^2 / (1 + τ^2 λ_i^2 d_i^2)^2 + 2σ^2 τ^2 λ_i^2 d_i^2 / (1 + τ^2 λ_i^2 d_i^2).   (8)
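As a brief illustration, Equation (8) can be evaluated and minimized over τ with a one-dimensional search. The following Python sketch is ours; the function names are illustrative, σ^2 is assumed known, and the SciPy optimizer is only one possible choice.

import numpy as np
from scipy.optimize import minimize_scalar

def sure_global(tau2, alpha_hat, d, lam2, sigma2):
    # SURE of Equation (8); alpha_hat, d, lam2 are length-n arrays.
    k = tau2 * lam2 * d**2
    return np.sum(alpha_hat**2 * d**2 / (1.0 + k)**2 + 2.0 * sigma2 * k / (1.0 + k))

# One-dimensional search over tau (here over log tau^2), e.g. for ridge (lam2 = 1):
# res = minimize_scalar(lambda t: sure_global(np.exp(t), alpha_hat, d, np.ones_like(d), sigma2))
# tau2_opt = np.exp(res.x)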
From a computational perspective, the expression in Equation (8) is attractive, as it avoids costly
matrix inversions. For a given σ one can choose τ to minimize the prediction risk, which amounts
to a one-dimensional optimization. Note that in our notation, d1 ≥ d2 . . . ≥ dn > 0. Clearly, this
is the SURE when λi s are fixed and finite (e.g., ridge regression). For K component PCR, only
the first K terms appear in the sum. The di terms are features of the design matrix X and one
may try to control the prediction risk by varying τ. When τ → ∞, SURE → 2nσ^2, the risk of prediction with ordinary least squares (unbiased). When τ → 0, we get the mean-only (zero-variance) model, and SURE → Σ_{i=1}^n α̂_i^2 d_i^2. Regression models with τ ∈ (0, ∞) represent a bias-variance tradeoff. The two major difficulties of global shrinkage regression are as follows.
1. Note from the first term of Equation (8) that SURE is increased by those components for which α̂_i^2 d_i^2 is large. Choosing a large τ alleviates this problem, but at the expense of a SURE_i of approximately 2σ^2 even for components for which α̂_i^2 d_i^2 is small (due to the second term in Equation (8)). Thus, it might be beneficial to differentially minimize the effect of the components for which α̂_i^2 d_i^2 is large, while ensuring those for which α̂_i^2 d_i^2 is small make a contribution of less than 2σ^2 to SURE. Yet, regression models with fixed λ_i, such as ridge, PCR, and regression with g-priors, provide no mechanism for achieving this, since the relative shrinkage, defined as the ratio α̃_i/α̂_i, equals τ^2 λ_i^2 d_i^2/(1 + τ^2 λ_i^2 d_i^2), and is driven solely by the single quantity τ.
2. Equation (5) shows that the relative shrinkage for α̂i is monotone in di ; that is, those α̂i corresponding to a smaller di are necessarily shrunk more (in a relative amount). This is only
sensible in the case where one has reasons to believe the low variance eigen-directions (i.e.,
principal components) of the design matrix are not important predictors of the response
variables, an assumption that can be violated in real data (Polson and Scott, 2012a).
In light of these two problems, we proceed to demonstrate that putting a heavy-tailed prior on the λ_i, in combination with a suitably small value of τ to enable global-local shrinkage, can resolve both these issues. The intuition is that a small value of the global parameter τ enables shrinkage towards zero for all the components, while the heavy tails of the local, component-specific λ_i terms ensure that components with large values of α̂_i d_i are not shrunk too much, and allow the λ_i terms to be learned from the data. Simultaneously ensuring both of these factors helps in controlling the prediction risk for both the noise and the signal terms.
4 Stein's unbiased risk estimate for the horseshoe regression
The global-local horseshoe shrinkage regression of Polson and Scott (2012a) extends the global shrinkage regression models of the previous section by putting a local (component-specific), heavy-tailed half-Cauchy prior on the λ_i terms, which allows these terms to be learned from the data, in addition to a global τ. The model equations become:
(α̂_i | α_i, σ^2) ∼ N(α_i, σ^2 d_i^{−2}),  independently,   (9)
(α_i | σ^2, τ^2, λ_i^2) ∼ N(0, σ^2 τ^2 λ_i^2),  independently,   (10)
λ_i ∼ C^+(0, 1),  independently,   (11)
where σ^2, τ^2 > 0 and C^+(0, 1) denotes a standard half-Cauchy random variable with density p(λ_i) = (2/π)(1 + λ_i^2)^{−1}. The marginal prior on the α_i obtained as a normal scale mixture by integrating out the λ_i from Equations (10) and (11) is called the horseshoe prior (Carvalho et al., 2010). Improved mean square error over competing approaches in regression has been empirically observed by Polson and Scott (2012a) with the horseshoe prior on the α_i. The intuitive explanation for this improved performance is that the heavy-tailed prior on λ_i leaves the large α_i terms of Equation (10) un-shrunk in the posterior, whereas the global τ term provides shrinkage towards zero for all components (see, for example, the discussion by Bhadra et al., 2016b; Carvalho et al., 2010; Polson and Scott, 2012b, and the references therein). However, no explicit formulation of the prediction risk under horseshoe shrinkage is available so far, and we demonstrate below that heavy-tailed priors on the λ_i terms, in addition to a global τ, can be beneficial in controlling the overall prediction risk.
Under the model of Equations (9–11), after integrating out α_i from the first two equations, we have

(α̂_i | σ^2, τ^2, λ_i^2) ∼ N(0, σ^2 (d_i^{−2} + τ^2 λ_i^2)),  independently.
We have p(λ_i) ∝ 1/(1 + λ_i^2). Thus, the marginal of α̂, denoted by m(α̂), is given up to a constant of proportionality by

m(α̂) = Π_{i=1}^n ∫_0^∞ N(α̂_i | 0, σ^2 (d_i^{−2} + τ^2 λ_i^2)) p(λ_i) dλ_i
      ∝ (2πσ^2)^{−n/2} Π_{i=1}^n ∫_0^∞ exp{ − α̂_i^2 d_i^2 / (2σ^2 (1 + τ^2 d_i^2 λ_i^2)) } · { d_i / (1 + τ^2 d_i^2 λ_i^2)^{1/2} } · { 1 / (1 + λ_i^2) } dλ_i.   (12)
This integral involves the normalizing constant of a compound confluent hypergeometric distribution that can be computed using a result of Gordy (1998).
PROPOSITION 4.1. (Gordy, 1998). The compound confluent hypergeometric (CCH) density is given by

CCH(x; p, q, r, s, ν, θ) = x^{p−1} (1 − νx)^{q−1} {θ + (1 − θ)νx}^{−r} exp(−sx) / { B(p, q) H(p, q, r, s, ν, θ) },

for 0 < x < 1/ν, where the parameters satisfy p > 0, q > 0, r ∈ R, s ∈ R, 0 ≤ ν ≤ 1 and θ > 0. Here B(p, q) is the beta function and the function H(·) is given by

H(p, q, r, s, ν, θ) = ν^{−p} exp(−s/ν) Φ1(q, r, p + q, s/ν, 1 − θ),
where Φ1 is the confluent hypergeometric function of two variables, given by

Φ1(α, β, γ, x1, x2) = Σ_{m=0}^{∞} Σ_{n=0}^{∞} { (α)_m (β)_n / ((γ)_{m+n} m! n!) } x1^m x2^n,   (13)

where (a)_k denotes the rising factorial with (a)_0 = 1, (a)_1 = a and (a)_k = (a + k − 1)(a)_{k−1}.
We present our first result in the following theorem and show that the marginal m(α̂) and all
its derivatives lend themselves to a series representation in terms of the first and second moments
of a random variable that follows a CCH distribution. Consequently, we quantify SURE for the
horseshoe regression.
THEOREM 4.1. Denote m′(α̂_i) = (∂/∂α̂_i) m(α̂_i) and m″(α̂_i) = (∂^2/∂α̂_i^2) m(α̂_i). Then the following holds.

A. SURE for the horseshoe shrinkage regression model defined by Equations (9–11) is given by SURE = Σ_{i=1}^n SURE_i, where the component-wise contribution SURE_i is given by

SURE_i = 2σ^2 − σ^4 d_i^{−2} { m′(α̂_i)/m(α̂_i) }^2 + 2σ^4 d_i^{−2} m″(α̂_i)/m(α̂_i).   (14)

B. Under independent standard half-Cauchy priors on the λ_i, for the second and third terms in Equation (14) we have

m′(α̂_i)/m(α̂_i) = − (α̂_i d_i^2/σ^2) E(Z_i),   and   m″(α̂_i)/m(α̂_i) = − (d_i^2/σ^2) E(Z_i) + (α̂_i^2 d_i^4/σ^4) E(Z_i^2),

where (Z_i | α̂_i, σ, τ) follows a CCH(p = 1, q = 1/2, r = 1, s = α̂_i^2 d_i^2/(2σ^2), ν = 1, θ = 1/(τ^2 d_i^2)) distribution.
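As an illustration only, SURE from Theorem 4.1 can be evaluated numerically. In the Python sketch below the CCH moments are obtained by direct numerical integration of the (unnormalized) density rather than through the Φ1 series of Gordy (1998); the function names and this computational shortcut are our assumptions.

import numpy as np
from scipy.integrate import quad

def cch_moment(k, s, theta):
    # kth moment of Z ~ CCH(p=1, q=1/2, r=1, s, nu=1, theta), by quadrature on (0, 1).
    dens = lambda z: (1 - z)**(-0.5) * (theta + (1 - theta) * z)**(-1.0) * np.exp(-s * z)
    num, _ = quad(lambda z: z**k * dens(z), 0.0, 1.0)
    den, _ = quad(dens, 0.0, 1.0)
    return num / den

def sure_horseshoe(alpha_hat, d, tau2, sigma2):
    # SURE of Theorem 4.1 for the horseshoe regression (illustrative sketch).
    total = 0.0
    for a, di in zip(alpha_hat, d):
        s = a**2 * di**2 / (2.0 * sigma2)
        theta = 1.0 / (tau2 * di**2)
        ez, ez2 = cch_moment(1, s, theta), cch_moment(2, s, theta)
        total += 2.0 * sigma2 * (1.0 - ez + 2.0 * s * ez2 - s * ez**2)
    return total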
A proof is given in Appendix A.1. Theorem 4.1 provides a computationally tractable mechanism for calculating SURE for the horseshoe shrinkage regression in terms of the moments of
CCH random variables. Gordy (1998) provides a simple formula for all integer moments of CCH
random variables. Specifically, he shows if X ∼ CCH( x; p, q, r, s, ν, θ ) then
E(X^k) = (p)_k H(p + k, q, r, s, ν, θ) / { (p + q)_k H(p, q, r, s, ν, θ) },   (15)
for integers k ≥ 1. Moreover, as demonstrated by Gordy (1998), these moments can be numerically
evaluated quite easily over a range of parameter values and calculations remain very stable. A
consequence of this explicit formula for SURE is that the global shrinkage parameter τ can now
be chosen to minimize SURE by performing a one-dimensional numerical optimization. Another
consequence is that an application of Theorem 3 of Carvalho et al. (2010) shows

lim_{|α̂_i| → ∞} ∂ log m(α̂_i)/∂α̂_i = lim_{|α̂_i| → ∞} m′(α̂_i)/m(α̂_i) = 0,
with high probability, where m(α̂i ) is the marginal under the horseshoe prior. Recall that the posterior mean α̃i and the OLS estimate α̂i are related by Tweedie’s formula as α̃i = α̂i + σ2 di−2 ∂ log m(α̂i )/∂α̂i .
Thus, α̃_i ≈ α̂_i, with high probability, as |α̂_i| → ∞, for any fixed d_i and σ, for the horseshoe regression. Since α̂_i is unbiased for α_i, the resultant horseshoe posterior mean is also seen to be unbiased when |α̂_i| is large. Compare this with the resultant α̃_i for the global shrinkage regression of Equation (5), where the relative shrinkage is monotone in d_i (components with smaller d_i are shrunk more), and which can therefore be highly biased if a true large |α_i| corresponds to a small d_i. Perhaps more importantly, we can use the expression from Theorem 4.1 to estimate the prediction risk of the horseshoe regression for the signal and the noise terms. First we treat the case when |α̂_i| is large. We have the following result.
THEOREM 4.2. Define s_i = α̂_i^2 d_i^2/(2σ^2) and θ_i = (τ^2 d_i^2)^{−1}. For any s_i ≥ 1, θ_i ≥ 1, we have for the horseshoe regression of Equations (9–11) that

1 − θ_i (C̃_1 + C̃_2)(1 + s_i)/s_i^2 − θ_i^2 (C̃_1 + C̃_2)^2 (1 + s_i)^2/s_i^3  ≤  SURE_i/(2σ^2)  ≤  1 + 2θ_i (1 + s_i){ C_1/s_i^2 + C_2/s_i^{3/2} },

almost surely, where C_1 = {1 − 5/(2e)}^{−1/2} ≈ 3.53, C_2 = 16/15, C̃_1 = (1 − 2/e) ≈ 0.26, C̃_2 = 4/3 are constants.
A proof is given in Appendix A.2. Our result is non-asymptotic, i.e., it is valid for any s_i ≥ 1. However, an easy consequence is that SURE_i → 2σ^2, almost surely, as s_i → ∞, provided τ^2 ≤ d_i^{−2}. An intuitive explanation of this result is that component-specific shrinkage is feasible in the horseshoe regression model due to the heavy-tailed λ_i terms, which prevents the signal terms from being shrunk too much and consequently from making a large contribution to SURE due to a large bias. With just a global parameter τ, this component-specific shrinkage is not possible. A comparison of SURE_i resulting from Theorem 4.2 with that from Equation (8) demonstrates that, using global-local horseshoe shrinkage, we can rectify a major shortcoming of global shrinkage regression, in that the terms with large s_i do not make a large contribution to the prediction risk. Moreover, the main consequence of Theorem 4.2, that is, SURE_i → 2σ^2, almost surely, as s_i → ∞, holds for a larger class of "global-local" priors, of which the horseshoe is a special case.
THEOREM 4.3. Consider the hierarchy of Equations (9–10) and suppose the prior on λi in Equation (11)
satisfies p(λ2i ) ∼ (λ2i ) a−1 L(λ2i ) as λ2i → ∞, where f ( x ) ∼ g( x ) means limx→∞ f ( x )/g( x ) = 1. Assume
a ≤ 0 and L(·) is a slowly-varying function, defined as lim| x|→∞ L(tx )/L( x ) = 1 for all t ∈ (0, ∞). Then
we have SUREi → 2σ2 , almost surely, as si → ∞.
A proof is given in Appendix A.3. Densities that satisfy p(λ2i ) ∼ (λ2i ) a−1 L(λ2i ) as λ2i → ∞
are sometimes called regularly varying or heavy-tailed. Clearly, the horseshoe prior is a special
case, since for the standard half-Cauchy we have p(λi ) ∝ 1/(1 + λ2i ), which yields by a change
of variables p(λ2i ) = (λ2i )−3/2 {λ2i /(1 + λ2i )}, which is of the form (λ2i ) a−1 L(λ2i ) with a = −1/2
since L(λ2i ) = λ2i /(1 + λ2i ) is seen to be slowly-varying. Other priors that fall in this framework
are the horseshoe+ prior of Bhadra et al. (2016b), for which p(λi ) ∝ log(λi )/(λ2i − 1) = λi−2 L(λ2i )
with L(λ2i ) = log(λi )λ2i /(λ2i − 1). Ghosh et al. (2016) show that the generalized double Pareto
prior (Armagan et al., 2013) and the three parameter beta prior (Armagan et al., 2011) also fall in
this framework. Thus, Theorem 4.3 generalizes the main consequence of Theorem 4.2 to a broader
class of priors in the asymptotic sense as si → ∞.
Next, for the case when |α̂i | is small, we have the following result for estimating the prediction
risk of the horseshoe regression.
THEOREM 4.4. Define si = α̂2i d2i /2σ2 and θi = (τ 2 d2i )−1 . Then the following statements are true
(almost surely) for the horseshoe regression.
A. SUREi is an increasing function of si in the interval si ∈ [0, 1] for any fixed τ.
B. When si = 0, we have that SUREi is a monotone increasing function of τ, and is bounded in the
interval (0, 2σ2 /3], almost surely, when τ 2 d2i ∈ (0, 1].
C. When si = 1, we have that SUREi is bounded in the interval (0, 1.93σ2 ], almost surely, when
τ 2 d2i ∈ (0, 1].
A proof is given in Appendix A.4. This theorem establishes that: (i) the terms with smaller s_i in the interval [0, 1] contribute less to SURE, with the minimum achieved at s_i = 0 (these terms can be thought of as the noise terms), and (ii) if τ is chosen to be sufficiently small, the terms for which s_i = 0 have an upper bound on SURE of 2σ^2/3. Note that the OLS estimator has risk 2σ^2 for these terms. At s_i = 0, the PCR risk is either 0 or 2σ^2, depending on whether the term is or is not included. A commonly used technique for shrinkage regressions is to choose the global τ to
minimize a data-dependent estimate of the risk, such as CL or SURE (Mallows, 1973). The ridge
regression SURE at si = 0 is an increasing function of τ and thus, it might make sense to choose a
small τ if all si terms were small. However, in the presence of some si terms that are large, ridge
regression cannot choose a very small τ, since the large si terms will then be heavily shrunk and
contribute too much to SURE. This is not the case with global-local shrinkage regression methods
such as the horseshoe, which can still choose a small τ to mitigate the contribution from the noise
terms and rely on the heavy-tailed λi terms to ensure large signals are not shrunk too much.
Consequently, the ridge regression risk estimate is usually larger than the global-local regression
risk estimate even for very small si terms, when some terms with large si are present along with
mostly noise terms. At this point, the results concern the risk estimate (i.e., SURE) rather than risk
itself, the discussion of which is deferred until Section 5.
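As a quick numerical check (illustrative only, not part of the proofs), the s_i = 0 bound of Theorem 4.4 can be verified with the cch_moment sketch given after Theorem 4.1:

sigma2 = 1.0
for theta in [1.0, 2.0, 5.0, 50.0]:           # theta = 1/(tau^2 d_i^2) >= 1
    ez = cch_moment(1, 0.0, theta)
    sure_i = 2 * sigma2 * (1 - ez)            # s_i = 0 case of Theorem 4.1
    print(theta, sure_i)                      # stays at or below 2*sigma2/3 ~ 0.667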
To summarize the theoretical findings, Theorems 4.2 and 4.4 together establish that the horseshoe regression is effective in handling both very large and very small values of α̂_i^2 d_i^2. Specifically, Theorem 4.4 asserts that a small enough τ shrinks the noise terms towards zero, minimizing their contribution to SURE, while, according to Theorem 4.2, the heavy tails of the Cauchy priors on the λ_i terms ensure the large signals are not shrunk too much, yielding a SURE contribution of approximately 2σ^2 for these terms, an improvement over purely global methods of shrinkage.
5 Prediction risk for the global and horseshoe regressions
In this section we compare the theoretical prediction risks of global and global-local horseshoe
shrinkage regressions. While SURE is a data-dependent estimate of the theoretical risk, these two
quantities are equal in expectation. We use a concentration argument to derive conditions under
which the horseshoe regression will outperform global shrinkage regression, e.g., ridge regression,
in terms of predictive risk. While the analysis seems difficult for an arbitrary design matrix X, we are able to treat the case of ridge regression for an orthogonal design, i.e., X^T X = I. Clearly, if the SVD of X is written as X = UDW^T, then we have D = I, and for ridge regression λ_i = 1 for all i.
Thus, for orthogonal design, Equations (3) and (4) become

(α̂_i | α_i, σ^2) ∼ N(α_i, σ^2),   (α_i | σ^2, τ^2, λ_i^2) ∼ N(0, σ^2 τ^2),  independently,
where τ is the global shrinkage parameter. Since the fit in this model is linear in α̂_i, SURE is equivalent to Mallows' C_L. Equation (14) of Mallows (1973) shows that if τ is chosen to minimize C_L, then the optimal ridge estimate is given in closed form by

α_i^* = { 1 − nσ^2 / Σ_{j=1}^n α̂_j^2 } α̂_i.
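A two-line Python sketch of this closed-form estimate (illustrative only; α̂ and σ^2 are assumed given):

import numpy as np

def optimal_ridge(alpha_hat, sigma2):
    # Shrink all components by the common data-dependent factor 1 - n*sigma^2/||alpha_hat||^2
    shrink = 1.0 - len(alpha_hat) * sigma2 / np.sum(alpha_hat**2)
    return shrink * alpha_hat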
Alternatively, the solution can be directly obtained from Equation (8) by taking d_i = λ_i = 1 for all i and by setting τ^* = argmin_τ Σ_{i=1}^n SURE_i. It is perhaps interesting to note that this "optimal" ridge estimate, where the tuning parameter is allowed to depend on the data, is no longer linear in α̂. In fact, the optimal solution α^* can be seen to be closely related to the James–Stein estimate of α and its risk can therefore be quantified using the risk bounds on the James–Stein estimate. As expected due to the global nature of ridge regression, the relative shrinkage α_i^*/α̂_i of the optimal solution only depends on |α̂|^2 = Σ_{i=1}^n α̂_i^2 but not on the individual components of α̂. Theorem 1 of Casella and Hwang (1982) shows that
1 − (n − 2)/(n + |α|^2)  ≤  R(α, α^*)/R(α, α̂)  ≤  1 − { (n − 2)^2/n } · 1/(n − 2 + |α|^2).
Consequently, if |α|^2/n → c as n → ∞ then the James–Stein estimate satisfies

lim_{n→∞} R(α, α^*)/R(α, α̂) = c/(c + 1).
Thus, α^* offers large benefits over the least squares estimate α̂ for small c but it is practically
equivalent to the least squares estimate for large c. The prediction risk of the least squares estimate
for p > n is simply 2nσ2 , or an average component-specific risk of 2σ2 . We show that when true
αi = 0, the component-specific risk bound of the horseshoe shrinkage regression is less than 2σ2 .
We have the following result.
THEOREM 5.1. Let D = I and let the global shrinkage parameter in the horseshoe regression be τ 2 = 1.
When true αi = 0, an upper bound of the component-wise risk of the horseshoe regression is 1.75σ2 < 2σ2 .
A proof can be found in Appendix A.5. The proof uses the fact that the actual risk can be
obtained by computing the expectation of SURE. We split the domains of integration into three
distinct regions and use the bounds on SURE from Theorems 4.2 and 4.4, as appropriate.
When true αi is large enough, a consequence of Theorem 4.2 is that the component-specific
risk for global-local shrinkage regression is 2σ2 . This is because SURE in this case is almost surely
equal to 2σ2 and α̂i is concentrated around true αi . Therefore, it is established that if only a few
components of true α are large and the rest are zero in such a way that |α|2 /n is large, then the
global-local horseshoe regression outperforms ridge regression in terms of predictive risk. The
benefit arises from a lower risk for the αi = 0 terms. On the other hand, if all components of true
α are zero or all are large, the horseshoe regression need not outperform ridge regression.

Table 1: The true orthogonalized regression coefficients α0i, their ordinary least squares estimates α̂i, and singular values di of the design matrix, for n = 100 and p = 500.

i       α0i      α̂i        di        α̂i di
1       0.10     0.10     635.10     62.13
2      -0.44    -0.32       3.16     -1.00
...      ...      ...        ...       ...
5      -0.13     0.30       3.05      0.91
6      10.07    10.22       3.02     30.88
...      ...      ...        ...       ...
29      0.46     0.60       2.53      1.53
30     10.47    11.07       2.51     27.76
...      ...      ...        ...       ...
56      0.35     0.57       2.07      1.18
57     10.23    10.66       2.07     22.05
...      ...      ...        ...       ...
66     -0.00    -0.35       1.90     -0.66
67     11.14    11.52       1.88     21.70
...      ...      ...        ...       ...
95     -0.82    -0.56       1.42     -0.79
96      9.60    10.21       1.40     14.26
...      ...      ...        ...       ...
100     0.61     0.91       1.27      1.15
6 Numerical examples
We simulate data where n = 100, and consider the cases p = 100, 200, 300, 400, 500. Let B be a p × k factor loading matrix, with all entries equal to 1, and let F_i be a k × 1 vector of factor values, with all entries drawn independently from N(0, 1). The ith row of the n × p design matrix X is generated by a factor model, with number of factors k = 8, as follows:

X_i = B F_i + ξ_i,   ξ_i ∼ N(0, 0.1),   for i = 1, . . . , n.

Thus, the columns of X are correlated. Let X = UDW^T denote the singular value decomposition of X. The observations y are generated from Equation (2) with σ^2 = 1, where for the true orthogonalized regression coefficients α0, the 6th, 30th, 57th, 67th, and 96th components are randomly selected as signals, and the remaining 95 components are noise terms. Coefficients of the signals are generated from a N(10, 0.5) distribution, and coefficients of the noise terms are generated from a N(0, 0.5) distribution. For the case n = 100 and p = 500, some of the true orthogonalized regression coefficients α0, their ordinary least squares estimates α̂, and the corresponding singular values d of the design matrix, are shown in Table 1.
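For illustration, a data set of this form can be generated as in the following Python sketch. This is not the code used for the reported results: the random seed, the fixed signal indices, and the reading of 0.1 and 0.5 as variances (rather than standard deviations) are our assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, p, k, sigma2 = 100, 500, 8, 1.0

# Factor-model design: X_i = B F_i + xi_i, with B a p x k matrix of ones
B = np.ones((p, k))
F = rng.normal(size=(n, k))
X = F @ B.T + rng.normal(scale=np.sqrt(0.1), size=(n, p))

U, d, Wt = np.linalg.svd(X, full_matrices=False)
Z = U * d                                    # Z = U D

# True orthogonalized coefficients: 5 signal components ~ N(10, 0.5), noise ~ N(0, 0.5)
alpha0 = rng.normal(0.0, np.sqrt(0.5), size=n)
signals = np.array([6, 30, 57, 67, 96]) - 1  # indices fixed here for illustration
alpha0[signals] = rng.normal(10.0, np.sqrt(0.5), size=signals.size)

y = Z @ alpha0 + rng.normal(scale=np.sqrt(sigma2), size=n)
alpha_hat = (U.T @ y) / d                    # OLS estimates, as reported in Table 1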
Table 2 lists the SURE for prediction and actual out of sample sum of squared prediction error (SSE) for the ridge, lasso, PCR and horseshoe regressions.

Table 2: SURE and average out of sample prediction SSE (standard deviation of SSE) on one training set and 200 testing sets for the competing methods for n = 100, for ridge regression (RR), the lasso regression (LASSO), the adaptive lasso (A LASSO), principal components regression (PCR) and the horseshoe regression (HS). The lowest SURE in each row is in italics and the lowest average prediction SSE is in bold. A formula for SURE is unavailable for the adaptive lasso.

        RR                       LASSO                    A LASSO          PCR                      HS
p       SURE     SSE             SURE     SSE             SSE              SURE     SSE             SURE     SSE
100     159.02   168.24 (23.87)  125.37   128.98 (18.80)  127.22 (18.10)   162.23   179.81 (25.51)  120.59   126.33 (18.77)
200     187.38   174.92 (21.13)  140.99   132.46 (18.38)  151.89 (20.47)   213.90   191.33 (22.62)  139.32   126.99 (17.29)
300     192.78   191.91 (22.95)  147.83   145.04 (19.89)  153.64 (21.19)   260.65   253.00 (26.58)  151.24   136.67 (18.73)
400     195.02   182.55 (22.70)  148.56   165.63 (21.55)  178.98 (20.12)   346.19   292.02 (28.98)  147.69   143.91 (18.41)
500     196.11   188.78 (22.33)  159.95   159.56 (19.94)  186.23 (23.50)   386.50   366.88 (39.38)  144.97   160.11 (20.29)

Out of sample prediction error of the adaptive lasso is also included in the comparisons, although we are unaware of a formula for
computing the SURE for the adaptive lasso. SURE for ridge and PCR can be computed by an
application of Equation (8) and SURE for the horseshoe regression is given by Theorem 4.1. SURE
for the lasso is calculated using the result given by Tibshirani and Taylor (2012). In each case, the
model is trained on 100 samples. We report the SSE on 100 testing samples, averaged over 200
testing data sets, and their standard deviations. For ridge, lasso, PCR and horseshoe regression,
the global shrinkage parameters were chosen to minimize SURE for prediction. In adaptive lasso,
the shrinkage parameters were chosen by cross validation because SURE is unavailable. It can be seen that SURE is in most cases within one standard deviation of the actual out of sample prediction SSE, suggesting SURE is an accurate method for evaluating actual out of sample prediction performance. When p = 100, 200, 300, 400, the horseshoe regression has the lowest prediction SSE. When p = 500, the SSE of the lasso and horseshoe regression are close, and the lasso performs marginally better. The horseshoe regression also has the lowest SURE in all but one case. Generally, SURE increases with p for all methods. The SURE for ridge regression approaches the OLS risk, which is 2nσ^2 = 200 in these situations. SURE for PCR is larger than the OLS risk, and PCR happens to be the poorest performer in most settings. Performance of the adaptive lasso also degrades compared to the lasso and the horseshoe, which remain the two best performers. Finally, the horseshoe regression outperforms the lasso in four out of the five settings we considered.
Figure 1 shows the contribution to SURE by each component for n = 100 and p = 500, for the ridge, PCR, lasso and horseshoe regressions. The components are ordered left to right on the x-axis by decreasing magnitude of d_i, and SURE for prediction on each component is shown on the y-axis. Note from Table 1 that the 6, 30, 57, 67 and 96th components are the signals, meaning these terms correspond to a large α0. The PCR risk on the 96th component is 203.22, which is out of range for the y-axis in the plot. For this data set, PCR selects 81 components, and therefore SURE for the first 81 components is equal to 2σ^2 = 2 and the SURE is equal to α̂_i^2 d_i^2 for i = 82, . . . , 100.
Figure 1: Component-wise SURE for ridge (blue), PCR (gray), lasso (cyan), and horseshoe regression (red), for n = 100 and p = 500. Signal components are shown as solid squares and noise components as open circles. The dashed horizontal line is at 2σ^2 = 2.
Figure 2: SURE for ridge (blue), PCR (gray), lasso (cyan) and horseshoe regression (red), versus
α̂d, where α̂ is the OLS estimate of the orthogonalized regression coefficient, and d is the singular
value, for n = 100 and p = 500. Dashed horizontal lines are at 2σ2 = 2 and 2σ2 /3 = 0.67.
Component-wise SURE for ridge regression is large on the signal components, and decreases as the singular values d decrease on the other components. But due to the large global shrinkage parameter τ that ridge must select in the presence of both large signals and noise terms, the magnitude of improvement over the OLS risk 2σ^2 is small for the noise terms. On the other hand, the horseshoe
estimator does not shrink the components with large α̂_i d_i heavily and therefore the horseshoe SURE on the signal components is almost equal to 2σ^2 (according to Theorem 4.2). SURE for the horseshoe is also much smaller than 2σ^2 on many of the noise components. The lasso also appears to be quite effective for the noise terms, but its performance on the signal components is generally not as good as that of the horseshoe.
Figure 2 takes a fresh look at the same results and shows component-wise SURE plotted against
α̂i di . The signal components as well as the first component in Table 1 have α̂i di > 10. Horseshoe
SURE converges to 2σ2 for large α̂i di , as expected from Theorem 4.2. For these components, the
SURE for both ridge and lasso are larger than 2σ2 , due to the bias introduced in estimating large
signals by these methods (see also Theorem 1 of Carvalho et al., 2010). When α̂_i^2 d_i^2 ≈ 0, the risks for the lasso and horseshoe are comparable, with the lasso being slightly better. This is because an estimate can be exactly zero for the lasso, but not for the horseshoe, which is a shrinkage method (as opposed to a selection method). Nevertheless, the upper bound of 2σ^2/3 on SURE for the horseshoe regression when α̂_i^2 d_i^2 ≈ 0, provided τ is chosen small enough that τ^2 ≤ d_i^{−2} (as established by Theorem 4.4), can be verified from Figure 2.
Additional simulation results are presented in Supplementary Section S.1, where we (i) treat a higher dimensional case (p = 1000), (ii) perform comparisons with the non-convex MCP (Zhang, 2010) and SCAD (Fan and Li, 2001) regressions, (iii) explore different choices of X and (iv) explore the effect of the choice of α. The main finding is that the horseshoe regression is often the best performer when α has a sparse-robust structure as in Table 1, that is, most elements are very small while a few are large, so that |α|^2 is large. This is consistent with the theoretical results of Section 5.
7 Assessing out of sample prediction in a pharmacogenomics data set
We compare the out of sample prediction error of the horseshoe regression with ridge regression, PCR, the lasso, the adaptive lasso, MCP and SCAD on a pharmacogenomics data set. The data were originally described by Szakács et al. (2004), in which the authors studied 60 cancer cell lines in the publicly available NCI-60 database (https://dtp.cancer.gov/discovery development/nci60/). The goal here is to predict the expression of the human ABC transporter genes (responses) using as predictors the concentrations of compounds or drugs at which 50% inhibition of cellular growth is induced in the cell lines. The NCI-60 database includes the concentration levels of 1429 such compounds, of which we use the 853 with no missing values as predictors. We investigate the expression levels of transporter genes A1 to A12 (except for A11, which we omit due to missing values), and B1. Thus, in our study X is an n × p matrix of predictors with n = 60, p = 853, and Y is an n-dimensional response vector for each of the 12 candidate transporter genes under consideration.
To test the performance of the methods, we split each data set into training and testing sets, with 75% (45 out of 60) of the observations in the training sets. We standardize each response by subtracting the mean and dividing by the standard deviation. We fit the model on the training data, and then calculate the mean squared prediction error (prediction MSE) on the testing data. This is repeated for 20 random splits of the data into training and testing sets. The tuning parameters in ridge regression, the lasso, the adaptive lasso, SCAD and MCP are chosen by five-fold cross validation on the training data. Similarly, the number of components in PCR and the global shrinkage parameter τ for the horseshoe regression are chosen by cross validation as well. It is possible to use SURE to select the tuning parameters or the number of components, but one needs an estimate of the standard deviation of the errors in high-dimensional regressions. This is a problem of recent interest, as the OLS estimate of σ^2 is not well-defined in the p > n case. Unfortunately, some of the existing methods we tried, such as the method of moments estimator of Dicker (2014), often resulted in unreasonable estimates for σ^2, such as negative numbers. Thus, we stick to cross validation here, as it is not necessary to estimate the residual standard deviation in that case.
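The evaluation loop has the following shape; the Python sketch below is ours and only shows two of the competing methods (scikit-learn ridge and lasso with five-fold cross validation), with the standardization applied using training-set statistics as an assumption. The horseshoe, PCR and other fits would be plugged in the same way.

import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.model_selection import train_test_split

def prediction_mse(X, y, n_splits=20, seed=0):
    # Out-of-sample prediction MSE over random 75/25 training-testing splits.
    errs = {"ridge": [], "lasso": []}
    for b in range(n_splits):
        Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.75, random_state=seed + b)
        mu, sd = ytr.mean(), ytr.std()
        ytr_s, yte_s = (ytr - mu) / sd, (yte - mu) / sd   # standardize the response
        for name, est in [("ridge", RidgeCV(cv=5)), ("lasso", LassoCV(cv=5))]:
            est.fit(Xtr, ytr_s)
            errs[name].append(np.mean((yte_s - est.predict(Xte))**2))
    return {k: np.mean(v) for k, v in errs.items()}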
The average prediction MSE over 20 random training-testing splits for the competing methods
is reported in Table 3. The average prediction MSE for responses A1, A8 and A10 is around or larger than 1 for all of the methods. Since the responses are standardized before analysis, we might
conclude that none of the methods performed well for these cases. Among the remaining nine
cases, the horseshoe regression substantially outperforms the other methods for A3, A4, A9, A12
and B1. It is comparable to PCR for A5 and A7, and is comparable to the adaptive lasso for A6,
which are the best performers in the respective cases. Overall, the horseshoe regression performed
the best in 5 among the total 12 cases we considered.
8 Concluding remarks
We outlined some situations where the horseshoe regression is expected to perform better than some other commonly used "global" shrinkage or selection alternatives for high-dimensional regression. Specifically, we demonstrated that the global term helps in mitigating the prediction risk arising from the noise terms, and an appropriate choice for the tails of the local terms is crucial for controlling the risk due to the signal terms. For this article we have used the horseshoe prior as our choice for the global-local prior. However, in recent years, several other priors have been developed that fall in this class. This includes the horseshoe+ (Bhadra et al., 2016a,b), the three-parameter beta (Armagan et al., 2011), the normal-exponential-gamma (Griffin and Brown, 2010),
the generalized double Pareto (Armagan et al., 2013), the generalized shrinkage prior (Denison
and George, 2012) and the Dirichlet–Laplace prior (Bhattacharya et al., 2015). Empirical Bayes
approaches have also appeared (Martin and Walker, 2014) and the spike and slab priors have
made a resurgence due to recently developed efficient computational approaches (Ročková and
George, 2016; Ročková and George, 2014). Especially in the light of Theorem 4.3, we expect the
results developed in this article for the horseshoe to foreshadow similar results when many of
these alternatives are deployed. A particular advantage of using the horseshoe prior seems to be
the tractable expression for SURE, as developed in Theorem 4.1. Whether this advantage translates to some of the other global-local priors mentioned above is an open question. Following
the approach of Stein (1981), our risk results are developed in a non-asymptotic setting (finite n,
finite p > n). However, global-local priors such as the horseshoe and horseshoe+ are known to be
minimax in estimation in the Gaussian sequence model (van der Pas et al., 2014, 2016). For linear
regression, frequentist minimax risk results are discussed by Raskutti et al. (2011); and Castillo
et al. (2015) have shown that spike and slab priors achieve minimax prediction risk in regression.
Whether the prediction risk for the horseshoe regression is optimal in an asymptotic sense is an
important question to investigate and recent asymptotic prediction risk results for ridge regression
(Dobriban and Wager, 2015) should prove helpful for comparing with global shrinkage methods.
Another possible direction for future investigation might be to explore the implications of our
findings on the predictive density in terms of an appropriate metric, say the Kullback-Leibler loss, following the results of George et al. (2006).

Table 3: Average out of sample mean squared prediction error computed on 20 random training-testing splits (number of splits out of 20 with lowest prediction MSE), for each of the 12 human ABC transporter genes (A1–A10, A12, B1) in the pharmacogenomics example. Methods under consideration are ridge regression (RR), principal components regression (PCR), the lasso (LASSO), the adaptive lasso (A LASSO), the minimax concave penalty (MCP), the smoothly clipped absolute deviation (SCAD) penalty, and the horseshoe regression (HS). Lowest prediction MSE and largest number of splits with the lowest prediction MSE for each response in bold.

Response   RR         PCR        LASSO      A LASSO    MCP        SCAD       HS
A1         1.12 (2)   1.10 (5)   1.00 (7)   1.00 (2)   1.01 (1)   1.06 (1)   1.30 (2)
A2         1.00 (3)   1.04 (1)   0.95 (7)   0.93 (5)   0.92 (1)   0.99 (0)   1.15 (3)
A3         0.77 (1)   0.91 (0)   1.11 (0)   0.90 (0)   0.92 (1)   1.06 (0)   0.65 (18)
A4         0.92 (2)   0.95 (0)   0.97 (2)   0.96 (2)   0.93 (2)   0.99 (0)   0.79 (12)
A5         0.82 (1)   0.77 (6)   1.06 (4)   0.81 (1)   0.83 (2)   0.94 (0)   0.79 (6)
A6         0.93 (4)   0.92 (0)   0.98 (3)   0.86 (5)   0.87 (0)   0.90 (2)   0.95 (6)
A7         0.92 (0)   0.83 (8)   0.92 (1)   0.93 (4)   0.99 (0)   0.93 (0)   0.85 (7)
A8         1.08 (6)   1.05 (4)   1.14 (6)   1.01 (4)   1.01 (0)   1.15 (0)   1.34 (0)
A9         0.57 (4)   0.64 (0)   0.81 (0)   0.67 (6)   0.77 (0)   0.68 (1)   0.55 (9)
A10        1.18 (0)   1.04 (7)   1.00 (4)   1.01 (3)   1.00 (2)   1.06 (0)   1.33 (4)
A12        1.01 (0)   1.12 (0)   1.09 (2)   1.01 (2)   1.02 (1)   1.05 (0)   0.80 (15)
B1         0.53 (1)   0.59 (0)   0.70 (3)   0.63 (2)   0.91 (1)   0.70 (3)   0.46 (10)
A Proofs

A.1 Proof of Theorem 4.1
Part A follows from Equation (6) with standard algebraic manipulations. To prove part B, define
Z_i = 1/(1 + τ^2 λ_i^2 d_i^2). Then, from Equation (12),

m(α̂) ∝ (2πσ^2)^{−n/2} Π_{i=1}^n ∫_0^1 exp(−z_i α̂_i^2 d_i^2/(2σ^2)) d_i z_i^{1/2} { z_i τ^2 d_i^2 / (1 − z_i + z_i τ^2 d_i^2) } (1/(τ d_i)) (1 − z_i)^{−1/2} z_i^{−3/2} dz_i
      ∝ (2πσ^2)^{−n/2} Π_{i=1}^n ∫_0^1 exp(−z_i α̂_i^2 d_i^2/(2σ^2)) (1 − z_i)^{−1/2} { 1/(τ^2 d_i^2) + (1 − 1/(τ^2 d_i^2)) z_i }^{−1} dz_i.
From the definition of the compound confluent hypergeometric (CCH) density in Gordy (1998),
the result of the integral is proportional to the normalizing constant of the CCH density and we
have from Proposition 4.1 that,
m(α̂) ∝ (2πσ^2)^{−n/2} Π_{i=1}^n H(1, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)).
In addition, the random variable ( Zi | α̂i , σ, τ ) follows a CCH(1, 1/2, 1, α̂2i d2i /2σ2 , 1, 1/τ 2 d2i ) distribution. Lemma 3 of Gordy (1998) gives,
(d/ds) H(p, q, r, s, ν, θ) = − { p/(p + q) } H(p + 1, q, r, s, ν, θ).
This yields after some algebra that

m′(α̂_i)/m(α̂_i) = − (α̂_i d_i^2/σ^2) · (2/3) H(2, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)) / H(1, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)),

m″(α̂_i)/m(α̂_i) = { −(2/3)(d_i^2/σ^2) H(2, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)) + (8/15)(α̂_i^2 d_i^4/σ^4) H(3, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)) } / H(1, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)).

The correctness of the assertions

m′(α̂_i)/m(α̂_i) = −(α̂_i d_i^2/σ^2) E(Z_i)   and   m″(α̂_i)/m(α̂_i) = −(d_i^2/σ^2) E(Z_i) + (α̂_i^2 d_i^4/σ^4) E(Z_i^2)

can then be verified using Equation (15), completing the proof.
A.2 Proof of Theorem 4.2
Define s_i = α̂_i^2 d_i^2/(2σ^2) and θ_i = (τ^2 d_i^2)^{−1}, with θ_i ≥ 1, s_i ≥ 1. From Theorem 4.1, the component-wise SURE is

SURE_i = 2σ^2 − 2σ^2 E(Z_i) − α̂_i^2 d_i^2 {E(Z_i)}^2 + 2α̂_i^2 d_i^2 E(Z_i^2) = 2σ^2 [1 − E(Z_i) + 2s_i E(Z_i^2) − s_i {E(Z_i)}^2].

Thus,

2σ^2 [1 − E(Z_i) − s_i {E(Z_i)}^2] ≤ SURE_i ≤ 2σ^2 [1 + 2s_i E(Z_i^2)].   (A.1)

To find bounds on SURE, we need upper bounds on E(Z_i^2) and E(Z_i). Clearly, θ_i^{−1} ≤ {θ_i + (1 − θ_i)z_i}^{−1} ≤ 1 when θ_i ≥ 1. Let a_i = log(s_i^{5/2})/s_i; then a_i ∈ [0, 5/(2e)) when s_i ≥ 1. Now,

E(Z_i^2) = ∫_0^1 z_i^2 (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−s_i z_i) dz_i / ∫_0^1 (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−s_i z_i) dz_i.
An upper bound to the numerator of E( Zi2 ) can be found as follows.
Z 1
0
≤
1
z2i (1 − zi )− 2 {θi + (1 − θi )zi }−1 exp(−si zi )dzi
Z 1
0
=
Z ai
0
1
z2i (1 − zi )− 2 exp(−si zi )dzi
1
z2i (1 − zi )− 2 exp(−si zi )dzi +
− 12
Z 1
ai
1
z2i (1 − zi )− 2 exp(−si zi )dzi
Z ai
Z 1
1
z2i exp(−si zi )dzi + exp(− ai si )
z2i (1 − zi )− 2 dzi
0
ai
Z 1
a2i s2i
1
− 12 2
1 − 1 + ai si +
exp(− ai si ) + exp(− ai si )
z2i (1 − zi )− 2 dzi
= (1 − a i )
3
2
si
ai
≤ (1 − a i )
≤ {1 − 5/(2e)}
=
− 12
2
1
+ 5/2
3
si
si
Z 1
0
1
z2i (1 − zi )− 2 dzi
C2
C1
+ 5/2 ,
3
si
si
R1
1
1
where C1 = {1 − 5/(2e)}− 2 ≈ 3.53 and C2 = 0 z2i (1 − zi )− 2 dzi = Γ (1/2) Γ (3)/Γ (3.5) = 16/15.
Similarly, a lower bound on the denominator of E( Zi2 ) is
Z 1
0
≥
1
(1 − zi )− 2 {θi + (1 − θi )zi }−1 exp(−si zi )dzi
θi−1
Z 1
exp(−si zi )dzi
0
1
−1 1 − exp(− si )
= θi
≥
,
si
θ i (1 + s i )
Thus, combining the upper bound on the numerator and the lower bound on the denominator
!
C
C
2
1
E( Zi2 ) ≤ θi (1 + si )
+ 5/2 .
s3i
si
Thus,
SUREi ≤ 2σ2 [1 + 2si E( Zi2 )]
(
≤ 2σ
2
1 + 2θi (1 + si )
18
C1
C2
+ 3/2
s2i
si
!)
.
(A.2)
An upper bound on the numerator of E(Z_i) can be found as follows. Let ã_i = log(s_i^2)/s_i; then ã_i ∈ [0, 2/e) for s_i ≥ 1. Proceeding as before,

∫_0^1 z_i (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−s_i z_i) dz_i ≤ ∫_0^{ã_i} z_i (1 − z_i)^{−1/2} exp(−s_i z_i) dz_i + ∫_{ã_i}^1 z_i (1 − z_i)^{−1/2} exp(−s_i z_i) dz_i
  ≤ (1 − ã_i)^{−1/2} ∫_0^{ã_i} z_i exp(−s_i z_i) dz_i + exp(−ã_i s_i) ∫_0^1 z_i (1 − z_i)^{−1/2} dz_i
  ≤ C̃_1/s_i^2 + C̃_2/s_i^2,

where C̃_1 = (1 − 2/e) ≈ 0.26 and C̃_2 = ∫_0^1 z_i (1 − z_i)^{−1/2} dz_i = Γ(1/2)Γ(2)/Γ(2.5) = 4/3, using exp(−ã_i s_i) = s_i^{−2}. The lower bound on the denominator is the same as before. Thus,

E(Z_i) ≤ θ_i (1 + s_i)(C̃_1 + C̃_2)/s_i^2.

Thus,

SURE_i ≥ 2σ^2 [1 − E(Z_i) − s_i {E(Z_i)}^2] ≥ 2σ^2 { 1 − θ_i (C̃_1 + C̃_2)(1 + s_i)/s_i^2 − θ_i^2 (C̃_1 + C̃_2)^2 (1 + s_i)^2/s_i^3 }.   (A.3)

Thus, combining Equations (A.2) and (A.3) we get

1 − θ_i (C̃_1 + C̃_2)(1 + s_i)/s_i^2 − θ_i^2 (C̃_1 + C̃_2)^2 (1 + s_i)^2/s_i^3 ≤ SURE_i/(2σ^2) ≤ 1 + 2θ_i (1 + s_i){ C_1/s_i^2 + C_2/s_i^{3/2} },

for s_i ≥ 1, θ_i ≥ 1.
A.3 Proof of Theorem 4.3
Our proof is similar to the proof of Theorem 1 of Polson and Scott (2011). Note from Equations (9–10) that integrating out α_i we have

(α̂_i | λ_i^2, σ^2, τ^2) ∼ N(0, σ^2 (d_i^{−2} + τ^2 λ_i^2)),  independently.
Let p(λ_i^2) ∼ (λ_i^2)^{a−1} L(λ_i^2) as λ_i^2 → ∞, where a ≤ 0. Define u_i = σ^2 (d_i^{−2} + τ^2 λ_i^2). Then, as in Theorem 1 of Polson and Scott (2011), we have

p(u_i) ∼ u_i^{a−1} L(u_i), as u_i → ∞.

The marginal of α̂_i is then given by

m(α̂_i) = ∫ (2π u_i)^{−1/2} exp{−α̂_i^2/(2u_i)} p(u_i) du_i.

An application of Theorem 6.1 of Barndorff-Nielsen et al. (1982) shows that

m(α̂_i) ∼ |α̂_i|^{2a−1} L(|α̂_i|) as |α̂_i| → ∞.
Thus, for large |α̂_i|,

∂ log m(α̂_i)/∂α̂_i = (2a − 1)/|α̂_i| + ∂ log L(|α̂_i|)/∂α̂_i.   (A.4)
Clearly, the first term in Equation (A.4) goes to zero as |α̂i | → ∞. For the second term, we need
to invoke the celebrated representation theorem by Karamata. A proof can be found in Bingham
et al. (1989).
RESULT A.1. (Karamata's representation theorem). A function L is slowly varying if and only if there exists B > 0 such that for all x ≥ B the function can be written in the form

L(x) = exp{ η(x) + ∫_B^x (ε(t)/t) dt },

where η(x) is a bounded measurable function of a real variable converging to a finite number as x goes to infinity and ε(x) is a bounded measurable function of a real variable converging to zero as x goes to infinity.
Thus, using the properties of η(x) and ε(x) from the result above,

d log L(x)/dx = η′(x) + ε(x)/x → 0 as x → ∞.
∂2 log m(α̂i )/∂2 α̂i → 0 as |α̂i | → ∞. From Equation (6)
SUREi =σ4 di−2
∂
log m(α̂i )
∂α̂i
2
+ 2σ
2
1 + σ2 di−2
∂2
log m(α̂i ) .
∂α̂2i
Thus, SUREi → 2σ2 , almost surely, as |α̂i | → ∞.
A.4 Proof of Theorem 4.4
The proof of Theorem 4.4 makes use of technical lemmas in Appendix A.6.
Recall from Appendix A.1 that if we define Z_i = 1/(1 + τ^2 λ_i^2 d_i^2) then the density of Z_i is given by

(Z_i | α̂_i, d_i, τ, σ^2) ∼ CCH(Z_i | 1, 1/2, 1, α̂_i^2 d_i^2/(2σ^2), 1, 1/(τ^2 d_i^2)).   (A.5)

Then SURE is given by SURE = Σ_{i=1}^n SURE_i with

SURE_i = 2σ^2 [1 − E(Z_i) + 2s_i E(Z_i^2) − s_i {E(Z_i)}^2] = 2σ^2 [1 − E(Z_i) + s_i E(Z_i^2) + s_i Var(Z_i)],   (A.6)
where s_i = α̂_i^2 d_i^2/(2σ^2). Thus,

∂{SURE_i}/∂s_i = −2σ^2 ∂E(Z_i)/∂s_i + 2σ^2 ∂{s_i E(Z_i^2)}/∂s_i + 2σ^2 ∂{s_i Var(Z_i)}/∂s_i := I + II + III.   (A.7)
Now, as a corollary to Lemma A.1, (∂/∂si )E( Zi ) = {E( Zi )}2 − E( Zi2 ) = −Var( Zi ) < 0, giving
I > 0. The strict inequality follows from the fact that Zi is not almost surely a constant for any
si ∈ R and (∂/∂si )E( Zi ) is continuous at si = 0. Next, consider II. Define θi = (τ 2 d2i )−1 and let
0 ≤ s_i ≤ 1. Then,

∂{s_i E(Z_i^2)}/∂s_i = E(Z_i^2) + s_i ∂E(Z_i^2)/∂s_i = E(Z_i^2) + s_i {E(Z_i)E(Z_i^2) − E(Z_i^3)}   (by Lemma A.1)
  = s_i E(Z_i)E(Z_i^2) + {E(Z_i^2) − s_i E(Z_i^3)}.
Now, clearly, the first term, si E( Zi )E( Zi2 ) ≥ 0. We also have Zi2 − si Zi3 = Zi2 (1 − si Zi ) ≥ 0 a.s.
when 0 ≤ Zi ≤ 1 a.s. and 0 ≤ si ≤ 1. Thus, the second term E( Zi2 ) − si E( Zi3 ) ≥ 0. Putting the
terms together gives II ≥ 0. Finally, consider III. Denote E( Zi ) = µi . Then,
∂{s_i Var(Z_i)}/∂s_i = Var(Z_i) + s_i ∂{Var(Z_i)}/∂s_i = Var(Z_i) − s_i ∂^2 E(Z_i)/∂s_i^2
  = E{(Z_i − µ_i)^2} − s_i E{(Z_i − µ_i)^3}   (by Lemma A.2)
  = E[(Z_i − µ_i)^2 {1 − s_i (Z_i − µ_i)}].
Now, ( Zi − µi )2 {1 − si ( Zi − µi )} ≥ 0 a.s. when 0 ≤ Zi ≤ 1 a.s. and 0 ≤ si ≤ 1 and thus, III ≥ 0.
Using I, II and III in Equation (A.7) yields SUREi is an increasing function of si when 0 ≤ si ≤ 1,
completing the proof of Part A.
To prove Part B, we need to derive an upper bound on SURE when s_i = 0. First, consider s_i = 0 and 0 < θ_i ≤ 1. We have from Equation (A.6) that SURE_i = 2σ^2 (1 − E(Z_i)). By Lemma A.3, (∂/∂θ_i)E(Z_i) > 0 and SURE_i is a monotone decreasing function of θ_i, where θ_i = (τ^2 d_i^2)^{−1}. Next consider the case where s_i = 0 and θ_i ∈ (1, ∞). Define Z̃_i = 1 − Z_i ∈ (0, 1) when Z_i ∈ (0, 1). Then, by Equation (A.11) and a formula on Page 9 of Gordy (1998), we have that Z̃_i also follows a CCH
distribution. Specifically,

(Z̃_i | α̂_i, d_i, τ, σ^2) ∼ CCH(Z̃_i | 1/2, 1, 1, −α̂_i^2 d_i^2/(2σ^2), 1, τ^2 d_i^2),

and we have SURE_i = 2σ^2 E(Z̃_i). Define θ̃_i = θ_i^{−1} = τ^2 d_i^2. Then by Lemma A.3, (∂/∂θ̃_i)E(Z̃_i) = −Cov(Z̃_i, W̃_i) > 0 on 0 < θ̃_i < 1. Therefore, SURE_i is a monotone increasing function of θ̃_i on 0 < θ̃_i < 1, or equivalently a monotone decreasing function of θ_i on θ_i ∈ (1, ∞).
Thus, combining the two cases above, we get that SURE at s_i = 0 is a monotone decreasing function of θ_i for any θ_i ∈ (0, ∞), or equivalently, an increasing function of τ^2 d_i^2. Since 0 ≤ Z̃_i ≤ 1 almost surely, a natural upper bound on SURE_i is 2σ^2. However, it is possible to do better provided τ is chosen sufficiently small. Assume that τ^2 ≤ d_i^{−2}. Then, since SURE_i is monotone increasing in τ^2 d_i^2, the upper bound on SURE is achieved when θ_i = (τ^2 d_i^2)^{−1} = 1. In this case, E(Z_i)
has a particularly simple expression, given by

E(Z_i) = ∫_0^1 z_i (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} dz_i / ∫_0^1 (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} dz_i = ∫_0^1 z_i (1 − z_i)^{−1/2} dz_i / ∫_0^1 (1 − z_i)^{−1/2} dz_i = 2/3.   (A.8)

Thus, sup SURE_i = 2σ^2 (1 − E(Z_i)) = 2σ^2/3, completing the proof of Part B.
To prove Part C, we first note that when s_i = 1 we have

SURE_i = 2σ^2 [1 − E(Z_i)|_{s_i=1} + 2E(Z_i^2)|_{s_i=1} − {E(Z_i)|_{s_i=1}}^2],

where E(Z_i) and E(Z_i^2) are evaluated at s_i = 1. Recall that when θ_i ≥ 1 and z_i ∈ (0, 1) we have θ_i^{−1} ≤ {θ_i + (1 − θ_i)z_i}^{−1} ≤ 1. Thus,
E(Z_i^2)|_{s_i=1} = ∫_0^1 z_i^2 (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−z_i) dz_i / ∫_0^1 (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−z_i) dz_i
  ≤ θ_i ∫_0^1 z_i^2 (1 − z_i)^{−1/2} exp(−z_i) dz_i / ∫_0^1 (1 − z_i)^{−1/2} exp(−z_i) dz_i ≈ θ_i (0.459/1.076) = 0.43 θ_i,   (A.9)
and
E(Z_i)|_{s_i=1} = ∫_0^1 z_i (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−z_i) dz_i / ∫_0^1 (1 − z_i)^{−1/2} {θ_i + (1 − θ_i)z_i}^{−1} exp(−z_i) dz_i
  ≥ θ_i^{−1} ∫_0^1 z_i (1 − z_i)^{−1/2} exp(−z_i) dz_i / ∫_0^1 (1 − z_i)^{−1/2} exp(−z_i) dz_i ≈ θ_i^{−1} (0.614/1.076) = 0.57 θ_i^{−1}.
Thus,

SURE_i ≤ 2σ^2 [ 1 − 0.57/θ_i + 0.86 θ_i − (0.57/θ_i)^2 ].   (A.10)

When θ_i = 1, it can be seen that SURE_i ≤ 1.93σ^2.
A.5 Proof of Theorem 5.1
The proof of Theorem 5.1 makes use of technical lemmas in Appendix A.6.
Recall from Appendix A.1 that if we define Zi = 1/(1 + τ 2 λ2i d2i ) then the density of Zi is given
by

(Z_i | α̂_i, d_i, τ, σ^2) ∼ CCH(Z_i | 1, 1/2, 1, s_i, 1, θ_i),   (A.11)

where s_i = α̂_i^2 d_i^2/(2σ^2) and θ_i = (τ^2 d_i^2)^{−1}. Consider the case where d_i = 1 for all i and τ^2 = 1, i.e., θ_i = 1 for all i. From Equation (A.6), the risk estimate is SURE = Σ_{i=1}^n SURE_i with

SURE_i = 2σ^2 [1 − E(Z_i) + s_i E(Z_i^2) + s_i Var(Z_i)] ≤ 2σ^2 [1 − E(Z_i) + s_i + s_i Var(Z_i)] = Ř_i.
We begin by showing that the upper bound Ř_i = 2σ^2 [1 − E(Z_i) + s_i + s_i Var(Z_i)] is convex in s_i when s_i ∈ (0, 1). It suffices to show that −E(Z_i) and s_i Var(Z_i) are separately convex. First, ∂^2 E(Z_i)/∂s_i^2 = E{(Z_i − µ_i)^3} ≤ 0, by Lemmas A.2 and A.4, proving −E(Z_i) is convex. Next,

∂^2 {s_i Var(Z_i)}/∂s_i^2 = 2 ∂{Var(Z_i)}/∂s_i + s_i ∂^2 {Var(Z_i)}/∂s_i^2
  = −2E(Z_i − µ_i)^3 − s_i ∂E(Z_i − µ_i)^3/∂s_i   (by Lemma A.2)
  = −2E(Z_i − µ_i)^3 + s_i E(Z_i − µ_i)^4   (by Lemma A.5)
  ≥ 0,
where the last inequality follows by Lemma A.4. Thus, since Ř_i is convex, it lies entirely below the straight line joining the two end points for s_i ∈ (0, 1). But Ř_i|_{s_i=0} ≤ 2σ^2/3 = 0.67σ^2 (by Equation (A.8)) and

Ř_i|_{s_i=1} ≤ 2σ^2 [1 − 0.57 + 1 + 0.43 − (0.57)^2] = 3.07σ^2,

by Equations (A.9) and (A.10). Thus, by convexity,

SURE_i ≤ Ř_i ≤ 0.67σ^2 + s_i (3.07 − 0.67)σ^2 = (0.67 + 2.4 s_i)σ^2 for s_i ∈ (0, 1).   (A.12)

We remark here that our simulations suggest SURE_i itself is convex, not just the upper bound Ř_i, although a proof seems elusive. Nevertheless, as we shall see below, the convexity of Ř_i is sufficient for our purposes.
Next, consider the interval s_i ∈ (1, 3). Noting that both E(Z_i) and E(Z_i^2) are monotone decreasing functions of s_i, we have

SURE_i ≤ 2σ^2 [1 − E(Z_i)|_{s_i=3} + 2s_i {E(Z_i^2)|_{s_i=1}} − s_i {E(Z_i)|_{s_i=3}}^2].

But

E(Z_i)|_{s_i=3, θ_i=1} = ∫_0^1 z_i (1 − z_i)^{−1/2} exp(−3z_i) dz_i / ∫_0^1 (1 − z_i)^{−1/2} exp(−3z_i) dz_i = 0.35,

and E(Z_i^2)|_{s_i=1} < 0.43 from Equation (A.9). Thus,

SURE_i ≤ 2σ^2 [1 − 0.35 + 0.86 s_i − s_i (0.35)^2] = 2σ^2 (0.65 + 0.74 s_i) for s_i ∈ (1, 3).   (A.13)

Using the upper bound from Theorem 4.2,

SURE_i ≤ 11.55σ^2 for s_i ≥ 3.   (A.14)
When α_i = 0, we have that α̂_i ∼ N(0, σ^2 d_i^{−2}). Thus, α̂_i^2 d_i^2/σ^2 ∼ χ^2(1). Since s_i = α̂_i^2 d_i^2/(2σ^2) we have that p(s_i) = π^{−1/2} s_i^{−1/2} exp(−s_i) for s_i ∈ (0, ∞). Combining Equations (A.12), (A.13) and (A.14) we have

Risk_i = E(SURE_i) ≤ ∫_0^1 σ^2 (0.67 + 2.4 s_i) π^{−1/2} s_i^{−1/2} exp(−s_i) ds_i + ∫_1^3 2σ^2 (0.65 + 0.74 s_i) π^{−1/2} s_i^{−1/2} exp(−s_i) ds_i + ∫_3^∞ 11.55 σ^2 π^{−1/2} s_i^{−1/2} exp(−s_i) ds_i = 1.75σ^2.
A.6 Technical lemmas
LEMMA A.1. If Z ∼ CCH( p, q, r, s, ν, θ ), then (∂/∂s)E( Z k ) = E( Z )E( Z k ) − E( Z k+1 ).
LEMMA A.2. If Z ∼ CCH( p, q, r, s, ν, θ ), then (∂2 /∂2 s)E( Z ) = −(∂/∂s)Var( Z ) = E{( Z − µ)3 },
where µ = E( Z ).
LEMMA A.3. If Z ∼ CCH( p, q, r, s, ν, θ ), then (∂/∂θ )E( Z ) = −Cov( Z, W ), for W = (1 − νZ ){θ +
(1 − θ )νZ }−1 . If 0 < θ ≤ 1 then (∂/∂θ )E( Z ) > 0.
LEMMA A.4. If Z ∼ CCH( p, q, r, s, 1, 1) with q > p, then E( Z − µ)3 ≤ 0, where µ = E( Z ).
LEMMA A.5. If Z ∼ CCH( p, q, r, s, ν, θ ), then (∂/∂s)E( Z − µ)3 = −E{( Z − µ)4 }, where µ = E( Z ).
A.6.1 Proof of Lemma A.1

Let Z ∼ CCH(p, q, r, s, ν, θ) and write w(z) = (1 − νz)^{q−1} {θ + (1 − θ)νz}^{−r} exp(−sz) for brevity. Then for any integer k,

E(Z^k) = ∫_0^{1/ν} z^{k+p−1} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz.

Thus,

(∂/∂s) E(Z^k) = − ∫_0^{1/ν} z^{k+p} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz + { ∫_0^{1/ν} z^{k+p−1} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz } × { ∫_0^{1/ν} z^{p} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz }
  = −E(Z^{k+1}) + E(Z)E(Z^k).

For an alternative proof directly using the H(·) functions, see Appendix D of Gordy (1998).
A.6.2 Proof of Lemma A.2

Let Z ∼ CCH(p, q, r, s, ν, θ), write w(z) as in Appendix A.6.1, and let µ = E(Z). From Lemma A.1, (∂/∂s)E(Z) = −E(Z^2) + {E(Z)}^2 = −Var(Z). Then,

∂^2 E(Z)/∂s^2 = −(∂/∂s) Var(Z) = −(∂/∂s) { ∫_0^{1/ν} (z − µ)^2 z^{p−1} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz }
  = ∫_0^{1/ν} (z − µ)^2 z^{p} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz − { ∫_0^{1/ν} (z − µ)^2 z^{p−1} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz } × { ∫_0^{1/ν} z^{p} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz }
  = Cov(Z, (Z − µ)^2) = E[(Z − µ){(Z − µ)^2 − E(Z − µ)^2}] = E{(Z − µ)^3} − Var(Z) E(Z − µ) = E{(Z − µ)^3}.
A.6.3 Proof of Lemma A.3

Let Z ∼ CCH(p, q, r, s, ν, θ) and W = (1 − νZ){θ + (1 − θ)νZ}^{−1}. Then,

∂E(Z)/∂θ = − ∫_0^{1/ν} z^{p} (1 − νz)^{q} {θ + (1 − θ)νz}^{−(r+1)} exp(−sz) dz / ∫_0^{1/ν} z^{p−1} (1 − νz)^{q−1} {θ + (1 − θ)νz}^{−r} exp(−sz) dz
  + { ∫_0^{1/ν} z^{p} (1 − νz)^{q−1} {θ + (1 − θ)νz}^{−r} exp(−sz) dz / ∫_0^{1/ν} z^{p−1} (1 − νz)^{q−1} {θ + (1 − θ)νz}^{−r} exp(−sz) dz } × { ∫_0^{1/ν} z^{p−1} (1 − νz)^{q} {θ + (1 − θ)νz}^{−(r+1)} exp(−sz) dz / ∫_0^{1/ν} z^{p−1} (1 − νz)^{q−1} {θ + (1 − θ)νz}^{−r} exp(−sz) dz }
  = −E(ZW) + E(Z)E(W) = −Cov(Z, W).

When 0 < θ ≤ 1, it is obvious that Z and W are negatively correlated, and thus −Cov(Z, W) > 0.
A.6.4 Proof of Lemma A.4

Let Z ∼ CCH(p, q, r, s, 1, 1). Then,

E(Z − µ)^3 = ∫_0^1 (z − µ)^3 z^{p−1} (1 − z)^{q−1} exp(−sz) dz / ∫_0^1 z^{p−1} (1 − z)^{q−1} exp(−sz) dz,

which can be seen to have the same sign as the third central moment, or skewness, of a Beta(p, q) random variable, which is negative when q > p.
A.6.5 Proof of Lemma A.5

Let Z ∼ CCH(p, q, r, s, ν, θ), write w(z) as in Appendix A.6.1, and let µ = E(Z). Then,

(∂/∂s) E(Z − µ)^3 = − ∫_0^{1/ν} (z − µ)^3 z^{p} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz + { ∫_0^{1/ν} (z − µ)^3 z^{p−1} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz } × { ∫_0^{1/ν} z^{p} w(z) dz / ∫_0^{1/ν} z^{p−1} w(z) dz }
  = −Cov(Z, (Z − µ)^3) = −E[(Z − µ){(Z − µ)^3 − E(Z − µ)^3}] = −E{(Z − µ)^4} + E(Z − µ)^3 E(Z − µ) = −E{(Z − µ)^4}.
References
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic
Control 19, 716–723.
Armagan, A., Clyde, M., and Dunson, D. B. (2011). Generalized beta mixtures of Gaussians.
In Shawe-Taylor, J., Zemel, R. S., Bartlett, P., Pereira, F. C. N., and Weinberger, K. Q., editors,
Advances in Neural Information Processing Systems 24, pages 523–531.
Armagan, A., Dunson, D. B., and Lee, J. (2013). Generalized double Pareto shrinkage. Statistica
Sinica 23, 119–143.
Barndorff-Nielsen, O., Kent, J., and Sørensen, M. (1982). Normal variance-mean mixtures and z
distributions. International Statistical Review 50, 145–159.
Bhadra, A., Datta, J., Polson, N. G., and Willard, B. (2016a). Default Bayesian analysis with global-local shrinkage priors. Biometrika 103, 955–969.
Bhadra, A., Datta, J., Polson, N. G., and Willard, B. (2016b). The horseshoe+ estimator of ultra-sparse signals. Bayesian Analysis, to appear, 1–27.
Bhattacharya, A., Pati, D., Pillai, N. S., and Dunson, D. B. (2015). Dirichlet-Laplace priors for
optimal shrinkage. Journal of the American Statistical Association 110, 1479–1490.
Bingham, N. H., Goldie, C. M., and Teugels, J. L. (1989). Regular variation, volume 27 of Encyclopedia
of mathematics and its applications. Cambridge University Press, Cambridge.
Carvalho, C. M., Polson, N. G., and Scott, J. G. (2010). The horseshoe estimator for sparse signals.
Biometrika 97, 465–480.
Casella, G. and Hwang, J. T. (1982). Limit expressions for the risk of James–Stein estimators.
Canadian Journal of Statistics 10, 305–309.
Castillo, I., Schmidt-Hieber, J., Van der Vaart, A., et al. (2015). Bayesian linear regression with
sparse priors. The Annals of Statistics 43, 1986–2018.
Chun, H. and Keles, S. (2010). Sparse partial least squares regression for simultaneous dimension reduction and variable selection. Journal of the Royal Statistical Society: Series B (Statistical
Methodology) 72, 3–25.
Clyde, M., Desimone, H., and Parmigiani, G. (1996). Prediction via orthogonalized model mixing.
Journal of the American Statistical Association 91, 1197–1208.
Craven, P. and Wahba, G. (1978). Smoothing noisy data with spline functions. Numerische Mathematik 31, 377–403.
Denison, D. G. T. and George, E. I. (2012). Bayesian prediction with adaptive ridge estimators, volume 8
of IMS Collections, pages 215–234. Institute of Mathematical Statistics, Beachwood, Ohio, USA.
Dicker, L. H. (2014). Variance estimation in high-dimensional linear models. Biometrika 101, 269–
284.
Dobriban, E. and Wager, S. (2015). High-dimensional asymptotics of prediction: Ridge regression
and classification. arXiv preprint arXiv:1507.03003 .
Efron, B. (1983). Estimating the error rate of a prediction rule: improvement on cross-validation.
Journal of the American Statistical Association 78, 316–331.
Efron, B. (2004). The estimation of prediction error: covariance penalties and cross-validation
(with discussion). Journal of the American Statistical Association 99, 619–642.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle
properties. Journal of the American statistical Association 96, 1348–1360.
Foster, D. P. and George, E. I. (1994). The risk inflation criterion for multiple regression. The Annals
of Statistics 22, 1947–1975.
Frank, I. E. and Friedman, J. H. (1993). A statistical view of some chemometrics regression tools.
Technometrics 35, 109–135.
George, E. I., Liang, F., and Xu, X. (2006). Improved minimax predictive densities under Kullback-Leibler loss. The Annals of Statistics 34, 78–91.
Ghosh, P., Tang, X., Ghosh, M., and Chakrabarti, A. (2016). Asymptotic properties of bayes risk
of a general class of shrinkage priors in multiple hypothesis testing under sparsity. Bayesian
Analysis 11, 753–796.
Gordy, M. B. (1998). A generalization of generalized beta distributions. In Finance and Economics
Discussion Series. Division of Research and Statistics, Division of Monetary Affairs, Federal Reserve Board.
Griffin, J. E. and Brown, P. J. (2010). Inference with normal-gamma prior distributions in regression
problems. Bayesian Analysis 5, 171–188.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The elements of statistical learning: data mining,
inference and prediction. Springer, New York, 2nd edition.
Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal
problems. Technometrics 12, 55–67.
Jolliffe, I. T. (1982). A note on the use of principal components in regression. Journal of the Royal
Statistical Society: Series C (Applied Statistics) 31, 300–303.
Mallows, C. L. (1973). Some comments on C p . Technometrics 15, 661–675.
Martin, R. and Walker, S. G. (2014). Asymptotically minimax empirical Bayes estimation of a
sparse normal mean vector. Electronic Journal of Statistics 8, 2188–2206.
Masreliez, C. (1975). Approximate non-Gaussian filtering with linear state and observation relations. IEEE Transactions on Automatic Control 20, 107–110.
Pericchi, L. and Smith, A. (1992). Exact and approximate posterior moments for a normal location
parameter. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 54, 793–804.
Polson, N. G. and Scott, J. G. (2011). Shrink globally, act locally: sparse Bayesian regularization
and prediction. In Bernardo, J. M., Bayarri, M. J., Berger, J. O., Dawid, A. P., Heckerman, D.,
Smith, A. F. M., and West, M., editors, Bayesian Statistics 9, Oxford. Oxford University Press.
Polson, N. G. and Scott, J. G. (2012a). Local shrinkage rules, Lévy processes, and regularized
regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74, 287–311.
Polson, N. G. and Scott, J. G. (2012b). On the half-Cauchy prior for a global scale parameter.
Bayesian Analysis 7, 887–902.
Raskutti, G., Wainwright, M. J., and Yu, B. (2011). Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Transactions on Information Theory 57, 6976–6994.
Ročková, V. and George, E. I. (2016). The spike-and-slab lasso. Journal of the American Statistical Association, to appear.
Ročková, V. and George, E. I. (2014). EMVS: The EM approach to Bayesian variable selection.
Journal of the American Statistical Association 109, 828–846.
Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. The Annals of
Statistics 9, 1135–1151.
Szakács, G., Annereau, J.-P., Lababidi, S., Shankavaram, U., Arciello, A., Bussey, K. J., Reinhold, W., Guo, Y., Kruh, G. D., Reimers, M., Weinstein, J. N., and Gottesman, M. M. (2004). Predicting drug sensitivity and resistance: Profiling ABC transporter genes in cancer cells. Cancer Cell 6, 129–137.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society: Series B (Statistical Methodology) 58, 267–288.
Tibshirani, R. J. and Taylor, J. (2012). Degrees of freedom in lasso problems. The Annals of Statistics
40, 1198–1232.
van der Pas, S., Kleijn, B., and van der Vaart, A. (2014). The horseshoe estimator: Posterior concentration around nearly black vectors. Electronic Journal of Statistics 8, 2585–2618.
van der Pas, S., Salomond, J.-B., and Schmidt-Hieber, J. (2016). Conditions for posterior contraction
in the sparse normal means problem. Electronic Journal of Statistics 10, 976–1000.
Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with g-prior
distributions. In Goel, P. K. and Zellner, A., editors, Bayesian Inference and Decision Techniques:
Essays in Honor of Bruno de Finetti, pages 233–243. Elsevier, North-Holland.
Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The
Annals of statistics 38, 894–942.
Supplementary Material to
Prediction risk for the horseshoe regression
Anindya Bhadra
Department of Statistics, Purdue University, 250 N. University Street, West Lafayette, IN 47907,
USA.
[email protected]
Jyotishka Datta
Department of Mathematical Sciences, University of Arkansas, Fayetteville, AR 72701, USA.
[email protected]
Yunfan Li
Department of Statistics, Purdue University, 250 N. University Street, West Lafayette, IN 47907,
USA.
[email protected]
Nicholas G. Polson and Brandon Willard
The University of Chicago Booth School of Business, 5807 S. Woodlawn Ave., Chicago, IL 60637,
USA.
[email protected], [email protected]
S.1 Additional simulations
We provide additional simulation results, complementing the results in Table 2. For each simulation setting, we report SURE when a formula is available. We also report the average out of
sample prediction SSE (standard deviation of SSE) computed based on one training set and 200
testing sets. For each setting, n = 100. The methods under consideration are ridge regression
(RR), principal components regression (PCR), the lasso, the adaptive lasso (A LASSO), the minimax concave penalty (MCP), the smoothly clipped absolute deviation (SCAD) penalty and the
proposed horseshoe regression (HS). The method with the lowest SSE is in bold and that with
lowest SURE is in italics for each setting. The features of these additional simulations include the
following.
1. We explore a higher dimensional case (p = 1000) for each setting.
2. We incorporate two non-convex regression methods for comparisons. These are SCAD (Fan
and Li, 2001) and MCP (Zhang, 2010).
3. We explore different choices of the design matrix X. These include three cases: (i) X is generated from a factor model, where it is relatively ill-conditioned (as in Table 2), (ii) X is generated from a standard normal, where it is well-conditioned and (iii) X is exactly orthogonal,
with all singular values equal to 1. These are reported in corresponding table captions.
4. We explore different choices of true α. These include three cases: (i) Sparse-robust α, where
most elements of α are close to zero and a few are large, (ii) null α, where all elements of α
are zero and (iii) dense α, where all elements are non-zero. Exact settings and the value of
||α||2 are reported in the table captions.
The major finding is that the horseshoe regression outperforms the other global shrinkage
methods (ridge and PCR) when α is sparse-robust, which is consistent with the theoretical observation in Section 5. It also outperforms the other selection-based methods in this case. On the
other hand, the dense α case is most often favorable to ridge regression, while the null α case is
favorable to selection-based methods such as the lasso, adaptive lasso, MCP or SCAD, due to the
ability of these methods to produce exact zero estimates.
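As a concrete illustration of this simulation protocol, the following is a minimal sketch (not the authors' code) of the sparse-robust, factor-model setting of Table S.1. The factor-model noise level, the ridge tuning parameter, and the reuse of the training design X for the test sets are assumptions, and scikit-learn's ridge regression stands in for the methods compared in the tables.

# Minimal sketch (assumed, not the authors' code) of the Table S.1 setting:
# one training set, 200 test sets, out-of-sample SSE for a stand-in fit (ridge).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p, n_test = 100, 200, 200

# X from a 4-factor model (relatively ill-conditioned); the idiosyncratic noise
# level 0.1 is an assumption.
F = rng.standard_normal((n, 4))
L = rng.standard_normal((4, p))
X = F @ L + 0.1 * rng.standard_normal((n, p))

# Sparse-robust alpha: five coefficients equal to 10, the rest +/-0.5 at random.
alpha = rng.choice([0.5, -0.5], size=p)
alpha[:5] = 10.0

y_train = X @ alpha + rng.standard_normal(n)
fit = Ridge(alpha=1.0).fit(X, y_train)   # placeholder tuning parameter

# Out-of-sample SSE over 200 test sets (new noise on the same X, an assumption).
sse = [np.sum((X @ alpha + rng.standard_normal(n) - fit.predict(X)) ** 2)
       for _ in range(n_test)]
print(f"mean SSE = {np.mean(sse):.2f} (sd = {np.std(sse, ddof=1):.2f})")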
Table S.1: Sparse-robust α (five large coefficients equal to 10 and the other coefficients equal to 0.5 or −0.5 at random, giving ∑_{i=1}^{n} α_i² = 523.75); X generated by a factor model with 4 factors, each factor following a standard normal distribution; d1/dn is the ratio of the largest and smallest singular values of X.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS, together with d1/dn, at p = 100, 200, 300, 400, 500 and 1000.]
Table S.2: Null α (∑_{i=1}^{n} α_i² = 0); X is the same as in Table S.1.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
Table S.3: Dense α (all coefficients equal to 2, giving ∑_{i=1}^{n} α_i² = 400); X is the same as in Table S.1.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
Table S.4: Sparse-robust α (five large coefficients equal to 10 and the other coefficients equal to 0.5 or −0.5 at random, giving ∑_{i=1}^{n} α_i² = 523.75); X follows a standard normal distribution; d1/dn is the ratio of the largest and smallest singular values of X.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS, together with d1/dn, at p = 100, 200, 300, 400, 500 and 1000.]
Table S.5: Null α (∑_{i=1}^{n} α_i² = 0); X is the same as in Table S.4.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
Table S.6: Dense α (all coefficients equal to 2, giving ∑_{i=1}^{n} α_i² = 400); X is the same as in Table S.4.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
Table S.7: Sparse-robust α (five large coefficients equal to 10 and the other coefficients equal to 0.5 or −0.5 at random, giving ∑_{i=1}^{n} α_i² = 523.75); X with all singular values equal to 1.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
Table S.8: Null α (∑_{i=1}^{n} α_i² = 0); X with all singular values equal to 1.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
Table S.9: Dense α (all coefficients equal to 2, giving ∑_{i=1}^{n} α_i² = 400); X with all singular values equal to 1.
[Table entries: SURE and out-of-sample SSE (sd) for RR, PCR, LASSO, A LASSO, MCP, SCAD and HS at p = 100, 200, 300, 400, 500 and 1000.]
General Phase Regularized Reconstruction using Phase Cycling
Frank Ong1, Joseph Cheng2, and Michael Lustig1
1 Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California.
2 Department of Radiology, Stanford University, Stanford, California.
arXiv:1709.05374v1 [cs.CV] 15 Sep 2017
September 19, 2017
Running head: General Phase Regularized Reconstruction using Phase Cycling
Address correspondence to:
Michael Lustig
506 Cory Hall
University of California, Berkeley
Berkeley, CA 94720
[email protected]
This work was supported by NIH R01EB019241, NIH R01EB009690, GE Healthcare, and NSF Graduate
Fellowship.
Approximate word count: 224 (Abstract) 3925 (body)
Submitted to Magnetic Resonance in Medicine as a Full Paper.
Part of this work has been presented at the ISMRM Annual Conference 2014 and 2017.
Abstract
Purpose: To develop a general phase regularized image reconstruction method, with applications to partial
Fourier imaging, water-fat imaging and flow imaging.
Theory and Methods: The problem of enforcing phase constraints in reconstruction was studied under a
regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed
to enable various joint reconstruction of partial Fourier imaging, water-fat imaging and flow imaging, along
with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently
non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase
cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was
applied to retrospectively under-sampled in vivo datasets and compared with state of the art reconstruction
methods.
Results: Phase cycling reconstructions showed a reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstructions of partial Fourier + water-fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS, were demonstrated.
Conclusion: The proposed phase cycling reconstruction provides an alternative way to perform phase
regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of
initial solutions and encourages the joint reconstruction of phase imaging applications.
1 Introduction
Phase variations in MRI can be attributed to a number of factors, including field inhomogeneity, chemical
shift, fluid flow, and magnetic susceptibility. These phase structures are often the bases of many useful
MRI applications: Image phase from B0 inhomogeneity is used for calibration and shimming. Others provide
important physiological information in clinical imaging methods, such as chemical shift imaging, phase
contrast imaging, and susceptibility imaging.
Phase structures in MRI provide opportunities in reconstruction to resolve undersampling artifacts or to
extract additional information. For example, in partial Fourier imaging, smoothness of the phase images is
exploited to reduce acquisition time by factors close to two. In water-fat imaging, chemical shift induced
phase shifts are used to separate water and fat images, while smoothness in B0 field inhomogeneity is
exploited to prevent water-fat swaps. These methods have either reduced acquisition time as in the case of
partial Fourier imaging, or provided more accurate diagnostic information as in the case of water-fat imaging.
In this paper, we study the problem of exploiting these phase structures in a regularized inverse problem
formulation. An inverse problem formulation allows us to easily incorporate parallel imaging (1,2) and compressed sensing (3) and utilize various phase regularizations for different applications. Our first contribution
is to present a unified reconstruction framework for phase regularized reconstruction problems. In particular,
using the same framework, we demonstrate the joint reconstruction of partial Fourier and water-fat imaging,
along with parallel imaging (PI) and compressed sensing (CS), and the joint reconstruction of partial Fourier
and divergence-free regularized flow imaging, along with PI + CS.
Since phase regularized image reconstructions are inherently non-convex and are sensitive to phase wraps
in the initial solution, we propose a reconstruction technique, named phase cycling, that enables phase
regularized reconstruction to be robust to phase wraps. The proposed phase cycling technique is inspired by
cycle spinning in wavelet denoising (4) and phase cycling in balanced steady state free precession. Instead of
unwrapping phase before or during the reconstruction, the proposed phase cycling makes the reconstruction
invariant to phase wraps by cycling different initial phase solutions during the reconstruction. Our key
difference with existing works is that the proposed phase cycling does not solve for the absolute phase in
reconstruction, but rather averages out the effects of phase wraps in reconstruction. We provide experimental
results showing its robustness to phase wraps in initial solutions.
Related Works
Reconstruction methods utilizing phase structures were proposed for water-fat reconstruction (5–13), B0
field map estimation (14), partial Fourier reconstruction (15–18) and divergence-free regularized 4D flow
reconstruction (19–21). In addition, Reeder et al. (22) demonstrated the feasibility of jointly performing
homodyne reconstruction and water-fat decomposition. Johnson et al. (23) presented results in jointly
reconstructing water-fat and flow images.
A closely related line of work is the separate magnitude and phase reconstruction method (24–27). In
particular, Fessler and Noll (24) proposed using alternating minimization for separate magnitude and phase
reconstruction, but their method remained sensitive to phase wraps in initial solution. Zibetti et al. (25) and
Zhao et al. (26) achieved robustness to phase wraps in initial solution by designing a regularization function
that resembles the finite difference penalty, and is periodic in the phase. The regularization function proposed
by Zhao et al. differs from the one proposed by Zibetti et al. in that it is edge-preserving using the Huber loss
function. One limitation of these methods is that they do not support general magnitude and phase operators,
or arbitrary phase regularization. In particular, they cannot be applied to water-fat image reconstruction
and flow image reconstruction with general velocity encoding, or regularization functions other than finite
difference. This restriction to finite difference penalty can lead to well-known staircase artifacts as shown in
Figure 6. In contrast, our proposed method can be applied to general phase imaging methods and supports
arbitrary phase regularization as long as its proximal operator can be computed. This allows us to enforce
application dependent regularization, such as wavelet sparsity penalty and divergence-free constraint for flow
imaging.
2 Theory
Forward Model and Applications
In this section, we describe the forward models for partial Fourier imaging, water-fat imaging and flow
imaging. We then show that these phase imaging methods can be described using a general forward model and
hence can be combined within the same framework. For example, in the experimental sections, we combine
partial Fourier with water-fat imaging, and partial Fourier with divergence-free regularized flow imaging,
under the same framework. Figure 1 provides illustrations of the forward models of these applications.
Figure 1: Illustration of the forward models y = A(M m · eıP p ) for partial Fourier imaging, water-fat imaging and flow imaging.
Partial Fourier Imaging
In partial Fourier imaging, a contiguous portion of k-space is not observed and the k-space data is observed
through the partial Fourier operator F and sensitivity operator S. Let m be the magnitude images and p
be the phase images, our forward model is given by:
y = F S (m · eıp ) + η    (1)
where · is the element-wise multiplication operator, ı denotes √−1, eıp denotes the element-wise exponential of the vector ıp, η is a complex white Gaussian noise vector, and y represents the acquired k-space data.
Traditional partial k-space reconstruction methods assume smoothness of the phase. Therefore, in our
formulation, we directly impose smoothness constraint on the phase image p. Sparsity of the magnitude
image can also be exploited for compressed sensing applications.
Water-fat Imaging
In water-fat imaging, we are interested in reconstructing water and fat images with different chemical shift
induced off-resonance, but the same B0 inhomogeneity induced off-resonance. In order to resolve these
images, k-space measurements are acquired over multiple echo times.
Concretely, let E be the number of echo times, mwater and mfat be the water and fat magnitude images
respectively, pwater and pfat be the water and fat phase images respectively, ∆f be the frequency difference
between water and fat under the single-peak model, and pfield be the B0 field inhomogenity. We denote Fe
as the Fourier sampling operator for each echo time te . Our forward model is given by:
ye = Fe S ((mwater eıpwater + mfat eıpfat eıte 2π∆f ) eıte pfield ) + ηe ,  for e = 1, . . . , E    (2)
We note that the water and fat images have independent phase images pwater and pfat , which are often implicitly captured in existing water-fat separation methods by representing the magnitude images as complex
images. These phase images can be attributed to RF pulse with spectral variation and B1 field inhomogeneity. We explicitly represent these phase images because they can be regularized as spatially smooth when
partial Fourier imaging is incorporated (22).
In order to separate the components, the first-order field inhomogeneity pfield is regularized to be spatially
smooth. In addition, sparsity constraints can be imposed on magnitude images for compressed sensing
applications.
Flow Imaging
In three-dimensional phase contrast imaging, we are interested in reconstructing phase images with three-dimensional velocity information. Concretely, we define p = (pbg , px , py , pz )> to be the background phase
image and velocity images along three spatial dimensions (x, y, z) respectively. The background phase
includes the B0 field inhomogeneity, and chemical shift induced phase. We also let V be the number of
velocity encodes, and Pv be the velocity encoding vector for velocity encode v. For example, the four point
balanced velocity encoding (28) vectors, ignoring scaling, have the form:
P1 = (+1, −1, −1, −1)
P2 = (+1, +1, +1, −1)
P3 = (+1, +1, −1, +1)
P4 = (+1, −1, +1, +1)    (3)
Then, our forward model is given by:
yv = Fv S (m · eıPv p ) + ηv ,  for v = 1, . . . , V    (4)
Since blood flow is incompressible and hence divergence-free, the velocity images px , py , pz can be constrained
to be divergence-free to provide more accurate flow rates (19–21). Smoothness constraint can also be imposed
on the background phase image to improve flow accuracy.
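As an illustration (not from the paper), the following sketch instantiates the four-point balanced encoding (3) and the flow forward model (4) for a single coil with fully sampled Fourier encoding; coil sensitivities, undersampling and noise are omitted.

# Sketch (assumed, single coil, fully sampled, noiseless) of the four-point
# balanced flow forward model: each velocity encode v observes the image with
# pixelwise phase Pv p, where p stacks (p_bg, p_x, p_y, p_z).
import numpy as np

P = np.array([[+1, -1, -1, -1],
              [+1, +1, +1, -1],
              [+1, +1, -1, +1],
              [+1, -1, +1, +1]], dtype=float)  # rows are P1 ... P4 (unscaled)

def flow_forward(m, p_bg, p_x, p_y, p_z):
    """Return the k-space data y_v for every velocity encode v."""
    phases = np.stack([p_bg, p_x, p_y, p_z])        # shape (4, ny, nx)
    ksp = []
    for Pv in P:
        encoded = np.tensordot(Pv, phases, axes=1)  # Pv p at every pixel
        ksp.append(np.fft.fft2(m * np.exp(1j * encoded), norm="ortho"))
    return ksp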
General Forward Model
With the right representations shown in Appendix A, the above phase imaging applications can all be
described using the following unified forward model:
y = A (M m · eıP p ) + η    (5)
where y is the acquired k-space measurements, m contains the magnitude images, p contains the phase
images, η is a white Gaussian noise vector, A is the forward operator for the complex images, M is the
forward operator for the magnitude image, P is the forward operator for the phase image and · is the
point-wise multiplication operator. We note that both m and p are real-valued.
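As a minimal sketch (assumed, not the authors' implementation), the general forward model can be evaluated directly; the partial Fourier special case below takes M = P = I and A = F S, with synthetic coil sensitivities and a 5/8 sampling mask as placeholders.

# Sketch (assumed) of y = A(M m . exp(i P p)) specialized to partial Fourier
# imaging: M = P = I and A = F S (coil sensitivities, Fourier transform, mask).
import numpy as np

def forward(m, p, sens, mask):
    """Evaluate y = F S (m * exp(i p)) for a stack of coil sensitivity maps."""
    x = m * np.exp(1j * p)                        # complex image M m . exp(i P p)
    coil_images = sens * x[None, ...]             # S: multiply by each coil map
    ksp = np.fft.fft2(coil_images, norm="ortho")  # F: 2D FFT per coil
    return ksp * mask[None, ...]                  # keep only sampled locations

# Synthetic placeholders: 8 coils, 64x64 image, 5/8 partial Fourier along one axis.
rng = np.random.default_rng(0)
m = np.abs(rng.standard_normal((64, 64)))
p = 0.1 * rng.standard_normal((64, 64))
sens = rng.standard_normal((8, 64, 64)) + 1j * rng.standard_normal((8, 64, 64))
mask = np.zeros((64, 64))
mask[: int(64 * 5 / 8), :] = 1
y = forward(m, p, sens, mask)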
Objective function
To reconstruct the desired magnitude images m and phase images p, we consider the following regularized
least squares function:
½ ‖y − A(M m · eıP p )‖₂² + gm (m) + gp (p)    (6)
where gm and gp are regularization functions for magnitude and phase respectively. For notation, we split
our objective function into a data consistency term f (m, p) and a regularization term g(m, p).
We note that our objective function is non-convex regardless of the application-dependent linear operators
A, M , and P , because the exponential term has periodic saddle points in the data consistency function with
respect to p. In addition, the forward model is bi-linear in the magnitude and complex exponential variables.
Algorithm
Since the phase regularized reconstruction problem is non-convex in general, finding the global minimum is
difficult. In fact, finding a local minimum is also difficult with conventional gradient-based iterative methods
as saddle points have zero gradients. Instead we aim for a monotonically descending iterative algorithm
to ensure the reconstructed result reduces the objective function. In particular, we perform alternating
minimization with respect to the magnitude and phase images separately and use the proximal gradient
method (29, 30) for each sub-problem, which is guaranteed to descend with respect to the objective function
in each iteration. Since the objective function is reduced in each iteration as long as the gradient is non-zero,
the algorithm eventually converges to a stationary point with zero gradient, which can be either a local
minimum or a saddle point, assuming the initialization is not exactly at a local maximum. A high level
illustration is shown in Figure 2.
Concretely, applying proximal gradient descent on our objective function (6), we perform a gradient
descent step with respect to the data consistency function f (m, p) and a proximal operation with respect
to the regularization function g(m, p). Then, we obtain the following update step for magnitude images m
with fixed phase images p at iteration n:
mn+1 = Pαn gm (mn − αn ∇m f (mn , p))    (7)
and the following update step for phase images p with fixed magnitude images m at iteration n:
pn+1 = Pαn gp (pn − αn ∇p f (m, pn ))    (8)
where Pg (x) = argminz ½‖z − x‖₂² + g(z) denotes the proximal operator for the function g, and αn is the
step-size for iteration n. Note that implicitly, we require the regularization functions to have simple proximal
operators that can be evaluated efficiently, which is true for most commonly used regularization functions.
Examples of proximal operators include wavelet soft-thresholding for wavelet `1 norm, weighting for `2 norm
and projection for any indicator function for convex sets. We refer the reader to the manuscript of Parikh
and Boyd (30) for an overview.
Using the CR calculus (31), we can derive exact expressions for the gradient terms. Substituting the
gradient terms, the update steps (7, 8) at iteration n can be explicitly written as:
rn = A∗ (y − A(M mn · eıP p ))
mn+1 = Pαn gm (mn + αn Re(M ∗ (e−ıP p · rn )))    (9)
and
rn = A∗ (y − A(M m · eıP pn ))
pn+1 = Pαn gp (pn + αn Im(P ∗ (M m · e−ıP pn · rn )))    (10)
where rn can be interpreted as the residual complex image at iteration n.
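A sketch (assumed) of these update steps for the partial Fourier special case; forward, sens, and mask are from the earlier sketch, and a plain image-domain soft-threshold stands in for the paper's wavelet proximal operators.

# Sketch (assumed) of the update steps (9) and (10) with M = P = I. The wavelet
# proximal operators are replaced by image-domain soft-thresholding.
import numpy as np

def adjoint(ksp, sens, mask):
    """A*: mask, inverse Fourier transform per coil, then coil combination."""
    coil_images = np.fft.ifft2(ksp * mask[None, ...], norm="ortho")
    return np.sum(np.conj(sens) * coil_images, axis=0)

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def magnitude_update(m, p, y, sens, mask, alpha, lam):
    r = adjoint(y - forward(m, p, sens, mask), sens, mask)  # residual image r_n
    step = m + alpha * np.real(np.exp(-1j * p) * r)         # gradient step in m
    return soft_threshold(step, alpha * lam)                # prox of magnitude penalty

def phase_update(m, p, y, sens, mask, alpha, lam):
    r = adjoint(y - forward(m, p, sens, mask), sens, mask)
    step = p + alpha * np.imag(m * np.exp(-1j * p) * r)     # gradient step in p
    return soft_threshold(step, alpha * lam)                # prox of phase penalty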
Figure 2: Conceptual illustration of the proposed algorithm. The overall algorithm alternates between magnitude
update and phase update. Each sub-problem is solved using proximal gradient descent.
Phase Cycling
While the above algorithm converges to a stationary point, this stationary point is very sensitive to the
phase wraps in the initial solution in practice. This is because phase wraps are highly discontinuous and are
artifacts from the initialization method. Phase regularization in each iteration penalizes these discontinuities,
causing errors to accumulate at the same position over iterations, and can result in significant artifacts in
the resulting solution. Figure 4 shows an example of applying smoothing regularization on the phase image
Figure 3: Conceptual illustration of the proposed phase cycling reconstruction technique. Phase cycling achieves
robustness towards phase wraps by spreading the artifacts caused by regularizing phase wraps spatially.
by soft-thresholding its Daubechies wavelet coefficients. While the general noise level is reduced, the phase
wraps are also falsely smoothened, causing errors around it, as pointed by the red arrows. These errors
accumulate over iterations and cause significant artifacts near phase wraps in the reconstructed image, as
pointed out by the yellow arrow in Figure 5. The supporting videos S1 and S2 show the development of
the reconstruction results over iterations without and with phase cycling for the experiment in Figure 5,
demonstrating the convergence behavior described above.
To mitigate these artifacts from regularizing phase wraps, we propose a reconstruction technique, called
phase cycling, to make our reconstruction invariant to phase wraps in the initial solution. Our key observation
is that even though it is difficult to reliably unwrap phase, the position of these phase wraps can easily
be shifted to a different spatial location by adding a constant global phase. Hence, artifacts caused by
phase regularization can also be shifted spatially to a different location. Our phase cycling method simply
proposes to shift the phase wraps randomly over iterations, illustrated in Figure 3, to prevent significant
error accumulation at the same spatial location. Then over iterations, the artifacts are effectively averaged
spatially.
Concretely, let W be the set of phase wraps generated from the initial solution. Then for each iteration
n, we randomly draw a phase wrap wn from W with equal probability, and propose the following phase
Figure 4: An example of applying smoothing regularization on the phase image by soft-thresholding its Daubechies
wavelet coefficients. While the general noise level is reduced, the phase wraps are also falsely smoothened, causing
errors around it, as pointed by the red arrow. These errors accumulate over iterations and cause significant artifacts
near phase wraps as shown in Figure 5.
cycled update for phase images p with fixed magnitude images m at iteration n:
pn+1 = Pαn gp (pn + wn − αn ∇p f (m, pn )) − wn    (11)
A pseudocode of the proposed algorithm with phase cycling is included in Appendix B.
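In code, the phase-cycled update (11) only adds a random shift before the proximal step and removes it afterwards; a sketch with hypothetical helper arguments:

# Sketch (assumed) of the phase-cycled update (11). `grad_p` is the gradient of
# the data consistency at p_n, `prox_phase(x, t)` is the proximal operator of
# t * g_p, and `phase_wraps` is the set W of candidate wrap patterns.
import numpy as np

def phase_cycled_update(p, grad_p, prox_phase, phase_wraps, alpha, rng):
    w = phase_wraps[rng.integers(len(phase_wraps))]  # draw w_n uniformly from W
    shifted = (p + w) - alpha * grad_p               # gradient step on shifted phase
    return prox_phase(shifted, alpha) - w            # prox, then undo the shift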
Finally, we note that the phase cycled update steps can be viewed as an inexact proximal gradient method
applied on the following robust objective function:
½ ‖y − A(M m · eıP p )‖₂² + gm (m) + (1/|W|) ∑w∈W gp (p + w)    (12)
where the phase regularization function is averaged over phase wraps. We refer the reader to Appendix C
for more details.
3 Methods
The proposed method was evaluated on partial Fourier imaging, water-fat imaging and flow imaging applications. Parallel imaging was incorporated in each application. Sensitivity maps were estimated using
ESPIRiT (32), an auto-calibrating parallel imaging method, using the Berkeley Advanced Reconstruction
Toolbox (BART) (33). All reconstruction methods were implemented in MATLAB (MathWorks, Natick,
MA), and run on a laptop with a 2.4 GHz Intel Core i7 with 4 multi-cores, and 8GB memory. Unless
specified otherwise, the magnitude image was regularized with the `1 norm on the Daubechies 4 wavelet
transform for the proposed method. The number of outer iteration was set to be 100, and the number of
inner iteration for both magnitude and phase was set to be 10. The regularization parameters were selected
by first fixing the magnitude regularization parameter, and optimizing the phase regularization parameter
over a grid of parameters with respect to mean squared error. And then fixing the phase regularization
parameter, and optimizing the magnitude regularization parameter similarly. The step-size for the magnitude update was chosen to be
1
λmax (A∗ A)λmax (M ∗ M ) ,
1
λmax (A∗ A)λmax (P ∗ P ) max(|M m|2 ) ,
where λmax denotes the maximum eigenvalue.
and the step-size for the phase update was chosen to be
Partial Fourier Imaging
A fully-sampled dataset from a human knee was acquired on a 3T GE Discovery MR 750 scanner (GE
Healthcare, Waukesha, WI) and 8-channel knee coil, with a 3D FSE CUBE sequence, TE/TR = 25/1550
ms, 40 echo train length, image size 320×320×256, and spatial resolution of 0.5×0.5×0.6 mm3 as described
in Epperson et al. (34) and is available online at http://www.mridata.org/. Another fully-sampled dataset
from a human brain was acquired on a 1.5T GE Signa scanner (GE Healthcare, Waukesha, WI) with an 8-channel head coil, 3D GRE sequence, TE/TR = 5.2 ms / 12.2 ms, image size of 256 × 256 × 230 and spatial
resolution of 1 mm. 2D slices were extracted along the readout direction for the experiments. Image masks
for displaying the phase images were created by thresholding the bottom 10% of the magnitude images.
A partial Fourier factor of 5/8 was retrospectively applied on both datasets. The brain dataset was further
retrospectively under-sampled by 4 with variable density Poisson-disk pattern and a 24 × 24 calibration
region. The proposed method with and without phase cycling were applied on the under-sampled datasets
and compared. `1 regularization was imposed on the Daubechies 6 wavelet domain for the phase image. The
homodyne method from Bydder et al. (17) with `1 regularization on the wavelet domain was applied and
compared. The method was chosen for comparison because it was shown to be robust to errors in the phase
estimate as it penalizes the imaginary component instead of enforcing the image to be strictly real. The
original formulation included only `2 regularization on the real and imaginary components separately. To
enable a fair comparison using similar image sparsity models, we modified the method to impose `1 wavelet
regularization on the real and imaginary components separately to exploit wavelet sparsity, which achieved
strictly smaller mean squared error than the original method. The number of iterations was set to be 1000.
The regularization parameters were set similarly to how the magnitude and phase regularization parameters
were selected.
Besides visual comparison, the quality of the reconstructed magnitude images was evaluated using the
peak signal-to-noise ratio (PSNR). Given a reference image xref , and a reconstructed image xrec , PSNR is
defined as:
PSNR(xref , xrec ) = 20 log10 ( max(xref ) / ‖xref − xrec ‖₂ )    (13)
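A direct transcription (with an assumed helper name; the maximum is taken over the magnitude for complex-valued references):

import numpy as np

def psnr(x_ref, x_rec):
    """PSNR as defined in (13)."""
    return 20 * np.log10(np.max(np.abs(x_ref)) / np.linalg.norm(x_ref - x_rec))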
For our proposed method, the magnitude image and phase image were initialized from the zero-filled
reconstructed image, that is m = |A> y| and p = ∠(A> y).
Water-Fat Imaging
Fully sampled water-fat datasets were obtained from the ISMRM Water-Fat workshop toolbox (available
online at http://www.ismrm.org/workshops/FatWater12/data.htm). In particular, an axial slice of the
liver with 8-channel coil array dataset was used, which also appeared in the paper of Sharma et al. (9). The
dataset was acquired on a 3T Signa EXCITE HDx system (GE Healthcare, Waukesha, WI), using a GE investigational IDEAL 3D spoiled-gradient-echo sequence at three TE points with TE = [2.184, 2.978, 3.772] ms, BW = ±125 kHz, flip angle of 5 degrees and a 256 × 256 sampling matrix. Image masks for displaying
the phase images were created by thresholding the bottom 10% of the root-sum-of-squared of the magnitude
images.
The liver dataset was retrospectively under-sampled by 4 with a variable density Poisson-Disk sampling
pattern. Our proposed method was applied and compared with algorithm of Sharma et al. (9) both for the
fully-sampled and under-sampled datasets. The implementation in the Water-Fat workshop toolbox was
used with modification to impose the same wavelet transforms as the proposed method, and the default
parameters were used.
An axial slice of the thigh dataset from the ISMRM Water-Fat workshop toolbox was also used, with
the same parameters. The dataset was retrospectively under-sampled by 4 with a variable density Poisson
Disk sampling pattern and an additional 9/16 partial Fourier factor. Our proposed method was applied and
compared with the result of applying the algorithm of Sharma et al. on the fully-sampled dataset. An `1
regularization on the Daubechies-4 wavelet transform of the image phase was applied.
For our proposed method, the field map was initialized as zero images, the magnitude images were initialized as (mwater , mfat )> = |M > A> y|, and the phase images (pwater , pfat )> were extracted from ∠(M > A> y).
Divergence-free Regularized Flow Imaging
Four 4D flow datasets of pediatric patients with tetrahedral flow encoding were acquired with 20 cardiac
phases, 140 slices and an average spatial resolution of 1.2 × 1.2 × 1.5mm3 . The 4D flow acquisitions were
performed on a 1.5T Signa Scanner (GE Healthcare, Waukesha, WI) with an 8 channel array using a spoiled
gradient-echo-based sequence. The acquisitions were prospectively under-sampled by an average factor of
3.82 with variable density Poisson-disk undersampling. The flip angle was 15 degrees and average TR/TE
was 4.94/1.91 ms. The performance of the proposed method was compared with `1-ESPIRiT (32), a PI +
CS algorithm. Volumetric eddy-current correction was performed on velocity data. Segmentations for flow
calculations were done manually on the aorta (AA) and pulmonary trunk (PT). Net flow rate and regurgitant
fraction were calculated for each segmentation. Since the datasets did not contain phase wraps in the region
of interest, phase unwrapping was not performed. Image masks for displaying the phase images were created
by thresholding the bottom 20% of the magnitude images. One of the `1 ESPIRiT reconstruction result was
further processed with divergence-free wavelet denoising (35) to compare with the proposed method.
Another flow dataset of pediatric patient, with randomized flow encode undersampling using the VDRad
sampling pattern (36) and a partial readout factor of 0.7, was acquired with 20 cardiac phase. The proposed
method was applied and compared to `1-ESPIRiT.
For our proposed method, an `1 regularization on the divergence-free wavelet transform (35) of the flow
images was used to impose divergence-free constraint and an `1 regularization on the Daubechies 4 wavelet
coefficients of the background phase was used to impose smoothness. The flow images were initialized as
zero images and the background phase image pbg was extracted from the first velocity encode of ∠(A> y).
4 Results
In the spirit of reproducible research, we provide a software package in MATLAB to reproduce most of the
results described in this paper. The software package can be downloaded from:
https://github.com/mikgroup/phase_cycling.git
Partial Fourier Imaging
Supporting Figure S1 shows the partial Fourier reconstruction results combined with PI on the knee dataset.
The figure compares the proposed reconstruction with and without phase cycling along with the homodyne
reconstruction method described in Bydder et al. (17). For the reconstruction without phase cycling, significant artifacts near the phase wraps can be seen in the magnitude image, as pointed by the yellow arrow.
The proposed reconstruction with phase cycling also performs comparably with the state-of-the-art and did
not display significant artifacts. In terms of PSNR, the method of Bydder et al. resulted in 34.55 dB, the
proposed method without phase cycling resulted in 32.83 dB, and the proposed method with phase cycling
resulted in 34.93 dB. One instance of our Matlab implementation of the proposed method took 4 minutes
and 6 seconds.
Figure 5 shows the partial Fourier reconstruction results combined with PI and CS on the brain dataset
with the proposed method. The figure compares the proposed reconstruction with and without phase cycling.
Again, without phase cycling, significant artifacts can be seen in the magnitude image near phase wraps in
the initial solution, pointed by the yellow arrow. Reconstruction with phase cycling does not display these
artifacts. Figure 6 shows the results for the method of Bydder et al. and Zhao et al. As pointed out by the
red arrows, the method of Bydder et al. shows higher error in the magnitude image compared to proposed
method with phase cycling in these regions. The method of Zhao et al. shows higher magnitude image
error in general, and displays staircase artifacts in the phase image, which are common in total variation
regularized images, as pointed by the yellow arrow. In terms of PSNR, the method of Bydder et al. resulted
in 33.64 dB, the method of Zhao et al. resulted in 30.62 dB, the proposed method without phase cycling
resulted in 30.35 dB, and the proposed method with phase cycling resulted in 34.91 dB. One instance of our
Matlab implementation of the proposed method took 1 minute and 53 seconds. We note that the severity
of the artifact in the reconstruction without phase cycling for the brain dataset is much stronger than for
the knee dataset because a higher regularization was needed to obtain lower reconstruction mean squared
error. This is because the brain dataset was further under-sampled for CS. Larger regularization led to larger
thresholding errors around the phase wraps and hence more significant artifacts in the resulting reconstructed
images.
Water-Fat Imaging
Supporting Figure S2 shows the water-fat reconstruction results on the liver dataset, combined with PI and
CS for the under-sampled case. For the fully sampled case, the water and fat images from the proposed
method with phase cycling are comparable with the state-of-the-art water-fat reconstruction result using
the method of Sharma et al. (9). Reconstruction from under-sampled data with the proposed method also
results in similar image quality as the fully-sampled data and is consistent with the result shown in Sharma et
al. (9). One instance of our Matlab implementation of the proposed method took 8 minutes and 27 seconds.
Figure 5: Partial Fourier + PI reconstruction results on a knee dataset. Without phase cycling, significant artifacts
can be seen in the magnitude image near phase wraps in the initial solution, pointed by the yellow arrow. With phase
cycling, these artifacts were reduced and the result is comparable to the robust iterative partial Fourier method with
`1 wavelet described in Bydder et al.
Figure 7 shows the water-fat reconstruction results on the thigh dataset, combined with partial Fourier,
PI and CS. Our proposed method produces similar water and fat images on a partial Fourier dataset, as
the fully-sampled reconstruction using the method of Sharma et al. (9). This demonstrates the feasibility of
performing joint partial Fourier and water fat image reconstruction along with PI and CS using the proposed
method. One instance of our Matlab implementation of the proposed method took 8 minutes and 23 seconds.
Figure 6: Partial Fourier + PI + CS reconstruction results on a brain dataset for the proposed method. Similar to
Supplementary Figure S1, without phase cycling, significant artifacts can be seen in the magnitude image near phase
wraps in the initial solution, as pointed by the yellow arrow.
Divergence-free Regularized Flow Imaging
Figure 8 shows the net flow rates calculated for four patient data. Both `1-ESPIRiT reconstruction and
proposed reconstruction resulted in similar flow rates. Maximum difference in regurgitant fractions was 2%.
Figure 7: Partial Fourier + PI + CS reconstruction comparison results on the same brain dataset in Figure 5. The
method of Bydder shows higher error in the magnitude image compared to proposed method with phase cycling in
regions pointed out by the red arrows. The method of Zhao et al. shows higher magnitude image error in general, and
displays staircase artifacts in the phase image, which are common in total variation regularized images, as pointed
by the yellow arrow.
Figure 8: Water-fat + PI + CS reconstruction result on a liver dataset with three echoes. Both the method from
Sharma et al. and our proposed method produce similar water and fat images on the fully-sampled dataset and the
retrospectively under-sampled dataset.
The comparison showed that in these four cases, our proposed method provided comparable measurements
to those obtained from `1-ESPIRiT, which were shown to correlate with 2D phase contrast flow rates (37). A
representative velocity image and speed map are shown in Figure 9. Visually, the proposed method reduces
incoherent artifacts and noise compared to the other results, especially in regions pointed by the red arrows
where there should be no fluid flow.
Figure 10 shows the result of reconstructing a partial readout 4D flow dataset with randomized velocity
encoding. The reconstructed magnitude image has significantly reduced blurring from partial readout,
compared to the `1-ESPIRiT result. We also note that the velocity images are not masked and that velocities
in low magnitude regions are naturally suppressed with the proposed reconstruction. One instance of our
Matlab implementation of the proposed method took on the order of three hours.
Figure 9: Water-fat + partial Fourier + PI + CS reconstruction result on a thigh dataset. Our proposed method
produce similar water and fat images on a under-sampled partial Fourier dataset compared to the fully-sampled
reconstruction using the method from Sharma et al.
Discussion
In this work, we describe a unified framework and algorithm for phase regularized reconstructions. By presenting various phase regularized image reconstructions within the same framework, we are able to combine
Figure 10: Net flow rates across 4 studies for the proposed method compared with `1-ESPIRiT, calculated across segmentations of aorta (AA) and pulmonary trunk (PT). Both `1-ESPIRiT reconstruction and proposed reconstruction
result in similar flow rates.
Figure 11: Divergence-free wavelet regularized flow imaging + PI + CS reconstruction result on a 4D flow dataset,
compared with `1-ESPIRiT and `1-ESPIRiT followed by divergence-free wavelet denoising. Visually, the proposed
method reduces incoherent artifacts and noise compared to the other results, especially in regions pointed by the red
arrows where there should be no fluid flow.
Figure 12: Divergence-free wavelet regularized flow imaging + PI + CS reconstruction results on a 4D flow dataset.
The reconstructed magnitude image has significantly reduced blurring from partial readout, compared to the `1-ESPIRiT result.
these phase sensitive imaging methods to improve the reconstructed results: Our result in partial Fourier
+ water-fat imaging + PI + CS shows the ability to further accelerate water-fat imaging with PI + CS by
combining it with partial Fourier, while achieving comparable image quality as the fully sampled images.
Our result in divergence-free flow + partial Fourier + PI + CS also shows the ability to achieve improvements
in apparent image resolution over standard PI + CS reconstruction. The proposed framework shows promise
and encourages the use of joint application of phase sensitive imaging techniques.
One advantage of our proposed method is that more appropriate phase regularization can be enforced
for each application. In particular, we compared our proposed method with the methods of Bydder et al.,
and Zhao et al. for partial Fourier imaging. The method of Bydder et al. requires imposing regularization
on the image imaginary component, which does not necessarily correlate with regularization of the phase
image. The method of Zhao et al., on the other hand, only supports finite difference penalties, which can
result in staircase artifacts, shown in Figure 6. Our proposed method can impose a more general class of
phase regularization, and in the case of partial Fourier imaging, the proposed method with Daubechies-6
wavelet sparsity constraint on the phase image resulted in the highest PSNR among the compared methods.
For water-fat imaging, we compared our proposed method with the method of Sharma et al., and achieved
similar image quality for both fully-sampled and under-sampled datasets. With phase cycling, our proposed
method can be extended to include partial Fourier imaging in a straightforward way. Since the method of
Sharma et al. solves for the absolute phase, it is unclear whether their proposed restricted subspace model
can be extended to enforce smoothness of water and fat phase images, which often contain phase wraps,
and more effort is needed to investigate such extension. Finally, through comparing the proposed method
on divergence-free flow imaging and l1-ESPIRiT with divergence-free wavelet denoising, we have shown the
advantage of joint PI + CS reconstruction utilizing phase structure over separate application of PI + CS
reconstruction followed by phase image denoising.
Further improvement in these applications can be obtained by incorporating additional system information. For example, a multi-peak model can be used in water-fat image reconstruction by extending the forward model to:
ye = Fe S ((mwater eıpwater + mfat eıpfat ∑_{j=1}^{J} aj eıte 2π∆fj ) eıte pfield ) + ηe ,  for e = 1, . . . , E    (14)
where J, aj and ∆fj are the number of fat peaks, the relative amplitude for the jth peak, and the frequency
difference between the jth fat peak and water respectively. Temporal constraints can also be added in 4D
flow reconstruction to improve the result. One example would be separately enforcing group sparsity on the
Daubechies wavelet coefficients of the magnitude images over time, and divergence-free wavelet coefficients
of velocity phase images over time, via the group `1 norm.
Similar to other iterative methods, our proposed method requires parameter selections for the regularization functions. One disadvantage of our proposed method over standard PI + CS methods is that we
have two parameters to tune for, and thus requires more effort in selecting these parameters. On the other
hand, since phase values are bounded between π and −π, we found that the phase regularization parameter
often translates fairly well between experiments with similar settings. That is, a fixed phase regularization
parameter results in similar smoothness for different scans with similar undersampling factor and noise level.
Since we aim for the joint reconstruction of phase images, phase unwrapping becomes a difficult task.
The proposed phase cycling provides an alternative and complementary way of regularizing phases during
reconstruction, without the need for phase unwrapping. In cases when the absolute phase is needed, for
example in flow imaging with low velocity encodes, phase unwrapping can be performed on the phase
regularized reconstructed images, which has the potential to improve phase unwrapping performance due to
the reduced artifacts.
Finally, we note that the proposed method with phase cycling still requires a good phase initialization,
as the overall objective is non-convex. Our proposed phase cycling only enables the ability to regularize
phase without phase unwrapping. In particular, if the initial phase estimate is randomly initialized, then
the resulting reconstruction will be much worse than the one with a good initialization. Finding a good
phase initialization for all phase imaging methods remains an open problem. Phase initialization for our
proposed method is still manually chosen depending on the application, as described in the Methods section.
In our experiments, we found that often either initializing as the phase of the zero-filled reconstructed image,
or simply a zero-valued phase image provides good initialization for our proposed method. We note that
a better initial solution using existing PI + CS methods, such as `1-ESPIRiT, can potentially result in
a more accurate solution for our proposed method. In our experiments, we found that zero-filled images
as starting solutions have already provided adequate starting points with much lower computational cost,
and initializing with `1 ESPIRiT did not provide noticeable improvements. However, for more aggressively
accelerated scans, we expect a better initialization can improve the reconstructed result.
Conclusion
The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need of performing phase unwrapping. The proposed method showed reduction of
artifacts compared to reconstructions without phase cycling. The advantage of supporting arbitrary regularization functions, and general magnitude and phase linear models was demonstrated by comparing with
other state-of-the-art methods. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and
partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed
method unifies reconstruction of phase sensitive imaging methods and encourages their joint application.
A Phase imaging representation under the general forward model
We first consider representing partial Fourier imaging under the general forward model. Let us define I
to be the identity operator with input and output size equal to the image spatial size. We also define the
magnitude, phase and complex linear operators to be M = I, P = I, and A = F S. Then partial Fourier
imaging forward model can be represented as y = A(M m · eıP p ) + η.
Next, to represent water-fat imaging under the general forward model, let us define the magnitude images
m to be (m_water, m_fat)^T, and the phase images to be (p_water, p_fat, p_field)^T. We also define the magnitude operator
M, phase operator P, and the complex operator A to be:
\[
M = \begin{bmatrix} I & 0 \\ 0 & I \\ \vdots & \vdots \\ I & 0 \\ 0 & I \end{bmatrix},\qquad
P = \begin{bmatrix} I & 0 & t_1 \\ 0 & I & t_1 \\ \vdots & \vdots & \vdots \\ I & 0 & t_E \\ 0 & I & t_E \end{bmatrix},\qquad
A = \begin{bmatrix} F_1 S & F_1 S e^{j t_1 2\pi\Delta f} & & & 0 \\ & & \ddots & & \\ 0 & & & F_E S & F_E S e^{j t_E 2\pi\Delta f} \end{bmatrix}
\tag{15}
\]
where the 2 × 2 identity block in M is repeated E times. Then the water-fat imaging forward model (2) can
be represented as y = A(Mm · e^{iPp}) + η.
Finally, we consider representing flow imaging under the general forward model. For concreteness, we
consider the four-point balanced velocity encoding as an example. Recall that we define p = (p_bg, p_x, p_y, p_z)^T.
Let us define the magnitude operator M, phase operator P, and the complex operator A to be:
\[
M = \begin{bmatrix} I \\ I \\ I \\ I \end{bmatrix},\qquad
P = \begin{bmatrix} I & -I & -I & -I \\ I & +I & +I & -I \\ I & +I & -I & +I \\ I & -I & +I & +I \end{bmatrix},\qquad
A = \begin{bmatrix} F_1 S & 0 & 0 & 0 \\ 0 & F_2 S & 0 & 0 \\ 0 & 0 & F_3 S & 0 \\ 0 & 0 & 0 & F_4 S \end{bmatrix}
\tag{16}
\]
Then the flow imaging forward model (4) can be represented as y = A(Mm · e^{iPp}) + η.
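As a small numerical check of this encoding, the sketch below builds the per-pixel velocity-encoding matrix of Equation (16) and verifies that the four encoded phases determine the background and velocity-induced phases uniquely. The phase values are synthetic, chosen only for illustration.

```python
# Check that the four-point balanced velocity encoding P of Equation (16) is invertible per pixel.
import numpy as np

P = np.array([[1, -1, -1, -1],
              [1, +1, +1, -1],
              [1, +1, -1, +1],
              [1, -1, +1, +1]], dtype=float)

p_true = np.array([0.3, 0.1, -0.2, 0.05])      # (p_bg, p_x, p_y, p_z) at one pixel (synthetic)
encoded = P @ p_true                            # phases seen by the four velocity encodes
recovered, *_ = np.linalg.lstsq(P, encoded, rcond=None)
print(np.allclose(recovered, p_true))           # True: the encoding can be inverted
```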
B  Pseudocode for phase regularized reconstruction with phase cycling
Algorithm 1 summarizes the proposed reconstruction method with phase cycling.
C  Phase Cycling as an Inexact Proximal Gradient Method
In this section, we show that our proposed phase-cycling is an instance of the inexact proximal splitting
method described in Sra’s paper (38). Following its result, the proposed phase cycling converges to an
inexact stationary point.
Concretely, the inexact proximal splitting method in Sra’s paper considers the following minimization
problem:
min_p  f(p) + g(p)   (17)
and utilizes the following update steps:
p_{n+1} = P_{α_n g}(p_n − α_n ∇f(p_n) + α_n e_n)   (18)
where e_n is the error at iteration n. The results in Sra's paper (38) show that the algorithm converges to
a point close to a stationary point, with distance proportional to the iteration error e_n.
To translate this result to phase cycling, we need to express the error in the proximal operation as an additive
error.
In the context of phase cycling, our objective function consists of f(p) = (1/2)‖y − A(Mm · e^{iPp})‖₂² and
g(p) = (1/|W|) Σ_{w∈W} g_p(p + w). Let us define the regularization function error at iteration n to be
ε_n(p) = g_p(p + w_n) − (1/|W|) Σ_{w∈W} g_p(p + w). Then the proposed phase cycling update step can be written as:
p_{n+1} = P_{α_n g + α_n ε_n}(p_n − α_n ∇f(p_n))   (19)
Now, we recall that the proximal operator is defined as P_g(x) = argmin_z (1/2)‖z − x‖₂² + g(z). Then using
the first-order optimality condition, we obtain that z* = P_g(x) if and only if z* = x − ∇g(z*), where ∇g(z*)
is a subgradient of g at z*. Hence, we can rewrite equation (19) as
p_{n+1} = p_n − α_n ∇f(p_n) − α_n ∇g(p_{n+1}) − α_n ∇ε_n(p_{n+1})
        = P_{α_n g}(p_n − α_n ∇f(p_n) − α_n ∇ε_n(p_{n+1}))   (20)
Algorithm 1  Pseudocode for phase regularized reconstruction with phase cycling
Input:
  y – observed k-space
  m_0 – initial magnitude images
  p_0 – initial phase images
  A – complex linear operator
  M – magnitude linear operator
  P – phase linear operator
  W – set of phase wraps
  N – number of outer iterations
  K – number of inner iterations for the magnitude and phase updates
Output:
  m_N – reconstructed magnitude images
  p_N – reconstructed phase images
Function:
  α_m = 1 / (λ_max(A*A) λ_max(M*M))
  for n = 0, …, N − 1 do
    // Update m with fixed p_n
    m_{n,0} = m_n
    for k = 0, …, K − 1 do
      r_{n,k} = A*(y − A(M m_{n,k} · e^{iP p_n}))
      m_{n,k+1} = P_{α_m g_m}( m_{n,k} + α_m Re(M*(e^{−iP p_n} · r_{n,k})) )
    end for
    m_{n+1} = m_{n,K}
    // Update p with fixed m_{n+1}
    p_{n,0} = p_n
    α_n = 1 / (λ_max(A*A) λ_max(P*P) max(|M m_{n+1}|²))
    for k = 0, …, K − 1 do
      Randomly draw w_{n,k} ∈ W
      r_{n,k} = A*(y − A(M m_{n+1} · e^{iP p_{n,k}}))
      p_{n,k+1} = P_{α_n g_p}( p_{n,k} + w_{n,k} + α_n Im(P*(M m_{n+1} · e^{−iP p_{n,k}} · r_{n,k})) ) − w_{n,k}
    end for
    p_{n+1} = p_{n,K}
  end for
Hence, the proposed phase cycling can be viewed as an inexact proximal gradient method with error in each
iteration e_n = ∇ε_n(p_{n+1}), and it converges to a point close to a stationary point. We note that this error
is often bounded in practice. For example, if we consider the ℓ1 wavelet penalty g(p) = λ‖Ψp‖₁, then the
(sub)gradient of g(p) is bounded by λ, and hence the error e_n is bounded.
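To make the phase-cycling update above concrete, the following is a minimal Python sketch of one inner phase-update sweep of Algorithm 1, assuming M = I and P = I, a generic forward operator A with adjoint AH (passed in as callables), and an elementwise ℓ1 penalty whose proximal operator is soft-thresholding, used here as a stand-in for the wavelet regularizer g_p. The step size, regularization weight, and the set of phase wraps are illustrative assumptions, not values from the paper.

```python
# Sketch of the phase update with phase cycling (inner loop of Algorithm 1), assuming M = P = I.
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1 (stand-in for the wavelet penalty's prox)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def phase_update(y, m, p, A, AH, wraps, alpha, lam, K=10, rng=np.random.default_rng(0)):
    """y: measured k-space, m: fixed magnitude image, p: current phase image,
    A/AH: forward operator and its adjoint (callables), wraps: list of phase-wrap images."""
    for _ in range(K):
        w = wraps[rng.integers(len(wraps))]            # random phase wrap for this iteration
        x = m * np.exp(1j * p)                         # current complex image estimate
        res = AH(y - A(x))                             # image-space residual A^H(y - Ax)
        grad = -np.imag(np.conj(x) * res)              # gradient of 0.5*||y - A(m e^{ip})||^2 w.r.t. p
        p = soft_threshold(p + w - alpha * grad, alpha * lam) - w   # prox at the wrapped phase
    return p
```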
References
[1] Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P et al. SENSE: sensitivity encoding for fast
MRI. Magnetic resonance in medicine 1999; 42:952–962.
[2] Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized
Autocalibrating Partially Parallel Acquisitions (GRAPPA). Magnetic Resonance in Medicine 2002;
47:1202–1210.
[3] Lustig M, Donoho D, Pauly JM. Sparse MRI: The application of compressed sensing for rapid MR
imaging. Magnetic Resonance in Medicine 2007; 58:1182–1195.
[4] Coifman RR, Donoho DL. Translation-invariant de-noising. Wavelets and statistics 1995; 103:125–150.
[5] Reeder SB, Pineda AR, Wen Z, Shimakawa A, Yu H, Brittain JH, Gold GE, Beaulieu CH, Pelc NJ.
Iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL):
Application with fast spin-echo imaging. Magnetic Resonance in Medicine 2005; 54:636–644.
[6] Reeder SB, Wen Z, Yu H, Pineda AR, Gold GE, Markl M, Pelc NJ. Multicoil Dixon chemical species
separation with an iterative least-squares estimation method. Magnetic Resonance in Medicine 2003;
51:35–45.
[7] Yu H, Reeder SB, Shimakawa A, Brittain JH, Pelc NJ. Field map estimation with a region growing
scheme for iterative 3-point water-fat decomposition. Magnetic Resonance in Medicine 2005; 54:1032–
1039.
[8] Doneva M, Börnert P, Eggers H, Mertins A, Pauly J, Lustig M. Compressed sensing for chemical
shift-based water-fat separation. Magnetic Resonance in Medicine 2010; 64:1749–1759.
[9] Sharma SD, Hu HH, Nayak KS. Accelerated water-fat imaging using restricted subspace field map
estimation and compressed sensing. Magnetic Resonance in Medicine 2011; 67:650–659.
[10] Lu W, Hargreaves BA. Multiresolution field map estimation using golden section search for water-fat
separation. Magnetic Resonance in Medicine 2008; 60:236–244.
[11] Hernando D, Haldar JP, Sutton BP, Ma J, Kellman P, Liang ZP. Joint estimation of water/fat images
and field inhomogeneity map. Magnetic Resonance in Medicine 2008; 59:571–580.
[12] Reeder SB, McKenzie CA, Pineda AR, Yu H, Shimakawa A, Brau AC, Hargreaves BA, Gold GE,
Brittain JH. Water–fat separation with IDEAL gradient-echo imaging. Journal of Magnetic Resonance
Imaging 2007; 25:644–652.
[13] Eggers H, Börnert P. Chemical shift encoding-based water-fat separation methods. Journal of Magnetic
Resonance Imaging 2014; 40:251–268.
[14] Funai AK, Fessler JA, Yeo DTB, Olafsson VT, Noll DC. Regularized field map estimation in MRI.
IEEE Transactions on Medical Imaging 2008; 27:1484–1494.
[15] Noll DC, Nishimura DG, Macovski A. Homodyne Detection in Magnetic Resonance Imaging. IEEE
Transactions on Medical Imaging 2004; 10:1–10.
[16] Haacke E, Lindskogj E, Lin W. A fast, iterative, partial-fourier technique capable of local phase recovery.
Journal of Magnetic Resonance (1969) 1991; 92:126–145.
[17] Bydder M, Robson MD. Partial fourier partially parallel imaging. Magnetic Resonance in Medicine
2005; 53:1393–1401.
[18] Li G, Hennig J, Raithel E, Büchert M, Paul D, Korvink JG, Zaitsev M. An L1-norm phase constraint
for half-Fourier compressed sensing in 3D MR imaging. Magnetic Resonance Materials in Physics,
Biology and Medicine 2015; 28:459–472.
[19] Loecher M, Santelli C, Wieben O, Kozerke S. Improved L1-SPIRiT Reconstruction with a Phase Divergence Penalty for 3D Phase-Contrast Flow Measurements. In: Proceedings of the 21st Annual Meeting
of International Society of Magnetic Resonance in Medicine, Salt Lake City, Utah, USA, November 2012
p. 1355.
[20] Ong F, Uecker M, Tariq U, Hsiao A, Alley M, Vasanawala S, Lustig M. Compressed sensing 4d flow
reconstruction using divergence-free wavelet transform. In: Proceedings of the 22nd Annual Meeting
of International Society of Magnetic Resonance in Medicine, Milan, Italy, 2014. p. 0326.
[21] Santelli C, Loecher M, Busch J, Wieben O, Schaeffter T, Kozerke S. Accelerating 4D flow MRI by
exploiting vector field divergence regularization. Magnetic Resonance in Medicine 2016; 75:115–125.
[22] Reeder SB, Hargreaves BA, Yu H, Brittain JH. Homodyne reconstruction and IDEAL water-fat decomposition. Magnetic Resonance in Medicine 2005; 54:586–593.
[23] Johnson KM, Wieben O, Samsonov AA. Phase-contrast velocimetry with simultaneous fat/water separation. Magnetic Resonance in Medicine 2010; 63:1564–1574.
[24] Fessler JA, Noll DC. Iterative image reconstruction in MRI with separate magnitude and phase regularization. Biomedical Imaging: Nano to Macro 2004; pp. 489–492.
[25] Zibetti MV, DePierro AR. Separate magnitude and phase regularization in mri with incomplete data:
Preliminary results. In: Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on, Rotterdam, The Netherlands, 2010. pp. 736–739.
[26] Zhao F, Noll DC, Nielsen JF, Fessler JA. Separate magnitude and phase regularization via compressed
sensing. IEEE Transactions on Medical Imaging 2012; 31:1713–1723.
[27] Zibetti MVW, DePierro AR. Improving compressive sensing in mri with separate magnitude and phase
priors. Multidimensional Systems and Signal Processing 2017; 28:1109–1131.
[28] Pelc NJ, Bernstein MA, Shimakawa A, Glover GH. Three-Direction Phase-Contrast. Journal of Magnetic
Resonance Imaging 2005; 1:1–9.
[29] Combettes PL, Pesquet JC. “Proximal Splitting Methods in Signal Processing”, pp. 185–212. Springer
New York, New York, NY, 2011.
[30] Parikh N, Boyd S et al. Proximal algorithms. Foundations and Trends in Optimization 2014; 1:127–239.
[31] Kreutz-Delgado K. The complex gradient operator and the CR-calculus. arXiv preprint 2009; arXiv:0906.4835.
[32] Uecker M, Lai P, Murphy MJ, Virtue P, Elad M, Pauly JM, Vasanawala SS, Lustig M. ESPIRiT – an
eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. Magnetic Resonance in Medicine 2013; 71:990–1001.
[33] Uecker M, Ong F, Tamir JI, Bahri D, Virtue P, Cheng JY, Zhang T, Lustig M. Berkeley Advanced
Reconstruction Toolbox. In: Proceedings of the 22nd Annual Meeting of International Society of
Magnetic Resonance in Medicine, Toronto, 2015. p. 2486.
[34] Epperson K, Sawyer AM, Lustig M, Alley M, Uecker M. Creation of fully sampled MR data repository
for compressed sensing of the knee.
In: Proceedings of the 22nd Annual Meeting for Section for
Magnetic Resonance Technologists, Salt Lake City, Utah, USA, 2013.
[35] Ong F, Uecker M, Tariq U, Hsiao A, Alley MT, Vasanawala SS, Lustig M. Robust 4D flow denoising
using divergence-free wavelet transform. Magnetic Resonance in Medicine 2015; 73:828–842.
[36] Cheng JY, Zhang T, Ruangwattanapaisarn N, Alley MT, Uecker M, Pauly JM, Lustig M, Vasanawala SS.
Free-breathing pediatric MRI with nonrigid motion correction and acceleration. Journal of Magnetic
Resonance Imaging 2015; 42:407–420.
[37] Hsiao A, Lustig M, Alley MT, Murphy M, Chan FP, Herfkens RJ, Vasanawala SS. Rapid pediatric
cardiac assessment of flow and ventricular volume with compressed sensing parallel imaging volumetric
cine phase-contrast MRI. AJR Am J Roentgenol 2012; 198:W250–259.
[38] Sra S. Scalable nonconvex inexact proximal splitting. Advances in Neural Information Processing
Systems 2012; pp. 530–538.
List of Figures
1. Illustration of forward models y = A(Mm · e^{jPp}) for partial Fourier imaging, water-fat imaging and
flow imaging.
2. Conceptual illustration of the proposed algorithm. The overall algorithm alternates between magnitude
update and phase update. Each sub-problem is solved using proximal gradient descent.
3. Conceptual illustration of the proposed phase cycling reconstruction technique. Phase cycling achieves
robustness towards phase wraps by spreading the artifacts caused by regularizing phase wraps spatially.
4. An example of applying smoothing regularization on the phase image by soft-thresholding its Daubechies
wavelet coefficients. While the general noise level is reduced, the phase wraps are also falsely smoothed,
causing errors around them, as indicated by the red arrow. These errors accumulate over iterations and
cause significant artifacts near phase wraps, as shown in Figure 5.
5. Partial Fourier + PI + CS reconstruction results on a brain dataset for the proposed method. Similar
to Supplementary Figure S1, without phase cycling, significant artifacts can be seen in the magnitude
image near phase wraps in the initial solution, as indicated by the yellow arrow.
6. Partial Fourier + PI + CS reconstruction comparison results on the same brain dataset as in Figure 5.
The method of Bydder shows higher error in the magnitude image compared to the proposed method
with phase cycling in the regions indicated by the red arrows. The method of Zhao et al. shows
higher magnitude image error in general, and displays staircase artifacts in the phase image, which are
common in total variation regularized images, as indicated by the yellow arrow.
7. Water-fat + partial Fourier + PI + CS reconstruction result on a thigh dataset. Our proposed method
produces similar water and fat images on an undersampled partial Fourier dataset compared to the
fully-sampled reconstruction using the method from Sharma et al.
8. Net flow rates across 4 studies for the proposed method compared with ℓ1-ESPIRiT, calculated across
segmentations of the aorta (AA) and pulmonary trunk (PT). Both the ℓ1-ESPIRiT reconstruction and the proposed reconstruction result in similar flow rates.
9. Divergence-free wavelet regularized flow imaging + PI + CS reconstruction result on a 4D flow dataset,
compared with ℓ1-ESPIRiT and ℓ1-ESPIRiT followed by divergence-free wavelet denoising. Visually,
the proposed method reduces incoherent artifacts and noise compared to the other results, especially
in regions indicated by the red arrows where there should be no fluid flow.
10. Divergence-free wavelet regularized flow imaging + PI + CS reconstruction results on a 4D flow dataset.
The reconstructed magnitude image has significantly reduced blurring from partial readout, compared
to the ℓ1-ESPIRiT result.
Supporting Figures
S1 Partial Fourier + PI reconstruction results on a knee dataset. Without phase cycling, significant
artifacts can be seen in the magnitude image near phase wraps in the initial solution, as indicated by the
yellow arrow. With phase cycling, these artifacts were reduced and the result is comparable to the
robust iterative partial Fourier method with ℓ1 wavelet regularization described in Bydder et al.
S2 Water-fat + PI + CS reconstruction result on a liver dataset with three echoes. Both the method from
Sharma et al. and our proposed method produce similar water and fat images on the fully-sampled
dataset and the retrospectively undersampled dataset.
Supporting Videos
S1 Reconstructed magnitude and phase images over iterations for the proposed method without phase
cycling. Errors accumulate over iterations near phase wraps and cause significant artifacts in the
resulting images.
S2 Reconstructed magnitude and phase images over iterations for the proposed method with phase cycling.
With phase cycling, no significant artifacts like the ones in Supplementary Video S1 are seen during
the iterations.
| 1 |
Joint optimization of transmission and propulsion in aerial
communication networks
arXiv:1710.01529v1 [cs.SY] 4 Oct 2017
Omar J. Faqir, Eric C. Kerrigan, and Deniz Gündüz
Abstract— Communication energy in a wireless network of
mobile autonomous agents should be considered as the sum of
transmission energy and propulsion energy used to facilitate the
transfer of information. Accordingly, communication-theoretic
and Newtonian dynamic models are developed to model the
communication and locomotion expenditures of each node.
These are subsequently used to formulate a novel nonlinear
optimal control problem (OCP) over a network of autonomous
nodes. It is then shown that, under certain conditions, the
OCP can be transformed into an equivalent convex form.
Numerical results for a single link between a node and access
point allow for comparison with known solutions before the
framework is applied to a multiple-node UAV network, for
which previous results are not readily extended. Simulations
show that transmission energy can be of the same order of
magnitude as propulsion energy allowing for possible savings,
whilst also exemplifying how speed adaptations together with
power control may increase the network throughput.
Fig. 1: Geometric configuration for simulation setups featuring N = 1 (black) and N = 2 (green) nodes. Speeds
along these paths may be variable or fixed. The altitudes
and lateral displacements of U1 , U2 are a1 = a2 = 1000 m
and δ1 = 0, δ2 = 1000 m, respectively.
I. INTRODUCTION
We aim to derive a control strategy to minimize communication energy in robotic networks. In particular, uninhabited
aerial vehicle (UAV) networks are considered, with results
being generalizable to broader classes of autonomous networks. A dynamic transmission model, based on physical
layer communication-theoretic bounds, and a mobility model
for each node is considered alongside a possible network
topology. As a cost function, we employ the underused
interpretation of communication energy as the sum of transmission energy and propulsion energy used for transmission,
i.e. when a node changes position to achieve a better channel.
For simulation purposes we consider the two wireless
network setups shown in Figure 1. We first present the most
basic scenario consisting of a single agent U1 moving along a
predefined linear path while offloading its data to a stationary
access point (AP). We compare results for variable and fixed
speeds, before studying a two-agent single-hop network.
For UAV networks, research efforts largely break down
into two streams: the use of UAVs in objective based
missions (e.g. search and pursuit [1], information gathering/mobile sensor networks [2], [3]), and use as supplementary network links [4]. Optimal completion of these
macro goals has been addressed in the literature, but there
The support of the EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS, Grant Reference
EP/L016796/1) is gratefully acknowledged.
O. J. Faqir and Deniz Gündüz are with the Department of Electrical
& Electronics Engineering, Imperial College London, SW7 2AZ, U.K.
[email protected], [email protected]
Eric C. Kerrigan is with the Department of Electrical & Electronic
Engineering and Department of Aeronautics, Imperial College London,
London SW7 2AZ, U.K. [email protected]
is no necessary equivalence between optimal task-based and
energy-efficient operations.
Efforts concerning mobility focus on mobile (in which
node mobility models are random) or vehicular (where
mobility is determined by higher level objectives and infrastructure) ad-hoc networks [5]. Since neither are fully
autonomous networks, mobility is not available as a decision
variable. The work in [6] introduced the concept of proactive
networks, where certain nodes are available as mobile relays.
However, the focus is on relay trajectory design and a simplistic transmission model is assumed, inherently prohibiting
energy efficiency. The related problem of router formation is
investigated in [7] using realistic models of communication
environments.
We assume hard path constraints, possibly due to the
existence of higher level macro objectives, but allow changes
in trajectory along the path by optimizing their speed (as
in [8], we define a trajectory as being a time-parameterized
path). Use of fixed paths does not restrict our results as
most UAV path planning algorithms operate over longer time
horizons and are generally restricted to linear or circular
loiter trajectories [8]. A linear program (LP) is used in [9]
to determine how close a rolling-robot should move before
transmission in order to minimize total energy. However, the
linear motion dynamics used restricts applicability of the
model. Similarly to our current work, [10] uses a single
mobile UAV relay to maximize data throughput between
a stationary source-destination pair. An optimal trajectory
for an a priori transmission scheme is iteratively found.
Similarly, for a given trajectory, the optimal relaying scheme
may be obtained through water-filling over the source-to-relay and relay-to-receiver channels.
Our contribution differs from the above works in terms of
the formulation of a more general nonlinear convex OCP for
finding joint transmission and mobility strategies to minimize
communication energy. We solve this problem, exemplifying
possible savings for even just a single node. As a final
point, we show analytically and numerically that, even at
fixed speeds, the optimal transmission scheme for a two-user multiple-access channel (MAC) is counter-intuitive and
not captured by naïve transmission policies.
II. PROBLEM DESCRIPTION
Consider N homogeneous mobile nodes U_n, n ∈ N ≜ {1, . . . , N}, traveling along linear non-intersecting trajectories at constant altitudes a_n and lateral displacements δ_n over
a time interval T ≜ [0, T]. The trajectory of node U_n is
denoted by t ↦ (q_n(t), δ_n, a_n), relative to a single stationary
AP U0 at position (0, 0, 0) in a three dimensional space. At
t = 0, Un is initialized with a data load of Dn bits, which
must all be offloaded to U0 by time t = T . We consider a
cooperative network model, in which all nodes cooperate to
offload all the data in the network to the AP by relaying each
other’s data. Each node has a data buffer of capacity M bits,
which limits the amount of data it can store and relay.
A. Communication Model
We employ scalar additive white Gaussian noise (AWGN)
channels. For UAV applications, we assume all links are
dominated by line-of-sight (LoS) components, resulting in
flat fading channels, meaning all signal components undergo
similar amplitude gains [11]. All nodes have perfect information regarding link status, which in practice may be
achieved through feedback of channel measurements, while
the overhead due to channel state feedback is ignored.
Similar to [12], for a given link from source node Un to
receiver node Um , the channel gain ηnm (·) is expressed as
η_nm(q_nm) ≜ G / (√(a_nm² + δ_nm² + q_nm²))^{2α},   (1)
where q_nm ≜ q_n − q_m, the constant G represents transmit and
receive antenna gains and α ≥ 1 the path loss exponent. We
define anm and δnm in a similar fashion. The channel gain
is inversely related to the Euclidean distance between nodes.
Each node has a single omnidirectional antenna of maximum transmit power of Pmax Watts. We consider half duplex
radios; each node transmits and receives over orthogonal
frequency bands. Accordingly, a different frequency band
is assigned for each node’s reception, and all messages
destined for this node are transmitted over this band, forming
a MAC. We do not allow any coding (e.g. network coding) or
combining of different data packets at the nodes, and instead
consider a decode-and-forward-based routing protocol at the
relay nodes [13]. The resulting network is a composition of
Gaussian MACs, for each of which the set of achievable
rate tuples defines a polymatroid capacity region [14]. If N
nodes simultaneously transmit independent information to the same receiver, the received signal is a superposition of the transmitted signals scaled by their respective channel gains, plus an AWGN term. We model the achievable data rates using Shannon capacity, which is a commonly used upper bound on the practically achievable data rates subject to average power constraints. Due to the convexity of the capacity region, throughput maximization does not require time-sharing between nodes [14], but may be achieved through successive interference cancellation (SIC).
Consider a single MAC consisting of N users U_n, n ∈ N, transmitting to a receiver U_m, m ∉ N. The capacity region C_N(·, ·), which denotes the set of all achievable rate tuples r, is defined as

C_N(q, p) ≜ {r ≥ 0 | f_m(q, p, r, S) ≤ 0, ∀S ⊆ N},   (2)

where q is the tuple of the differences q_nm in positions between the N users and the receiver, p ∈ P^N is the tuple of transmission powers allocated by the N users on this channel, and P ≜ [0, P_max] is the range of possible transmission powers for each user. f_m(·) is a nonlinear function bounding C_N(q, p), given by

f_m(q, p, r, S) ≜ Σ_{n∈S} r_n − B_m log₂( 1 + Σ_{n∈S} η_nm(q_nm) p_n / σ² ),   (3)
where rn is the nth component of r, Bm is the bandwidth
allocated to Um , and σ 2 is the receiver noise power. Consider
the example (Section IV-B) where we do not allow relaying.
This gives rise to a MAC with N = 2 transmitters U1 , U2
and the AP U0 . The capacity region C2 (q, p) is the set of
non-negative tuples (r1 , r2 ) that satisfy
r₁ ≤ B₀ log₂( 1 + η₁₀(q₁₀) p₁ / σ² )   (4a)
r₂ ≤ B₀ log₂( 1 + η₂₀(q₂₀) p₂ / σ² )   (4b)
r₁ + r₂ ≤ B₀ log₂( 1 + (η₁₀(q₁₀) p₁ + η₂₀(q₂₀) p₂) / σ² )   (4c)
for all (p1 , p2 ) ∈ P 2 . The first two bounds restrict individual
user rates to the single-user Shannon capacity. Dependence
between U1 and U2 leads to the final constraint, that the sum
rate may not exceed the point-to-point capacity with full
cooperation. For transmit powers (p1 , p2 ) these constraints
trace out the pentagon shown in Figure 2. The sum rate
is maximized at any point on the segment L3 . Referring
to SIC, the rate pair at boundary point R(1) is achieved
if the signal from source U2 is decoded entirely before
source U1 , resulting in the signal from U2 being decoded at
a higher interference rate than the signal from U1 . At R(2)
the opposite occurs.
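As a concrete illustration of the channel gain (1) and of the two-user constraints (4a)–(4c), the following is a small NumPy helper that tests whether a candidate rate pair lies in the capacity region for given transmit powers. The numerical values of G, α, σ², B₀ and the geometry are placeholders, not the parameters used in the paper's simulations.

```python
# Illustrative check of the two-user Gaussian MAC capacity region (4a)-(4c).
import numpy as np

def channel_gain(q, a, delta, G=1.0, alpha=1.5):
    # Equation (1): G / (a^2 + delta^2 + q^2)^alpha
    return G / (a**2 + delta**2 + q**2) ** alpha

def in_capacity_region(r1, r2, p1, p2, eta1, eta2, B0=1e5, sigma2=1e-10):
    c1 = B0 * np.log2(1 + eta1 * p1 / sigma2)                    # single-user bound for U1
    c2 = B0 * np.log2(1 + eta2 * p2 / sigma2)                    # single-user bound for U2
    c12 = B0 * np.log2(1 + (eta1 * p1 + eta2 * p2) / sigma2)     # sum-rate bound
    return r1 <= c1 and r2 <= c2 and r1 + r2 <= c12

eta1 = channel_gain(q=500.0, a=1000.0, delta=0.0)
eta2 = channel_gain(q=500.0, a=1000.0, delta=1000.0)
print(in_capacity_region(2e5, 1e5, p1=50.0, p2=50.0, eta1=eta1, eta2=eta2))
```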
B. Propulsion Energy Model
Fig. 2: Capacity region for a given power policy across two parallel channels, with corner rate pairs labeled as R^(1) = (r₁^(1), r₂^(1)) and R^(2) = (r₁^(2), r₂^(2)) and line segments labeled as L1, L2, L3.
The electrical energy used for propulsion in rolling robots has been modeled as a linear or polynomial function of speed
in [9], [15] respectively. We take a more general approach,
restricting the fixed wing UAV to moving at strictly positive
speeds and using Newtonian laws as a basis, as in [16]. The
function Ω(·) models the resistive forces acting on node Un
in accordance with the following assumption.
Assumption 1: The resistive forces acting on each
node Un may be modeled by the function x 7→ Ω(x) such
that x 7→ xΩ(x) is convex on x ∈ [0, ∞) and ∞ on
x ∈ (−∞, 0).
Comparatively, in the fixed wing model proposed in [17],
the drag force of a UAV traveling at constant altitude at subsonic speed v is
Ω(v) = ρ C_D0 S v² / 2 + 2L² / (π e₀ A_R ρ S v²)   (5)
where the first term represents parasitic drag and the second
term lift-induced drag. Parasitic drag is proportional to v 2 ,
where ρ is air density, CD0 is the base drag coefficient, and S
is the wing area. Lift induced drag is proportional to v −2 ,
where e0 is the Oswald efficiency, AR the wing aspect ratio
and L the induced lift [17]. For fixed-altitude flight, L must
be equal to the weight of the craft W = mg. The power
required to combat drag is the product of speed and force.
The propulsion force Fn (·) must satisfy the force balance
equation
F_n(t) − Ω(v_n(t)) = m_n v̇_n(t),   (6)
where mn is the node mass, vn (t) is the speed and v̇n (t) is
the acceleration. The instantaneous power used for propulsion is the product vn (t)Fn (t), with the total propulsion
energy taken as the integral of this power over T . We assume
vn (t) ≥ 0, ∀t ∈ T , which is valid for fixed wing aircrafts.
Thrust is restricted to the range [Fmin , Fmax ].
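The short sketch below evaluates the drag model (5) in the form used later in Section IV, the force balance (6), and the resulting instantaneous propulsion power vF. The drag coefficients match the values quoted in Section IV-A; the node mass and the sample speed and acceleration are assumptions for illustration.

```python
# Drag force, force balance and propulsion power for a fixed-wing node.
CD1, CD2 = 9.26e-4, 2250.0          # parasitic and lift-induced drag coefficients (Section IV)

def drag(v):
    # Omega(v) = CD1*v^2 + CD2*v^-2, valid for v > 0
    return CD1 * v**2 + CD2 / v**2

def propulsion_power(v, v_dot, mass=3.0):
    F = mass * v_dot + drag(v)       # force balance: F - Omega(v) = m * v_dot
    return v * F                     # instantaneous propulsion power [W]

v = 65.0 / 3.6                       # 65 km/h expressed in m/s
print(drag(v), propulsion_power(v, v_dot=0.0))
```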
C. General Continuous-Time Problem Formulation
We formulate the problem in continuous-time. At time t, node U_n, n ∈ N can transmit to any node U_m, m ∈ {N, N+1}\{n} at a non-negative data rate r_nm(t) using transmission power p_nm(t). The sum power used in all outgoing transmissions from U_n is denoted by p_n(t). From this, the set of achievable data rates is bounded above by a set of 2^|N| − 1 nonlinear submodular functions f_m(·, ·, ·, ·), where |·| applied to a set denotes the cardinality operator. Exponential growth in the number of nodes is a computational intractability. Hence, results are limited to small or structured networks where only a subset of nodes use each MAC.
The trajectory of node U_n is denoted by the tuple

Y_n ≜ (p_n, r_n, s_n, q_n, v_n, v̇_n, F_n),   (7)

where q_n(t) is the node's position at time t and s_n(t) the state of its storage buffer subject to maximum memory of M bits. The optimal control problem that we want to solve is

min_{p,r,s,q,v,F}  Σ_{n=1}^{N} ∫_0^T [ p_n(t) + v_n(t) F_n(t) ] dt   (8a)
s.t. ∀n ∈ N, m ∈ {N, N+1}, t ∈ T, S ⊆ N
f_m(q(t), p(t), r(t), S \ {m}) ≤ 0   (8b)
ṡ_n(t) = Σ_{m≠n}^{N} r_mn(t) − Σ_{m≠n}^{N+1} r_nm(t)   (8c)
s_n(0) = D_n,  s_n(T) = 0   (8d)
q_n(0) = Q_{n,init},  q_n(T) = Q_{n,final}   (8e)
v_n(0) = v_{n,init}   (8f)
F_n(t) = m_n v̇_n(t) + Ω(v_n(t))   (8g)
q̇_n(t) = ζ_n v_n(t)   (8h)
Y_{n,min} ≤ Y_n(t) ≤ Y_{n,max}   (8i)
The cost function (8a) is the sum of nodal transmission and
propulsion energies. Constraint (8b) bounds the achievable
data rate to within the receiving nodes’ capacity region,
and (8c) updates the storage buffers with sent/received data.
Constraints (8d) act as initial and final constraints on the
buffers, while (8e)–(8h) ensure all nodes travel from their
initial to final destinations without violating a Newtonian
force-acceleration constraint; ζn ∈ {−1, 1} depending on
whether the position qn (t) decreases or increases, respectively, if the speed vn (t) ≥ 0. The final constraint (8i) places
simple bounds on the decision variables, given by
Yn,min , (0, 0, 0, −∞, Vmin , −∞, Fmin ),
(9a)
Yn,max , (Pmax , ∞, M, ∞, Vmax , ∞, Fmax ),
(9b)
where 0 ≤ Vmin ≤ Vmax and Fmin ≤ Fmax . The above optimal
control problem may then be fully discretized using optimal
control solvers, such as ICLOCS [18]. Before simulation
results are presented we prove that this problem admits an
equivalent convex form under certain conditions.
III. CONVEXITY ANALYSIS
Efficient convex programming methods exist, which may
be used in real-time applications. We first show that the
nonlinear data rate constraints (8b) are convex in both
positions and transmission power. We then show that the
non-linear equality constraint (8g) may be substituted into the
cost function, convexifying the cost function. This, however,
turns the previously simple thrust bound Fmin ≤ Fn (t) into
a concave constraint, resulting in a convex OCP if thrust
bounds are relaxed. The absence of thrust bounds arises when
considering a fixed trajectory, or is a reasonable assumption
if the speed range is sufficiently small.
Lemma 1: The rate constraints (8b) are convex in powers
and positions for all path loss exponents α ≥ 1.
Proof: By writing the channel gains as an explicit
function of node positions, for receiver Um each of the
capacity region constraints is of the form

Σ_{n∈S} r_n(t) − B_m log₂( 1 + (G/σ²) Σ_{n∈S} p_n(t) / (a_nm² + δ_nm² + q_nm(t)²)^α ) ≤ 0.   (10)
Since the non-negative weighted sum of functions preserves
convexity properties, without loss of generality we take S
to be a singleton, and drop subscripts. We also drop time
dependencies. The above function is the composition of two
functions φ1 ◦ φ2 (·), respectively defined as
φ₁(r, φ₂(·)) ≜ r − B log₂(1 + φ₂(·)),   (11)
φ₂(p, q) ≜ G p / (σ² (a² + δ² + q²)^α).   (12)
The function (p, q) ↦ φ₂(p, q) is concave on the domain R₊ × R. We show this by dropping constants and considering
the simpler function h(x, y) ≜ x y^{−2α} with Hessian
\[
\nabla^2 h(x, y) = \begin{bmatrix} 0 & -2\alpha\, y^{-2\alpha-1} \\ -2\alpha\, y^{-2\alpha-1} & 2\alpha(2\alpha+1)\, x\, y^{-2\alpha-2} \end{bmatrix},
\tag{13}
\]
which is negative semi-definite, because it is symmetric with
non-positive sub-determinants. Therefore, φ2 is jointly concave in both power and the difference in positions over the
specified domain. φ1 is convex and non-increasing as a function of φ2 . Since the composition of a convex, non-increasing
function with a concave function is convex [19], all data rate
constraint functions are convex functions of (r, p, q).
The posynomial objective function is not convex over
the whole of its domain and the logarithmic data rate term
prevents the use of geometric programming (GP) methods.
Lemma 2: The following problem

min_{v_n, F_n}  ∫_0^T F_n(t) v_n(t) dt   (14a)
s.t. ∀t ∈ T
F_n(t) − Ω(v_n(t)) = m_n v̇_n(t)   (14b)
F_min ≤ F_n(t) ≤ F_max   (14c)
v_n(t) ≥ 0   (14d)
v_n(0) = v_{n,init}   (14e)
of minimizing propulsion energy of a single node Un ,
subject to initial and final conditions, admits an equivalent
convex form for mappings vn (t) 7→ Ω(vn (t)) satisfying
Assumption 1 and force bounds (Fmin , Fmax ) = (−∞, ∞).
Proof: By noting that Fn (t) = Ω(vn (t)) + mn v̇n (t),
we move the equality into the cost function, rewriting the
problem as
min_{v_n}  φ(v_n)   s.t. (14c)–(14e),   (15)
where
φ(v_n) ≜ ∫_0^T v_n(t) Ω(v_n(t)) dt + ∫_0^T m_n v_n(t) v̇_n(t) dt ≜ φ₁(v_n) + φ₂(v_n).   (16)
We now show that both φ1 (·) and φ2 (·) are convex.
Starting with the latter, by performing a change of variable,
the analytic cost is derived by first noting that φ2 (vn ) is the
change in kinetic energy
φ₂(v_n) = m_n ∫_{v_n(0)}^{v_n(T)} v dv = (m_n/2) ( v_n(T)² − v_n(0)² ),   (17)
which is a convex function of vn (T ) subject to fixed initial
conditions (14d); in fact, it is possible to drop the vn2 (0)
term completely without affecting the argmin. By Assumption 1, vn (t) 7→ vn (t)Ω(vn (t)) is convex and continuous on
the admissible domain of speeds. Since integrals preserve
convexity, the total cost function φ(·) is also convex.
Removal of thrust F as a decision variable results in the
set
V_F ≜ { v_n | F_min ≤ Ω(v_n(t)) + m_n v̇_n(t) ≤ F_max }.   (18)
Even if Ω(·) is convex on the admissible range of speeds,
the lower bound represents a concave constraint not admissible within a convex optimization framework. Therefore,
dropping constraints on thrust results in a final convex
formulation of
min_{v_n}  ∫_0^T v_n(t) Ω(v_n(t)) dt + (m_n/2) ( v_n(T)² − v_n(0)² )   (19a)
s.t. ∀t ∈ T
V_min ≤ v_n(t) ≤ V_max   (19b)
v_n(0) = v_{n,init}.   (19c)
Addition of bounds vn ∈ VF naturally results in a difference
of convex (DC) problem [20] that may be solved through
exhaustive or heuristic procedures.
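The convexified problem (19) can be transcribed directly with a modeling tool. The sketch below uses CVXPY on a uniform time grid, with the drag coefficients from Section IV; the horizon, grid size, node mass, and the added affine total-distance constraint (standing in for the position constraints (8e)/(8h)) are illustrative assumptions, and a conic solver with power-cone support (e.g. SCS) is assumed to be available.

```python
# Hedged CVXPY sketch of the discretized single-node propulsion problem (19).
import cvxpy as cp
import numpy as np

T, K = 1200.0, 200                         # horizon [s] and number of grid points (assumed)
dt = T / K
mass, CD1, CD2 = 3.0, 9.26e-4, 2250.0      # node mass and drag coefficients
Vmin, Vmax, v_init = 30 / 3.6, 100 / 3.6, 65 / 3.6   # speed bounds and initial speed [m/s]
distance = v_init * T                      # require the same distance as constant-speed flight

v = cp.Variable(K)
# v*Omega(v) = CD1*v^3 + CD2/v is convex for v > 0
drag_energy = cp.sum(CD1 * cp.power(v, 3) + CD2 * cp.inv_pos(v)) * dt
kinetic = 0.5 * mass * (cp.square(v[-1]) - v_init**2)
constraints = [v >= Vmin, v <= Vmax, v[0] == v_init, cp.sum(v) * dt == distance]
prob = cp.Problem(cp.Minimize(drag_energy + kinetic), constraints)
prob.solve(solver=cp.SCS)                  # SCS handles the power cones from cp.power
print(prob.status, prob.value)
```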
Theorem 1: In the absence of constraints on thrust, the
general problem (8) admits an equivalent convex form.
Proof: Non-convexities in this formulation arise from
the posynomial function of speed v_n(t) and thrust F_n(t) in the
cost function (8a), the nonlinear force balance equality (8g),
and the capacity region data rate constraints (8b). The cost
function is a superposition of the energies used by each
node for propulsion and transmission. By noting that there
is no coupling between nodes or between propulsion and
transmission powers in this cost, the transformation used in
Lemma 2 may be used to eliminate the nonlinear equality. We
eliminate Fn (t) and v̇n (t) and move the nonlinear equality
into the objective function, simultaneously convexifying the
objective to get

min_{p,r,s,q,v}  Σ_{n=1}^{N} [ ∫_0^T ( p_n(t) + v_n(t) Ω(v_n(t)) ) dt + (m_n/2) v_n(T)² ]
s.t. ∀n ∈ N, m ∈ {N, N+1}, t ∈ T, v ∈ V^N, S ⊆ N
(8b)–(8f), (8h), Ỹ_{n,min} ≤ Ỹ_n(t) ≤ Ỹ_{n,max}
where Ỹn (t) , (pn (t), rn (t), sn (t), qn (t), vn (t)), and the
bounds Ỹn,min , and Ỹn,max are similarly changed. It follows
from Lemma 1 that all data rate constraints in (8b) are also
convex, therefore the whole problem is convex.
IV. SIMULATION RESULTS
A. Single Node
A single mobile node U1 of mass 3 kg traveling at fixed
altitude a = 1000 m and lateral displacement δ = 0 m,
depicted in Figure 1, is considered first. In this section,
simulation results are presented for the problem of minimizing the total communication energy to offload all data
to U0 . This is compared to a water-filling solution [21]
for minimizing the transmission energy. Subscripts denoting
different nodes have been dropped in the remainder of this
section. Specifically, we use Ω(·) of the form
\[
\Omega(x) \triangleq \begin{cases} \infty, & \forall x \in (-\infty, 0) \\ C_{D1} x^2 + C_{D2} x^{-2}, & \forall x \in [0, \infty), \end{cases}
\tag{21}
\]
where CD1 = 9.26 × 10−4 is the parasitic drag coefficient
and CD2 = 2250 is the lift induced drag coefficient [17].
Simulation results are shown in Figure 3 for a storage
buffer initialized to D = 75 MB and speeds restricted in the
range [Vmin , Vmax ] = [30, 100] km/h. This results in a total
energy expenditure of 309.50 kJ, where 105.05 kJ is due to
transmission and 204.51 kJ is due to propulsion. Of this, only
48.01 kJ of extra propulsion energy is used to vary speed on
top of the base energy required to traverse the distance at a
constant speed. Furthermore, the problem would have been
infeasible if the node was restricted to a constant speed of
65 km/h. We note that, with the given parameterization, it is
possible to transmit up to 78 MB of data in the defined time
interval.
(a) Optimal transmission power and propulsion force used by U1 .
The open source primal dual Interior Point solver Ipopt
v.3.12.4 has been used through the MATLAB interface.
Table I contains parameters common to the following experiments. Force constraints are relaxed in all experiments.
From [5], the speed of a typical UAV is in the range 30 to
460 km/h. All nodes are initialized to their average speeds
v_{n,init} = (V_max + V_min)/2. We assume all nodes move in
symmetric trajectories around the AP such that Qn,final =
−Qn,init = (T /2)vn,init .
TABLE I: Dynamic model parameters that have been used across all simulation results.
B [Hz]: 10^5    M [GB]: 1    Pmax [W]: 100    α: 1.5    σ² [W]: 10^−10    T [min]: 20
(b) Associated achieved data rate and velocity profile of U1 .
Fig. 3: Simulation results for the single-node problem, with
trajectories shown as solid, and bounds shown as dashed
lines.
In comparison, if the speed of U1 is fixed, then the
maximum transmittable data is approximately 56 MB, using 120.00 kJ of transmission energy. Although considerably
more energy is used, the optimal power policy for a fixed
trajectory is characterized by a water-filling solution, an
equivalent proof of which may be found in [21]. This
problem results in a one dimensional search space, easily
solved through such algorithms as binary search.
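The sketch below illustrates this one-dimensional search: a standard water-filling power profile p(t) = clip(ν − σ²/η(t), 0, Pmax), with the water level ν found by bisection so that the delivered data matches the load D. The channel-gain profile, bandwidth, power limit, and data load are illustrative placeholders, not the paper's simulation values, and the clipping at Pmax is an assumption added for boundedness.

```python
# Hedged sketch of water-filling by bisection for the fixed-trajectory transmission problem.
import numpy as np

def delivered_bits(nu, eta, dt, B=1e5, sigma2=1e-10, Pmax=100.0):
    p = np.clip(nu - sigma2 / eta, 0.0, Pmax)              # water-filling power profile
    return np.sum(B * np.log2(1.0 + eta * p / sigma2)) * dt, p

def waterfill(eta, dt, D, lo=0.0, hi=1e3, iters=100, **kw):
    for _ in range(iters):                                  # bisection on the water level nu
        nu = 0.5 * (lo + hi)
        bits, p = delivered_bits(nu, eta, dt, **kw)
        lo, hi = (nu, hi) if bits < D else (lo, nu)
    return p

t = np.linspace(0.0, 1200.0, 600)
q = 10.0 * (t - 600.0)                                      # node sweeps past the AP at constant speed
eta = 1.0 / (1000.0**2 + q**2) ** 1.5                       # channel gain, Equation (1) with G = 1
p_opt = waterfill(eta, dt=t[1] - t[0], D=56e6 * 8)
print(p_opt.max(), p_opt.sum() * (t[1] - t[0]))             # peak power and total transmission energy
```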
B. Multiple Nodes
We now investigate the transmission energy problem for
two nodes, traveling in parallel trajectories at fixed speeds
such that Vmax = Vmin = 65 km/h, as depicted by the green
lines in Figure 1. Relaying is not allowed, as may be the case
if no bandwidth is allocated to U1 and U2 to receive each
other’s transmissions, equivalently turning them into pure
source nodes. Simulation results are presented in Figure 4.
U1 is closer to the AP at all times, and therefore is
advantaged in that it experiences more favorable channel
conditions. The disadvantaged node U2 transmits for a longer
duration due to the smaller relative change in its channel gain. The interior point algorithm converged after 42
iterations to a minimum energy of 52.707kJ and 26.77kJ
for U1 and U2 , respectively, for a starting data load of
D1 = D2 = 25 MB.
It is notable that the advantaged node uses considerably
more transmission energy than the disadvantaged node.
Referring to [22], which derives two-user optimal power
allocations that achieve arbitrary rate tuples on the boundary
of C, we explain this as follows. From Figure 2, the optimal
rate pairs for given transmit powers p1 and p2 lie on the
(a) Transmit powers of nodes U1 and U2 .
(b) Associated transmission rates achieved by nodes U1 and U2 .
Fig. 4: Simulation results for the two-node transmission
power problem.
segment L3. Equivalently, ∃̺ ∈ [0, 1] such that the rate pair for an arbitrary point R^(∗) = (r₁^(∗), r₂^(∗)) on L3 is
given by the interpolation R^(∗) = ̺ · R^(1) + (1 − ̺) · R^(2).
We may interpret ̺ as being the priority assigned to each
transmitting node by the U0 when SIC is being carried out.
̺ = 1 means that data from U1 is being decoded second,
subject to a lower noise rate, while ̺ = 0 means the opposite
decoding order. We may think of the mapping t 7→ ̺(t) as
a time-varying priority. However, by calculating ̺(t) from
the optimum powers and rates seen in Figure 4, we find
that ̺(t) = 0, ∀t ∈ T such that p1 (t) > 0, p2 (t) > 0. In
other words, the disadvantaged node is always given priority,
which is why it uses less energy at the optimum, even though
it always experiences a worse channel gain.
V. CONCLUSIONS
We have presented a general optimization framework
for joint control of propulsion and transmission energy for
single/multi-hop communication links in robotic networks.
The relaxation of transmission constraints to theoretic capacity bounds, with relatively mild assumptions on the mobility
model, results in a nonlinear but convex OCP. We showed
that optimizing over a fixed path, as opposed to a fixed
trajectory, increases the feasible starting data by at least 30%
for just a single node. For the fixed-trajectory two-node
MAC simulation, the optimal solution has been presented
and analyzed. Immediate extensions of this work include
higher fidelity models, and analysis of the relay network encompassed in problem (8). Considering the overarching goal
of real-time control, further developments will be closedloop analysis of the control strategy, and consideration of
the computational burden and energy expenditure [3], [23]
in the network.
REFERENCES
[1] J. Ko, A. Mahajan, and R. Sengupta, “A network-centric UAV organization for search and pursuit operations,” in Aerospace Conference
Proceedings, 2002. IEEE, vol. 6, pp. 6–6, IEEE, 2002.
[2] S. Wang, A. Gasparri, and B. Krishnamachari, “Robotic message ferrying for wireless networks using coarse-grained backpressure control,”
in Globecom Workshops (GC Wkshps), 2013 IEEE, pp. 1386–1390,
IEEE, 2013.
[3] M. Thammawichai, S. P. Baliyarasimhuni, E. C. Kerrigan, and J. B.
Sousa, “Optimizing communication and computation for multi-UAV
information gathering applications,” arXiv preprint arXiv:1610.04091,
2016.
[4] P. Zhan, K. Yu, and A. L. Swindlehurst, “Wireless relay communications with unmanned aerial vehicles: Performance and optimization,”
IEEE Transactions on Aerospace and Electronic Systems, vol. 47,
no. 3, pp. 2068–2085, 2011.
[5] I. Bekmezci, O. K. Sahingoz, and Ş. Temel, “Flying ad-hoc networks
(FANETs): A survey,” Ad Hoc Networks, vol. 11, no. 3, pp. 1254–
1270, 2013.
[6] W. Zhao and M. H. Ammar, “Message ferrying: Proactive routing
in highly-partitioned wireless ad hoc networks,” in Distributed Computing Systems, 2003. FTDCS 2003. Proceedings. The Ninth IEEE
Workshop on Future Trends of, pp. 308–314, IEEE, 2003.
[7] Y. Yan and Y. Mostofi, “Robotic router formation in realistic communication environments,” IEEE Transactions on Robotics, vol. 28,
no. 4, pp. 810–827, 2012.
[8] P. Sujit, S. Saripalli, and J. B. Sousa, “Unmanned aerial vehicle
path following: A survey and analysis of algorithms for fixed-wing
unmanned aerial vehicles,” IEEE Control Systems, vol. 34, no. 1,
pp. 42–59, 2014.
[9] Y. Yan and Y. Mostofi, “To go or not to go: On energy-aware
and communication-aware robotic operation,” IEEE Transactions on
Control of Network Systems, vol. 1, no. 3, pp. 218–231, 2014.
[10] Y. Zeng, R. Zhang, and T. J. Lim, “Throughput maximization for
mobile relaying systems,” arXiv preprint arXiv:1604.02517, 2016.
[11] D. Tse and P. Viswanath, Fundamentals of wireless communication.
Cambridge university press, 2005.
[12] L. Ren, Z. Yan, M. Song, and J. Song, “An improved water-filling
algorithm for mobile mimo communication systems over time-varying
fading channels,” in Electrical and Computer Engineering, 2004.
Canadian Conference on, vol. 2, pp. 629–632, IEEE, 2004.
[13] D. Gunduz and E. Erkip, “Opportunistic cooperation by dynamic
resource allocation,” IEEE Transactions on Wireless Communications,
vol. 6, no. 4, 2007.
[14] D. N. C. Tse and S. V. Hanly, “Multiaccess fading channels. I.
polymatroid structure, optimal resource allocation and throughput
capacities,” IEEE Transactions on Information Theory, vol. 44, no. 7,
pp. 2796–2815, 1998.
[15] Y. Mei, Y.-H. Lu, Y. C. Hu, and C. G. Lee, “Energy-efficient motion
planning for mobile robots,” in Robotics and Automation, 2004.
Proceedings. ICRA’04. 2004 IEEE International Conference on, vol. 5,
pp. 4344–4349, IEEE, 2004.
[16] U. Ali, H. Cai, Y. Mostofi, and Y. Wardi, “Motion and communication
co-optimization with path planning and online channel estimation,”
arXiv preprint arXiv:1603.01672, 2016.
[17] Y. Zeng and R. Zhang, “Energy-efficient UAV communication with
trajectory optimization,” IEEE Transactions on Wireless Communications, vol. 16, no. 6, pp. 3747–3760, 2017.
[18] P. Falugi, E. Kerrigan, and E. Van Wyk, “Imperial College London Optimal Control Software User Guide (ICLOCS).”
http://www.ee.ic.ac.uk/ICLOCS/, 2010.
[19] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge
university press, 2004.
[20] A. L. Yuille and A. Rangarajan, “The concave-convex procedure,”
Neural computation, vol. 15, no. 4, pp. 915–936, 2003.
[21] S. Wolf, “An introduction to duality in convex optimization,” Network,
vol. 153, 2011.
[22] R. S. Cheng and S. Verdú, “Gaussian multiaccess channels with ISI:
Capacity region and multiuser water-filling,” IEEE Transactions on
Information Theory, vol. 39, no. 3, pp. 773–785, 1993.
[23] S. Nazemi, K. K. Leung, and A. Swami, “QoI-aware tradeoff between
communication and computation in wireless ad-hoc networks,” in
Personal, Indoor, and Mobile Radio Communications (PIMRC), 2016
IEEE 27th Annual International Symposium on, pp. 1–6, IEEE, 2016.
| 3 |
Quality and Diversity Optimization:
A Unifying Modular Framework
Antoine Cully and Yiannis Demiris, Senior Member, IEEE
arXiv:1708.09251v1 [cs.NE] 12 May 2017
Abstract—The optimization of functions to find the best solution according to one or several objectives has a central role in
many engineering and research fields. Recently, a new family of
optimization algorithms, named Quality-Diversity optimization,
has been introduced, and contrasts with classic algorithms.
Instead of searching for a single solution, Quality-Diversity
algorithms are searching for a large collection of both diverse
and high-performing solutions. The role of this collection is to
cover the range of possible solution types as much as possible,
and to contain the best solution for each type. The contribution of
this paper is threefold. Firstly, we present a unifying framework
of Quality-Diversity optimization algorithms that covers the two
main algorithms of this family (Multi-dimensional Archive of
Phenotypic Elites and the Novelty Search with Local Competition), and that highlights the large variety of variants that
can be investigated within this family. Secondly, we propose
algorithms with a new selection mechanism for Quality-Diversity
algorithms that outperforms all the algorithms tested in this
paper. Lastly, we present a new collection management that
overcomes the erosion issues observed when using unstructured
collections. These three contributions are supported by extensive
experimental comparisons of Quality-Diversity algorithms on
three different experimental scenarios.
Fig. 1: The objective of a QD-algorithm is to generate a
collection of both diverse and high-performing solutions. This
collection represents a (model free) projection of the highdimensional search space into a lower dimensional space
defined by the solution descriptors. The quality of a collection
is defined by its coverage of the descriptor space and by the
global quality of the solutions that are kept in the collection.
evolutionary algorithms are used to generate neural networks,
robot behaviors, or objects [9], [10].
However, from a more general perspective and in contrast
Index Terms—Optimization Methods, Novelty Search, Quality-Diversity, Behavioral Diversity, Collection of Solutions.
with Artificial Evolution, Natural Evolution does not produce one effective solution but rather an impressively large set
of different organisms, all well adapted to their respective
environment. Surprisingly, this divergent search aspect of
I. INTRODUCTION
Natural Evolution is rarely considered in engineering and
Searching for high-quality solutions within a typically high- research fields, even though the ability to provide a large
dimensional search space is an important part of engineering and diverse set of high-performing solutions appears to be
and research. Intensive work has been done in recent decades promising for multiple reasons.
to produce automated procedures to generate these solutions,
For example, in a set of effective solutions, each provides
which are commonly called “Optimization Algorithms”. The an alternative in the case that one solution turns out to be less
applications of such algorithms are numerous and range from effective than expected. This can happen when the optimization
modeling purposes to product design [1]. More recently, process takes place in simulation, and the obtained result does
optimization algorithms have become the core of most machine not transfer well to reality (a phenomenon called the reality
learning techniques. For example, they are used to adjust gap [11]). In this case, a large collection of solutions can
the weights of neural networks in order to minimize the quickly provide a working solution [4]. Maintaining multiple
classification error [2], [3], or to allow robots to learn new solutions and using them concurrently to generate actions or
behaviors that maximize their velocity or accuracy [4], [5].
predict actions when done by other agents has also been shown
Inspired by the ability of natural evolution to generate to be very successful in bioinspired motor control and cognitive
species that are well adapted to their environment, Evolutionary robotics experiments[12].
Computation has a long history in the domain of optimization,
Moreover, most artificial agents, like robots, should be able
particularly in stochastic optimization [6]. For example, evolu- to exhibit different types of behavior in order to accomplish
tionary methods have been used to optimize the morphologies their mission. For example, a walking robot needs to be able
and the neural networks of physical robots [7], and to infer to move not only forwards, but in every direction and at
the equations behind collected data [8]. These optimization different speeds, in order to properly navigate in its environment.
abilities are also the core of Evolutionary Robotics in which Similarly, a robotic arm needs to be able to reach objects at
different locations rather than at a single, predefined target.
A. Cully and Y. Demiris are with the Personal Robotics Laboratory,
Department of Electrical and Electronic Engineering, Imperial College London, Despite this observation, most optimization techniques that
U.K. (e-mail: [email protected]; [email protected]).
are employed to learn behaviors output only a single solution:
the one which maximizes the optimized function [10], [7], [5]. tend to the global-optimum and the diversity of the produced
Learning generic controllers that are able to solve several tasks walking behaviors will not be enough to properly control the
is particularly challenging, as it requires testing each solution robot. For instance, it will not contain slow behaviors, which
on several scenarios to assess their quality [13]. The automatic are essential for the robot’s manoeuvrability. This example
creation of a collection of behaviors is likely to overcome these illustrates that sampling the entire range of possible solutions
limitations and will make artificial agents more versatile.
is not always related to searching for the local optima, and why
The diversity of the solutions could also be beneficial for it may be useful to have the diversity preservation mechanism
the optimization process itself. The exploration process may not correlated with the performance function, but rather based
find, within the diversity of the solutions, stepping stones that on differences in the solution type.
allow the algorithm to find even higher-performing solutions.
Similarly, the algorithms may be able to solve a given problem B. Searching for Diverse Solutions
faster if they can rely on solutions that have been designed
Following this idea of a non performance-based diversity
for different but related situations. For example, modifying mechanism, the Novelty Search algorithm [18] introduces the
an existing car design to make it lighter might be faster than idea of searching for solutions that are different from the
inventing a completely new design.
previous ones, without considering their quality. This concept
Attracted by these different properties several recent works, is applied by optimizing a “novelty score” that characterizes the
such as Novelty Search with Local Competition [14] and the difference of a solution compared to those already encountered,
MAP-Elites algorithm [15], started to investigate the question which are stored in a “novelty archive”. The novelty archive is
of generating large collections of both diverse and high- independent from the population of the evolutionary algorithm.
performing solutions. Pugh et al. [16], [17] nicely named this question the Quality-Diversity (QD) challenge.

After a brief description of the origins of QD-algorithms in the next section, we unify these algorithms into a single modular framework, which opens new directions to create QD-algorithms that combine the advantages of existing methods (see section III). Moreover, we introduce a new QD-algorithm based on this framework that outperforms the existing approaches by using a new selective pressure, named the "curiosity score". We also introduce a new archive management approach for unstructured archives, like the novelty archive [18]. The performance of these contributions is assessed via an extensive experimental comparison involving numerous variants of QD-algorithms (see section IV). After the conclusion, we introduce the open-source library designed for this study, which can be openly used by interested readers (see section VI).

II. RELATED WORKS AND DEFINITIONS

While the notion of Quality-Diversity is relatively recent, the problem of finding multiple solutions to a problem is a long-standing challenge.

A. Searching for Local Optima

This challenge was first addressed by multimodal function optimization algorithms, including niching methods in Evolutionary Computation [19], [20], [21], which aim to find the local optima of a function. These algorithms mainly involve niche and genotypic diversity preservation mechanisms [21], like clustering [22] and clearing [23] methods.

However, in many applications, some interesting solutions are not captured by the local optima of the fitness function. For example, it is important for walking robots to be able to control the walking speed; however, there is no guarantee that the performance function (i.e., the walking speed [24], [25]) will show local optima that are diverse enough to provide a complete range of walking speeds. Typically, if the optimized function is mono-modal (i.e., without local optima), the population would converge to this single optimum and provide only one type of solution.

B. Searching for Novelty

An alternative approach, Novelty Search [18], abandons the objective function altogether and instead rewards solutions for being different from those encountered so far, according to a novelty score. The novelty score is computed as the average distance of the k-nearest neighboring solutions that are currently in the novelty archive, while the distances are computed according to a user-defined solution descriptor (also called a behavioral characterization, or behavioral descriptor [18], [13]). When the novelty score of a solution exceeds a pre-defined threshold, this solution is added to the archive and is thus used to compute the novelty score of future solutions.

The main hypothesis behind this approach is that, in some cases, the optimal solutions cannot be found by simply maximizing the objective function. This is because the algorithm first needs to find stepping stones that are ineffective according to the objective function, but lead to promising solutions afterwards. A good illustration of this problem is the "deceptive maze" [18], in which following the objective function inevitably leads to a dead-end (a local extremum). The algorithm has to investigate solutions that lead the agent further from the goal before being able to find solutions that actually solve the task.

The authors of Novelty Search also introduced the "Novelty Search with Local Competition" algorithm (NSLC) [14], in which the exploration focuses on solutions that are both novel (according to the novelty score) and locally high-performing. The main insight consists of comparing the performance of a solution only to those that are close in the descriptor space. This is achieved with a "local quality score", defined as the number of the k-nearest neighboring solutions in the novelty archive with a lower performance (e.g., a slower walking speed [14]) than the considered solution. The exploration is then achieved with a multi-objective optimization algorithm (e.g., NSGA-II [26]) that optimizes both the novelty and local quality scores of the solutions. However, the local quality score does not influence the threshold used to select whether an individual is added to the novelty archive. The final result of NSLC is the population of the optimization algorithm, which contains solutions that are both novel and high-performing compared to other local solutions. In other words, the population gathers solutions that are both different from those saved in the novelty archive, and high-performing when compared to similar types of solutions.
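For concreteness, the two scores at the core of Novelty Search and NSLC can be written in a few lines of Python. This is only an illustrative sketch (the function and variable names are ours, not those of any particular implementation); the archive is assumed to be a list of (descriptor, performance) pairs, with k = 15 as in the experiments reported later:

import numpy as np

def novelty(descriptor, archive_descriptors, k=15):
    """Average distance to the k nearest neighbors stored in the novelty archive."""
    dists = np.linalg.norm(np.asarray(archive_descriptors) - np.asarray(descriptor), axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())

def local_quality(descriptor, performance, archive, k=15):
    """Local competition: number of the k nearest neighbors with a lower performance."""
    descriptors = np.asarray([d for d, _ in archive])
    performances = np.asarray([p for _, p in archive])
    nearest = np.argsort(np.linalg.norm(descriptors - np.asarray(descriptor), axis=1))[:k]
    return int(np.sum(performances[nearest] < performance))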
The first applications of NSLC consisted of evolving both
the morphology and the behavior of virtual creatures in order to
generate a population containing diverse species, ranging from
slow and massive quadrupeds to fast and lightweight unipedal
hoppers by comparing velocity only between similar species
[14]. In this experiment, the solution descriptor was defined
as the height, the mass and the number of active joints, while
the quality of the solutions was governed by their walking
speed. At the end of the evolutionary process, the population
contained 1,000 different species. These results represent the
very first step in the direction of generating a collection of
diverse and high-performing solutions covering a significant
part of the spectrum of possibilities.
C. Gathering and Improving these Solutions into Collections

Instead of considering the population of NSLC as the result of the algorithm, Cully et al. [13] suggested considering the novelty archive as the result. Indeed, the aim of the novelty archive is to keep track of the different solution types that are encountered during the process, and thus to cover as much as possible of the entire descriptor space. Therefore, the novelty archive can be considered as a collection of diverse solutions on its own. However, the solutions are stored in the collection without considering their quality: as soon as a new type of solution is found, it is added to the archive. While this procedure allows the archive to cover the entire spectrum of possible solutions, in the original version of NSLC only the first encountered solution of each type is added to the archive. This implies that when a better solution is found for a solution type already present in the archive, this solution is not added to the archive. This mechanism prevents the archive from improving over time.

Based on this observation, a variant of NSLC, named "Behavioral Repertoire Evolution" (BR-Evolution [13]), has been introduced to progressively improve the archive's quality by replacing the solutions that are kept in the archive with better ones as soon as they are found. This approach has been applied to generate "Behavioral Repertoires" in robotics, which consist of a large collection of diverse, but effective, behaviors for a robotic agent, obtained in a single run of an evolutionary algorithm. It has also been used to produce collections of walking gaits, allowing a virtual six-legged robot to walk in every direction and at different speeds. The descriptor space is defined as the final position of the robot after walking for 3 seconds, while the quality score corresponds to an orientation error. As we reproduce this experiment in this paper, we provide additional descriptions and technical details in section IV-C.

The concepts introduced with BR-Evolution have also later been employed in the Novelty-based Evolutionary Babbling (Nov-EB) [27], which allows a robot to autonomously discover the possible interactions with objects in its environment. This work draws a first link between QD-algorithms and the domain of developmental robotics, which is also studied in several other works (see [28] for an overview).

One of the main results demonstrated with the BR-Evolution experiments is that this algorithm is able to generate an effective collection of behaviors several times faster than by optimizing each solution independently (at least 5 times faster and about 10 times more accurate [13]). By "recycling" and improving solutions that are usually discarded by traditional evolutionary algorithms, the algorithm is able to quickly find the necessary stepping stones. This observation correlates with the earlier presented hypothesis that QD-algorithms are likely to benefit from the diversity contained in the collection to improve their optimization and exploration abilities.

However, it has been noticed that the archive improvement mechanism may "erode" the borders and alter the coverage of the collection [29]. Indeed, there are cases where the new, and better, solution found by the algorithm is less novel than the one it will replace in the archive. For instance, if high performance can be more easily achieved for a solution in the middle of the descriptor space, then it is likely that the solutions near the borders will progressively be replaced by slightly better, but less novel, solutions. In addition to eroding the borders of the collection, this phenomenon will also increase the density of solutions in regions with a high performance. For instance, this phenomenon has been observed in the generation of collections containing different walking and turning gaits [29]. The novelty archive of the original NSLC algorithm had a better coverage of the descriptor space (but with lower performance scores) than the one from BR-Evolution, because it is easier for the algorithms to find solutions that make the robot walk slowly rather than solutions that make it walk fast or execute complex turning trajectories (in section III-A2 of this paper, we introduce a new archive management mechanism that overcomes these erosion issues).

D. Evolving the Collection

Following different inspirations from the works presented above, the Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm [15] has recently been introduced. While this algorithm was first designed to "illuminate" the landscape of objective functions [30], it has shown itself to be an effective algorithm to generate a collection of solutions that are both diverse and high-performing. The main difference with NSLC and BR-Evolution is that, in MAP-Elites, the population of the algorithm is the collection itself, and the selection, mutation and preservation mechanisms directly consider the solutions that are stored in the collection.

In MAP-Elites, the descriptor space is discretized and represented as a grid. Initially, this grid is empty and the algorithm starts with a randomly generated set of solutions. After evaluating each solution and recording its associated descriptor, these solutions are potentially added to the corresponding grid cells. If the cell is empty, then the solution is added to the grid; otherwise, only the best solution among the new one and the one already in the grid is kept. After the initialization, a solution is randomly selected via a uniform distribution among those in the grid, and is mutated. The new solution obtained after the mutation is then evaluated and fitted back into the grid following the same procedure as in the initialization. This selection/mutation/evaluation loop is repeated several million times, which progressively improves the coverage and the quality of the collection.
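This selection/mutation/evaluation loop is compact enough to be sketched directly. The following Python fragment is a minimal illustration (not the original implementation): it assumes a user-supplied evaluate(genotype) function returning a descriptor normalized to [0, 1] in each dimension together with a performance to maximize, and it uses a simple Gaussian mutation for brevity instead of the polynomial mutation used in the experiments:

import random
import numpy as np

def map_elites(evaluate, genotype_size, cells_per_dim=100,
               iterations=50_000, batch_size=200, sigma=0.05):
    grid = {}  # cell index (tuple of ints) -> (genotype, descriptor, performance)

    def cell_of(descriptor):
        return tuple(min(int(d * cells_per_dim), cells_per_dim - 1) for d in descriptor)

    def try_add(genotype):
        descriptor, performance = evaluate(genotype)
        cell = cell_of(descriptor)
        best = grid.get(cell)
        if best is None or performance > best[2]:   # keep only the best solution per cell
            grid[cell] = (genotype, descriptor, performance)

    for _ in range(batch_size):                      # random initialization
        try_add(np.random.rand(genotype_size))

    for _ in range(iterations):                      # selection / variation / evaluation loop
        for _ in range(batch_size):
            parent = random.choice(list(grid.values()))[0]   # uniform random selection from the grid
            child = np.clip(parent + np.random.normal(0.0, sigma, genotype_size), 0.0, 1.0)
            try_add(child)
    return grid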
In one of its first applications, MAP-Elites was used to generate a large collection of different but effective ways to walk in a straight line by using the legs of a six-legged robot in different ways. This collection of behaviors was then used to allow the robot to quickly adapt to unforeseen damage conditions by selecting a new walking gait that still works in spite of the situation [4]. The same algorithm has also been used to generate behavioral repertoires containing turning gaits, similarly to the work described previously, and it was shown that MAP-Elites generates better behavior collections while being faster than the BR-Evolution algorithm [31].

The behaviors contained in these collections can be seen as locomotion primitives and can thus be combined to produce complex behaviors. Following this idea, the Evolutionary Repertoire-Based Control (EvoRBC [32]) evolves a neural network, called the "arbitrator", that selects the appropriate behavior in the repertoire, which was previously generated with MAP-Elites. This approach has been applied to a four-wheeled steering robot that has to solve a navigation task through a maze composed of several sharp angles, and a foraging task in which the robot needs to collect and consume as many objects as possible.

These applications take advantage of the non-linear dimensionality reduction provided by MAP-Elites. Indeed, both applications select behaviors from the descriptor space, which is composed of fewer than a dozen dimensions (respectively, 36 to 6 dimensions [4] and 8 to 2 dimensions [32]), while the parameter space often consists of several dozen dimensions.

MAP-Elites has been employed in several other applications, including the generation of different morphologies of soft robots [15], or the production of images that are able to fool deep neural networks [33]. It has also been used to create "innovation engines" that are able to autonomously synthesize pictures that resemble actual objects (e.g., television, bagel, strawberry) [34].

However, the obligation to discretize the descriptor space may be limiting for some applications, and the uniform random selection may not be suitable for particularly large collections, as it dilutes the selection pressure. Indeed, the uniform random selection of individuals among the collection makes the selection pressure inversely proportional to the number of solutions actually contained in the collection. A simple way to mitigate this limitation is to bias the selection according to the solution performance or according to its novelty score (as introduced by Pugh et al. [16], [17]). Another direction consists in having a number of cells that is independent of the dimensionality of the descriptor space, for example by using computational geometry to uniformly partition the high-dimensional descriptor space into a pre-defined number of regions [35], or by using Hierarchical Spatial Partitioning [36].

E. Quality-Diversity Optimization

Based on the seminal works presented previously [14], [15], [13] and the formulation of Pugh et al. [16], [17], we can outline a common definition:

Definition II.1: Quality-Diversity optimization
A Quality-Diversity optimization algorithm aims to produce a large collection of solutions that are both as diverse and high-performing as possible, and which covers a particular domain, called the descriptor space.

While this definition is shared with the existing literature, we also stress the importance of the coverage regularity of the produced collections. In the vast majority of the applications presented previously, not only is the coverage of importance, but its uniformity is as well. For example, in the locomotion tasks, an even coverage of all possible turning abilities of the robot is required to allow the execution of arbitrary trajectories [29].

Based on this definition, the overall performance of a QD-algorithm is defined by the quality of the produced collection of solutions according to three criteria:
1) the coverage of the descriptor space;
2) the uniformity of the coverage; and
3) the performance of the solution found for each type.

F. Understanding the Underlying Mechanisms

In addition to direct applications, several other works focus on studying the properties of QD-algorithms. For example, Lehman et al. [37] revealed that extinction events (i.e., erasing a significant part of the collection) increase the evolvability of the solutions [38] and allow the process to find higher-performing solutions afterwards. For example, with MAP-Elites, erasing the entire collection except 10 solutions every 100,000 generations increases the number of filled cells by 20% and the average quality of the solutions by 50% in some experimental setups [37].

In other studies, Pugh et al. [16], [17] analyzed the impact of the alignment between the solution descriptor and the quality score on both Novelty-based approaches (including NSLC) and MAP-Elites. For example, if the descriptor space represents the location of the robot in a maze, and the quality score represents the distance between this position and the exit, then the descriptor space and the quality score are strongly aligned, because the score can be computed according to the descriptor. The experimental results show that in the case of such alignment with the quality score, novelty-based approaches are more effective than MAP-Elites, and vice-versa.

Another study also reveals that the choice of the encoding (the mapping between the genotype and the phenotype) critically impacts the quality of the produced collections [39]. The experimental results link these differences to the locality of the encoding (i.e., the propensity of the encoding to produce similar behaviors after a single mutation). In other words, the behavioral diversity provided by indirect encodings, which is known to empower traditional evolutionary algorithms [40], appears to be counterproductive with MAP-Elites, while the locality of direct encodings allows MAP-Elites to consistently fill the collection of behaviors.

These different works illustrate the interest of the community in QD-algorithms, and show that our understanding of the underlying dynamics is only in its early stages. However, very few works
compare MAP-Elites and NSLC on the same applications (the few exceptions being [16], [17], [31], [36]), or investigate alternative approaches to produce collections of solutions. One of the goals of this paper is to introduce a new and common framework for these algorithms to exploit their synergies and to encourage comparisons and the creation of new algorithms. The next section introduces this framework.

III. A UNITED AND MODULAR FRAMEWORK FOR QD-OPTIMIZATION ALGORITHMS

As presented in the previous section, most works using or comparing QD-algorithms consider either MAP-Elites or NSLC-based algorithms, or direct comparisons of these two algorithms. These comparisons are relevant because of the distinct origins of these two algorithms. However, they only provide high-level knowledge and do not provide much insight into the properties or particularities that make one algorithm better than the other.

In this section, we introduce a new and common framework for QD-algorithms, which can be instantiated with different operators, such as different selection or aggregation operators, similarly to most evolutionary algorithms. This framework demonstrates that MAP-Elites and NSLC can be formulated as the same algorithm using a different combination of operators. Indeed, specific configurations of this framework are equivalent to MAP-Elites or NSLC. However, this framework opens new perspectives, as some other configurations lead to algorithms that share the advantages of both MAP-Elites and NSLC. For example, it can be used to design an algorithm that is as simple as MAP-Elites but works on an unstructured archive (rather than a grid), or to investigate different selection pressures like NSLC. Moreover, this decomposition of the algorithms allows us to draw conclusions on the key elements that make an algorithm better than the others (e.g., the selective pressure or the way to form the collection).

This new formulation is composed of two main operators: 1) a container, which gathers and orders the solutions into a collection, and 2) the selection operator, which selects the solutions that will be altered (via mutations and cross-over) during the next batch (or generation). The selection operator is similar to the selection operators used in traditional evolutionary algorithms, except that it considers not only the current population, but all the solutions contained in the container as well. Other operators can be considered with this new formulation, like the traditional mutation or cross-over operators. However, in this paper we only consider the operators described above that are specific to QD-algorithms.

After a random initialization, the execution of a QD-algorithm based on this framework follows four steps that are repeated:
• The selection operator produces a new set of individuals (b_parents) that will be altered in order to form the new batch of evaluations (b_offspring).
• The individuals of b_offspring are evaluated and their performance and descriptor are recorded.
• Each of these individuals is then potentially added to the container, according to the solutions already in the collection.
• Finally, several scores, like the novelty, the local competition, or the curiosity (defined in section III-B3) score, are updated.

These four steps repeat until a stopping criterion is reached (typically, a maximum number of iterations) and the algorithm outputs the collection stored in the container. More details can be found in the pseudo-code of the algorithm, defined in Algorithm 1. In the following subsections, we will detail different variants of the container, as well as the selection operators.

A. Containers

The main purpose of a container is to gather all the solutions found so far into an ordered collection, in which only the best and most diverse solutions are kept. One of the main differences between MAP-Elites and NSLC is the way the collection of solutions is formed. While MAP-Elites relies on an N-dimensional grid, NSLC uses an unstructured archive based on the Euclidean distance between solution descriptors. These two different approaches constitute two different container types. In the following, we will detail their implementation and particularities.

1) The Grid: MAP-Elites employs an N-dimensional grid to form the collection of solutions [15], [4]. The descriptor space is discretized and the different dimensions of the grid correspond to the dimensions of the solution descriptor. With this discretization, each cell of the grid represents one solution type. In the initial works introducing MAP-Elites, only one solution can be contained in each cell. However, one can imagine having more individuals per cell (like in [17], in which two individuals are kept). Similarly, in the case of multi-objective optimization, each cell can represent the Pareto front for each solution type. Nevertheless, these considerations are outside the scope of this paper.

a) Procedure to add solutions into the container: The procedure to add an individual to the collection is relatively straightforward. If the cell corresponding to the descriptor of the individual is empty, then the individual is added to the grid. Otherwise, if the cell is already occupied, only the individual with the highest performance is kept in the grid.

b) Computing the novelty/diversity of a solution: The inherent structure of the grid provides an efficient way to compute the novelty of each solution. Instead of considering the average distance of the k-nearest neighbors as a novelty score, as suggested in [18], here we can consider the number of filled cells around the considered individual. The density of filled cells in the sub-grid defined around the individual is a good indicator of the novelty of the solution. Similarly to the "k" parameter used in the k-nearest neighbors, the sub-grid is defined according to a parameter that governs its size, which is defined as ±k cells around the individual (in each direction). In this case, the score needs to be minimized.
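As an illustration of these two operations, a minimal grid container could look as follows (a sketch with hypothetical names, assuming descriptors normalized to [0, 1] in each dimension):

import itertools

class GridContainer:
    """N-dimensional grid container (one individual per cell), as used by MAP-Elites."""

    def __init__(self, cells_per_dim, descriptor_dim):
        self.cells_per_dim = cells_per_dim
        self.descriptor_dim = descriptor_dim
        self.cells = {}  # cell index (tuple of ints) -> (descriptor, performance, genotype)

    def _cell(self, descriptor):
        return tuple(min(int(d * self.cells_per_dim), self.cells_per_dim - 1)
                     for d in descriptor)

    def add(self, descriptor, performance, genotype):
        """Add the individual if its cell is empty or if it outperforms the occupant."""
        cell = self._cell(descriptor)
        occupant = self.cells.get(cell)
        if occupant is None or performance > occupant[1]:
            self.cells[cell] = (descriptor, performance, genotype)
            return True
        return False

    def novelty(self, descriptor, k=3):
        """Number of filled cells in the +/- k sub-grid around the individual;
        this is a density measure, so lower values indicate more novel solutions."""
        center = self._cell(descriptor)
        offsets = itertools.product(range(-k, k + 1), repeat=self.descriptor_dim)
        return sum(1 for off in offsets
                   if tuple(c + o for c, o in zip(center, off)) in self.cells)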
Algorithm 1 QD-Optimization algorithm (I iterations)
A ← ∅                                             ▷ Creation of an empty container.
for iter = 1 → I do                               ▷ The main loop repeats during I iterations.
    if iter == 1 then                             ▷ Initialization.
        b_parents ← random()                      ▷ The first 2 batches of individuals are generated randomly.
        b_offspring ← random()
    else                                          ▷ The next controllers are generated using the container and/or the previous batch.
        b_parents ← selection(A, b_offspring)     ▷ Selection of a batch of individuals from the container and/or the previous batch.
        b_offspring ← random_variation(b_parents) ▷ Creation of a randomly modified copy of b_parents (mutation and/or crossover).
    for each indiv ∈ b_offspring do
        {desc, perf} ← eval(indiv)                ▷ Evaluation of the individual and recording of its descriptor and performance.
        if add_to_container(indiv) then           ▷ add_to_container returns true if the individual has been added to the container.
            curiosity(parent(indiv)) += Reward    ▷ The parent gets a reward by increasing its curiosity score (typically Reward = 1).
        else
            curiosity(parent(indiv)) −= Penalty   ▷ Otherwise, its curiosity score is decreased (typically Penalty = 0.5).
    update_container()                            ▷ Update of the attributes of all the individuals in the container (e.g., novelty score).
return A
2) The Archive: The novelty archive introduced in the Novelty Search algorithm consists of an unstructured collection of solutions that are only organized according to their descriptor and their Euclidean distance. As introduced in the BR-Evolution algorithm [13], the novelty archive can be used to form the collection of solutions by substituting solutions when better ones are found. In contrast with the grid container presented previously, the descriptor space here is not discretized and the structure of the collection autonomously emerges from the encountered solutions.
a) Procedure to add solutions into the container: The
management of the solutions is crucial with this container
because it affects both the quality, and the final coverage of the
collection. A first attempt was proposed in the BR-Evolution
algorithm [13] by extending the archive management of the
Novelty Search [18]: an individual is added to the archive if
its novelty score exceeds a predefined threshold (which can be
adjusted over time), or if it outperforms its nearest neighbor in
the archive. In the second case, the nearest neighbor is removed
from the archive and only the best of the two individuals is
kept.
While this archive management is relatively simple, further
experiments reveal underlying limitations [29]. First, an individual with the same (or very close) descriptor as another
individual can be added to the archive. Indeed, the novelty
score, which is based on the average distance of the k-nearest
neighbors, can still be relatively high even when two individuals
are close if the rest of the collection is further. One of the
consequences of using the novelty score as a criterion to add
the solution in the container is that the collection is likely to
show an uneven density of solutions [13], [29]. For example,
experiments in these works show collections that contain a high
density of solutions in certain regions (the inter-individuals
distance being notably lower than the Novelty Score threshold
used to add individuals into the collection). While this property
can be interesting for some applications, it mainly originates
from a side effect. Second, the same experiments reveal that
the replacement of individuals by better ones can erode the
border of the collection, as discussed in the previous section.
Indeed, in some cases, the individuals in the center of the
collection show better performance than the ones in its border
(because of the intrinsic structure of the performance function
or because the center has been more intensively explored).
This can lead to the replacement of individuals that are on
the border of the collection by individuals that are closer to
the center. This is an important limitation as it reduces the
coverage of the collection, as shown in [29].

Fig. 2: Management of collections of solutions based on an unstructured archive. A) A solution is directly added to the collection if its nearest neighbor from the collection is further than l. B) Conversely, if the distance is smaller than l (i.e., if the circles overlap), the new solution is not automatically added to the collection, but competes against its nearest neighbor. If the new solution dominates the one already in the collection, then the new solution replaces the previous one. C) In the strict ε-domination, a solution dominates another one if the progress in one objective is larger than the decrease in the other objective (up to a predefined value ε).
In order to mitigate these limitations, we propose the
following new way to manage solutions in the archive. A
solution is added to the archive if the distance to its nearest
neighbor exceeds a predefined threshold l (see Fig. 2.A). This
parameter defines the maximal density of the collection. The
threshold is similar to the novelty score threshold used in the
original Novelty Search algorithm, except that in this case we
only consider the distance of the nearest neighbor, and not the
average distance of the k-nearest ones.
If the distance between the new individual and its nearest
neighbor is lower than l, then this new individual can potentially
replace its nearest neighbor in the collection. This is only the
case if its distance from its second nearest neighbor exceeds
the l parameter, such that the distance among the solutions is
preserved (see Fig. 2.B) and if it improves the overall quality
of the collection. A new individual can improve the overall
collection in two ways: 1) if it has a better quality, which
increases the total quality of the collection or 2) if it has a
better novelty score, which means that it extends the coverage
of the collection. This can be seen as two objectives that need to be maximized. From this perspective, we can use the definition of Pareto dominance to decide if an individual should replace another one already in the collection. Therefore, a simple criterion could be to replace an existing individual only if it is dominated by the new one. However, this criterion is very difficult to reach, as the new individual should be both better and more diverse than the previous one. This prevents most new individuals from being added to the collection, which limits the quality of the produced collections.

In order to soften this criterion, we introduce a variant of the ε-dominance [41], which we name the exclusive ε-dominance. In this variant, we tolerate the dominating individual being worse than the other individual according to one of the objectives (up to a predefined percentage governed by ε), only if it is better on the other objective by at least the same amount (see Fig. 2.C). This criterion is stricter than the original ε-dominance, which allows an individual to be dominated by another one that is worse on both objectives. From a mathematical point of view, an individual x1 dominates x2 if these three conditions are verified:
1) N(x1) ≥ (1 − ε) · N(x2)
2) Q(x1) ≥ (1 − ε) · Q(x2)
3) (N(x1) − N(x2)) · Q(x2) > −(Q(x1) − Q(x2)) · N(x2)
with N corresponding to the Novelty Score and Q to the Quality (or performance) of an individual, which both need to be maximized¹. This set of conditions makes the addition of new individuals in the container more flexible, but rejects individuals that do not improve the collection.
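A minimal sketch of this archive management (our own illustrative code, not the implementation distributed with the paper) is given below; it assumes that novelty and quality are positive values to be maximized, as in the conditions above, and uses the default parameter values l = 0.01 and ε = 0.1 of the first experiment (Table II):

import numpy as np

def exclusive_eps_dominates(n1, q1, n2, q2, eps=0.1):
    """True if individual 1 exclusive-eps-dominates individual 2 (both scores maximized)."""
    return (n1 >= (1 - eps) * n2
            and q1 >= (1 - eps) * q2
            and (n1 - n2) * q2 > -(q1 - q2) * n2)

def try_add_to_archive(archive, candidate, novelty_fn, l=0.01, eps=0.1):
    """archive: list of dicts with 'descriptor' and 'quality'; candidate: same keys."""
    if not archive:
        archive.append(candidate)
        return True
    descs = np.asarray([ind['descriptor'] for ind in archive])
    dists = np.linalg.norm(descs - np.asarray(candidate['descriptor']), axis=1)
    order = np.argsort(dists)
    nearest, d_nearest = order[0], dists[order[0]]
    if d_nearest > l:                              # far enough from everything: add (Fig. 2.A)
        archive.append(candidate)
        return True
    d_second = dists[order[1]] if len(order) > 1 else np.inf
    if d_second <= l:                              # would crowd a second neighbor: reject (Fig. 2.B)
        return False
    cand_nov = novelty_fn(candidate['descriptor'])
    neigh = archive[nearest]
    neigh_nov = novelty_fn(neigh['descriptor'])
    if exclusive_eps_dominates(cand_nov, candidate['quality'], neigh_nov, neigh['quality'], eps):
        archive[nearest] = candidate               # replace the dominated nearest neighbor (Fig. 2.C)
        return True
    return False

In words, a candidate is added outright when it lands in free space; otherwise it can only take the place of its single nearest neighbor, and only when it exclusive-ε-dominates it.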
The experimental results presented in section IV demonstrate that this new archive management overcomes the limitations of the previous approaches by producing collections with similar coverage and quality compared with the grid-based container.

b) Computing the novelty of a solution: With the archive-based container, the computation of the novelty score can be done with the traditional approach proposed by Lehman and Stanley [18], which consists of the average distance of the k-nearest neighbors.

3) Partial Conclusion: These two different types of containers both present advantages and disadvantages. On the one hand, the grid-based container provides a simple and effective way to manage the collection. However, it requires discretizing the descriptor space beforehand, which can be problematic, for example if the discretization is not adequate or needs to be changed over time. On the other hand, the archive-based container offers more flexibility, as it only requires the definition of a distance in the descriptor space. For example, specific distances can be used to compare complex descriptors, like images, without strong knowledge of the structure of the descriptor space (e.g., number of dimensions or limits) [27]. However, this advantage is a disadvantage as well, because it implies that the algorithm needs to find the appropriate structure of the collection on its own, which represents an additional challenge compared to the grid-based container.

For these reasons, the choice of the most suitable container depends more on the considered applications than on their performance. Therefore, while we will consider both of the containers in the experimental section of this paper, we will not directly compare their results, as the comparison may not be fair and may be irrelevant with respect to the considered applications.

These two containers have been designed to provide uniform coverage of the descriptor space. However, experiments reveal that the accumulation of density in specific regions of the descriptor space is a key factor for the Novelty Search algorithm, as it allows the novelty score to constantly change over time. To avoid this issue, one can use an additional container in which the density accumulates and which drives the exploration, while the other container gathers the collection that will be returned to the user. In this paper, we will only focus on variants using a single container; however, we will consider extending the framework presented in this paper to multiple containers in future works.

B. Selection Operators

The second main difference between MAP-Elites and NSLC is the way the next batch, or population², of solutions is selected before being evaluated. On the one hand, MAP-Elites forms the next batch by randomly selecting solutions that are already in the collection. On the other hand, NSLC relies on the current population of solutions and selects the individuals that are both novel and locally high-performing (according to a Pareto front). This difference is of significant importance as MAP-Elites uses the entire collection of solutions, while NSLC only considers a smaller set of solutions.

Similarly to the concept of containers, different approaches for selecting the individuals of the next batch can be considered. In the following subsections, we will present several selection methods that can be employed with both container types.

1) No Selection: A naive way to generate the next batch of evaluations is to generate random solutions. However, this approach is likely ineffective because it makes the QD-algorithm equivalent to a random sampling of the search space. In general, this approach provides an intuition about the difficulty of the task and can be used as a baseline when comparing alternative approaches.

2) Uniform Random Selection: A second way to select solutions that will be used in the next batch is to select solutions with a uniform probability from those that are already in the collection. This approach is the one used in MAP-Elites and follows the idea that promising solutions are close to each other. In addition to being relatively simple, this approach has the advantage of being computationally efficient. However, one of its main drawbacks is that the selection pressure decreases as the number of solutions in the collection increases (the chance for a solution to be selected being inversely proportional to the number of solutions in the collection), which is likely to be ineffective with large collections.

¹ This definition could very likely be generalized to more than two objectives, but this question is beyond the scope of this paper.
² We use the word batch instead of generation because most of the approaches presented in this paper can be used in a "steady state", selecting and evaluating only one individual at each iteration. However, considering the selection and evaluation in batches allows the algorithm to execute the evaluations in parallel, which increases the computational efficiency of the algorithm.
3) Score Proportionate Selection: An intuitive way to mitigate the loss of selection pressure from the random selection is to bias the selection according to a particular score. Similarly to traditional evolutionary algorithms, the selection among the solutions of the collection can be biased according to their quality (fitness), following the roulette-wheel or tournament-based selection principles [42].

Other scores can also be considered to bias the selection. For example, the novelty score of each solution can substitute for the quality score, fostering the algorithm to focus on solutions that are different from the others.

In addition to these two scores, in this paper we introduce a new score, named the Curiosity Score, that can be used to bias the selection and which is defined as follows:

Definition III.1: Curiosity Score
The curiosity score represents the propensity of an individual to generate offspring that are added to the collection.

A practical implementation (see Algorithm 1) consists of increasing the curiosity score of an individual (initially set to zero) each time one of its offspring is added to the collection. Conversely, when an offspring fails to be added to the archive (because it is not novel or high-performing enough), the curiosity score is decreased. In this paper, we use respectively 1 and −0.5 for the reward and the penalty values. With this implementation, individuals may gain momentum, but this also means that such individuals will be selected more often, making their score more likely to rapidly decrease afterwards.
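As an illustration, a roulette-wheel selection biased by the curiosity score could be sketched as follows (illustrative code only; how negative scores are turned into selection weights is not specified in the text, so this sketch simply shifts all scores to be positive):

import numpy as np

REWARD, PENALTY = 1.0, 0.5

def update_curiosity(parent, child_was_added):
    # Reward/penalty update of Algorithm 1, applied to the parent of each evaluated offspring.
    parent['curiosity'] += REWARD if child_was_added else -PENALTY

def curiosity_roulette(container, batch_size, rng=None):
    """Select a batch from the whole container, proportionally to curiosity scores."""
    if rng is None:
        rng = np.random.default_rng()
    individuals = list(container)              # e.g., the grid cells or the archive
    scores = np.array([ind['curiosity'] for ind in individuals], dtype=float)
    weights = scores - scores.min() + 1e-6     # shift so that every weight is strictly positive
    chosen = rng.choice(len(individuals), size=batch_size, p=weights / weights.sum())
    return [individuals[i] for i in chosen]

The same function with the quality or the novelty score in place of the curiosity score gives the fitness-based and novelty-based variants.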
We named this score "Curiosity" because it encourages the algorithm to focus on individuals that produce interesting solutions, until nothing new is produced. In other words, the algorithm focuses on regions of the search space as long as they produce interesting results; then, when the algorithm gets "bored", it focuses its attention on different regions. This behavior is similar to that of "Intelligent Adaptive Curiosity" [43], while the implementation and the inspiration are strictly different.

A similar approach has recently been introduced to bias the selection by using the same kind of successful-offspring counter [44]. The difference is that, in this paper, the counter is initialized to a fixed value (i.e., 10 in [44]) instead of starting at 0 like with the curiosity score, and that when an offspring is added to the collection, the counter is not incremented (like with the curiosity score), but rather reset to its maximal value. This difference makes the selection process more forgiving, as only one successful offspring is enough to make its parent very likely to be selected again. While it would be interesting to compare the effect of these two different, but related, methods, this comparison is out of the scope of this paper.

Although there is no overall agreement on the definition of evolvability [45], we can note that our definition of the curiosity score shares similarities with some of the first definitions of evolvability, like the one given by Lee Altenberg, who defines evolvability as the ability of a population to produce variants fitter than any yet existing [46]. One important aspect shared by these two definitions is that the score or the evolvability may dynamically change over time according to the state of the population or the collection, which is rarely considered in definitions of evolvability. For instance, the definition often used in Evolutionary Computation [38], [45], [30], which considers that evolvability captures the propensity of random variations to generate phenotypic diversity, depends on the genome of the individual but not on the state of the population.

4) Population-based Selection: All selection approaches described so far select the individuals from the solutions contained in the collection. This represents one of the main differences introduced by MAP-Elites compared to NSLC and traditional evolutionary algorithms, as the collection becomes the "population" of the algorithm and this population progressively grows during the evolutionary process. However, to handle the selection, we can consider employing populations in parallel to the collection. This is in line with the Novelty Search algorithm, which computes the novelty score based on the collection (the novelty archive), but uses a traditional population to handle the selection.

This approach can be included in the framework proposed in this paper by initializing the population with the first batch; then, after each batch evaluation, a new population can be generated based on the individuals of the current population (b_offspring) and their parents (b_parents). Classic selection approaches, like tournament or score-proportionate selection, can be employed to select the individuals that will be part of the next population. Like in the collection-based selection, the selection can be biased according to either the quality, novelty or curiosity scores.

5) Pareto-based Selection: The population-based selection approach can be extended to multi-objective selection, via Pareto ranking, by taking inspiration from the NSGA-II algorithm [26]. In this paper, we will particularly consider a Pareto-based selection operator that takes both the novelty score and the local quality score (the number of neighbors that outperform the solution) of the individuals into account. This selection operator is similar to the selection procedure of NSLC.

6) Partial Conclusion: These different selection operators can all be equally used with both of the containers presented in the previous section. While the choice of the container influences the type of the produced results (e.g., unstructured or discretized descriptor space, see section III-A3), the selection operators will only influence the quality of the results. Therefore, it is of importance to know which operators provide the best collection of solutions. In the following section, we provide a first answer to this question by comparing the collections produced by the different selection operators and by investigating their behaviors.

IV. EXPERIMENTAL COMPARISONS

To compare the different combinations of containers and selection operators, we consider three experimental scenarios that take place in simulation: 1) a highly redundant robotic arm discovering how to reach points in its vicinity, 2) a virtual
six-legged robot learning to walk in every direction, and 3) the same robot searching for a large number of ways to walk on a straight line.

TABLE I: The different combinations of containers and selection operators that are evaluated in this paper. The variants in bold are tested on the three experimental scenarios while the others are only tested on the first one.

Variant name          Container   Selection Op.                Considered Value           Related approach
arch no selection     archive     no selection                 —                          Random Search / Motor Babbling
arch random           archive     random                       —
arch pareto           archive     Pareto                       Novelty & Local Quality
arch fitness          archive     Score-based                  Fitness
arch novelty          archive     Score-based                  Novelty                    MAP-Elites with Novelty [16]
arch curiosity        archive     Score-based                  Curiosity
arch pop fitness      archive     Population-based             Fitness                    Traditional EA
arch pop novelty      archive     Population-based             Novelty                    Novelty Search [18]
arch pop curiosity    archive     Population-based             Curiosity
grid no selection     grid        no selection                 —                          Random Search / Motor Babbling
grid random           grid        random                       —                          MAP-Elites [15]
grid pareto           grid        Pareto                       Novelty & Local Quality
grid fitness          grid        Score-based                  Fitness
grid novelty          grid        Score-based                  Novelty
grid curiosity        grid        Score-based                  Curiosity
grid pop fitness      grid        Population-based             Fitness                    Traditional EA
grid pop novelty      grid        Population-based             Novelty
grid pop curiosity    grid        Population-based             Curiosity
NSLC                  grid        Population & archive based   Novelty & Local Quality    Novelty Search with Local Competition [14]
In addition to the tested combinations of containers and
selection operators, we include the original Novelty Search with
Local Competition algorithm (NSLC, [14]) in our experimental
comparisons in order to assess the influence of the lack of
density accumulation in the descriptor space, as discussed in
section III-A3. Like in [16], all individuals of the population
are potentially added to a grid container (the same as the one
used with the other variants) after each generation. We then
used the produced grid container to compare NSLC with the
other variants. For this experiment, we used the implementation
of NSLC provided by the Sferesv2 framework [47].
In the experiments presented in this paper, we only consider
direct encodings with genomes that are small and fixed in
size. It would be interesting to see how the conclusions drawn
from these experiments hold with large genomes, genomes of
increasing complexity over generations, or indirect encodings.
For instance, [39] highlights that indirect encodings may have
a negative impact on QD-algorithms. However, these further
considerations are out of the scope of this paper and will be
considered in future works.
A. Quality Metrics

In order to compare the quality of the collections generated by each variant, we define four quality metrics that characterize both the coverage and the performance of the solutions:

1) Collection Size: indicates the number of solutions contained in the collection and thus refers to the proportion of the descriptor space that is covered by the collection.

2) Maximal Quality: corresponds to the quality of the best solution contained in the collection and indicates whether the global extremum of the performance function has been found (if it is known).

3) Total Quality: is the sum of the qualities over all the solutions contained in the collection. This metric provides information on the global quality of the collection, as it can be improved either by finding additional individuals or by improving those already in the collection. It corresponds to the metric named "Quality Diversity" used in [16].

4) Total Novelty: This metric is similar to the previous one, except that the sum considers the novelty score and not the quality value. This metric indicates whether the collection is well distributed over the descriptor space or rather if the solutions are highly concentrated. This metric will not be considered for collections produced with the grid-based container because the distribution of the solutions is forced by the grid.

Other metrics: In [15], [39], the authors presented other metrics to compare collections produced by MAP-Elites. However, the main idea of these metrics is to normalize the quality of each solution by the maximal quality that can be achieved by each type of solution (i.e., by each grid cell). To infer the highest possible quality for each cell, the authors selected the best solution found by all the algorithms over all the replicates. However, this approach is only feasible with the grid-based container because the continuous descriptor space used in the archive-based container makes it challenging to associate and compare each "solution type". For this reason, in this paper we decided to only consider the four metrics presented previously.
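Given a collection and the novelty function defined earlier, these four metrics reduce to a few lines (illustrative sketch; individuals are assumed to expose 'descriptor' and 'quality' fields, and novelty_fn is assumed to return the average k-nearest-neighbor distance):

def quality_metrics(collection, novelty_fn):
    """Compute the four quality metrics of a collection of individuals."""
    individuals = list(collection)
    qualities = [ind['quality'] for ind in individuals]
    return {
        'collection_size': len(individuals),
        'maximal_quality': max(qualities),
        'total_quality': sum(qualities),
        'total_novelty': sum(novelty_fn(ind['descriptor']) for ind in individuals),
    }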
B. The Redundant Arm

1) Experimental Setup: In this first experimental comparison, we consider a redundant and planar robotic arm with 8 degrees of freedom that needs to discover how to reach every point in its vicinity. The quality function captures the idea that all joints of the arm should contribute equally to the movement, which allows quick transitions from one configuration to the next one. This constraint is defined by the variance of the angular position of the joints when the robot reaches its final configuration, and needs to be minimized by the algorithm. This experimental setup illustrates the need for quality-diversity algorithms because it needs to simultaneously find a solution for all the reachable positions and to optimize the quality function for each of them.
[Fig. 3 shows one typical collection per variant (No_selection, Random, Pareto, Fitness, Novelty, Curiosity, Pop_fitness, Pop_novelty, Pop_curiosity, and NSLC), grouped into archive-based and grid-based containers; the color scale encodes the quality in rad², from 0 down to −0.7.]
Fig. 3: Typical collections of solutions produced with QD-algorithms. These collections consist of several thousand colored
dots or cells that represent the final position of the gripper. The color of each dot or cell indicates the quality of the solution
(lighter is better).
TABLE II: Parameter values used in the experiments.

Parameters                          First exp      Second exp        Third exp
Batch size                          200            200               200
No. of iterations                   50,000         10,000            20,000
Descriptor size                     2              2                 6
Genotype size                       8              36                36
Genotype type                       real values    sampled values    sampled values
Crossover                           disabled       disabled          disabled
Mutation rate for each parameter    12.5%          5%                5%
Mutation type                       Polynomial     Random new value  Random new value
Grid container:
  Grid size                         100 × 100      100 × 100         5 cells/dim
  Sub-grid depth                    ±3 cells       ±5 cells          ±1 cells
Archive container:
  l                                 0.01           0.01              0.25
  ε                                 0.1            0.1               0.1
  k                                 15             15                15
NSLC variant:
  ρ_init                            0.01           0.01              1
  k                                 15             15                15
To simulate the robotic arm, we consider its kinematic
structure, which provides the location of its gripper according
to the angular position of all joints. The solutions that are
optimized by the algorithms consist of a set of angular positions
that govern the final configuration of the different joints of the
robot. Neither the trajectory of the robot between its initial
and final positions, nor internal collisions are simulated in this
experiment.
The solution descriptor is defined as the final position of
the gripper, which is then normalized according to a square
bounding box to have values between 0 and 1. The size of the
bounding box is 2 ∗ 1.1 ∗ L, where L is the total length of the
robot when totally deployed (the factor 1.1 is used to leave
some room between the border of the descriptor space and the
robot). The center of the box corresponds to the location of
the robot’s base.
An extensive set of configurations from the QD-algorithm
framework (see algorithm 1) has been tested on this experimental setup (see Table I), and the execution of each of those
variants has been replicated 20 times. The parameter values
used for this experiment can be found in Table II.
2) Results: A typical collection of solutions produced by
each of the tested variants is pictured in Figure 3. The
collections using the archive-based container appear very
similar to those using the other container type. This similarity,
which holds in the other experiments as well, demonstrates that
the archive management introduced in this paper successfully
addresses the erosion issues described previously. Theoretically,
the ideal result homogeneously covers a quasi-circular region
and the performance (i.e., the color) should be arranged in
concentric shapes resembling cardioids (inverted, heart-shaped
curves)3 . This type of collection is found using the random,
the fitness or the curiosity-based selection operators (over the
collection) regardless of the container type used, as well as
with the NSLC algorithm. The novelty based selection with
the archive-based container also produces such a collection,
while this is not the case with the grid-based container. It is
interesting to note that the no-selection approach, which can
be considered as a motor babbling or random search, is unable
to produce the desired result. While the coverage is decent,
the quality of the gathered solutions is not satisfactory.
None of the population-based variants managed to produce a
collection that both covers all the reachable space and contains
high-performing solutions. This result could be explained by a
convergence of the population toward specific regions of the
collection. Typically, the population considering the fitness is
likely to converge toward regions with high quality, whereas
the population considering the novelty score converges to the
3 We can demonstrate that the points with the highest performance are
located on a curve resembling a cardioid by computing the position of the
end-effector for which all angular positions of the joints are set to the same
angle (from −π/2 to +π/2).
[Fig. 4 panels: Collection size, Maximal Quality, Total Quality and Total Novelty as a function of the number of iterations, for the archive-based container (top row) and the grid-based container (bottom row); the legend lists the variants pareto, pop_curiosity, pop_novelty, pop_fitness, curiosity, novelty, fitness, random, no_selection and NSLC.]
Fig. 4: Progression of the quality metrics in the redundant arm experiment. The first row depicts the results from variants using
the archive-based container, while the second row considers variants with the grid-based container. Because of the difficulty to
distinguish the different variants, a zoom on the best variants during the last 1000 batches is pictured on the right of each plot.
The middle lines represent the median performance over the 20 replications, while the shaded areas extend to the first and third
quartiles. In this experiment, the quality score is negative, thus in order to get a monotonic progression in the “Total Quality”
metric, +1 is added to the Quality to have a positive score.
border of the collection. The results of the variant using a
population with the curiosity score could be explained by the
difficulty to keep track of all individuals with a relatively
small population (200 individuals in the population compared
to about 6,000 in the entire collection). The curiosity score
is dynamic, and changes during the evolutionary process (an
individual can have a high curiosity score at one moment, for
example if it reaches a new region of the descriptor space, and
can have a very low curiosity score later during the process, for
instance when the region becomes filled with good solutions).
Therefore, it is likely that the individuals with the highest
curiosity score are not contained in the population.
Moreover, we can observe different results between the grid-based and the archive-based container variants considering the novelty score. This difference is likely to originate from the fact that the novelty score is computed differently in these two container types. Indeed, while in the archive-based container the novelty score follows the formulation introduced by Lehman and Stanley [18], in the grid-based container the novelty score is computed based on the number of individuals in the neighboring cells (see section III-A1b). Both of these expressions capture the density of solutions around the considered individuals. However, in the grid-based container, the novelty score is discretized (because it is related to the number of neighboring solutions). This discretization is likely to have a strong impact on score-based selection variants using the novelty score, because all individuals in the center of the collection will have the same, lowest novelty score (all their neighboring cells being filled). In the score-based selection, individuals with the lowest score have nearly no chance of being selected, which makes the selection focus on the border of the collection. This behavior is not observed with the archive-based container because the novelty score is continuous and the distribution of the solutions in the collection adds some variability to the novelty score, which makes it impossible to have several individuals with the lowest novelty score.

While the Pareto-based selection is designed to be similar to the NSLC algorithm, by keeping in the population individuals that have both high novelty and high local-competition scores, we can see that the collection produced by NSLC is significantly better than the one produced by the Pareto-based selection approach. We can explain this poor performance by the presence of a Pareto-optimal solution in this scenario. Indeed, the solution in which the robot has all its joint positions set to zero has the best fitness and is located on the border of the collection, which provides a high novelty score. It is worth noting that we could mitigate this issue by implementing a toroidal distance or container (like in [17]), when such a representation is compatible with the descriptor space. This is not the case in our experiments: a behavior that reaches one end of the reachable space of the robot is not meant to be considered similar to individuals that reach the opposite end of the reachable space. For these reasons, the population is very likely to converge to this Pareto-optimal solution and thus to neglect certain regions of the collection. The size of the population is probably a limiting factor as well. A large number of equivalent solutions in terms of Pareto-dominance exist (all those in the center of the collection with the highest fitness), which makes it difficult for the population to cover the entire descriptor space.

NSLC is not impacted in the same way because the original archive management allows the density to constantly accumulate around over-explored regions (for instance by varying the novelty threshold, as described in [14]). Thanks to this feature, the novelty score constantly changes over time and makes Pareto-optimal solutions disappear quickly. Indeed,
the regions that contain Pareto-optimal solutions will rapidly see their density increase, making the novelty score of the corresponding individuals less competitive compared with the rest of the population.

It is important to note that the NSLC variant uses two containers and one population during the evolutionary process. The population and one of the containers (the novelty archive) are used to drive the exploration process, while the second container (a grid-based container) gathers the collection that will be delivered to the user.

The variations of the quality metrics (see Fig. 4) demonstrate that, among all tested variants, the best collections are provided by variants that perform the selection based on the entire collection.

The coverage, maximal quality, total quality, and total novelty of the collections produced with selection operators considering the entire collection are higher than those using population-based selection (all p-values < 7e−8 from the Wilcoxon rank-sum tests⁴, except for the "(grid/arch) pop fitness" approaches, which are not significantly different in terms of maximal quality, and for "grid novelty", which performs significantly worse than the other collection-based approaches). The only exception is the novelty-based selection with the grid-based container, which is unable to correctly fill the center of the collection, as it focuses on its borders.

We can note that the variant using the Pareto-based selection with the archive-based container produces collections that are better than those from variants using population-based selection, but worse than those produced by variants that consider the entire collection for the selection. However, the Pareto-based selection shows the best results according to the maximal quality metric.

While the difference among variants using the entire collection in the selection with the grid-based container is negligible, the curiosity-based selection appears to be significantly better (even if the difference is small) than the other selection approaches on all the metrics with the archive-based container (all p-values < 2e−4 for all the metrics, except for the total novelty, for which p-values < 0.01). This observation demonstrates that relying on individuals with a high propensity to generate individuals that are added to the collection is a promising selection heuristic.

We can observe that the NSLC variant performs significantly better than the Pareto-based approach and that its performance is close to, but lower than, that of the variants that use selection operators considering the entire collection.

C. The Robot Walking in Every Direction

1) The Experimental Setup: In this second experimental setup, we consider a six-legged robot in a physical simulator. The objective of the QD-algorithms is to produce a collection of behaviors that allows the robot to walk in every direction and at different speeds.

This experimental setup was first introduced in [13]. Each potential solution consists of a set of 36 parameters (6 per leg) that define the way each of the robot's joints moves (the controller is the same as the one used in [4]). During the evaluation of a solution, the robot executes the behavior defined by the parameters for three seconds, and its final position and orientation are recorded. The descriptor space is defined by the final position of the robot (X and Y coordinates), while the quality of the solution corresponds to the orientation error with respect to a desired orientation, which encourages the robot to follow circular trajectories. These kinds of trajectories are interesting for planning purposes, as any arbitrary trajectory can be decomposed into a succession of circular arcs. In order to be able to chain circular trajectories, the robot needs to be aligned with the tangent of these circles at the beginning and the end of each movement. We can note that only one circular trajectory goes through both the initial and final positions of the robot with its tangent aligned with the initial orientation of the robot. The difference between the final orientation of the robot and the direction of the tangent of this unique circular trajectory defines the orientation error, which is minimized by the QD-algorithms (more details can be found in [13]).

The usage of the physical simulator makes the experiments significantly longer (between 4 and 5 hours are required to perform 10,000 batches with one variant). For this reason, the number of generations has been decreased to 10,000 and only 10 variants (those in bold in Table I) are considered for this experiment. This sub-set of variants includes variants that are related to MAP-Elites, NSLC, Motor Babbling, traditional population-based EAs, and the variant considering the curiosity score over the entire collection. The execution of each of these variants has been replicated 10 times. The values of the parameters used for this experiment can be found in Table II.

2) Results: From a high-level point of view, the same conclusion as previously can be drawn based on the resulting collections (see Fig. 5): the variants "no selection" and "pop fitness" produce worse collections than the other variants, while the variants "random", "curiosity" and NSLC generate the best collections. In this experiment, the "Pareto" variant performs better than in the previous one. This result can be explained by the absence of a unique Pareto-optimal solution.

The quality metrics indicate that the "curiosity" variants, on both the grid and the archive containers, significantly outperform the other algorithms (see Fig. 6, all p-values < 0.01, except when compared to arch random in terms of total novelty, for which the p-value = 0.05). These results also demonstrate that this second experimental scenario is more challenging for the algorithms, as the difference in the metrics is clear and the performance of the naive "no selection" is very low.
4 The reported p-values should be compared to a threshold α (usually set
to0.05) which is corrected to deal with the “Multiple Comparisons problem”.
In this paper, all our conclusions about the significance of a difference is given
by correcting α according to the Holm-Bonferroni method [48].
In this experiment, the NSLC variant shows similar results
to the “random” variant (which corresponds to the MAP-Elites
algorithm). In particular, the final size of the collection and
the final total quality are not significantly different (p-values<
0.61). However, the performance of the “curiosity” approach
remains significantly better on both aspects (p-values< 0.0047)
compared to NSLC.
13
[Figure 5 shows one collection per variant (No Selection; Population-based Selection wrt Fitness; Pareto-based Selection (novelty and local quality); Random Selection (over the entire collection); Curiosity-based Selection (over the entire collection); Novelty Search with Local Competition), for both the archive-based and the grid-based collections. The axes span from −1 m to 1 m around the starting position (Front/Back and Left/Right), and the color scale encodes the Quality (degree) from −180 to 0.]
Fig. 5: Typical collections of solutions produced by the considered variants in the experiment with the virtual legged robot learning to walk in every direction. The center of each collection corresponds to the starting position of the robot and the vertical axis represents the front axis of the robot. The position of each colored pixel or dot represents the final position of the robot after walking for 3 seconds, and its color depicts the absolute (negative) difference between the robot orientation and the desired orientation. Lighter colors indicate better solutions. The collections are symmetrical because the robot learns how to walk both forward and backward. This possibility, as well as the overall shape of the collection, is not predefined but rather autonomously discovered by the algorithms.
[Plots for Figs. 6 and 7: the panels show Collection size, Maximal Quality, Total Quality, and Total Novelty as functions of the Number of Iterations, with one row of panels for the archive-based container and one row for the grid-based container; the compared variants are pareto, pop_fitness, curiosity, random, no_selection, and NSLC.]
Fig. 6: Progression of three quality metrics in the turning legged-robot experiment. The progression of the maximal quality is not depicted because all the variants found at least one solution with the highest possible quality (i.e., 0) in fewer than 1,000 batches. The first row depicts the results from variants using the archive-based container, while the second row considers variants with the grid-based container. The middle lines represent the median performance over the 10 replications, while the shaded areas extend to the first and third quartiles. In this experiment, the quality score is negative, thus in order to get a monotonic progression in the “Total Quality” metric, +180 is added to the quality to obtain a positive score.

Fig. 7: Progression of the four quality metrics in the experiment with the legged-robot learning different ways to walk in a straight line. The first row depicts the results from variants using the archive-based container, while the second row considers variants with the grid-based container. The middle lines represent the median performance over the 10 replications, while the shaded areas extend to the first and third quartiles.

D. The Robot Walking with Different Gaits

1) The Experimental Setup: In this third experimental setup, we use the same virtual robot as in the previous experiment with the same controller. However, in this case the robot has to learn a large collection of gaits to walk in a straight line as fast as possible. This scenario is inspired by [4].

In this experiment, the quality score is the traveled distance after walking for 3 seconds, and the solution descriptor is the proportion of time that each leg is in contact with the ground. The descriptor space thus has 6 dimensions in this experiment. The experiment has been replicated 10 times and the other parameters of the algorithm can be found in Table II.

5 Visualizations of the collections are not provided in this experiment because of the high dimensionality of the descriptor space. While the grid-based collections could have been depicted with the same approach as in [4], this approach cannot be applied with the archive-based container.

2) Results: From a general point of view, the same conclusion as in the previous experiments can be drawn from the progression of the quality metrics (see Fig. 7)5. Variants selecting individuals from the whole collection significantly outperform, in terms of coverage, total quality and diversity, those that consider populations (all the p-values < 2e−4). In particular, the curiosity-based selection operator shows the best results both with the grid-based and the archive-based containers. For instance, one can note that the total quality achieved by the
random selection (second best approach) after 20,000 batches,
is achieved by the curiosity-based selection after only 11,000
batches with the archive-based container and 13,500 batches
with the grid-based container.
In contrast with the previous experiment, the “no selection”
variants manage to achieve good coverage (about half of the
coverage produced by the variants using the collection-wise
selection). However, they show the worst results according to
the total quality and the maximal quality metrics.
The variants using the population-based selection with
respect to the performance show the opposite results. While
the coverage of this variant is the worst among all the
evaluated variants with both of the container types, this
selection approach, which is similar to a traditional EA,
found the solutions with the best quality (the fastest way to
walk). In particular, the best solutions found with this variant significantly outperform those of every other variant, even the variants using the collection-wise selection (p-values < 0.0017). This observation shows that the best variants
tested so far are not always able to find the global extremum
of the quality. The quality difference between the “pop fitness”
variants and the others is smaller with the grid-based container
than with the archive-based. This quality difference could
be explained by the difference in the collection sizes, or the
additional difficulty of finding the inherent structure of the
collection for the archive-based container.
The Pareto-based variants are low-performing in this experiment. They show neither a good coverage (similar to
the “no selection” or the “pop fitness” variants) nor a good
maximal quality (lower than the variants with a collection-wise
selection). It is difficult to understand the reasons for such a
low performance in this experiment, as the behavioral space
is 6 dimensional, making it hard to visualize. However, it is
likely that it happens for the same reasons as in the previous
experiments, like a premature convergence to the border of
the collection (which shows relatively bad performance), or
the existence of a Pareto-optimal solution. In contrast with
the Pareto-based variants, NSLC achieves good coverage of
the behavioral space in this experiment, while smaller than
the “random” and “curiosity” ones. However, the maximal
quality found on the produced collection is lower than most
of the considered variants (p-values< 0.03746 except with the
“no selection” variant, p-value= 0.9696), and the global quality
of the collections is equivalent to those of the Pareto-based
variant.
6 These p-values do not reject the null hypothesis based on the Holm-Bonferroni method with α = 0.05, but reject it with α = 0.1.
V. CONCLUSION AND DISCUSSION
In this paper, we presented three new contributions. First,
we introduced a new framework that unifies QD-algorithms,
showing for example that MAP-Elites and the Novelty Search
with Local Competition are two different configurations of the
same algorithm. Second, we suggested a new archive management procedure that copes with the erosion issues observed
with the previous approaches using unstructured archives (like
BR-evolution). This new procedure demonstrates good results as it allows the algorithms to produce unstructured collections
with the same coverage as those with grid containers, which
was not the case with the previous management procedure [31].
Finally, we proposed a new selective pressure specific to QD-algorithms, named the “curiosity score”, that shows very promising
results by outperforming all the existing QD-algorithms on all
the experiments presented in this paper.
In addition to these three contributions, we presented the
results of an experimental comparison between a large number
of QD-algorithms, including MAP-Elites and NSLC. One of
the main results that can be outlined from these experiments
is that selection operators considering the collection instead of
a population showed better performance on all scenarios. We
can hypothesize that this results from the inherent diversity
of solutions contained in the collection. Indeed, several works
suggest that maintaining behavioral diversity in the populations of evolutionary algorithms (via an additional objective, for example) is a key factor to avoid local extrema and to find promising stepping stones [40], [18].
Another fundamental lesson learned from the experiments
presented in this paper is about the importance of allowing the
density of solutions to increase in diverse regions of the archive
to obtain the full effectiveness of NSLC. This can be achieved by varying the novelty-score threshold or via probabilistic addition to the archive [37]. While such mechanisms are often
used in the literature, their importance is rarely highlighted
by experimental comparisons like in this paper. In particular,
we demonstrated that algorithms using the novelty score, but
with archives in which the density does not increase, are
unable to show similar results to NSLC, because they are
severely impacted by certain aspects of the fitness landscape
(e.g., presence of Pareto-optimal solutions).
This unified and modular framework for QD-algorithms
is intended to encourage new research directions via novel
container types, selection operators, or selective pressures that
are specific to this domain. We expect that the emergence of
new QD-algorithms will provide insights about the key factors
for producing the best collection of solutions.
VI. QUALITY DIVERSITY LIBRARY
The source code of the QD-algorithm framework is available
at https://github.com/sferes2/modular_QD. It is based on the
Sferesv2 framework [47] and implements both the grid-based
and archive-based containers and several selection operators,
including all those that have been evaluated in this paper. The
source code of the experimental setups is available at the same
location and can be used by interested readers to investigate
and evaluate new QD-algorithms.
The implementation allows researchers to easily implement
and evaluate new combinations of operators, while maintaining
high execution speed. For this reason, we followed the policy-based design in C++ [49], which allows developers to replace
the behavior of the program simply by changing the template
declarations of the algorithm. For example, changing from the
grid-based container to the archive-based one only requires
changing “container::Grid” to “container::Archive” in the
template definition of the QD-algorithm object. Moreover, the
modularity provided by this design pattern does not add any
overhead, contrary to classic Object-Oriented Programming
design. Interested readers are welcome to use and to contribute
to the source code.
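To make the composition concrete, here is a minimal, self-contained sketch of what such a policy-based assembly can look like in C++. Only the names container::Grid and container::Archive are taken from the text above; the QDAlgorithm class, the selector::Curiosity policy and all member functions are simplified placeholders and do not reproduce the actual Sferesv2/modular_QD interface.

// Minimal sketch of policy-based composition for a QD algorithm.
// Only container::Grid / container::Archive are names used in the text;
// everything else is an illustrative simplification of the idea.
#include <cstddef>
#include <vector>

struct Individual {
  std::vector<double> genotype, descriptor;
  double quality = 0.0;
};

namespace container {
  struct Grid {      // fixed discretization of the descriptor space
    bool add(const Individual& ind) { storage.push_back(ind); return true; }  // real version: keep best per cell
    std::vector<Individual> storage;
  };
  struct Archive {   // unstructured collection with a distance threshold
    bool add(const Individual& ind) { storage.push_back(ind); return true; }  // real version: novelty/quality test
    std::vector<Individual> storage;
  };
}

namespace selector {
  struct Curiosity { // favors parents whose offspring tend to enter the collection
    template <class Container>
    Individual select(const Container& c) const {
      return c.storage.empty() ? Individual{} : c.storage.front();  // real version: curiosity-weighted choice
    }
  };
}

// The behavior of the algorithm is changed only through the template arguments.
template <class Container, class Selector>
class QDAlgorithm {
 public:
  void step(std::size_t batch_size) {
    for (std::size_t i = 0; i < batch_size; ++i) {
      Individual offspring = mutate(selector_.select(container_));
      container_.add(offspring);  // evaluation and container update would happen here
    }
  }
 private:
  Individual mutate(Individual parent) { return parent; }  // placeholder variation operator
  Container container_;
  Selector selector_;
};

// Switching containers only requires changing the template declaration:
using GridQD    = QDAlgorithm<container::Grid,    selector::Curiosity>;
using ArchiveQD = QDAlgorithm<container::Archive, selector::Curiosity>;

Because the policies are resolved at compile time, swapping container::Grid for container::Archive in the alias is the only change needed, which mirrors the substitution described above and keeps the composition free of runtime dispatch.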
ACKNOWLEDGMENT
This work was supported by the EU Horizon2020 project
PAL (643783-RIA). The authors gratefully acknowledge the
support from the members of the Personal Robotics Lab.
REFERENCES
[1] A. Antoniou and W.-S. Lu, Practical optimization: algorithms and
engineering applications. Springer Science & Business Media, 2007.
[2] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Cognitive modeling, vol. 5, no. 3,
p. 1, 1988.
[3] S. J. Russell and P. Norvig, Artificial intelligence: a modern approach,
2003.
[4] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, “Robots that can
adapt like animals,” Nature, vol. 521, no. 7553, pp. 503–507, 2015.
[5] J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics:
A survey,” International Journal of Robotics Research, vol. 32, no. 11,
p. 1238, 2013.
[6] J. C. Spall, Introduction to stochastic search and optimization: estimation,
simulation, and control. John Wiley & Sons, 2005, vol. 65.
[7] H. Lipson and J. B. Pollack, “Automatic design and manufacture of
robotic lifeforms,” Nature, vol. 406, no. 6799, pp. 974–978, 2000.
[8] M. Schmidt and H. Lipson, “Distilling free-form natural laws from
experimental data,” science, vol. 324, no. 5923, pp. 81–85, 2009.
[9] A. E. Eiben and J. Smith, “From evolutionary computation to the
evolution of things,” Nature, vol. 521, no. 7553, pp. 476–482, 2015.
[10] J. Bongard, V. Zykov, and H. Lipson, “Resilient machines through
continuous self-modeling,” Science, vol. 314, no. 5802, 2006.
[11] S. Koos, J.-B. Mouret, and S. Doncieux, “The transferability approach:
Crossing the reality gap in evolutionary robotics,” Evolutionary Computation, IEEE Transactions on, vol. 17, no. 1, pp. 122–145, 2013.
[12] Y. Demiris, L. Aziz-Zadeh, and J. Bonaiuto, “Information processing in
the mirror neuron system in primates and machines,” Neuroinformatics,
vol. 12, no. 1, pp. 63–91, 2014.
[13] A. Cully and J.-B. Mouret, “Behavioral repertoire learning in robotics,” in
Proceedings of the 15th annual conference on Genetic and Evolutionary
Computation. ACM, 2013, pp. 175–182.
[14] J. Lehman and K. O. Stanley, “Evolving a diversity of virtual creatures
through novelty search and local competition,” in Proceedings of the 13th
annual conference on Genetic and Evolutionary Computation. ACM,
2011, pp. 211–218.
[15] J.-B. Mouret and J. Clune, “Illuminating search spaces by mapping elites,”
arXiv preprint arXiv:1504.04909, 2015.
[16] J. K. Pugh, L. Soros, P. A. Szerlip, and K. O. Stanley, “Confronting the
challenge of quality diversity,” in Proceedings of the 2015 on Genetic
and Evolutionary Computation Conference. ACM, 2015, pp. 967–974.
[17] J. K. Pugh, L. B. Soros, and K. O. Stanley, “Quality diversity: A new
frontier for evolutionary computation,” Frontiers in Robotics and AI,
vol. 3, p. 40, 2016.
[18] J. Lehman and K. O. Stanley, “Abandoning objectives: Evolution through
the search for novelty alone,” Evolutionary Computation, vol. 19, no. 2,
pp. 189–223, 2011.
[19] D. E. Goldberg and J. Richardson, “Genetic algorithms with sharing
for multimodal function optimization,” in Genetic algorithms and their
applications: Proceedings of the Second International Conference on
Genetic Algorithms. Hillsdale, NJ: Lawrence Erlbaum, 1987, pp. 41–49.
[20] S. W. Mahfoud, “Niching methods for genetic algorithms,” Urbana,
vol. 51, no. 95001, pp. 62–94, 1995.
[21] G. Singh and K. Deb Dr, “Comparison of multi-modal optimization
algorithms based on evolutionary algorithms,” in Proceedings of the 8th
annual conference on Genetic and Evolutionary Computation. ACM,
2006, pp. 1305–1312.
[22] X. Yin and N. Germay, “A fast genetic algorithm with sharing scheme
using cluster analysis methods in multimodal function optimization,” in
Artificial neural nets and genetic algorithms. Springer, 1993.
[23] A. Pétrowski, “A clearing procedure as a niching method for genetic
algorithms,” in Evolutionary Computation, 1996., Proceedings of IEEE
International Conference on. IEEE, 1996, pp. 798–803.
[24] D. J. Lizotte, Practical bayesian optimization. University of Alberta,
2008.
[25] N. Kohl and P. Stone, “Policy gradient reinforcement learning for fast
quadrupedal locomotion,” in Proceedings of the IEEE International
Conference on Robotics and Automation (ICRA), vol. 3. IEEE, 2004,
pp. 2619–2624.
[26] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist
multiobjective genetic algorithm: Nsga-ii,” Evolutionary Computation,
IEEE Transactions on, vol. 6, no. 2, pp. 182–197, 2002.
[27] C. Maestre, A. Cully, C. Gonzales, and S. Doncieux, “Bootstrapping
interactions with objects from raw sensorimotor data: a novelty search
based approach,” in Development and Learning and Epigenetic Robotics
(ICDL-EpiRob), 2015 Joint IEEE International Conference on. IEEE,
2015, pp. 7–12.
[28] F. Benureau and P.-Y. Oudeyer, “Behavioral diversity generation in
autonomous exploration through reuse of past experience,” Frontiers in
Robotics and AI, vol. 3, p. 8, 2016.
[29] A. Cully and J.-B. Mouret, “Evolving a behavioral repertoire for a
walking robot,” Evolutionary Computation, 2015.
[30] J. Clune, J.-B. Mouret, and H. Lipson, “The evolutionary origins of
modularity,” Proceedings of the Royal Society of London B: Biological
Sciences, vol. 280, no. 1755, 2013.
[31] A. Cully, “Creative adaptation through learning,” Ph.D. dissertation,
Université Pierre et Marie Curie, 2015.
[32] M. Duarte, J. Gomes, S. M. Oliveira, and A. L. Christensen, “Evorbc:
Evolutionary repertoire-based control for robots with arbitrary locomotion
complexity,” in Proceedings of the 25th annual conference on Genetic
and Evolutionary Computation. ACM, 2016.
[33] A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily
fooled: High confidence predictions for unrecognizable images,” in
Conference on Computer Vision and Pattern Recognition. IEEE, 2015.
[34] A. M. Nguyen, J. Yosinski, and J. Clune, “Innovation engines: Automated
creativity and improved stochastic optimization via deep learning,” in
Proceedings of the 2015 Annual Conference on Genetic and Evolutionary
Computation. ACM, 2015, pp. 959–966.
[35] V. Vassiliades, K. Chatzilygeroudis, and J.-B. Mouret, “Scaling up
map-elites using centroidal voronoi tessellations,” arXiv preprint
arXiv:1610.05729, 2016.
[36] D. Smith, L. Tokarchuk, and G. Wiggins, “Rapid phenotypic landscape
exploration through hierarchical spatial partitioning,” in International
Conference on Parallel Problem Solving from Nature. Springer, 2016,
pp. 911–920.
[37] J. Lehman and R. Miikkulainen, “Enhancing divergent search through extinction events,” in Proceedings of the 2015 on Genetic and Evolutionary
Computation Conference. ACM, 2015, pp. 951–958.
[38] D. Tarapore and J.-B. Mouret, “Evolvability signatures of generative
encodings: beyond standard performance benchmarks,” Information
Sciences, vol. 313, pp. 43–61, 2015.
[39] D. Tarapore, J. Clune, A. Cully, and J.-B. Mouret, “How do different
encodings influence the performance of the map-elites algorithm?” in
Genetic and Evolutionary Computation Conference, 2016.
[40] J.-B. Mouret and S. Doncieux, “Encouraging behavioral diversity in
evolutionary robotics: An empirical study,” Evolutionary Computation,
vol. 20, no. 1, pp. 91–133, 2012.
[41] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence
and diversity in evolutionary multiobjective optimization,” Evolutionary
Computation, vol. 10, no. 3, pp. 263–282, 2002.
[42] D. E. Goldberg and K. Deb, “A comparative analysis of selection schemes
used in genetic algorithms,” Foundations of genetic algorithms, 1991.
[43] P.-Y. Oudeyer, F. Kaplan, and V. V. Hafner, “Intrinsic motivation systems
for autonomous mental development,” Evolutionary Computation, IEEE
Transactions on, vol. 11, no. 2, pp. 265–286, 2007.
[44] J. Lehman, S. Risi, and J. Clune, “Creative generation of 3d objects
with deep learning and innovation engines,” in Proceedings of the 7th
International Conference on Computational Creativity, 2016.
[45] M. Pigliucci, “Is evolvability evolvable?” Nature Reviews Genetics, vol. 9,
no. 1, pp. 75–82, 2008.
[46] L. Altenberg et al., “The evolution of evolvability in genetic programming,” Advances in genetic programming, vol. 3, pp. 47–74, 1994.
[47] J.-B. Mouret and S. Doncieux, “Sferes v2: Evolvin’in the multi-core
world,” in Evolutionary Computation (CEC), 2010 IEEE Congress on.
IEEE, 2010, pp. 1–8.
[48] J. P. Shaffer, “Multiple hypothesis testing,” Annual review of psychology,
vol. 46, no. 1, pp. 561–584, 1995.
[49] A. Alexandrescu, Modern C++ design: generic programming and design
patterns applied. Addison-Wesley, 2001.
| 9 |
THE RATIONAL HOMOLOGY OF THE OUTER AUTOMORPHISM
GROUP OF F7
arXiv:1512.03075v2 [math.GR] 19 Jan 2016
LAURENT BARTHOLDI
Abstract. We compute the homology groups H∗ (Out(F7 ); Q) of the outer automorphism group of the free group of rank 7.
We produce in this manner the first rational homology classes of Out(Fn ) that are
neither constant (∗ = 0) nor Morita classes (∗ = 2n − 4).
1. Introduction
The homology groups Hk (Out(Fn ); Q) are intriguing objects. On the one hand, they are
known to “stably vanish”, i.e. for every k ≥ 1 we have Hk (Out(Fn ); Q) = 0 as soon as n is large enough [3]. Hatcher and Vogtmann prove that the natural maps Hk Out(Fn ) → Hk Aut(Fn ) and Hk Aut(Fn ) → Hk Aut(Fn+1 ) are isomorphisms for n ≥ 2k + 2 and n ≥ 2k + 4, respectively,
see [4, 5]. On the other hand, Hk (Out(Fn ); Q) = 0 for k > 2n − 3, since Out(Fn ) acts
geometrically on a contractible space (the “spine of outer space”, see [2]) of dimension
2n − 3. Combining these results, the only k ≥ 1 for which Hk (Out(Fn ); Q) could possibly
be non-zero are in the range n2 − 2 < k ≤ 2n − 3. Morita conjectures in [9, page 390] that
H2n−3 (Out(Fn ); Q) always vanishes; this would improve the upper bound to k = 2n − 4,
and H2n−4 (Out(Fn ); Q) is also conjectured to be non-trivial.
We shall see that the first conjecture does not hold. Indeed, the first few values of
Hk (Out(Fn ); Q) may be computed by a combination of human and computer work, and
yield
n\k   0  1  2  3  4  5  6  7  8  9  10  11
 2    1  0
 3    1  0  0  0
 4    1  0  0  0  1  0
 5    1  0  0  0  0  0  0  0
 6    1  0  0  0  0  0  0  0  1  0
 7    1  0  0  0  0  0  0  0  1  0  0   1
The values for n ≤ 6 were computed by Ohashi in [12]. They reveal that, for n ≤ 6, only
the constant class (k = 0) and the Morita classes k = 2n − 4 yield non-trivial homology. The
values for n = 7 are the object of this Note, and reveal that the picture changes radically:
Theorem. The non-trivial homology groups Hk (Out(F7 ); Q) occur for k ∈ {0, 8, 11} and
are all 1-dimensional.
Previously, only the rational Euler characteristic χQ (Out(F7 )) = Σ_k (−1)^k dim Hk (Out(F7 ); Q)
was known [10], and shown to be 1. These authors computed in fact the rational Euler characteristics up to n = 11 in that paper and the sequel [11].
2. Methods
We make fundamental use of a construction of Kontsevich [6], explained in [1]. We follow
the simplified description from [12].
Let Fn denote the free group of rank n. This parameter n is fixed once and for all, and will
in fact be omitted from the notation as often as possible. An admissible graph of rank n is a
Date: 18 January 2016.
Partially supported by ANR grant ANR-14-ACHN-0018-01.
graph G that is 2-connected (G remains connected even after an arbitrary edge is removed),
without loops, with fundamental
group isomorphic to Fn , and without vertex of valency ≤ 2.
Its degree is deg(G) := Σ_{v∈V (G)} (deg(v) − 3). In particular, G has 2n − 2 − deg(G) vertices
and 3n − 3 − deg(G) edges, and is trivalent if and only if deg(G) = 0. If Φ is a collection
of edges in a graph G, we denote by G/Φ the graph quotient, obtained by contracting all
edges in Φ to points.
A forested graph is a pair (G, Φ) with Φ an oriented forest in G, namely an ordered
collection of edges that do not form any cycle. We note that the symmetric group Sym(k)
acts on the set of forested graphs whose forest contains k edges, by permuting the forest’s
edges.
For k ∈ N, let Ck denote the Q-vector space spanned by isomorphism classes of forested
graphs of rank n with a forest of size k, subject to the relation
(G, πΦ) = (−1)π (G, Φ) for all π ∈ Sym(k).
Note, in particular, that if (G, Φ) ∼ (G, πΦ) for an odd permutation π then (G, Φ) = 0
in Ck . These spaces (C∗ ) form a chain complex for the differential ∂ = ∂C − ∂R , defined
respectively on (G, Φ) = (G, {e1 , . . . , ep }) by
∂C (G, Φ) = Σ_{i=1}^{p} (−1)^i (G/ei , Φ \ {ei }),        ∂R (G, Φ) = Σ_{i=1}^{p} (−1)^i (G, Φ \ {ei }),
and the homology of (C∗ , ∂) is H∗ (Out(Fn ); Q).
The spaces Ck may be filtered by degree: let Fp Ck denote the subspace spanned by
forested graphs (G, Φ) with deg(G/Φ) ≤ p. The differentials satisfy respectively
∂C (Fp Ck ) ⊆ Fp Ck−1 ,
∂R (Fp Ck ) ⊆ Fp−1 Ck−1 .
A spectral sequence argument gives
(1)        Hp (Out(Fn ); Q) = E^2_{p,0} = [ker(∂C |Fp Cp ) ∩ ker(∂R |Fp Cp )] / ∂R (ker(∂C |Fp+1 Cp+1 )).
Note that if (G, Φ) ∈ Fp Cp then G is trivalent. We compute explicitly bases for the vector
spaces Fp Cp , and matrices for the differentials ∂C , ∂R , to prove the theorem.
3. Implementation
We follow for n = 7 the procedure sketched in [12]. Using the software program nauty [8],
we enumerate all trivalent graphs of rank n and vertex valencies ≥ 3. The libraries in nauty
produce a canonical ordering of a graph, and compute generators for its automorphism
group. We then weed out the non-2-connected ones.
For given p ∈ N, we then enumerate all p-element oriented forests in these graphs, and
weed out those that admit an odd symmetry. These are stored as a basis for Fp Cp . Let ap
denote the dimension of Fp Cp .
For (G, Φ) a basis vector in Fp Cp , the forested graphs that appear as summands in
∂C (G, Φ) and ∂R (G, Φ) are numbered and stored in a hash table as they occur, and the
matrices ∂C and ∂R are computed as sparse matrices with ap columns.
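The bookkeeping in the previous paragraph can be pictured with a small illustrative sketch; this is not the program used for the computation, and it only assumes that a canonical form (e.g. the canonical labeling provided by nauty) is available as a string key for each forested graph. The names RowIndexer and boundary_column are ours.

// Illustrative sketch: number boundary summands on the fly via a hash table
// and collect one sparse column per basis vector (G, Phi).  Not the actual code.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct SparseColumn {
  std::vector<std::pair<std::int64_t, int>> entries;   // (row index, coefficient)
};

struct RowIndexer {
  std::unordered_map<std::string, std::int64_t> index;  // canonical form -> row number
  std::int64_t id(const std::string& canonical_form) {
    auto it = index.find(canonical_form);
    if (it != index.end()) return it->second;
    std::int64_t fresh = static_cast<std::int64_t>(index.size());
    index.emplace(canonical_form, fresh);               // rows are created as they occur
    return fresh;
  }
};

// 'summands' lists the canonical forms of the forested graphs appearing in
// the boundary of (G, Phi), in the order i = 1, ..., p, so the sign is (-1)^i.
SparseColumn boundary_column(const std::vector<std::string>& summands, RowIndexer& rows) {
  SparseColumn column;
  int sign = -1;                                        // i = 1 contributes (-1)^1 = -1
  for (const std::string& summand : summands) {
    column.entries.emplace_back(rows.id(summand), sign);
    sign = -sign;
  }
  return column;
}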
The nullspace ker(∂C |Fp Cp ) is then computed: let bp denote its dimension; then the
nullspace is stored as a sparse (ap × bp )-matrix Np . The computation is greatly aided by
the fact that ∂C is a block matrix, whose row and column blocks are spanned by {(G, Φ) :
G/Φ = G0 } for all choices of the fully contracted graph G0 . The matrices Np are computed
using the linear algebra library linbox [7], which provides exact linear algebra over Q and
finite fields.
Finally, the rank cp of ∂R ◦ Np is computed, again using linbox. By (1), we have
dim Hp (Out(Fn ); Q) = bp − cp − cp+1 .
For memory reasons (the computational requirements reached 200GB of RAM at its
peak), some of these ranks were computed modulo a large prime (65521 and 65519 were
used in two independent runs).
Computing modulo a prime can only reduce the rank, so the values cp we obtained
are underestimates of the actual ranks of ∂R ◦ Np . However, we also know a priori that
bp − cp − cp+1 ≥ 0 since it is the dimension of a vector space; and none of the cp we
computed can be increased without at the same time causing a homology dimension to
become negative, so our reduction modulo a prime is legal.
For information, the parameters ap , bp , cp for n = 7 are as follows:

p     0    1     2      3      4      5      6      7      8      9      10     11
ap    365  3712  23227  ≈105k  ≈348k  ≈854k  ≈1.6m  ≈2.3m  ≈2.6m  ≈2.1m  ≈1.2m  ≈376k
bp    365  1784  5642   14766  28739  39033  38113  28588  16741  6931   1682   179
cp    0    364   1420   4222   10544  18195  20838  17275  11313  5427   1504   178
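As a quick consistency check, the homology dimensions can be recomputed directly from the tabulated bp and cp via dim Hp (Out(F7 ); Q) = bp − cp − cp+1 ; the short sketch below does exactly that, taking c12 = 0 because a forest in a trivalent graph of rank 7 (which has 12 vertices) can have at most 11 edges, so F12 C12 = 0.

// Sketch: recompute dim H_p(Out(F_7); Q) = b_p - c_p - c_{p+1} from the table above.
#include <cstdio>

int main() {
  const int b[] = {365, 1784, 5642, 14766, 28739, 39033, 38113, 28588, 16741, 6931, 1682, 179};
  const int c[] = {0, 364, 1420, 4222, 10544, 18195, 20838, 17275, 11313, 5427, 1504, 178,
                   0};  // c_12 = 0: no 12-edge forest exists in a graph with 12 vertices
  for (int p = 0; p <= 11; ++p)
    std::printf("dim H_%d = %d\n", p, b[p] - c[p] - c[p + 1]);
  return 0;  // prints 1 for p = 0, 8, 11 and 0 otherwise
}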
The largest single matrix operations that had to be performed were computing the
nullspace of a 2038511 × 536647 matrix (16 CPU hours) and the rank modulo 65519 of
a (less sparse) 1355531 × 16741 matrix (10 CPU hours).
The source files used for the computations are available as supplemental material. Compilation requires g++ version 4.7 or later, a functional linbox library, available from the
site http://www.linalg.org, as well as the nauty program suite, available from the site
http://pallini.di.uniroma1.it. It may also be directly downloaded and installed by
typing ‘make nauty25r9’ in the directory in which the sources were downloaded. Beware
that the calculations required for n = 7 are prohibitive for most desktop computers.
Conclusion
Computing the dimensions of the homology groups is only the first step in understanding
them; much more interesting would be to know visually, or graph-theoretically, where these
non-trivial classes come from.
It seems almost hopeless to describe, via computer experiments, the non-trivial class
in degree 8. It may be possible, however, to arrive at a reasonable understanding of the
non-trivial class in degree 11.
This class may be interpreted as a linear combination w of trivalent graphs on 12 vertices,
each marked with an oriented spanning forest. There are 376365 such forested graphs that
do not admit an odd symmetry. The class w ∈ Q^376365 is a Z-linear combination of 70398
different forested graphs, with coefficients in {±1, . . . , ±16}. For example, eleven graphs
occur with coefficient ±13; four of them have indices 25273, 53069, 53239, 53610 respectively,
and are, with the spanning tree in bold,
[Four forested graphs are depicted here: the graphs with indices 25273, 53069, 53239 and 53610, drawn on vertices labeled 0–11 with the spanning tree in bold.]
The coefficients of w, and corresponding graphs, are distributed as ancillary material in
the file w_cycle, in format ‘coefficient [edge1 edge2 ...]’, where each edge is ‘x-y’ or
‘x+y’ to indicate whether the edge is absent or present in the forest. Edges always satisfy
x ≤ y, and the forest is oriented so that its edges are lexicographically ordered. Edges are
numbered from 0 while graphs are numbered from 1. There are no multiple edges.
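For readers who want to inspect the cycle, the following is a minimal reading sketch for the format just described; it is not part of the distributed material, the struct names are ours, and the exact whitespace handling (brackets attached to the first and last edge tokens) is an assumption about the layout.

// Minimal sketch of a reader for the w_cycle format described above:
// each line is  coefficient [x-y x+y ...]  with '+' marking forest edges.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Edge { int x, y; bool in_forest; };
struct ForestedGraph { int coefficient = 0; std::vector<Edge> edges; };

int main() {
  std::ifstream in("w_cycle");
  std::vector<ForestedGraph> cycle;
  std::string line;
  while (std::getline(in, line)) {
    std::istringstream ss(line);
    ForestedGraph g;
    if (!(ss >> g.coefficient)) continue;                // skip malformed lines
    std::string token;
    while (ss >> token) {
      if (!token.empty() && token.front() == '[') token.erase(0, 1);
      if (!token.empty() && token.back() == ']') token.pop_back();
      std::size_t sep = token.find_first_of("+-");
      if (sep == std::string::npos || sep == 0) continue;
      g.edges.push_back({std::stoi(token.substr(0, sep)),
                         std::stoi(token.substr(sep + 1)),
                         token[sep] == '+'});            // '+' = edge belongs to the forest
    }
    if (!g.edges.empty()) cycle.push_back(g);
  }
  std::cout << "read " << cycle.size() << " forested graphs\n";
  return 0;
}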
Acknowledgments
I am grateful to Alexander Berglund and Nathalie Wahl for having organized a wonderful
and stimulating workshop on automorphisms of free groups in Copenhagen in October 2015,
when this work began; to Masaaki Suzuki, Andy Putman and Karen Vogtmann for very
helpful conversations that took place during this workshop; and to Jim Conant for having
checked the cycle w (after finding a mistake in its original signs) with an independent
program.
References
[1] James Conant and Karen Vogtmann, On a theorem of Kontsevich, Algebr. Geom. Topol. 3 (2003),
1167–1224, DOI 10.2140/agt.2003.3.1167. MR2026331 (2004m:18006)
[2] Marc Culler and Karen Vogtmann, Moduli of graphs and automorphisms of free groups, Invent. Math.
84 (1986), no. 1, 91–119, DOI 10.1007/BF01388734. MR830040 (87f:20048)
[3] Søren Galatius, Stable homology of automorphism groups of free groups, Ann. of Math. (2) 173 (2011),
no. 2, 705–768, DOI 10.4007/annals.2011.173.2.3. MR2784914 (2012c:20149)
[4] Allen Hatcher and Karen Vogtmann, Homology stability for outer automorphism groups of free groups,
Algebr. Geom. Topol. 4 (2004), 1253–1272, DOI 10.2140/agt.2004.4.1253. MR2113904 (2005j:20038)
[5] Allen Hatcher, Karen Vogtmann, and Nathalie Wahl, Erratum to: “Homology stability for outer
automorphism groups of free groups [Algebr. Geom. Topol. 4 (2004), 1253–1272 (electronic); MR
2113904] by Hatcher and Vogtmann, Algebr. Geom. Topol. 6 (2006), 573–579 (electronic), DOI
10.2140/agt.2006.6.573. MR2220689 (2006k:20069)
[6] Maxim Kontsevich, Formal (non)commutative symplectic geometry, The Gel′fand Mathematical Seminars, 1990–1992, Birkhäuser Boston, Boston, MA, 1993, pp. 173–187. MR1247289 (94i:58212)
[7] LinBox — Exact Linear Algebra over the Integers and Finite Rings, Version 1.1.6, The LinBox Group,
2008.
[8] Brendan D. McKay and Adolfo Piperno, Practical graph isomorphism, II, J. Symbolic Comput. 60
(2014), 94–112, DOI 10.1016/j.jsc.2013.09.003, available at arXiv:1301.1493. MR3131381
[9] Shigeyuki Morita, Structure of the mapping class groups of surfaces: a survey and a prospect, Proceedings of the Kirbyfest (Berkeley, CA, 1998), Geom. Topol. Monogr., vol. 2, Geom. Topol. Publ., Coventry, 1999, pp. 349–406 (electronic), DOI 10.2140/gtm.1999.2.349, (to appear in print). MR1734418
(2000j:57039)
[10] Shigeyuki Morita, Takuya Sakasai, and Masaaki Suzuki, Computations in formal symplectic geometry and characteristic classes of moduli spaces, Quantum Topol. 6 (2015), no. 1, 139–182, DOI
10.4171/QT/61. MR3335007
[11] ———, Integral Euler characteristic of Out F11 , Exp. Math. 24 (2015), no. 1, 93–97, DOI 10.1080/10586458.2014.956373. MR3305042
[12] Ryo Ohashi, The rational homology group of Out(Fn ) for n ≤ 6, Experiment. Math. 17 (2008), no. 2,
167–179. MR2433883 (2009k:20118)
École Normale Supérieure, Paris and Georg-August-Universität zu Göttingen
E-mail address: [email protected]
| 4 |
ABOUT VON NEUMANN’S PROBLEM FOR LOCALLY
COMPACT GROUPS
arXiv:1702.07955v1 [math.GR] 25 Feb 2017
FRIEDRICH MARTIN SCHNEIDER
Abstract. We note a generalization of Whyte’s geometric solution to the von
Neumann problem for locally compact groups in terms of Borel and clopen
piecewise translations. This strengthens a result of Paterson on the existence
of Borel paradoxical decompositions for non-amenable locally compact groups.
Along the way, we study the connection between some geometric properties of
coarse spaces and certain algebraic characteristics of their wobbling groups.
1. Introduction
In his seminal article [18] von Neumann introduced the concept of amenability for groups in order to explain why the Banach-Tarski paradox occurs only for
dimension greater than two. He proved that a group containing an isomorphic
copy of the free group F2 on two generators is not amenable. The converse, i.e.,
the question whether every non-amenable group would have a subgroup isomorphic to F2 , was first posed in print by Day [5], but became known as the von
Neumann problem (or sometimes von Neumann-Day problem). The original question has been answered in the negative by Ol’šanskiı̆ [19]. However, there are very
interesting positive solutions to variants of the von Neumann problem in different
settings: a geometric solution by Whyte [27], a measure-theoretic solution by Gaboriau and Lyons [9] and its generalization to locally compact groups by Gheysens
and Monod [14], as well as a Baire category solution by Marks and Unger [13].
Whyte’s geometric version reads as follows.
Theorem 1.1 (Theorem 6.2 in [27]). A uniformly discrete metric space of uniformly bounded geometry is non-amenable if and only if it admits a partition whose
pieces are uniformly Lipschitz embedded copies of the 4-regular tree.
In particular, the above applies to Cayley graphs of finitely generated groups
and in turn yields a geometric solution to the von Neumann problem.
The aim of the present note is to extend Whyte’s relaxed version of the von
Neumann conjecture to the realm of locally compact groups. For this purpose, we
need to view the result from a slightly different perspective. Given a uniformly
discrete metric space X, its wobbling group (or group of bounded displacement ) is
defined as
W (X) := {α ∈ Sym(X) | ∃r ≥ 0 ∀x ∈ X : d(x, α(x)) ≤ r}.
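For instance, the involution α of Z that exchanges 2k and 2k + 1 for every k ∈ Z is a simple illustration of this definition: d(x, α(x)) = 1 for all x ∈ Z, so α ∈ W (Z), even though α agrees with no single translation.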
Wobbling groups have attracted growing attention in recent years [11, 12, 3]. Since
the 4-regular tree is isomorphic to the standard Cayley graph of F2 , one can easily
Date: 28th February 2017.
2010 Mathematics Subject Classification. Primary 22D05, 43A07, 20E05, 20F65.
This research has been supported by funding of the German Research Foundation (reference
no. SCHN 1431/3-1) as well as by funding of the Excellence Initiative by the German Federal and
State Governments.
reformulate Whyte’s in terms of semi-regular subgroups. Let us recall that a subgroup G ≤ Sym(X) is said to be semi-regular if no non-identity element of G has
a fixed point in X.
Corollary 1.2 (Theorem 6.1 in [27]). A uniformly discrete metric space X of
uniformly bounded geometry is non-amenable if and only if F2 is isomorphic to a
semi-regular subgroup of W (X).
For a finitely generated group G, the metrics generated by any two finite symmetric generating sets containing the neutral element are equivalent and hence give
rise to the very same wobbling group W (G). It is easy to see that W (G) is just the
group of piecewise translations of G, i.e., a bijection α : G → G belongs to W (G) if
and only if there exists a finite partition P of G such that
∀P ∈ P ∃g ∈ G :
α|P = λg |P .
Furthermore, we note that the semi-regularity requirement in the statement above
cannot be dropped: in fact, van Douwen [6] showed that W (Z) contains an isomorphic copy of F2 , despite Z being amenable. As it turns out, F2 embeds into the
wobbling group of any coarse space of positive asymptotic dimension (see Proposition 4.3 and Remark 4.4).
We are going to present a natural counterpart of Corollary 1.2 for general locally
compact groups. Let G be a locally compact group. We call a bijection α : G → G
a clopen piecewise translation of G if there exists a finite partition P of G into
clopen subsets such that
∀P ∈ P ∃g ∈ G :
α|P = λg |P ,
i.e., on every member of P the map α agrees with a left translation of G. It is
easy to see that the set C (G) of all clopen piecewise translations of G constitutes
a subgroup of the homeomorphism group of the topological space G and that the
mapping Λ : G → C (G), g 7→ λg embeds G into C (G) as a regular, i.e., semi-regular
and transitive, subgroup. Similarly, a bijection α : G → G is called a Borel piecewise
translation of G if there exists a finite partition P of G into Borel subsets with
∀P ∈ P ∃g ∈ G :
α|P = λg |P .
Likewise, the set B(G) of all Borel piecewise translations of G is a subgroup of the
automorphism group of the Borel space of G and contains C (G) as a subgroup.
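For instance, on G = R the map that translates [0, 1) by +1, translates [1, 2) by −1, and fixes the complement of [0, 2) is a Borel piecewise translation (with respect to the Borel partition {[0, 1), [1, 2), R \ [0, 2)}); it is a bijection of R, but it is not a clopen piecewise translation, since the only clopen subsets of the connected group R are ∅ and R, so C (R) consists of the translations only.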
For a locally compact group G, both B(G) and C (G) are reasonable analogues
of the wobbling group. Yet, the mere existence of an embedding of F2 as a semi-regular subgroup of B(G), or even C (G), does not prevent G from being amenable.
In fact, there are many examples of compact (thus amenable) groups that admit
F2 as a (non-discrete) subgroup and hence as a semi-regular subgroup of C (G).
For example, since F2 is residually finite, it embeds into the compact group formed
by the product of its finite quotients. Therefore, we have to seek for a topological
analogue of semi-regularity, which amounts to a short discussion.
Remark 1.3. Let X be a set. A subgroup G ≤ Sym(X) is semi-regular if and only
if there exists a (necessarily surjective) map ψ : X → G such that ψ(gx) = gψ(x)
for all g ∈ G and x ∈ X. Obviously, the latter implies the former. To see the
converse, let σ : X → X be any orbit cross-section for the action of G on X, i.e.,
σ(X) ∩ Gx = {σ(x)} for every x ∈ X. Since G is semi-regular, for each x ∈ X there
is a unique ψ(x) ∈ G such that ψ(x)σ(x) = x. For all g ∈ G and x ∈ X, we have
gψ(x)σ(gx) = gψ(x)σ(x) = gx = ψ(gx)σ(gx),
which readily implies that ψ(gx) = gψ(x). So, ψ : X → G is as desired.
The purpose of this note is to show the following.
Theorem 1.4. Let G be a locally compact group. The following are equivalent.
(1) G is not amenable.
(2) There exist a homomorphism ϕ : F2 → C (G) and a Borel measurable map
ψ : G → F2 such that ψ ◦ ϕ(g) = λg ◦ ψ for all g ∈ F2 .
(3) There exist a homomorphism ϕ : F2 → B(G) and a Borel measurable map
ψ : G → F2 such that ψ ◦ ϕ(g) = λg ◦ ψ for all g ∈ F2 .
We remark that any map ϕ as in (2) or (3) of Theorem 1.4 has to be injective.
In view of the discussion above, we also note that for finitely generated discrete
groups the statement of Theorem 1.4 reduces to Whyte’s geometric solution to the
von Neumann problem. More specifically, the existence of a map ψ as in (2) or (3)
above may be thought of as a Borel variant of the semi-regular embedding condition
in Corollary 1.2. In general, we cannot arrange for ψ to be continuous, as there
exist non-amenable connected locally compact groups, and any continuous map from a connected group into the discrete group F2 is constant.
Both (2) and (3) of Theorem 1.4 may be considered relaxed versions of containing
F2 as a discrete subgroup: according to a result of Feldman and Greenleaf [7], if
H is a σ-compact metrizable closed (e.g., countable discrete) subgroup of a locally
compact group G, then the right coset projection G → H \ G, x 7→ Hx admits a
Borel measurable cross-section τ : H \G → G, and hence the H-equivariant map
ψ : G → H,        x 7→ xτ (Hx)−1
is Borel measurable, too. This particularly applies if H ≅ F2 is discrete.
The proof of Theorem 1.4 combines a result of Rickert resolving the original
von Neumann problem for almost connected locally compact groups (Theorem 3.3)
with a slight generalization of Whyte’s result for coarse spaces (Theorem 2.2) and
in turn refines an argument of Paterson proving the existence of Borel paradoxical
decompositions for non-amenable locally compact groups [21]. In fact, Theorem 1.4
implies Paterson’s result [21].
Corollary 1.5 (Paterson [21]). A locally compact group G is non-amenable if and
only if it admits a Borel paradoxical decomposition, i.e., there exist finite partitions
P and Q of G into Borel subsets and gP , hQ ∈ G (P ∈ P, Q ∈ Q) such that
G = ⋃·_{P ∈P} gP P ∪· ⋃·_{Q∈Q} hQ Q.
This note is organized as follows. Building on some preparatory work concerning
coarse spaces done in Section 2, we prove Theorem 1.4 in Section 3. Since our
approach to proving Theorem 1.4 involves wobbling groups, and there has been
recent interest in such groups, we furthermore include some complementary remarks
about finitely generated subgroups of wobbling groups in Section 4.
2. Revisiting Whyte’s result
Our proof of Theorem 1.4 will make use of Whyte’s argument [27] – in the form
of Corollary 2.3. More precisely, we will have to slightly generalize his result from
metric spaces to arbitrary coarse spaces. However, this will just require very minor
adjustments, and we only include a proof for the sake of completeness.
For convenience, let us recall some terminology from coarse geometry as it may
be found in [24]. For a relation E ⊆ X × X on a set X and x ∈ X, A ⊆ X, let
E[x] := {y ∈ X | (x, y) ∈ E},        E[A] := ⋃{E[z] | z ∈ A}.
A coarse space is a pair (X, E ) consisting of a set X and a collection E of subsets
of X × X (called entourages) such that
• the diagonal ∆X = {(x, x) | x ∈ X} belongs to E ,
• if F ⊆ E ∈ E , then also F ∈ E ,
• if E, F ∈ E , then E ∪ F, E −1 , E ◦ F ∈ E .
A coarse space (X, E ) is said to have bounded geometry if
∀E ∈ E ∀x ∈ X :
E[x] is finite,
and (X, E ) has uniformly bounded geometry if
∀E ∈ E ∃m ≥ 0 ∀x ∈ X :
|E[x]| ≤ m.
Among the most important examples of coarse spaces are metric spaces: if X is
a metric space, then we obtain a coarse space (X, EX ) by setting
EX := {E ⊆ X × X | sup{d(x, y) | (x, y) ∈ E} < ∞}.
Another crucial source of examples of coarse spaces is given by group actions.
Indeed, if G is a group acting on a set X, then we obtain a coarse space (X, EG ) of
uniformly bounded geometry by
EG := {R ⊆ X × X | ∃E ⊆ G finite : R ⊆ {(x, gx) | x ∈ X, g ∈ E}}.
Note that the coarse structure induced by a finitely generated group G acting on
itself by left translations coincides with the coarse structure on G generated by the
metric associated with any finite symmetric generating subset of G containing the
neutral element.
Now we come to amenability. Adopting the notion from metric coarse geometry,
we call a coarse space (X, E ) of bounded geometry amenable if
∀θ > 1 ∀E ∈ E ∃F ⊆ X finite, F 6= ∅ :
|E[F ]| ≤ θ|F |,
which is (easily seen to be) equivalent to saying that
∃θ > 1 ∀E ∈ E ∃F ⊆ X finite, F 6= ∅ :
|E[F ]| ≤ θ|F |.
This definition is compatible with the existing notion of amenability for group
actions (Proposition 2.1). Recall that an action of a group G on a set X is amenable
if the space ℓ∞ (X) of all bounded real-valued functions on X admits a G-invariant
mean, i.e., there exists a positive linear functional µ : ℓ∞ (X) → R with µ(1) = 1
and µ(f ◦ g) = µ(f ) for all f ∈ ℓ∞ (X) and g ∈ G.
Proposition 2.1 (cf. Rosenblatt [25]). An action of a group G on a set X is
amenable if and only if the coarse space (X, EG ) is amenable.
Proof. Generalizing Følner’s work [8] on amenable groups, Rosenblatt [25] showed
that an action of a group G on a set X is amenable if and only if
∀θ > 1 ∀E ⊆ G finite ∃F ⊆ X finite, F 6= ∅ :
|EF | ≤ θ|F |,
which is easily seen to be equivalent to the amenability of (X, EG ).
Let us turn our attention towards Theorem 1.1. A straightforward adaptation of
Whyte’s original argument readily provides us with the following only very slight
generalization (Theorem 2.2). For a binary relation E ⊆ X × X, we will denote the
associated undirected graph by
Γ(E) := (X, {{x, y} | (x, y) ∈ E}).
Furthermore, let gr(f ) := {(x, f (x)) | x ∈ X} for any map f : X → Y . Our proof
of Theorem 2.2 will utilize the simple observation that, for a map f : X → X, the
graph Γ(gr(f )) is a forest, i.e., it contains no cycles, if and only if f has no periodic
points, which means that P (f ) := {x ∈ X | ∃n ≥ 1 : f n (x) = x} is empty.
Theorem 2.2. Let d ≥ 3. A coarse space (X, E ) of bounded geometry is non-amenable if and only if there is E ∈ E such that Γ(E) is a d-regular forest.
Proof. (⇐=) Due to a very standard fact about isoperimetric constants for regular
trees [2, Example 47], if E ⊆ X × X is symmetric and Γ(E) is a d-regular tree, then
|E[F ]| ≥ (d − 1)|F | for every finite subset F ⊆ X. Of course, this property passes
to d-regular forests, which readily settles the desired implication.
(=⇒) Suppose that (X, E ) is not amenable. Then there is a symmetric entourage
E ∈ E such that |E[F ]| ≥ d|F | for every finite F ⊆ X. Consider the symmetric
relation R := E \ ∆X ⊆ X × X. Since |R[x]| < ∞ for every x ∈ X and
|R[F ]| ≥ |E[F ] \ F | ≥ |E[F ]| − |F | ≥ (d − 1)|F |
for every finite subset F ⊆ X, the Hall harem theorem [1, Theorem H.4.2] asserts
that there exists a function f : X → X with gr(f ) ⊆ R and |f −1 (x)| = d − 1 for
all x ∈ X. Notice that f does not have any fixed points as R ∩ ∆X = ∅. Since
the set of f-orbits of its elements partitions the set P (f ), we may choose a subset P0 ⊆ P (f ) such that P (f ) = ⋃·_{x∈P0} {f n (x) | n ∈ N}. Furthermore, choose functions
g, h : P0 × N → X such that, for all x ∈ P0 and n ≥ 1,
• g(x, 0) = x and h(x, 0) = f (x),
• {g(x, n), h(x, n)} ∩ P (f ) = ∅,
• f (g(x, n)) = g(x, n − 1) and f (h(x, n)) = h(x, n − 1).
It follows that g and h are injective functions with disjoint ranges. Now we define
f∗ : X → X by setting
f∗ (x) :=   g(z, n + 2)   if x = g(z, n) for z ∈ P0 and even n ≥ 0,
            g(z, n − 2)   if x = g(z, n) for z ∈ P0 and odd n ≥ 3,
            f 2 (x)       if x = h(z, n) for z ∈ P0 and n ≥ 2,
            f (x)         otherwise
for x ∈ X. We observe that
gr(f∗ ) ⊆ gr(f 2 )−1 ∪ gr(f 2 ) ∪ gr(f ).
In particular, gr(f∗ ) ⊆ E ◦ E and therefore gr(f∗ ) ∈ E . Moreover, it follows that
P (f∗ ) ⊆ P (f ). However, for every x ∈ P (f ), there exists a smallest m ∈ N such that
f m (x) ∈ P0 , and we conclude that f∗m+1 (x) = f∗ (f m (x)) = g(f m (x), 2) ∉ P (f ) and hence f∗m+1 (x) ∉ P (f∗ ), which readily implies that x ∉ P (f∗ ). Thus, P (f∗ ) = ∅.
In particular, f∗ has no fixed points. Furthermore,
f∗ −1 (x) =   (f −1 (x) ∪ {g(z, n − 2)}) \ {g(z, n + 1)}   if x = g(z, n) for z ∈ P0 and even n ≥ 2,
              (f −1 (x) ∪ {g(z, n + 2)}) \ {g(z, n + 1)}   if x = g(z, n) for z ∈ P0 and odd n ≥ 1,
              (f −1 (x) ∪ {h(z, n + 2)}) \ {h(z, n + 1)}   if x = h(z, n) for z ∈ P0 and n ≥ 1,
              (f −1 (x) ∪ {h(z, 2)}) \ {z}                 if x = f (z) for z ∈ P0 ,
              f −1 (x)                                     otherwise
and thus |f∗−1 (x)| = d−1 for each x ∈ X. Hence, Γ(gr(f∗ )) is a d-regular forest.
Just as Theorem 1.1 corresponds to Corollary 1.2, we can translate Theorem 2.2
into an equivalent statement about wobbling groups. Given a coarse space (X, E ),
we define its wobbling group (or group of bounded displacement ) as
W (X, E ) := {α ∈ Sym(X) | gr(α) ∈ E }.
Since the 4-regular tree is isomorphic to the standard Cayley graph of the free group
on two generators, we now obtain the following consequence of Theorem 2.2.
6
FRIEDRICH MARTIN SCHNEIDER
Corollary 2.3. A coarse space X of bounded geometry is non-amenable if and only
if F2 is isomorphic to a semi-regular subgroup of W (X).
We note that Corollary 2.3 for group actions has been applied already (though
without proof) in the recent work of the author and Thom [26, Corollary 5.12],
where a topological version of Whyte’s result for general (i.e., not necessarily locally
compact) topological groups in terms of perturbed translations is established. In
the present note, Corollary 2.3 will be used to prove Theorem 1.4, which generalizes
Whyte’s result to locally compact groups by means of clopen and Borel piecewise
translations and is in turn quite different to [26, Corollary 5.12].
3. Proving the main result
In this section we prove Theorem 1.4. For the sake of clarity, recall that a locally
compact group G is said to be amenable if there is a G-invariant1 mean on the space
Cb (G) of bounded continuous real-valued functions on G, i.e., a positive linear map
µ : Cb (G) → R with µ(1) = 1 and µ(f ◦ λg ) = µ(f ) for all f ∈ Cb (G) and g ∈ G.
In preparation of the proof of Theorem 1.4, we note the following standard fact,
whose straightforward proof we omit.
Lemma 3.1. Let H be a subgroup of a locally compact group G and consider the
usual action of G on the set G/H of left cosets of H in G. If µ : ℓ∞ (G/H) → R is a
G-invariant mean and ν : Cb (H) → R is an H-invariant mean, then a G-invariant
mean ξ : Cb (G) → R is given by
ξ(f ) := µ(xH 7→ ν((f ◦ λx )|H ))
(f ∈ Cb (G)).
It is a well-known fact (see Section 2 in [10]) that a locally compact group G
(considered together with a left Haar measure) is amenable if and only if there
exists a G-invariant mean on L∞ (G), i.e., a positive linear map µ : L∞ (G) → R
such that µ(1) = 1 and µ(f ◦ λg ) = µ(f ) for all f ∈ L∞ (G) and g ∈ G. An easy
calculation now provides us with the following.
Lemma 3.2. Let G be a locally compact group.
(1) A mean µ : L∞ (G) → R is G-invariant if and only if µ is B(G)-invariant.
(2) Let H be a locally compact group, let ϕ : H → B(G) be a homomorphism
and ψ : G → H be Borel measurable with ψ ◦ ϕ(g) = λg ◦ ψ for all g ∈ H.
If G is amenable, then so is H.
Proof. (1) Clearly, B(G)-invariance implies G-invariance. To prove the converse,
suppose that µ is G-invariant. Let α ∈ B(G) and let P be a finite partition of G
into Borel subsets and gP ∈ G (P ∈ P) with α|P = λgP |P for each P ∈ P. Now,
µ(f ◦ α) = Σ_{P ∈P} µ((f ◦ α) · 1P ) = Σ_{P ∈P} µ((f ◦ λgP ) · 1P ) = Σ_{P ∈P} µ(f · (1P ◦ λgP^{−1})) = Σ_{P ∈P} µ(f · 1gP P ) = Σ_{P ∈P} µ(f · 1α(P ) ) = µ(f )
for every f ∈ L∞ (G), as desired.
(2) Let ν : L∞ (G) → R be a G-invariant mean. Define µ : Cb (H) → R by
µ(f ) := ν(f ◦ ψ)
(f ∈ Cb (H)).
It is easy to see that µ is a mean. Furthermore, (1) asserts that
µ(f ◦ λg ) = ν(f ◦ λg ◦ ψ) = ν(f ◦ ψ ◦ ϕ(g)) = ν(f ◦ ψ) = µ(f )
for all f ∈ Cb (H) and g ∈ H. Hence, µ is H-invariant.
1 In case of ambiguity, invariance shall always mean left invariance.
We note that Lemma 3.2 readily settles the implication (3)=⇒(1) of Theorem 1.4.
The remaining part of the proof of Theorem 1.4 will rely on some structure theory
for locally compact groups – most importantly the following remarkable result of
Rickert [22] building on [23]. We recall that a locally compact group G is said to
be almost connected if the quotient of G by the connected component of its neutral
element is compact.
Theorem 3.3 (Theorem 5.5 in [22]). Any almost connected, non-amenable, locally
compact group has a discrete subgroup isomorphic to F2 .
Now everything is prepared to prove our main result.
Proof of Theorem 1.4. Evidently, (2) implies (3) as C (G) is a subgroup of B(G).
Furthermore, (3) implies (1) due to Lemma 3.2 and the non-amenability of F2 .
(1)=⇒(2). Let G be a non-amenable locally compact group. It follows by classical work of van Dantzig [4] that any locally compact group contains an almost
connected, open subgroup (see, e.g., [20, Proposition 12.2.2 (c)]). Choose any almost connected, open (and hence closed) subgroup H of G. We will distinguish
two cases depending upon whether H is amenable.
H is not amenable. According to Theorem 3.3, H contains a discrete subgroup F
being isomorphic to F2 , and so does G. By a result of Feldman and Greenleaf [7],
the right coset projection π : G → F \ G, x 7→ F x admits a Borel measurable
cross-section, i.e., there exists a Borel measurable map τ : F \ G → G such that
π ◦ τ = idF \G . Clearly, the F -equivariant map ψ : G → F, x 7→ xτ (F x)−1 is Borel
measurable. This readily settles the first case: the maps
ϕ : F2 ≅ F → C (G),        g 7→ λg ,
and ψ are as desired.
H is amenable. Since G is not amenable, Lemma 3.1 implies that the action
of G on the set G/H is not amenable. By Proposition 2.1, this means that the
coarse space X := (G/H, EG ) is not amenable. Due to Corollary 2.3, there exists
an embedding ϕ : F2 = F (a, b) → W (X) such that ϕ(F2 ) is semi-regular. Thus, by
definition of W (X), there exists some finite subset E ⊆ G such that
∀x ∈ {a, b} ∀z ∈ X ∃g ∈ E :
ϕ(x)(z) = gz.
Hence, we find a finite partition P of X along with gP , hP ∈ E (P ∈ P) such that
ϕ(a)|P = λgP |P and ϕ(b)|P = λhP |P for every P ∈ P. Consider the projection
π : G → G/H, x 7→ xH. Since H is an open subgroup of G, the quotient topology on
G/H, i.e., the topology induced by π, is discrete. So, π −1 (P) = {π −1 (P ) | P ∈ P}
is a finite partition of G into clopen subsets. What is more,
G = ⋃·_{P∈P} π^{−1}(ϕ(a)(P)) = ⋃·_{P∈P} π^{−1}(g_P P) = ⋃·_{P∈P} g_P π^{−1}(P),
G = ⋃·_{P∈P} π^{−1}(ϕ(b)(P)) = ⋃·_{P∈P} π^{−1}(h_P P) = ⋃·_{P∈P} h_P π^{−1}(P).
Therefore, we may define ϕ : {a, b} → C (G) by setting
ϕ(a)|π−1 (P ) = λgP |π−1 (P ) ,    ϕ(b)|π−1 (P ) = λhP |π−1 (P )    (P ∈ P).
Consider the unique homomorphism ϕ∗ : F2 → C (G) satisfying ϕ∗ |{a,b} = ϕ. Since
π ◦ ϕ(x) = ϕ(x) ◦ π for each x ∈ {a, b}, it follows that π ◦ ϕ∗ (w) = ϕ(w) ◦ π for
every w ∈ F2 . Appealing to Remark 1.3, we find a mapping ψ : G/H → F2 such
that ψ(ϕ(w)(z)) = wψ(z) for all w ∈ F2 and z ∈ G/H. Since the quotient space
G/H is discrete, the map ψ ∗ := ψ ◦ π : G → F2 is continuous and therefore Borel
measurable. Finally, we note that
ψ ∗ (ϕ∗ (w)(x)) = ψ(π(ϕ∗ (w)(x))) = ψ(ϕ(w)(π(x))) = wψ(π(x)) = wψ ∗ (x)
for all w ∈ F2 and x ∈ G, as desired. This completes the proof.
Let us deduce Paterson’s result [21] from Theorem 1.4.
Proof of Corollary 1.5. (⇐=) This is clear.
(=⇒) Let G be a non-amenable locally compact group. By Theorem 1.4, there
exist a homomorphism ϕ : F2 → B(G) and a Borel measurable map ψ : G → F2
with ψ ◦ ϕ(g) = λg ◦ ψ for all g ∈ F2 . Consider any paradoxical decomposition of
F2 given by P, Q, (gP )P ∈P , (hQ )Q∈Q . Taking a common refinement of suitable
finite Borel partitions of G corresponding to the elements ϕ(gP ), ϕ(hQ ) ∈ B(G)
(P ∈ P, Q ∈ Q), we obtain a finite Borel partition R of G along with mappings
σ : P × R → G and τ : Q × R → G such that
ϕ(gP )|R = λσ(P,R) |R
ϕ(hQ )|R = λτ (Q,R) |R
for all P ∈ P, Q ∈ Q, and R ∈ R. By ψ being Borel measurable, the refinements
ψ −1 (P) ∨ R and ψ −1 (Q) ∨ R are finite Borel partitions of G. What is more,
G = ⋃·_{P∈P} ψ^{−1}(g_P P) ∪· ⋃·_{Q∈Q} ψ^{−1}(h_Q Q)
= ⋃·_{P∈P} ϕ(g_P)(ψ^{−1}(P)) ∪· ⋃·_{Q∈Q} ϕ(h_Q)(ψ^{−1}(Q))
= ⋃·_{(P,R)∈P×R} ϕ(g_P)(ψ^{−1}(P) ∩ R) ∪· ⋃·_{(Q,R)∈Q×R} ϕ(h_Q)(ψ^{−1}(Q) ∩ R)
= ⋃·_{(P,R)∈P×R} σ(P, R)(ψ^{−1}(P) ∩ R) ∪· ⋃·_{(Q,R)∈Q×R} τ(Q, R)(ψ^{−1}(Q) ∩ R).
Thus, the data
ψ −1 (P) ∨ R,
ψ −1 (Q) ∨ R,
(σ(P, R))(P,R)∈P×R ,
(τ (Q, R))(Q,R)∈Q×R
constitute a Borel paradoxical decomposition of G.
4. Further remarks on wobbling groups
We are going to conclude with some additional remarks about wobbling groups,
which we consider noteworthy complements of Corollary 2.3. As van Douwen’s
result [6] shows, the presence of F2 as a subgroup of the wobbling group does not
imply the non-amenability of a coarse space. As it turns out, containment of F2 is
just a witness for positive asymptotic dimension (Proposition 4.3).
Let us once again recall some terminology from [24]. The asymptotic dimension
asdim(X, E ) of a coarse space (X, E ) is defined as the infimum of all those n ∈ N
such that, for every E ∈ E , there exist C0 , . . . , Cn ⊆ P(X) with
• X = ⋃C0 ∪ . . . ∪ ⋃Cn ,
• (C × D) ∩ E = ∅ for all i ∈ {0, . . . , n} and C, D ∈ Ci with C ≠ D,
• ⋃{C × C | C ∈ Ci , i ∈ {0, . . . , n}} ∈ E .
The concept of asymptotic dimension was first introduced for metric spaces by
Gromov [15] and later extended to coarse spaces by Roe [24]. We refer to [24] for
a thorough discussion of asymptotic dimension, related results and examples.
As we aim to describe positive asymptotic dimension in algebraic terms, we will
unravel the zero-dimensional case in the following lemma. Let us denote by [R] the
equivalence relation on a set X generated by a given binary relation R ⊆ X × X.
Lemma 4.1. Let (X, E ) be a coarse space. Then asdim(X, E ) = 0 if and only if
[E] ∈ E for every E ∈ E .
Proof. (=⇒) Let E ∈ E . Without loss of generality, assume that E contains ∆X .
As asdim(X, E ) = 0, there exists C0 ⊆ P(X) such that
(1) X = ⋃C0 ,
(2) (C × D) ∩ E = ∅ for all C, D ∈ C0 with C ≠ D,
(3) ⋃{C × C | C ∈ C0 } ∈ E .
As ∆X ⊆ E, assertion (2) implies that any two distinct members of C0 are disjoint.
Hence, (1) gives that C0 is a partition of X. By (2), the induced equivalence relation
R := ⋃{C × C | C ∈ C0 } contains E, thus [E]. By (3), it follows that [E] ∈ E .
(⇐=) Let E ∈ E . It is straightforward to check that C0 := {[E][x] | x ∈ X} has
the desired properties. Hence, asdim(X, E ) = 0.
Our proof of Proposition 4.3 below will rely upon the following slight modification
of the standard argument for residual finiteness of free groups. For an element
w ∈ F2 = F (a, b), let us denote by |w| the length of w with respect to the generators
a and b, i.e., the smallest integer n ≥ 0 such that w can be represented as a word
of length n in the letters a, a−1 , b, b−1 .
Lemma 4.2. Let w ∈ F2 with w ≠ e and let M := {0, . . . , 2|w|}. Then there exists
a homomorphism ϕ : F2 → Sym(M ) such that ϕ(w) ≠ e and |ϕ(v)(i) − i| ≤ 2|v| for
all i ∈ M and v ∈ F2 .
Proof. Let (k0 , . . . , kn ) ∈ (Z \ {0})^n × Z and (ℓ0 , . . . , ℓn ) ∈ Z × (Z \ {0})^n such that
w = a^{k0} b^{ℓ0} · · · a^{kn} b^{ℓn}. Of course, |w| = Σ_{i=0}^{n} |ki| + Σ_{i=0}^{n} |ℓi|. Let
αi := Σ_{j=0}^{i−1} |kj| + Σ_{j=0}^{i} |ℓj| ,    βi := Σ_{j=0}^{i−1} |kj| + Σ_{j=0}^{i−1} |ℓj|
for i ∈ {0, . . . , n} and let βn+1 := |w|. We will define a map ϕ : {a, b} → Sym(M ).
First, let us define ϕ(a) ∈ Sym(M ) by case analysis as follows: if i ∈ [2αj , 2βj+1 ]
for some j ∈ {0, . . . , n} with kj > 0, then
ϕ(a)(i) := i + 2  if i is even and i ∈ [2αj , 2βj+1 − 2],
ϕ(a)(i) := i − 1  if i = 2βj+1 ,
ϕ(a)(i) := i − 2  if i is odd and i ∈ [2αj + 3, 2βj+1 − 1],
ϕ(a)(i) := i − 1  if i = 2αj + 1;
if i ∈ [2αj , 2βj+1 ] for some j ∈ {0, . . . , n} with kj < 0, then
ϕ(a)(i) := i − 2  if i is even and i ∈ [2αj + 2, 2βj+1 ],
ϕ(a)(i) := i + 1  if i = 2αj ,
ϕ(a)(i) := i + 2  if i is odd and i ∈ [2αj + 1, 2βj+1 − 3],
ϕ(a)(i) := i + 1  if i = 2βj+1 − 1;
and if i ∉ ⋃{[2αj , 2βj+1 ] | j ∈ {0, . . . , n}, kj ≠ 0}, then ϕ(a)(i) := i. Analogously,
let us define ϕ(b) ∈ Sym(M ) by case analysis as follows: if i ∈ [2βj , 2αj ] for some
j ∈ {0, . . . , n} with ℓj > 0, then
ϕ(b)(i) := i + 2  if i is even and i ∈ [2βj , 2αj − 2],
ϕ(b)(i) := i − 1  if i = 2αj ,
ϕ(b)(i) := i − 2  if i is odd and i ∈ [2βj + 3, 2αj − 1],
ϕ(b)(i) := i − 1  if i = 2βj + 1;
if i ∈ [2βj , 2αj ] for some j ∈ {0, . . . , n} with ℓj < 0, then
ϕ(b)(i) := i − 2  if i is even and i ∈ [2βj + 2, 2αj ],
ϕ(b)(i) := i + 1  if i = 2βj ,
ϕ(b)(i) := i + 2  if i is odd and i ∈ [2βj + 1, 2αj − 3],
ϕ(b)(i) := i + 1  if i = 2αj − 1;
and if i ∉ ⋃{[2βj , 2αj ] | j ∈ {0, . . . , n}, ℓj ≠ 0}, then ϕ(b)(i) := i. It is easy to
check that ϕ(a) and ϕ(b) are well-defined permutations of M , and that moreover
|ϕ(x)(i) − i| ≤ 2 for each x ∈ {a, b} and all i ∈ M . Considering the unique
homomorphism ϕ∗ : F2 → Sym(M ) with ϕ∗ |{a,b} = ϕ, we observe that
ϕ∗ (w)(0) = ϕ(a)^{kn} ϕ(b)^{ℓn} · · · ϕ(a)^{k0} ϕ(b)^{ℓ0} (0) = 2|w|
and thus ϕ∗ (w) ≠ e. Also, |ϕ∗ (v)(i) − i| ≤ 2|v| for all i ∈ M and v ∈ F2 .
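The permutations above are given by a completely explicit procedure, so the statement can be checked mechanically on examples. The following Python sketch is an editorial illustration (not part of the paper); it assumes the reconstruction of the block boundaries αi, βi displayed above, builds ϕ(a) and ϕ(b) for a word w = a^{k0} b^{ℓ0} · · · a^{kn} b^{ℓn}, and verifies that ϕ∗(w) moves 0 to 2|w| while each generator moves points by at most 2:

def block_cycle(lo, hi, forward):
    # Cycle on {lo, ..., hi} (lo and hi even): even points move up by 2 until hi,
    # then odd points move back down by 2; forward=False returns the inverse cycle,
    # which is exactly the case of a negative exponent in the proof.
    perm = {}
    for i in range(lo, hi + 1):
        if i % 2 == 0:
            perm[i] = i + 2 if i <= hi - 2 else i - 1    # i == hi goes to hi - 1
        else:
            perm[i] = i - 2 if i >= lo + 3 else i - 1    # i == lo + 1 goes to lo
    return perm if forward else {v: k for k, v in perm.items()}

def build_phi(ks, ls):
    # w = a^{k0} b^{l0} ... a^{kn} b^{ln}; the word is assumed to be in the normal
    # form of the proof, so the blocks below do not overlap.
    n, w_len = len(ks) - 1, sum(map(abs, ks)) + sum(map(abs, ls))
    alpha = [sum(map(abs, ks[:i])) + sum(map(abs, ls[:i + 1])) for i in range(n + 1)]
    beta = [sum(map(abs, ks[:i])) + sum(map(abs, ls[:i])) for i in range(n + 2)]
    phi_a = {i: i for i in range(2 * w_len + 1)}
    phi_b = {i: i for i in range(2 * w_len + 1)}
    for j in range(n + 1):
        if ks[j] != 0:
            phi_a.update(block_cycle(2 * alpha[j], 2 * beta[j + 1], ks[j] > 0))
        if ls[j] != 0:
            phi_b.update(block_cycle(2 * beta[j], 2 * alpha[j], ls[j] > 0))
    return phi_a, phi_b, w_len

def power(perm, e, i):
    # apply perm^e to the point i; a negative exponent uses the inverse permutation
    p = perm if e >= 0 else {v: k for k, v in perm.items()}
    for _ in range(abs(e)):
        i = p[i]
    return i

ks, ls = (1, -1), (1, -1)            # w = a b a^{-1} b^{-1}, so |w| = 4
phi_a, phi_b, w_len = build_phi(ks, ls)
point = 0
for k, l in zip(ks, ls):             # apply phi(b)^{l0}, phi(a)^{k0}, phi(b)^{l1}, ...
    point = power(phi_b, l, point)
    point = power(phi_a, k, point)
assert point == 2 * w_len            # hence phi*(w) is not the identity
assert all(abs(p[i] - i) <= 2 for p in (phi_a, phi_b) for i in p)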
For the sake of clarity, we recall that a group is locally finite if each of its finitely
generated subgroups is finite. For a subset S of a group G, we will denote by hSi
the subgroup of G generated by S.
Proposition 4.3. Let X be a coarse space of uniformly bounded geometry. The
following are equivalent.
(1) asdim(X) > 0.
(2) W (X) is not locally finite.
(3) F2 embeds into W (X).
Proof. We will denote by E the coarse structure of X.
(2)=⇒(1). Let us recall a general fact: for a finite group G and any set M , the
group GM is locally finite. Indeed, considering a finite subset S ⊆ GM and the
induced equivalence relation R := {(x, y) ∈ M × M | ∀α ∈ S : α(x) = α(y)} on M ,
we observe that N := {R[x] | x ∈ M } is finite, due to G and S being finite. The
map π : M → N, x 7→ R[x] induces a homomorphism ϕ : GN → GM , α 7→ α ◦ π.
Evidently, S is contained in the finite group ϕ(GN ), and so is hSi.
Suppose now that asdim(X) = 0. Consider a finite subset S ⊆ W (X). We aim
to show that H := hSi is finite. To this end, we first observe that
ϕ : H → ∏_{x∈X} Sym(Hx), α 7→ (α|Hx )_{x∈X}
constitutes a well-defined embedding. Since D := ⋃{gr(α) | α ∈ S} belongs to E ,
Lemma 4.1 asserts that E := [D] ∈ E , too. Note that gr(α) ∈ E for all α ∈ H.
Hence, Hx ⊆ E[x] for every x ∈ X. Due to X having uniformly bounded geometry,
there exists m ≥ 0 such that |E[x]| ≤ m and thus |Hx| ≤ m for every x ∈ X. Now,
let M := {0, . . . , m − 1}. It follows that the group ∏_{x∈X} Sym(Hx) is isomorphic to
a subgroup of Sym(M )X , and so is H by virtue of ϕ. Since H is finitely generated
and Sym(M )X is locally finite by the remark above, this implies that H is finite.
(3)=⇒(2). This is trivial.
(1)=⇒(3). Suppose that asdim(X) > 0. By Lemma 4.1, there exists E ∈ E such
that [E] ∉ E . Without loss of generality, we may assume that ∆X ⊆ E = E^{−1} .
Hence, [E] = ⋃{E^n | n ∈ N}. For each n ∈ N, let us define
Tn := { x ∈ X^{n+1} | |{x0 , . . . , xn }| = n + 1, ∀i ∈ {0, . . . , n − 1} : (xi , xi+1 ) ∈ E }.
Claim. For every n ∈ N and every finite subset F ⊆ X, there exists x ∈ Tn such
that {x0 , . . . , xn } ∩ F = ∅.
Proof of claim. Let n ∈ N and let F ⊆ X be finite. Put ℓ := (n + 1)(|F | + 1).
Since E ∈ E and [E] ∉ E , we conclude that E^ℓ ⊈ E^{ℓ−1} . Let x0 , . . . , xℓ ∈ X such
that (x0 , xℓ ) ∉ E^{ℓ−1} and (xi , xi+1 ) ∈ E for every i ∈ {0, . . . , ℓ − 1}. As ∆X ⊆ E, it
follows that |{x0 , . . . , xℓ }| = ℓ + 1. Applying the pigeonhole principle, we find some
j ∈ {0, . . . , ℓ−n} such that {xj , . . . , xj+n }∩F = ∅. Hence, y0 := xj , . . . , yn := xj+n
are as desired.
Since N := F2 \ {e} is countable, we may recursively apply the claim above and
choose a family (xw )w∈N such that
(i) xw ∈ T2|w| for every w ∈ N ,
(ii) {xw,0 , . . . , xw,2|w| } ∩ {xv,0 , . . . , xv,2|v| } = ∅ for any two distinct v, w ∈ N .
Let w ∈ N and define Dw := {xw,0 , . . . , xw,2|w| }. Due to Lemma 4.2, there exists a
homomorphism ϕw : F2 → Sym(Dw ) such that ϕw (w) ≠ e and
ϕw (v)(xw,i ) ∈ {xw,j | j ∈ {0, . . . , 2|w|}, |i − j| ≤ 2|v|}
for all v ∈ F2 , i ∈ {0, . . . , 2|w|}. Since (xw,i , xw,i+1 ) ∈ E for i ∈ {0, . . . , 2|w| − 1},
it follows that gr(ϕw (v)) ⊆ E 2|v| for all v ∈ F2 . As Dw and Dv are disjoint for any
distinct v, w ∈ N , we may define a homomorphism ϕ : F2 → Sym(X) by setting
ϕ(v)(x) := ϕw (v)(x) if x ∈ Dw for some w ∈ N , and ϕ(v)(x) := x otherwise
for v ∈ F2 and x ∈ X. By construction, ϕ is an embedding, and furthermore
gr(ϕ(v)) ⊆ ∆X ∪ ⋃{gr(ϕw (v)) | w ∈ N } ⊆ E^{2|v|} ∈ E
for every v ∈ F2 . Hence, the image of ϕ is contained in W (X), as desired.
Remark 4.4. The assumption of uniformly bounded geometry in Proposition 4.3 is
needed only to prove that (2) implies (1). In fact, a similar argument as in the proof
of (1)=⇒(3) (not involving Lemma 4.2 though) shows that the wobbling group of
any coarse space not having uniformly bounded geometry contains an isomorphic
copy of ∏_{n∈N} Sym(n), hence F2 .
One might wonder whether Proposition 4.3 could have been deduced readily
from van Douwen’s result [6] on F2 embedding into W (Z). However, there exist uniformly discrete metric spaces of uniformly bounded geometry and positive
asymptotic dimension whose wobbling group does not contain an isomorphic copy
of W (Z) (see Example 4.7). We clarify the situation in Proposition 4.5.
As usual, a group is called residually finite if it embeds into a product of finite
groups, and a group is called locally residually finite if each of its finitely generated
subgroups is residually finite. Let us recall from [24] that a map f : X → Y between
two coarse spaces X and Y is bornologous if, for every entourage E of X, the set
{(f (x), f (y)) | (x, y) ∈ E} is an entourage of Y .
Proposition 4.5. Let X be a coarse space. The following are equivalent.
(1) There is a bornologous injection from Z into X.
(2) W (X) is not locally residually finite.
(3) W (X) contains a subgroup isomorphic to W (Z).
Remark 4.6. (i) For groups there is no difference between positive asymptotic
dimension and the existence of a bornologous injection of Z: a group has asymptotic
dimension 0 if and only if it is locally finite, and any group which is not locally
finite admits a bornologous injection of Z by a standard compactness argument
(see, e.g., [17, IV.A.12]). However, for arbitrary coarse spaces, even of uniformly
bounded geometry, the situation is slightly different (see Example 4.7).
(ii) One may equivalently replace Z by N in item (1) of Proposition 4.5: on the
one hand, the inclusion map constitutes a bornologous injection from N into Z; on
the other hand, there is a bornologous bijection f : Z → N given by
(
2n
if n ≥ 0,
f (n) :=
(n ∈ Z).
2|n| − 1 if n < 0
Unless explicitly stated otherwise, we always understand N as being equipped with
the coarse structure generated by the usual (i.e., Euclidean) metric.
(iii) Any bornologous injection f : X → Y between two coarse spaces X and Y
induces an embedding ϕ : W (X) → W (Y ) via
ϕ(α)(y) := f (α(f^{−1}(y))) if y ∈ f (X), and ϕ(α)(y) := y otherwise    (α ∈ W (X), y ∈ Y ).
Hence, by (ii), the groups W (N) and W (Z) mutually embed into each other, and
thus Z may equivalently be replaced by N in item (3) of Proposition 4.5.
Proof of Proposition 4.5. (1)=⇒(3). This is due to Remark 4.6(iii).
(3)=⇒(2). It suffices to show that W (Z) is not locally residually finite. A result
of Gruenberg [16] states that, for a finite group F , the restricted wreath product
F ≀ Z = F (Z) ⋊ Z (i.e., the lamplighter group over F ) is residually finite if and only
if F is abelian. For n ≥ 1, the action of Sym(n) ≀ Z on Z = ⋃·_{r=0}^{n−1} (nZ + r) given by
(α, m).(nk + r) := n(m + k) + α_{m+k}(r)    (α ∈ Sym(n)^{(Z)} , m, k ∈ Z, 0 ≤ r < n)
defines an embedding of Sym(n) ≀ Z into Sym(Z), the image of which is contained
in W (Z) as supz∈Z |z − (α, m).z| ≤ n(|m| + 1) for every (α, m) ∈ Sym(n) ≀ Z. Since
the embedded lamplighter groups are finitely generated and not residually finite for
n ≥ 3, it follows that W (Z) is not locally residually finite.
(2)=⇒(1). Let E denote the coarse structure of X. If X does not have bounded
geometry, then there exist E ∈ E and x ∈ X such that E[x] is infinite, and any
thus existing injection f : Z → X with f (Z) ⊆ E[x] is bornologous. Hence, we
may without loss of generality assume that X has bounded geometry. On the other
hand, there must exist E ∈ E and x ∈ X such that [E][x] is infinite. Otherwise,
W (X) would
S have to be locally residually finite: for any finite subset F ⊆ W (X),
since E := {gr(α) | α ∈ F } ∈ E , the homomorphism
Y
hF i →
Sym([E][x]), α 7→ α|[E][x] x∈X
x∈X
would embed hF i into a product of finite groups. So, let E ∈ E and x ∈ X such that
[E][x] is infinite. Without loss of generality, we may assume that ∆X ⊆ E = E^{−1} .
Therefore, [E] = ⋃{E^n | n ∈ N}. We conclude that E^n [x] ≠ E^{n+1} [x] and thus
Rn := { f ∈ X^N | f0 = x, |{f0 , . . . , fn }| = n + 1, ∀i ∈ N : (fi , fi+1 ) ∈ E }
is non-empty for all n ∈ N. As (Rn )n∈N is a chain of closed subsets of the compact
topological space ∏_{m∈N} E^m [x], we have R := ⋂_{n∈N} Rn ≠ ∅. Since any member of
R is a bornologous injection from N into X, this implies (1) by Remark 4.6(ii).
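The embedding of Sym(n) ≀ Z into W (Z) used in the proof of (3)=⇒(2) is concrete, and its wobbling (bounded-displacement) property can be checked numerically. The following Python sketch is an editorial illustration only; the random choice of α and the sample range are ours:

import random

def act(alpha, m, z, n):
    # apply the lamplighter element (alpha, m) of Sym(n) wr Z to the integer z,
    # where alpha is a finitely supported family of permutations of {0, ..., n-1}
    k, r = divmod(z, n)                          # z = n*k + r with 0 <= r < n
    perm = alpha.get(m + k, tuple(range(n)))     # identity outside the support
    return n * (m + k) + perm[r]

random.seed(0)
n, m = 3, 2
alpha = {j: tuple(random.sample(range(n), n)) for j in range(-2, 3)}
# the displacement is uniformly bounded, so the image of the embedding lies in W(Z)
assert all(abs(z - act(alpha, m, z, n)) <= n * (abs(m) + 1) for z in range(-100, 100))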
Example 4.7. Let I be a partition of N into finite intervals with supI∈I |I| = ∞.
Consider the metric space X := (N, d) given by
d(x, y) := |x − y| if x, y ∈ I for some I ∈ I , and d(x, y) := max(x, y) otherwise    (x, y ∈ N).
It is easy to see that X has uniformly bounded geometry. Moreover, by Lemma 4.1
and the unboundedness assumption for the interval lengths, it follows that X has
positive asymptotic dimension. On the other hand, essentially by finiteness of the
considered intervals, there is no bornologous injection from N into X. Due to
Proposition 4.5, this readily implies that W (Z) does not embed into W (X).
The interplay between certain geometric properties of coarse spaces on the one
hand and algebraic peculiarities of their wobbling groups on the other is a subject
of recent attention [12, 3]. It would be interesting to have further results in that
direction, e.g., to understand if (and how) specific positive values for the asymptotic
dimension may be characterized in terms of wobbling groups.
Acknowledgments
The author would like to thank Andreas Thom for interesting discussions about
Whyte’s variant of the von Neumann conjecture, as well as Warren Moors and Jens
Zumbrägel for their helpful comments on earlier versions of this note.
References
[1] Tullio Ceccherini-Silberstein and Michel Coornaert, The Hall Harem Theorem, In: Cellular Automata and Groups, Springer Monographs in Mathematics, Springer Berlin Heidelberg (2010), pp. 391–401.
[2] Tullio Ceccherini-Silberstein, Rostislav I. Grigorchuk, and Pierre de la Harpe, Amenability and paradoxical decompositions for pseudogroups and discrete metric spaces, Proc. Steklov Inst. Math. 224 (1999), pp. 57–97.
[3] Yves Cornulier, Irreducible lattices, invariant means, and commensurating actions, Math. Z. 279 (2015), no. 1–2, pp. 1–26.
[4] David van Dantzig, Zur topologischen Algebra. III. Brouwersche und Cantorsche Gruppen, Compositio Math. 3 (1936), pp. 408–426.
[5] Mahlon M. Day, Amenable semigroups, Illinois J. Math. 1 (1957), pp. 509–544.
[6] Eric K. van Douwen, Measures invariant under actions of F2 , Topology and its Applications 34 (1990), pp. 53–68.
[7] Jacob Feldman and Frederick P. Greenleaf, Existence of Borel transversals in groups, Pacific J. Math. 25 (1968), pp. 455–461.
[8] Erling Følner, On groups with full Banach mean value, Math. Scand. 3 (1955), pp. 243–254.
[9] Damien Gaboriau and Russell Lyons, A measurable-group-theoretic solution to von Neumann’s problem, Invent. Math. 177 (2009), no. 3, pp. 533–540.
[10] Frederick P. Greenleaf, Invariant means on topological groups and their applications, Van Nostrand Mathematical Studies, No. 16. Van Nostrand Reinhold Co., New York-Toronto, Ont.-London, 1969, pp. ix+113.
[11] Kate Juschenko and Nicolas Monod, Cantor systems, piecewise translations and simple amenable groups, Ann. of Math. (2) 178 (2013), no. 2, pp. 775–787.
[12] Kate Juschenko and Mikael de la Salle, Invariant means for the wobbling group, Bull. Belg. Math. Soc. Simon Stevin 22 (2015), no. 2, pp. 281–290.
[13] Andrew Marks and Spencer Unger, Baire measurable paradoxical decompositions via matchings, Adv. Math. 289 (2016), pp. 397–410.
[14] Maxime Gheysens and Nicolas Monod, Fixed points for bounded orbits in Hilbert spaces, August 2015, arXiv: 1508.00423 [math.GR], to appear in Annales scientifiques de l’École normale supérieure.
[15] Michail Gromov, Asymptotic invariants of infinite groups. In: Geometric group theory, Vol. 2 (Sussex, 1991). London Math. Soc. Lecture Note Ser. 182, Cambridge Univ. Press, Cambridge, 1993, pp. 1–295.
[16] Karl W. Gruenberg, Residual properties of infinite soluble groups, Proc. London Math. Soc. (3) 7 (1957), pp. 29–62.
[17] Pierre de la Harpe, Topics in geometric group theory, Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 2000, pp. vi+310.
[18] John von Neumann, Über die analytischen Eigenschaften von Gruppen linearer Transformationen und ihrer Darstellungen, Math. Z. 30 (1929), no. 1, pp. 3–42.
[19] Alexander Ju. Ol’šanskiı̆, On the question of the existence of an invariant mean on a group, Uspekhi Mat. Nauk 35 (1980), no. 4(214), pp. 199–200.
[20] Theodore W. Palmer, Banach algebras and the general theory of ∗-algebras. Vol. 2, Encyclopedia of Mathematics and its Applications 79. Cambridge University Press, Cambridge, 2001, pp. i–xii and 795–1617.
[21] Alan L. T. Paterson, Nonamenability and Borel paradoxical decompositions for locally compact groups, Proc. Amer. Math. Soc. 96 (1986), pp. 89–90.
[22] Neil W. Rickert, Amenable groups and groups with the fixed point property, Trans. Amer. Math. Soc. 127 (1967), pp. 221–232.
[23] Neil W. Rickert, Some properties of locally compact groups, J. Austral. Math. Soc. 7 (1967),
pp. 433–454.
[24] John Roe, Lectures on coarse geometry, University Lecture Series 31. American Mathematical
Society, Providence, RI, 2003, pp. viii+175.
[25] Joseph M. Rosenblatt, A generalization of Følner’s condition, Math. Scand. 33 (1973), no. 3,
pp. 153–170.
[26] Friedrich M. Schneider and Andreas B. Thom, On Følner sets in topological groups, August
2016, arXiv: 1608.08185[math.GR].
[27] Kevin Whyte, Amenability, bi-Lipschitz equivalence, and the von Neumann conjecture, Duke
Math. J. 99 (1999), no. 1, pp. 93–112.
Institute of Algebra, TU Dresden, 01062 Dresden, Germany
Current address: Department of Mathematics, The University of Auckland, Private Bag 92019,
Auckland 1142, New Zealand
E-mail address: [email protected]
| 4 |
Adaptive Coded Caching for Fair Delivery over
Fading Channels
arXiv:1802.02895v1 [cs.IT] 7 Feb 2018
Apostolos Destounis, Member, IEEE, Asma Ghorbel, Student Member, IEEE Georgios S. Paschos, Senior
Member, IEEE, and Mari Kobayashi, Senior Member, IEEE,
Abstract—The performance of existing coded caching schemes
is sensitive to the worst channel quality, a problem which is
exacerbated when communicating over fading channels. In this
paper, we address this limitation in the following manner: in
short-term, we allow transmissions to subsets of users with good
channel quality, avoiding users with fades, while in long-term we
ensure fairness among users. Our online scheme combines the
classical decentralized coded caching scheme [1] with (i) joint
scheduling and power control for the fading broadcast channel, as
well as (ii) congestion control for ensuring the optimal long-term
average performance. We prove that our online delivery scheme
maximizes the alpha-fair utility among all schemes restricted
to decentralized placement. By tuning the value of alpha, the
proposed scheme enables to balance between different operating
points on the average delivery rate region. We demonstrate via
simulations that our scheme outperforms two baseline schemes:
(a) standard coded caching with multicast transmission, limited
by the worst channel user yet exploiting the global caching gain;
(b) opportunistic scheduling with unicast transmissions exploiting
only the local caching gain.
Index Terms—Broadcast channel, coded caching, fairness,
Lyapunov optimization.
I. INTRODUCTION
A key challenge for the future wireless networks is the
increasing video traffic demand, which reached 70% of total
mobile IP traffic in 2015 [2]. Classical downlink systems
cannot meet this demand since they have limited resource
blocks, and therefore as the number K of simultaneous video
transfers increases, the per-video throughput vanishes as 1/K.
Recently it was shown that scalable per-video throughput can
be achieved if the communications are synergistically designed
with caching at the receivers. Indeed, the recent breakthrough
of coded caching [3] has inspired a rethinking of wireless
downlink. Different video sub-files are cached at the receivers,
and video requests are served by coded multicasts. By careful
selection of sub-file caching and exploitation of the wireless
broadcast channel, the transmitted signal is simultaneously
useful for decoding at users who requested different video
files. This scheme has been theoretically proven to scale well,
and therefore has the potential to resolve the challenge of
A. Destounis and G. S. Paschos are with the Mathematical and Algorithmic
Sciences Lab, France Research Center - Huawei Technologies Co. Ltd.,
20 quai de Point du Jour, 92100 Boulogne-Bilancourt, France. Emails:
[email protected]
M. Kobayashi and A. Ghorbel are with the Laboratoire des Signaux et
Systèmes (L2S), CentraleSupélec, Université Paris-Saclay, 3, Rue Joliot-Curie,
91192 Gif sur Yvette, France. Emails: [email protected]
Part of the work in this paper has been presented at the 15th International
Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless
Networks (WiOpt), Telecom ParisTech, Paris, France, 15th - 19th May, 2017.
downlink bottleneck for future networks. Nevertheless, several
limitations hinder its applicability in practical systems [4]. In
this work, we take a closer look to the limitations that arise
from the fact that coded caching was originally designed for
a symmetric error-free shared link.
If instead we consider a realistic wireless channel, we
observe that coded caching faces a short-term limitation.
Namely, its performance is limited by the user in the worst
channel condition because the wireless multicast capacity is
determined by the worst user [5, Chapter 7.2]. This is in
stark contrast with standard downlink techniques such as
opportunistic scheduling [6]–[8], which serve the user with
the best instantaneous channel quality. Thus, a first challenge
is to modify coded caching for exploitation of fading peaks,
similar to the opportunistic scheduling.
In addition to the fast fading consideration, there is also a
long-term limitation due to a network topology. Namely, the
ill-positioned users, e.g. users at the cell edge, may experience
consistently poor channel quality during a whole video delivery. The classical coded caching scheme is designed to provide
video files at equal data rates to all users, which leads to ill-positioned users consuming most of the air time and hence
driving the overall system performance to low efficiency. In the
literature of wireless scheduling without caches at receivers,
this problem has been resolved by the use of fairness among
user throughputs [7]. By allowing poorly located users to
receive less throughput than others, precious air time is saved
and the overall system performance is greatly increased. Since
the sum throughput rate and equalitarian fairness are typically
the two extreme objectives, past works have proposed the use
of alpha-fairness [9] which allows to select the coefficient α
and drive the system to any desirable tradeoff point in between
of the two extremes. Previously, the alpha-fair objectives have
been studied in the context of (i) multiple user activations
[6], (ii) multiple antennas [10] and (iii) broadcast channels
[11]. However, in the presence of caches at user terminals,
the fairness problem is further complicated by the interplay
between user scheduling and designing codewords for multiple
users. In particular, we wish to shed light into the following
questions: Which user requests shall we combine together
to perform coded caching? How shall we schedule a set of
users to achieve our fairness objective while adapting to time-varying channel quality?
To address these questions, we study the content delivery
over a realistic block-fading broadcast channel, where the
channel quality varies across users and time. Although the
decisions of user scheduling and codeword design are inherently coupled, we design a scheme which decouples these two
problems, while maintaining optimality through a specifically
designed queueing structure. On the transmission side, we
select the multicast user set dynamically depending on the
instantaneous channel quality and user urgency captured by
queue lengths. On the coding side, we adapt the codeword
construction of [1] to the set of users chosen by the appropriate
routing which depends also on the past transmission side decisions. Combining with an appropriate congestion controller,
we show that this approach yields our alpha-fair objective.
More specifically, our approaches and contributions are
summarized below:
1) We design a novel queueing structure which decouples
the channel scheduling from the codeword construction.
Although it is clear that the codeword construction needs
to be adaptive to channel variation, our scheme ensures
this through our backpressure that connects the user
queues and the codeword queues. Hence, we are able to
show that this decomposition is without loss of optimality
(see Theorem 6).
2) We then provide an online policy consisting of (i) admission control of new files into the system; (ii) combination
of files to perform coded caching; (iii) scheduling and
power control of codeword transmissions to subset of
users on the wireless channel. We prove that the long-term video delivery rate vector achieved by our scheme
is a near optimal solution to the alpha-fair optimization
problem under the restriction to policies that are based
on the decentralized coded caching scheme [1].
3) Through numerical examples, we demonstrate the superiority of our approach versus (a) standard coded caching
with multicast transmission limited by the worst channel
condition yet exploiting the global caching gain, (b)
opportunistic scheduling with unicast transmissions exploiting only the local caching gain. This shows that our
scheme not only is the best among online decentralized
coded caching schemes, but moreover manages to exploit
opportunistically the time-varying fading channels.
A. Related work
Since coded caching was first introduced in [3] and its
potential was recognized by the community, substantial efforts have been devoted to quantify the gain in realistic
scenarios, including decentralized placement [1], non-uniform
popularities [12], [13], and more general network topologies
(e.g. [14]–[16]). A number of recent works have studied
coded caching by replacing the original perfect shared link
with wireless channels [17]–[22]. In particular, the harmful
effect of coded caching over wireless multicast channels has
been highlighted recently [17], [18], [21], [23], while similar
conclusions and some directions are given in [17], [18], [20],
[23]. Although [23] consider the same channel model and
address a similar question as in the current work, they differ in
their objectives and approaches. [23] highlights the scheduling
part and provides rigorous analysis on the long-term average
per-user rate in the regime of large number of users. In the
current work, a new queueing structure is proposed to deal
Fig. 1. Decentralized coded caching for N = K = 3 over block-fading
broadcast channel
jointly with admission control, routing, as well as scheduling
for a finite number of users.
Furthermore, most of existing works have focused on
offline caching where both cache placement and delivery
phases are performed once without capturing the random and
asynchronous nature of video traffic. The works [24], [25]
addressed partly the online aspect by studying cache eviction
strategies, the delivery delay, respectively. In this work, we will
explore a different online aspect. Namely, we assume that the
file requests from users arrive dynamically and the file delivery
is performed continuously over time-varying fading broadcast
channels.
Finally, online transmission scheduling over wireless channels has been extensively studied in the context of opportunistic scheduling [6] and network utility maximization [26]. Prior
works emphasize two fundamental aspects: (a) the balancing
of user rates according to fairness and efficiency considerations, and (b) the opportunistic exploitation of the time-varying
fading channels. There have been some works that study
scheduling policies over a queued-fading downlink channel;
[27] gives a maxweight-type of policy and [28] provides a
throughput optimal policy based on a fluid limit analysis. Our
work is the first to our knowledge that studies coded caching
in this setting. The new element in our study is the joint
consideration of user scheduling with codeword construction
for the coded caching delivery phase.
II. CODED CACHING OVER WIRELESS CHANNELS
A. System Model
We consider a content delivery network where a server (or
a base station) wishes to convey requested files to K user
terminals over a wireless channel; in Fig. 1 we give an example
with K=3. The wireless channel is modeled by a standard
block-fading broadcast channel, such that the channel state
remains constant over a slot and changes from one slot to
another in an i.i.d. manner. Each slot is assumed to allow for
Tslot channel uses. The channel output of user k in any channel
use of slot t is given by
y k (t) = √(hk (t)) x (t) + ν k (t),    (1)
where the channel input x ∈ C^{Tslot} is subject to the
power constraint E[‖x‖²] ≤ P Tslot ; ν k (t) ∼ NC (0, I_{Tslot})
are additive white Gaussian noises with covariance matrix
identity of size Tslot , assumed independent of each other;
{hk (t) ∈ C} are channel fading coefficients independently
distributed across time. At each slot t, the channel state
h(t) = (h1 (t), . . . , hK (t)) is perfectly known to the base
station while each user knows its own channel realization.
We follow the network model considered in [3] as well as
its follow-up works. The server has an access to N equally
popular files W1 , . . . , WN , each F bits long, while each user
k is equipped with cache memory Zk of M F bits, where
M ∈ {0, 1, . . . , N }. We restrict ourselves to decentralized
cache placement [1]. More precisely, each user k independently caches a subset of MNF bits of file i, chosen uniformly
at random for i = 1, . . . , N , under its memory constraint of
M F bits. For later use, we let m = M
N denote the normalized
memory size. By letting Wi|J denote the sub-file of Wi stored
exclusively in the cache memories of the user set J, the cache
memory Zk of user k after decentralized placement is given
by
Zk = {Wi | J : J ⊆ {1, . . . , K}, k ∈ J, ∀i = 1, . . . , N }. (2)
Under the assumption of large file size (F → ∞), we use
the law of large numbers to calculate the size of each sub-file
(measured in bits) as the following
|Wi|J | = m^{|J|} (1 − m)^{K−|J|} F.    (3)
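For illustration, the following Python sketch (an editorial addition, not part of the paper) simulates the decentralized placement at the bit level for a small file and checks that the empirical size of each sub-file Wi|J matches the fraction m^{|J|}(1 − m)^{K−|J|} in (3):

import random
from itertools import combinations

random.seed(1)
K, m, F = 3, 0.5, 100_000
# each user caches every bit of the file independently with probability m = M/N
caches = [{b for b in range(F) if random.random() < m} for _ in range(K)]

for size in range(K + 1):
    for J in combinations(range(K), size):
        # bits cached by every user in J and by no user outside J, i.e. the sub-file W_{i|J}
        n_bits = sum(1 for b in range(F)
                     if all(b in caches[k] for k in J)
                     and all(b not in caches[k] for k in range(K) if k not in J))
        expected = m ** size * (1 - m) ** (K - size)
        print(J, round(n_bits / F, 4), "expected", round(expected, 4))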
Once the requests of all users are revealed, decentralized
coded caching proceeds to the delivery of the requested files
(delivery phase). Assuming that user k demands file k, and
writing dk = k, the server generates and conveys the following
codeword simultaneously useful to the subset of users J:
VJ = ⊕k∈J Wk|J\{k} ,
(4)
where ⊕ denotes the bit-wise XOR operation. The central idea
of coded caching is to create a codeword simultaneously useful
to a subset of users by exploiting the receiver side information
established during the placement phase. This multicasting
operation leads to a gain: let us consider the uncoded delivery
such that sub-files are sent sequentially. The total number of
transmissions intended to |J| users is equal to |J|×|Wk|J\{k} |.
The coded delivery requires the transmission of |Wk|J\{k} |,
yielding a reduction of a factor |J|. It can be shown that the
transmitted signal as per (4) can be decoded correctly with
probability 1 by all intended receivers. In order to further
illustrate the placement and delivery of decentralized coded
caching, we provide a three-user example in Fig. 1.
Example 1. Let us assume that user 1, 2, 3, requests file
A, B, C, respectively. After the placement phase, a given file
A will be partitioned into 8 sub-files, one per user subset.
Codewords to be sent are the following:
• A∅ , B∅ and C∅ to user 1, 2 and 3, respectively.
• A2 ⊕ B1 is intended to users {1, 2}. Once received, user
1 decodes A2 by combining the received codeword with
B1 given in its cache. Similarly user 2 decodes B1 . The
same holds for codeword B3 ⊕ C2 to users {2, 3} and
codeword A3 ⊕ C1 to users {1, 3}, respectively.
• A23 ⊕ B13 ⊕ C12 is intended to users {1, 2, 3}. User 1 can
decode A23 by combining the received codeword with
{B13 , C12 } given in its cache. The same approach is used
for user 2, 3 to decode B13 , C12 respectively.
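The codeword construction (4) used in this example can be sketched in a few lines of Python (an editorial illustration only; users are indexed 0, 1, 2 rather than 1, 2, 3 and sub-files are modeled as short random byte strings):

import os
from itertools import combinations

def xor(blocks):
    out = bytearray(max(len(b) for b in blocks))   # zero-pad to the longest block
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def delivery_codewords(sub, demand, K):
    # V_J = XOR over k in J of the sub-file W_{d_k | J\{k}}, for every non-empty J
    V = {}
    for size in range(1, K + 1):
        for J in map(frozenset, combinations(range(K), size)):
            V[J] = xor([sub[demand[k]][J - {k}] for k in J])
    return V

K = 3
subsets = [frozenset(c) for r in range(K + 1) for c in combinations(range(K), r)]
sub = {f: {S: os.urandom(4) for S in subsets} for f in "ABC"}   # toy sub-files
V = delivery_codewords(sub, demand={0: "A", 1: "B", 2: "C"}, K=K)
# user 0 recovers A_{1,2} from V_{0,1,2} using B_{0,2} and C_{0,1} stored in its cache:
assert xor([V[frozenset({0, 1, 2})],
            sub["B"][frozenset({0, 2})],
            sub["C"][frozenset({0, 1})]]) == sub["A"][frozenset({1, 2})]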
In order to determine the user throughput under this scheme
we must inspect the achievable transmission rate per codeword, then determine the total time to transmit all codewords,
and finally extract the user throughput. To this aim, the
next subsection will specify the transmission rates of each
codeword by designing a joint scheduling and power allocation
to subsets of users.
B. Degraded Broadcast Channel with Private and Common
Messages
The placement phase creates 2^K − 1 independent sub-files
{VJ }J⊆{1,...,K} , each intended to a subset of users. We address
the question on how the transmitter shall convey these subfiles while opportunistically exploiting the underlying wireless
channel. We start by remarking that the channel in (1) for
a given channel realization h is a stochastically degraded BC
which achieves the same capacity region as the physically
degraded BC [5, Sec. 5]. The capacity region of the degraded
broadcast channel for K private messages and a common
message is well-known [5]. Here, we consider the extended
setup where the transmitter wishes to convey 2^K − 1 mutually
independent messages, denoted by {MJ }, where MJ denotes
the message intended to the users in subset J ⊆ {1, . . . , K}.
We require that each user k must decode all messages {MJ }
for J ∋ k. By letting RJ denote the multicast rate of the
message MJ , we say that the rate-tuple R ∈ R_+^{2^K−1} is
achievable if there exist encoding and decoding functions
which ensure the reliability and the rate condition as the slot
duration Tslot is taken arbitrarily large. The capacity region is
defined as the supremum of the achievable rate-tuple as shown
in [23], where the rate is measured in bit/channel use.
Theorem 1. The capacity region Γ(h) of a K-user degraded
Gaussian broadcast channel with fading gains h1 ≥ · · · ≥ hK
and 2K − 1 independent messages {MJ } is given by
R1 ≤ log(1 + h1 p1 )    (5)
Σ_{J⊆{1,...,k}: k∈J} RJ ≤ log( (1 + hk Σ_{j=1}^{k} pj ) / (1 + hk Σ_{j=1}^{k−1} pj ) ),    k = 2, . . . , K    (6)
for non-negative variables {pk } such that Σ_{k=1}^{K} pk ≤ P .
Proof. The proof is quite straightforward and is based on ratesplitting and the private-message region of degraded broadcast
channel. For completeness, see details in Appendix IX-A.
The achievability builds on superposition coding at the
transmitter and successive interference cancellation at receivers. For K = 3, the transmit signal is simply given by
x = x1 + x2 + x3 + x12 + x13 + x23 + x123 ,
where xJ denotes the signal corresponding to the message MJ
intended to the subset J ⊆ {1, 2, 3}. We suppose that all {xJ :
J ⊆ {1, . . . , K}} are mutually independent Gaussian distributed random variables satisfying the power constraint. User
3 (the weakest user) decodes M̃3 = {M3 , M13 , M23 , M123 }
by treating all the other messages as noise. User 2 decodes first
the messages M̃3 and then jointly decodes M̃2 = {M2 , M12 }.
Finally, user 1 (the strongest user) successively decodes
M̃3 , M̃2 and, finally, M1 .
Later in our online coded caching scheme, we will need to
characterize specific boundary points of the capacity region
Γ(h) that maximize a weighted sum rate. To this end, it
suffices to consider the weighted sum rate maximization:
max_{r∈Γ(h)} Σ_{J: J⊆{1,...,K}} θJ rJ .    (7)
We first simplify the problem using the following theorem.
Theorem 2. The weighted sum rate maximization with 2^K − 1
variables in (7) reduces to a simpler problem with K variables, given by
max_p Σ_{k=1}^{K} θ̃k log( (1 + hk Σ_{j=1}^{k} pj ) / (1 + hk Σ_{j=1}^{k−1} pj ) ),    (8)
where p = (p1 , . . . , pK ) ∈ R_+^K is a positive real vector
satisfying the total power constraint, and θ̃k denotes the
largest weight for user k:
θ̃k = max_{K: k∈K⊆{1,...,k}} θK .
Proof. The proof builds on the simple structure of the capacity
region. We remark that for a given power allocation of users
1 to k − 1, user k sees 2^{k−1} messages {MJ } for all J such
that k ∈ J ⊆ {1, . . . , k} with the equal channel gain. For a
given set of {pj }_{j=1}^{k−1} , the capacity region of these messages
is a simple hyperplane characterized by 2^{k−1} vertices R̃k e_i
for i = 1, . . . , 2^{k−1} , where R̃k is the sum rate of user k in
the RHS of (6) and e i is a vector with one for the i-th entry
and zero for the others. Therefore, the weighted sum rate is
maximized for user k by selecting the vertex corresponding to
the largest weight, denoted by θ̃. This holds for any k.
We provide an efficient algorithm to solve this power
allocation problem as a special case of the parallel Gaussian
broadcast channel studied in [29, Theorem 3.2]. Following
[29], we define the rate utility function for user k given by
uk (z) =
θ̃k
− λ,
1/hk + z
(9)
where λ is a Lagrangian multiplier. The optimal solution
corresponds to selecting the user with the maximum rate utility
at each z and the resulting power allocation for user k is
p∗_k = |{z : [max_j uj (z)]_+ = uk (z)}|    (10)
with λ satisfying
P = max_k [ θ̃k /λ − 1/hk ]_+ .    (11)
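A minimal Python sketch of this power allocation is given below (editorial illustration only; the bisection over λ and the discretization of the interference level z are our own implementation choices, not part of the paper):

import numpy as np

def power_allocation(theta, h, P, grid=100_000, iters=60):
    theta, h = np.asarray(theta, float), np.asarray(h, float)

    def total_power(lam):                      # right-hand side of (11)
        return np.max(np.maximum(theta / lam - 1.0 / h, 0.0))

    lo, hi = 1e-12, np.max(theta * h) + 1.0    # total_power is decreasing in lambda
    for _ in range(iters):                     # bisection: total_power(lam) = P
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if total_power(lam) > P else (lo, lam)
    lam = 0.5 * (lo + hi)

    z = np.linspace(0.0, P, grid, endpoint=False) + P / (2 * grid)
    u = theta[:, None] / (1.0 / h[:, None] + z[None, :]) - lam   # utilities (9)
    winner = np.argmax(u, axis=0)
    positive = np.max(u, axis=0) > 0.0
    p = np.array([np.sum((winner == k) & positive) for k in range(len(h))]) * (P / grid)
    return p        # p[k] approximates the measure of z where user k wins, cf. (10)

# toy example: 3 users, weights theta_k_tilde and channel gains h1 >= h2 >= h3
print(power_allocation(theta=[2.0, 1.0, 3.0], h=[4.0, 2.0, 1.0], P=10.0))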
(11)
Throughout the paper, we assume that each slot is arbitrarily
large to achieve transmission rates of the whole capacity region
of the broadcast channel (as given above) without errors, for
each possible channel realization. This is necessary to ensure
the successful decoding of each sub-file at the receivers.
C. Application to Online Delivery
In this subsection, we wish to apply the superposition encoding over different subsets of users, proposed in the previous
subsection to the online delivery phase of decentralized coded
caching. Compared to the original decentralized coded caching
in [1], we introduce here the new ingredients: i) at each
slot, the superposition based delivery scheme is able to serve
multiple subsets of users, such that each user shall decode
multiple sub-files; ii) users’ requests arrive randomly and each
user decodes a sequence of its requested files. In the original
framework [1], [3], the vector of user requests, denoted by
d = (d1 , . . . , dK ), is assumed to be known by all users. This
information is necessary for each user to recover its desired
sub-files by operating XOR between the received signal and
the appropriate sub-files available in its cache content. Let
us get back to the three-user example in Fig. 1. Upon the
reception of A2 ⊕ B1 , user 1 must identify both its desired
sub-file identity (A2 ) and the combined sub-file available in its
cache (B1 ). Similarly upon the reception of A23 ⊕ B13 ⊕ C12 ,
user 1 must identify its desired sub-file A23 and the combined
sub-files B13 , C12 . In the case of a single request per user,
the base station simply needs to disseminate the vector of
user requests. However, if user requests arrive dynamically and
the delivery phase is run continuously, we associate a header
to identify each sub-file (combined files index and intended
receivers), as we discuss in detail in Section V-C.
At the end of the whole transmission as t → ∞, each
receiver decodes its sequence of requested files by applying
a decoding function ξk to the sequence of the received
signals y tk = (yy k (1), . . . , y k (t)), that of its channel state
hk (1), . . . , h k (t)), its cache Zk . Namely, the output
h tk = (h
of the k-th user’s decoding function at slot t is given by
F D̂k (t)
ξk (t) = ξk (Zk , y tk , h tk ) ∈ F2
(12)
where D̂k (t) is defined to be the number of decoded files by
user k up to slot t. Under the assumption that Tslot is arbitrarily
large, each receiver can successfully decode the sequence of
the encoded symbols and reconstruct its requested files.
III. PROBLEM FORMULATION
After specifying the codeword generation and the transmission scheme over the broadcast channel, this section will
formulate the problem of alpha-fair file delivery.
Now we are ready to define the feasible rate region as the
set of the average number of successfully delivered files for
K users. We let rk denote time average delivery rate of user
k, measured in files per slot. We let Λ denote the set of all
feasible delivery rate vectors.
Definition 1 (Feasible rate). A rate vector r = (r1 , . . . , rK ),
measured in file/slot, is said to be feasible r ∈ Λ if there exist
a file combining and transmission scheme such that
rk = lim inf_{t→∞} Dk (t)/t .    (13)
where Dk (t) denotes the number of successfully delivered files
to user k up to t.
It is worth noticing that as t → ∞ the number of decoded
files D̂k (t) shall coincide with the number of successfully
delivered files Dk (t) under the assumptions discussed previously. In contrast to the original framework [1], [3], our
rate metric measures the ability of the system to continuously
and reliably deliver requested files to the users. Since finding
the optimal policy is very complex in general, we restrict our
study to a specific class of policies given by the following
mild assumptions:
Definition 2 (Admissible class policies Π CC ). The admissible
policies have the following characteristics:
1) The caching placement and delivery follow the decentralized scheme [1].
2) The users request distinct files, i.e. the IDs of the requested files of any two users are different.
Since we restrict our action space, the feasibility rate region,
denoted by ΛCC , under the class of policies Π CC is smaller
than the one for the original problem Λ. However, the joint
design of caching and online delivery appears to be a very hard
problem; note that the design of an optimal code for coded
caching alone is an open problem and the proposed solutions
are constant factor approximations. Restricting the caching
strategy to the decentralized scheme proposed in [1] makes the
problem amenable to analysis and extraction of conclusions
for general cases such as the general setup where users may
not have the symmetrical rates. Additionally, if two users
request the same file simultaneously, it is efficient to handle
exceptionally the transmissions as naive broadcasting instead
of using the decentralized coded caching scheme, yielding a
small efficiency benefit but complicating further the problem.
Note, however, the probability that two users simultaneously
request the same parts of video is very low in practice, hence
to simplify our model we exclude this consideration altogether.
Our objective is to solve the fair file delivery problem:
r∗ = arg max_{r∈ΛCC} Σ_{k=1}^{K} g(rk ),    (14)
k=1
where the utility function corresponds to the alpha fair
family of concave functions obtained by choosing:
g(x) = (d + x)^{1−α} / (1 − α) if α ≠ 1, and g(x) = log(1 + x/d) if α = 1,    (15)
for some arbitrarily small d > 0 (used to extend the domain
of the functions to x = 0). Tuning the value of α changes
the shape of the utility function and consequently drives the
system performance r ∗ to different operating points: (i) α = 0
yields max sum delivery rate, (ii) α → ∞ yields max-min
delivery rate [9], (iii) α = 1 yields proportionally fair delivery
rate [30]. Choosing α ∈ (0, 1) leads to a tradeoff between max
sum and proportionally fair delivery rates.
Fig. 2. Illustration of the feasibility region and different performance
operating points for K = 2 users. Point A corresponds to a naive adaptation
of [3] on our channel model, while the rest points are solutions to our fair
delivery problem.
The optimization (14) is designed to allow us tweak the
performance of the system; we highlight its importance by
an example. Suppose that for a 2-user system Λ is given by
the convex set shown on Fig. 2. Different boundary points
are obtained as solutions to (14). If we choose α = 0,
the system is operated at the point that maximizes the sum
r1 + r2 . The choice α → ∞ leads to the maximum r such
that r1 = r2 = r, while α = 1 maximizes the sum of
logarithms. The operation point A is obtained when we always
broadcast to all users at the weakest user rate and use [3]
for coded caching transmissions. Note that this results in a
significant loss of efficiency due to the variations of the fading
channel, and consequently A lies in the interior of Λ. To reach
the boundary point that corresponds to α → ∞ we need to
carefully group users together with good instantaneous channel
quality but also serve users with poor average channel quality.
This shows the necessity of our approach when using coded
caching in realistic wireless channel conditions.
IV. QUEUED DELIVERY NETWORK
This section presents the queued delivery network and then
the feasible delivery rate region, based on stability analysis of
the queueing model.
A. Queueing Model
At each time slot t, the controller admits ak (t) files to be
delivered to user k, and hence ak (t) is a control variable. We
equip the base station with the following types of queues:
1) User queues to store admitted files, one for each user.
The buffer size of queue k is denoted by Sk (t) and
expressed in number of files.
2) Codeword queues to store codewords to be multicast.
There is one codeword queue for each subset of users
I ⊆ {1, . . . , K}. The size of codeword queue I is denoted
by QI (t) and expressed in bits.
A queueing policy π performs the following operations:
(i) it decides how many files to admit into the user queues
Sk (t) in the form of (ak (t)) variables, (ii) it combines files
destined to different users to create multiple codewords. When
a new codeword is formed in this way, we denote this with
the codeword routing control variable σJ , which denotes the
number of combinations among files from the subset J of users
according to the coded caching scheme in [3], (iii) it decides
the encoding function for the wireless transmission. Below we
explain in detail the queue operations and the queue evolution:
1) Admission control: At the beginning of each slot, the
controller decides how many requests for each user,
ak (t) should be pulled into the system from the infinite
reservoir.
2) Codeword Routing: The admitted files for user k are
stored in queues Sk (t) for k = 1, . . . , K. At each
slot, files from subsets of these queues are combined
into codewords by means of the decentralized coded
caching encoding scheme. Specifically, the decision at
slot t for a subset of users J ⊆ {1, .., K}, denoted
by σJ (t) ∈ {0, 1, . . . , σmax }, refers to the number of
combined requests for this subset of users. 1 The size
of the user queue Sk evolves as:
Sk (t + 1) = [Sk (t) − Σ_{J:k∈J} σJ (t)]^+ + ak (t)    (16)
where Σ_{J:k∈J} σJ (t) is the number of files combined into codewords and ak (t) is the number of admitted files.
If σJ (t) > 0, the server creates codewords by applying
(4) for this subset of users as a function of the cache
contents {Zj : j ∈ J}.
3) Scheduling: The codewords intended to the subset I of
users are stored in codeword queue whose size is given
by QI (t) for I ⊆ {1, . . . , K}. Given the instantaneous
channel realization h (t) and the queue state {QI (t)}, the
server performs multicast scheduling and rate allocation.
Namely, at slot t, it determines the number µI (t) of bits
per channel use to be transmitted for the users in subset
I. By letting bJ,I denote the number of bits generated for
codeword queue I ⊆ J when coded caching is performed
to the users in J, codeword queue I evolves as
QI (t + 1) = [QI (t) − Tslot µI (t)]^+ + Σ_{J:I⊆J} bJ,I σJ (t)    (17)
where Tslot µI (t) is the number of bits multicast to I, the sum is the number of bits created by combining files, and bJ,I = m^{|I|} (1 − m)^{|J|−|I|−1} .
A control policy is fully specified by giving the rules with
which the decisions {a(t), σ(t), µ(t)} are taken at every slot
t. The first step towards this is to characterize the set of
feasible delivery rates, ΛCC , which is the subject of the next
subsection.
B. Feasibility Region
1 It is worth noticing that standard coded caching lets σJ = 1 for J = {1, . . . , K} and zero for all the other subsets. On the other hand, uncoded caching can be represented by σJ = 1 for J = {k}, k ∈ {1, . . . , K}. Our scheme can therefore be seen as a combination of both, which explains its better performance.
The main idea here is to characterize the set of feasible file delivery rates via the stability performance of the queueing system. To this end, let a_k = lim sup_{t→∞} (1/t) Σ_{τ=0}^{t−1} E[ak (τ)] denote the time average number of
admitted files for user k. We use the following definition of
stability:
Definition 3 (Stability). A queue S(t) is said to be (strongly)
stable if
lim sup_{T→∞} (1/T) Σ_{t=0}^{T−1} E[S(t)] < ∞.
A queueing system is said to be stable if all its queues are
stable. Moreover, the stability region of a system is the set of
all vectors of admitted file rates such that the system is stable.
If the queueing system we have introduced is stable the rate
of admitted files (input rate) is equal to the rate of successfully
decoded files (output rate), hence we can characterize the
system performance by means of the stability region of our
queueing system. We let Γ(h) denote the capacity region for
a fixed channel state h, as defined in Theorem 1. Then we
have the following:
Theorem 3 (Stability region). Let Γ CC be the set of admitted-file rate vectors a for which there exist µ ∈ Σ_{h∈H} φh Γ(h) and σI ∈ [0, σmax ], ∀I ⊆ {1, . . . , K}, such that:
Σ_{J:k∈J} σJ ≥ ak ,    ∀k = 1, . . . , K    (18)
Tslot µI ≥ Σ_{J:I⊆J} bJ,I σJ ,    ∀I ⊆ {1, 2, ..., K}.    (19)
Then, the stability region of the system is the interior of Γ CC ,
where the above inequalities are strict.
Constraint (18) says that the aggregate service rate is greater
than the arrival rate, while (19) implies that the long-term
average rate for the subset J is greater than the arrival rate of
the codewords intended to this subset. In terms of the queueing
system defined, these constraints impose that the service rates
of each queue should be greater than their arrival rates, thus
rendering them stable 2 . The proof of this theorem relies on
existence of static policies, i.e. randomized policies whose
decision distribution depends only on the realization of the
channel state. See the Appendix, Section IX-B for a definition
and results on these policies.
Since the channel process h (t) is a sequence of i.i.d.
realizations of the channel states (the same results hold if,
more generally, h (t) is an ergodic Markov chain), we can
obtain any admitted file rate vector a in the stability region by a
a(t), σ(t), µ(t)}
Markovian policy, i.e. a policy that chooses {a
based only the state of the system at the beginning of time
h(t), S (t), Q (t)}, and not the time index itself. This
slot t, {h
S (t), Q (t)) evolves as a Markov chain, therefore
implies that (S
our stability definition is equivalent to that Markov chain
being ergodic with every queue having finite mean under the
stationary distribution. Therefore, if we develop a policy that
keeps user queues S (t) stable, then all admitted files will,
at some point, be combined into codewords. Additionally, if
2 We restrict vectors a to the interior of Γ CC , since arrival rates at the
boundary are exceptional cases of no practical interest, and require special
treatment.
codeword queues Q (t) are stable, then all generated codewords
will be successfully conveyed to their destinations. This in turn
means that all receivers will be able to decode the admitted
files that they requested:
Lemma 4. The region of all feasible delivery rates ΛCC is
the same as the stability region of the system, i.e. ΛCC =
Int(Γ CC ).
Proof. Please refer to Appendix IX-C.
Lemma 4 implies the following Corollary.
Corollary 5. Solving (14) is equivalent to finding a policy π such that
a^π = arg max Σ_{k=1}^{K} gk (ak )    (20)
s.t. the system is stable.
This implies that the solution to the original problem (14) in terms of the long-term average rates is equivalent to the new problem in terms of the admission rates stabilizing the system. The next section provides a set of explicit solutions to this new problem.
V. PROPOSED ONLINE DELIVERY SCHEME
A. Admission Control and Codeword Routing
Our goal is to find a control policy that optimizes (20). To
this aim, we need to introduce one more set of queues. These
queues are virtual, in the sense that they do not hold actual file
demands or bits, but are merely counters to drive the control
policy. Each user k is associated with a queue Uk (t) which
evolves as follows:
Uk (t + 1) = [Uk (t) − ak (t)]^+ + γk (t)    (21)
where γk (t) represents the arrival process to the virtual queue
and is an additional control parameter. We require these queues
to be stable: The actual mean file admission rates are greater
than the virtual arrival rates and the control algorithm actually
seeks to optimize the time average of the virtual arrivals γk (t).
However, since Uk (t) is stable, its service rate, which is the
actual admission rate, will be greater than the rate of the virtual
arrivals, therefore giving the same optimizer. Stability of all
other queues will guarantee that admitted files will be actually
delivered to the users. With these considerations, Uk (t) will
be a control indicator such that when Uk (t) is above Sk (t)
then we admit files into the system else we set ak (t) = 0.
In particular, we will control the way Uk (t) grows over time
using the actual utility objective gk (.) such that a user with
rate x and rapidly increasing utility gk (x) (steep derivative at
x) will also enjoy a rapidly increasing Uk (t) and hence admit
more files into the system.
In our proposed policy, the arrival process to the virtual
queues are given by
γk (t) = arg
max
0≤x≤γk,max
[V gk (x) − Uk (t)x]
(22)
In the above, V > 0 is a parameter that controls the utilitydelay tradeoff achieved by the algorithm (see Theorem 6).
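As a concrete illustration of the admission-control step, the following is a minimal sketch of how the one-dimensional maximization in (22) can be solved numerically. It assumes an alpha-fair utility gk and a plain grid search; the utility choice, the grid resolution, and the numeric values are illustrative assumptions, not part of the scheme itself.

```python
import numpy as np

def alpha_fair_utility(x, alpha=1.0, eps=1e-9):
    """Alpha-fair utility g(x); alpha = 1 gives the proportional-fair log utility."""
    x = np.maximum(x, eps)
    if abs(alpha - 1.0) < 1e-12:
        return np.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)

def virtual_arrival(U_k, V, gamma_max, alpha=1.0, grid=1000):
    """Solve (22): gamma_k(t) = argmax_{0 <= x <= gamma_max} V*g(x) - U_k*x."""
    xs = np.linspace(0.0, gamma_max, grid)
    objective = V * alpha_fair_utility(xs, alpha) - U_k * xs
    return xs[np.argmax(objective)]

# A long virtual queue U_k suppresses the virtual arrival, and vice versa.
print(virtual_arrival(U_k=0.5, V=10.0, gamma_max=5.0))   # ~5.0 (capped at gamma_max)
print(virtual_arrival(U_k=50.0, V=10.0, gamma_max=5.0))  # ~0.2
```

For the log utility this maximization has the closed form min(γk,max, V/Uk(t)), which the grid search reproduces.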
C. Practical Implementation
When user requests arrive dynamically and the delivery
phase is run continuously, it is not clear when and how the
base station shall disseminate the useful side information to
each individual user. This motivates us to consider a practical
solution which associates a header to each sub-file Wi|J for
i = 1, . . . , N and J ⊆ {1, . . . , K} . Namely, any sub-file shall
indicate the following information prior to message symbols:
a) the indices of files; b) the identities of users who cache
(know) the sub-files 3 .
At each slot t, the base station knows the cache contents
of all users Z K , the sequence of the channel state h t , as well
as that of the demand vectors d t . Given this information, the
base station constructs and transmits either a message symbol
or a header at channel use i in slot t as follows.
    xi(t) = f^h_{t,i}(d^t, Z_K)                          if header
            f^m_{t,i}({W_{dk(τ)} : ∀k, τ ≤ t}, h^t)      if message    (26)

where f^h_{t,i}, f^m_{t,i} denote the header function and the message encoding function, respectively, at channel use i in slot t.
Example 2. We conclude this section by providing an example
of our proposed online delivery scheme for K = 3 users as
illustrated in Fig. 3.
We focus on the evolution of codeword queues between two
slots, t and t + 1. The exact backlog of codeword queues is
shown in Table I. Given the routing and scheduling decisions
(σJ (t) and µJ (t)), we provide the new states of the queues at
the next slot in the same Table.
3 We assume here for the sake of simplicity that the overhead due to a header is negligible. This implies in practice that each of the sub-files is arbitrarily large.
TABLE I
CODEWORD QUEUES EVOLUTION FOR µ{1,2}(t) > 0, µ{1,2,3}(t) > 0 AND σ{1,2}(t) = σ{1}(t) = 1.

Queue     | QJ(t)             | Output (µ{1,2}(t) > 0, µ{1,2,3}(t) > 0) | Input (σ{1,2}(t) = σ{1}(t) = 1) | QJ(t + 1)
Q{1}      | A∅                | -                                        | D∅ ; D3 ; {FJ}1∉J               | A∅ ; D∅ ; D3 ; {FJ}1∉J
Q{2}      | B∅                | -                                        | E∅ ; E3                          | B∅ ; E∅ ; E3
Q{3}      | C∅                | -                                        | -                                | C∅
Q{1,2}    | A2 ⊕ B1           | A2 ⊕ B1                                  | E1 ⊕ D2 ; E13 ⊕ D23              | E1 ⊕ D2 ; E13 ⊕ D23
Q{1,3}    | A3 ⊕ C1           | -                                        | -                                | A3 ⊕ C1
Q{2,3}    | B3 ⊕ C2           | -                                        | -                                | B3 ⊕ C2
Q{1,2,3}  | A23 ⊕ B13 ⊕ C12   | A23 ⊕ B13 ⊕ C12                          | -                                | -
Algorithm 1 Proposed delivery scheme
1: PLACEMENT (same as [1]):
2: Fill the cache of each user k: Zk = {Wi|J : J ⊆ {1, . . . , K}, k ∈ J, ∀i = 1, . . . , N}.
3: DELIVERY:
4: for t = 1, . . . , T
5:   Decide the arrival process to the virtual queues: γk(t) = arg max_{0 ≤ x ≤ γk,max} [V gk(x) − Uk(t)x]
6:   Decide the number of admitted files: ak(t) = γk,max 1{Uk(t) ≥ Sk(t)}.
7:   Update the virtual queues: Uk(t + 1) = [Uk(t) − ak(t)]+ + γk(t)
8:   Decide the number of files to be combined: σJ(t) = σmax 1{ Σ_{k∈J} Sk(t) > Σ_{I:I⊆J} bJ,I QI(t)/F² }.
9:   Scheduling decides the instantaneous rate: µ(t) = arg max_{r ∈ Γ(h(t))} Σ_{J⊆{1,...,K}} QJ(t) rJ.
10:  Update user queues and codeword queues:
       Sk(t + 1) = [Sk(t) − Σ_{J:k∈J} σJ(t)]+ + ak(t),
       QI(t + 1) = [QI(t) − Tslot µI(t)]+ + Σ_{J:I⊆J} bJ,I σJ(t).

Fig. 3. An example of the queueing model for a system with 3 users. Dashed lines represent wireless transmissions, solid circles files to be combined and solid arrows codewords generated.
We suppose that h1 (t) > h2 (t) > h3 (t). The scheduler uses
(25) to allocate positive rates to user set {1, 2} and {1, 2, 3}
given by µ{1,2} , µ{1,2,3} and multicasts the superposed signal
x(t) = B∅ + B3 ⊕ C2 . User 3 decodes only B3 ⊕ C2 . User 2
decodes first B3 ⊕ C2 , then subtracts it and decodes B∅ . Note
that the sub-file B∅ is simply a fraction of the file B whereas
the sub-file B3 ⊕ C2 is a linear combination of two fractions
of different files. In order to differentiate between the sub-files, each user uses the data information header contained in
the received signal. In the next slot, the received sub-files are
evacuated from the codeword queues.
For the routing decision, the server decides at slot t to
combine D requested by user 1 with E requested by user 2
and to process F requested by user 1 uncoded. Therefore, we
have σ{1,2} (t) = σ{1} (t) = 1 and σJ (t) = 0 otherwise. Given
this codeword construction, the codeword queues have inputs that change their state in the next slot, as described in Table I.
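To make the control loop of Algorithm 1 concrete, the following is a simplified, self-contained sketch of one delivery slot. Several simplifications are assumptions made only for this illustration and are not part of the scheme: the utility is the log (α = 1) utility, (22) is solved by grid search, the scheduling step (25) is approximated by serving only the single subset with the largest weight QJ at the rate of its worst user (instead of the superposition-coding power allocation of subsection II-B), and bJ,I is taken as 1 for I = J and 0 otherwise.

```python
import numpy as np
from itertools import combinations

K = 3
users = list(range(K))
subsets = [frozenset(c) for r in range(1, K + 1) for c in combinations(users, r)]

V, gamma_max, sigma_max, F, T_slot, P = 10.0, 2.0, 1.0, 1.0, 1.0, 10.0

S = {k: 0.0 for k in users}      # user (file) queues S_k(t)
U = {k: 0.0 for k in users}      # virtual queues U_k(t)
Q = {J: 0.0 for J in subsets}    # codeword queues Q_I(t)

def one_slot(h):
    # Step 5: virtual arrivals gamma_k(t), eq. (22), by grid search
    xs = np.linspace(1e-6, gamma_max, 500)
    gamma = {k: xs[np.argmax(V * np.log(xs) - U[k] * xs)] for k in users}
    # Step 6: admission control, eq. (23)
    a = {k: (gamma_max if U[k] >= S[k] else 0.0) for k in users}
    # Step 7: virtual queue update, eq. (21)
    for k in users:
        U[k] = max(U[k] - a[k], 0.0) + gamma[k]
    # Step 8: codeword routing, eq. (24), with b_{J,I} = 1{I = J}
    sigma = {J: (sigma_max if sum(S[k] for k in J) > Q[J] / F**2 else 0.0)
             for J in subsets}
    # Step 9: scheduling, eq. (25), simplified to a single served subset
    rate = {J: np.log2(1.0 + P * min(h[k] for k in J)) for J in subsets}
    J_star = max(subsets, key=lambda J: Q[J] * rate[J])
    mu = {J: (rate[J] if J == J_star else 0.0) for J in subsets}
    # Step 10: user and codeword queue updates
    for k in users:
        served = sum(sigma[J] for J in subsets if k in J)
        S[k] = max(S[k] - served, 0.0) + a[k]
    for J in subsets:
        Q[J] = max(Q[J] - T_slot * mu[J], 0.0) + sigma[J]

for t in range(1000):
    one_slot(h={k: np.random.exponential(1.0) for k in users})
print("after 1000 slots:",
      "sum S =", round(sum(S.values()), 2),
      "sum Q =", round(sum(Q.values()), 2),
      "sum U =", round(sum(U.values()), 2))
```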
D. Performance Analysis

Here we present the main result of the paper, by proving that our proposed online algorithm achieves near-optimal performance over all policies within the class Π_CC:

Theorem 6. Let r̄πk be the mean time-average delivery rate for user k achieved by the proposed policy. Then

    Σ_{k=1}^{K} gk(r̄πk) ≥ max_{r ∈ ΛCC} Σ_{k=1}^{K} gk(rk) − B/V

    lim sup_{T→∞} (1/T) Σ_{t=0}^{T−1} E{Q̂(t)} ≤ ( B + V Σ_{k=1}^{K} gk(γmax,k) ) / ε0,

where Q̂(t) is the sum of all queue lengths at the beginning of time slot t, thus a measure of the mean delay of file delivery. The quantities B and ε0 are constants that depend on the statistics of the system and are given in the Appendix.
The above theorem states that, by tuning the constant V ,
the utility resulting from our online policy can be arbitrarily
close to the optimal one, where there is a tradeoff between
the guaranteed optimality gap O(1/V ) and the upper bound
on the total buffer length O(V ). We note that these tradeoffs
TABLE II
PARAMETERS

QI(t): codeword queue storing XOR-packets intended for users in I.
Sk(t): user queue storing admitted files for user k.
Uk(t): virtual queue for the admission control.
σJ(t): decision variable of number of combined requests for users J in [0, σmax].
µI(t): decision variable for multicast transmission rate to users I.
ak(t): decision variable of the number of admitted files for user k in [0, γmax].
γk(t): the arrival process to the virtual queue in [0, γmax], given by eq. (22).
Zk: cache content for user k.
Dk(t): number of successfully decoded files by user k up to slot t.
Ak(t): number of (accumulated) requested files by user k up to slot t.
rk: time average delivery rate, equal to lim inf_{t→∞} Dk(t)/t in files/slot.
λk: mean of the arrival process.
bJ,I: length of codeword intended to users I from applying coded caching for users in J.
Tslot: number of channel uses per slot.
Γ(h): the capacity region for a fixed channel state h.
H: the set of all possible channel states.
φh: the probability that the channel state at slot t is h ∈ H.
are in direct analogy to the convergence error vs. step size tradeoff of the subgradient method in convex optimization.
Sketch of proof. For proving the Theorem, we use the Lyapunov function

    L(t) = Σ_{k=1}^{K} (1/2) [Uk²(t) + Sk²(t)] + Σ_{I∈2^K} (1/2) QI²(t)/F²

and specifically the related drift-plus-penalty quantity, defined as

    E{L(t + 1) − L(t) | S(t), Q(t), U(t)} − V E{ Σ_{k=1}^{K} g(γk(t)) | S(t), Q(t), U(t) }.

The proposed algorithm is such that it minimizes (a bound on) this quantity. The main idea is to use this fact in order to compare the evolution of the drift-plus-penalty under our policy and two "static" policies, that is policies that take random actions (admissions, demand combinations and wireless transmissions), drawn from a specific distribution, based only on the channel realizations (and knowledge of the channel statistics). We can prove from Theorem 4 that these policies can attain every feasible delivery rate. The first static policy is one such that it achieves the stability of the system for an arrival rate vector a0 such that a0 + δ ∈ ∂ΛCC. Comparing with our policy, we deduce strong stability of all queues and the bounds on the queue lengths by using a Foster-Lyapunov type of criterion. In order to prove near-optimality, we consider a static policy that admits file requests at rates a* = arg max_a Σ_k gk(ak) and keeps the queues stable in a weaker sense (since the arrival rate is now on the boundary of ΛCC). By comparing the drift-plus-penalty quantities and using telescopic sums and Jensen's inequality on the time average utilities, we obtain the near-optimality of our proposed policy.

The full proof, as well as the expressions for the constants B and ε0, are in Section IX-D of the Appendix (equations (35) and (41)–(42), respectively).
VI. DYNAMIC FILE REQUESTS
In this Section, we extend our algorithm to the case where
there is not an infinite amount of demands for each user; rather, each user requests a finite number of files at slot t. Let Ak(t)
be the number of files requested by user k at the beginning
of slot t. We assume it is an i.i.d. random process with mean
λk and such that Ak (t) ≤ Amax almost surely. 4 In this case,
the alpha fair delivery problem is to find a delivery rate r that
solves
Maximize
K
X
gk (rk )
k=1
s.t. r ∈ ΛCC
rk ≤ λk , ∀k ∈ {1, ..., K},
where the additional constraints rk ≤ λk denote that a user
cannot receive more files than the ones actually requested.
The fact that file demands are not infinite and come as a
stochastic process is dealt with by introducing one ”reservoir
queue” per user, Lk (t), which stores the file demands that have
not been admitted, and an additional control decision on how
many demands to reject permanently from the system, dk (t).
At slot t, no more demands than the ones that arrived at the
beginning of this slot and the ones waiting in the reservoir
queues can be admitted, therefore the admission control must
have the additional constraint
ak (t) ≤ Ak (t) + Lk (t), ∀k, t,
and a similar restriction holds for the number of rejected files
from the system, dk (t). The reservoir queues then evolve as
Lk (t + 1) = Lk (t) + Ak (t) − ak (t) − dk (t).
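As a small illustration of this bookkeeping, the sketch below implements the per-slot reservoir-queue update together with the "reject everything not admitted" variant of the admission rule used later in this section (eq. (27)), under which the reservoir queue stays empty. The function signature and numeric values are illustrative assumptions.

```python
def dynamic_admission_slot(A_k, U_k, S_k, L_k):
    """One slot of admission bookkeeping for user k under dynamic requests."""
    admissible = A_k + L_k                      # a_k(t) <= A_k(t) + L_k(t)
    a_k = admissible if U_k >= S_k else 0.0     # on-off rule, cf. eq. (27)
    d_k = admissible - a_k                      # demands rejected permanently
    L_next = L_k + A_k - a_k - d_k              # reservoir queue update
    return a_k, d_k, L_next

print(dynamic_admission_slot(A_k=3, U_k=5.0, S_k=2.0, L_k=0))  # admit all 3 requests
print(dynamic_admission_slot(A_k=3, U_k=1.0, S_k=2.0, L_k=0))  # reject all 3 requests
```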
The above modification with the reservoir queues only has the effect of further constraining the admission control of
files to the system. The queuing system remains the same
as described in Section V, with the user queues S (t), the
codeword queues Q (t) and the virtual queues U (t). Similar
to the case with infinite demands, we can restrict ourselves to policies that are functions only of the system state at time slot t, {S(t), Q(t), L(t), A(t), h(t), U(t)}, without loss of optimality.
4 The assumptions can be relaxed to arrivals being ergodic Markov chains
with finite second moment under the stationary distribution
Furthermore, we can show that the alpha fair optimization problem is equivalent to the problem of controlling the admission rate. That is, we want to find a policy π such that
    aπ = arg max Σ_{k=1}^{K} gk(ak)
    s.t. the queues (S(t), Q(t), U(t)) are strongly stable
         ak(t) ≤ min[amax,k, Lk(t) + Ak(t)], ∀t ≥ 0, ∀k

The rules for scheduling, codeword generation, virtual queue arrivals and queue updating remain the same as in the case of infinite demands in subsections C and D of Sec. V. The only difference is that there are multiple possibilities for the admission control; see [7] and Chapter 5 of [31] for more details. Here we propose that at each slot t, any demand that is not admitted gets rejected (i.e. the reservoir queues hold no demands); the admission rule is

    aπk(t) = Ak(t) 1{Uk(t) ≥ Sk(t)},    (27)

and the constants are set as γk,max, σmax ≥ Amax. Using the same ideas employed in the performance analysis of the case with infinite demands and the ones employed in [7], we can prove that the O(1/V) − O(V) utility-queue length tradeoff of Theorem 6 holds for the case of dynamic arrivals as well.

VII. NUMERICAL EXAMPLES

In this section, we compare our proposed delivery scheme with two other schemes described below, all building on the decentralized cache placement described in (2) and (3).

• Our proposed scheme: We apply Algorithm 1 for t = 10^5 slots. Using the scheduler (25), we calculate µJ(τ) denoting the rate allocated to a user set J at slot τ ≤ t. As defined in (13), the long-term average rate of user k measured in file/slot is given by

    rk = Tslot lim_{t→∞} (1/t) Σ_{τ=1}^{t} Σ_{J:k∈J} µJ(τ) / ((1 − m)F).    (28)

Notice that the numerator corresponds to the average number of useful bits received over a slot by user k and the denominator (1 − m)F corresponds to the number of bits necessary to recover one file.

• Unicast opportunistic scheduling: For any request, the server sends the remaining (1 − m)F bits to the corresponding user without combining any files. Here we only exploit the local caching gain. In each slot the transmitter sends with full power to the following user

    k*(t) = arg max_k  log(1 + hk(t)P) / Tk(t)^α,

where Tk(t) = Σ_{1≤τ≤t−1} µk(τ) / (t − 1) is the empirical average rate for user k up to slot t. The resulting long-term average rate of user k measured in file/slot is given by

    rk = Tslot lim_{t→∞} (1/t) Σ_{τ=1}^{t} log(1 + P hk(τ)) 1{k = k*(τ)} / ((1 − m)F).    (29)

• Standard coded caching: We use decentralized coded caching among all K users. For the delivery, non-opportunistic TDMA transmission is used. The server sends a sequence of codewords {VJ} at the worst transmission rate. The number of packets to be multicast in order to satisfy one demand for each user is given by [1]

    Ttot(K, m) = (1 − m)(1 − (1 − m)^K) / m.    (30)

Thus the average delivery rate (in files per slot) is symmetric, and given as the following

    rk = Tslot / (Ttot(K, m)F)  E{ log(1 + P min_{i∈{1,...,K}} hi) }.    (31)

We consider a system with normalized memory of m = 0.6, power constraint P = 10 dB, file size F = 10³ bits and number of channel uses per slot Tslot = 10². The channel coefficient hk(t) follows an exponential distribution with mean βk.

We compare the three algorithms for the cases where the objective of the system is sum rate maximization (α = 0) and proportional fairness (α = 1) in different scenarios. The results depicted in Fig. 4 consider a deterministic channel with two classes of users of K/2 each: strong users with βk = 1 and weak users with βk = 0.2. For Fig. 5, we consider a symmetric block fading channel with βk = 1 for all users. Finally, for Fig. 6, we consider a system with block fading channel and two classes of users: K/2 strong users with βk = 1 and K/2 weak users with βk = 0.2.

It is notable that our proposed scheme outperforms the unicast opportunistic scheme, which maximizes the sum rate if only private information packets are to be conveyed, and standard coded caching, which transmits multicast packets at the worst user channel quality.

In Fig. 4, for the deterministic channel scenario with α = 0, the unicast opportunistic scheme serves only the K/2 strong users in TDMA at a constant rate equal to Tslot log(1 + P) / ((1 − m)F) = 0.865 in file/slot. For the standard coded caching, the sum rate increases linearly with the number of users. This is because the multicast rate is constant over the deterministic channel and the behavior of Ttot(K, m) is almost constant with m = 0.6, which makes the per-user rate almost constant.

For the symmetric fading channel in Fig. 5, the performance of unicast opportunistic and that of standard coded caching schemes are limited due to the lack of global caching gain and vanishing multicast rate, respectively.

Finally, for the case of both fading and differences in the mean SNRs, we can see from Fig. 6 that, again, our proposed scheme outperforms the unicast opportunistic scheduling and standard coded caching both in terms of sum rate and in terms of proportional fair utility.

In all scenarios, the relative merit of our scheme increases as the number of users grows. This can be attributed to the fact that our scheme can exploit any available multicast opportunities. Our result here implies that, in realistic wireless systems, coded caching can indeed provide a significant throughput increase when an appropriate joint design of routing and opportunistic transmission is used.
Fig. 4. Deterministic channel with different SNR. (a) Rate (α = 0) vs K. (b) Proportional fair utility (α = 1) vs K.
Fig. 5. Symmetric fading channel. (a) Rate (α = 0) vs K. (b) Proportional fair utility (α = 1) vs K.
Fig. 6. Fading channels with two groups of users, each with different average SNR. (a) Rate (α = 0) vs K. (b) Proportional fair utility (α = 1) vs K.
Regarding the proportional fair objective, we can see that the average sum utility increases with the system dimension for all three schemes, although our proposed scheme provides a gain compared to the other two.
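The two baselines of this section are simple enough to reproduce numerically. The sketch below simulates the unicast opportunistic (alpha-fair) scheduler and evaluates the standard coded caching rate of (30)–(31) by Monte Carlo, under stated assumptions (K = 4 users, exponentially distributed channel gains with means βk, the parameters m, P, F and Tslot of this section, and base-2 logarithms, which reproduce the 0.865 file/slot figure quoted above); the proposed scheme itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, F, T_slot, alpha = 4, 0.6, 1e3, 1e2, 1.0
P = 10.0 ** (10.0 / 10.0)                       # 10 dB
beta = np.array([1.0, 1.0, 0.2, 0.2])           # two user classes
T = 20000

# Unicast opportunistic scheduling: k*(t) = argmax_k log(1 + h_k P) / T_k(t)^alpha
bits = np.zeros(K)
T_avg = np.full(K, 1e-6)                        # empirical average rates T_k(t)
for t in range(1, T + 1):
    h = rng.exponential(beta)
    r = np.log2(1.0 + P * h)
    k_star = int(np.argmax(r / T_avg ** alpha))
    bits[k_star] += T_slot * r[k_star]
    T_avg = T_avg * (t - 1) / t                 # running-mean update of T_k
    T_avg[k_star] += r[k_star] / t
rate_unicast = bits / T / ((1.0 - m) * F)       # files/slot per user, cf. (29)

# Standard coded caching at the worst-user rate, cf. (30)-(31)
T_tot = (1.0 - m) * (1.0 - (1.0 - m) ** K) / m
h_min = rng.exponential(beta, size=(T, K)).min(axis=1)
rate_cc = T_slot / (T_tot * F) * np.mean(np.log2(1.0 + P * h_min))

print("unicast opportunistic, files/slot per user:", np.round(rate_unicast, 3))
print("standard coded caching, files/slot per user:", round(rate_cc, 3))
```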
VIII. CONCLUSIONS
In this paper, we studied coded caching over wireless fading
channels in order to address its limitation governed by the
user with the worst fading state. By formulating an alpha-fair
optimization problem with respect to the long-term average
delivery rates, we proposed a novel queueing structure that allowed us to obtain an optimal algorithm for joint file admission
control, codeword construction and wireless transmissions.
The main conclusion is that, by appropriately combining the
multicast opportunities and the opportunism due to channel
fading, coded caching can lead to significant gains in wireless
systems with fading. Low-complexity algorithms which retain the benefits of our approach, as well as a delay-constrained delivery scheme, are left as interesting topics of future investigation.
IX. APPENDIX: PROOFS
A. Proof of Theorem 1
Let MJ be the message for all the users in J ⊆ [K] and of size 2^{nRJ}. We first show the converse. It follows that the set of 2^K − 1 independent messages {MJ : J ⊆ [K], J ≠ ∅} can be partitioned as

    ∪_{k=1}^{K} {MJ : k ∈ J ⊆ [k]}.    (32)

We can now define K independent mega-messages M̃k := {MJ : k ∈ J ⊆ [k]} with rate R̃k := Σ_{J: k∈J⊆[k]} RJ. Note that each mega-message k must be decoded at least by user k reliably. Thus, the K-tuple (R̃1, . . . , R̃K) must lie inside the private-message capacity region of the K-user BC. Since it is a degraded BC, the capacity region is known [5], and we have

    R̃k ≤ log( (1 + hk Σ_{j=1}^{k} pj) / (1 + hk Σ_{j=1}^{k−1} pj) ),  k = 2, . . . , K,    (33)

for some pj ≥ 0 such that Σ_{j=1}^{K} pj ≤ P. This establishes the converse.

To show the achievability, it is enough to use rate-splitting. Specifically, the transmitter first assembles the original messages into K mega-messages, and then applies the standard K-level superposition coding [5], putting the (k − 1)-th signal on top of the k-th signal. The k-th signal has average power pk, k ∈ [K]. At the receivers' side, if the rates of the mega-messages are inside the private-message capacity region of the K-user BC, i.e., the K-tuple (R̃1, . . . , R̃K) satisfies (33), then each user k can decode the mega-message k. Since the channel is degraded, the users 1 to k − 1 can also decode the mega-message k and extract their own messages. Specifically, each user j can obtain MJ (if J ∋ j) from the mega-message M̃k when k ∈ J ⊆ [k]. This completes the achievability proof.
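A quick numerical check of the rate expression (33) can be done as follows. The channel gains, the power split, and the use of natural-log units are illustrative assumptions; users are indexed in decreasing channel-gain order, as in the example of Section V.

```python
import numpy as np

def mega_message_rates(h, p):
    """R~_k = log((1 + h_k * sum_{j<=k} p_j) / (1 + h_k * sum_{j<=k-1} p_j)),
    evaluated for all k, for a degraded BC with h_1 >= h_2 >= ... >= h_K."""
    h, p = np.asarray(h, float), np.asarray(p, float)
    csum = np.concatenate(([0.0], np.cumsum(p)))
    return np.log((1.0 + h * csum[1:]) / (1.0 + h * csum[:-1]))

h = [2.0, 1.0, 0.5]          # user 1 strongest
p = [0.2, 0.3, 0.5]          # power split with sum(p) <= P = 1
print(np.round(mega_message_rates(h, p), 4))
```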
B. Static policies
An important concept for characterizing the feasibility region and proving optimality of our proposed policy is the one
we will refer to here as ”static policies”. The concept is that
decisions taken according to these policies depend only on
the channel state realization (i.e. the uncontrollable part of the
system) as per the following definition:
Definition 4 (Static Policy). Any policy that selects the control variables {a(t), σ(t), µ(t)} according to a probability distribution that depends only on the channel state h(t) will be called a static policy.
It is clear from the definition that all static policies belong to
the set of admissible policies for our setting. An important case is where admission control a(t) and codeword routing σ(t) are decided at random and independently of everything else, and transmissions µ(t) are decided by a distribution that depends only on the channel state realization of the slot. It
can be shown using standard arguments in stochastic network
optimization (see for example [6], [7], [26], [31]) that the
optimal long term file delivery vector and any file delivery
vector in the stability region of the queueing system can be
achieved by such static policies, as formalized by the following
Lemmas:
Lemma 7 (Static Optimal Policy). Define a policy π* ∈ ΠCC that in each slot where the channel states are h works as follows: (i) it pulls random user demands with mean ā*k, and it gives the virtual queues arrivals with mean γ̄k = a*k as well, (ii) the number of combinations for subset J is a random variable with mean σ̄*J and uniformly bounded by σmax, (iii) it selects one out of K + 1 suitably defined rate vectors µl ∈ Γ(h), l = 1, .., K + 1 with probability ψl,h. The parameters above are selected such that they solve the following problem:

    max_a  Σ_{k=1}^{K} gk(a*k)
    s.t.   Σ_{J:k∈J} σ̄*J ≥ a*k, ∀k ∈ {1, .., K}
           Σ_{J:I⊆J} bJ,I σ̄*J ≤ Tslot Σ_h φh Σ_{l=1}^{K+1} ψl,h µlI(h), ∀I ⊆ {1, 2, ..., K}

Then, π* results in the optimal delivery rate vector (when all possible policies are restricted to set ΠCC).
Lemma 8 (Static Policy for the δ-interior of ΓCC). Define a policy πδ ∈ ΠCC that in each slot where the channel states are h works as follows: (i) it pulls random user demands with mean aδk such that (a + δ) ∈ ΓCC, and gives the virtual queues random arrivals with mean γ̄k ≤ ak + ε0 for some ε0 > 0, (ii) the number of combinations for subset J is a random variable with mean σ̄δJ and uniformly bounded by σmax, (iii) it selects one out of K + 1 suitably defined rate vectors µl ∈ Γ(h), l = 1, .., K + 1 with probability ψδl,h. The parameters above are selected such that:

    Σ_{J:k∈J} σ̄δJ ≥ ε + aδk, ∀k ∈ {1, .., K}
    Σ_{J:I⊆J} bJ,I σ̄δJ + ε ≤ Tslot Σ_h φh Σ_{l=1}^{K+1} ψδl,h µlI(h), ∀I ∈ 2^K

for some appropriate ε < δ. Then, the system under πδ has mean incoming rates of aδ and is strongly stable.
C. Proof of Lemma 4
We prove the Lemma in two parts: (i) first we prove that Int(ΓCC) ⊆ ΛCC and (ii) then that Γ^c_CC ⊆ Λ^c_CC.
For the first part, we show that if a ∈ Int(ΓCC) then also a ∈ ΛCC, that is, the long term file delivery rate vector observed by the users as per (13) is r = a. Denote Ak(t)
the number of files that have been admitted to the system
for user k up to slot t. Also, note that due to our restriction
on the class of policies Π CC and our assumption about long
enough blocklengths, there are no errors in decoding the files,
therefore the number of files correctly decoded for user k till
slot t is Dk (t) . From Lemma 8 it follows that there exists
a static policy π RAN D , the probabilities of which depending
only on the channel state realization at each slot, for which the
system is strongly stable. Since the channels are i.i.d. random
with a finite state space and queues are measured in files and
bits, the system now evolves as a discrete time Markov chain
(S(t), Q(t), H(t)), which can be checked to be aperiodic,
irreducible and with a single communicating class. In that case,
strong stability means that the Markov chain is ergodic with
finite mean.
Further, this means that the system reaches the set of
states where all queues are zero infinitely often. Let T [n] be
the number of timeslots between the n−th and (n + 1)−th
visit to this set (we make the convention that T [0] is the time
slot that this state is reached for the first time). In addition,
let Ãk [n], D̃k [n] be the number of demands that arrived and
were delivered in this frame, respectively. Then, since within
this frame the queues start and end empty, we have
Ãk [n] = D̃k [n], ∀n, ∀k.
In addition, since the Markov chain is ergodic,

    ak = lim_{t→∞} Ak(t)/t = lim_{N→∞} Σ_{n=0}^{N} Ãk[n] / Σ_{n=0}^{N} T[n]

and

    rk = lim_{t→∞} Dk(t)/t = lim_{N→∞} Σ_{n=0}^{N} D̃k[n] / Σ_{n=0}^{N} T[n].

Combining the three expressions, r = a, and thus the result follows.
We now proceed to show the second part, that is, given any arrival rate vector a that is not in the stability region of the queueing system, we cannot have a long term file delivery rate vector r = a. Indeed, since a ∉ ΓCC, for any possible σ satisfying (18), for every µ ∈ Σ_{h∈H} φh Γ(h) there will be some subset(s) of users for which the corresponding inequality (19) is violated. Since codeword generation decisions are assumed to be irrevocable and Σ_{h∈H} φh Γ(h) is the capacity region of the wireless channel, the above implies that there is not enough wireless capacity to satisfy a long term file delivery rate vector of a. Therefore, a ∉ ΛCC, finishing the proof.⁵
D. Proof of Theorem 6
We first look at static policies, which take random decisions
based only on the channel realizations. We focus on two
such policies: (i) one that achieves the optimal utility, as
described in Lemma 7 and (ii) one that achieves (i.e. admits
and stabilizes the system for that) a rate vector in the δ−
interior of ΛCC (for any δ > 0), as described in Lemma 8.
Then, we show that our proposed policy minimizes a bound
on the drift of the quadratic Lyapunov function and compare
with the two aforementioned policies: Comparison with the
second policy proves strong stability of the system under our
proposed policy, while comparison with the first one proves
almost optimality.
From Lemma 4 and Corollary 5, it suffices to prove that
under the online policy the queues are strongly stable and the
resulting time average admission rates maximize the desired
utility function subject to minimum rate constraints.
The proof of the performance of our proposed policy is
based on applying Lyapunov optimization theory [26] with
the following as Lyapunov function (where we have defined
Z(t) = (S(t), Q(t), U(t)) to shorten the notation)
    L(Z) = L(S, Q, U) = (1/2) [ Σ_{k=1}^{K} (Uk²(t) + Sk²(t)) + Σ_{I∈2^K} QI²(t)/F² ].
We then define the drift of the aforementioned Lyapunov
function as
∆L(Z) = E {L(Z(t + 1)) − L(Z(t))|Z(t) = Z} ,
where the expectation is over the channel distribution and
possible randomizations of the control policy. Using the queue
evolution equations (16), (17), (21) and the fact that ([x]+ )2 ≤
x2 , we have
    ∆L(Z(t)) ≤ B + Σ_{I∈2^K} (QI(t)/F²) E{ Σ_{J:I⊆J} bI,J σJ(t) − Tslot µI(t) | Z(t) }
               + Σ_{k=1}^{K} Sk(t) E{ ak(t) − Σ_{I:k∈I} σI(t) | Z(t) }
               + Σ_{k=1}^{K} Uk(t) E{ γk(t) − ak(t) | Z(t) },    (34)
5 We would also need to check the boundary of Γ CC . Note, however, that
by similar arguments we can show that for each vector on ∂Γ CC we need to
achieve a rate vector on the boundary of the capacity region of the wireless
channel. Since, as mentioned in the main text, we do not consider boundaries
in this work, we can discard these points.
where

    B = Σ_{k=1}^{K} (1/2) [ γ²k,max + ( Σ_{I:k∈I} σmax )² ] + (1/(2F²)) Σ_{I∈2^K} ( Σ_{J:I⊆J} σmax bI,J )²
        + (T²slot/(2F²)) Σ_{I∈2^K} Σ_{k∈I} E{ (log(1 + P hk(t)))² }.    (35)

Note that B is a finite constant that depends only on the parameters of the system. Adding the quantity −V Σ_{k=1}^{K} E{gk(γk(t)) | Z(t)} to both sides of (34) and rearranging the right hand side, we have the drift-plus-penalty expression

    ∆L(Z(t)) − V Σ_{k=1}^{K} E{gk(γk(t)) | Z(t)} ≤ B + Σ_{k=1}^{K} E{ −V gk(γk(t)) + γk(t)Uk(t) | Z(t) }
        + Σ_{J∈2^K} E{σJ(t) | Z(t)} [ Σ_{I:I⊆J} bI,J QI(t)/F² − Σ_{k:k∈J} Sk(t) ]
        + Σ_{k=1}^{K} (Sk(t) − Uk(t)) E{ak(t) | Z(t)}
        − Σ_{J∈2^K} (QJ(t)/F²) Tslot E{µJ(t) | Z(t)}    (36)

Now observe that the proposed scheme π minimizes the right hand side of (36) given any channel state h(t) (and hence in expectation over the channel state distributions). Therefore, for all vectors a ∈ [1, γmax]^K, γ ∈ [1, γmax]^K, σ ∈ Conv({0, .., σmax}^M), µ ∈ Σ_{h∈H} φh Γ(h) that denote time averages of the control variables achievable by any static (i.e. depending only on the channel state realizations) randomized policies, it holds that

    ∆Lπ(Z(t)) − V Σ_{k=1}^{K} E{gk(γπk(t))} ≤ B − V Σ_{k=1}^{K} gk(γ̄k) + Σ_{k=1}^{K} Sk(t) ( āk − Σ_{J:k∈J} σ̄J )
        + Σ_{k=1}^{K} Uk(t) (γ̄k − āk) + Σ_{J∈2^K} (QJ(t)/F²) ( Σ_{I:J⊆I} bJ,I σ̄I − Tslot µ̄J )    (37)

We will use (37) to compare our policy with the specific static policies defined in Lemmas 7, 8.

Proof of strong stability: Replacing the time averages we get from the static stabilizing policy πδ of Lemma 8 for some δ > 0, we get that there exist ε, ε0 > 0 such that (the superscript π denotes the quantities under our proposed policy)

    ∆Lπ(Z(t)) ≤ B + V Σ_{k=1}^{K} E{gk(aπk(t))} − V Σ_{k=1}^{K} gk(aδk)
        − ε [ Σ_{k=1}^{K} Sk(t) + Σ_{J∈2^K} QJ(t)/F² ] − ε0 Σ_{k=1}^{K} Uk(t)    (38)

Since ak(t) ≤ γmax,k ∀t, it follows that gk(aπk) < gk(γmax,k). In addition, gk(x) ≥ 0, ∀x ≥ 0, therefore

    ∆Lπ(Z(t)) ≤ B + V Σ_{k=1}^{K} E{gk(γmax,k)}
        − ε [ Σ_{k=1}^{K} Sk(t) + Σ_{J∈2^K} QJ(t)/F² ] − ε0 Σ_{k=1}^{K} Uk(t)    (39)

Using the Foster-Lyapunov criterion, the above inequality implies that the system Z(t) = (S(t), Q(t), U(t)) under our proposed policy π has a unique stationary probability distribution, under which the mean queue lengths are finite 6. Moreover,

    lim sup_{T→∞} (1/T) Σ_{t=0}^{T−1} E{ Σ_{J∈2^K} QJ(t)/F² + Σ_{k=1}^{K} (Sk(t) + Uk(t)) }
        ≤ ( B + V Σ_{k=1}^{K} gk(γmax,k) ) / ε.    (40)

6 For the utility-related virtual queues, note that if g'k(0) < ∞, then Uk(t) < V g'k(0) + γk,max, i.e. their length is deterministically bounded.

Therefore the queues are strongly stable under our proposed policy. In order to prove the part of Theorem 6 regarding the guaranteed bound on the average queue lengths, we first note that the above inequality holds for every ε > 0 and define ε0 as

    ε0 = argmax_{ε>0} ε    (41)
    s.t. ε1 ∈ ΛCC.    (42)

Following the same arguments as in Section IV of [7], we can show that the Right Hand Side of (40) is bounded from below by

    ( B + V Σ_{k=1}^{K} gk(γmax,k) ) / ε0,

therefore proving the requested bound on the long-term average queue lengths.

We now proceed to proving the near-optimality of our proposed policy.

Proof of near optimal utility: Here we compare π with the static optimal policy π* from Lemma 7. Since π* takes decisions irrespectively of the queue lengths, we can replace
quantities a, σ, µ on (37) with the time averages corresponding to π*, i.e. a*, σ*, µ*. From the inequalities in Lemma 7 we have

    V Σ_{k=1}^{K} E{gk(γπk(t))} ≥ V Σ_{k=1}^{K} gk(a*k) − B + ∆Lπ(Z(t))

Taking expectations over Z(t) for both sides and summing the inequalities for t = 0, 1, .., T − 1 and dividing by V T we get

    (1/T) Σ_{t=1}^{T−1} Σ_{k=1}^{K} E{gk(γπk(t))} ≥ Σ_{k=1}^{K} gk(a*k) − B/V − E{Lπ(Z(0))}/(V T) + E{Lπ(Z(T))}/(V T)

Assuming E{Lπ(Z(0))} < ∞ (this assumption is standard in this line of work, for example it holds if the system starts empty), since E{Lπ(Z(T))} > 0, ∀T > 0, taking the limit as T goes to infinity gives

    lim_{T→∞} (1/T) Σ_{t=1}^{T−1} Σ_{k=1}^{K} E{gk(γπk(t))} ≥ Σ_{k=1}^{K} gk(a*k) − B/V

In addition, since gk(x) are concave, Jensen's inequality implies

    Σ_{k=1}^{K} gk(γ̄πk) = Σ_{k=1}^{K} gk( lim_{T→∞} (1/T) Σ_{t=0}^{T} E{γπk(t)} )
        ≥ lim_{T→∞} (1/T) Σ_{t=1}^{T−1} Σ_{k=1}^{K} E{gk(γπk(t))} ≥ Σ_{k=1}^{K} gk(a*k) − B/V.

Finally, since the virtual queues Uk(t) are strongly stable, it holds that āπk > γ̄πk. We then have

    Σ_{k=1}^{K} gk(āπk) > Σ_{k=1}^{K} gk(γ̄πk) ≥ Σ_{k=1}^{K} gk(a*k) − B/V,
which proves the near optimality of our proposed policy π.
REFERENCES
[1] M. A. Maddah-Ali and U. Niesen, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Transactions on
Networking, vol. 23, no. 4, pp. 1029–1040, Aug. 2015.
[2] CISCO, “Visual networking index: Forecast and methodology 20152020,” white paper, Jun. 2016.
[3] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,”
IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2856–
2867, Mar. 2014.
[4] G. Paschos, E. Baştuğ, I. Land, G. Caire, and M. Debbah, “Wireless
caching: Technical misconceptions and business barriers,” IEEE Communications Magazine, vol. 54, no. 8, pp. 16–22, Aug. 2016.
[5] A. El Gamal and Y.-H. Kim, Network information theory. Cambridge
University Press, 2011.
[6] A. L. Stolyar, “On the asymptotic optimality of the gradient scheduling
algorithm for multiuser throughput allocation,” Operations research,
vol. 53, no. 1, pp. 12–25, Jan. 2005.
[7] M. J. Neely, E. Modiano, and C.-P. Li, “Fairness and optimal stochastic
control for heterogeneous networks,” IEEE/ACM Transactions on Networking, vol. 16, no. 2, pp. 396–409, Apr. 2008.
[8] R. Knopp and P. A. Humblet, “Information capacity and power control
in single-cell multiuser communications,” in IEEE International Conference on Communications (ICC ’96), Seattle, WA, USA, 1995, pp.
331–335.
[9] J. Mo and J. Walrand, “Fair end-to-end window-based congestion
control,” IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 556–
567, Oct. 2000.
[10] H. Shirani-Mehr, G. Caire, and M. J. Neely, “Mimo downlink scheduling with non-perfect channel state knowledge,” IEEE Transactions on
Communications, vol. 58, no. 7, pp. 2055–2066, Jul. 2010.
[11] G. Caire, R. R. Muller, and R. Knopp, “Hard fairness versus proportional fairness in wireless communications: The single-cell case,” IEEE
Transactions on Information Theory, vol. 53, no. 4, pp. 1366–1385, Apr.
2007.
[12] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, “Order-optimal rate of
caching and coded multicasting with random demands,” IEEE Transactions on Information Theory, vol. 63, no. 6, pp. 3923–3949, Jun. 2017.
[13] U. Niesen and M. A. Maddah-Ali, “Coded caching with nonuniform
demands,” IEEE Transactions on Information Theory, vol. 63, no. 2,
pp. 1146–1158, Feb. 2017.
[14] M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of distributed
caching in d2d wireless networks,” in 2013 IEEE Information Theory
Workshop (ITW), Sevilla, Spain, 2013, pp. 1–5.
[15] N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. N. Diggavi,
“Hierarchical coded caching,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3212–3229, Jun. 2016.
[16] S. P. Shariatpanahi, S. A. Motahari, and B. H. Khalaj, “Multi-server
coded caching,” IEEE Transactions on Information Theory, vol. 62,
no. 12, pp. 7253–7271, Dec. 2016.
[17] W. Huang, S. Wang, L. Ding, F. Yang, and W. Zhang, “The performance
analysis of coded cache in wireless fading channel,” arXiv preprint
arXiv:1504.01452, 2015.
[18] J. Zhang and P. Elia, “Wireless coded caching: A topological perspective,” in 2018 IEEE International Symposium on Information Theory
(ISIT), Aachen, Germany, 2017, pp. 401–405.
[19] ——, “Fundamental limits of cache-aided wireless BC: Interplay of
coded-caching and CSIT feedback,” IEEE Transactions on Information
Theory, vol. 63, no. 5, pp. 3142–3160, May 2017.
[20] S. S. Bidokhti, M. Wigger, and R. Timo, “Noisy broadcast networks
with receiver caching,” arXiv preprint arXiv:1605.02317, 2016.
[21] K.-H. Ngo, S. Yang, and M. Kobayashi, “Cache-aided content delivery
in mimo channels,” in 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 2016.
[22] S. P. Shariatpanahi, G. Caire, and B. H. Khalaj, “Multi-antenna coded
caching,” arXiv preprint arXiv:1701.02979, 2017.
[23] R. Combes, A. Ghorbel, M. Kobayashi, and S. Yang, “Utility optimal
scheduling for coded caching in general topologies,” arXiv preprint
arXiv:1801.02594, 2018.
[24] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, “Online coded
caching,” IEEE/ACM Transactions on Networking, vol. 24, no. 2, pp.
836–845, Apr. 2016.
[25] U. Niesen and M. A. Maddah-Ali, “Coded caching for delay-sensitive
content,” in 2015 IEEE International Conference on Communications
(ICC), London, UK, 2015, pp. 5559–5564.
[26] M. J. Neely, “Stochastic network optimization with application to
communication and queueing systems,” Synthesis Lectures on Communication Networks, vol. 3, no. 1, pp. 1–211, 2010.
[27] K. Seong, R. Narasimhan, and J. M. Cioffi, “Queue proportional
scheduling via geometric programming in fading broadcast channels,”
IEEE Journal on Selected Areas in Communications, vol. 24, no. 8, pp.
1593–1602, Aug. 2006.
[28] A. Eryilmaz, R. Srikant, and J. R. Perkins, “Throughput-optimal scheduling for broadcast channels,” in Proc. of SPIE, vol. 4531, 2001, pp. 70–78.
[29] D. N. Tse, “Optimal power allocation over parallel gaussian broadcast
channels,” 2001.
[30] F. Kelly, “Charging and rate control for elastic traffic,” European
transactions on Telecommunications, vol. 8, no. 1, pp. 33–37, Jan. 1997.
[31] L. Georgiadis, M. J. Neely, L. Tassiulas et al., “Resource allocation and
cross-layer control in wireless networks,” Foundations and Trends in
Networking, vol. 1, no. 1, pp. 1–144, 2006.
| 7 |
arXiv:1801.04369v1 [math.ST] 13 Jan 2018
Is profile likelihood a true likelihood? An
argument in favor
O.J. Maclaren
Department of Engineering Science, University of Auckland
January 16, 2018
Abstract
Profile likelihood is the key tool for dealing with nuisance parameters in likelihood
theory. It is often asserted, however, that profile likelihood is not a true likelihood.
One implication is that likelihood theory lacks the generality of e.g. Bayesian inference, wherein marginalization is the universal tool for dealing with nuisance parameters. Here we argue that profile likelihood has as much claim to being a true likelihood
as a marginal probability has to being a true probability distribution. The crucial
point we argue is that a likelihood function is naturally interpreted as a maxitive
possibility measure: given this, the associated theory of integration with respect to
maxitive measures delivers profile likelihood as the direct analogue of marginal probability in additive measure theory. Thus, given a background likelihood function, we
argue that profiling over the likelihood function is as natural (or as unnatural, as the
case may be) as marginalizing over a background probability measure.
Keywords: Estimation; Inference; Profile Likelihood; Marginalization; Nuisance Parameters; Idempotent Integration; Maxitive Measure Theory
1
1 Introduction
Biostatistics (Aitkin 2005):
The profile likelihood is not a likelihood, but a likelihood maximized over nuisance parameters given the values of the parameters of interest.
Numerous similar assertions that profile likelihood is not a ‘true’ likelihood may be found
throughout the literature and various textbooks, and is apparently the accepted viewpoint
of the statistical community. Importantly, this includes the ‘pure’ likelihood literature,
which generally accepts a lack of systematic methods for dealing with nuisance parameters,
while still recommending profile likelihood as the most general, albeit ‘ad-hoc’, solution (see
e.g. Royall 1997, Rohde 2014, Edwards 1992, Pawitan 2001). Similarly, recent monographs
on characterizing statistical evidence presents favorable opinions of the likelihood approach
but criticize the lack of general methods for dealing with nuisance parameters (Aitkin 2010,
Evans 2015). The various justifications given, however, appear to the present author to
rather vague and unconvincing. For example, suppose we modified the above quotation to
refer to marginal probability instead of profile likelihood:
A marginal probability is not a probability, but a probability distribution integrated over nuisance variables given the values of the variables of interest.
The above would be a perfectly fine characterization of a marginal probability if the
“not a probability, but” part was dropped, i.e.
A marginal probability is a probability distribution integrated over nuisance
variables given the values of the variables of interest.
Simply put: the fact that a marginal probability is obtained by integrating over a
‘background’ probability distribution does not prevent the marginal probability from being
a true probability. The crucial observation in the case of marginal probability is that
integration over variables takes probability distributions to probability distributions.
The purpose of the present article is to point out that there is an appropriate notion
of integration over variables that takes likelihood functions to likelihood functions via maximization. This notion of integration is based on the idea of idempotent analysis, wherein
one replaces a standard algebraic operation such as addition in a given mathematical theory
with another basic algebraic operation, defining a form of ‘idempotent addition’, to obtain
a new analogous, self-consistent theory (Maslov 1992, Kolokoltsov & Maslov 1997). In this
case one simply replaces the usual ‘addition’ operations, including the usual (Lebesgue)
integration, with ‘maximization’ operations, including taking supremums, to obtain a new,
‘idempotent probability theory’. Maximization in this context is understood algebraically
as an idempotent addition operation, hence the terminology. While perhaps somewhat exotic at first sight, this idea finds direct applications in e.g. large deviation theory (Puhalskii
2001) and, most relevantly, possibility theory, fuzzy set theory and pure-likelihood-based
decision theory (Dubois et al. 1997, Cattaneo 2013, 2017).
2 Likelihood as a possibility measure
Though apparently not well known in the statistical literature, likelihood theory is known
in the wider literature on uncertainty quantification to have a natural correspondence to
possibility theory rather than to probability theory (Dubois et al. 1997, Cattaneo 2013,
2017). This has perhaps been obscured by the usefulness of likelihood methods as tools in
probabilistic statistical inference. It is not our intention to review this wider literature in
detail here (see e.g. Dubois et al. 1997, Cattaneo 2013, 2017, Augustin et al. 2014, Halpern
2017, for more), but to simply point out the implications of this correspondence. In particular, likelihood theory interpreted as a possibilistic, rather than probabilistic theory can
be summarized as:
Probability theory with addition replaced by maximization.
As indicated above, this is sometimes known as, for example, ‘idempotent measure theory’, ‘maxitive measure theory’ or ‘possibility’ theory, among other names (see e.g.
Dubois et al. 1997, Cattaneo 2013, 2017, Augustin et al. 2014, Halpern 2017, Maslov 1992,
Kolokoltsov & Maslov 1997, Puhalskii 2001, for more). This correspondence perhaps explains the preponderance of maximization methods in likelihood theory, including the methods of maximum likelihood and profile likelihood.
The most important consequence of this perspective is that the usual Lebesgue integration with respect to an additive measure, as in probability theory, becomes, in likelihood/possibility theory, a different type of integration, defined with respect to a maxitive
measure. Again, the key point is simply that addition operations (including summation and
integration) are replaced by maximization operations (or taking supremums in general).
For completeness, we contrast the key axioms of possibility theory with those of probability theory. Given a set of possibilities of Ω, assumed to be discrete for the moment for
simplicity, and for two discrete sets of possibilities A, B ⊆ Ω the key axioms of elementary
possibility theory are (Halpern 2017):
    poss(∅) = 0,  poss(Ω) = 1,  poss(A ∪ B) = max{poss(A), poss(B)}    (1)

which can be contrasted with those of elementary probability theory:

    prob(∅) = 0,  prob(Ω) = 1,  prob(A ∪ B) = sum{prob(A), prob(B)}    (2)
where A and B are required to be disjoint in the probabilistic case, but this is not
strictly required in the possibilistic case.
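As a toy illustration of the difference between the two sets of axioms, the following checks maxitivity versus additivity on a three-point space; the numbers are made up purely for the example.

```python
poss = {"a": 1.0, "b": 0.4, "c": 0.7}     # a possibility assignment, sup = 1
prob = {"a": 0.5, "b": 0.2, "c": 0.3}     # a probability assignment, sum = 1

def poss_of(event):
    return max((poss[w] for w in event), default=0.0)

def prob_of(event):
    return sum(prob[w] for w in event)

A, B = {"a", "b"}, {"b", "c"}
print(poss_of(A | B), max(poss_of(A), poss_of(B)))   # equal, even though A and B overlap
print(prob_of(A | B), prob_of(A) + prob_of(B))       # differ unless A and B are disjoint
```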
Given a ‘background’ or ‘starting’ likelihood measure, likelihood theory can be developed as a self-contained theory of possibility, where derived distributions are manipulated
according to the first set of axioms above. This is entirely analogous to developing probability theory from a background measure, with derived distributions manipulated according
to the second set of axioms. As our intention is to consider methods for obtaining derived
distributions by ‘eliminating’ nuisance parameters, we need not consider here where the
starting measure comes from (but see the Discussion).
To make the correspondences of interest clear in what follows, we first present probabilistic marginalization as a special case of a pushforward measure or, equivalently, as a
special case of a general (not necessarily 1-1) change of variables. We then consider the
possibilistic analogues.
3 Pushforward probability measures and the delta function method for general changes of variable
Given a probability measure µ over a random variable x ∈ Rn with associated density ρ,
define the new random variable t = T (x) where T : Rn → Rm . This variable is distributed
according to the pushforward measure T ⋆ µ, i.e. t ∼ T ⋆ µ.
The density of t, here denoted by q = T ⋆ ρ, is conveniently calculated via the delta
function method which is valid for arbitrary changes of variables (not necessarily 1-1):
    q(t) = [T ⋆ ρ](t) = ∫ δ(t − T(x)) ρ(x) dx    (3)
As a side point, we note that this method of carrying out arbitrary transformations
of variables is standard in statistical physics (see e.g. Van Kampen 1992), but is apparently less common in statistics (see the articles Au & Tam 1999, Khuri 2004, aimed at
highlighting this method to the statistical community).
3.1 Marginalization via the delta function method
The above means that we can interpret marginalization to a component x1 , say, as a special
case of a (non-1-1) deterministic change of variables via:
    ρ(x1) = ∫ δ(x1 − projX1(x)) ρ(x) dx    (4)
where projX1 (x) is simply the projection of x to its first coordinate. Thus marginalization can be thought of as the pushforward under the projection operator and as a special
case of a general (not necessarily 1-1) change of variables t = T (x).
4 Profile likelihood as marginal possibility and an extension to general changes of variable
As we have repeatedly stressed above, likelihood theory interpreted as a possibilistic, and
hence maxitive, measure theory simply means that addition operations such as the usual
Lebesgue integration are replaced by maximization operations such as taking the supremum.
Consider first then the analogue of a marginal probability density, which we will call a
marginal possibility distribution and denote by Lp . Starting from a ‘background’ likelihood
measure L(x) we ‘marginalize’ in the analogous manner to before:
Lp (x1 ) = sup{δ(x1 − projX1 (x))L(x)} = sup{x|projX
1
(x)=x1 } {L(x)}
(5)
This is again simply the pushforward under the projection operator, but here under a
different type of ‘integration’ - i.e. the operation of taking a supremum. Of course, this is
just the usual profile likelihood for x1 .
As above, we need not be restricted to marginal possibility distributions: we can consider arbitrary functions of the parameter t = T (x). This leads to an analogous pushforward
operation of L(x) to Lp (t) that we denote by ⋆p :
    Lp(t) = [T ⋆p L](t) = sup_x {δ(t − T(x)) L(x)} = sup_{x : T(x) = t} L(x)    (6)
which again corresponds to the usual definition of profile likelihood.
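A small numerical sketch of the correspondence discussed above can make this concrete. The assumptions here are illustrative only: a normal model with parameters (µ, σ), a fixed simulated data set, and a grid over the nuisance parameter σ; profiling takes a max over σ, while the additive analogue takes a (flat-weight) sum.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=1.0, scale=2.0, size=20)    # fixed data, illustrative

mu_grid = np.linspace(-1.0, 3.0, 81)
sigma_grid = np.linspace(0.5, 5.0, 200)        # nuisance parameter grid

def normal_lik(mu, sigma):
    """Likelihood surface of the fixed data y under N(mu, sigma^2)."""
    z = (y[:, None, None] - mu[None, :, None]) / sigma[None, None, :]
    log_lik = -0.5 * z**2 - np.log(sigma[None, None, :]) - 0.5 * np.log(2 * np.pi)
    return np.exp(log_lik.sum(axis=0))         # shape (len(mu), len(sigma))

L = normal_lik(mu_grid, sigma_grid)

profile = L.max(axis=1)     # sup over the nuisance, as in (5)-(6)
marginal = L.sum(axis=1)    # additive analogue: flat-weight integration over sigma

print("profile  argmax of mu:", mu_grid[np.argmax(profile)])
print("marginal argmax of mu:", mu_grid[np.argmax(marginal)])
```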
5 Discussion
5.1 Objections to profile likelihood
As discussed, it is frequently asserted that profile likelihood is not a true likelihood (Aitkin
2005, Royall 1997, Pawitan 2001, Rohde 2014, Evans 2015). Common reasons include:
that it is obtained from a likelihood via maximization (Aitkin 2005), that it is not based
directly on observable quantities (Royall 1997, Pawitan 2001, Rohde 2014) and that it lacks
particular repeated sampling properties (Royall 1997, Cox & Barndorff-Nielsen 1994).
None of the above objections appear to the present author to apply to the following:
given a starting or ‘background’ likelihood function, profile likelihood satisfies the axioms
of possibility theory, in which the basic additivity axiom of probability theory is replaced
by a maxitivity axiom. Profile likelihood is simply the natural possibilistic counterpart to
marginal probability, where additive integration is replaced by a maxitive analogue. We
thus argue that, if marginal probability is a ‘true’ probability, then profile likelihood should
likewise be considered a ‘true’ likelihood, at least when likelihood theory is interpreted in
a possibilistic manner.
5.2 Fixed data
Regarding the second two objections mentioned above: observable quantities and repeated
sampling properties, it is important to note that the given data must be held fixed to
give a consistent background likelihood over which to profile. Given fixed data one has
a fixed possibility measure and thus can consider ‘marginal’ - i.e. profile - likelihoods.
In contrast, repeated sampling will produce a distribution of such possibility measures,
and these may or may not have good frequentist properties. None of this is in contrast
to marginal probability: changing the distribution over which we marginalize changes the
resulting marginal probability. Of course, despite this caveat, profile likelihood often does
have good repeated sampling properties (Royall 1997, Cox & Barndorff-Nielsen 1994) and
also plays a key role in frequentist theory, though we do not discuss this further here.
5.3 Why?
A natural question, perhaps, is why worry about whether profile likelihood is a true likelihood? One answer is that profile likelihood is a widely used tool but is often dismissed
as ‘ad-hoc’ or lacking proper justification. This gives the impression that, for example,
likelihood theory is lacking in comparison with e.g. Bayesian theory in terms of systematic
methods for dealing with nuisance parameters. By understanding that profile likelihood
does in fact have a systematic basis in terms of possibility theory practitioners and students
can better understand and reason about a widely popular and useful tool. Understanding
the connection to possibilistic as opposed to probabilistic reasoning may also help explain
why profile likelihood has emerged as a particularly promising method of identifiability
analysis (Raue et al. 2009), where identifiability is traditionally a prerequisite for probabilistic analysis.
5.4 Ignorance
The possibilistic interpretation of likelihood also helps understand the representation of
ignorance. While probabilistic ignorance is not preserved under arbitrary changes of variables (e.g. non-1-1 transformations), even in the discrete case, possibilistic ignorance is in
the following sense: if we take the maximum likelihood over a set of possibilities, such as
{x | T (x) = t} for each t, rather than summing them, a flat ‘prior likelihood’ (Edwards
1969, 1992) over x becomes a flat prior likelihood over t. On the other hand, a flat prior
probability over x in general becomes non-flat over t under non-1-1 changes of variable.
Thus a profile prior likelihood has what, in many cases, may be desirable properties as a
representation of prior ignorance (see the discussion in Edwards 1969, 1992, for more on
likelihood and the representation of ignorance).
6 Conclusions
We have argued that profile likelihood has as much claim to being a true likelihood as a
marginal probability has to being a true probability distribution. In the case of marginal
probability, integration over variables takes probability distributions to probability distributions, while in the case of likelihood, maximization takes likelihood functions to likelihood
functions. Maximization can be considered in this context as an alternative (idempotent)
notion of integration, and a likelihood function as a maxitive possibility measure. This
gives a self-consistent theory of possibilistic statistical analysis with a well-defined method
of treating nuisance parameters.
References
Aitkin, M. (2005), Profile likelihood, in ‘Encyclopedia of Biostatistics’, John Wiley & Sons,
Ltd.
Aitkin, M. (2010), Statistical Inference: An Integrated Bayesian/Likelihood Approach,
Chapman & Hall/CRC Monographs on Statistics & Applied Probability, CRC Press.
Au, C. & Tam, J. (1999), ‘Transforming variables using the dirac generalized function’,
Am. Stat. 53(3), 270–272.
Augustin, T., Coolen, F. P. A., de Cooman, G. & Troffaes, M. C. M. (2014), Introduction
to imprecise probabilities, John Wiley & Sons.
Cattaneo, M. E. G. (2013), ‘Likelihood decision functions’, Electron. J. Stat. 7, 2924–2946.
Cattaneo, M. E. G. V. (2017), ‘The likelihood interpretation as the foundation of fuzzy set
theory’, Int. J. Approx. Reason. .
Cox, D. R. & Barndorff-Nielsen, O. E. (1994), Inference and asymptotics, Chapman and
Hall, London.
Dubois, D., Moral, S. & Prade, H. (1997), ‘A semantics for possibility theory based on
likelihoods’, J. Math. Anal. Appl. 205(2), 359–380.
Edwards, A. W. F. (1969), ‘Statistical methods in scientific inference’, Nature
222(5200), 1233–1237.
Edwards, A. W. F. (1992), ‘Likelihood, expanded ed’, Johns Hopkins University Press,
Baltimore .
Evans, M. (2015), Measuring Statistical Evidence Using Relative Belief, Chapman &
Hall/CRC Monographs on Statistics & Applied Probability, CRC Press.
Halpern, J. Y. (2017), Reasoning about Uncertainty, MIT Press.
Khuri, A. I. (2004), ‘Applications of dirac’s delta function in statistics’, Internat. J. Math.
Ed. Sci. Tech. 35(2), 185–195.
Kolokoltsov, V. & Maslov, V. P. (1997), Idempotent Analysis and Its Applications, Springer
Science & Business Media.
Maslov, V. P. (1992), Idempotent Analysis, American Mathematical Soc.
Pawitan, Y. (2001), In All Likelihood: Statistical Modelling and Inference Using Likelihood,
Oxford science publications, OUP Oxford.
Puhalskii, A. (2001), Large Deviations and Idempotent Probability, CRC Press.
Raue, A., Kreutz, C., Maiwald, T., Bachmann, J., Schilling, M., Klingmüller, U. & Timmer,
J. (2009), ‘Structural and practical identifiability analysis of partially observed dynamical
models by exploiting the profile likelihood’, Bioinformatics 25(15), 1923–1929.
Rohde, C. A. (2014), Introductory Statistical Inference with the Likelihood Function:,
Springer International Publishing.
Royall, R. (1997), Statistical Evidence: A Likelihood Paradigm, CRC Press.
Van Kampen, N. G. (1992), Stochastic processes in physics and chemistry, Vol. 1, Elsevier.
| 1 |
Functional Map of the World
arXiv:1711.07846v2 [cs.CV] 23 Nov 2017
Gordon Christie1, Neil Fendley1, James Wilson2, Ryan Mukherjee1
1 The Johns Hopkins University Applied Physics Laboratory   2 DigitalGlobe
{gordon.christie,neil.fendley,ryan.mukherjee}@jhuapl.edu   [email protected]
Abstract
We present a new dataset, Functional Map of the World
(fMoW), which aims to inspire the development of machine
learning models capable of predicting the functional purpose of buildings and land use from temporal sequences
of satellite images and a rich set of metadata features.
The metadata provided with each image enables reasoning
about location, time, sun angles, physical sizes, and other
features when making predictions about objects in the image. Our dataset consists of over 1 million images from over
200 countries1 . For each image, we provide at least one
bounding box annotation containing one of 63 categories,
including a “false detection” category. We present an analysis of the dataset along with baseline approaches that reason about metadata and temporal views. Our data, code,
and pretrained models have been made publicly available.
[Figure 1 image: two temporal satellite views of the same area, each with bounding boxes labeled "flooded road" and "FD" and metadata excerpts such as gsd, utm, timestamp, and off_nadir_angle_dbl.]
Figure 1: In fMoW, temporal sequences of images, metadata and bounding boxes for 63 categories (including "false detections") are provided. In this example, if we only look inside the yellow box for the right image, we will only see road and vegetation. In the left image, we will only see water, and potentially predict this to be a lake. However, by observing both views of this area we can now reason that this sequence contains a flooded road.
1. Introduction
Satellite imagery presents interesting opportunities for
the development of object classification methods. Most
computer vision (CV) datasets for this task focus on images
or videos that capture brief moments [22, 18]. With satellite imagery, temporal views of objects are available over
long periods of time. In addition, metadata is also available
to enable reasoning beyond visual information. For example, by combining temporal image sequences with timestamps, models may learn to differentiate office buildings
from multi-unit residential buildings by observing whether
or not their parking lots are full during business hours.
Models may also be able to combine certain metadata parameters with observations of shadows to estimate object
heights. In addition to these possibilities, robust models
must be able to generalize to unseen areas around the world
that may include different building materials and unique architectural styles.
Enabling the aforementioned types of reasoning requires
1 207 of the total 247 ISO Alpha-3 country codes are present in fMoW.
a large dataset of annotated and geographically diverse
satellite images. In this work, we present our efforts to collect such a dataset, entitled Functional Map of the World
(fMoW). fMoW has several notable features, including a
variable number of temporal images per scene and an associated metadata file for each image. The task posed for our
dataset falls in between object detection and classification.
That is, for each temporal sequence of images, at least one
bounding box is provided that maps to one of 63 categories,
including a “false detection” (FD) category that represents
content not characterized by the other 62 categories. These
boxes are intended to be used as input to a classification
algorithm. Figure 1 shows an example.
Collecting a dataset such as fMoW presents some interesting challenges. For example, one consideration would
be to directly use crowdsourced annotations provided by
OpenStreetMap2 (OSM). However, issues doing so include
1 207 of the total 247 ISO Alpha-3 country codes are present in fMoW.
2 https://www.openstreetmap.org
inconsistent, incorrect, and missing annotations for a large
percentage of buildings and land use across the world.
Moreover, OSM may only provide a single label for the
current contents of an area, making it difficult to correctly
annotate temporal views. Another possibility is to use the
crowd to create annotations from scratch. However, annotating instances of a category with no prior information
is extremely difficult in a large globally-diverse satellite
dataset. This is due in part to the unique perspective that
satellite imagery offers when compared with ground-based
datasets, such as ImageNet [22]. Humans are seldom exposed to aerial viewpoints in their daily lives and, as such,
objects found in satellite images tend to be visually unfamiliar and difficult to identify. Buildings can also be repurposed throughout their lifetime, making visual identification even more difficult. For these reasons, we use a multiphase process that combines map data and crowdsourcing.
Another problem for fMoW is that full annotation is
made very difficult by the increased object density for certain categories. For example, single-unit residential buildings often occur in dense clusters alongside other categories, where accurately discriminating and labeling every
building would be very time-consuming. To address this
shortcoming, we propose providing bounding boxes as part
of algorithm input as opposed to requiring bounding box
output, which would be more akin to a typical detection
dataset. This avoids full image annotation issues that stem
from incomplete map data and visual unfamiliarity. Imagery does not have to be fully annotated, as algorithms are
only asked to classify regions with known contents. This
allows us to focus collection on areas with more accurate
map data and limit annotations to a small number of category instances per image.
Our contributions are summarized as follows: (1) To the
best of our knowledge, we provide the largest publicly available satellite dataset containing bounding box annotations,
metadata and revisits. This enables joint reasoning about
images and metadata, as well as long-term temporal reasoning for areas of interest. (2) We present methods based
on CNNs that exploit the novel aspects of our dataset, with
performance evaluation and comparisons, which can be applied to similar problems in other application domains. Our
code, data, and pretrained models have all been publicly released3 . In the following sections, we provide an analysis
of fMoW and baseline methods for the task.
2. Related Work
While large datasets are nothing new to the vision community, they have typically focused on first-person or ground-level imagery [22, 18, 1, 8, 9, 7, 17]. This is likely due in part to the ease with which this imagery can be collected and annotated. Recently, there have been several, mostly successful, attempts to leverage techniques that were founded on first-person imagery and apply them to remote sensing data [13, 19, 28]. However, these efforts highlight the research gap that has developed due to the lack of a large dataset to appropriately characterize the problems found in remote sensing. fMoW offers an opportunity to close this gap by providing, to the best of our knowledge, the largest quantity of labeled satellite images that has been publicly released to date, while also offering several features that could help unify otherwise disparate areas of research around the multifaceted problem of processing satellite imagery. We now highlight several of these areas where we believe fMoW can make an impact.
Reasoning Beyond Visual Information  Many works have extended CV research to simultaneously reason about other modules of perception, such as joint reasoning about language and vision [2, 14, 21], audio and vision [10], 2D and 3D information [3], and many others. In this work, we are interested in supporting joint reasoning about temporal sequences of images and associated metadata features. One of these features is UTM zone, which provides location context. In a similar manner, [24] shows improved image classification results by jointly reasoning about GPS coordinates and images, where several features are extracted from the coordinates, including high-level statistics about the population. Although we use coarser location features (UTM zones) than GPS in this work, we do note that using similar features would be an interesting study.
Multi-view Classification  Satellite imagery offers a unique and somewhat alien perspective on the world. Most structures are designed for recognition from ground level. For example, buildings often have identifying signs above entrances that are not visible from overhead. As such, it can be difficult, if not impossible, to identify the functional purpose of a building from a single overhead image.
One of the ways in which fMoW attempts to address this issue is by providing multiple temporal views of each object, when available. Along these lines, several works in the area of video processing have been able to build upon advancements in single image classification [15, 6, 30] to create networks capable of extracting spatio-temporal features. These works may be a good starting point, but it is important to keep in mind the vastly different temporal resolution on which these datasets operate. For example, the YouTube-8M dataset [1], on which many of these video processing algorithms were developed, contains videos with 30 frames per second temporal resolution that each span on the order of minutes. Satellites, on the other hand, typically cannot capture imagery with such dense temporal resolution. Revisit times vary, but it is not uncommon for satellites to require multiple days before they can image the same location; it is possible for months to go by before they can get an unobstructed view. As such, temporal views in fMoW span multiple years as opposed to minutes. Techniques that attempt to capture features across disjoint periods of time, such as [20], are likely better candidates for the task.
3 https://github.com/fMoW
Perhaps the most similar work to ours in terms of temporal classification is PlaNet [26]. They pose the image localization task as a classification problem, where photos are
classified as belonging to a particular bucket that bounds a
specific area on the globe. They extend their approach to
classify the buckets of images in photo albums taken in the
same area. A similar approach is used in one of our baseline
methods for fMoW.
Another recent work similar to fMoW is TorontoCity [25]. They provide a large dataset that includes imagery and LiDAR data collected by airplanes, low-altitude
unmanned aerial vehicles, and cars in the greater Toronto
area. While they present several tasks, the two that are related to land-use classification are zoning classification and
segmentation (e.g., residential, commercial). Aerial images
included in TorontoCity were captured during four different
years and include several seasons. While this is an impressive dataset, we believe fMoW is more focused on satellite
imagery and offers advantages in geographic diversity.
Satellite Datasets One of the earliest annotated satellite
datasets similar to fMoW is the UC Merced Land Use
Dataset, which offers 21 categories and 100 images per
category with roughly 30cm resolution and image sizes of
256x256 [29]. While some categories from this dataset
overlap with fMoW, we believe fMoW offers several advantages in that we have three times the number of categories, localized objects within the images, and multiple orders of magnitude more images per category. We also provide metadata, temporal views, and multispectral images.
SpaceNet [5], a recent dataset that has received substantial attention, contains both 30cm and 50cm data of 5 cities.
For the most part, the data in SpaceNet currently includes
building footprints. However, earlier this year, point of interest (POI) data was also released into SpaceNet. This POI
data includes the locations of several categories within Rio
de Janeiro. Unrelated to SpaceNet, efforts have also been
made to label data from Google Earth, with the largest released thus far being the AID [27] and NWPU-RESISC45
[4] datasets. The AID dataset includes 10,000 images of
30 categories, while the NWPU-RESISC45 dataset includes
31,500 images of 45 categories. In comparison, fMoW offers over 1,000,000 images of 63 categories. Datasets derived from Google Earth imagery lack associated metadata,
temporal views, and multispectral data, which would typically be available to real-world systems.
3. Dataset Collection
Prior to the dataset collection process for fMoW, a set of categories had to be identified. Based on our target of 1 million images, collection resources, plan to collect temporal views, and discussions with researchers in the CV community, we set a goal of including between 50 and 100 categories. We searched sources such as the OSM Map Features4 list and NATO Geospatial Feature Concept Dictionary5 for categories that highlight some of the challenges discussed in Section 2. For example, “construction site” and “impoverished settlement” are categories from our dataset that may require temporal reasoning to identify, which presents a unique challenge due to temporal satellite image sequences typically being scattered across large time periods. We also focused on grouping categories according to their functional purpose, which should encourage the development of approaches that reason about contextual information, both visually and in the associated metadata.
4 https://wiki.openstreetmap.org/wiki/Map_Features
5 https://portal.dgiwg.org/files/?artifact_id=8629
Beyond research-based rationales for picking certain categories, we had some practical ones as well. Before categories could be annotated within images, we needed to find locations where we have high confidence of their existence. This is where maps play a crucial role. “Flooded road”, “debris or rubble”, and “construction site” were the most difficult categories to collect because open source data does not generally contain temporal information. However, with more careful search procedures, reuse of data from humanitarian response campaigns, and calculated extension of keywords to identify categories even when not directly labeled, we were able to collect temporal stacks of imagery that contained valid examples.
All imagery used in fMoW was collected from the DigitalGlobe constellation6. Images were gathered in pairs, consisting of 4-band or 8-band multispectral imagery in the visible to near-infrared region, as well as a pan-sharpened RGB image that represents a fusion of the high-resolution panchromatic image and the RGB bands from the lower-resolution multispectral image. 4-band imagery was obtained from either the QuickBird-2 or GeoEye-1 satellite systems, whereas 8-band imagery was obtained from WorldView-2 or WorldView-3.
6 https://www.digitalglobe.com/resources/satellite-information
More broadly, fMoW was created using a three-phase workflow consisting of location selection, image selection, and bounding box creation. The location selection phase was used to identify potential locations that map to our categories while also ensuring geographic diversity. Potential locations were drawn from several Volunteered Geographic Information (VGI) datasets, which were conflated and curated to remove duplicates. To ensure diversity, we removed neighboring locations within a specified distance (typically 500m) and set location frequency caps for categories that have severely skewed geographic distributions. These two factors helped reduce spatial density while also encouraging
the selection of locations from disparate geographic areas.
The remaining locations were then processed using DigitalGlobe’s GeoHIVE7 crowdsourcing platform. Members of
the GeoHIVE crowd were asked to validate the presence of
categories in satellite images, as shown in Figure 2.
Figure 2: Sample image of what a GeoHIVE user might see while validating potential fMoW dataset features. Instructions can be seen in the top-left corner that inform users to press the ‘1’, ‘2’, or ‘3’ keys to validate existence, non-existence, or cloud obscuration of a particular object.
The image selection phase comprised a three-step process, which included searching the DigitalGlobe satellite imagery archive, creating image chips, and filtering out cloudy images. Approximately 30% of the candidate images were removed for being too cloudy. DigitalGlobe’s IPE Data Architecture Highly-available Objectstore (IDAHO) service was used to process imagery into pan-sharpened RGB and multispectral image chips in a scalable fashion. These chips were then passed through a CNN architecture to classify and remove any undesirable cloud-covered images.
Finally, images that passed through the previous two phases were sent to a curated and trusted crowd for bounding box annotation. This process involved a separate interface from the first phase, one that asked crowd users to draw bounding boxes around the category of interest in each image and provided some category-specific guidance for doing so. The resulting bounding boxes were then graded by a second trusted crowd to assess quality. In total, 642 unique GeoHIVE users required a combined total of approximately 2,800 hours to annotate category instances for fMoW.
Even after multiple crowd validation procedures and implementing programmatic methods for ensuring geographic diversity, there were several categories that contained some bias. For example, the “wind farm” category does not contain very many examples from the United States, even though the initial location selection phase returned 1,938 viable locations from the United States. Many of these “wind farm” instances were invalidated by the crowd, likely due to the difficulty of identifying tall, thin structures in satellite imagery, particularly when the satellite image is looking straight down on the tower. The “barn”, “construction site”, “flooded road”, and “debris or rubble” categories are also examples that contain some geographic bias. In the case of the “barn” category, the bias comes from the distribution of “barn” tags in OSM, which are predominately located in Europe, whereas the other three categories contain geographic bias as a result of the more complex feature selection process, mentioned earlier, that was required for these categories.
4. Dataset Analysis
Here we provide some statistics and analysis of fMoW. Two versions of the dataset are publicly available:
• fMoW-full The full version of the dataset includes pan-sharpened RGB images and 4/8-band multispectral images (MSI), which are both stored in TIFF format. Pan-sharpened images are created by “sharpening” lower-resolution MSI using higher-resolution panchromatic imagery. All pan-sharpened images in fMoW-full have corresponding MSI, where the metadata files for these images are nearly identical.
• fMoW-rgb An alternative JPEG-compressed version of the dataset, which is provided since fMoW-full is very large. For each pan-sharpened RGB image we simply perform a conversion to JPEG. For MSI images, we extract the RGB channels and save them as JPEGs (a conversion sketch is given below).
For all experiments presented in this paper, we use fMoW-rgb. We also exclude RGB-extracted versions of the MSI in fMoW-rgb as they are effectively downsampled versions of the pan-sharpened RGB images.
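As a rough illustration of the TIFF-to-JPEG conversion described for fMoW-rgb, the sketch below reads a multispectral TIFF, extracts three bands, and writes a JPEG. The band indices (assumed here to be red/green/blue positions for 8-band WorldView imagery) and the simple percentile stretch are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import rasterio
from PIL import Image

def msi_tiff_to_rgb_jpeg(tiff_path, jpeg_path, rgb_bands=(5, 3, 2)):
    """Extract RGB channels from a multispectral TIFF and save them as a JPEG.

    rgb_bands are 1-indexed band numbers; (5, 3, 2) is an assumption for
    8-band WorldView imagery, and 4-band imagery would instead use (3, 2, 1).
    """
    with rasterio.open(tiff_path) as src:
        rgb = src.read(list(rgb_bands)).astype(np.float32)   # shape (3, H, W)
    # Simple 2-98 percentile stretch to 8 bits (the real conversion may differ).
    lo, hi = np.percentile(rgb, (2, 98))
    rgb = np.clip((rgb - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0
    Image.fromarray(rgb.transpose(1, 2, 0).astype(np.uint8)).save(jpeg_path, quality=95)
```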
4.1. fMoW Splits
We have made the following splits to the dataset:
• seq This is the sequestered portion of the dataset
that is not currently publicly available. It will be released after it is used for final testing in the public challenge centered around the dataset8 .
• train Contains 65.2% and 72.13% of the total
bounding boxes with and without seq included, respectively.
• val Contains 11.4% and 12.6% of the total bounding boxes with and without seq included, respectively.
This set was made representative of test, so that validation can be performed.
• test Contains 13.8% and 15.3% of the total bounding boxes with and without seq included, respectively.
7 https://geohive.digitalglobe.com
8 https://www.iarpa.gov/challenges/fmow.html
Figure 3: This shows the total number of instances for each category (including FD) in fMoW across different numbers of bands. These numbers include the temporal views of the same areas. fMoW-full consists of 3 band imagery (pan-sharpened RGB), as well as 4 and 8 band imagery. In fMoW-rgb, the RGB channels of the 4 and 8 band imagery are extracted and saved as JPEG images.
The total number of bounding box instances for each category can be seen in Figure 3.
4.2. fMoW Statistics
Variable-length sequences of images are provided for each scene in the dataset. Figure 4 shows what percentage of the sequences in the dataset belong to each sequence length. 21.2% of the sequences contain only 1 view. Most (95%) of the sequences contain 10 or fewer images.
Figure 4: This shows the distribution of the number of temporal views in our dataset. The number of temporal views is not incremented by both the pan-sharpened and multispectral images. These images have almost identical metadata files and are therefore not counted twice. The maximum number of temporal views for any area in the dataset is 41.
A major focus of the collection effort was global diversity. In the metadata, we provide UTM zones, which typically refer to 6° longitude bands (1-60). We also concatenate letters that represent latitude bands (total of 20) to the UTM zones in the metadata. Figure 5 illustrates the frequency of sequences within the UTM zones on earth, where the filled rectangles each represent a different UTM zone. Green colors represent areas with higher numbers of sequences, while blue regions have lower counts. As seen, fMoW covers much of the globe. The images captured for fMoW also have a wide range of dates, which allows algorithms to analyze areas on earth over long periods of time in some cases. Figure 6 shows the distributions of years and local times (converted from UTC) in which the images were captured. The average time difference between the earliest and most recent images in each sequence is approximately 3.8 years.
5. Baselines and Methods
Here we present 5 different approaches to our task, which vary by their use of metadata and temporal reasoning. All experiments were performed using fMoW-rgb. Two of the methods presented involve fusing metadata into a CNN architecture. The following provides a summary of the metadata features that are used, as well as any preprocessing operations that are applied:
• UTM Zone One of 60 UTM zones and one of 20 latitude bands are combined for this feature. We convert these values to 2 coordinate values, each between 0 and 1. This is done by taking the indices of the values within the list of possible values and then normalizing.
• Timestamp The year, month, day, hour, minute, second, and day of the week are extracted from the timestamp and added as separate features. The timestamp provided in the metadata files is in Coordinated Universal Time (UTC).
• GSD Ground sample distance, measured in meters, is provided for both the panchromatic and multispectral bands in the image strip. The panchromatic images used to generate the pan-sharpened RGB images have higher resolution than the MSI, and therefore have smaller GSD values. These GSD values, which describe the physical sizes of pixels in the image, are used directly without any preprocessing.
• Angles These identify the angle at which the sensor is imaging the ground, as well as the angular location of the sun with respect to the ground and image. These features can be added without preprocessing. The following angles are provided:
– Off-nadir Angle Angle in degrees (0-90°) between the point on the ground directly below the sensor and the center of the image swath.
– Target Azimuth Angle in degrees (0-360°) of clockwise rotation off north to the image swath’s major axis.
– Sun Azimuth Angle in degrees (0-360°) of clockwise rotation off north to the sun.
– Sun Elevation Angle in degrees (0-90°) of elevation, measured from the horizontal, to the sun.
• Image+box sizes The pixel dimensions of the bounding boxes and image size, as well as the fraction of the image width and height that the boxes occupy, are added as features.
After preprocessing the metadata features, we perform mean subtraction and normalization using values calculated for train + val. A full list of metadata features and their descriptions can be found in the appendix; a small preprocessing sketch is given below.
It is worth noting here that the imagery in fMoW is not registered, and while many sequences have strong spatial correspondence, individual pixel coordinates in different images do not necessarily represent the same positions on the ground. As such, we are prevented from easily using methods that exploit registered sequences.
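A minimal sketch of the metadata preprocessing described above is shown below. The JSON field names (utm, timestamp, gsd, off_nadir_angle_dbl) follow the examples visible in the figures, but the exact key set, the chosen subset of features, and the normalization details are assumptions.

```python
from datetime import datetime
import numpy as np

UTM_ZONES = [str(i) for i in range(1, 61)]        # 60 longitude zones
LAT_BANDS = list("CDEFGHJKLMNPQRSTUVWX")          # 20 latitude bands, no I or O

def metadata_to_vector(md):
    """Turn one metadata dict into a raw feature vector (before mean/std normalization)."""
    zone, band = md["utm"][:-1], md["utm"][-1]                # e.g. "21J" -> ("21", "J")
    utm_x = UTM_ZONES.index(zone) / (len(UTM_ZONES) - 1)      # index normalized to [0, 1]
    utm_y = LAT_BANDS.index(band) / (len(LAT_BANDS) - 1)
    t = datetime.strptime(md["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
    time_feats = [t.year, t.month, t.day, t.hour, t.minute, t.second, t.weekday()]
    return np.array([utm_x, utm_y, md["gsd"], md["off_nadir_angle_dbl"], *time_feats],
                    dtype=np.float32)

def normalize(features, mean, std):
    """Mean subtraction and normalization with statistics computed on train + val."""
    return (features - mean) / np.maximum(std, 1e-6)
```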
The CNN used as the base model in our various baseline methods is DenseNet-161 [12], with 48 feature maps
(k=48). During initial testing, we found this model to outperform other models such as VGG-16 [23] and ResNet50 [11]. We initialize our base CNN models using the pretrained ImageNet weights, which we found to improve performance during initial tests. Training is performed using
a crop size of 224x224, the Adam optimizer [16], and an
initial learning rate of 1e-4. Due to class imbalance in our
dataset, we attempted to weight the loss using class frequencies, but did not observe any improvement.
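The stated training configuration can be sketched as follows. The paper does not name the framework, so PyTorch/torchvision is an assumption made here for concreteness; torchvision's densenet161 uses growth rate 48, matching the k=48 setting, and the augmentation choice is also an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 63  # 62 categories + false detection

# DenseNet-161 (k=48) initialized from ImageNet weights, with a 63-way classifier.
model = models.densenet161(pretrained=True)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial learning rate of 1e-4
criterion = nn.CrossEntropyLoss()  # unweighted; class-frequency weighting did not help

# 224x224 crops of the context-padded boxes, with ImageNet normalization.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```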
To merge metadata features into the model, the softmax
layer of DenseNet is removed and replaced with a concatenation layer to merge DenseNet features with preprocessed
metadata features, followed by two 4096-d fully-connected
layers with 50% dropout layers, and a softmax layer with
63 outputs (62 main categories + FD). An illustration of this
base model is shown in Figure 7.
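Continuing the same framework assumption, the metadata-fusion head can be sketched as below: DenseNet features are concatenated with the preprocessed metadata vector, followed by two 4096-d fully-connected layers with dropout and a 63-way output.

```python
import torch
import torch.nn as nn
from torchvision import models

class MetadataFusionNet(nn.Module):
    """DenseNet features concatenated with metadata, then two 4096-d FC layers with dropout."""

    def __init__(self, num_metadata_features, num_classes=63):
        super().__init__()
        densenet = models.densenet161(pretrained=True)
        self.backbone = densenet.features
        cnn_dim = densenet.classifier.in_features  # 2208 for DenseNet-161
        self.head = nn.Sequential(
            nn.Linear(cnn_dim + num_metadata_features, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, num_classes),  # softmax is applied by the loss or at inference
        )

    def forward(self, image, metadata):
        x = self.backbone(image)                                   # (N, 2208, 7, 7) for 224x224 input
        x = nn.functional.adaptive_avg_pool2d(nn.functional.relu(x), 1).flatten(1)
        return self.head(torch.cat([x, metadata], dim=1))
```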
Figure 5: This shows the geographic diversity of fMoW.
Data was collected from over 400 unique UTM zones (including latitude bands). This helps illustrate the number of
images captured in each UTM zone, where more green colors show UTM zones with a higher number of instances,
and more blue colors show UTM zones with lower counts.
Figure 6: Distribution over (a) years the images were captured, and (b) time of day the images were captured (UTC converted to local time for this figure).
Figure 7: An illustration of our base model used to fuse metadata features into the CNN. This model is used as a baseline and also as a feature extractor (without softmax) for providing features to an LSTM. Dropout layers are added after the 4096-d FC layers.
We test the following approaches with fMoW:
• LSTM-M An LSTM architecture trained using temporal sequences of metadata features. We believe
training solely on metadata helps understand how important images are in making predictions, while also
providing some measure of bias present in fMoW.
• CNN-I A standard CNN approach using only images, where DenseNet is fine-tuned after ImageNet.
Softmax outputs are summed over each temporal view,
after which an argmax is used to make the final prediction. The CNN is trained on all images across all
temporal sequences of train + val.
• CNN-IM A similar approach to CNN-I, but with
metadata features concatenated to the features of
DenseNet before the fully connected layers.
• LSTM-I An LSTM architecture trained using features extracted from CNN-I (a sketch of this sequence model follows the list below).
• LSTM-IM An LSTM architecture trained using
features extracted from CNN-IM.
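The sequence models above (LSTM-I and LSTM-IM) run an LSTM over per-view feature vectors extracted by the base model (the 4096-d FC output). The sketch below shows the general shape of such a model; the hidden size and the use of the final hidden state are assumptions, not values reported by the authors.

```python
import torch
import torch.nn as nn

class ViewSequenceLSTM(nn.Module):
    """Classify a temporal sequence of per-view feature vectors with an LSTM."""

    def __init__(self, feature_dim=4096, hidden_dim=512, num_classes=63):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, view_features):
        # view_features: (batch, num_views, feature_dim); sequences vary in length,
        # so in practice they would be padded and packed before the LSTM.
        _, (h_n, _) = self.lstm(view_features)
        return self.classifier(h_n[-1])   # logits from the final hidden state
```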
All of these methods are trained on train + val. Since
tight bounding boxes are typically provided for category instances in the dataset, we add a context buffer around each
box before extracting the region of interest from the image.
We found that it was useful to provide more context for categories with smaller sizes (e.g., single-unit residential) and
less context for categories that generally cover larger areas
(e.g., airports).
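Two pieces of this setup lend themselves to short sketches: cropping a box with a context buffer, and the CNN-I/CNN-IM style of aggregating a sequence by summing softmax outputs over the views. The buffer ratio below is an illustrative assumption, not the value used by the authors.

```python
import numpy as np

def crop_with_context(image, box, context_ratio=0.5):
    """Expand an (x, y, w, h) box by a fraction of its size before cropping (illustrative ratio)."""
    x, y, w, h = box
    pad_x, pad_y = int(w * context_ratio), int(h * context_ratio)
    x0, y0 = max(x - pad_x, 0), max(y - pad_y, 0)
    x1 = min(x + w + pad_x, image.shape[1])
    y1 = min(y + h + pad_y, image.shape[0])
    return image[y0:y1, x0:x1]

def predict_sequence(softmax_per_view):
    """Sum softmax outputs over all temporal views, then take the argmax (CNN-I style)."""
    return int(np.argmax(np.sum(np.asarray(softmax_per_view), axis=0)))
```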
Per-category F1 scores for test are shown in Table 1.
From the results, it can be observed that, in general, the
LSTM architectures show similar performance to our approaches that sum the probabilities over each view. Some
possible contributors to this are the large quantity of single-view images provided in the dataset, and that temporal
changes may not be particularly important for several of the
categories. CNN-I and CNN-IM are also, to some extent,
already reasoning about temporal information while making
predictions by summing the softmax outputs over each temporal view. Qualitative results that show success and failure
cases for LSTM-I are shown in Figure 8. Qualitative results are not shown for the approaches that use metadata, as
it is much harder to visually show why the methods succeed
in most cases.
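The averages reported in Table 1 are macro-averaged F1 scores computed without the false-detection class. A sketch of that computation is given below; the use of scikit-learn and the numeric index assigned to FD are assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score

def macro_f1_without_fd(y_true, y_pred, fd_label=62, num_classes=63):
    """Per-class F1 for all 63 labels, averaged over the 62 non-FD categories."""
    per_class = f1_score(y_true, y_pred, labels=list(range(num_classes)), average=None)
    keep = [c for c in range(num_classes) if c != fd_label]
    return float(np.mean(per_class[keep]))
```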
It could be argued that the results for approaches using
metadata are only making improvements because of bias
exploitation. To show that metadata helps beyond inherent bias, we removed all instances from the test set where
the metadata-only baseline (LSTM-M) is able to correctly
predict the category. The results of this removal, which can
be found in Table 2, show that metadata can still be useful
for improving performance.
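The bias check behind Table 2 amounts to dropping every test sequence that the metadata-only baseline already classifies correctly and re-scoring the remainder. A sketch (with hypothetical array names) is:

```python
import numpy as np
from sklearn.metrics import f1_score

def rescore_without_metadata_solvable(y_true, lstm_m_pred, model_pred, fd_label=62):
    """Keep only sequences where the metadata-only baseline (LSTM-M) is wrong, then re-score."""
    y_true, lstm_m_pred, model_pred = map(np.asarray, (y_true, lstm_m_pred, model_pred))
    keep = lstm_m_pred != y_true
    per_class = f1_score(y_true[keep], model_pred[keep], labels=list(range(63)), average=None)
    return float(np.mean(np.delete(per_class, fd_label)))   # macro F1 without FD
```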
To further confirm the importance of temporal reasoning, we compare the methods presented above with two additional methods, CNN-I-1 and CNN-IM-1, which make
LSTM-M
CNN-I
LSTM-I
airport
airport hangar
airport terminal
amusement park
aquaculture
archaeological site
barn
border checkpoint
burial site
car dealership
construction site
crop field
dam
debris or rubble
educational institution
electric substation
factory or powerplant
fire station
flooded road
fountain
gas station
golf course
ground transportation station
helipad
hospital
impoverished settlement
interchange
lake or pond
lighthouse
military facility
multi-unit residential
nuclear powerplant
office building
oil or gas facility
park
parking lot or garage
place of worship
police station
port
prison
race track
railway bridge
recreational facility
road bridge
runway
shipyard
shopping mall
single-unit residential
smokestack
solar farm
space facility
stadium
storage tank
surface mine
swimming pool
toll booth
tower
tunnel opening
waste disposal
water treatment facility
wind farm
zoo
0.599
0.447
0.017
0.023
0.622
0.514
0.016
0.292
0.000
0.019
0.101
0.053
0.514
0.158
0.381
0.157
0.000
0.000
0.028
0.625
0.085
0.022
0.220
0.114
0.067
0.012
0.538
0.142
0.000
0.037
0.426
0.227
0.000
0.011
0.522
0.025
0.076
0.362
0.068
0.444
0.087
0.234
0.030
0.295
0.000
0.488
0.000
0.117
0.429
0.204
0.424
0.000
0.174
0.140
0.200
0.362
0.030
0.141
0.526
0.071
0.044
0.540
0.039
0.728
0.859
0.721
0.697
0.746
0.754
0.524
0.695
0.333
0.852
0.741
0.372
0.888
0.806
0.403
0.495
0.849
0.443
0.409
0.296
0.727
0.785
0.860
0.658
0.812
0.387
0.410
0.833
0.721
0.715
0.509
0.385
0.720
0.198
0.789
0.626
0.775
0.638
0.246
0.692
0.611
0.898
0.703
0.907
0.722
0.821
0.371
0.615
0.688
0.735
0.912
0.824
0.825
0.921
0.824
0.920
0.891
0.723
0.867
0.595
0.854
0.939
0.566
0.729
0.800
0.665
0.715
0.727
0.762
0.491
0.684
0.404
0.859
0.797
0.373
0.872
0.798
0.607
0.475
0.869
0.459
0.494
0.285
0.705
0.779
0.916
0.694
0.856
0.404
0.506
0.678
0.650
0.755
0.564
0.414
0.762
0.218
0.773
0.638
0.787
0.658
0.237
0.698
0.650
0.886
0.755
0.919
0.738
0.814
0.351
0.629
0.703
0.755
0.921
0.737
0.850
0.921
0.802
0.913
0.918
0.737
0.897
0.570
0.816
0.948
0.582
0.853
0.884
0.677
0.746
0.898
0.811
0.574
0.717
0.523
0.827
0.747
0.318
0.930
0.864
0.474
0.548
0.858
0.536
0.483
0.638
0.814
0.761
0.899
0.713
0.831
0.426
0.750
0.905
0.687
0.779
0.597
0.445
0.600
0.228
0.844
0.662
0.700
0.712
0.201
0.736
0.695
0.919
0.761
0.903
0.747
0.889
0.368
0.662
0.717
0.772
0.927
0.875
0.818
0.928
0.870
0.906
0.960
0.754
0.949
0.604
0.853
0.959
0.598
0.837
0.837
0.699
0.759
0.868
0.805
0.607
0.707
0.515
0.846
0.770
0.358
0.926
0.886
0.488
0.557
0.872
0.544
0.523
0.795
0.840
0.772
0.875
0.719
0.820
0.458
0.704
0.909
0.694
0.828
0.655
0.451
0.552
0.225
0.865
0.698
0.732
0.735
0.329
0.667
0.726
0.892
0.813
0.906
0.756
0.885
0.351
0.662
0.684
0.768
0.931
0.889
0.819
0.924
0.880
0.907
0.954
0.777
0.942
0.670
0.879
0.968
0.611
Average
0.193
0.679
0.688
0.722
0.734
false_detection
CNN-IM LSTM-IM
Table 1: F1 scores for different approaches on test. Color
formatting was applied to each column independently. The
average values shown at the bottom of the table are calculated without FD scores.
predictions for each individual view. We then have all other
methods repeat their prediction over the full sequence. This
is done to show that, on average, seeing an area multiple
times outperforms single-view predictions. We note that
these tests are clearly not fair for some categories, such as
Figure 8: Qualitative examples from test of the image-only approaches. The images presented here show the extracted and resized images that are passed to the CNN approaches. The top two rows show success cases for LSTM-I, where CNN-I was not able to correctly predict the category. The bottom two rows show failure cases for LSTM-I, where CNN-I was able to correctly predict the category. We also note that sequences with ≥9 views were chosen. The second row was trimmed to keep the figure consistent. However, we note that variable numbers of temporal views are provided throughout the dataset.
“construction site”, where some views may not even contain the category. However, we perform these tests for completeness to confirm our expectations. Results are shown in Table 3. Per-category results can be found in the appendix.
LSTM-M: 0    CNN-I: 0.685    LSTM-I: 0.693    CNN-IM: 0.695    LSTM-IM: 0.702
Table 2: Results on test instances where the metadata-only baseline (LSTM-M) is not able to correctly predict the category. These are the average F1 scores not including FD. These results show that metadata is important beyond exploiting bias in the dataset.
CNN-I-1: 0.618    CNN-I: 0.678    LSTM-I: 0.684    CNN-IM-1: 0.666    CNN-IM: 0.722    LSTM-IM: 0.735
Table 3: Average F1 scores, not including FD, for individual images from test. CNN-I-1 and CNN-IM-1 make predictions for each individual view. All other methods repeat their prediction over the full sequence.
6. Conclusion and Discussion
We present fMoW, a dataset that consists of over 1 million image and metadata pairs, of which many are temporal views of the same scene. This enables reasoning beyond visual information, as models are able to leverage temporal information and reason about the rich set of metadata features (e.g., timestamp, UTM zone) provided for each image. By posing a task in between detection and classification, we avoid the inherent challenges associated with collecting a large, geographically diverse, detection dataset, while still allowing for models to be trained that are transferable to real-world detection systems. Different methods were presented for this task that demonstrate the importance of reasoning about metadata and temporal information. All code, data, and pretrained models have been made publicly available. We hope that by releasing the dataset and code, other researchers in the CV community will find new and interesting ways to further utilize the metadata and temporal changes to a scene. We also hope to see fMoW being used to train models that are able to assist in humanitarian efforts, such as applications involving disaster relief.
Acknowledgements This work would not have been
possible without the help of everyone on the fMoW Challenge team, who we thank for their contributions. A special
thanks to: Kyle Ellis, Todd Bacastow, Alex Dunmire, and
Derick Greyling from DigitalGlobe; Rebecca Allegar, Jillian Brennan, Dan Reitz, and Ian Snyder from Booz Allen
Hamilton; Kyle Bowerman and Gődény Balázs from Topcoder; and, finally, Myron Brown, Philippe Burlina, Alfred
Mayalu, and Nicolas Norena Acosta from JHU/APL. We
also thank the professors, graduate students, and researchers
in industry and from the CV community for their suggestions and participation in discussions that helped shape the
direction of this work.
The material in this paper is based upon work supported by the Office of the Director of National Intelligence
(ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract 2017-17032700004. The views
and conclusions contained herein are those of the authors
and should not be interpreted as necessarily representing
the official policies or endorsements, either expressed or
implied, of the ODNI, IARPA, or the U.S. Government.
The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding
any copyright annotation therein.
path (String).
6. Pan Resolution Ground sample distance of panchromatic band (pan-GSD) in the image strip, measured in
meters (Double). start, end, min, and max values are also included. start and end represent the
pan-GSD for the first and last scan lines, respectively.
min and max represent the minimum and maximum
pan-GSD for all scan lines, respectively.
7. Multi Resolution Ground sample distance of multispectral bands (multi-GSD) in the image strip, measured in meters (Double). start, end, min, and
max values are also included. start and end represent the multi-GSD for the first and last scan lines,
respectively. min and max represent the minimum and
maximum multi-GSD for all scan lines, respectively.
8. Target Azimuth Azimuth angle of the sensor with
respect to the center of the image strip, measured in
degrees (Double). start, end, min, and max values
are also included. start and end represent the target
azimuth for the first and last scan lines, respectively.
min and max represent the minimum and maximum
target azimuth for all scan lines, respectively.
9. Sun Azimuth Azimuth angle of the sun measured
from north, clockwise in degrees, to the center of the
image strip, measured in degrees (Double). min and
max values are also included. min and max represent
the minimum and maximum sun azimuth for all scan
lines, respectively.
10. Sun Elevation Elevation angle of the sun measured
from the horizontal, measured in degrees (Double).
min and max values are also included. min and max
represent the minimum and maximum sun elevation
for all scan lines, respectively.
11. Off-Nadir Angle The off nadir angle of the satellite
with respect to the center of the image strip, measured
in degrees (Double). start, end, min, and max values are also included. start and end represent the
off-nadir angle for the first and last scan lines, respectively. min and max represent the minimum and maximum off-nadir angle for all scan lines, respectively.
Appendix Overview
In this document, we provide:
Appendix I: Descriptions of the metadata features and
distributions for country codes and UTM zones.
Appendix II: Additional collection details.
Appendix III: Additional results.
Appendix IV: Examples from our dataset.
Appendix I. Metadata Features and Statistics
1. ISO Country Code ISO Alpha-3 country code
(String). There are a total of 247 possible country
codes, 207 of which are present in fMoW.
2. UTM Zone Universal Transverse Mercator. There
are 60 UTM zones, which are 6◦ in width. We provide
a number for the UTM zone (1-60), along with a letter
representing the latitude band. There are a total of 20
latitude bands, which range from “C” to “X” (“I” and
“O” are not included).
3. Timestamp UTC timestamp.
Datetime format
(Python): “%Y-%m-%dT%H:%M:%SZ” (String).
4. Cloud Cover Fraction of the image strip, not image
chip, that is completely obscured by clouds on a scale
of 0-100 (Integer).
5. Scan Direction The direction the sensor is pointed
when collecting an image strip. Either “Forward”,
when the image is collected ahead of the orbital path or
“Reverse” when the image is taken behind the orbital
Country Codes Here we show the counts for each
unique country code in fMoW. Counts are incremented
once for each sequence instead of once per metadata file.
[(“USA”, 18750), (“FRA”, 7470), (“ITA”, 6985),
(“RUS”, 6913), (“CHN”, 6597), (“DEU”, 4686), (“GBR”,
4496), (“BRA”, 3820), (“CAN”, 3128), (“TUR”, 2837),
(“JPN”, 2542), (“IDN”, 2448), (“ESP”, 2402), (“AUS”,
2105), (“DZA”, 1849), (“IND”, 1804), (“UKR”, 1735),
(“CZE”, 1713), (“POL”, 1386), (“MEX”, 1274), (“ARG”,
1248), (“NLD”, 1236), (“SYR”, 1224), (“BEL”, 1190),
(“PHL”, 1179), (“IRQ”, 1129), (“EGY”, 1041), (“ZAF”,
924), (“CHL”, 888), (“LTU”, 871), (“LBY”, 863), (“KOR”,
809), (“CHE”, 788), (“LVA”, 772), (“PRT”, 722), (“YEM”,
701), (“BLR”, 601), (“GRC”, 592), (“AUT”, 572), (“SVN”,
570), (“ARE”, 566), (“IRN”, 540), (“COL”, 509), (“TWN”,
509), (“TZA”, 475), (“NZL”, 465), (“PER”, 459), (“HTI”,
417), (“KEN”, 405), (“NGA”, 383), (“VEN”, 378),
(“PRK”, 371), (“ECU”, 351), (“IRL”, 335), (“MYS”, 328),
(“BOL”, 313), (“FIN”, 288), (“KAZ”, 268), (“MAR”,
266), (“TUN”, 257), (“CUB”, 256), (“EST”, 247), (“SAU”,
246), (“HUN”, 222), (“THA”, 219), (“NPL”, 196),
(“HRV”, 187), (“NOR”, 183), (“SVK”, 175), (“SEN”, 172),
(“BGD”, 171), (“HND”, 167), (“SWE”, 166), (“BGR”,
165), (“HKG”, 154), (“DNK”, 153), (“MDA”, 147),
(“ROU”, 142), (“ZWE”, 141), (“SRB”, 140), (“GTM”,
140), (“DOM”, 134), (“LUX”, 133), (“SDN”, 132),
(“VNM”, 126), (“URY”, 120), (“CRI”, 119), (“SOM”,
112), (“ISL”, 110), (“LKA”, 110), (“QAT”, 108), (“PRY”,
107), (“SGP”, 106), (“OMN”, 105), (“PRI”, 95), (“NIC”,
87), (“NER”, 85), (“SSD”, 82), (“UGA”, 79), (“SLV”,
79), (“JOR”, 78), (“CMR”, 77), (“PAN”, 74), (“PAK”,
72), (“UZB”, 70), (“CYP”, 67), (“KWT”, 67), (“ALB”,
66), (“CIV”, 65), (“BHR”, 65), (“GIN”, 64), (“MLT”,
63), (“JAM”, 62), (“AZE”, 62), (“GEO”, 60), (“SLE”,
59), (“ETH”, 58), (“LBN”, 57), (“ZMB”, 55), (“TTO”,
54), (“LBR”, 52), (“BWA”, 51), (“ANT”, 50), (“BHS”,
50), (“MNG”, 46), (“MKD”, 45), (“GLP”, 45), (“COD”,
45), (“KO-”, 42), (“BEN”, 42), (“GHA”, 41), (“MDG”,
36), (“MLI”, 35), (“AFG”, 35), (“ARM”, 33), (“MRT”,
33), (“KHM”, 32), (“CPV”, 31), (“TKM”, 31), (“MMR”,
31), (“BFA”, 29), (“BLZ”, 29), (“NCL”, 28), (“AGO”,
27), (“FJI”, 26), (“TCD”, 25), (“MTQ”, 25), (“GMB”,
23), (“SWZ”, 23), (“BIH”, 21), (“CAF”, 19), (“GUF”,
19), (“PSE”, 19), (“MOZ”, 18), (“NAM”, 18), (“SUR”,
17), (“GAB”, 17), (“LSO”, 16), (“ERI”, 15), (“BRN”,
14), (“REU”, 14), (“GUY”, 14), (“MAC”, 13), (“TON”,
13), (“ABW”, 12), (“PYF”, 12), (“TGO”, 12), (“BRB”,
12), (“VIR”, 11), (“CA-”, 11), (“DJI”, 11), (“FLK”, 11),
(“MNE”, 11), (“KGZ”, 11), (“ESH”, 10), (“LCA”, 10),
(“BMU”, 10), (“COG”, 9), (“ATG”, 9), (“BDI”, 9), (“GIB”,
8), (“LAO”, 8), (“GNB”, 8), (“DMA”, 8), (“KNA”, 8),
(“GNQ”, 7), (“RWA”, 7), (“BTN”, 7), (“TJK”, 6), (“TCA”,
5), (“VCT”, 4), (“WSM”, 3), (“IOT”, 3), (“AND”, 3),
(“ISR”, 3), (“AIA”, 3), (“MDV”, 2), (“SHN”, 2), (“VGB”,
2), (“MSR”, 2), (“PNG”, 1), (“MHL”, 1), (“VUT”, 1),
(“GRD”, 1), (“VAT”, 1), (“MCO”, 1)]
1086), (“51R”, 1069), (“36S”, 1046), (“35T”, 1038),
(“36R”, 1037), (“49M”, 1026), (“48M”, 1021), (“10T”,
1010), (“53S”, 1001), (“10S”, 955), (“14R”, 935), (“19T”,
928), (“30S”, 912), (“17S”, 875), (“17R”, 874), (“43P”,
854), (“50S”, 796), (“36U”, 767), (“50R”, 751), (“33S”,
751), (“32S”, 746), (“14S”, 730), (“34T”, 728), (“12S”,
716), (“37M”, 705), (“13S”, 676), (“37T”, 667), (“36T”,
653), (“15S”, 629), (“55H”, 618), (“34S”, 604), (“29S”,
600), (“38P”, 598), (“15T”, 586), (“22J”, 585), (“18Q”,
549), (“15R”, 539), (“35S”, 511), (“10U”, 497), (“21H”,
492), (“36V”, 491), (“19H”, 482), (“48R”, 476), (“49S”,
459), (“48S”, 446), (“49Q”, 444), (“29T”, 438), (“16P”,
429), (“56H”, 425), (“14Q”, 422), (“40R”, 420), (“39R”,
413), (“39U”, 406), (“18N”, 385), (“35J”, 383), (“37V”,
380), (“50T”, 379), (“56J”, 355), (“34V”, 351), (“43V”,
347), (“29U”, 346), (“38U”, 345), (“17M”, 328), (“38T”,
323), (“19P”, 323), (“51S”, 317), (“54H”, 311), (“49R”,
295), (“34H”, 293), (“22K”, 293), (“48N”, 276), (“20H”,
273), (“50Q”, 268), (“28P”, 262), (“18L”, 260), (“24M”,
258), (“24L”, 256), (“21J”, 255), (“41V”, 254), (“13T”,
254), (“47N”, 253), (“40U”, 253), (“45R”, 251), (“43Q”,
245), (“51Q”, 243), (“51T”, 240), (“39S”, 239), (“19K”,
238), (“19Q”, 237), (“59G”, 236), (“43R”, 234), (“12T”,
230), (“49T”, 227), (“41U”, 223), (“32V”, 219), (“30V”,
212), (“13Q”, 212), (“40V”, 210), (“16R”, 210), (“20T”,
210), (“38R”, 204), (“36J”, 203), (“46T”, 200), (“45T”,
197), (“44U”, 196), (“15Q”, 190), (“50L”, 190), (“32P”,
184), (“60H”, 182), (“47P”, 182), (“20P”, 181), (“24K”,
178), (“17Q”, 178), (“35K”, 169), (“20J”, 168), (“11U”,
165), (“18H”, 164), (“52T”, 163), (“11T”, 161), (“36N”,
158), (“39V”, 157), (“20K”, 157), (“39Q”, 155), (“12U”,
149), (“38V”, 147), (“18P”, 147), (“23L”, 147), (“18G”,
146), (“31N”, 146), (“19J”, 142), (“33P”, 141), (“40Q”,
136), (“13R”, 136), (“47T”, 132), (“47R”, 126), (“48U”,
124), (“32R”, 123), (“15P”, 121), (“39P”, 117), (“48P”,
117), (“33R”, 116), (“45U”, 113), (“43S”, 111), (“44N”,
109), (“54T”, 109), (“32N”, 109), (“36W”, 108), (“17P”,
108), (“36P”, 105), (“31R”, 104), (“56K”, 101), (“20Q”,
101), (“39T”, 97), (“16Q”, 96), (“29R”, 95), (“25L”,
92), (“45Q”, 91), (“46Q”, 91), (“48T”, 90), (“44Q”, 89),
(“42V”, 87), (“29N”, 87), (“43U”, 86), (“4Q”, 86), (“47Q”,
85), (“48Q”, 84), (“30N”, 83), (“19G”, 82), (“25M”, 81),
(“42Q”, 80), (“44P”, 80), (“20L”, 77), (“50J”, 77), (“53U”,
76), (“38N”, 75), (“27W”, 75), (“44R”, 75), (“33V”,
74), (“34R”, 72), (“49L”, 70), (“36M”, 69), (“40S”, 69),
(“12R”, 68), (“37P”, 68), (“52R”, 65), (“14T”, 64), (“50U”,
62), (“35H”, 62), (“50H”, 61), (“28R”, 60), (“54U”,
59), (“46V”, 58), (“44T”, 56), (“21K”, 56), (“55G”, 56),
(“22L”, 56), (“35P”, 55), (“31P”, 54), (“29P”, 54), (“35R”,
52), (“30R”, 51), (“19U”, 50), (“53T”, 49), (“46U”, 49),
(“50N”, 48), (“47S”, 48), (“42R”, 48), (“37Q”, 47), (“19L”,
47), (“14U”, 47), (“28Q”, 46), (“37N”, 45), (“19F”, 45),
(“42U”, 44), (“36K”, 42), (“37R”, 40), (“37W”, 40),
UTM Zones Here we show the counts for each unique
UTM zone in fMoW. Counts are incremented once for each
sequence instead of once per metadata file.
[(“31U”, 5802), (“32T”, 4524), (“33T”, 4403), (“30U”,
4186), (“32U”, 3864), (“33U”, 3315), (“31T”, 3150),
(“18T”, 2672), (“17T”, 2339), (“34U”, 2049), (“37S”,
1718), (“30T”, 1686), (“37U”, 1672), (“23K”, 1627),
(“18S”, 1481), (“11S”, 1388), (“16T”, 1283), (“54S”,
1244), (“38S”, 1229), (“31S”, 1227), (“35U”, 1137),
(“35V”, 1116), (“52S”, 1115), (“16S”, 1110), (“51P”,
(“41S”, 38), (“42S”, 38), (“38Q”, 37), (“30P”, 37), (“42T”,
36), (“35L”, 36), (“46R”, 36), (“52U”, 35), (“60G”, 35),
(“27V”, 34), (“45V”, 34), (“35W”, 34), (“13U”, 34),
(“35M”, 34), (“18M”, 32), (“17L”, 32), (“41W”, 32),
(“17N”, 31), (“21N”, 31), (“23M”, 30), (“21L”, 29),
(“28S”, 28), (“58K”, 28), (“22M”, 28), (“41R”, 27),
(“18R”, 27), (“10V”, 26), (“57U”, 26), (“34K”, 26),
(“49U”, 25), (“6V”, 25), (“38L”, 25), (“20G”, 25), (“33L”,
24), (“60K”, 24), (“55K”, 23), (“51N”, 23), (“22H”,
22), (“22N”, 22), (“47V”, 22), (“41T”, 21), (“44V”, 21),
(“36Q”, 21), (“46S”, 20), (“22T”, 20), (“34N”, 19), (“20U”,
19), (“12Q”, 19), (“12V”, 19), (“19N”, 18), (“31Q”, 18),
(“21M”, 18), (“52L”, 18), (“56V”, 18), (“52V”, 18), (“23J”,
16), (“45W”, 16), (“9U”, 16), (“34J”, 16), (“27P”, 16),
(“43W”, 15), (“1K”, 14), (“33M”, 14), (“40W”, 14),
(“40K”, 14), (“43T”, 14), (“55T”, 14), (“51U”, 13), (“53K”,
13), (“34M”, 13), (“32M”, 13), (“37L”, 13), (“21P”, 12),
(“50P”, 12), (“35N”, 12), (“6K”, 11), (“59H”, 11), (“33K”,
11), (“20M”, 11), (“49N”, 11), (“5Q”, 10), (“6W”, 10),
(“26Q”, 10), (“39L”, 10), (“47U”, 10), (“34W”, 10),
(“50K”, 10), (“8V”, 10), (“20S”, 10), (“40T”, 9), (“51V”,
9), (“42W”, 8), (“60W”, 8), (“53H”, 8), (“50V”, 8), (“20F”,
8), (“53L”, 7), (“18F”, 7), (“35Q”, 7), (“30Q”, 7), (“44S”,
7), (“15M”, 7), (“5V”, 7), (“54J”, 7), (“39W”, 6), (“49P”,
6), (“50M”, 6), (“19V”, 6), (“21F”, 6), (“20N”, 5), (“14P”,
5), (“34P”, 5), (“53J”, 5), (“38M”, 5), (“51K”, 5), (“29Q”,
4), (“11R”, 4), (“49V”, 4), (“48V”, 4), (“51M”, 4), (“38W”,
4), (“33N”, 4), (“45S”, 4), (“27Q”, 4), (“55J”, 3), (“19M”,
3), (“53V”, 3), (“2W”, 3), (“32Q”, 3), (“2L”, 3), (“16M”,
3), (“57W”, 3), (“43M”, 3), (“53W”, 2), (“43N”, 2), (“52J”,
2), (“28M”, 2), (“56T”, 2), (“33H”, 2), (“21T”, 2), (“44W”,
2), (“15V”, 1), (“33W”, 1), (“60V”, 1), (“18K”, 1), (“31M”,
1), (“54M”, 1), (“58P”, 1), (“58W”, 1), (“40X”, 1), (“58G”,
1), (“57V”, 1), (“16U”, 1), (“59K”, 1), (“52N”, 1), (“2K”,
1), (“33Q”, 1), (“34Q”, 1), (“11V”, 1), (“56W”, 1), (“26P”,
1), (“28W”, 1), (“59W”, 1), (“38K”, 1), (“26S”, 1), (“7L”,
1), (“56U”, 1), (“55V”, 1)]
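Counts like the ones above can be reproduced by tallying the UTM zone of one metadata file per sequence. A sketch follows; the directory layout and the "utm" field name are assumptions about how the released data is organized.

```python
import json
from collections import Counter
from pathlib import Path

def count_sequences_per_utm_zone(metadata_root):
    """Count sequences per UTM zone, using one metadata JSON per sequence directory."""
    counts = Counter()
    for seq_dir in Path(metadata_root).iterdir():
        if not seq_dir.is_dir():
            continue
        json_files = sorted(seq_dir.glob("*.json"))
        if json_files:                                  # one increment per sequence, not per file
            with open(json_files[0]) as f:
                counts[json.load(f)["utm"]] += 1
    return counts.most_common()
```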
existence, non-existence, and cloud cover.
Appendix II. Dataset Collection
Introduced in the main paper, CNN-I-1 and CNN-IM-1
make predictions for each individual view. All other methods repeat their prediction over the full sequence. Again,
we note that these tests are clearly not fair to some categories, such as “construction site”, where some views may
not even contain the category. However, we show results
for these tests for completeness. Only the average values,
which do not include “false detection” results, are shown in
the main paper. We show per-category results in Table 4.
Figure 9: Sample image (“wind farm”) of what a GeoHIVE
user might see while validating potential fMoW features.
Instructions can be seen in the top-left corner that inform
users to press the ‘1’, ‘2’, or ‘3’ keys to validate existence,
non-existence, or cloud obscuration of a particular object.
For validation of object localization, a different interface
is used that asks users to draw a bounding box around the
object of interest after being given an initial seed point. The
visualization for this is shown in Figure 10, and the seed
point can be seen as a green dot located on the object of interest. Users are additionally provided some instructions regarding how large of a box to draw, which may vary by object class. This interface is more complex than the location
selection interface, which is why it is performed after object
existence can be confirmed and non-cloudy high-quality imagery is obtained. A smaller and more experienced group of
users is also used for this task to help ensure the quality of
the annotations.
Appendix III. Additional Results
The location selection phase was used to identify potential locations that map to our categories while also ensuring geographic diversity. Potential locations were drawn
from several Volunteered Geographic Information (VGI)
datasets, which were conflated and curated to remove duplicates and ensure geographic diversity. The remaining locations were then processed using DigitalGlobe’s GeoHIVE
crowdsourcing platform. Members of the GeoHIVE crowd
were asked to validate the presence of categories in satellite
images, as shown in Figure 9. The interface uses centerpoint location information to draw a circle around a possible
object of interest. The interface then asks users to rapidly
verify the existence of a particular label, as extracted from
the VGI datasets, using the ‘1’, ‘2’, and ‘3’ keys to represent
Appendix IV. Dataset Examples
Figure 11 shows one example for each category in our
dataset. For viewing purposes, regions within the full image chip were extracted using the scaled bounding box coordinates for the categories. For the baseline approaches
CNN-I-1
CNN-I
LSTM-I
airport
airport hangar
airport terminal
amusement park
aquaculture
archaeological site
barn
border checkpoint
burial site
car dealership
construction site
crop field
dam
debris or rubble
educational institution
electric substation
factory or powerplant
fire station
flooded road
fountain
gas station
golf course
ground transportation station
helipad
hospital
impoverished settlement
interchange
lake or pond
lighthouse
military facility
multi-unit residential
nuclear powerplant
office building
oil or gas facility
park
parking lot or garage
place of worship
police station
port
prison
race track
railway bridge
recreational facility
road bridge
runway
shipyard
shopping mall
single-unit residential
smokestack
solar farm
space facility
stadium
storage tank
surface mine
swimming pool
toll booth
tower
tunnel opening
waste disposal
water treatment facility
wind farm
zoo
0.662
0.815
0.664
0.653
0.698
0.642
0.458
0.598
0.232
0.736
0.664
0.286
0.853
0.755
0.297
0.461
0.771
0.406
0.337
0.253
0.659
0.691
0.852
0.627
0.703
0.331
0.429
0.804
0.615
0.634
0.520
0.388
0.548
0.180
0.692
0.563
0.710
0.560
0.187
0.630
0.540
0.847
0.645
0.864
0.667
0.781
0.388
0.549
0.643
0.665
0.784
0.748
0.844
0.895
0.746
0.859
0.841
0.642
0.789
0.475
0.815
0.899
0.471
0.737
0.864
0.746
0.726
0.751
0.743
0.532
0.678
0.268
0.788
0.712
0.436
0.879
0.805
0.330
0.517
0.852
0.461
0.382
0.254
0.744
0.779
0.906
0.691
0.814
0.385
0.484
0.852
0.700
0.727
0.564
0.433
0.575
0.229
0.757
0.624
0.778
0.637
0.216
0.646
0.614
0.893
0.704
0.908
0.712
0.847
0.416
0.617
0.700
0.756
0.862
0.878
0.866
0.933
0.789
0.916
0.874
0.741
0.852
0.562
0.842
0.932
0.531
0.732
0.819
0.685
0.757
0.736
0.767
0.507
0.675
0.311
0.802
0.771
0.423
0.871
0.778
0.536
0.482
0.865
0.461
0.450
0.240
0.720
0.806
0.926
0.733
0.866
0.395
0.546
0.691
0.625
0.751
0.627
0.472
0.759
0.245
0.767
0.653
0.791
0.642
0.225
0.621
0.657
0.880
0.759
0.925
0.728
0.806
0.326
0.622
0.705
0.762
0.884
0.788
0.903
0.920
0.754
0.903
0.878
0.765
0.880
0.516
0.786
0.934
0.563
0.825
0.904
0.647
0.659
0.852
0.752
0.531
0.635
0.367
0.750
0.662
0.252
0.902
0.785
0.296
0.537
0.786
0.488
0.368
0.553
0.749
0.704
0.895
0.663
0.730
0.356
0.720
0.898
0.604
0.734
0.575
0.429
0.588
0.201
0.761
0.611
0.645
0.631
0.165
0.668
0.596
0.899
0.697
0.868
0.710
0.861
0.397
0.589
0.645
0.673
0.828
0.859
0.853
0.895
0.779
0.857
0.918
0.674
0.914
0.478
0.783
0.928
0.552
0.840
0.905
0.696
0.758
0.901
0.798
0.624
0.697
0.465
0.821
0.716
0.347
0.933
0.839
0.365
0.585
0.847
0.534
0.471
0.634
0.811
0.767
0.932
0.734
0.834
0.447
0.764
0.912
0.661
0.805
0.630
0.517
0.650
0.225
0.824
0.658
0.694
0.703
0.199
0.710
0.656
0.936
0.762
0.911
0.742
0.899
0.390
0.676
0.711
0.792
0.852
0.885
0.871
0.930
0.837
0.894
0.949
0.749
0.943
0.583
0.841
0.950
0.606
0.821
0.835
0.726
0.782
0.846
0.790
0.622
0.682
0.497
0.830
0.748
0.407
0.929
0.861
0.439
0.601
0.859
0.542
0.516
0.809
0.857
0.785
0.898
0.764
0.804
0.468
0.691
0.927
0.676
0.854
0.685
0.523
0.494
0.213
0.859
0.685
0.704
0.729
0.317
0.642
0.729
0.924
0.794
0.909
0.758
0.900
0.411
0.675
0.658
0.782
0.882
0.971
0.879
0.921
0.848
0.881
0.947
0.777
0.932
0.632
0.864
0.972
0.637
Average
0.618
0.678
0.684
0.666
0.722
0.735
false_detection
(a) ground transportation station
(b) helipad
Figure 10: Sample images of the interface used to more
precisely localize objects within an image. In each example, a green dot is placed near the center of the pertinent
object. Users are able to draw a bounding box by clicking
and dragging. Instructions at the top of each example inform the user how to use the interface and also provide any
category-specific instructions that may be relevant. Comments regarding issues such as clouds or object misclassification can be entered near the bottom of the page before
submitting an annotation.
CNN-IM-1 CNN-IM
LSTM-IM
Table 4: F1 scores for different approaches on an individual
image basis. Color formatting was applied to each column
independently. The average values shown at the bottom of
the table are calculated without the false detection scores.
CNN-I-1 and CNN-IM-1 make predictions for each individual view. All other methods repeat their prediction over
the full sequence.
presented in the main paper, smaller boxes were given more context than larger boxes, and therefore it may appear for some categories with smaller sizes (e.g., smoke stacks) that there is a lot more context than expected. It is important to keep in mind that the images for each category in the full dataset vary in quality, have different weather conditions (e.g., snow cover), contain drastically different context (e.g., desert vs. urban), have different levels of difficulty for recognition, and other variations.
Figure 11: One example per category in fMoW.
| 1 |
arXiv:1403.4813v3 [cs.DC] 15 Sep 2014
MPISE: Symbolic Execution of MPI Programs
Xianjin Fu1,2, Zhenbang Chen1,2, Yufeng Zhang1,2, Chun Huang1,2, Wei Dong1 and Ji Wang1,2
1 School of Computer, National University of Defense Technology, P. R. China
2 Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, P. R. China
{xianjinfu,zbchen,yfzhang,chunhuang,wdong,wj}@nudt.edu.cn
Abstract. The Message Passing Interface (MPI) plays an important role in parallel computing, and many parallel applications are implemented as MPI programs. Existing bug-detection methods for MPI programs fail to provide both input coverage and non-determinism coverage, and therefore miss bugs. In this paper, we employ symbolic execution to ensure input coverage and propose an on-the-fly scheduling algorithm that reduces the interleavings explored for non-determinism coverage while preserving soundness and completeness. We have implemented our approach in a tool, called MPISE, which can automatically detect deadlocks and runtime bugs in MPI programs. The results of experiments on benchmark programs and real-world MPI programs indicate that MPISE finds bugs effectively and efficiently. In addition, our tool provides diagnostic information and a replay mechanism to help understand bugs.
1 Introduction
In the past decades, the Message Passing Interface (MPI) [19] has become the de facto standard programming model for parallel programs, especially in the field of high performance computing. A significant part of parallel programs are written using MPI, and many of them are developed in dozens of person-years [14].
Currently, the developers of MPI programs usually rely on traditional methods, such as testing and debugging [2][1], to improve their confidence in the programs. In practice, developers may spend a lot of time on testing, yet only a small part of the program's behavior is explored. MPI programs have the common features of concurrent systems, including non-determinism, the possibility of deadlock, etc. These features make the coverage limitations of testing even more severe. Usually, an MPI program is run as several individual processes. Because of non-determinism, the result of an MPI program depends on the execution order of the statements in different processes; that is, an MPI program may behave differently with the same input on different executions. Hence, it can be hard to find the bugs in an MPI program through any specific execution.
To improve the reliability of MPI programs, many techniques have been proposed. Basically, the existing work falls into two categories: static analysis methods [21][22][23] and dynamic analysis methods [27][24]. A static method analyzes an MPI program without actually running it. The analysis can be carried out at the code level [22] or the model level [21]. Usually, a static method needs to make an abstraction of the MPI program under analysis [21][23]; therefore, many static methods suffer from false alarms.
Dynamic methods, such as testing and runtime verification, need to run the analyzed MPI programs and use the runtime information for correctness checking [17][20][26], online verification [25][27], debugging [1][4], etc. Traditional testing methods work efficiently in practice by checking the correctness of a run under a given test harness. However, testing cannot guarantee coverage of the non-determinism even after many runs with the same input. Other dynamic analysis methods, such as ISP [25], provide a coverage guarantee over the space of non-determinism and scale well, but they are still limited to concrete program inputs. TASS [22] employs symbolic execution and model checking to verify MPI programs, but it only works on small programs due to its limited support for runtime library models.
In this paper, we use symbolic execution to reason about all the inputs and aim to guarantee coverage of both input and non-determinism. We symbolically execute the statements in each process of an MPI program to find input-related bugs, especially runtime errors and deadlocks. For the non-determinism brought by the concurrency features, we use an on-the-fly scheduler to reduce the state space explored in the analysis, while ensuring soundness and completeness. Specifically, to handle the non-determinism resulting from wildcard receives in MPI programs, we dynamically match the source of a wildcard receive against all the possible specific sources in a lazy style, which avoids missing bugs. Furthermore, unlike the symbolic execution plus model checking method in [22], which uses an MPI model to simulate the runtime behaviors of the MPI library, we use a true MPI library as the model, which enables us to analyze real-world MPI programs.
To summarize, our paper has the following main contributions: firstly, we
propose an on-the-fly scheduling algorithm, which can reduce unnecessary interleaving explorations while ensuring the soundness and completeness; secondly,
when attacking the non-determinism caused by wildcard receives, we propose
a technique, called lazy matching, to avoid blindly matching each process as
the source of a wildcard receive, which may lead to false positives; finally, we
have implemented our approach in a tool called MPISE, and conducted extensive experiments to justify its effectiveness and efficiency in finding bugs in MPI
programs.
The rest of this paper is organized as follows. Section 2 introduces the background and shows the basic idea of MPISE by motivating examples. Section 3 describes the details of the algorithms implemented in MPISE. Section 4 explains our implementation based on Cloud9 and shows the experimental results. Finally, Section 5 discusses related work, and Section 6 concludes.
2 Background and Motivating Example
In this section, we briefly describe symbolic execution and the scope of the MPI
APIs we are concerned with, then show how our algorithm works by motivating
examples.
2.1 Symbolic execution
Symbolic execution [16] is a SAT/SMT-based program analysis technique originally introduced in the 1970s. With the significant improvement in SAT/SMT techniques and computing power, symbolic execution has drawn renewed interest recently. The main idea is that, rather than using concrete values, symbolic execution uses symbolic values as input values and tracks the results of numerical operations on symbolic values. Hence, the results of a program under symbolic execution are symbolic expressions. Most importantly, symbolic execution uses a constraint over the symbolic values, called the path condition (PC), to represent a path of a program. At the beginning, the path condition is true. When encountering a branch statement, symbolic execution explores both directions of the branch. To explore one direction, symbolic execution conjoins the condition cond corresponding to that direction with the PC and queries an underlying solver with PC ∧ cond to decide whether the direction is feasible. If the answer is yes, symbolic execution continues to execute the statements along that direction, and the PC is updated to PC ∧ cond; otherwise, the direction is infeasible, so symbolic execution backtracks to the branch statement and starts to explore the other direction. The selection of which direction of a branch to explore first can be random or guided by heuristics. Once symbolic execution reaches the end of a program, the accumulated PC represents the constraints that the inputs need to satisfy to drive the program along the explored path. Therefore, we can consider symbolic execution as a function that computes a set of PCs for a program. Naturally, we can use the PCs of the program for automatic test generation [8][9], bug finding [8][13], verification [12], etc.
As explained above, symbolic execution is a precise program analysis technique, because each PC represents a real feasible path of the program under analysis. Therefore, when used for bug finding, symbolic execution does not suffer from false alarms, and the bugs found are real bugs. However, one of the major challenges symbolic execution faces is path space explosion, which is theoretically exponential in the number of branches in the program.
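As a small illustration (ours, not from the paper), the following C fragment shows how path conditions accumulate under a KLEE-style symbolic-execution harness; klee_make_symbolic marks the input as symbolic, and the comments list the PC of each explored path:

#include "klee/klee.h"

/* Illustrative driver only: symbolic execution of classify() explores three
   paths, whose path conditions are noted in the comments below.            */
int classify(int x) {
  if (x > 10) {          /* branch: the PC forks into (x > 10) and !(x > 10) */
    if (x == 42)
      return 2;          /* PC = (x > 10) && (x == 42)                       */
    return 1;            /* PC = (x > 10) && (x != 42)                       */
  }
  return 0;              /* PC = !(x > 10)                                   */
}

int main(void) {
  int x;
  klee_make_symbolic(&x, sizeof(x), "x");  /* mark x as a symbolic input     */
  return classify(x);
}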
2.2 MPI Programs
An MPI program is a sequential program in which some MPI APIs are used. The running of an MPI program usually consists of a number of parallel processes, say P0, P1, ..., Pn−1, that communicate via message passing based on the MPI APIs and the supporting platform. The message passing operations we consider in this paper include:
– Send(dest): send a message to Pdest (dest = 0, . . . , n − 1), which is the destination process of the Send operation. Note that only synchronous communications are considered in this paper, so this operation blocks until a matching receive has been posted.
– Recv(src): receive a message from Psrc (src = 0, . . . , n − 1, ANY), which is the source process of the Recv operation. Note that src can take the wildcard value "ANY", which means this Recv operation accepts messages from any process. Because Send and Recv are synchronous, a Send/Recv that fails to match with a corresponding Recv/Send results in a deadlock.
– Barrier(): synchronization of all processes, which means the statements of a process must not be issued past this barrier until all the processes have synchronized. Therefore, an MPI program is expected to eventually reach a state in which all the processes have reached their barrier calls; if this does not hold, there is a deadlock.
The preceding three MPI operations are the most important operations we consider in this paper. They cover the most frequently used synchronous communications in MPI programs.
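For concreteness, here is a minimal C program (our illustration, not one of the paper's benchmarks) that deadlocks under the synchronous semantics above when run with two processes: both ranks issue a synchronous send first, so neither ever reaches its receive.

#include <mpi.h>

int main(int argc, char **argv) {
  int rank, peer, in = 0, out;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  peer = 1 - rank;                       /* rank 0 talks to rank 1 and vice versa */
  out = rank;
  /* Both processes block here: a synchronous send completes only after the
     matching receive has been posted, but neither process can post it.      */
  MPI_Ssend(&out, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
  MPI_Recv(&in, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  MPI_Finalize();
  return 0;
}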
2.3 Motivating Examples
Usually, an MPI program is fed with inputs to perform a computational task, and the bugs of the program may be input-dependent. On the other hand, due to non-determinism, even with the same inputs, one may find that bugs occur only "sometimes".
Consider the MPI program in Fig 1. If the program runs with an input that is not equal to 'a', the three processes finish normally with two matched Send and Recv pairs, as indicated by Fig 2(a). However, if the program is fed with the input 'a', a deadlock may happen: Proc1 may first receive a message from Proc2 via its wildcard receive, after which it waits for another message from Proc2 while Proc0 still expects Proc1 to receive its message, as shown in Fig 2(c). Therefore, tools that do not provide input space coverage would surely fail to detect this bug if the program is not fed with 'a'. Even if the program happens to be run with 'a', the bug may still be missed if the wildcard receive is matched with Proc0, e.g., the case in Fig 2(b).
Thus, for detecting deadlock bugs, we need both an input coverage guarantee and a non-determinism coverage guarantee. The basic idea of our method is: we employ symbolic execution to cover all possible inputs, and we explore all the possible matches of a wildcard receive by matching it with every possible source.
In more detail, since we only symbolically execute one process at a time, we need to decide the exploration order of the processes. Each process of an MPI program has a rank; we always start from the process with the smallest rank and switch to another process only when the current process needs synchronization, such as sending or receiving a message. Thus, the switches during symbolic execution happen on-the-fly. Things become more complex when a Recv(ANY) statement is encountered, where we need to delay the selection of the corresponding sending process until all the possible sending statements have been encountered.
 1  int main(int argc, char **argv) {
 2    int x, y, myrank;
 3    MPI_Comm comm = MPI_COMM_WORLD;
 4
 5    MPI_Init(&argc, &argv);
 6    MPI_Comm_rank(comm, &myrank);
 7    if (myrank == 0) {
 8      x = 0;
 9      MPI_Ssend(&x, 1, MPI_INT, 1, 99, comm);
10    }
11    else if (myrank == 1) {
12      if (argv[1][0] != 'a')   // argc is exactly 2
13        MPI_Recv(&x, 1, MPI_INT, 0, 99, comm, NULL);
14      else
15        MPI_Recv(&x, 1, MPI_INT, MPI_ANY_SOURCE, 99, comm, NULL);
16
17      MPI_Recv(&y, 1, MPI_INT, 2, 99, comm, NULL);
18    } else if (myrank == 2) {
19      x = 20;
20      MPI_Ssend(&x, 1, MPI_INT, 1, 99, comm);
21    }
22    MPI_Finalize();
23    return 0;
24  }
Fig. 1. Example showing the need for both input and non-determinism coverage
For the MPI program in Fig 1 run with three processes, we start from Proc0, i.e., the process with rank 0. When execution reaches line 9, a Send is encountered, which means a synchronization is needed. From the send statement, we know it needs to send a message to Proc1. Thus, we switch to Proc1 and do symbolic execution from the beginning. When the branch statement at line 12 is encountered and argv[1][0] is symbolic (suppose it has the symbolic value X), the condition X ≠ 'a' is added to the path condition of the true side and its negation to the false side. We mark this point as a backtrack point and have two paths to follow, which are explained as follows:
X ≠ 'a': If we explore the true side first, the path condition, i.e., X ≠ 'a', is fed to the solver to check the feasibility of the path. The solver answers yes, so we can continue the symbolic execution of Proc1. Then, Recv(0) is met, and it is exactly matched with the send in Proc0. Therefore, both processes advance; Proc0 ends while Proc1 goes on to Recv(2). In the same manner, Proc1 is put asleep and we switch to Proc2. Again the two operations match, and the whole execution ends normally, as shown in Fig 2(a).
X == 'a': This side is also feasible. The symbolic execution of Proc1 will encounter Recv(ANY) and switch to Proc2.
[Figure: message-sequence diagrams of the three cases. (a) X ≠ 'a'; (b) X == 'a' and the wildcard receive matches with Proc0; (c) X == 'a' and the wildcard receive matches with Proc2.]
Fig. 2. Three cases of the program in Fig 1
After executing the Send at Line 20, there is no process that can be switched to. All the possible sending processes of the Recv(ANY) in Proc1 have been determined, so we now begin to handle the Recv(ANY) by matching it with each possible send. Suppose we match the Recv(ANY) with the Send of Proc0; then we continue to execute Proc1. We encounter another Recv at Line 17 that expects to receive a message from Proc2; then Proc1 and Proc2 advance, and finally the whole execution ends normally, as indicated by Fig 2(b). On the other hand, if the Recv(ANY) is matched with the Send of Proc2, then when encountering the Recv at Line 17 in Proc1, symbolic execution will switch to Proc2, but Proc2 has finished. Then, Proc0 and Proc1 cannot terminate. Hence, a deadlock is detected, as shown in Fig 2(c).
In summary, the deadlock that may happen when the program in Fig 1 is run with three processes can only be encountered when the input starts with 'a' and the Recv(ANY) in the second process is matched with the Send in the third process. Using our approach, MPISE detects it automatically. The details of our symbolic execution algorithms are introduced in the next section.
3 Symbolic execution algorithms
In this section, we first introduce a general framework for symbolic execution of MPI programs, and then present the scheduling algorithm used during symbolic execution. Furthermore, to attack the non-determinism brought by wildcard receives, we present a refined scheduling method which ensures the exploration of all the possible matches of a wildcard receive.
We start by introducing some notions. When symbolically executing a sequential program, the symbolic executor keeps track of states, each of which consists of a map that records the symbolic/concrete value of each variable, a program counter, and a path condition. For an MPI program, a state during symbolic execution is composed of the states of the parallel processes. A state s′ is said to be a successor of a state s if s′ can be obtained by symbolically executing a statement in one process. With this notion of state, we define a deadlock to be a state that has no successor and at which at least one process has not terminated. Recall that symbolic execution forks states when encountering a branch statement. For MPI programs, in addition to branch statements, the concurrency can also result in state forking. Theoretically, for the current state, if there are n processes that can be executed, there are n possible successor states. Hence, besides the number of branch statements, the number of parallel processes also makes the path space grow exponentially. Algorithm 1 presents a general framework for symbolic execution of MPI programs.
Algorithm 1: Symbolic Execution Framework
 1  Search(MP, n, slist){
 2    Active = {P0, . . . , Pn}; Inactive = ∅;
 3    NextProcCandidate = -1; worklist = {initial state};
 4    while (worklist is not empty) do
 5      s = pick next state;
 6      p = Scheduler(s);
 7      if p ≠ null then
 8        stmt = the next statement of p;
 9        SE(s, p, stmt);
10  }
Basically, the symbolic execution procedure is a worklist-based algorithm. The input consists of an MPI program, the number of parallel processes, and the symbolic variables. At the beginning, only the initial state, composed of the initial states of all the processes, is contained in the worklist. Then, new states are derived from the current state and put into the worklist. State exploration finishes when the worklist is empty. Because of state forking, a strategy for exploring the path space is needed, such as depth first search (DFS) or breadth first search (BFS). Clearly, it is hard or even impossible to explore the whole path space. In fact, for the state forking introduced by concurrency, there is often no need to add all the possible successor states to the worklist, while still capturing the behavior of the program precisely in our context. Hence, different from the usual symbolic execution algorithm, we first select a state from the worklist (Line 5, where a search algorithm can be used), and then decide (Line 6, the details of which are given in Section 3.1) which process is scheduled for symbolic execution. Finally, we symbolically execute the next statement of the scheduled process, during which new states may be generated.
For the non-communication statements in an MPI program, the symbolic execution semantics is the same as usual. In the following, we concentrate on the scheduling of the processes and the handling of the communication operations.
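As a rough sketch (ours; the actual MPISE data structures inside Cloud9 differ), the state notions above can be pictured with the following illustrative C types:

/* Illustrative names only; a sketch of the state notions used by Algorithm 1. */
#define MAX_PROCS 64

typedef enum { ACTIVE, INACTIVE, TERMINATED } ProcStatus;

typedef struct {
  int         pc;               /* program counter of this MPI process           */
  void       *store;            /* map: variable -> symbolic/concrete value      */
  void       *path_condition;   /* conjunction of branch conditions on this path */
  ProcStatus  status;           /* membership in Active / Inactive, or finished  */
} ProcState;

typedef struct {
  ProcState procs[MAX_PROCS];   /* the global state is the composition of the    */
  int       nprocs;             /* local states of the parallel processes        */
  int       next_proc_candidate;/* -1, or the process to schedule next           */
} GlobalState;

/* A global state is a deadlock if it has no successor and at least one of its
   processes has not terminated (cf. the definition in the text above).          */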
3.1 On-the-fly scheduling
With the general framework in Algorithm 1, we now introduce our scheduler, which aims to avoid naively exploring the interleavings of all the processes. During symbolic execution, a process is active if it is not asleep. Usually, we make a process asleep when it needs to communicate but the corresponding process is not ready; the details are given in Algorithm 3. We maintain the current status of each process via two sets: Active and Inactive. At the beginning, all the processes are contained in Active. If a process is made asleep, it is removed from Active and added to Inactive. Because we schedule the processes on-the-fly, we use a global variable NextProcCandidate to denote the index of the next process to symbolically execute. Algorithm 2 shows how the scheduling is done.
Algorithm 2: Scheduling the Next Process for Symbolic Execution
 1  Scheduler(s){
 2    if NextProcCandidate != −1 and ProcNextProcCandidate is active then
 3      Next = NextProcCandidate;
 4      NextProcCandidate = −1;
 5      return ProcNext;
 6    else if Active ≠ ∅ then
 7      return the process p0 with the smallest rank in Active;
 8    if Inactive ≠ ∅ then
 9      Report Deadlock;
10  }
First, we check whether there is a next process that needs to be executed and is also active. If there is one, the process identified by NextProcCandidate is selected, and the next-process global variable is reset (Lines 1–5); otherwise, we return the active process with the smallest rank, if one exists (Lines 6–7). Finally, if there is no active process that can be scheduled and the Inactive set is non-empty, i.e., there exists at least one process that has not terminated, we report that a deadlock is found (Lines 8–9).
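The following is a minimal C rendering of Algorithm 2 (our sketch with illustrative types; MPISE's real scheduler operates on Cloud9 execution states):

#include <stdio.h>

typedef enum { ACTIVE, INACTIVE, TERMINATED } ProcStatus;  /* as in the sketch above */

static int next_proc_candidate = -1;     /* global, as in Algorithm 2 */

static void report_deadlock(void) { printf("deadlock detected\n"); }

/* Returns the rank of the process to execute next, or -1 when all processes
   have terminated or a deadlock has been reported.                          */
int scheduler(const ProcStatus status[], int nprocs) {
  if (next_proc_candidate != -1 && status[next_proc_candidate] == ACTIVE) {
    int next = next_proc_candidate;      /* resume the process we switched to */
    next_proc_candidate = -1;
    return next;
  }
  for (int p = 0; p < nprocs; p++)       /* smallest-ranked active process    */
    if (status[p] == ACTIVE)
      return p;
  for (int p = 0; p < nprocs; p++)       /* no active process left:           */
    if (status[p] == INACTIVE) {         /* someone has not terminated        */
      report_deadlock();
      return -1;
    }
  return -1;                             /* all processes terminated          */
}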
Now, we explain how each statement is symbolically executed. In Algorithm 3, we mainly give the handling of the MPI APIs considered in this paper. The local statements in each process do not influence the other processes, and the symbolic execution of basic statements, such as assignments and branches, is the same as in the traditional approach [8]; hence, the symbolic execution of local statements is omitted for the sake of space. In Algorithm 3, Advance(S) denotes the procedure in which the program counter of each process in S is advanced, and Match(p, q) denotes the procedure in which the synchronization between p and q happens, i.e., the receiver receives the data sent by the sender, and the program counters of p and q are both advanced.
If a Send(dest) is encountered and there is a process in Inactive that matches the statement, we move that process from Inactive to Active (Line 5) and advance the two processes (Line 6). If there is no process that can receive the message, we add the current process to the Inactive set (Line 8) and switch to the destination process of the send operation (Line 9). The execution of a receive operation is similar, except that when the receive operation is a wildcard receive, we make the current process asleep (the reason is explained in Section 3.2).
Algorithm 3: Symbolic Execution of a Statement
 1  SE(s, p, stmt){
 2    switch kindof(stmt) do
 3      case Send(dest)
 4        if stmt has a matched process q ∈ Inactive then
 5          Inactive = Inactive \ {q}; Active = Active ∪ {q};
 6          Match(p, q);
 7        else
 8          Inactive = Inactive ∪ {p};
 9          NextProcCandidate = dest;
10        return;
11      case Recv(src)
12        if src != MPI_ANY_SOURCE then
13          if stmt has a matched process q ∈ Inactive then
14            Inactive = Inactive \ {q}; Active = Active ∪ {q};
15            Match(p, q);
16          else
17            Inactive = Inactive ∪ {p};
18            NextProcCandidate = src;
19        else
20          Inactive = Inactive ∪ {p};
21        return;
22      case Barrier
23        if mcb == ∅ then
24          mcb = {P0, . . . , Pn} \ {p};
25          Inactive = Inactive ∪ {p};
26        else
27          mcb = mcb \ {p};
28          if mcb == ∅ then
29            Advance({P0, . . . , Pn});
30            Inactive = ∅; Active = {P0, . . . , Pn};
31          else
32            Inactive = Inactive ∪ {p};
33        return;
34      case Exit
35        Active = Active \ {p};
36        return;
37    Advance({p});
38  }
For handling barriers, we use a global variable mcb to denote the remaining processes that need to reach a barrier for the synchronization to complete. When a barrier statement is encountered, if mcb is empty, we initialize mcb to be the set containing all the other processes (Line 24) and add the current process to Inactive (Line 25). If mcb is not empty, we remove the current process from mcb. Then, if mcb becomes empty, i.e., all the processes have reached a barrier, we can advance all the processes (Line 29) and make all the processes active (Line 30); otherwise, we add the current process to the Inactive set (Line 32). When encountering an Exit statement, which means the current process terminates, we remove the current process from Active (Line 35).
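A compact C sketch (ours) of just the barrier bookkeeping described above, with mcb kept as a membership array plus a counter:

#include <stdbool.h>

#define MAX_PROCS 64

static bool mcb[MAX_PROCS];     /* processes still expected at the barrier */
static int  mcb_size = 0;       /* 0 means mcb is currently "empty"        */

/* Called when process p (of nprocs) executes Barrier; returns true when all
   processes have arrived and everyone can be advanced and reactivated.      */
bool on_barrier(int p, int nprocs) {
  if (mcb_size == 0) {                  /* first arrival: mcb = all \ {p}    */
    for (int i = 0; i < nprocs; i++) mcb[i] = (i != p);
    mcb_size = nprocs - 1;
    return false;                       /* p goes to Inactive                */
  }
  if (mcb[p]) { mcb[p] = false; mcb_size--; }
  if (mcb_size == 0)                    /* last arrival: release everybody   */
    return true;
  return false;                         /* otherwise p goes to Inactive too  */
}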
In summary, according to the two algorithms, symbolic execution continues to execute the active process with the smallest rank until a preemption is caused by an unmatched MPI operation. From a state in symbolic execution, we do not put all the possible successor states into the worklist, but only the states generated by the currently scheduled process. This is why we call it on-the-fly scheduling. We thus explore only a subspace of the whole program path space, but without sacrificing the ability to find deadlock bugs. The correctness of our on-the-fly scheduling algorithms is guaranteed by the following theorem, whose proof is given in the appendix.
Theorem 1. Given a path of an MPI program from the initial state to a deadlocked state, there exists a path from the initial state to the same deadlocked state
obtained by the on-the-fly scheduling. And vice versa.
3.2 Lazy matching algorithm
So far, we have not treated wildcard receives. Wildcard receives are one of the major sources of non-determinism. Clearly, we cannot blindly rewrite a wildcard receive. For example, in Fig 3(a), if we force the wildcard receive in Proc1 to receive from Proc2, a deadlock will be reported that actually cannot happen. In addition, if we rewrite a wildcard receive immediately when we find a possible match, we may still miss bugs. As shown in Fig 3(b), if we match the wildcard receive in Proc0 with the send in Proc1, the whole symbolic execution terminates successfully, and thus the deadlock that appears when the wildcard receive is matched with the send in Proc2 is missed.
(a) Blind rewriting of a wildcard receive:
    Proc0: Send(1)              Proc1: Recv(ANY)    Proc2: local statements
(b) Eager rewriting of a wildcard receive:
    Proc0: Recv(ANY); Recv(2)   Proc1: Send(0)      Proc2: Send(0)
Fig. 3. Rewriting of a wildcard statement
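For concreteness, the scenario of Fig 3(b) corresponds to a small C program like the following (our illustration): if the wildcard receive in rank 0 is matched with rank 1, the run terminates normally; if it is matched with rank 2, rank 0 then blocks on the receive from rank 2 and rank 1's send is never received, i.e., the deadlock that eager rewriting would miss.

#include <mpi.h>

int main(int argc, char **argv) {
  int rank, a = 0, b = 0, msg;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    /* Recv(ANY) followed by Recv(2), as in Fig 3(b). */
    MPI_Recv(&a, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(&b, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  } else if (rank == 1 || rank == 2) {
    msg = rank;
    MPI_Ssend(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);   /* Send(0) */
  }
  MPI_Finalize();
  return 0;
}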
To solve this problem, we employ a lazy approach instead of an eager one. That is, we delay the selection of the send candidate for a wildcard receive until the whole symbolic execution procedure blocks. In detail, when symbolic execution encounters a wildcard receive, we make the current process asleep (Line 20 in Algorithm 3), waiting for all possible senders. When a matched send is found, the process issuing the send is also made asleep, and we switch to the next active process. When there is no process that can be scheduled, i.e., all the processes are in Inactive, we match the wildcard receive with each possibly matched send by forking a successor state for each one. Thus, Algorithm 2 needs to be refined to handle wildcard receives. The refined parts are given as follows.
Algorithm 4: Refined Scheduling for Handling Wildcard Receives
 1  Scheduler(s){
 2    ...
 3    if Inactive ≠ ∅ then
 4      if there exists a Recv(ANY) process in Inactive then
 5        PS = Inactive;
 6        for each Recv(ANY) process p ∈ Inactive do
 7          for each matched process q ∈ Inactive of p do
 8            Inactive = PS \ {p, q}; Active = {p, q};
 9            AddState(s, p, q);
10        return null;
11      else
12        Report Deadlock;
13  }
For each process in Inactive that is blocked at a wildcard receive, we add a new state for each of its matched sender processes (Line 9). AddState(s, p, q) denotes a procedure that performs the synchronization between p and q, advances both p and q, and adds the new state to the worklist. Thus, we explore all the possible cases of a wildcard receive. If there are multiple Recv(ANY) processes, we interleave the matches of all these processes. The example in Fig 4 demonstrates this situation. When all the processes are asleep, if we match the Recv(ANY) in Proc1 with the send in Proc0 first, no deadlock will be detected; whereas if we match the Recv(ANY) in Proc2 with the send in Proc3 first, a deadlock will be detected.
Proc0: Send(to:1)
Proc1: Recv(from:ANY); Recv(from:3)
Proc2: Recv(from:ANY)
Proc3: Send(to:2); Send(to:1)
Fig. 4. Multiple wildcard receives
Therefore, after considering wildcard receives, the matches of different wildcard receives are not independent. We deal with this problem by naively interleaving the match orders of the wildcard receives. This leads to redundant interleavings, but does not miss interleaving-specific deadlocks; optimizing this is left to future work. The proof of correctness of our handling of wildcard receives is provided in the appendix.
4 Implementation and Experiments
4.1 Implementation
We have implemented our approach as a tool, called MPISE, based on Cloud9 [7], which is a distributed symbolic executor for C programs. Cloud9 enhances KLEE [8] with support for most POSIX interfaces and for parallelism. The architecture of MPISE is shown in Fig 5.
[Architecture diagram: a C MPI program is compiled by the LLVM-GCC compiler into LLVM bytecode and linked with the hooked TOMPI library (also LLVM bytecode); the MPISE executor, scheduler, and test generator take the process number and other arguments, produce test cases, and report deadlocks and assertion failures; the MPISE replayer consumes the test cases.]
Fig. 5. The architecture of MPISE.
The target MPI program, written in C, is fed into the LLVM-GCC compiler to obtain LLVM bytecode, which is then linked with a pre-compiled library, i.e., TOMPI [11], as well as the POSIX runtime library. The linked executable program is then symbolically executed. Basically, TOMPI is a platform that uses multiple threads to simulate the running of an MPI program. TOMPI provides a subset of the MPI interfaces, which contains all the MPI APIs we consider in this paper. An MPI program can be compiled and linked with the TOMPI libraries to generate a multi-threaded executable, which is supposed to produce the same output as the parallel run of the MPI program. Hence, we use TOMPI as the underlying MPI library. By using TOMPI, we can leverage the support for concurrency in Cloud9 to explore the path space of an MPI program run with a specific number of processes. When a path ends or a deadlock is detected, MPISE records all the information of the path, including the input, the order of the message passings, etc. For each path, we generate a corresponding test case, based on which one can use the replayer to reproduce a concrete path. Compared with Cloud9, our implementation of MPISE provides the following new features:
– New scheduler. Cloud9 employs a non-preemptive scheduler, i.e., a process keeps being executed until it gives up, such as encountering an explicit preemption call or process exit. Clearly, we need a new scheduler for MPISE. We have implemented our on-the-fly scheduler, which schedules the MPI processes according to the algorithms in Sections 3.1 & 3.2.
– Environment support for MPI APIs. Cloud9 does not "recognize" MPI operations, while MPISE makes the symbolic engine aware of MPI operations based on TOMPI, including MPI_Send, MPI_Ssend, MPI_Recv, MPI_Barrier, etc. The message passing APIs are handled specially for scheduling, while other MPI APIs are treated as normal function calls.
– Enhanced Replay. MPISE can replay each generated test case of an MPI program, which helps users diagnose bugs such as deadlocks and assertion failures. The replayer of MPISE extends the replayer component of Cloud9 by using the on-the-fly schedule when replaying a test case. During replay, the replayer feeds the program with the recorded input and follows the recorded schedules to schedule the processes.
– Enhanced POSIX model. MPISE heavily depends on the library models it uses. However, the POSIX model provided by Cloud9 is not sufficient for symbolically executing MPI programs, because we need to maintain process-specific data for each process during symbolic execution. Since we use multi-threaded programs to simulate the behavior of MPI programs, we have improved the multi-threading mechanism in Cloud9 to support maintaining thread-specific data (a minimal illustration of the thread-specific-data idea is sketched below).
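As a minimal illustration of the thread-specific-data issue (ours, not TOMPI's or MPISE's actual code): when each MPI process is simulated by a thread, per-process state such as the rank has to live in thread-specific storage rather than in globals, e.g., via the POSIX pthread_key API:

#include <pthread.h>
#include <stdio.h>

static pthread_key_t rank_key;          /* holds each simulated process's rank */

static void *mpi_process(void *arg) {
  pthread_setspecific(rank_key, arg);   /* per-thread "process-specific" data  */
  int rank = *(int *)pthread_getspecific(rank_key);
  printf("simulated MPI process with rank %d\n", rank);
  return NULL;
}

int main(void) {
  enum { NPROCS = 3 };
  pthread_t threads[NPROCS];
  int ranks[NPROCS];
  pthread_key_create(&rank_key, NULL);
  for (int i = 0; i < NPROCS; i++) {    /* one thread per simulated process    */
    ranks[i] = i;
    pthread_create(&threads[i], NULL, mpi_process, &ranks[i]);
  }
  for (int i = 0; i < NPROCS; i++)
    pthread_join(threads[i], NULL);
  pthread_key_delete(rank_key);
  return 0;
}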
4.2 Experimental evaluation
We have conducted extensive experiments to validate the effectiveness and scalability of MPISE. All the experiments were conducted on a Linux server with 32 cores and 250 GB of memory.
Using MPISE to analyze the programs in the Umpire test suite [25], we have successfully analyzed 33 programs, i.e., for each of them MPISE either detects no deadlock or detects a deadlock as expected.
The Umpire test cases are input-independent, i.e., the inputs have nothing to do with whether a deadlock happens. Hence, we also conducted experiments on programs with input-dependent deadlocks. The test cases mainly cover the two typical kinds of deadlock [18]: point-to-point ones and collective ones. Point-to-point deadlocks are usually caused by (1) a send/receive routine that has no corresponding receive/send routine, or (2) a send-receive cycle due to improper usage of send and receive. Collective deadlocks are typically caused by (1) missing collective routines (such as Barrier), or (2) improper ordering of point-to-point and/or collective routines.
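A minimal instance of the missing-collective-routine case (our illustration, not one of the benchmark programs): rank 0 skips the barrier, so all the other ranks block in MPI_Barrier forever.

#include <mpi.h>

int main(int argc, char **argv) {
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank != 0)                       /* rank 0 never calls the barrier ...  */
    MPI_Barrier(MPI_COMM_WORLD);       /* ... so everyone else waits forever  */
  MPI_Finalize();
  return 0;
}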
In our experiments, we also use ISP and TASS to analyze the programs. Fig 6 displays the experimental results, including those of ISP and TASS. In Fig 6, we divide the experimental results into two categories: input-independent programs and input-dependent ones. For each category, we select programs that can deadlock for different reasons, including head-to-head receives, waitall, receive-any, etc. For each input-dependent program, we generate the input randomly when analyzing the program with ISP and analyze the program 10 times, expecting to detect a deadlock; the reported execution time of ISP on each input-dependent program is the average over the 10 runs. From the experimental results, we can conclude the following:
MPISE detects the deadlock in all the programs. ISP misses the deadlock for all the input-dependent programs. TASS fails to analyze most of the programs.
Program                             ISP                 TASS                MPISE
                                    Result    Time(s)   Result    Time(s)   Result    Time(s)
Input        anysrc-deadlock.c      Deadlock  0.126     Fail      1.299     Deadlock  1.59
Independent  basic-deadlock.c       Deadlock  0.022     Fail      1.227     Deadlock  1.46
             collect-misorder.c     Deadlock  0.022     Fail      0.424     Deadlock  1.48
             waitall-deadlock.c     Deadlock  0.024     Fail      1.349     Deadlock  1.49
             bcast-deadlock.c       Deadlock  0.021     Fail      0.493     Deadlock  1.40
             complex-deadlock.c     Deadlock  0.023     Fail      1.323     Deadlock  1.46
             waitall-deadlock2.c    Deadlock  0.024     Fail      1.349     Deadlock  1.48
Input        barrier-deadlock.c     No        0.061     Fail      0.863     Deadlock  1.71
Dependent    head-to-head.c         No        0.022     Fail      1.542     Deadlock  1.67
             rr-deadlock.c          No        0.022     Fail      1.244     Deadlock  1.67
             recv-any-deadlock.c    No        0.022     Deadlock  1.705     Deadlock  1.70
             cond-bcast.c           No        0.021     No        1.410     Deadlock  1.63
             collect-misorder.c     No        0.023     Deadlock  1.682     Deadlock  1.85
             waitall-deadlock3.c    No        0.104     Fail      1.314     Deadlock  1.78
Fig. 6. Experimental results
Thus, MPISE outperforms ISP and TASS for all the programs in Fig 6. The reason is that MPISE uses symbolic execution to obtain an input coverage guarantee, and the scheduling algorithms ensure that no deadlock caused by the MPI operations considered in this paper is missed. In addition, we utilize TOMPI and Cloud9 to provide better environment support for analyzing MPI programs. The common failures of TASS occur because TASS does not support many APIs, such as fflush(stdout) of POSIX and MPI_Get_processor_name of MPI, and requires manually modifying the analyzed programs.
For each program, the analysis time of MPISE is longer than that of ISP or TASS. The reason is twofold: first, we need to symbolically execute the bytecode, including that of the underlying MPI library, i.e., TOMPI; for example, for the input-dependent program cond-barrier-deadlock.c, the number of executed instructions is 302625. Second, the time used by MPISE includes the time for linking the target program bytecode with the TOMPI library. In addition, we need to record states and perform constraint solving during symbolic execution, which also takes more time than dynamic analysis.
For the remaining programs in the Umpire test suite, MPISE either reports a deadlock that does not actually exist or aborts during symbolic execution. The reason is that we only consider synchronous communications in MPI programs, and some advanced MPI operations, such as MPI_Type_vector, are not yet supported by MPISE. Improving these aspects is future work.
To validate the scalability of MPISE, we use MPISE to analyze three real-world MPI programs: an MPI program (CPI) for calculating π and two C MPI programs (DT and IS) from the NAS Parallel Benchmarks (NPB) 3.3 [3] with class S. The LOC of DT is 1.2K, and the program needs an input that is either BH, WH or SH, naming the communication graph. IS is an MPI program for integer sorting, and its LOC is 1.4K. MPISE can analyze these three programs successfully, and no deadlock is found. We make the input symbolic and symbolically execute all three MPI programs under different numbers of parallel processes.
[Figure: six plots of instruction count and symbolic execution time against the number of processes: (a) Instruction count of CPI, (b) Symbolic execution time of CPI, (c) Instruction count of DT, (d) Symbolic execution time of DT, (e) Instruction count of IS, (f) Symbolic execution time of IS.]
Fig. 7. The experimental results under different numbers of processes
The experimental results are displayed in Fig 7. Because IS can only be run with 2^n (n ≥ 1) processes, we have no results for the case of 6 processes.
From Fig 7, we can observe that, for all three programs, the number of executed instructions and the symbolic execution time do not increase exponentially with the number of processes. This indicates that, thanks to the on-the-fly scheduling algorithms, MPISE avoids the exponential growth of instructions and symbolic execution time that the parallelism could otherwise cause. Note that the input of DT is symbolic, and the program aborts early when fed with input BH and fewer than 12 processes; this explains the sudden rise of both analysis time and instruction count when the number of processes goes from 10 to 12 in Fig 7(c) and Fig 7(d).
5 Related Work
There is already a body of existing work on improving the reliability of MPI programs [14]. Generally, it falls into one of the following two categories: debugging and testing methods, and verification methods.
Debugging and testing tools often scale well, but depend on concrete inputs to run MPI programs, expecting to find or locate bugs. Debugging tools such as TotalView [4] and DDT [1] are often effective when the bugs can be replayed consistently. However, for MPI programs, reproducing a concurrency bug caused by non-determinism is itself a challenging problem. Another kind of tool, such as Marmot [17], the Intel Trace Analyzer and Collector [20], and MUST [15], intercepts MPI calls at runtime, records the runtime information of an MPI program, and checks for runtime errors and deadlocks or analyzes performance bottlenecks based on the recorded information. These tools often need to recompile or relink MPI programs, and they also depend on the inputs and the scheduling of each run.
Another line of tools are verification tools. Dynamic verification tools, such as ISP [25] and DAMPI [27], provide a coverage guarantee over the space of MPI non-determinism. When two or more matches of a non-deterministic operation, such as a wildcard receive, are detected, the program is re-executed, each run using a specific match. Hence, these tools can find a bug that relies on a particular choice at a non-deterministic operation, but they still depend on the inputs fed to the program. TASS [22] tackles this limitation by using symbolic execution to reason about all the inputs of an MPI program, but its applicability is limited by the simple MPI model it uses, as shown in Section 4.2. There is little static analysis work for MPI programs. Bronevetsky proposes in [6] a novel data flow analysis notion, called the parallel control flow graph (pCFG), which can capture the interaction behavior of an MPI program with an arbitrary number of processes. Based on the pCFG, some static analysis activities can be carried out. However, static analysis based on the pCFG is hard to automate.
Compared with the existing work, MPISE symbolically executes MPI programs and uses an on-the-fly scheduling algorithm to handle non-determinism, providing coverage guarantees on both input and non-determinism. In addition, MPISE uses a realistic MPI library, i.e., TOMPI [11], as the MPI model. Therefore, more realistic MPI programs can be analyzed automatically by MPISE, without manual modification of the programs. Furthermore, since the MPI library and the symbolic executor are loosely coupled by hooking the library, it is not hard to switch to another MPI implementation to improve the precision and applicability of the symbolic execution.
6 Conclusion
MPI plays a significant role in parallel programming. To improve the reliability of software implemented using MPI, we propose MPISE, which uses symbolic execution to analyze MPI programs, aiming to find the bugs of an MPI program automatically. Existing work on analyzing MPI programs suffers from problems in different aspects, such as scalability, applicability, and input or non-determinism coverage. We employ symbolic execution to tackle the input coverage problem, and propose an on-the-fly algorithm to reduce the interleavings explored for non-determinism coverage, while ensuring soundness and completeness. We have implemented a prototype of MPISE as an adaptation of Cloud9 and conducted extensive experiments. The experimental results show that MPISE can find bugs effectively and efficiently. MPISE also provides diagnostic information and utilities to help people understand a bug.
There are several directions for future work. First, we plan to support non-blocking MPI operations, which are widely used in today's MPI programs. Second, we want to further refine our MPI model, e.g., by using a more realistic library, to improve the precision of symbolic execution. Finally, we are also interested in improving the scalability of MPISE and in analyzing production-level MPI programs.
References
1. Allinea DDT. http://www.allinea.com/products/ddt/.
2. The GNU debugger. http://www.gnu.org/software/gdb/.
3. NAS Parallel Benchmarks. http://www.nas.nasa.gov/Resources/Software/npb.html.
4. Totalview Software. http://www.roguewave.com/products/totalview.
5. Christel Baier and Joost-Pieter Katoen. Principles of Model Checking (Representation and Mind Series). The MIT Press, 2008.
6. Greg Bronevetsky. Communication-sensitive static dataflow for parallel message
passing applications. In Proceedings of the 7th Annual IEEE/ACM International
Symposium on Code Generation and Optimization, CGO ’09, pages 1–12. IEEE
Computer Society, 2009.
7. Stefan Bucur, Vlad Ureche, Cristian Zamfir, and George Candea. Parallel symbolic
execution for automated real-world software testing. In Proceedings of the Sixth
Conference on Computer Systems, EuroSys ’11, pages 183–198, New York, NY,
USA, 2011. ACM.
8. Cristian Cadar, Daniel Dunbar, and Dawson Engler. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation, OSDI ’08, pages 209–224, Berkeley, CA, USA, 2008. USENIX Association.
9. Cristian Cadar, Patrice Godefroid, Sarfraz Khurshid, Corina S. Păsăreanu,
Koushik Sen, Nikolai Tillmann, and Willem Visser. Symbolic execution for software testing in practice: Preliminary assessment. In Proceedings of the 33rd International Conference on Software Engineering, ICSE ’11, pages 1066–1071, New
York, NY, USA, 2011. ACM.
10. Edmund M. Clarke, Jr., Orna Grumberg, and Doron A. Peled. Model checking.
MIT Press, Cambridge, MA, USA, 1999.
11. Erik Demaine. A threads-only MPI implementation for the development of parallel
programs. In Proceedings of the 11th International Symposium on High Performance Computing Systems, pages 153–163, 1997.
12. Xianghua Deng, Jooyong Lee, and Robby. Bogor/Kiasan. A k-bounded symbolic
execution for checking strong heap properties of open systems. In Proceedings of the
21st IEEE/ACM International Conference on Automated Software Engineering,
ASE ’06, pages 157–166, 2006.
13. P. Godefroid, M.Y. Levin, D. Molnar, et al. Automated whitebox fuzz testing.
In Proceedings of the Network and Distributed System Security Symposium, NDSS
’08, 2008.
14. Ganesh Gopalakrishnan, Robert M. Kirby, Stephen Siegel, Rajeev Thakur, William
Gropp, Ewing Lusk, Bronis R. De Supinski, Martin Schulz, and Greg Bronevetsky.
Formal analysis of MPI-based parallel programs. Commun. ACM, 54(12):82–91,
December 2011.
15. Tobias Hilbrich, Joachim Protze, Martin Schulz, Bronis R. de Supinski, and
Matthias S. Müller. MPI runtime error detection with must: Advances in deadlock detection. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC ’12, pages 30:1–30:11,
Los Alamitos, CA, USA, 2012. IEEE Computer Society Press.
16. J. King. Symbolic execution and program testing. Communications of the ACM, 19(7):385–394, 1976.
17. Bettina Krammer, Katrin Bidmon, Matthias S. Müller, and Michael M. Resch.
Marmot: An MPI analysis and checking tool. In Gerhard R. Joubert, Wolfgang E.
Nagel, Frans J. Peters, and Wolfgang V. Walter, editors, PARCO, volume 13 of
Advances in Parallel Computing, pages 493–500. Elsevier, 2003.
18. Glenn R. Luecke, Yan Zou, James Coyle, Jim Hoekstra, and Marina Kraeva. Deadlock detection in MPI programs. Concurrency and Computation: Practice and
Experience, 14(11):911–932, 2002.
19. Message Passing Interface Forum. MPI: A message-passing interface standard,version 2.2. http://www.mpi-forum.org/docs/, 2009.
20. Patrick Ohly and Werner Krotz-Vogel. Automated MPI correctness checking
what if there was a magic option? In 8th LCI International Conference on HighPerformance Clustered Computing, South Lake Tahoe, California, USA, May 2007.
21. Stephen F. Siegel. Verifying parallel programs with MPI-spin. In Proceedings of
the 14th European Conference on Recent Advances in Parallel Virtual Machine and
Message Passing Interface, PVM/MPI ’07, pages 13–14, Berlin, Heidelberg, 2007.
Springer-Verlag.
22. Stephen F. Siegel and Timothy K. Zirkel. TASS: The toolkit for accurate scientific
software. Mathematics in Computer Science, 5(4):395–426, 2011.
23. Michelle Mills Strout, Barbara Kreaseck, and Paul D. Hovland. Data-flow analysis for MPI programs. In Proceedings of the 2006 International Conference on
Parallel Processing, ICPP ’06, pages 175–184, Washington, DC, USA, 2006. IEEE
Computer Society.
24. Sarvani Vakkalanka, Ganesh Gopalakrishnan, and RobertM. Kirby. Dynamic verification of mpi programs with reductions in presence of split operations and relaxed
orderings. In Aarti Gupta and Sharad Malik, editors, Computer Aided Verification,
volume 5123 of Lecture Notes in Computer Science, pages 66–79. Springer Berlin
Heidelberg, 2008.
25. Sarvani S. Vakkalanka, Subodh Sharma, Ganesh Gopalakrishnan, and Robert M.
Kirby. Isp: A tool for model checking mpi programs. In Proceedings of the 13th
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming,
PPoPP ’08, pages 285–286, New York, NY, USA, 2008. ACM.
26. Jeffrey S. Vetter and Bronis R. de Supinski. Dynamic software testing of mpi
applications with umpire. In Proceedings of the 2000 ACM/IEEE Conference on
Supercomputing, Supercomputing ’00, Washington, DC, USA, 2000. IEEE Computer Society.
27. Anh Vo, Sriram Aananthakrishnan, Ganesh Gopalakrishnan, Bronis R. de Supinski, Martin Schulz, and Greg Bronevetsky. A scalable and distributed dynamic
formal verifier for MPI programs. In Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and
Analysis, SC ’10, pages 1–10, Washington, DC, USA, 2010. IEEE Computer Society.
A Theorems and Proofs
In order to model MPI programs, we introduce the notion of transition systems from [5]:
Definition 1 (Transition System). A transition system TS is a tuple (S, Act, →, I, AP, L), where
– S is a set of states,
– Act is a set of actions,
– → ⊆ S × Act × S is a transition relation,
– I ⊆ S is a set of initial states,
– AP is a set of atomic propositions, and
– L : S → 2^AP is a labeling function.
For an action act and a state s, if there is a transition τ = ⟨s, act, s′⟩ ∈ →, we say that act is enabled in state s, and τ is denoted as $s \xrightarrow{act} s'$. If there is no such τ, act is disabled in s. We denote the set of actions enabled in s by enabled(s), i.e., $\{act \mid s_1 \xrightarrow{act} s_2 \wedge s_1 = s\}$. Note that all transition systems in this paper are assumed to be action-deterministic, i.e., for a state s ∈ S and an action α ∈ Act, s has at most one transition labeled with α to another state. Hence, if α ∈ Act is enabled in state s, we also use α(s) to denote the unique α-successor of s, i.e., $s \xrightarrow{\alpha} \alpha(s)$.
We use executions to describe the behavior of a transition system. An execution of a transition system is an alternating sequence of states and actions $\pi = s_0\,\alpha_1\,s_1\,\alpha_2 \ldots \alpha_n\,s_n$ that starts from an initial state $s_0$ and ends at a terminal state, where $s_i \xrightarrow{\alpha_{i+1}} s_{i+1}$ holds for 0 ≤ i < n. We use |π| to denote the length of π, and |π| = n.
To model an MPI process, we instantiate the model more precisely: S = Loc × Eval(Var) is the set of states, where Loc is the set of locations of a process, and Eval(Var) denotes the set of variable evaluations that assign values to variables. Act = {s, r, b}, which means we only care about the blocking synchronous send and receive MPI operations as well as the collective operation barrier. We use dest(op), where op ∈ {s, r}, to denote the destination of a send or a receive.
MPI program typically consists of many processes. Therefore, we need mechanisms to provide the operational model for parallel runnings in terms of transition systems.
Definition 2 (Parallel composition). Let TSi = (Si, Acti, →i, Ii, APi, Li), i = 1, 2, . . . , n, be n transition systems. The transition system TS = TS1 ∥H TS2 ∥H · · · ∥H TSn is defined to be
TS = (S1 × S2 × . . . × Sn, Actg, →, I1 × I2 × . . . × In, AP1 ∪ AP2 ∪ . . . ∪ APn, Lg),
where the transition relation → is defined by the following rules:
– for matched actions α, β ∈ H = {s, r, b} in two distinct processes:
\[
  \frac{s_i \xrightarrow{\alpha}_i s_i' \;\wedge\; s_j \xrightarrow{\beta}_j s_j' \;\wedge\; match(\alpha,\beta)}
       {\langle s_1,\ldots,s_i,\ldots,s_j,\ldots,s_n\rangle \xrightarrow{SR} \langle s_1,\ldots,s_i',\ldots,s_j',\ldots,s_n\rangle}
\]
Here match(α, β) holds if and only if (α = s ∧ β = r) ∨ (α = r ∧ β = s), dest(α) = j, and dest(β) = i; SR is the compositional global action of s and r.
– for matched actions α = b in all the processes:
\[
  \frac{s_1 \xrightarrow{\alpha}_1 s_1' \;\wedge\; \ldots \;\wedge\; s_i \xrightarrow{\alpha}_i s_i' \;\wedge\; \ldots \;\wedge\; s_n \xrightarrow{\alpha}_n s_n'}
       {\langle s_1,\ldots,s_i,\ldots,s_n\rangle \xrightarrow{B} \langle s_1',\ldots,s_i',\ldots,s_n'\rangle}
\]
Here B is the compositional global action of the local action b of each process.
The labeling function is defined by $L_g(\langle s_1,\ldots,s_i,\ldots,s_n\rangle) = \bigcup_{1\le i\le n} L_i(s_i)$. Note that the actions in Actg are also introduced by the above two rules.
The composition of transition systems gives a global view of a directed graph
G = (S, T ), where the nodes in S are global states and an edge in T is a global
transition with action SR or B. Note that the idea behind our on-the-fly schedule
is that, for a global state, we only choose some of the transitions to move on
and discard the others. Hence, we only explore a subgraph. To describe this
subgraph, we first introduce some notions here.
Given a global state σ in a composed transition system, we fix a total order on the actions enabled in σ according to weight(act), where act ∈ enabled(σ), $\delta = \sigma \xrightarrow{act} \sigma'$, and
\[
  weight(act) =
  \begin{cases}
    1, & \text{if } act = B;\\
    i, & \text{if } act = SR,\ \sigma = \langle s_1,\ldots,s_n\rangle,\ \sigma' = \langle s_1,\ldots,s_i',\ldots,s_j',\ldots,s_n\rangle.
  \end{cases}
\]
That is, act1 < act2 iff weight(act1) < weight(act2). When an action act ∈ enabled(s) has the minimal weight, we say that act ranks first in enabled(s).
Definition 3. Let G̃ = (S, T̃) be a subgraph of G, where T̃ is defined as follows:
\[
  \tilde{T} = \bigcup_{s \in S} \{\tau \mid \tau = \langle s, act, s'\rangle \wedge act \text{ ranks first in } enabled(s)\}.
\]
We can see that this G̃ is formed according to the on-the-fly schedule, in
which we always schedule the active process with the smallest rank. Now, we
can present the main theorem, which guarantees the completeness and soundness
of the on-the-fly schedule.
Theorem 1. Given an execution π in G from a global initial state σ0 to a
deadlocked global state σ, there exists an execution T from σ0 to σ in G̃ such
that |T | = |π|. And vice versa.
To prove the above theorem, we first introduce some notions. An independence relation I ⊆ Act × Act is a relation satisfying the following two conditions: for any state s ∈ S and α, β ∈ enabled(s) with α ≠ β,
– Enabledness: α ∈ enabled(β(s)) and β ∈ enabled(α(s)).
– Commutativity: α(β(s)) = β(α(s)).
Recall that α(s) denotes the unique α-successor of s, i.e., if $s \xrightarrow{\alpha} s'$ holds, then s′ = α(s). The dependency relation D is the complement of I, i.e., D = Act × Act − I.
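As a small illustration (ours) of these two conditions: for independent actions α and β that are both enabled in a state s, the two interleavings form a diamond ending in the same state, so exploring only one order from s loses no reachable state:

\[
  s \xrightarrow{\;\alpha\;} \alpha(s) \xrightarrow{\;\beta\;} \beta(\alpha(s))
  \;=\;
  \alpha(\beta(s)) \xleftarrow{\;\alpha\;} \beta(s) \xleftarrow{\;\beta\;} s .
\]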
The method by which we construct a subgraph G̃ from a graph G is an instance of the
ample set method proposed in [10], which expands only a part of the enabled
transitions at each state; for every path not considered by the method, there is an
equivalent path with respect to the specification under consideration. Among the four
conditions C0–C3 for selecting ample(s) ⊆ enabled(s), the first two are as follows:

C0 ample(s) = ∅ if and only if enabled(s) = ∅.

C1 Let s0 −α0→ s1, . . . , −αn→ sn −αn+1→ t be a finite execution fragment. If αn+1
depends on some β ∈ ample(s0), then there exists an αi ∈ ample(s0) with 0 ≤ i < n + 1.
In other words, along every execution in the full state graph G that starts at s, an
action that is dependent on an action in ample(s) cannot be executed before some
action in ample(s) occurs first.
C2 ensures that when ample(s) ≠ enabled(s), the actions in ample(s) do not change the
labels of states with respect to the properties being verified. The property we are
concerned with here is deadlock reachability; since we only commute independent
actions using the commutativity of the independence relation, we do not need C2. C3
ensures that a cycle in the reduced graph satisfies certain requirements. Owing to the
acyclicity of the state space in our context, we do not need C3.
In our setting, we define ample(s) as follows:

    ample(s) = {B}    if B ∈ enabled(s);
    ample(s) = {SR}   else if SR ∈ enabled(s) ∧ SR ranks first in enabled(s).

Note that for a state s ∈ S, if enabled(s) ≠ ∅ then |ample(s)| = 1, i.e., ample(s) has
only one element. Hence, C0 clearly holds.
To check whether C1 holds on the full state graph generated by our scheduling
algorithm, we first establish some properties of the full state graph. Clearly, by the
definition of parallel composition, only SR actions can be enabled simultaneously at a
global state, and the SR actions enabled at the same state are independent. So,
suppose we are given an execution fragment π = s0 −α0→ s1 · · · −αj−1→ sj −αj→ sj+1
starting from s0 ∈ S in which αj depends on an action β ∈ ample(s0). We want to prove
that there exists an αk with 0 ≤ k < j and αk ∈ ample(s0). Recalling the definition of
the dependency relation D, we know that αj and β must fall into one of the following
cases:
1. there is no s ∈ S such that αj, β ∈ enabled(s);
2. for any state s ∈ S, if αj, β ∈ enabled(s), then αj ∉ enabled(β(s)) or
β ∉ enabled(αj(s)), i.e., either αj disables β or β disables αj;
3. for any state s ∈ S, if αj, β ∈ enabled(s) and αj ∈ enabled(β(s)) ∧ β ∈
enabled(αj(s)), then αj(β(s)) ≠ β(αj(s)), i.e., αj and β do not commute.

Because only SR actions can be enabled simultaneously at a global state, neither case
2 nor case 3 can occur in our context. Therefore, only case 1 applies to the state
graph generated by our method, i.e., αj and β are never both enabled at any state.
Based on this result, we show by contradiction that C1 holds in our context. Assume
that {α0, . . . , αj−1} ∩ ample(s0) = ∅ holds for the execution π. Because β is enabled
at s0, α0 and β are independent, hence β is also enabled at s1. In addition, α1 is
enabled at s1 and α1 ≠ β, so α1 and β are also independent. In the same way, we get
that each αi (0 ≤ i < j) is independent of β. Thus, by commutativity, β and αj are
both enabled at sj, which violates case 1. Hence, condition C1 holds.
Proof. For one direction, because G̃ is a subgraph of G, the execution T from σ0 to σ
in G̃ is also an execution from σ0 to σ in G; hence we obtain an execution π = T with
|T| = |π|.
The other direction is a little more involved. The basic idea is to gradually
construct a corresponding execution in the subgraph based on the ample set of each
state visited by π.

Let π be an execution in G from σ0 to σ. We construct a finite sequence of executions
π0, π1, . . . , πn, where π0 = π and n = |π|. Each execution πi is constructed from the
preceding execution πi−1. For example, π1 is constructed from π0, i.e., π, according
to the first action executed in π. We then show that the last execution πn is an
execution in the subgraph and shares the same first and last states with π. We prove
this by describing the construction performed at each step. We decompose each πi into
two execution fragments, πi = ηi ◦ θi, where ηi has length i and ηi ◦ θi denotes the
concatenation of the two fragments.
Assuming that we have constructed π0 , . . . , πi , we now turn to construct
πi+1 = ηi+1 ◦ θi+1 . Let s0 be the last state of the execution fragment ηi and
α be the first action of θi . Note that s0 is also the first state of the execution
fragment θi , i.e.,
    θi = s0 −α0→ s1 −α1→ s2 −α2→ · · · s|θi|,   where α0 = α.
There are two cases:
A. α ∈ ample(s0). Then ηi+1 = ηi ◦ (s0 −α→ α(s0)) and θi+1 = s1 −α1→ s2 −α2→ · · · s|θi|.
B. α ∉ ample(s0). Note that s|θi| = σ is a deadlock state, hence no action can be
enabled at σ. Therefore, for any action β ∈ ample(s0), some action that appears in θi
must be dependent on β. The reason is: if all the actions that appear in θi were
independent of β, then none of them could disable β, hence β would be enabled at
s|θi| = σ, which violates the premise that σ is a deadlock state.
Therefore, for any action β in ample(s0), we can find an action αj that appears in θi
such that αj depends on β. According to C1, there must exist an action β′ ∈ ample(s0)
that occurs before αj. Because there may be multiple actions that are in ample(s0) and
occur before αj, we take the first one, say αk ∈ ample(s0). So αk is the first element
of ample(s0) that occurs in θi. Clearly, the actions before αk, i.e., α0, . . . , αk−1,
are independent of αk. Hence, we can construct the following execution by applying the
commutativity condition k times:
    ξ = s0 −αk→ αk(s0) −α0→ αk(s1) −α1→ · · · −αk−1→ αk(sk) −αk+1→ sk+2 −αk+2→ · · · −α|θi|−1→ s|θi|.

In this case ηi+1 = ηi ◦ (s0 −αk→ αk(s0)) and θi+1 is the execution fragment obtained
from ξ by removing the first transition s0 −αk→ αk(s0).
Clearly, πi and πi+1 share the same last state. So π0 = π and πn share the same last
state, namely πn is also an execution from σ0 to σ in G. In addition, according to the
construction procedure, |π| = |πn| holds. Most importantly, in the execution πn, for
every 0 ≤ j < n with sj −αj→ sj+1 we have αj ∈ ample(sj). Therefore, πn is also an
execution from σ0 to σ in G̃, and we take this execution as T.
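The repeated use of commutativity in case B can be pictured as a simple list-rewriting step. The following Python sketch moves a chosen action αk to the front of a finite action sequence by swapping adjacent independent actions; the independence predicate is an assumed input, supplied for illustration.

def move_to_front(theta, k, independent):
    # Return theta with theta[k] commuted past theta[:k], one swap at a time.
    theta = list(theta)
    for pos in range(k, 0, -1):
        assert independent(theta[pos - 1], theta[pos])
        theta[pos - 1], theta[pos] = theta[pos], theta[pos - 1]
    return theta

# Toy example where all distinct actions are taken to be independent:
print(move_to_front(['a0', 'a1', 'ak', 'a3'], 2, lambda x, y: x != y))
# -> ['ak', 'a0', 'a1', 'a3']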
To prove the correctness and soundness of our lazy matching algorithm, we need to
handle wildcard receives. Hence the rules for parallel composition of transition
systems need to be refined. Instead of redefining match to make it work with wildcard
receives, we introduce a new rule for a matched send and wildcard receive, to
distinguish it from source-specific receives.
– for matched actions α, β ∈ H = {s, r∗} in distinct processes, where r∗ is the
wildcard receive:

    (si −α→i s′i) ∧ (sj −β→j s′j) ∧ match(α, β)
    ⟹ ⟨s1, . . . , si, . . . , sj, . . . , sn⟩ −SR∗→ ⟨s1, . . . , s′i, . . . , s′j, . . . , sn⟩

Here match(α, β) holds if and only if α = s ∧ β = r∗, dest(α) = j, and dest(β) = ANY;
SR∗ is the compositional global action of s and r∗.
We also need to redefine the subgraph G̃ because we have a new kind of global
transition.

Definition 4. Let T̃ = ⋃_{s∈S} subtran(s), where subtran(s) is defined as:

    subtran(s) = {s −B→ B(s)}                      if B ∈ enabled(s);
    subtran(s) = {s −SR→ SR(s)}                    else if SR ∈ enabled(s) ∧ SR ranks first in enabled(s);
    subtran(s) = {s −act→ act(s) | act ∈ enabled(s)}   otherwise.

Let G̃∗ = (S, T̃), which is a subgraph of the full state graph.
Clearly, G̃∗ is the subgraph formed according to the on-the-fly schedule plus lazy
matching. Accordingly, we define ample(s) as:

    ample(s) = {B}          if B ∈ enabled(s);
    ample(s) = {SR}         else if SR ∈ enabled(s) ∧ SR ranks first in enabled(s);
    ample(s) = enabled(s)   otherwise.
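The refined selection can be sketched as follows in Python; the Action objects and the ranks_first predicate (the weight-based order extended to the new transitions) are assumptions made for illustration, not part of the formal definition.

def ample(state, enabled, ranks_first):
    # ranks_first(act, acts) decides whether act has minimal weight in acts.
    acts = enabled(state)
    for a in acts:
        if a.kind == 'B':
            return [a]                 # {B}
    for a in acts:
        if a.kind == 'SR' and ranks_first(a, acts):
            return [a]                 # {SR}
    return acts                        # enabled(s) otherwise (lazy matching)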
The following theorem addresses the correctness and soundness of the lazy matching
algorithm:
Theorem 2. Given any execution π in G from a global state σ0 to a deadlocked
global state σ, there exists an execution T from σ0 to σ in G̃∗ such that |T | = |π|.
And vice versa.
To prove Theorem 2, we first check conditions C0 and C1. Clearly, C0 holds. To check
C1, we point out that in the new global state graph, an action SR ∈ enabled(s) is
independent of the other actions in enabled(s). In addition, only an SR∗ action can
disable other SR∗ actions, which is ensured by the semantics of wildcard receives. As
before, suppose we are given an execution fragment
π = s0 −α0→ s1 · · · −αj−1→ sj −αj→ sj+1 starting from s0 ∈ S in which αj depends on an
action β ∈ ample(s0). We want to prove that there exists an αk with 0 ≤ k < j and
αk ∈ ample(s0). We consider the two cases of ample(s0):
1. ample(s0) = enabled(s0). Clearly, C1 holds.
2. ample(s0) ≠ enabled(s0). Thus ample(s0) = {SR} and β = SR. We then distinguish two
cases: 1) αj ∈ enabled(s0) \ ample(s0); by the observation above, SR is independent of
each of the other actions in enabled(s0), so αj and SR are independent. This is a
contradiction, so this case never happens. 2) αj ∉ enabled(s0). For an SR action, the
dependency can only arise from the first case, i.e., there is no s ∈ S such that
αj, β ∈ enabled(s). Because SR is never disabled by any other action, the same argument
used to prove C1 in the case without wildcard receives shows that β occurs before αj.
In total, we conclude that C1 holds on the new state graph.
Proof. We have shown that conditions C0 and C1 still hold in the new full state graph.
Hence, the proof of Theorem 2 proceeds in essentially the same way as that of
Theorem 1.
ENUMERATIVE ASPECTS OF NULLSTELLENSATZ CERTIFICATES
arXiv:1607.05031v1 [math.CO] 18 Jul 2016
BART SEVENSTER AND JACOB TURNER
Korteweg-de Vries Institute for Mathematics, University of Amsterdam, 1098 XG Amsterdam, Netherlands
Abstract. Using polynomial equations to model combinatorial problems has
been a popular tool both in computational combinatorics as well as an approach to proving new theorems. In this paper, we look at several combinatorics problems modeled by systems of polynomial equations satisfying special
properties. If the equations are infeasible, Hilbert’s Nullstellensatz gives a
certificate of this fact. These certificates have been studied and exhibit combinatorial meaning. In this paper, we generalize some known results and show
that the Nullstellensatz certificate can be viewed as enumerating combinatorial structures. As such, Gröbner basis algorithms for solving these decision
problems may implicitly be solving the enumeration problem as well.
Keywords: Hilbert’s Nullstellensatz, Polynomial Method, Enumerative Combinatorics, Algorithmic Combinatorics
1. Introduction
Polynomials and combinatorics have a long common history. Early in both the
theories of graphs and of matroids, important polynomial invariants were discovered
including the chromatic polynomial, the Ising model partition function, and the flow
polynomial, many of which are generalized by the Tutte polynomial [29, 30, 5, 26].
The notion of using generating functions is ubiquitous in many areas of combinatorics, and many of these polynomials can be viewed as such functions. Another
general class of polynomials associated to graphs is the partition function of an
edge coloring model [9].
On the other hand, polynomials show up not just as graph parameters. Noga
Alon famously used polynomial equations to actually prove theorems about graphs
using his celebrated “Combinatorial Nullstellensatz” [1]. His approach often involved finding a set of polynomial equations whose solutions corresponded to some
combinatorial property of interest and then studying this system.
Modeling a decision problem of finding a combinatorial structure in a graph by
asking if a certain system of polynomial equations has a common zero is appealing
from a purely computational point of view. This allows these problems to be
approached with well-known algebraic algorithms from simple Gaussian elimination
to Gröbner basis algorithms [10, 8]. Using polynomial systems to model these
problems also seems to work nicely with semi-definite programming methods [17,
19, 18, 27].
This approach was used in [21], where the question of infeasibility was considered.
If a set of polynomial equations is infeasible, Hilbert’s Nullstellensatz implies that
there is a set of polynomials acting as a certificate for this infeasibility. For any
given finite simple graph, a polynomial system whose solutions corresponded to
independent sets of size k was originally formulated by László Lovász [22], but
other articles have studied the problem algebraically [20, 28].
One of the interesting results in [21] was to show that for these particular systems
of polynomials, the Nullstellensatz certificate contained a multivariate polynomial
with a natural bijection between monomials and independent sets in the graph.
As such, the Nullstellensatz certificate can be viewed as an enumeration of the
independent sets of a graph. Furthermore, the independence polynomial can quickly
be recovered from this certificate. Later, when modeling the set partition problem,
a similar enumeration occurred in the Nullstellensatz certificate [24].
This paper is directly inspired by these two results and we look at the different
systems of polynomials given in [21] and show that this phenomenon of enumerative Nullstellensatz certificates shows up in all of their examples. We explain this
ubiquity in terms of inversion in Artinian rings. One important example considered
in [21] was k-colorable subgraphs. This problem has also been studied by analyzing
polynomial systems in [2, 14, 25, 13]. We generalize the polynomial systems used
in [21] to arbitrary graph homomorphisms.
We also consider existence of planar subgraphs, cycles of given length, regular
subgraphs, vertex covers, edge covers, and perfect matchings. On the one hand,
these results may be viewed negatively as they imply that a certificate for infeasibility contains much more information than necessary to settle a decision problem.
There have also been papers on attempting efficient computations of Nullstellensatz
certificates ([11, 12]) and we should expect that often this will be very hard. On
the other hand, if one wishes to enumerate combinatorial structures, our results
imply algorithms built from known algebraic techniques to solve this problem.
One polynomial system that does not fall into the general setting of the other
examples is that of perfect matchings. While there is a natural system of equations
modeling this problem that does have enumerative Nullstellensatz certificates, there
is another system of polynomials generating the same ideal that does not. We spend
the latter part of this paper investigating this second system, trying to obtain some
partial results in explaining what combinatorial information is contained in the
Nullstellensatz certificates.
This paper is organized as follows. In Section 2, we review the necessary background on the Nullstellensatz and make our definitions precise. We then present
our motivating example, independent sets, and explain how this particular problem serves as a prototype for other interesting problems. In Section 3, we prove
a sufficient condition for a Nullstellensatz certificate to enumerate combinatorial
structures and give several examples, some of them new and some reformulations
of old ones, that satisfy this property. Lastly, in Section 4, we look at a system of
polynomials whose solutions are perfect matchings that do not satisfy this sufficient
condition and prove some results about the certificates.
2. Background
Given a system of polynomials f1, . . . , fs ∈ C[x1, . . . , xn], consider the set

    V(f1, . . . , fs) = { (α1, . . . , αn) ∈ Cⁿ | f1(α1, . . . , αn) = · · · = fs(α1, . . . , αn) = 0 }.
We call such a set a variety (by an abuse of language, we use the term even for
reducible and non-reduced sets in this paper). In particular, the empty set is a
variety, and if V(f1, . . . , fs) = ∅, we say that the system of polynomials
f1, . . . , fs is infeasible.

One version of David Hilbert's famous Nullstellensatz states that a system
f1, . . . , fs ∈ C[x1, . . . , xn] is infeasible if and only if there exist polynomials
β1, . . . , βs ∈ C[x1, . . . , xn] such that ∑ βi fi = 1 (cf. [8]). The polynomials
β1, . . . , βs are called a Nullstellensatz certificate for the infeasibility of the
system. The degree of the Nullstellensatz certificate is defined to be
max{deg(β1), . . . , deg(βs)}. We note that a Nullstellensatz certificate depends on
the choice of polynomials defining the system; we will revisit this point later. A
second observation is that Nullstellensatz certificates are not unique. Often the
Nullstellensatz certificates of greatest interest are those of minimum degree.
Research into an "effective Nullstellensatz" has yielded general bounds on the degree
of a Nullstellensatz certificate of a system of polynomials [4, 15]. As such, the
following general algorithm for finding such certificates has been proposed
(cf. [11, 12]). Suppose we have a system of s equations in n variables, f1, . . . , fs.
We want to find β1, . . . , βs such that ∑_{i=1}^{s} βi fi = 1. Let Mn,k denote the set
of monomials of degree ≤ k in n variables.
Algorithm 1 Basic outline of the NulLA algorithm.

Suppose we know from some theorem that a Nullstellensatz certificate must have degree
at most d.
  k ← 0
  while k ≤ d do                                ▷ test every degree for a certificate
      for each i ∈ [s], set βi := ∑_{M ∈ Mn,k} αM M, with fresh unknown coefficients αM
      let L be an empty set of linear equations
      for each monomial M do
          determine the coefficient LM of M in ∑ βi fi
          if M = 1 then append LM = 1 to L, else append LM = 0 to L
      solve the system L if possible
      if L has a solution then output "Yes" and exit the while loop
      if k = d then output "No", else k ← k + 1
We summarize the above pseudocode. First, guess the degree of the Nullstellensatz
certificate and consider generic polynomials βi in the variables x1, . . . , xn of that
degree. The condition ∑ βi fi = 1 can then be reformulated as a system of linear
equations whose solutions give the coefficients each βi should have.
If the linear system has no solution, the guessed degree is increased. The general
degree bounds guarantee that this process will terminate eventually, implicitly
finding a valid certificate of minimal degree, provided the initial polynomial system
was infeasible. This algorithm is similar to the XL-style Gröbner basis algorithms
studied in algebraic cryptography [7, 3]. One way to understand the complexity of this
algorithm is to look at the Nullstellensatz certificates that get produced for
different systems of polynomials. This algorithm is one of the main motivations behind
the inquiry into Nullstellensatz certificates.
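The following Python sketch (using sympy) mirrors this degree-by-degree search. It is an illustration of the idea rather than the NulLA implementation; the helper names and the toy input at the end are assumptions introduced for the example.

from itertools import combinations_with_replacement
import sympy as sp

def monomials(vars_, d):
    # All monomials in vars_ of total degree <= d.
    mons = [sp.Integer(1)]
    for k in range(1, d + 1):
        for combo in combinations_with_replacement(vars_, k):
            mons.append(sp.Mul(*combo))
    return mons

def nulla_certificate(fs, vars_, max_degree):
    # Search for beta_1,...,beta_s with sum(beta_i * f_i) = 1, degree <= max_degree.
    for d in range(max_degree + 1):
        mons = monomials(vars_, d)
        unknowns, betas = [], []
        for i in range(len(fs)):
            cs = sp.symbols(f'a{i}_0:{len(mons)}')
            unknowns.extend(cs)
            betas.append(sum(c * m for c, m in zip(cs, mons)))
        expr = sp.expand(sum(b * f for b, f in zip(betas, fs)) - 1)
        eqs = sp.Poly(expr, *vars_).coeffs()      # every coefficient must vanish
        sol = sp.solve(eqs, unknowns, dict=True)
        if sol:
            return d, [b.subs(sol[0]) for b in betas]
    return None

# Tiny example: x^2 = 0 and x - 1 = 0 are jointly infeasible, and indeed
# 1*(x**2) + (-x - 1)*(x - 1) = 1, a certificate of degree 1.
x = sp.symbols('x')
print(nulla_certificate([x**2, x - 1], [x], 2))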
In this paper, we will consider combinatorial problems modeled by systems of
polynomials in C[x1, . . . , xn]. The problems we consider will all come from the
theory of finite graphs and all varieties will be zero-dimensional.
2.1. Motivating Example. Let us give the first example of a polynomial system
modeling a graph problem: independent sets. Lovász gave the following set of
polynomials for determining if a graph has an independent set of size m.
Proposition 2.1 ([22]). Given a graph G, every solution of the system of equations

    xi² − xi = 0,        i ∈ V(G),
    xi xj = 0,           {i, j} ∈ E(G),
    ∑_{i=1}^{n} xi = m

corresponds to an independent set in G of size m.

It is not hard to see that the equations in Proposition 2.1 define a zero-dimensional
variety whose points are in bijection with the independent sets of size m. The first
equation says that xi ∈ {0, 1} for all i ∈ V(G), so every vertex is either in the set
or not. The second equation says that two adjacent vertices cannot both be in the set.
The last equation says that precisely m vertices are in the set.
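For concreteness, the system of Proposition 2.1 can be written down mechanically; the following sympy sketch (with assumed helper names and an assumed toy graph) builds it and checks infeasibility on a small example.

import sympy as sp

def independent_set_system(n, edges, m):
    x = sp.symbols(f'x1:{n+1}')                        # x1, ..., xn
    system = [xi**2 - xi for xi in x]                  # xi in {0, 1}
    system += [x[i-1] * x[j-1] for (i, j) in edges]    # no two adjacent vertices
    system.append(sum(x) - m)                          # exactly m vertices chosen
    return x, system

# Example: a triangle has no independent set of size 2, so the system is infeasible.
vars_, fs = independent_set_system(3, [(1, 2), (2, 3), (1, 3)], 2)
print(sp.solve(fs, vars_, dict=True))                  # expected: []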
If this system is infeasible, then the Nullstellensatz certificate has been explicitly
worked out [21]. Many properties of the certificate encode combinatorial data. For
example, one does not need to appeal to general effective Nullstellensatz bounds, as
the degree of the Nullstellensatz certificate can be taken to be the independence
number of the graph in question. Combining several theorems from that paper, the
following is known.
Theorem 2.2 ([21]). Suppose the system of equations in Proposition 2.1 is infeasible.
Then the minimum degree Nullstellensatz certificate is unique and has degree equal to
α(G), the independence number of G. If

    1 = A (−m + ∑_{i=1}^{n} xi) + ∑_{{i,j}∈E(G)} Qij xi xj + ∑_{i=1}^{n} Pi (xi² − xi),

with certificate polynomials A, Qij, and Pi, then the degree of this certificate is
realized by A; deg(Pi) ≤ α(G) − 1 and deg(Qij) ≤ α(G) − 2. Furthermore, if the
certificate is of minimum degree, the monomials in A with non-zero coefficients can be
taken to be precisely those of the form ∏_{i∈I} xi where I is an independent set of G.
Lastly, the polynomials A, Qij, and Pi all have positive real coefficients.
So Theorem 2.2 tells us that for a minimum degree certificate, the polynomial A
enumerates all independent sets of G. The coefficients of the monomials, however, will
not necessarily be one, so it is not precisely the generating function for independent
sets. The precise coefficients were worked out in [21]. In the next section, we show
that a similar theorem holds for many other examples of zero-dimensional varieties
coming from combinatorics.
3. Rephrasing Nullstellensatz certificates as inverses in an Artinian ring

Recall that a ring S is called Artinian if it satisfies the descending chain condition
on ideals, i.e., for every infinite descending chain of ideals I1 ⊇ I2 ⊇ · · · in S,
there is some i such that Ik = Ik+1 for all k ≥ i. Equivalently, viewed as a left
module over itself, S is finite dimensional, meaning it contains finitely many
monomials.
Let V be a variety in Cⁿ defined by the equations f1, . . . , fs. Although it is not
standard, we do not assume that V is reduced or irreducible, which is to say that
often the ideal ⟨f1, . . . , fs⟩ will not be radical or prime. An ideal
I(V) := ⟨f1, . . . , fs⟩ is radical if g^ℓ ∈ I(V) implies g ∈ I(V). The quotient ring
C[V] := C[x1, . . . , xn]/I(V) is called the coordinate ring of V. It is an elementary
fact from algebraic geometry that V is zero-dimensional if and only if C[V] is
Artinian. This is true even for non-reduced varieties.
The polynomial systems coming from combinatorics are designed to have a solution
precisely when some combinatorial structure exists. In Subsection 2.1, the
combinatorial structure of interest was independent sets in a graph. Many of the
polynomial systems that show up in examples have a particular set of equations that
always plays the same role. Given a polynomial system in C[x1, . . . , xn], we call a
subset of the variables xi for i ∈ I ⊆ [n] indicator variables if the polynomial
system includes the equations xi² − xi = 0 for i ∈ I and −m + ∑_{i∈I} xi = 0 for some
m ∈ N. This was the case for the example in Subsection 2.1.
The indicator variables often correspond directly to combinatorial objects, e.g. edges
or vertices in a graph. The equations xi² − xi = 0 mean that object i is either in
some structure or not. The equation −m + ∑_{i∈I} xi = 0 then says that there must be m
objects in the structure. The other equations in the polynomial system impose
conditions the structure must satisfy.
Now suppose we are given an infeasible polynomial system f1 = 0, . . . , fs = 0 in the
polynomial ring C[y1, . . . , yn, x1, . . . , xp], where y1, . . . , yn are indicator
variables. Without loss of generality, let f1 = −m + ∑_{i=1}^{n} yi for some m ∈ N.
Then one way to find a Nullstellensatz certificate for this polynomial system is to
find the inverse of f1 in C[y1, . . . , yn, x1, . . . , xp]/⟨f2, . . . , fs⟩, which we
denote β1, and then express f1 β1 − 1 as ∑_{i=2}^{s} βi fi. The polynomials
β1, β2, . . . , βs will be a Nullstellensatz certificate.
Throughout the rest of this section, we consider an infeasible polynomial system
f1 = 0, . . . , fs = 0 in A := C[y1, . . . , yn, x1, . . . , xp], where y1, . . . , yn
are indicator variables and f1 = −m + ∑_{i=1}^{n} yi for some non-zero m ∈ N. We let

    R := A/⟨f2, . . . , fs⟩

and V = Spec(R) be the variety defined by f2, . . . , fs.
Lemma 3.1. The ring R is Artinian if and only if for every a ∈ N, the polynomial
system −a + ∑_{i=1}^{n} yi = 0, f2 = 0, . . . , fs = 0 has finitely many solutions.

Proof. We look at the variety defined by the equations f2, . . . , fs. Since
y1, . . . , yn are indicator variables, the equations yi² − yi = 0 are among the
equations f2 = 0, . . . , fs = 0. Thus any solution to these equations must have every
yi equal to either zero or one. Hence there are only finitely many a ∈ N such that
−a + ∑_{i=1}^{n} yi = 0, f2 = 0, . . . , fs = 0 has a solution, as a must be less than
or equal to n. Furthermore, each such polynomial system has finitely many solutions,
so the system f2 = 0, . . . , fs = 0 only has finitely many solutions. Thus V is
zero-dimensional and the ring R is Artinian. Conversely, if R is Artinian, there can
be only finitely many solutions to the system f2 = 0, . . . , fs = 0, and thus only a
finite subset of those solutions can also be a solution to the equation
−a + ∑_{i=1}^{n} yi = 0.
From here on out, we assume that R is Artinian, as this will be the case in every
example we consider. This is a consequence of the fact that the polynomial systems are
designed to have solutions corresponding to some finite combinatorial structure inside
of some larger, yet still finite, combinatorial object. In our examples, we are always
looking for some graph structure inside a finite graph. The examples we consider also
satisfy another property that we shall assume throughout the rest of this section
unless otherwise stated.
Definition 3.2. The system of polynomials f2 = 0, . . . , fs = 0 in indicator variables
y1, . . . , yn is called subset closed if
(a) there is a solution of the system where y1 = · · · = yn = 0, and
(b) letting χI(i) = 1 if i ∈ I ⊆ [n] and 0 otherwise: if there is a solution of the
system where yi = χI(i), then for all J ⊆ I there is a solution of the system where
yj = χJ(j).
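Definition 3.2 can be checked directly on the family of 0/1 indicator patterns occurring in solutions. The following Python sketch (an illustration, with the family supplied as an assumed input) tests that the family contains the all-zero pattern and is closed under taking subsets.

from itertools import combinations

def is_subset_closed(B):
    # B is a collection of index sets where y_i = 1 in some solution.
    family = {frozenset(b) for b in B}
    if frozenset() not in family:
        return False
    for b in family:
        for k in range(len(b)):
            for sub in combinations(b, k):
                if frozenset(sub) not in family:
                    return False
    return True

# Independent sets of the path 1-2-3: subset closed.
print(is_subset_closed([(), (1,), (2,), (3,), (1, 3)]))                    # True
# Vertex covers of the same path: not subset closed (the empty set is missing).
print(is_subset_closed([(2,), (1, 3), (1, 2), (2, 3), (1, 2, 3)]))         # False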
It is very easy to describe the monomials with non-zero coefficients appearing in the
inverse of f1 in the ring R. We look at the variety defined by the polynomials
f2, . . . , fs, which consists of finitely many points. Suppose we are given a solution
to the system f2 = 0, . . . , fs = 0: y1 = d1, . . . , yn = dn and
x1 = a1, . . . , xp = ap. We can then map this solution to the point
(d1, . . . , dn) ∈ {0, 1}ⁿ. In this way each solution to the system f2, . . . , fs is
associated to a point of the n-dimensional hypercube {0, 1}ⁿ. Let B be the subset of
{0, 1}ⁿ which is the image of this mapping.
Given d = (d1, . . . , dn) ∈ {0, 1}ⁿ, we associate to it the monomial
yd := ∏_{i : di=1} yi. For any b = (b1, . . . , bn) ∈ B \ {(0, . . . , 0)}, the monomial
yb regarded as a function restricted to V is not identically zero, as there is a point
in V where yi = 1 for all bi = 1 in (b1, . . . , bn).

Lemma 3.3. If f1 = 0, . . . , fs = 0 is subset closed, then for
d = (d1, . . . , dn) ∉ B, yd is in the ideal generated by f2, . . . , fs.

Proof. If this were not the case, there would be a point
v = (c1, . . . , cn, γ1, . . . , γp) ∈ V such that yd(v) = 1. Let I be the set
{i ∈ [n] | ci = 1}. If J = {i ∈ [n] | di = 1}, we see that J ⊆ I and that there is a
solution where yi = χI(i). So there must be a solution where yi = χJ(i) by the subset
closed property. This implies that (d1, . . . , dn) ∈ B since di = χJ(i), and so we
have a contradiction.
This means that for d ∉ B, there is some k ∈ N such that yd^k = 0 in the ring R. The
exponent k may a priori be greater than one, as we have not assumed that the ideal
⟨f2, . . . , fs⟩ is radical. However, we note that for any d ∈ {0, 1}ⁿ, yd^k = yd for
all k ∈ N, because the equations yi² − yi for all i ∈ [n] are among the polynomials
f2, . . . , fs. Thus for d ∉ B, yd = 0 in the ring R.
We can thus conclude that the monomials in the indicator variables in R are in
bijection with the combinatorial structures satisfying the constraints of the
polynomial system. In Proposition 2.1, the ring C[x1, . . . , x_{|V(G)|}] modulo the
first two sets of polynomials gives a ring whose monomials are of the form
∏_{i∈I} xi, where I indexes the vertices of an independent set of G.
Lemma 3.4. Given b1, . . . , bk ∈ B, there are no polynomial relations of the form
∑_{i=1}^{k} ai y_{bi} = 0 with ai ∈ R≥0 in R except when all ai = 0. Similarly if all
ai ∈ R≤0.

Proof. Because of the polynomials yi² − yi = 0, any monomial y_{bi} can only take the
value zero or one when restricted to V, and for bi ∈ B it takes the value one at some
point of V. A sum of non-negative real multiples of such monomials can therefore only
vanish identically on V in the trivial case where all coefficients are zero. The proof
of the second assertion is the same as the first.
Theorem 3.5. There is a Nullstellensatz certificate β1, . . . , βs for the system
f1 = 0, . . . , fs = 0 such that the non-zero monomials of β1 are precisely the
monomials yb for b ∈ B.

Proof. We look for a Nullstellensatz certificate β1, . . . , βs where β1 f1 = 1 in R,
so that we can express β1 f1 − 1 = ∑_{i=2}^{s} βi fi in A. We now analyze what β1 must
look like. First of all we note that over C there is a power series expansion

    1/(−m + t) = −(1/m) ∑_{i≥0} (t/m)^i,

viewed as a function in t. Replacing t with ∑_{i=1}^{n} yi, we get a power series in
the indicator variables that includes every monomial in these variables, each with a
negative coefficient.
We consider the partial sums

    −(1/m) ∑_{i=0}^{k} (t/m)^i,    t = ∑_{i=1}^{n} yi,

first taken modulo the ideal generated by the polynomials of the form yi² − yi. This
gives a sum whose monomials are yd for d ∈ {0, 1}ⁿ and whose coefficients are all
negative real numbers. We know from Lemma 3.3 that the monomials in β1 can be taken of
the form yb for b ∈ B; the other monomials can be expressed in terms of
f2, . . . , fs. So those monomials that are equal to zero modulo f2, . . . , fs can be
removed, giving us another sum. If there is a relation of the form
∑_i ai y_{di} = ∑_j cj y_{d′j}, we ignore it; such relations simply mean that there is
a non-unique way to represent this sum in R. Lastly, these partial sums converge to
the inverse β1 of f1 in R, which is supported on the monomials of the form yb for
b ∈ B, using the assumption that R is Artinian.
Theorem 3.5 guarantees the existence of a Nullstellensatz certificate in which every
combinatorial structure satisfying the constraints of f2, . . . , fs is encoded as a
monomial in the indicator variables appearing with non-zero coefficient in β1. This is
precisely the Nullstellensatz certificate found in Theorem 2.2, since any subset of an
independent set is again an independent set.
As it happens, in Theorem 2.2 this Nullstellensatz certificate is also of minimal
degree. However, it is not necessarily the case that the certificate given in
Theorem 3.5 is minimal.
The most obvious way for minimality to fail is by reducing via the linear relations
among the monomials yb for b ∈ B having both negative and positive coefficients.
However, if all such linear relations are homogeneous polynomials, we show that these
relations cannot reduce the degree of the Nullstellensatz certificate.

Definition 3.6. We say that the ideal ⟨f2, . . . , fs⟩ has only homogeneous linear
relations among the indicator variables if every such relation is of the form
∑_{i=1}^{ℓ} ai y_{di} = 0 with ai ∈ R and di ∈ {0, 1}ⁿ, and has the properties that
not all ai are positive or all negative, and that deg(y_{di}) = deg(y_{dj}) for all
i, j ∈ [ℓ].
Lemma 3.7. Let β1 , . . . , βs be a minimal Nullstellensatz certificate for the system
f1 “ 0, . . . , fs “ 0, and suppose that β1 only has positive real coefficients. Then
after adding homogeneous relations in the indicator variables to the system, there
is a minimal degree Nullstellensatz certificate of the form β1 , β21 , . . . , βs1 .
Proof. We claim that by adding homogeneous relations in the indicator variables, it is
impossible to reduce the degree of β1 if it only has positive real coefficients.
Suppose we try to remove the monomials of highest degree in β1 by adding homogeneous
linear relations in the indicator variables. We apply the first such relation to see
which monomials we can remove. The relations are of the form
∑_{i=1}^{k} ai y_{bi} = ∑_{j=k+1}^{ℓ} aj y_{bj}, where the y_{bi} are all monomials of
a given degree d and all ai ∈ R≥0. Thus we can potentially remove some of the
monomials of degree d in β1 using such relations, but not all of them. This remains
true no matter how many linear homogeneous relations we apply; there will always be
some monomials of highest degree remaining. However, we might be able to reduce the
degree of the other βi, i ≥ 2, to get a Nullstellensatz certificate
β1, β2′, . . . , βs′ such that the degree of this new certificate is less than the
degree of the original.
Given the Nullstellensatz certificate β1, . . . , βs guaranteed by Theorem 3.5, we note
that f1 β1 − 1 must consist entirely of monomials of the form yd for d ∉ B. If
D = max{deg(yb) | b ∈ B}, then f1 β1 − 1 has degree D + 1, as f1 is linear. Let M
denote the set of monomials (in A) of f1 β1 − 1.
We consider the following hypothetical situation in which the Nullstellensatz
certificate guaranteed by Theorem 3.5 is not of minimal degree. Given a monomial
µ ∈ M, we have µ = ∑_{i=2}^{s} µi fi for some polynomials µi, although this expression
is not unique. We define

    degR(µ) := max{ deg(µi), over all equalities µ = ∑_{i=2}^{s} µi fi }.

Then the degree of β1, . . . , βs is max{ D, degR(µ) for µ ∈ M }.
Suppose that the degree of the certificate is ≥ D + 2 and that
max{degR(yi · µ) | µ ∈ M} < max{degR(µ) | µ ∈ M} for every i ∈ [n]. Then define
β1′ := β1 + m⁻¹(f1 β1 − 1), which has degree D + 1.
Now we note that

    f1 β1′ − 1 = f1 β1 + m⁻¹ f1 (f1 β1 − 1) − 1
               = f1 β1 − f1 β1 + 1 + m⁻¹ ∑_{i=1}^{n} yi (f1 β1 − 1) − 1
               = m⁻¹ ∑_{i=1}^{n} yi (f1 β1 − 1).

However, since every monomial in m⁻¹ ∑_{i=1}^{n} yi (f1 β1 − 1) is of the form yi · µ
for some µ ∈ M, we can express this polynomial as ∑_{i=2}^{s} βi′ fi where
deg(βi′) < deg(βi), since max{degR(yi · µ) | µ ∈ M} < max{degR(µ) | µ ∈ M} for every
i ∈ [n]. So we have found a Nullstellensatz certificate of smaller degree.
In the hypothetical situation above, we were able to drop the degree of the
Nullstellensatz certificate by increasing the degree of β1 by one. However, this
construction can be iterated, and it may be that the degree of β1 must be increased
several times before the minimal degree certificate is found. This depends on how high
the degrees of β2, . . . , βs are. It may also be the case that adding lower degree
monomials lowers the degree of the certificate.
Lemma 3.8. Let β1, . . . , βs be the Nullstellensatz certificate guaranteed by
Theorem 3.5 and suppose f2, . . . , fs have only linear homogeneous relations in the
indicator variables. If max{deg(βi) | i ∈ [s]} = deg(β1), then this is a
Nullstellensatz certificate of minimal degree.

Proof. For any Nullstellensatz certificate β1′, . . . , βs′, we may assume without loss
of generality that the monomials of β1 form a subset of those in β1′. Indeed, we may
use Lemma 3.3 to say that all monomials yb for b ∈ B must appear in β1′ unless there
are linear homogeneous relations in the indicator variables. By Lemma 3.7, reducing
β1′ alone by these relations will not reduce the degree of β1′, . . . , βs′. However,
this implies that deg(β1′) ≥ deg(β1), and thus that the Nullstellensatz certificate
β1′, . . . , βs′ has degree ≥ deg(β1), which is the degree of the certificate
β1, . . . , βs by assumption. So β1, . . . , βs is a Nullstellensatz certificate of
minimal degree.
Lemma 3.8 tells us that the Nullstellensatz certificate given in Theorem 3.5 is,
naïvely, more likely to be of minimal degree when the degrees of f2, . . . , fs are
high with respect to the number of variables, implying that the degrees of
β2, . . . , βs are low.
Proposition 3.9. Let f1, . . . , fs have only homogeneous linear relations in the
indicator variables and let β1, . . . , βs be a Nullstellensatz certificate, with β1
supported on the monomials y_{b1}, . . . , y_{bℓ} for b1, . . . , bℓ ∈ B. If for all
j ∈ [n] and k ∈ [ℓ], the expression yj y_{bk} = ∑_{i=2}^{s} αi fi satisfies
deg(αi) ≤ deg(y_{bk}) for all i = 2, . . . , s, then β1, . . . , βs is a minimum degree
Nullstellensatz certificate.
In addition, if f2 = 0, . . . , fs = 0 is a polynomial system entirely in the indicator
variables and there are only homogeneous linear relations, then there is a
Nullstellensatz certificate of minimal degree of the form β1, β2′, . . . , βs′, where
β1 is the coefficient polynomial guaranteed by Theorem 3.5.

Proof. We know that f1 β1 − 1 contains only monomials of the form yj y_{bk} for
j ∈ [n] and k ∈ [ℓ]. By assumption yj y_{bk} = ∑_{i=2}^{s} αi fi with
deg(αi) ≤ deg(y_{bk}) for all i = 2, . . . , s. This implies deg(β1) ≥ deg(βi) for all
i = 2, . . . , s. Then apply Lemma 3.8.
If we restrict our attention to a system f1 = 0, . . . , fs = 0 only in the indicator
variables, all homogeneous linear relations are in the indicator variables.
Furthermore, any equation yj y_{bk} = ∑_{i=2}^{s} αi fi is a homogeneous linear
relation in the indicator variables, since these are the only variables in the
equation. So these too can be ignored. We can then apply Lemma 3.7.
Proposition 3.9 is a generalization of Theorem 2.2 as we can see from Proposition
2.1 that all of the variables are indicator variables.
3.1. Some examples with indicator variables. We now reproduce the first
theorem from [21]. This theorem establishes several polynomial systems for finding
combinatorial properties of graphs and we shall see that all of them (except for one,
which we have omitted from the theorem) satisfy the conditions of Theorem 3.5.
Afterwards, we shall present a few new examples that also use indicator variables.
Theorem 3.10 ([21]).
1. A simple graph G = (V, E) with vertices numbered 1, . . . , n and edges numbered
1, . . . , e has a planar subgraph with m edges if and only if the following system of
equations has a solution:

    −m + ∑_{{i,j}∈E} zij = 0.
    zij² − zij = 0                                                  for all {i, j} ∈ E.
    ∏_{s=1}^{n+e} (x_{i,k} − s) = 0                                 for k = 1, 2, 3 and every i ∈ [n].
    ∏_{s=1}^{n+e} (y_{ij,k} − s) = 0                                for k = 1, 2, 3 and {i, j} ∈ E.
    zij (y_{ij,k} − x_{i,k} − ∆_{ij,i,k}) = 0                       for i ∈ [n], {i, j} ∈ E, k ∈ [3].
    zuv ∏_{k=1}^{3} (y_{uv,k} − x_{i,k} − ∆_{uv,i,k}) = 0           for i ∈ [n], {u, v} ∈ E, u, v ≠ i.
    zij zuv ∏_{k=1}^{3} (y_{ij,k} − y_{uv,k} − ∆_{ij,uv,k}) = 0     for every {i, j}, {u, v} ∈ E.
    zij zuv ∏_{k=1}^{3} (y_{uv,k} − y_{ij,k} − ∆_{uv,ij,k}) = 0     for every {i, j}, {u, v} ∈ E.
    ∏_{k=1}^{3} (x_{i,k} − x_{j,k} − ∆_{i,j,k}) = 0                 for i, j ∈ [n].
    ∏_{k=1}^{3} (x_{j,k} − x_{i,k} − ∆_{j,i,k}) = 0                 for i, j ∈ [n].
    ∏_{d=1}^{n+e−1} (∆_{ij,uv,k} − d) = 0                           for {i, j}, {u, v} ∈ E, k ∈ [3].
    ∏_{d=1}^{n+e−1} (∆_{ij,i,k} − d) = 0                            for {i, j} ∈ E, k ∈ [3].

    For k = 1, 2, 3:
    s_k · ∏_{i,j∈[n], i<j} (x_{i,k} − x_{j,k}) · ∏_{i∈[n], {u,v}∈E} (x_{i,k} − y_{uv,k}) · ∏_{{i,j},{u,v}∈E} (y_{ij,k} − y_{uv,k}) = 1.
2. A graph G = (V, E) with vertices labeled 1, . . . , n has a k-colorable subgraph
with m edges if and only if the following system of equations has a solution:

    −m + ∑_{{i,j}∈E} yij = 0
    yij² − yij = 0                                       for {i, j} ∈ E.
    xi^k − 1 = 0                                         for i ∈ [n].
    yij (xi^{k−1} + xi^{k−2} xj + · · · + xj^{k−1}) = 0   for {i, j} ∈ E.
3. Let G = (V, E) be a simple graph with maximum vertex degree ∆ and vertices labeled
1, . . . , n. Then G has a subgraph with m edges and edge-chromatic number ∆ if and
only if the following system of equations has a solution:

    −m + ∑_{{i,j}∈E} yij = 0
    yij² − yij = 0                                 for {i, j} ∈ E.
    yij (xij^∆ − 1) = 0                            for {i, j} ∈ E.
    si ∏_{j,k∈N(i), j<k} (xij − xik) = 1           for i ∈ [n].
We can also look at a system of polynomials that asks if there is a subgraph of a
graph homomorphic to another given graph. This is a generalization of Part 2 of
Theorem 3.10, as k-colorable subgraphs can be viewed as subgraphs homomorphic to the
complete graph on k vertices.
Proposition 3.11. Given two simple graphs G = (V, E) (with vertices labeled
1, . . . , n) and H, there is a subgraph of G with m edges homomorphic to H if and
only if the following system of equations has a solution:

    −m + ∑_{{i,j}∈E} yij = 0
    yij² − yij = 0                                         for all {i, j} ∈ E.
    (∑_{j∈N(i)} yij) ∏_{v∈V(H)} (zi − xv) = 0              for all i ∈ [n].
    yij ∏_{{v,w}∈E(H)} (zi + zj − xv − xw) = 0             for all {i, j} ∈ E.
Proof. The variables yij are the indicator variables which designate whether or not an
edge of G is included in the subgraph. The third set of equations says that if at
least one of the edges incident to vertex i is included in the subgraph, then vertex i
must map to a vertex v ∈ V(H). The last set of equations says that if the edge
{i, j} is included in the subgraph, its endpoints must be mapped to the endpoints of
an edge in H.
We see that all of these systems of equations have indicator variables and that each
system has only finitely many solutions. So Lemma 3.1 says that the ring formed by
taking the quotient by the ideal generated by all equations not of the form
−m + ∑_{i=1}^{n} yi = 0 is Artinian. We also see that a subgraph of a k-colorable
subgraph is k-colorable, a subgraph of a planar subgraph is planar, and a subgraph of
a k-edge-colorable subgraph is k-edge-colorable. Lastly, if a subgraph F of G is
homomorphic to H, so is any subgraph of F, by restricting the homomorphism. So all of
these systems are subset closed.
Corollary 3.12. If the first system of equations in Theorem 3.10 is infeasible, there
is a Nullstellensatz certificate β1, . . . , βs such that the monomials in β1 are
monomials in the indicator variables in bijection with the planar subgraphs. If the
second system in Theorem 3.10 is infeasible, the same holds except that the monomials
are in bijection with the k-colorable subgraphs. If the third system in Theorem 3.10
is infeasible, the same holds except that the monomials are in bijection with the
k-edge colorable subgraphs. Lastly, if the system of equations in Proposition 3.11 is
infeasible, the same holds except that the monomials are in bijection with the
subgraphs homomorphic to H.
Proof. This follows directly from Theorem 3.5 and Theorem 3.10.
None of the examples in Theorem 3.10 satisfy the conditions of Proposition 3.9
as the indicator variables are a proper subset of the variables in the system. The
following theorem gives a few examples that only involve indicator variables. While
only three of the four following examples satisfy the conditions of Proposition 3.9,
we will see that Theorem 3.5 can be useful in understanding minimum degree
certificates if we can analyze the equations directly.
Definition 3.13. Given a graph G, we say that a subgraph H cages a vertex v ∈ V(G) if
every edge incident to v in G is an edge in H.
Theorem 3.14.
1. A graph G = (V, E) with vertices labeled 1, . . . , n has a regular spanning
subgraph with m edges if and only if the following system of equations has a solution:

    −m + ∑_{{i,j}∈E} yij = 0
    yij² − yij = 0                              for all {i, j} ∈ E.
    ∑_{j∈N(i)} yij = ∑_{k∈N(ℓ)} ykℓ             for every i, ℓ ∈ [n].
Furthermore, if the system is infeasible, there is a minimal degree Nullstellensatz
certificate of the form β1, β2′, . . . , βs′, where β1 is the coefficient polynomial
guaranteed in Theorem 3.5.
2. A graph G = (V, E) with vertices labeled 1, . . . , n has a k-regular subgraph with
m edges if and only if the following system of equations has a solution:

    −m + ∑_{{i,j}∈E} yij = 0
    yij² − yij = 0                                         for all {i, j} ∈ E.
    (∑_{j∈N(i)} yij) (∑_{j∈N(i)} yij − k) = 0              for every i ∈ [n].

Furthermore, suppose the system is infeasible. If there exists an edge in a maximum
k-regular subgraph such that each of its endpoints is incident to an edge lying in no
maximum k-regular subgraph, then there is a minimal degree Nullstellensatz certificate
of the form β1, β2′, . . . , βs′, where β1 is the coefficient polynomial guaranteed in
Theorem 3.5.
3. A graph G = (V, E) with vertices labeled 1, . . . , n has a vertex cover of size m
if and only if the following system of equations has a solution:

    −(n − m) + ∑_{i∈[n]} yi = 0
    yi² − yi = 0            for all i ∈ [n].
    yi yj = 0               for all {i, j} ∈ E.

Furthermore, if the system is infeasible, there is a Nullstellensatz certificate
β1, . . . , βs of minimal degree such that the monomials in β1 are in bijection with
the independent sets of G.
4. A graph G with vertices labeled 1, . . . , n and e edges has an edge cover of size
m if and only if the following system of equations has a solution:

    −(e − m) + ∑_{{i,j}∈E} yij = 0
    yij² − yij = 0                   for all {i, j} ∈ E.
    ∏_{j∈N(i)} yij = 0               for all i ∈ [n].

Furthermore, if the system is infeasible, there is a minimal degree Nullstellensatz
certificate β1, . . . , βs such that the monomials of β1 correspond to the subgraphs
of G that cage no vertex of G.
Proof. We first prove Part 1. A solution to the system implies the existence of a
regular spanning subgraph with m edges: the indicator variables correspond to edges
that are either in a subgraph satisfying the last set of equations or not, and the
last set of equations says that every pair of vertices must be incident to the same
number of edges of the subgraph. The last equations are homogeneous linear equations,
so we use Proposition 3.9 to prove that the Nullstellensatz certificate guaranteed by
Theorem 3.5 is a minimal degree certificate.
Now we move to Part 2. Once again, the indicator variables correspond to edges that
are either in the subgraph or not. The last set of equations says that every vertex
must be incident to either k edges of the subgraph or 0 edges. These equations are not
homogeneous linear relations. Now suppose that there is an edge {i, j} that is in a
maximum k-regular subgraph and that {i, ℓ1} and {ℓ2, j} are edges that lie in no
maximum k-regular subgraph.
The polynomial β1 in the certificate β1, . . . , βs given by Theorem 3.5 contains a
monomial for every k-regular subgraph. At least one of these monomials contains the
variable yij. There are only two linear relations in which yij appears:
(∑_{s∈N(i)} yis)(∑_{s∈N(i)} yis − k) = 0 and (∑_{s∈N(j)} ysj)(∑_{s∈N(j)} ysj − k) = 0.
The former involves the variable y_{iℓ1} and the latter the variable y_{ℓ2 j}. But
neither of these variables appears in monomials of maximal degree by assumption.
Therefore monomials of maximal degree involving yij cannot be eliminated using the
polynomials f2, . . . , fs, so the total degree of β1 cannot be reduced.
Now we prove Part 3. We first consider a different system modeling vertex cover:

    −m + ∑_{i∈[n]} xi = 0
    xi² − xi = 0                for all i ∈ [n].
    (xi − 1)(xj − 1) = 0        for all {i, j} ∈ E.

In this system, the indicator variables correspond to vertices that are either in a
vertex cover or not. The last set of equations says that for every edge, at least one
of its endpoints must be included in the vertex cover. However, this system is not
subset closed; in fact it is the opposite: if a set is a vertex cover, so is any
superset. So for Theorem 3.5 to be applicable, we make the change of variables
−yi = xi − 1. Plugging this in gives the equations in the statement of the theorem,
which are now subset closed, and it defines an isomorphic ideal. We then note that
these equations model independent sets on the same graph and use Theorem 2.2.
Lastly, we prove Part 4. As in Part 3, we first consider the following system:

    −m + ∑_{{i,j}∈E} xij = 0
    xij² − xij = 0              for all {i, j} ∈ E.
    ∏_{j∈N(i)} (xij − 1) = 0    for all i ∈ [n].

In this system, the indicator variables correspond to edges that are either in the
edge cover or not. The last equations say that for every vertex, at least one of its
incident edges must be in the edge cover. Once again, this system is the opposite of
subset closed: any superset of an edge cover is an edge cover. So once again we make a
variable substitution, this time −yij = xij − 1. Plugging in gives the system in the
statement of the theorem. We then use Proposition 3.9, noting there are no linear
relations among the indicator variables, and note that the square-free monomials that
get sent to zero are exactly those divisible by a monomial of the form
∏_{j∈N(i)} yij. If a monomial is not divisible by a monomial of this form, it
corresponds to a subgraph that cages no vertex.
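The change of variables used in Parts 3 and 4 can be carried out mechanically. The following sympy sketch (an illustration on an assumed toy graph) substitutes xi = 1 − yi into the vertex-cover system and recovers, up to sign of the first equation, the system of Part 3.

import sympy as sp

n, m = 3, 2
edges = [(1, 2), (2, 3)]                 # the path 1-2-3, assumed for illustration
x = sp.symbols(f'x1:{n+1}')
y = sp.symbols(f'y1:{n+1}')

cover_system = [sum(x) - m] \
    + [xi**2 - xi for xi in x] \
    + [(x[i-1] - 1) * (x[j-1] - 1) for (i, j) in edges]

subs = {xi: 1 - yi for xi, yi in zip(x, y)}
transformed = [sp.expand(f.subs(subs)) for f in cover_system]
print(transformed)
# First entry equals (n - m) - (y1 + y2 + y3); the others become yi**2 - yi and yi*yj.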
We see from Part 2 of Theorem 3.14 that whether or not a minimal degree Nullstellensatz
certificate exists that enumerates all combinatorial structures satisfying the
polynomial constraints is sensitive to the input data. We also see from the proofs of
Parts 3 and 4 how Theorem 3.5 might fail to be applicable. However, in the case of a
superset closed system, it is generally possible to turn it into a subset closed
system using the change of variables exhibited in the proof of Theorem 3.14.
While the systems of equations in Parts 3 and 4 of Theorem 3.14 are not the most
obvious ones, because they can be obtained from a more straightforward system by a
linear change of variables, we have the following corollary.
Corollary 3.15.
Part 1. A graph G = (V, E) with vertices labeled 1, . . . , n has a vertex cover of
size m if and only if the following system of equations has a solution:

    −m + ∑_{i∈[n]} xi = 0
    xi² − xi = 0                for all i ∈ [n].
    (xi − 1)(xj − 1) = 0        for all {i, j} ∈ E.
Furthermore, if the system is infeasible, the degree of a minimum degree Nullstellensatz certificate is the independence number of G.
Part 2. A graph G = (V, E) with vertices labeled 1, . . . , n and e edges has an edge
cover of size m if and only if the following system of equations has a solution:

    −m + ∑_{{i,j}∈E} xij = 0
    xij² − xij = 0              for all {i, j} ∈ E.
    ∏_{j∈N(i)} (xij − 1) = 0    for all i ∈ [n].
Furthermore, if the system is infeasible, the degree of a minimum degree Nullstellensatz certificate is equal to the maximum number of edges a subgraph of G can
have such that no vertex of G is caged.
Proof. For both parts, the correctness of the system of equations was proven in
Theorem 3.14. Furthermore, these systems are equivalent to the systems in Parts 3 and
4, respectively, of Theorem 3.14 after an invertible linear change of variables. This
means that if β1, . . . , βs is a minimum degree Nullstellensatz certificate for the
systems in Theorem 3.14, then applying an invertible linear change of variables to the
variables in the βi preserves their degrees. Thus the degrees of any minimum degree
Nullstellensatz certificate for the systems in the statement of the corollary must be
the same.
4. Perfect Matchings
We now turn our attention to the problem of determining whether a graph has a perfect
matching via Nullstellensatz certificate methods. Unlike many of the problems
considered in the previous section, this problem is not NP-complete: Edmonds' blossom
algorithm determines whether a graph G = (V, E) has a perfect matching in time
O(|E| |V|^{1/2}). The following set of equations has been proposed for determining if
a graph G has a perfect matching.
    ∑_{j∈N(i)} xij = 1        for i ∈ V(G),
    xij xjk = 0               for all i ∈ V(G), j, k ∈ N(i),                        (1)

where N(i) denotes the neighborhood of vertex i. The first equation says that a vertex
must be incident to at least one edge in a perfect matching, and the second equation
says it can be incident to at most one such edge. So indeed, these equations are
infeasible if and only if G does not have a perfect matching. However, there is not
yet a complete understanding of the Nullstellensatz certificates when this system is
infeasible [23].
We note that the equations xij² − xij are easily seen to be in the ideal generated by
Equations 1; thus the variables xij are indicator variables. However, there is no
equation of the form −m + ∑ xij, so we are not in the situation required to apply
Theorem 3.5. That said, there still exists a Nullstellensatz certificate such that the
non-zero monomials are precisely those corresponding to matchings in the graph G.
We observe that a matching on a graph G corresponds precisely to an independent set of its line graph LpGq. In fact, there is a bijection between independent
sets of LpGq and matchings of G. This suggests a different set of equations for
determining perfect matchings of G that mimic those in Proposition 2.1.
    xij² − xij = 0,            {i, j} ∈ E(G),
    xij xik = 0,               {i, j}, {i, k} ∈ E(G),                               (2)
    ∑_{{i,j}∈E(G)} xij = |V(G)|/2.
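The line-graph correspondence behind Equations 2 is easy to illustrate. The following sketch uses networkx (an assumed dependency, not used in the paper) to list the perfect matchings of a small graph as independent sets of its line graph.

import networkx as nx
from itertools import combinations

G = nx.path_graph(4)                    # vertices 0..3, a path with 3 edges
L = nx.line_graph(G)                    # vertices of L(G) are the edges of G

def independent_sets(H, size):
    return [S for S in combinations(H.nodes, size)
            if not any(H.has_edge(u, v) for u, v in combinations(S, 2))]

# Perfect matchings of the 4-path <-> independent sets of size |V(G)|/2 in L(G).
print(independent_sets(L, len(G) // 2))   # -> [((0, 1), (2, 3))]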
It quickly follows from Proposition 2.1 that the solutions to this system form a
zero-dimensional variety whose points correspond to perfect matchings. Moreover, we
know from Theorem 2.2 that if the system is infeasible then there is a unique minimum
degree Nullstellensatz certificate whose degree is the size of a maximum matching of
G. Furthermore, the coefficient polynomial for the equation ∑ xij = |V(G)|/2 in this
certificate has monomials corresponding precisely to the matchings of G.
Equations 1 and Equations 2 define the same variety as a set. We now want a way of
turning a Nullstellensatz certificate for Equations 2 into a Nullstellensatz
certificate for Equations 1. This is possible if Equations 1 and Equations 2 define
the same ideal, and for that it is sufficient to show that both generate a radical
ideal. We have the following lemma (cf. [16]).
Proposition 4.1 (Seidenberg's Lemma). Let I ⊆ k[x1, . . . , xn] be a zero-dimensional
ideal. Suppose that for every i ∈ [n] there is a non-zero polynomial gi ∈ I ∩ k[xi]
with no repeated roots. Then I is radical.

We see that the polynomials of the form xij² − xij satisfy the conditions of
Proposition 4.1, so both ideals are indeed radical. By Theorem 2.2, if Equations 2 are
infeasible, we have a Nullstellensatz certificate of the form

    1 = A (−|V(G)|/2 + ∑_{{i,j}∈E(G)} xij) + ∑_{{i,j}≠{i,k}∈E(G)} Qijk xij xik
        + ∑_{{i,j}∈E(G)} Pij (xij² − xij),
where A is a polynomial whose monomials are in bijection with matchings of G and all
coefficients are positive real numbers. If Equations 1 are infeasible, we denote by ∆i
and Θijk the polynomials of a Nullstellensatz certificate such that

    1 = ∑_{i∈V(G)} ∆i ((∑_{j∈N(i)} xij) − 1) + ∑_{{i,j}≠{i,k}∈E(G)} Θijk xij xik.
Proposition 4.2. If Equations 1 are infeasible, then there is a Nullstellensatz
certificate ∆i, i ∈ V(G), and Θijk for {i, j} ≠ {i, k} ∈ E(G) such that
(a) the degree of each ∆i is the size of a maximum matching of G;
(b) for every matching M of G, the monomial ∏_{{i,j}∈M} xij appears with non-zero
coefficient in ∆i for all i ∈ V(G);
(c) the degree of Θijk is less than or equal to the degree of ∆ℓ for all
i, j, k, ℓ ∈ V(G).
Proof. First we note that

    (1/2) ∑_{i∈V(G)} ((∑_{j∈N(i)} xij) − 1) = −|V(G)|/2 + ∑_{{i,j}∈E(G)} xij.

Then for {i, j} ∈ E(G) we have that

    Pij (xij² − xij) = Pij ( xij [−1 + ∑_{{i,k}∈E(G)} xik] − ∑_{{i,k}∈E(G), j≠k} xij xik ).

So if A, Pij and Qijk are a Nullstellensatz certificate for Equations 2, then we see
that if we set

    ∆i := (1/2) A + ∑_{j∈N(i)} Pij xij    and    Θijk := Qijk − Pij,

we get a Nullstellensatz certificate for Equations 1. Since deg(Pij) < deg(A) and both
have only positive real coefficients, deg(∆i) = deg(A), which is the size of a maximum
matching of G by Theorem 2.2. This also implies Part (b) of the statement. Lastly, we
note that since deg(Qijk) ≤ deg(A) − 2, Θijk has degree at most deg(A) = deg(∆i),
again using Theorem 2.2.
While Proposition 4.2 implies the existence of an enumerative Nullstellensatz
certificate similar to that in Theorem 2.2, it is not necessarily of minimal degree.
In fact many times it will not be. Consider the following result.
Theorem 4.3. A loopless graph G has a degree zero Nullstellensatz certificate
β1, . . . , βs for Equations 1 if and only if G is bipartite and the two color classes
have unequal size. Furthermore, we can choose such a Nullstellensatz certificate so
that for each non-zero βi, |βi|⁻¹ can be taken to be equal to the difference in size
of the two color classes.
Proof. Let G = (V, E) and fi = ∑_{j∈N(i)} xij − 1 for i ∈ V. Suppose the graph G is
bipartite with color classes A and B such that |A| > |B|, and let c = 1/(|A| − |B|).
Then

    ∑_{i∈B} c fi + ∑_{j∈A} (−c) fj = 1,

since every variable xij appears once with coefficient c and once with coefficient −c,
while the constant terms contribute c(|A| − |B|) = 1. This gives a Nullstellensatz
certificate of degree 0 for G.
Conversely, suppose that G has a degree zero Nullstellensatz certificate
β1, . . . , βs. Clearly, the coefficients of the equations of the form xij xjk have to
be zero. Now for some vertex vi, let the equation fi have coefficient βi = c for some
c ∈ C. Then for all j ∈ N(i) we have βj = −c. Repeating this argument, we see that G
cannot have any odd cycles. Furthermore, for the sum of the constant terms to be
non-zero, we need the sizes of the color classes to be unequal.
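As a sanity check of the construction in the proof, the following sympy sketch (an illustration on an assumed example graph) verifies the degree-zero certificate for the star K_{1,3}, whose color classes have sizes 3 (the leaves) and 1 (the center).

import sympy as sp

x01, x02, x03 = sp.symbols('x01 x02 x03')
f = {1: x01 - 1, 2: x02 - 1, 3: x03 - 1,          # f_i for the three leaves
     0: x01 + x02 + x03 - 1}                       # f_0 for the center
c = sp.Rational(1, 2)                              # c = 1/(|A| - |B|) = 1/(3 - 1)
certificate = c * f[0] - c * (f[1] + f[2] + f[3])  # +c on B = {0}, -c on A = {1,2,3}
print(sp.simplify(certificate))                    # -> 1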
We see that as the size of the graphs being considered grows, the difference between
the degree of the Nullstellensatz certificate given in Proposition 4.2 and the degree
of a minimal degree certificate can grow arbitrarily large, since Theorem 4.3 gives an
infinite family of graphs with degree zero Nullstellensatz certificates.
We analyze the time complexity of the NulLA algorithm when it is promised a connected bipartite graph with independent sets of unequal size and must return the result that the equations are infeasible. The algorithm first assumes that the polynomial equations have a Nullstellensatz certificate of degree zero, which we know from Theorem 4.3 to be true in this case. Letting $f_i = \sum_{j\in N(i)} x_{ij} - 1$ and $g^i_{jk} = x_{ij}x_{ik}$, the algorithm will try to find constants $\alpha_i$ and $\beta^i_{jk}$ such that
$$\sum_i \alpha_i f_i + \sum \beta^i_{jk}\, x_{ij}x_{ik} = 1.$$
However, we immediately see that $\beta^i_{jk} = 0$ for all $\{i,j\}, \{i,k\} \in E(G)$.
So we consider an augmented matrix $M|v$ with columns labeled by the constants $\alpha_i$, $\beta^i_{jk}$, and rows for each linear relation that will be implied among the constants, which we now determine. Each variable $x_{ij}$, $\{i,j\} \in E(G)$, appears as a linear term in exactly two polynomials: $f_i$ and $f_j$. We see that this imposes the relation $\alpha_i + \alpha_j = 0$ for $\{i,j\} \in E(G)$. Because of the $-1$ appearing in each $f_i$, we also have that $\sum_{i\in[n]} \alpha_i = -1$. Lastly, since each $g^i_{jk}$ has no monomial in common with $g^{i'}_{j'k'}$, there are no relations among the $\beta^i_{jk}$. So we see that the number of rows of $M$ is $|E(G)| + 1$.
The matrix $M$ can then be described as follows: if we restrict to the columns labeled by the $\alpha_i$, we get a copy of the incidence matrix of $G$ along with an extra row of all ones added to the bottom. The columns labeled by the $\beta^i_{jk}$ are all zero columns. The augmented column $v$ has a zero in every entry except the last, which contains a negative one.
The NulLA algorithm seeks to determine whether this linear system has a solution. We have a matrix with $|V(G)|$ nontrivial columns and $|E(G)| + 1 \ge |V(G)|$ rows, so this takes time $\Omega(|V(G)|^{\omega})$ to run (where $\omega > 2$ is the exponent of matrix multiplication, conjectured to approach 2 asymptotically [6]). However, two-coloring a graph and counting the sizes of the two independent sets can be done in time $O(|V(G)| + |E(G)|) = O(|V(G)|^2)$. So even in this best case scenario, the NulLA algorithm is not an optimal algorithm.
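For intuition, here is a small sketch (ours; the helper names are hypothetical) contrasting the two approaches on the promised instances. It builds the NulLA degree-zero linear system described above with NumPy and tests its consistency, and it also runs the direct linear-time two-coloring check; both return True exactly when the equations are infeasible, i.e., when a degree-zero certificate exists.

```python
import numpy as np
from collections import deque

def nulla_degree_zero_feasible(n, edges):
    """Build the augmented system M|v for the alpha_i (the beta columns are zero)
    and test whether it is consistent, i.e. whether a degree-0 certificate exists."""
    rows, rhs = [], []
    for (i, j) in edges:              # alpha_i + alpha_j = 0 for every edge
        row = np.zeros(n)
        row[i] = row[j] = 1.0
        rows.append(row)
        rhs.append(0.0)
    rows.append(np.ones(n))           # sum_i alpha_i = -1
    rhs.append(-1.0)
    M, v = np.array(rows), np.array(rhs)
    return np.linalg.matrix_rank(M) == np.linalg.matrix_rank(np.column_stack([M, v]))

def unbalanced_bipartition(n, edges):
    """O(|V| + |E|) check for a connected graph: two-color it and compare class sizes."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    color = [-1] * n
    color[0] = 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if color[w] == -1:
                color[w] = 1 - color[u]
                queue.append(w)
            elif color[w] == color[u]:
                return False              # odd cycle: graph is not bipartite
    return color.count(0) != color.count(1)  # unequal color classes

# Path on 3 vertices: bipartite with color classes of sizes 2 and 1.
edges = [(0, 1), (1, 2)]
print(nulla_degree_zero_feasible(3, edges), unbalanced_bipartition(3, edges))  # True True
```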
4.1. Nullstellensatz certificates for Odd Cliques. We now turn our attention
to another question inspired by Proposition 4.2. When is the Nullstellensatz certificate given in that theorem of minimal degree? Surprisingly, this turns out to be
the case for odd cliques. This is especially unappealing from an algorithmic standpoint as any graph with an odd number of vertices clearly cannot have a perfect
matching.
Throughout the rest of this section, we take $G = K_n$, for $n$ odd. To prove our result, we will work over the ring $R = \mathbb{C}[x_{ij} \mid i \neq j \in [n]]/I$ where $I$ is the ideal generated by the polynomials $x_{ij}^2 - x_{ij}$ and $x_{ij}x_{ik}$ for $\{i,j\}, \{i,k\} \in E(K_n)$. We will be doing linear algebra over this ring, as it is the coefficient polynomials $\Delta_i$ that we are most interested in. Furthermore, we note that adding the equations $x_{ij}^2 - x_{ij}$ does not increase the degree of the polynomials $\Delta_i$ in a certificate, and it is convenient to ignore square terms.
Working over $R$, we now want to find polynomials $\Delta_i$, $i \in [n]$, such that $\sum_{i\in[n]} \Delta_i \big(\sum_{j\in N(i)} x_{ij} - 1\big) = 1$. Our goal is to prove that each $\Delta_i$ has degree $\lfloor n/2 \rfloor$, which is the size of a maximum matching in $K_n$, for $n$ odd.
We already knew from Theorem 4.3 that any Nullstellensatz certificate for Kn
must be of degree at least one. For the proof of the statement, it will be convenient
to alter our notation. We now denote the variable $x_{ij}$ for $e = \{i,j\} \in E(K_n)$ as $x_e$. We will also write $\Delta_v$ for $v \in V(K_n)$.
Theorem 4.4. The Nullstellensatz certificate given in Proposition 4.2 is a minimal
degree certificate for Kn , n odd.
Proof. By Proposition 4.2 we know that there exists a Nullstellensatz certificate of degree $\lfloor n/2 \rfloor$. We work in the ring $R := \mathbb{C}[x_{ij} : i \neq j \in [n]]/I$, where $I$ is generated by the second set of equations in Equations 1. Let $\mathcal{M}$ be the set of matchings of $K_n$, and, for $M \in \mathcal{M}$, let $x^M = \prod_{e\in M} x_e$. Since we are working in $R$, by Lemma 3.3, we can write
$$\Delta_v = \sum_{M\in\mathcal{M}} \alpha_{v,M}\, x^M.$$
A Nullstellensatz certificate gives us that in $R$
$$\sum_{v\in V(K_n)} \Delta_v \Big(\sum_{e \ni v} x_e - 1\Big) = 1.$$
The coefficient of $x^M$ is given by
$$\sum_{v\in V(K_n)} -\alpha_{v,M} + \sum_{e\in M}\sum_{v\in e} \alpha_{v,M\setminus e},$$
which has to equal zero in a Nullstellensatz certificate if $|M| > 0$. Now, if there is a Nullstellensatz certificate of degree $l < \lfloor n/2 \rfloor$, then, if $|M| = l + 1$, we see that
$$R_M := \sum_{e=\{u,v\}\in M} \big(\alpha_{u,M\setminus e} + \alpha_{v,M\setminus e}\big) = 0,$$
and by edge transitivity of $G$, summing over these relations implies that
$$\sum_{v\in V(K_n)} \sum_{\substack{M\in\mathcal{M}\\ |M|=l}} \alpha_{v,M} = 0.$$
Furthermore, we have
$$0 = \sum_{\substack{M\in\mathcal{M}\\ |M|=l}}\Big(\sum_{v\in V(K_n)} -\alpha_{v,M} + \sum_{e\in M}\sum_{v\in e} \alpha_{v,M\setminus e}\Big) \;\Longrightarrow\; 0 = \sum_{\substack{M\in\mathcal{M}\\ |M|=l}}\sum_{e\in M}\sum_{v\in e} \alpha_{v,M\setminus e}.$$
Then summing over the linear relations in the second line above gives
$$0 = (l-1)\sum_{v\in V(K_n)} \sum_{\substack{M\in\mathcal{M}\\ |M|=l-1}} \alpha_{v,M}.$$
Repeating this, we obtain that
$$\sum_{v\in V} \alpha_{v,\emptyset} = 0,$$
which contradicts the assumption that the $\Delta_v$ give a Nullstellensatz certificate, as we must have
$$\sum_{v\in V} \alpha_{v,\emptyset} = -1.$$
Thus we can conclude that there is no Nullstellensatz certificate where all $\Delta_v$ have degree at most $\lfloor n/2 \rfloor - 1$ in $R$. But we know from Proposition 4.2 that there exists a Nullstellensatz certificate where each $\Delta_v$ has degree $\lfloor n/2 \rfloor$ and all other coefficient polynomials have degree at most $\lfloor n/2 \rfloor$. So this Nullstellensatz certificate is of minimal degree.
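As a concrete illustration of the degree bound (a worked example of our own, not taken from the text), for $K_3$ we have $\lfloor 3/2 \rfloor = 1$: a degree-zero certificate is impossible, and the symmetric degree-one choice $\Delta_i = -\tfrac{1}{3} - \tfrac{2}{3}x_{jk}$ (where $\{j,k\}$ is the edge opposite vertex $i$), together with coefficient $\tfrac{4}{3}$ on each quadratic, gives a valid certificate. The SymPy sketch below checks the identity.

```python
from sympy import symbols, Rational, expand

x12, x13, x23 = symbols("x12 x13 x23")

# Equations 1 for K_3: one linear polynomial per vertex, one quadratic per pair
# of edges sharing a vertex.
f = {1: x12 + x13 - 1, 2: x12 + x23 - 1, 3: x13 + x23 - 1}
quadratics = [x12 * x13, x12 * x23, x13 * x23]

# Degree-one coefficient polynomials: Delta_i = -1/3 - (2/3) * (edge opposite i).
opposite = {1: x23, 2: x13, 3: x12}
delta = {i: Rational(-1, 3) - Rational(2, 3) * opposite[i] for i in f}

certificate = sum(delta[i] * f[i] for i in f) + Rational(4, 3) * sum(quadratics)
print(expand(certificate))  # prints 1: a degree-one Nullstellensatz certificate for K_3
```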
So we see that using NulLA to determine whether a graph has a perfect matching using Equations 1 can be quite problematic. Since any graph with an odd number of vertices cannot have a perfect matching, the NulLA algorithm does a lot of work: for every $i \in [\lfloor n/2 \rfloor]$, it determines whether a system of linear equations in $\binom{a+i}{i}$ unknowns is feasible, where $a = \binom{n}{2}$ is the number of variables $x_{ij}$, so that $\binom{a+i}{i}$ counts the monomials in the variables $x_{ij}$ of degree at most $i$. However, the NulLA algorithm could be made smarter by having it reject any graph with an odd number of vertices before doing any linear algebra. This leads us to an open question:
Question 1. Is there a family of graphs, each with an even number of vertices, none of which have a perfect matching, such that the Nullstellensatz certificate given in Proposition 4.2 is of minimal degree?
We actually implemented the NulLA algorithm to try and find examples of graphs with high degree Nullstellensatz certificates for Equations 1. The only ones we found were graphs containing odd cliques. This leads us to wonder whether there are natural "bad graphs" for the degree of the Nullstellensatz certificate and whether their presence as a subgraph determines the minimal degree. Formally:
Question 2. Are there finitely many families of graphs G1 , . . . Gk such that the
degree of a minimal degree Nullstellensatz certificate for Equations 1 of a graph G
is determined by the largest subgraph of G contained in one of the families Gi ?
5. Conclusion
In tackling decision problems, it is often a natural idea to rephrase them in some
other area of mathematics and use algorithms from said area to see if performance
can be improved. The NulLA algorithm is inspired by the idea of rewriting combinatorial decision problems as systems of polynomials and then using Gröbner basis
algorithms from computational algebraic geometry to decide these problems quickly.
Amazingly, from a theoretical point of view, the rewriting of these problems
as polynomial systems is not just a change of language. Lots of combinatorial
data seems to come packaged with it. Throughout this paper, we have seen time
and again that simply trying to solve the decision problem in graph theory might
actually involve enumerating over many subgraphs. The theory of Nullstellensatz
certificates is fascinating for the amount of extra information one gets for free by
simply writing these problems as polynomial systems.
From an algorithmic viewpoint, our results suggest that one should be cautious
about using the NulLA algorithm as a practical tool. The NulLA algorithm always
finds a minimal degree certificate, and our theorems show that such certificates may
entail solving a harder problem than the one intended. However, we do not know
which minimal degree Nullstellensatz certificate will get chosen: maybe there are
others that are less problematic algorithmically.
Certainly, however, work should be done to understand which minimal degree
Nullstellensatz certificates will actually be found by the algorithm if there is to be
any hope in actual computational gains. We have analyzed the worst case scenario,
but it is unclear how often it will arise in practice.
Acknowledgments. We would like to thank Jeroen Zuiddam for coding a working
copy of the NulLA algorithm for our use. The research leading to these results
has received funding from the European Research Council under the European
Union’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement
No 339109.
References
[1] Noga Alon and M Tarsi. Combinatorial nullstellensatz. Combinatorics Probability and Computing, 8(1):7–30, 1999.
[2] Noga Alon and Michael Tarsi. Colorings and orientations of graphs. Combinatorica,
12(2):125–134, 1992.
[3] Gwénolé Ars, Jean-Charles Faugère, Hideki Imai, Mitsuru Kawazoe, and Makoto Sugita. Comparison between XL and Gröbner basis algorithms. In Advances in Cryptology – ASIACRYPT 2004, pages 338–353. Springer, 2004.
[4] W Dale Brownawell. Bounds for the degrees in the nullstellensatz. Annals of Mathematics,
126(3):577–591, 1987.
[5] Thomas Brylawski and James Oxley. The Tutte polynomial and its applications. Matroid
applications, 40:123–225, 1992.
[6] Don Coppersmith and Shmuel Winograd. On the asymptotic complexity of matrix multiplication. SIAM Journal on Computing, 11(3):472–492, 1982.
[7] Nicolas Courtois, Alexander Klimov, Jacques Patarin, and Adi Shamir. Efficient algorithms for solving overdefined systems of multivariate polynomial equations. In Advances
in Cryptology-EUROCRYPT 2000, pages 392–407. Springer, 2000.
[8] David Cox, John Little, and Donal O’shea. Ideals, varieties, and algorithms, volume 3.
Springer, 1992.
[9] Pierre de la Harpe and Vaughan Frederick Randal Jones. Graph invariants related to statistical mechanical models: examples and problems. Journal of Combinatorial Theory, Series
B, 57(2):207–227, 1993.
[10] Jesús A de Loera. Gröbner bases and graph colorings. Beiträge zur algebra und geometrie,
36(1):89–96, 1995.
[11] Jesús A De Loera, Jon Lee, Peter N Malkin, and Susan Margulies. Hilbert’s nullstellensatz
and an algorithm for proving combinatorial infeasibility. In Proceedings of the twenty-first
international symposium on Symbolic and algebraic computation, pages 197–206. ACM, 2008.
[12] Jesús A. De Loera, Jon Lee, Peter N. Malkin, and Susan Margulies. Computing infeasibility certificates for combinatorial problems through Hilbert's Nullstellensatz. Journal of Symbolic Computation, 46(11):1260–1283, 2011.
[13] Jesús A De Loera, Susan Margulies, Michael Pernpeintner, Eric Riedl, David Rolnick, Gwen
Spencer, Despina Stasi, and Jon Swenson. Graph-coloring ideals: Nullstellensatz certificates,
gröbner bases for chordal graphs, and hardness of gröbner bases. In Proceedings of the 2015
ACM on International Symposium on Symbolic and Algebraic Computation, pages 133–140.
ACM, 2015.
[14] Shalom Eliahou. An algebraic criterion for a graph to be four-colourable. 1992.
[15] János Kollár. Sharp effective nullstellensatz. Journal of the American Mathematical Society,
1(4):963–975, 1988.
[16] Martin Kreuzer and Lorenzo Robbiano. Computational Commutative Algebra 1. Springer-Verlag, Heidelberg, 2000.
[17] Jean Lasserre. Polynomials nonnegative on a grid and discrete optimization. Transactions of
the American Mathematical Society, 354(2):631–649, 2002.
[18] Monique Laurent. Semidefinite representations for finite varieties. Mathematical Programming, 109(1):1–26, 2007.
[19] Monique Laurent and Franz Rendl. Semidefinite programming and integer programming.
Handbooks in Operations Research and Management Science, 12:393–514, 2005.
[20] Shuo-Yen Robert Li et al. Independence numbers of graphs and generators of ideals. Combinatorica, 1(1):55–61, 1981.
[21] J. A. De Loera, J. Lee, S. Margulies, and S. Onn. Expressing combinatorial problems by systems of polynomial equations and Hilbert's Nullstellensatz. Combinatorics, Probability and Computing, 18(4):551–582, July 2009.
[22] László Lovász. Stable sets and polynomials. Discrete mathematics, 124(1):137–153, 1994.
[23] Susan Margulies. private communication, 2016.
[24] Susan Margulies, Shmuel Onn, and Dmitrii V Pasechnik. On the complexity of hilbert refutations for partition. Journal of Symbolic Computation, 66:70–83, 2015.
[25] Yuri Vladimirovich Matiyasevich. Some algebraic methods for calculation of the number of
colorings of a graph. Zapiski Nauchnykh Seminarov POMI, 283:193–205, 2001.
[26] James G Oxley. Matroid theory, volume 1997. Oxford university press Oxford, 1992.
[27] Pablo A Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical programming, 96(2):293–320, 2003.
[28] Aron Simis, Wolmer V Vasconcelos, and Rafael H Villarreal. On the ideal theory of graphs.
Journal of Algebra, 167(2):389–416, 1994.
[29] William T Tutte. A contribution to the theory of chromatic polynomials. Canad. J. Math,
6(80-91):3–4, 1954.
[30] Neil White. Combinatorial geometries, volume 29. Cambridge University Press, 1987.
Routing and Staffing when Servers are Strategic
Ragavendran Gopalakrishnan
Xerox Research Centre India, Bangalore, Karnataka 560103,
[email protected]
Sherwin Doroudi
Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA 15213,
[email protected]
Amy R. Ward
Marshall School of Business, University of Southern California, Los Angeles, CA 90089,
[email protected]
Adam Wierman
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125,
[email protected]
Traditionally, research focusing on the design of routing and staffing policies for service systems has modeled
servers as having fixed (possibly heterogeneous) service rates. However, service systems are generally staffed
by people. Furthermore, people respond to workload incentives; that is, how hard a person works can depend
both on how much work there is, and how the work is divided between the people responsible for it. In a
service system, the routing and staffing policies control such workload incentives; and so the rate servers
work will be impacted by the system’s routing and staffing policies. This observation has consequences when
modeling service system performance, and our objective in this paper is to investigate those consequences.
We do this in the context of the M /M /N queue, which is the canonical model for large service systems.
First, we present a model for “strategic” servers that choose their service rate in order to maximize a tradeoff between an “effort cost”, which captures the idea that servers exert more effort when working at a faster
rate, and a “value of idleness”, which assumes that servers value having idle time. Next, we characterize
the symmetric Nash equilibrium service rate under any routing policy that routes based on the server idle
time (such as the longest idle server first policy). We find that the system must operate in a quality-driven
regime, in which servers have idle time, in order for an equilibrium to exist. The implication is that to have
an equilibrium solution the staffing must have a first-order term that strictly exceeds that of the common
square-root staffing policy. Then, within the class of policies that admit an equilibrium, we (asymptotically)
solve the problem of minimizing the total cost, when there are linear staffing costs and linear waiting costs.
Finally, we end by exploring the question of whether routing policies that are based on the service rate,
instead of the server idle time, can improve system performance.
Key words: service systems; staffing; routing; scheduling; strategic servers
Subject classifications : Primary: Queues: applications, limit theorems; secondary: Games/group decisions:
noncooperative
1. Introduction. There is a broad and deep literature studying the scheduling and staffing of
service systems that bridges operations research, applied probability, and computer science. This
literature has had, and is continuing to have, a significant practical impact on the design of call
centers (see, for example, the survey papers [18] and [1]), health care systems (see, for example,
the recent book [29]), and large-scale computing systems (see, for example, the recent book [26]),
among other areas. Traditionally, this literature on scheduling and staffing has modeled the servers
of the system as having fixed (possibly heterogeneous) service rates and then, given these rates,
scheduling and staffing policies are proposed and analyzed. However, in reality, when the servers
are people, the rate a server chooses to work can be, and often is, impacted by the scheduling and
staffing policies used by the system.
For example, if requests are always scheduled to the “fastest” server whenever that server is
available, then this server may have the incentive to slow her rate to avoid being overloaded with
work. Similarly, if extra staff is always assigned to the division of a service system that is the
busiest, then servers may have the incentive to reduce their service rates in order to ensure their
division is assigned the extra staff. The previous two examples are simplistic; however, strategic
behavior has been observed in practice in service systems. For example, empirical data from call
centers shows many calls that last near 0 seconds [18]. This strategic behavior of the servers allowed
them to obtain “rest breaks” by hanging up on customers – a rather dramatic means of avoiding
being overloaded with work. For another example, academics are often guilty of strategic behavior
when reviewing for journals. It is rare for reviews to be submitted before an assigned deadline
since, if someone is known for reviewing papers very quickly, then they are likely to be assigned
more reviews by the editor.
Clearly, the strategic behavior illustrated by the preceding examples can have a significant impact
on the performance provided by a service system. One could implement a staffing or scheduling
policy that is provably optimal under classical scheduling models, where servers are nonstrategic,
and end up with far from optimal system performance as a result of undesirable strategic incentives
created by the policy. Consequently, it is crucial for service systems to be designed in a manner
that provides the proper incentives for such “strategic servers”.
In practice, there are two approaches used for creating the proper incentives for strategic servers:
one can either provide structured bonuses for employees depending on their job performance
(performance-based payments) or one can provide incentives in how scheduling and staffing is
performed that reward good job performance (incentive-aware scheduling). While there has been
considerable research on how to design performance-based payments in the operations management and economics communities; the incentives created by scheduling and staffing policies are
much less understood. In particular, the goal of this paper is to initiate the study of incentive-aware
scheduling and staffing policies for strategic servers.
The design of incentive-aware scheduling and staffing policies is important for a wide variety of
service systems. In particular, in many systems performance-based payments such as bonuses are
simply not possible, e.g., in service systems staffed by volunteers such as academic reviewing. Furthermore, many service systems do not use performance-based compensation schemes; for example,
the 2005 benchmark survey on call center agent compensation in the U.S. shows that a large
fraction of call centers pay a fixed hourly wage (and have no performance-based compensation) [3].
Even when performance-based payments are possible, the incentives created by scheduling
and staffing policies impact the performance of the service system, and thus impact the success
of performance-based payments. Further, since incentive-aware scheduling and staffing does not
involve monetary payments (beyond a fixed employee salary), it may be less expensive to provide
incentives through scheduling and staffing than through monetary bonuses. Additionally, providing
incentives through scheduling and staffing eliminates many concerns about “unfairness” that stem
from differential payments to employees.
Of course, the discussion above assumes that the incentives created by scheduling and staffing
can be significant enough to impact the behavior. A priori it is not clear if they are, since simply
changing the scheduling and staffing policies may not provide strong enough incentives to strategic servers to significantly change service rates, and thus system performance. It is exactly this
uncertainty that motivates the current paper, which seeks to understand the impact of the incentives created by scheduling and staffing, and then to design incentive-aware staffing and scheduling
policies that provide near-optimal system performance without the use of monetary incentives.
1.1. Contributions of this paper. This paper makes three main contributions. We introduce a new model for the strategic behavior of servers in large service systems and, additionally,
we initiate the study of staffing and routing in the context of strategic servers. Each of these
contributions is described in the following.
Modeling Strategic Servers (Sections 2 and 3): The essential first step for an analysis of strategic
servers is a model for server behavior that is simple enough to be analytically tractable and yet rich
enough to capture the salient influences on how each server may choose her service rate. Our model
is motivated by work in labor economics that identifies two main factors that impact the utility
of agents: effort cost and idleness. More specifically, it is common in labor economics to model
agents as having some “effort cost” function that models the decrease in utility which comes from
an increase in effort [12]. Additionally, it is a frequent empirical observation that agents in service
systems engage in strategic behavior to increase the amount of idle time they have [18]. The key
feature of the form of the utility we propose in Section 2 is that it captures the inherent trade-off
between idleness and effort. In particular, a faster service rate would mean quicker completion of
jobs and might result in a higher idle time, but it would also result in a higher effort cost.
In Section 3 of this paper, we apply our model in the context of a M /M /N system, analyzing
the first order condition, and provide a necessary and sufficient condition for a solution to the first
order condition to be a symmetric equilibrium service rate (Theorem 4). In addition, we discuss the
existence of solutions to the first order condition, and provide a sufficient condition for a unique
solution (Theorem 5). These results are necessary in order to study staffing and routing decisions,
as we do in Sections 4 and 5; however, it is important to note that the model is applicable more
generally as well.
Staffing Strategic Servers (Section 4): The second piece of the paper studies the impact strategic
servers have on staffing policies in multi-server service systems. The decision of a staffing level for
a service system has a crucial impact on the performance of the system. As such, there is a large
literature focusing on this question in the classical, nonstrategic, setting, and the optimal policy
is well understood. In particular, the number of servers that must be staffed to ensure stability
in a conventional M /M /N queue with arrival rate λ and fixed service rate µ should be strictly
larger than the offered load, λ/µ. However, when there are linear staffing and waiting costs, the
economically optimal number of servers to staff is larger. Specifically, the optimal policy staffs the offered load plus an additional number of servers on the order of its square root [8]. This results in efficient operation, because the system loading factor λ/(N µ) is close to one; and maintains quality of service, because the customer wait times are small (on the order of 1/√λ). Thus, this is often referred to as the Quality
and Efficiency Driven (QED) regime or as square-root staffing.
Our contribution in this paper is to initiate the study of staffing strategic servers. In the presence
of strategic servers, the offered load depends on the arrival rate, the staffing, and the routing,
through the servers’ choice of their service rate. We show that an equilibrium service rate exists
only if the number of servers staffed is order λ more than the aforementioned square-root staffing
(Theorem 7). In particular, the system must operate in a quality-driven regime, in which the servers
have idle time, instead of the quality-and-efficiency driven regime that arises under square-root
staffing, in which servers do not have idle time. Then, within the set of policies that admit an
equilibrium service rate, we (asymptotically) solve the problem of minimizing the total cost, when
there are linear staffing costs and linear waiting costs (Theorem 8).
Routing to Strategic Servers (Section 5): The final piece of this paper studies the impact of
strategic servers on the design of scheduling policies in multi-server service systems. When servers
are not strategic, how to schedule (dispatch) jobs to servers in multi-server systems is well understood. In particular, the most commonly proposed policies for this setting include Fastest Server
First (FSF), which dispatches arriving jobs to the idle server with the fastest service rate; Longest
Idle Server First (LISF), which dispatches jobs to the server that has been idle for the longest
period of time; and Random, which dispatches the job to each idle server with equal probability.
When strategic servers are not considered, FSF is the natural choice for reducing the mean response
time (though it is not optimal in general [16, 35]). However, in the context of strategic servers the
story changes. In particular, we prove that FSF has no symmetric equilibria when strategic servers
are considered, even when there are just two servers. Further, we prove that LISF, a commonly
suggested policy for call centers due to its fairness properties, has the same, unique, symmetric
equilibrium as random dispatching. In fact, we prove that there is a large policy-space collapse – all
routing policies that are idle-time-order-based are equivalent in a very strong sense (Theorem 9).
With this in mind, one might suggest that Slowest Server First (SSF) would be a good dispatch
policy, since it could incentivize servers to work fast; however, we prove that, like FSF, SSF has
no symmetric equilibria (Theorem 10). However, by “softening” SSF’s bias toward slow servers,
we are able to identify policies that are guaranteed to have a unique symmetric equilibrium and
provide mean response times that are smaller than that under LISF and Random (Theorem 11).
A key message provided by the results described above is that scheduling policies must carefully
balance two conflicting goals in the presence of strategic servers: making efficient use of the service
capacity (e.g., by sending work to fast servers) while still incentivizing servers to work fast (e.g.,
by sending work to slow servers). While these two goals are inherently in conflict, our results show
that it is possible to balance them in a way that provides improved performance over Random.
1.2. Related work. As we have already described, the question of how to route and staff in
many-server systems when servers have fixed, nonstrategic, service rates is well-studied. In general,
this is a very difficult question, because the routing depends on the staffing and vice versa. However,
when all the servers serve at the same rate, the routing question is moot. Then, [8] shows that
square-root staffing, first introduced in [17] and later formalized in [25], is economically optimal
when both staffing and waiting costs are linear. Furthermore, square root staffing is remarkably
robust: there is theoretical support for why it works so well for systems of moderate size [30], and it
continues to be economically optimal both when abandonment is added to the M /M /N model [19]
and when there is uncertainty in the arrival rate [33]. Hence, to study the joint routing and staffing
question for more complex systems, that include heterogeneous servers that serve at different rates
and heterogeneous customers, many authors have assumed square root staffing and show how to
optimize the routing for various objective functions (see, for example, [4, 23, 6, 40, 41]). In relation
to this body of work, this paper shows that scheduling and routing results for classical many-server
systems that assume fixed service rates must be revisited when servers exhibit strategic behavior.
This is because they may not admit a symmetric equilibrium service rate in the case of square-root
staffing (see Section 4) or be feasible in the case of Fastest Server First routing (see Section 5).
Importantly, the Fastest Server First routing policy mentioned earlier has already been recognized to be potentially problematic because it may be perceived as “unfair”. The issue from an
operational standpoint is that there is strong indication in the human resource management literature that the perception of fairness affects employee performance [15, 14]. This has motivated the
analysis of “fair” routing policies that, for example, equalize the cumulative server idleness [7, 38],
and the desire to find an optimal “fair” routing policy [5, 42]. Another approach is to formulate
a model in which the servers choose their service rate in order to balance their desire for idle
time (which is obtained by working faster) and the exertion required to serve faster. This leads
to a non-cooperative game for a M /M /N queue in which the servers act as strategic players that
selfishly maximize their utility.
Finally, the literature that is, perhaps, most closely related to the current paper is the literature
on queueing games, which is surveyed in [28]. The bulk of this literature focuses on the impact of
customers acting strategically (e.g., deciding whether to join and which queue to join) on queueing
performance. Still, there is a body of work within this literature that considers settings where
servers can choose their service rate, e.g., [31, 21, 10, 11]. However, in all of the aforementioned
papers, there are two servers that derive utility from some monetary compensation per job or per
unit of service that they provide, and there are no staffing decisions. In contrast, our work considers
systems with more than two servers, and considers servers that derive utility from idle time (and
have a cost of effort). The idea that servers value idle time is most similar to the setting in [20],
but that paper restricts its analysis to a two server model. Perhaps the closest previous work to the
current paper in analysis spirit is [2], which characterizes approximate equilibria in a market with
many servers that compete on price and service level. However, this is similar in theme to [31, 10]
in the sense that they consider servers as competing firms in a market. This contrasts with the
current paper, where our focus is on competition between servers within the same firm.
2. A model for strategic servers. The objective of this paper is to initiate an investigation
into the effects of strategic servers on classical management decisions in service systems, e.g.,
staffing and routing. We start by, in this section, describing formally our model for the behavior
of a strategic server.
The term “strategic server” could be interpreted in many ways depending on the server’s goal.
Thus, the key feature of the model is the utility function for a strategic server. Our motivation comes
from a service system staffed by people who are paid a fixed wage, independent of performance.
In such settings, one may expect two key factors to have a first-order impact on the experience of
the servers: the amount of effort they put forth and the amount of idle time they have.
Thus, a first-order model for the utility of a strategic server is to linearly combine the cost of
effort with the idle time of the server. This gives the following form for the utility of server i in a
service system with N servers:
Ui (µ) = Ii (µ) − c(µi ), i ∈ {1, . . . , N },
(1)
where µ is a vector of the rate of work chosen by each server (i.e., the service rate vector), Ii (µ)
is the time-average idle time experienced by server i given the service rate vector µ, and c(µi ) is
the effort cost of server i. We take c to be an increasing, convex function which is the same for
all servers. We assume that the strategic behavior of servers (choosing a utility-maximizing service
rate) is independent of the state of the system and that the server has complete information about
the steady state properties of the system when choosing a rate, i.e., they know the arrival rate,
scheduling policy, staffing policy, etc., and thus can optimize Ui (µ).
The key feature of the form of the utility in (1) is that it captures the inherent trade-off between
idleness and effort. The idleness, and hence the utility, is a steady state quantity. In particular, a
faster service rate would mean quicker completion of jobs and might result in higher idle time in
steady state, but it would also result in a higher effort cost. This trade-off then creates a difficult
challenge for staffing and routing in a service system. To increase throughput and decrease response
times, one would like to route requests to the fastest servers, but by doing so the utility of servers
decreases, making it less desirable to maintain a fast service rate. Our model should be interpreted
as providing insight into the systemic incentives created by scheduling and staffing policies rather
than the transitive incentives created by the stochastic behavior of the system.
Our focus in this paper will be to explore the consequences of strategic servers for staffing and
routing in large service systems, specifically, in the M /M /N setting. However, the model is generic
and can be studied in non-queueing contexts as well.
To quickly illustrate the issues created by strategic servers, a useful example to consider is that
of a M /M /1 queue with a strategic server.
Example 1 (The M/M/1 queue with a strategic server). In a classic M /M /1 system, jobs arrive at rate λ into a queue with an infinite buffer, where they wait to obtain service
from a single server having fixed service rate µ. When the server is strategic, instead of serving at
a fixed rate µ, the server chooses her service rate µ > λ in order to maximize the utility in (1). To
understand what service rate will emerge, recall that in a M /M /1 queue with µ > λ the steady
state fraction of time that the server is idle is given by $I(\mu) = 1 - \lambda/\mu$. Substituting this expression into (1) means that the utility of the server is given by the following concave function:
$$U(\mu) = 1 - \frac{\lambda}{\mu} - c(\mu).$$
We now have two possible scenarios. First, suppose that c′ (λ) < 1/λ, so that the cost function
does not increase too fast. Then, U (µ) attains a maximum in (λ, ∞) at a unique point µ⋆ , which is
the optimal (utility maximizing) operating point for the strategic server. Thus, a stable operating
point emerges, and the performance of this operating point can be derived explicitly when a specific
form of a cost function is considered.
On the other hand, if c′ (λ) ≥ 1/λ, then U (µ) is strictly decreasing in (λ, ∞) and hence does not
attain a maximum in this interval. We interpret this case to mean that the server’s inherent skill
level (as indicated by the cost function) is such that the server must work extremely hard just to
stabilize the system, and therefore should not have been hired in the first place.
For example, consider the class of cost functions $c(\mu) = c_E\,\mu^p$. If $c(\lambda) < \frac{1}{p}$, then $\mu^\star$ solves $\mu^\star c(\mu^\star) = \frac{\lambda}{p}$, which gives $\mu^\star = \big(\frac{\lambda}{c_E\, p}\big)^{\frac{1}{p+1}} > \lambda$. On the other hand, if $c(\lambda) \ge \frac{1}{p}$, then $U(\mu)$ is strictly decreasing in $(\lambda, \infty)$ and hence does not attain a maximum in this interval.
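As a quick numerical illustration (a sketch of ours, not part of the original example; the parameter values are arbitrary), the following snippet evaluates the closed form for $\mu^\star$ under a power cost and confirms on a grid that it maximizes $U(\mu) = 1 - \lambda/\mu - c_E\mu^p$ over $(\lambda, \infty)$.

```python
import numpy as np

lam, c_E, p = 1.0, 0.1, 2.0          # example parameters with c(lam) = 0.1 < 1/p = 0.5

def utility(mu):
    return 1.0 - lam / mu - c_E * mu ** p

mu_star = (lam / (c_E * p)) ** (1.0 / (p + 1.0))   # closed-form maximizer
grid = np.linspace(lam * 1.001, 10.0, 100_000)      # brute-force search over (lam, 10]
mu_grid = grid[np.argmax(utility(grid))]

print(f"closed form mu* = {mu_star:.4f}, grid maximizer = {mu_grid:.4f}")
print(f"mu* > lam: {mu_star > lam}, U(mu*) = {utility(mu_star):.4f}")
```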
Before moving on to the analysis of the M /M /N model with strategic servers, it is important to
point out that the model we study focuses on a linear trade-off between idleness and effort. There
are certainly many generalizations that are interesting to study in future work. One particularly
interesting generalization would be to consider a concave (and increasing) function of idle time in
the utility function, since it is natural that the gain from improving idle time from 10% to 20%
would be larger than the gain from improving idle time from 80% to 90%. A preliminary analysis
highlights that the results in this paper would not qualitatively change in this context.1
3. The M /M /N queue with strategic servers. Our focus in this paper is on the staffing
and routing decisions in large service systems, and so we adopt a classical model of this setting, the
M /M /N , and adjust it by considering strategic servers, as described in Section 2. The analysis of
staffing and routing policies is addressed in Sections 4 and 5, but before moving to such questions,
we start by formally introducing the M /M /N model, and performing some preliminary analysis
that is useful both in the context of staffing and routing.
3.1. Model and notation. In a M /M /N queue, customers arrive to a service system having
N servers according to a Poisson process with rate λ. Delayed customers (those that arrive to find
all servers busy) are served according to the First In First Out (FIFO) discipline. Each server is
fully capable of handling any customer’s service requirements. The time required to serve each
customer is independent and exponential, and has a mean of one time unit when the server works
at rate one. However, each server strategically chooses her service rate to maximize her own (steady
state) utility, and so it is not a priori clear what the system service rates will be.
1
Specifically, if g(Ii (µ)) replaces Ii (µ) in (1), all the results in Section 3 characterizing equilibria service rates
are maintained so long as g ′′′ < 0, except for Theorem 5, whose sufficient condition would have to be adjusted to
accommodate g. In addition, our results could be made stronger depending on the specific form of g. For example, if
g is such that limµi →µi + Ui (µ) = −∞, then, a preliminary analysis reveals that it would not be necessary to impose
the stability constraint µi > λ/N exogenously. Moreover, every solution to the symmetric first order condition (9)
would be a symmetric equilibrium (i.e., the sufficient condition of Theorem 4 as generalized for this case by Footnote
2 would automatically be satisfied).
In this setting, the utility functions that the servers seek to maximize are given by
Ui (µ; λ, N, R) = Ii (µ; λ, N, R) − c(µi),
i ∈ {1, . . . , N },
(2)
where µ is the vector of service rates, λ is the arrival rate, N is the number of servers (staffing
level), and R is the routing policy. Ii (µ; λ, N, R) is the steady state fraction of time that server i
is idle. c(µ) is an increasing, convex function with c′′′ (µ) ≥ 0, that represents the server effort cost.
Note that, as compared with (1), we have emphasized the dependence on the arrival rate λ,
staffing level N , and routing policy of the system, R. In the remainder of this article, we expose or
suppress the dependence on these additional parameters as relevant to the discussion. In particular,
note that the idle time fraction Ii (and hence, the utility function Ui ) in (2) depends on how
arriving customers are routed to the individual servers.
There are a variety of routing policies that are feasible for the system manager. In general, the
system manager may use information about the order in which the servers became idle, the rates
at which servers have been working, etc. This leads to the possibility of using simple policies such
as Random, which chooses an idle server to route to uniformly at random, as well as more complex
policies such as Longest/Shortest Idle Server First (LISF/SISF) and Fastest/Slowest Server First
(FSF/SSF). We study the impact of this decision in detail in Section 5.
Given the routing policy chosen by the system manager and the form of the server utilities
in (2), the situation that emerges is a competition among the servers for the system idle time. In
particular, the routing policy yields a division of idle time among the servers, and both the division
and the amount of idle time will depend on the service rates chosen by the servers.
As a result, the servers can be modeled as strategic players in a noncooperative game, and thus
the operating point of the system is naturally modeled as an equilibrium of this game. In particular,
a Nash equilibrium of this game is a set of service rates µ⋆ , such that,
$$U_i(\mu_i^\star, \mu_{-i}^\star; R) \;=\; \max_{\mu_i > \lambda/N}\; U_i(\mu_i, \mu_{-i}^\star; R), \qquad (3)$$
where µ⋆−i = (µ⋆1 , . . . , µ⋆i−1 , µ⋆i+1 , . . . , µ⋆N ) denotes the vector of service rates of all the servers except
server i. Note that we exogenously impose the (symmetric) constraint that each server must work
at a rate strictly greater than λ/N in order to define a product action space that ensures the stability
of the system.2 Such a constraint is necessary to allow steady state analysis, and does not eliminate any feasible symmetric equilibria. We treat this bound as exogenously fixed, however in some
situations a system manager may wish to impose quality standards on servers, which would correspond to imposing a larger lower bound (likely with correspondingly larger payments for servers).
Investigating the impact of such quality standards is an interesting topic for future work.
Our focus in this paper is on symmetric Nash equilibria. With a slight abuse of notation, we
say that µ⋆ is a symmetric Nash equilibrium if µ⋆ = (µ⋆ , . . . , µ⋆ ) is a Nash equilibrium (solves (3)).
Throughout, the term “equilibrium service rate” means a symmetric Nash equilibrium service rate.
We focus on symmetric Nash equilibria for two reasons. First, because the agents we model
intrinsically have the same skill level (as quantified by the effort cost functions), a symmetric
² One can imagine that servers, despite being strategic, would endogenously stabilize the system. To test this, one could study a related game where the action sets of the servers are (0, ∞). Then, the definition of the idle time Ii(µ) must be extended into the range of µ for which the system is overloaded; a natural way to do so is to define it to be zero in this range, which would ensure continuity at µ for which the system is critically loaded. However, it is not differentiable there, which necessitates a careful piecewise analysis. A preliminary analysis indicates that in this scenario, no µ ∈ (0, λ/N] can ever be a symmetric equilibrium, and then, the necessary and sufficient condition of Theorem 4 would become U(µ⋆, µ⋆) ≥ lim_{µ1→0+} U(µ1, µ⋆), which is more demanding than (10) (e.g., it imposes a finite upper bound on µ⋆), but not so much so that it disrupts the staffing results that rely on this theorem (e.g., Lemma 1 still holds).
equilibrium corresponds to a fair outcome. As we have already discussed, this sort of fairness is often
crucial in service organizations [15, 14, 5]. A second reason for focusing on symmetric equilibria is
that analyzing symmetric equilibria is already technically challenging, and it is not clear how to
approach asymmetric equilibria in the contexts that we consider. Note that we do not rule out the
existence of asymmetric equilibria; in fact, they likely exist, and it would be interesting to study
whether they lead to better or worse system performance than their symmetric counterparts.
3.2. The M /M /N queue with strategic servers and Random routing. Before analyzing staffing and routing in detail, we first study the M /M /N queue with strategic servers
and Random routing. We focus on Random routing first because it is, perhaps, the most commonly studied policy in the classical literature on nonstrategic servers. Further, this importance is
magnified by a new “policy-space collapse” result included in Section 5.1.1, which shows that all
idle-time-order-based routing policies (e.g., LISF and SISF) have equivalent steady state behavior,
and thus have the same steady state behavior as Random routing. We stress that this result stands
on its own in the classical, nonstrategic setting of a M /M /N queue with heterogeneous service
rates, but is also crucial to analyze routing to strategic servers (Section 5).
The key goal in analyzing a queueing system with strategic servers is to understand the equilibria
service rates, i.e., show conditions that guarantee their existence and characterize the equilibria
when they exist. Theorems 4 and 5 of Section 3.2.2 summarize these results for the M /M /N queue
with Random routing. However, in order to obtain such results we must first characterize the idle
time in a M /M /N system in order to be able to understand the “best responses” for servers, and
thus analyze their equilibrium behavior. Such an analysis is the focus of Section 3.2.1.
3.2.1. The idle time of a tagged server. In order to characterize the equilibria service
rates, a key first step is to understand the idle time of a M /M /N queue. This is, of course, a
well-studied model, and so one might expect to be able to use off-the-shelf results. While this is
true when the servers are homogeneous (i.e., all the server rates are the same), for heterogeneous
systems, closed form expressions are challenging to obtain in general, and the resulting forms are
quite complicated [22].
To characterize equilibria, we do need to understand the idle time of heterogeneous M /M /N
queues. However, due to our focus on symmetric equilibria, we only need to understand a particular,
mild, form of heterogeneity. In particular, we need only understand the best response function for
a “deviating server” when all other servers have the same service rate. Given this limited form of
heterogeneity, the form of the idle time function simplifies, but still remains quite complicated, as
the following theorem shows.
Theorem 1. Consider a heterogeneous M/M/N system with Random routing and arrival rate λ > 0, where N − 1 servers operate at rate µ > λ/N, and a tagged server operates at rate $\mu_1 > \underline{\mu}_1 = (\lambda - (N-1)\mu)^{+}$. The steady state probability that the tagged server is idle is given by:
$$I(\mu_1, \mu; \lambda, N) \;=\; \left[\,1 \;+\; \frac{\lambda}{\mu_1}\left(\frac{1 - \mathrm{ErlC}(N,\rho)}{N-\rho} \;+\; \frac{\mathrm{ErlC}(N,\rho)}{N - \rho - 1 + \frac{\mu_1}{\mu}}\right)\right]^{-1},$$
where ρ = λ/µ, and ErlC(N, ρ) denotes the Erlang C formula, given by:
$$\mathrm{ErlC}(N,\rho) \;=\; \frac{\frac{\rho^{N}}{N!}\,\frac{N}{N-\rho}}{\sum_{j=0}^{N-1}\frac{\rho^{j}}{j!} \;+\; \frac{\rho^{N}}{N!}\,\frac{N}{N-\rho}}. \qquad (4)$$
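To make the formula concrete, here is a small numerical sketch (ours; it assumes the expression for I reconstructed above) that evaluates ErlC(N, ρ) and I(µ1, µ; λ, N), and checks the homogeneous sanity condition I(µ, µ; λ, N) = 1 − ρ/N.

```python
import math

def erlang_c(N, rho):
    """Erlang C formula ErlC(N, rho) from equation (4); requires rho < N."""
    tail = (rho ** N / math.factorial(N)) * N / (N - rho)
    head = sum(rho ** j / math.factorial(j) for j in range(N))
    return tail / (head + tail)

def idle_prob(mu1, mu, lam, N):
    """Steady-state idle probability of the tagged server (Theorem 1, as reconstructed)."""
    rho = lam / mu
    C = erlang_c(N, rho)
    inner = (1.0 - C) / (N - rho) + C / (N - rho - 1.0 + mu1 / mu)
    return 1.0 / (1.0 + (lam / mu1) * inner)

lam, mu, N = 10.0, 1.0, 20
rho = lam / mu
print(abs(idle_prob(mu, mu, lam, N) - (1.0 - rho / N)) < 1e-12)  # homogeneous check: True
print(idle_prob(1.5 * mu, mu, lam, N))   # a faster tagged server is idle more often
print(idle_prob(0.8 * mu, mu, lam, N))   # a slower tagged server is idle less often
```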
In order to understand this idle time function more, we derive expressions for the first two
derivatives of I with respect to µ1 in the following theorem. These results are crucial to the analysis
of equilibrium behavior.
Theorem 2. The first two partial derivatives of I with respect to µ1 are given by
$$\frac{\partial I}{\partial \mu_1} \;=\; \frac{I^{2}\lambda}{\mu_1^{2}}\left[\frac{1-\mathrm{ErlC}(N,\rho)}{N-\rho} \;+\; \frac{\mathrm{ErlC}(N,\rho)\left(N-\rho-1+\frac{2\mu_1}{\mu}\right)}{\left(N-\rho-1+\frac{\mu_1}{\mu}\right)^{2}}\right] \qquad (5)$$
$$\frac{\partial^{2} I}{\partial \mu_1^{2}} \;=\; \frac{2}{I}\left(\frac{\partial I}{\partial \mu_1}\right)^{2} - \frac{2I^{2}\lambda}{\mu_1^{3}}\left[\frac{1-\mathrm{ErlC}(N,\rho)}{N-\rho} + \frac{\mathrm{ErlC}(N,\rho)\left(N-\rho-1+\frac{2\mu_1}{\mu}\right)}{\left(N-\rho-1+\frac{\mu_1}{\mu}\right)^{2}} + \frac{\mathrm{ErlC}(N,\rho)\,\frac{\mu_1^{2}}{\mu^{2}}}{\left(N-\rho-1+\frac{\mu_1}{\mu}\right)^{3}}\right] \qquad (6)$$
where $I$ abbreviates $I(\mu_1, \mu; \lambda, N)$.
Importantly, it can be shown that the right hand side of (5) is always positive, and therefore, the
idle time is increasing in the service rate µ1 , as expected. However, it is not clear through inspection
of (6) whether the second derivative is positive or negative. Our next theorem characterizes the
second derivative, showing that the idle time could be convex at $\mu_1 = \underline{\mu}_1$ to begin with, but if so,
then as µ1 increases, it steadily becomes less convex, and is eventually concave. This behavior adds
considerable complication to the equilibrium analysis.
Theorem 3. The second derivative of the idle time satisfies the following properties:
(a) There exists a threshold $\mu_1^{\dagger} \in [\underline{\mu}_1, \infty)$ such that $\frac{\partial^2 I}{\partial \mu_1^2} > 0$ for $\underline{\mu}_1 < \mu_1 < \mu_1^{\dagger}$, and $\frac{\partial^2 I}{\partial \mu_1^2} < 0$ for $\mu_1^{\dagger} < \mu_1 < \infty$.
(b) $\frac{\partial^2 I}{\partial \mu_1^2} > 0 \;\Rightarrow\; \frac{\partial^3 I}{\partial \mu_1^3} < 0$.
We remark that it is possible that the threshold $\mu_1^{\dagger}$ could be greater than λ/N, so restricting the service rate of server 1 to be greater than λ/N does not necessarily simplify the analysis.
3.2.2. Symmetric equilibrium analysis for a finite system. The properties of the idle
time function derived in the previous section provide the key tools we need to characterize the
symmetric equilibria service rates under Random routing for a M /M /N system.
To characterize the symmetric equilibria, we consider the utility of a tagged server, without loss
of generality, server 1, under the mildly heterogeneous setup of Theorem 1. We denote it by
$$U(\mu_1, \mu; \lambda, N) = I(\mu_1, \mu; \lambda, N) - c(\mu_1) \qquad (7)$$
For a symmetric equilibrium in (λ/N, ∞), we explore the first order and second order conditions for U as a function of µ1 to have a maximum in $(\underline{\mu}_1, \infty)$.
The first order condition for an interior local maximum at µ1 is given by:
$$\frac{\partial U}{\partial \mu_1} = 0 \;\Longrightarrow\; \frac{\partial I}{\partial \mu_1} = c'(\mu_1) \qquad (8)$$
Since we are interested in a symmetric equilibrium, we analyze the symmetric first order condition,
obtained by plugging in µ1 = µ in (8):
$$\left.\frac{\partial U}{\partial \mu_1}\right|_{\mu_1 = \mu} = 0 \;\Longrightarrow\; \frac{\lambda}{N^{2}\mu^{2}}\left(N - \frac{\lambda}{\mu} + \mathrm{ErlC}\!\left(N, \frac{\lambda}{\mu}\right)\right) = c'(\mu) \qquad (9)$$
Now, suppose that µ⋆ > λ/N satisfies the symmetric first order condition (9). Then, µ1 = µ⋆ is a stationary point of U(µ1, µ⋆). It follows then that µ⋆ will be a symmetric equilibrium for the servers (satisfying (3)) if and only if U(µ1, µ⋆) attains a global maximum at µ1 = µ⋆ in the interval (λ/N, ∞). While an obvious necessary condition for this is that U(µ⋆, µ⋆) ≥ U(λ/N, µ⋆), we show,
perhaps surprisingly, that it is also sufficient, in the following theorem.
Theorem 4. µ⋆ > λ/N is a symmetric equilibrium if and only if it satisfies the symmetric first order condition (9), and the inequality U(µ⋆, µ⋆) ≥ U(λ/N, µ⋆), i.e.,
$$c(\mu^\star) \;\le\; c\!\left(\frac{\lambda}{N}\right) + 1 - \frac{\rho}{N} - \left[\,1 + \left(1-\frac{\rho}{N}\right)^{-1}\!\left(1 + \frac{\mathrm{ErlC}(N,\rho)}{N-1}\right)\right]^{-1}, \qquad (10)$$
where $\rho = \lambda/\mu^\star$.
Finally, we need to understand when the symmetric first order condition (9) admits a feasible
solution µ⋆ > λ/N. Towards that, we present sufficient conditions for at least one feasible solution,
as well as for a unique feasible solution.
Theorem 5. If $c'\!\left(\frac{\lambda}{N}\right) < \frac{1}{\lambda}$, then the symmetric first order condition (9) has at least one solution for µ in $\left(\frac{\lambda}{N}, \infty\right)$. In addition, if $2\,\frac{\lambda}{N}\,c'\!\left(\frac{\lambda}{N}\right) + \left(\frac{\lambda}{N}\right)^{2} c''\!\left(\frac{\lambda}{N}\right) \ge 1$, then the symmetric first order condition (9) has a unique solution for µ in $\left(\frac{\lambda}{N}, \infty\right)$.
In the numerical results that follow, we see instances of zero, one, and two equilibria.3 Interestingly, when more than one equilibrium exists, the equilibrium with the largest service rate,
which leads to best system performance, also leads to highest server utility, and hence is also most
preferred by the servers, as the following theorem shows.
Theorem 6. If the symmetric first order condition (9) has two solutions, say $\mu_1^\star$ and $\mu_2^\star$, with $\mu_1^\star > \mu_2^\star > \lambda/N$, then $U(\mu_1^\star, \mu_1^\star) > U(\mu_2^\star, \mu_2^\star)$.
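Before turning to the plots, here is a short sketch of ours showing how equilibrium candidates can be computed in practice: it scans (λ/N, ∞) for roots of the symmetric first order condition (9) and keeps those satisfying the inequality of Theorem 4 (condition (10)), using the idle-time expression reconstructed in Theorem 1. The helper names and the quadratic cost c(µ) = µ² are illustrative choices, not part of the original text.

```python
import math
import numpy as np

def erlang_c(N, rho):
    tail = (rho ** N / math.factorial(N)) * N / (N - rho)
    return tail / (sum(rho ** j / math.factorial(j) for j in range(N)) + tail)

def idle_prob(mu1, mu, lam, N):
    rho, C = lam / mu, erlang_c(N, lam / mu)
    return 1.0 / (1.0 + (lam / mu1) * ((1 - C) / (N - rho) + C / (N - rho - 1 + mu1 / mu)))

def foc_residual(mu, lam, N, c_prime):
    """Left side minus right side of the symmetric first order condition (9)."""
    rho = lam / mu
    return (lam / (N ** 2 * mu ** 2)) * (N - rho + erlang_c(N, rho)) - c_prime(mu)

def symmetric_equilibria(lam, N, c, c_prime, mu_max=10.0, grid_size=20_000):
    """Roots of (9) on (lam/N, mu_max) that also satisfy condition (10) of Theorem 4."""
    mu_lo = lam / N
    grid = np.linspace(mu_lo * (1 + 1e-6), mu_max, grid_size)
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        fa, fb = foc_residual(a, lam, N, c_prime), foc_residual(b, lam, N, c_prime)
        if fa * fb < 0:                                # bracketed root: bisect it
            for _ in range(80):
                m = 0.5 * (a + b)
                if fa * foc_residual(m, lam, N, c_prime) <= 0:
                    b = m
                else:
                    a, fa = m, foc_residual(m, lam, N, c_prime)
            roots.append(0.5 * (a + b))
    U = lambda mu1, mu: idle_prob(mu1, mu, lam, N) - c(mu1)   # deviating-server utility
    return [m for m in roots if U(m, m) >= U(mu_lo, m)]       # keep roots satisfying (10)

print(symmetric_equilibria(2.0, 10, c=lambda m: m ** 2, c_prime=lambda m: 2 * m))
```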
3.3. Numerical examples. Because of the complexity of the expression for the equilibrium
service rate(s) given by the first order condition (9) and the possibility of multiple equilibria, we
discuss a few numerical examples here in order to provide intuition. In addition, we point out some
interesting characteristics that emerge as a consequence of strategic server behavior.
We present two sets of graphs below: one that varies the arrival rate λ while holding the staffing
level fixed at N = 20 (Figure 1), and one that varies the staffing level N while holding the arrival
rate fixed at λ = 2 (Figure 2). In each set, we plot the following two equilibrium quantities: (a)
service rates, and (b) mean steady state waiting times. Note that the graphs in Figure 2 only show
data points corresponding to integer values of N ; the thin line through these points is only meant
as a visual tool that helps bring out the pattern. Each of the four graphs shows data for three
different effort cost functions: c(µ) = µ, c(µ) = µ2 , and c(µ) = µ3 , which are depicted in red, blue,
and green respectively. The data points in Figure 2 marked × and ⋄ correspond to special staffing
levels N^{ao,2} and N^{opt,2} respectively, which are introduced later, in Section 4.
The first observation we make is that there are at most two equilibria. Further, for large enough
values of the minimum service rate λ/N, there is no equilibrium. (In Figure 1(a) where N is fixed, this happens for large λ, and in Figure 2(a) where λ is fixed, this happens for small N.) On the other hand, when the minimum service rate λ/N is small enough, there is a unique equilibrium; for this range, even if the symmetric first order condition (9) has another solution greater than λ/N, it fails to satisfy (10). If an intermediate value of λ/N is small enough for (9) to have two feasible
solutions, but not too small so that both solutions satisfy (10), then there are two equilibria.
The second observation we make is that the two equilibria have very different behaviors. As
illustrated in Figure 1(a), the larger equilibrium service rate first increases and then decreases while
³ In general, the symmetric first order condition (9) can be rewritten as
$$\mu^{2} c'(\mu) + \frac{\lambda}{N^{2}}\big(\rho - \mathrm{ErlC}(N,\rho)\big) - \frac{\lambda}{N} = 0.$$
Note that, when the term ρ − ErlC(N, ρ) is convex in µ, it follows that the left hand side of the above equation is
also convex in µ, which implies that there are at most two symmetric equilibria.
[Figure 1 shows two panels: (a) equilibrium service rates µ⋆ and (b) mean steady state waiting times (log10 scale), each plotted against the arrival rate λ.]
Figure 1. Equilibrium behavior as a function of the arrival rate when the staffing level is fixed at N = 20, for three different effort cost functions: linear, quadratic, and cubic. The dotted line in (a) is µ = λ/N = λ/20.
[Figure 2 shows two panels: (a) equilibrium service rates µ⋆ and (b) mean steady state waiting times (log10 scale), each plotted against the staffing level N.]
Figure 2. Equilibrium behavior as a function of the staffing level when the arrival rate is fixed at λ = 2, for three different effort cost functions: linear, quadratic, and cubic. The dotted curve in (a) is µ = λ/N = 2/N. The data points marked × and ⋄ correspond to N^{ao,2} and N^{opt,2} respectively.
the corresponding mean steady state waiting time in Figure 1(b) steadily increases. In contrast, as
the smaller equilibrium service rate increases, the corresponding mean steady state waiting time
decreases. The relationship between the equilibrium service rates and waiting times is similarly
inconsistent in Figure 2. This behavior is not consistent with results from classical, nonstrategic
models, and could serve as a starting point for explaining empirical observations that are also not
consistent with classical, nonstrategic models. For example, the non-monotonicity of service rate
in workload is consistent with behavior observed in a hospital setting in [32].
4. Staffing strategic servers. One of the most studied questions for the design of service
systems is staffing. Specifically, how many servers should be used for a given arrival rate. In the
classical, nonstrategic setting, this question is well understood. In particular, as mentioned in the
introduction, square-root staffing is known to be optimal when there are linear staffing and waiting
costs [8].
In contrast, there is no previous work studying staffing in the context of strategic servers. The
goal of this section is to initiate the study of the impact that strategic servers have on staffing. To
get a feeling for the issues involved, consider a system with arrival rate λ and two possible staffing
policies: N1 = λ and N2 = 2λ, where Ni is the number of servers staffed under policy i given arrival
rate λ. Under N1 , if the servers work at any rate slightly larger than 1, then they will have almost
no idle time, and so they will have incentive to work harder. However, if servers are added, so that
the provisioning is as in N2 , then servers will have plentiful idle time when working at rate 1, and
thus not have incentive to work harder. Thus, the staffing level has a fundamental impact on the
incentives of the servers.
The above highlights that one should expect significant differences in staffing when strategic
servers are considered. In particular, the key issue is that the staffing level itself creates incentives
for the servers to speed up or slow down, because it influences the balance between effort and idle
time. Thus, the policies that are optimal in the nonstrategic setting are likely suboptimal in the
strategic setting, and vice versa.
The goal of the analysis in this section is to find the staffing level that minimizes costs when the
system manager incurs linear staffing and waiting costs, within the class of policies that admit a
symmetric equilibrium service rate. However, the analysis in the previous section highlights that
determining the exact optimal policy is difficult, since we only have an implicit characterization
of a symmetric equilibrium service rate in (9). As a result, we focus our attention on the setting
where λ is large, and look for an asymptotically optimal policy.
As expected, the asymptotically optimal staffing policy we design for the case of strategic servers
differs considerably from the optimal policies in the nonstrategic setting. In particular, in order for
a symmetric equilibrium service rate to exist, the staffing level must be order λ larger than the
optimal staffing in the classical, nonstrategic setting. Then, the system operates in a quality-driven (QD) regime instead of the quality-and-efficiency-driven (QED) regime that results from square-root staffing. This is intuitive given that the servers value their idle time: in the QD regime they have non-negligible idle time, whereas in the QED regime their idle time is negligible.
The remainder of this section is organized as follows. We first introduce the cost structure and
define asymptotic optimality in Section 4.1. Then, in Section 4.2, we provide a simple approximation
of a symmetric equilibrium service rate and an asymptotically optimal staffing policy. Finally, in
Section 4.3, we compare our asymptotically optimal staffing policy for strategic servers with the
square-root staffing policy that is asymptotically optimal in the nonstrategic setting.
4.1. Preliminaries. Our focus in this section is on an M/M/N queue with strategic servers,
as introduced in Section 3. We assume Random routing throughout this section. It follows that
our results hold for any “idle-time-order-based” routing policy (as explained in the beginning of
Section 3.2 and validated by Theorem 9). The cost structure we assume is consistent with the one
in [8], under which square-root staffing is asymptotically optimal when servers are not strategic.
In their cost structure, there are linear staffing and waiting costs. One difference in our setting is
that there may be multiple equilibrium service rates. In light of Theorem 6, we focus on the largest symmetric equilibrium service rate, and assume W^⋆ denotes the mean steady state waiting time in an M/M/N queue with arrival rate λ and strategic servers that serve at the largest symmetric equilibrium service rate (when there is more than one equilibrium).⁴ Then, the total system cost is

C^⋆(N, λ) = cS N + wλW^⋆,

where cS is the per-unit staffing cost and w is the per-unit waiting cost. The ⋆ superscript indicates that the mean steady state waiting time, and hence the cost function, depends on the (largest) symmetric equilibrium service rate µ⋆, which in turn depends on N and λ.
⁴ Note that the staffing policy we derive in this section (Theorem 8) will be asymptotically optimal regardless of which equilibrium service rate the servers choose.
The function C ⋆ (N, λ) is well-defined only if a symmetric equilibrium service rate, under which
the system is stable, exists. Furthermore, we would like to rule out having an unboundedly large
symmetric equilibrium service rate because then the server utility (1) will be large and negative –
and it is hard to imagine servers wanting to participate in such a game.
Definition 1. A staffing policy N λ is admissible if the following two properties hold:
(i) There exists a symmetric equilibrium µ⋆,λ under which the system is stable (λ < µ⋆,λ N λ ) for
all large enough λ.
(ii) There exists a sequence of symmetric equilibria {µ⋆,λ , λ > 0} for which lim supλ→∞ µ⋆,λ < ∞.
If the requirement (ii) in the above definition is not satisfied, then the server utility will approach
−∞ as the service rates become unboundedly large. The servers will not want to participate in
such a game. As long as the requirement (ii) is satisfied, we can assume the server payment is
sufficient to ensure that the servers have positive utility.
We let Π denote the set of admissible staffing policies. We would like to solve for

N^{opt,λ} = arg min_{N ∈ Π} C^⋆(N, λ).    (11)
However, given the difficulty of deriving N opt,λ directly, we instead characterize the first order
growth term of N opt,λ in terms of λ. To do this, we consider a sequence of systems, indexed by the
arrival rate λ, and let λ become large.
Our convention when we wish to refer to any process or quantity associated with the system
having arrival rate λ is to superscript the appropriate symbol by λ. In particular, N^λ denotes the staffing level in the system having arrival rate λ, and µ^{⋆,λ} denotes an equilibrium service rate (assuming existence) in the system with arrival rate λ and staffing level N^λ. We assume W^{⋆,λ} equals the mean steady state waiting time in an M/M/N^λ queue with arrival rate λ when the servers work at the largest equilibrium service rate. The associated cost is

C^{⋆,λ}(N^λ) = cS N^λ + wλW^{⋆,λ}.    (12)
Given this setup, we would like to find an admissible staffing policy N λ that has close to the
minimum cost C ⋆,λ (N opt,λ ).
Definition 2. A staffing policy N^λ is asymptotically optimal if it is admissible (N^λ ∈ Π) and

lim_{λ→∞} C^{⋆,λ}(N^λ) / C^{⋆,λ}(N^{opt,λ}) = 1.
In what follows, we use the o and ω notations to denote the limiting behavior of functions. Formally, for any two real-valued functions f(x), g(x) that take nonzero values for sufficiently large x, we say that f(x) = o(g(x)) (equivalently, g(x) = ω(f(x))) if lim_{x→∞} f(x)/g(x) = 0. In other words, f is dominated by g asymptotically (equivalently, g dominates f asymptotically).
4.2. An asymptotically optimal staffing policy. The class of policies we study are those
that staff independently of the equilibrium service rates, which are endogenously determined
according to the analysis in Section 3.2. More specifically, these are policies that choose N λ purely
as a function of λ. Initially, it is unclear what functional form an asymptotically optimal staffing
policy can take in the strategic server setting. Thus, to begin, it is important to rule out policies
that cannot be asymptotically optimal. The following proposition does this, and highlights that
asymptotically optimal policies must be asymptotically linear in λ.
Proposition 1. Suppose N λ = f (λ) + o(f (λ)) for some function f. If either f (λ) = o(λ) or
f (λ) = ω(λ), then the staffing policy N λ cannot be asymptotically optimal.
Intuitively, if f (λ) = o(λ), understaffing forces the servers to work too hard, their service rates
growing unboundedly (and hence their utilities approaching −∞) as λ becomes large. On the other
hand, the servers may prefer to have f (λ) = ω(λ) because the overstaffing allows them to be lazier;
however, the overstaffing is too expensive for the system manager.
Proposition 1 implies that to find a staffing policy that is asymptotically optimal, we need only
search within the class of policies that have the following form:
N^λ = (1/a)λ + o(λ), for a ∈ (0, ∞).    (13)
However, before we can search for the cost-minimizing a, we must ensure that the staffing (13) guarantees the existence of a symmetric equilibrium µ^{⋆,λ} for all large enough λ. It turns out that this
is only true when a satisfies certain conditions. After providing these conditions (see Theorem 7
in the following), we evaluate the cost function as λ becomes large to find the a⋆ (defined in (17))
under which (13) is an asymptotically optimal staffing policy (see Theorem 8).
Equilibrium characterization. The challenge in characterizing equilibria comes from the
complexity of the first order condition derived in Section 3. This complexity drives our focus on
the large λ regime.
The first order condition for a symmetric equilibrium (9) is equivalently written as

µ (λ/N^λ) (1 + ErlC(N^λ, λ/µ)/N^λ) − (λ/N^λ)² = µ³c′(µ).    (14)
Under the staffing policy (13), when the limit λ → ∞ is taken, this becomes

a(µ − a) = µ³c′(µ).    (15)
Since µ³c′(µ) > 0, it follows that any solution µ has µ > a. Therefore, under the optimistic assumption that a symmetric equilibrium solution µ^{⋆,λ} converging to the aforementioned solution µ exists, it follows that

λ/µ^{⋆,λ} < λ/a
for all large enough λ. In words, the presence of strategic servers that value their idle time forces
the system manager to staff order λ more servers than the offered load λ/µ⋆,λ . In particular, since
the growth rate of N λ is λ/a, the system will operate in the quality-driven regime.
The properties of the equation (15) are easier to see when it is rewritten as

1/a = (µ²/a²)c′(µ) + 1/µ.    (16)
Note that the left-hand side of (16) is a constant function and the right-hand side is a convex
function. These functions either cross at exactly two points, at exactly one point, or never intersect,
depending on a. That information then can be used to show whether or not there exists a solution
to the first order condition (14), depending on the value of a in the staffing policy (13).
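To illustrate this geometric argument, here is a small numerical sketch (ours, with an assumed quadratic effort cost c(µ) = µ²) that scans a grid of µ values and counts how many times the convex right-hand side of (16) crosses the constant level 1/a for several values of a; two, one, or zero crossings correspond to the cases discussed above.

```python
import numpy as np

def rhs_16(mu, a, c_prime):
    # right-hand side of (16): (mu^2 / a^2) c'(mu) + 1/mu
    return (mu**2 / a**2) * c_prime(mu) + 1.0 / mu

def count_crossings(a, c_prime, mu_grid):
    # count sign changes of (RHS - 1/a) along the grid
    diff = rhs_16(mu_grid, a, c_prime) - 1.0 / a
    return int(np.sum(np.diff(np.sign(diff)) != 0))

c_prime = lambda mu: 2.0 * mu              # assumed cost c(mu) = mu^2, so c'(mu) = 2 mu
mu_grid = np.linspace(1e-3, 5.0, 200_000)
for a in (0.1, 0.2, 0.3, 0.4):
    print(a, count_crossings(a, c_prime, mu_grid))
```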
Theorem 7. The following holds for all large enough λ.
(i) Suppose a > 0 is such that there exist µ2 > µ1 > 0 that solve (16). Then, there exist two solutions that solve (14).
(ii) Suppose a > 0 is such that there exists exactly one µ1 > 0 that solves (16).
(a) Suppose N^λ − λ/a ≥ 0. Then, there exist two solutions that solve (14).
(b) Otherwise, if N^λ − λ/a < −3, then there does not exist a solution µ^λ to (14).
Furthermore, for any ε > 0, if µ^λ solves (14), then |µ^λ − µ| < ε for some µ that solves (16).
We are not sure whether there exists a solution in the case N^λ − (1/a)λ ∈ [−3, 0); however, given that we are focusing on a large-λ asymptotic regime, the range [−3, 0) is vanishingly small relative to λ.
Moving forward, once the existence of a solution to the first order condition (14) is established, concluding that the solution is a symmetric equilibrium service rate also requires verifying the condition (10) in Theorem 4. This can be done for any staffing policy (13) under which the system operates in the quality-driven regime.
Lemma 1. For any staffing policy N^λ and associated µ^λ that satisfies the first order condition (14), if

lim inf_{λ→∞} N^λµ^λ/λ = d > 1  and  lim sup_{λ→∞} µ^λ < ∞,

then µ^{⋆,λ} = µ^λ is a symmetric equilibrium for all large enough λ.
Under the conditions for the existence of a solution to the first order condition (14) in Theorem 7, it is also true that the conditions of Lemma 1 are satisfied. In particular, there exists a bounded sequence {µ^λ} having

lim inf_{λ→∞} N^λµ^λ/λ = lim inf_{λ→∞} (µ^λ/a + µ^λ·o(λ)/λ) > 1.

This then guarantees that, for all large enough λ, there exists a solution µ^{⋆,λ} to (14) that is a symmetric equilibrium, under the conditions of Theorem 7.
For each λ, there are either multiple symmetric equilibria or none, because from Theorem 7 there are either multiple or zero solutions to the first order condition (14). These symmetric equilibria will be close to one another when there exists exactly one µ that solves (16); however, they may not be close when there exist two µ that solve (16). We show in the following that this does not affect which staffing policy is asymptotically optimal.
Optimal staffing. Given the characterization of symmetric equilibria under a staffing policy (13), we can now move to the task of optimizing the staffing level, i.e., optimizing a. The first
step is to characterize the associated cost, which is done in the following proposition.
Proposition 2. Suppose a > 0 is such that there exists µ > 0 that solves (16). Then, under the staffing policy (13),

C^{⋆,λ}(N^λ)/λ → cS/a, as λ → ∞.
Proposition 2 implies that to minimize costs within the class of staffing policies that satisfy (13),
the maximum a under which there exists at least one solution to (16) should be used. That is, we
should choose a to be
a⋆ := sup A, where A := {a > 0 : there exists at least one solution µ > 0 to (16)}.    (17)

Lemma 2. a⋆ ∈ A is finite.
Importantly, this a⋆ is not only optimal among the class of staffing policies that satisfy (13), it is
asymptotically optimal among all admissible staffing policies. In particular, the following theorem
shows that as λ becomes unboundedly large, no other admissible staffing policy can asymptotically
achieve strictly lower cost than the one in (13) with a = a⋆ .
Theorem 8. If N^{ao,λ} satisfies (13) with a = a⋆, then N^{ao,λ} is admissible and asymptotically optimal. Furthermore,

lim_{λ→∞} C^{⋆,λ}(N^{ao,λ})/λ = lim_{λ→∞} C^{⋆,λ}(N^{opt,λ})/λ = cS/a⋆.
Note that an inspection of the proof of Theorem 8 shows that it holds regardless of which equilibrium service rate is used to define W^{⋆,λ}. Hence, even though we have defined W^{⋆,λ} to be the mean steady state waiting time when the servers serve at the largest equilibrium service rate, this is not necessary. The staffing policy N^{ao,λ} in Theorem 8 will be asymptotically optimal regardless of which equilibrium service rate the servers choose.
Though the above theorem characterizes an asymptotically optimal staffing level, because the
definition of a⋆ is implicit, it is difficult to develop intuition. To highlight the structure more clearly,
the following lemma characterizes a⋆ for a specific class of effort cost functions.
Lemma 3. Suppose c(µ) = cE µ^p for some cE ∈ [1, ∞) and p ≥ 1. Then,

a⋆ = ((p+1)/(p+2)) [ (p+1)/(cE p(p+2)²) ]^{1/p}  <  µ⋆ = [ (p+1)/(cE p(p+2)²) ]^{1/p}  <  1,

and a⋆ and µ⋆ are both increasing in p. Furthermore,
if a < a⋆, then there are two non-negative solutions to (16);
if a = a⋆, then there is exactly one solution to (16);
if a > a⋆, then there is no non-negative solution to (16).
There are several interesting relationships between the effort cost function and the staffing level that follow from Lemma 3. First, for fixed p,

a⋆(p) ↓ 0 as cE → ∞.

In words, the system manager must staff more and more servers as effort becomes more costly. Second, for fixed cE, since a⋆(p) is increasing in p, the system manager can staff fewer servers when the cost function becomes “more convex”. The lower staffing level forces the servers to work at a higher service rate, since µ⋆(p) is also increasing in p. We will revisit this idea that convexity is helpful to the system manager in the next section.
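These comparative statics are easy to check numerically. The sketch below (our illustration) evaluates the expressions for a⋆ and µ⋆ from Lemma 3 for power cost functions c(µ) = cE µ^p; the specific values of p and cE are assumptions chosen only for illustration.

```python
def mu_star(p, cE):
    # mu* for c(mu) = cE * mu^p, per Lemma 3
    return ((p + 1.0) / (cE * p * (p + 2.0) ** 2)) ** (1.0 / p)

def a_star(p, cE):
    # a* = (p+1)/(p+2) * mu*, per Lemma 3
    return (p + 1.0) / (p + 2.0) * mu_star(p, cE)

for p in (1, 2, 3):
    print(p, round(a_star(p, 1.0), 4), round(mu_star(p, 1.0), 4))   # both increase in p
print(a_star(2, 1.0), a_star(2, 100.0))   # a* shrinks as the effort cost scale cE grows
```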
4.3. Contrasting staffing policies for strategic and nonstrategic servers. One of the
most crucial observations that the previous section makes about the impact of strategic servers on
staffing is that the strategic behavior leads the system to a quality-driven regime. In this section,
we explore this issue in more detail, by comparing to the optimal staffing rule that arises when
servers are not strategic, and then attempting to implement that staffing rule.
Nonstrategic servers. Recall that, for the conventional M/M/N queue (without strategic servers), square-root staffing minimizes costs as λ becomes large (see equation (1), Proposition 6.2, and Example 6.3 in [8]). So, we can define

C_µ^λ(N) = cS N + wλW_µ^λ

to be the cost associated with staffing N nonstrategic servers that work at the fixed service rate µ. Further,

N_µ^{opt,λ} = arg min_{N > λ/µ} C_µ^λ(N)

is the staffing level that minimizes expected cost when the system arrival rate is λ and the service rate is fixed to be µ. So, the staffing rule

N_µ^{BMR,λ} = λ/µ + y⋆ √(λ/µ)    (18)
is asymptotically optimal in the sense that

lim_{λ→∞} C_µ^λ(N_µ^{BMR,λ}) / C_µ^λ(N_µ^{opt,λ}) = 1.

Here, y⋆ := arg min_{y>0} { cS y + wα(y)/y }, where α(y) = [1 + y/h(−y)]^{−1}, with h(·) being the hazard rate function of the standard normal distribution, namely, h(x) := φ(x)/(1 − Φ(x)) with φ(x) = (1/√(2π)) e^{−x²/2} and Φ(x) = ∫_{−∞}^{x} φ(t) dt. The staffing rule (18) is the famous square-root safety staffing rule.
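For concreteness, the following sketch (ours, with assumed unit staffing and waiting costs cS = w = 1) evaluates the objective cS y + wα(y)/y on a grid, using the standard normal hazard rate, and returns the minimizing safety factor y⋆ used in (18).

```python
import numpy as np
from scipy.stats import norm

def alpha(y):
    # Halfin-Whitt delay probability approximation: [1 + y / h(-y)]^{-1},
    # where h is the standard normal hazard rate, so h(-y) = phi(y) / Phi(y)
    h_minus_y = norm.pdf(y) / norm.cdf(y)
    return 1.0 / (1.0 + y / h_minus_y)

def y_star(c_S=1.0, w=1.0):
    # grid search for the minimizer of c_S * y + w * alpha(y) / y
    ys = np.linspace(1e-3, 5.0, 100_000)
    cost = c_S * ys + w * alpha(ys) / ys
    return ys[np.argmin(cost)]

print(y_star())   # safety factor y* used in the square-root staffing rule (18)
```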
Contrasting strategic and nonstrategic servers. In order to compare the case of strategic
servers to the case of nonstrategic servers, it is natural to fix µ in (18) to the limiting service rate
that results from using the optimal staffing rule N^{ao,λ} defined in Theorem 8. We see that N^{ao,λ} staffs order λ more servers than N_{µ⋆}^{BMR,λ}, where µ⋆ solves (16) for a = a⋆, because any solution µ to (16) has µ > a. When the effort cost function is c(µ) = cE µ^p for p ≥ 1, we know from Lemma 3 and Theorem 7 (since the a⋆ is unique) that

µ^{⋆,λ} → µ⋆ as λ → ∞,

where µ⋆ is as given in Lemma 3. Then, the difference in the staffing levels is
N^{ao,λ} − N_{µ⋆}^{BMR,λ} = (1/a⋆ − 1/µ⋆) λ + o(λ) = (1/a⋆)·(1/(p+2)) λ + o(λ).
Since a⋆ = a⋆(p) is increasing in p from Lemma 3, we see that the difference N^{ao,λ} − N_{µ⋆}^{BMR,λ} decreases to 0 as the cost function becomes “more convex”. This is consistent with our observation at the end of the previous subsection that convexity is helpful to the system manager.
It is natural to wonder if a system manager can force the servers to work harder by adopting
the staffing policy suggested by the analysis of nonstrategic servers, i.e.,
N^{⋆,BMR,λ} = λ/µ^{⋆,λ} + y⋆ √(λ/µ^{⋆,λ}).    (19)
The interpretation of this staffing rule requires care, because the offered load λ/µ⋆,λ is itself a
function of the staffing level (and the arrival rate) through an equilibrium service rate µ⋆,λ . The
superscript ⋆ emphasizes this dependence.
The first question concerns whether or not the staffing policy (19) is even possible in practice,
because the staffing level depends on an equilibrium service rate and vice versa. More specifically,
for a given staffing level, the servers relatively quickly arrive at an equilibrium service rate. Then,
when system demand grows, the system manager increases the staffing, and the servers again arrive
at an equilibrium service rate. In other words, there are two games, one played on a faster time
scale (that is the servers settling to an equilibrium service rate), and one played on a slower time
scale (that is the servers responding to added capacity).
To analyze the staffing policy (19), note that the first order condition for a symmetric equilibrium (9) is equivalently written as

(λ / (µ(N^{⋆,BMR,λ})²)) ( N^{⋆,BMR,λ} − λ/µ + ErlC(N^{⋆,BMR,λ}, λ/µ) ) = µc′(µ).
Then, if µλ is a solution to the first order condition under the staffing N ⋆,BMR,λ from (19), from
substituting N ⋆,BMR,λ into the above expression, µλ must satisfy
(λ/µ^λ) / (λ/µ^λ + y⋆√(λ/µ^λ))² · ( y⋆√(λ/µ^λ) + ErlC(λ/µ^λ + y⋆√(λ/µ^λ), λ/µ^λ) ) = µ^λ c′(µ^λ).
As λ becomes large, since ErlC(λ/µ^λ + y⋆√(λ/µ^λ), λ/µ^λ) is bounded above by 1, the left-hand side of
the above expression has limit 0. Furthermore, the right-hand side of the above equation is nonnegative and increasing as a function of µ. Hence any sequence of solutions µλ to the first order
condition has the limiting behavior
µλ → 0, as λ → ∞,
which cannot be a symmetric equilibrium service rate because we require the servers to work fast
enough to stabilize the system.
One possibility is to expand the definition of an equilibrium service rate in (1) to allow the
servers to work exactly at the lower bound λ/N . In fact, the system manager may now be tempted
to push the servers to work even faster. However, faster service cannot be mandated for free – there
must be a trade-off; for example, the service quality may suffer or the salaries should be higher.
4.4. Numerical examples. In order to understand how well our asymptotically optimal
staffing policy N ao,λ performs in comparison with the optimal policy N opt,λ for finite λ, and how
fast the corresponding system cost converges to the optimal cost, we present some results from
numerical analysis in this section.
We consider two staffing policies: (i) N opt,λ (defined in (11)), and (ii) N ao,λ (defined in Theorem 8
and (17)) where we ignore the o(λ) term of (13). For each, we first round up the staffing level if
necessary, and then plot the following two equilibrium quantities as a function of the arrival rate
λ: (a) service rates µ⋆,λ (if there is more than one, we pick the largest), and (b) normalized costs
C ⋆,λ /λ. We calculate N opt,λ numerically, by iterating over the staffing levels that admit equilibria
(and we choose the lowest cost when there are multiple equilibria). These plots are shown in Figure 3
for three different effort cost functions: c(µ) = µ, c(µ) = µ2 , and c(µ) = µ3 , which are depicted
in red, blue, and green respectively. For each color, the curve with the darker shade corresponds
to N opt,λ and the curve with the lighter shade corresponds to N ao,λ . The horizontal dashed lines
correspond to the limiting values as λ → ∞.
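The core of this numerical procedure is solving the symmetric first order condition for a given pair (N, λ). The sketch below (our illustration) uses the form of the condition given in footnote 3 together with the Erlang C formula, and scans for its roots above the stability bound λ/N; the quadratic cost function and the specific (λ, N) pair are assumptions chosen only for illustration.

```python
import math
from scipy.optimize import brentq

def erlang_c(N, rho):
    # Erlang C: probability of waiting in an M/M/N queue with offered load rho < N
    summ = sum(rho**k / math.factorial(k) for k in range(N))
    top = rho**N / math.factorial(N) * N / (N - rho)
    return top / (summ + top)

def foc(mu, lam, N, c_prime):
    # symmetric first order condition (footnote 3):
    #   mu^2 c'(mu) + (lam / N^2)(rho - ErlC(N, rho)) - lam / N = 0, with rho = lam / mu
    rho = lam / mu
    return mu**2 * c_prime(mu) + lam / N**2 * (rho - erlang_c(N, rho)) - lam / N

def equilibria(lam, N, c_prime, grid=2000, mu_max=5.0):
    # scan for sign changes of the condition above the stability bound lam/N, then refine
    lo = lam / N + 1e-6
    mus = [lo + i * (mu_max - lo) / grid for i in range(grid + 1)]
    roots = []
    for m1, m2 in zip(mus[:-1], mus[1:]):
        if foc(m1, lam, N, c_prime) * foc(m2, lam, N, c_prime) < 0:
            roots.append(brentq(foc, m1, m2, args=(lam, N, c_prime)))
    return roots

print(equilibria(lam=2.0, N=12, c_prime=lambda mu: 2 * mu))   # assumed c(mu) = mu^2
```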
An immediate first observation is the jaggedness of the curves, which is a direct result of the
discreteness of the staffing levels N opt,λ and N ao,λ . In particular, as the arrival rate λ increases,
the equilibrium service rate µ⋆,λ decreases (respectively, the equilibrium normalized cost C ⋆,λ /λ
increases) smoothly until the staffing policy adds an additional server, which causes a sharp increase
(respectively, decrease). The jaggedness is especially pronounced for smaller λ, resulting in a complex pre-limit behavior that necessitates asymptotic analysis in order to obtain analytic results.
However, despite the jaggedness, the plots illustrate clearly that both the equilibrium service rates and normalized costs of the asymptotically optimal policy N^{ao,λ} converge quickly to those of the optimal policy N^{opt,λ}, highlighting that our asymptotic results are predictive at realistically sized systems.
5. Routing to strategic servers. Thus far we have focused our discussion on staffing, assuming that jobs are routed randomly to servers when there is a choice. Of course, the decision of
how to route jobs to servers is another crucial aspect of the design of service systems. As such,
the analysis of routing policies has received considerable attention in the queueing literature, when
[Figure 3 plot panels: (a) equilibrium service rates µ^{⋆,λ} versus λ; (b) normalized costs C^{⋆,λ}/λ versus λ; curves shown for c(µ) = µ, c(µ) = µ², and c(µ) = µ³.]
Figure 3. Equilibrium behavior as a function of the arrival rate for the optimal and asymptotically optimal staffing
policies, for three different effort cost functions: linear, quadratic, and cubic.
servers are not strategic. In this section, we begin to investigate the impact of strategic servers on
the design of routing policies.
In the classical literature studying routing when servers are nonstrategic, a wide variety of policies
have been considered. These include “rate-based policies” such as Fastest Server First (FSF) and
Slowest Server First (SSF); as well as “idle-time-order-based policies” such as Longest Idle Server
First (LISF) and Shortest Idle Server First (SISF). Among these routing policies, FSF is a natural
choice to minimize the mean response time (although, as noted in the Introduction, it is not
optimal in general). This leads to the question: how does FSF perform when servers are strategic?
In particular, does it perform better than the Random routing that we have so far studied?
Before studying optimal routing to improve performance, we must first answer the following
even more fundamental question: what routing policies admit symmetric equilibria? This is a very
challenging goal, as can be seen from the complexity of the analysis for the M/M/N queue under Random routing. This section provides a first step towards that goal.
The results in this section focus on two broad classes of routing policies: idle-time-order-based policies and rate-based policies, which are introduced in turn in the following.
5.1. Idle-time-order-based policies. Informally, idle-time-order-based policies are those
routing policies that use only the rank ordering of when servers last became idle in order to determine how to route incoming jobs. To describe the class of idle-time-order-based policies precisely,
let I(t) be the set of servers idle at time t > 0, and, when I(t) ≠ ∅, let s(t) = (s_1, . . . , s_{|I(t)|}) denote the ordered vector of idle servers at time t, where server s_j became idle before server s_k whenever j < k. For n ≥ 1, let P_n = ∆({1, . . . , n}) denote the set of all probability distributions over the set {1, . . . , n}. An idle-time-order-based routing policy is defined by a collection of probability distributions p = {p_S}_{S ∈ 2^{{1,2,...,N}}\∅}, such that p_S ∈ P_{|S|} for all S ∈ 2^{{1,2,...,N}}\∅. Under this policy, at time t, the next job in queue is assigned to idle server s_j with probability p_{I(t)}(j). Examples of
idle-time-order-based routing policies are as follows.
1. Random. An arriving customer that finds more than one server idle is equally likely to be
routed to any of those servers. Then, pS = (1/|S|, . . . , 1/|S|) for all S ∈ 2{1,2,...,N } \∅.
2. Weighted Random. Each such arriving customer is routed to one of the idle servers with
probabilities that may depend on the order in which the servers became idle. For example, if
p_S(j) = (|S| + 1 − j) / (∑_{n=1}^{|S|} n), for s_j ∈ S, for all S ∈ 2^{{1,2,...,N}}\∅,
then the probabilities are decreasing according to the order in which the servers became idle.
Note that ∑_j p_S(j) = (|S|(|S|+1) − ½|S|(|S|+1)) / (½|S|(|S|+1)) = 1.
3. Longest Idle Server First (Shortest Idle Server First). Each such arriving customer is routed
to the server that has idled the longest (idled the shortest). Then, pS = (1, 0, . . . , 0) (pS =
(0, . . . , 0, 1)) for all S ⊆ {1, 2, . . . , N }.
5.1.1. Policy-space collapse. Surprisingly, it turns out that all idle-time-order-based policies are “equivalent” in a very strong sense — they all lead to the same steady state probabilities,
resulting in a remarkable policy-space collapse result, which we discuss in the following.
Fix R to be some idle-time-order-based routing policy, defined through the collection of probability distributions p = {p_S}_{∅ ≠ S ⊆ {1,2,...,N}}. The states of the associated continuous time Markov chain are defined as follows:
• State B is the state where all servers are busy, but there are no jobs waiting in the queue.
• State s = (s1 , s2 , . . . , s|I| ) is the ordered vector of idle servers I. When I = ∅, we identify the
empty vector s with state B.
• State m (m ≥ 0) is the state where all servers are busy and there are m jobs waiting in the
queue (i.e., there are N + m jobs in the system). We identify state 0 with state B.
When all servers are busy, there is no routing, and so the system behaves exactly as an M/M/1 queue with arrival rate λ and service rate µ_1 + · · · + µ_N. Then, from the local balance equations, the associated steady state probabilities π_B and π_m for m = 0, 1, 2, . . ., must satisfy

π_m = (λ/µ)^m π_B, where µ = ∑_{j=1}^{N} µ_j.    (20)
One can anticipate that the remaining steady state probabilities satisfy

π_s = π_B ∏_{s∈I} (µ_s/λ), for all s = (s_1, s_2, . . . , s_{|I|}) with |I| > 0,    (21)

and the following theorem verifies this by establishing that the detailed balance equations are satisfied.
Theorem 9. All idle-time-order-based policies have steady state probabilities that are uniquely determined by (20)–(21), together with the normalization constraint that their sum is one.
Theorem 9 is remarkable because there is no dependence on the collection of probability distributions p that define R. Therefore, it follows that all idle-time-order-based routing policies result
in the same steady state probabilities. Note that, concurrently, a similar result has been discovered
independently in the context of loss systems [24].
In relation to our server game, it follows from Theorem 9 that all idle-time-order-based policies
have the same equilibrium behavior as Random. This is because an equilibrium service rate depends
on the routing policy through the server idle time vector (I1 (µ; R), . . . , IN (µ; R)), which can be
found from the steady state probabilities in (20)-(21). As a consequence, if there exists (does not
exist) an equilibrium service rate under Random, then there exists (does not exist) an equilibrium
service rate under any idle-time-order-based policy. In summary, it is not possible to achieve better
performance than under Random by employing any idle-time-order-based policy.
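To illustrate (20)–(21), the following sketch (ours) enumerates the ordered idle-server states of a small heterogeneous system, assembles the unnormalized steady state weights, and returns each server's idle probability. By Theorem 9, the output is the same for every idle-time-order-based policy; the service rates in the example are assumptions chosen only for illustration.

```python
from itertools import permutations

def idle_probabilities(mus, lam):
    """Idle probability of each server under any idle-time-order-based policy.

    Uses (20)-(21): pi_s = pi_B * prod_{j in s} (mu_j / lam) for ordered idle
    vectors s, and pi_m = (lam / sum(mus))^m * pi_B for states with a queue.
    """
    mu_total = sum(mus)
    assert lam < mu_total, "system must be stable"
    weights = {(): 1.0}                 # state B: all servers busy, empty queue
    for k in range(1, len(mus) + 1):
        for s in permutations(range(len(mus)), k):
            w = 1.0
            for j in s:
                w *= mus[j] / lam
            weights[s] = w
    # states with m >= 1 jobs waiting form a geometric series on top of state B
    queue_weight = (lam / mu_total) / (1.0 - lam / mu_total)
    total = sum(weights.values()) + queue_weight
    return [sum(w for s, w in weights.items() if j in s) / total
            for j in range(len(mus))]

print(idle_probabilities([2.0, 1.0], lam=1.0))   # heterogeneous M/M/2 example
```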
5.2. Rate-based policies. Informally, a rate-based policy is one that makes routing decisions
using only information about the rates of the servers. As before, let I(t) denote the set of idle
servers at time t. In a rate-based routing policy, jobs are assigned to idle servers only based on
their service rates. We consider a parameterized class of rate-based routing policies that we term
r-routing policies (r ∈ R). Under an r-routing policy, at time t, the next job in queue is assigned
to idle server i ∈ I(t) with probability

p_i(µ, t; r) = µ_i^r / ∑_{j∈I(t)} µ_j^r.
Notice that for special values of the parameter r, we recover well-known policies. For example,
setting r = 0 results in Random; as r → ∞, it approaches FSF; and as r → −∞, it approaches SSF.
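As a small illustration of this definition (ours, with assumed idle-server rates), the routing probabilities for a given idle set can be computed directly; taking r large and positive or large and negative concentrates the probability on the fastest or slowest idle server, approaching FSF and SSF respectively.

```python
def r_routing_probs(mus_idle, r):
    # probability of routing the next job to each currently idle server
    # under an r-routing policy: proportional to mu_i^r
    weights = [mu ** r for mu in mus_idle]
    total = sum(weights)
    return [w / total for w in weights]

idle_rates = [0.5, 1.0, 2.0]
print(r_routing_probs(idle_rates, r=0))    # Random: uniform over idle servers
print(r_routing_probs(idle_rates, r=10))   # close to Fastest Server First
print(r_routing_probs(idle_rates, r=-10))  # close to Slowest Server First
```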
In order to understand the performance of rate-based policies, the first step is to perform an
equilibrium analysis, i.e., we need to understand what the steady state idle times look like under
any r-routing policy. The following proposition provides us with the required expressions.
Proposition 3. Consider a heterogeneous M/M/2 system under an r-routing policy, with arrival rate λ > 0 and servers 1 and 2 operating at rates µ1 and µ2 respectively. The steady state probability that server 1 is idle is given by:

I_1^r(µ1, µ2) = µ1(µ1 + µ2 − λ) [ (λ + µ2)² + µ1µ2 + (µ2^r/(µ1^r + µ2^r))(λµ1 + λµ2) ]
  / { µ1µ2(µ1 + µ2)² + (λµ1 + λµ2) [ µ1² + 2µ1µ2 − (µ1^r/(µ1^r + µ2^r))(µ1² − µ2²) ] + (λµ1)² + (λµ2)² },

and the steady state probability that server 2 is idle is given by I_2^r(µ1, µ2) = I_1^r(µ2, µ1).
Note that we restrict ourselves to a 2-server system for this analysis. This is due to the fact that
there are no closed form expressions known for the resulting Markov chains for systems with more
than 3 servers. It may be possible to extend these results to 3 servers using results from [37]; but,
the expressions are intimidating, to say the least. However, the analysis for two servers is already
enough to highlight important structure about the impact of strategic servers on policy design.
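The expression in Proposition 3 is straightforward to evaluate numerically. The sketch below (ours) transcribes it and sanity-checks the symmetric case µ1 = µ2 = µ, where each server's idle probability should reduce to (2µ − λ)/(2µ) for any r; the parameter values are assumptions chosen only for illustration.

```python
def idle_prob_server1(mu1, mu2, lam, r):
    # steady state idle probability of server 1 in a heterogeneous M/M/2 queue
    # under an r-routing policy (expression of Proposition 3)
    w1 = mu1**r / (mu1**r + mu2**r)
    w2 = mu2**r / (mu1**r + mu2**r)
    num = mu1 * (mu1 + mu2 - lam) * (
        (lam + mu2) ** 2 + mu1 * mu2 + w2 * (lam * mu1 + lam * mu2)
    )
    den = (mu1 * mu2 * (mu1 + mu2) ** 2
           + (lam * mu1 + lam * mu2) * (mu1**2 + 2 * mu1 * mu2 - w1 * (mu1**2 - mu2**2))
           + (lam * mu1) ** 2 + (lam * mu2) ** 2)
    return num / den

print(idle_prob_server1(2.0, 1.0, lam=1.0, r=0))   # heterogeneous example
print(idle_prob_server1(1.0, 1.0, lam=1.0, r=3))   # symmetric check: (2*1 - 1)/(2*1) = 0.5
```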
In particular, our first result concerns the FSF and SSF routing policies, which can be obtained
in the limit when r → ∞ and r → −∞ respectively. Recall that FSF is asymptotically optimal
in the nonstrategic setting. Intuitively, however, it penalizes the servers that work the fastest by
sending them more and more jobs. In a strategic setting, this might incentivize servers to decrease
their service rate, which is not good for the performance of the system. One may wonder if by doing
the opposite, that is, using the SSF policy, servers can be incentivized to increase their service rate.
However, our next theorem (Theorem 10) shows that neither of these policies is useful if we are
interested in symmetric equilibria.
Recall that our model for strategic servers already assumes an increasing, convex effort cost function with c′′′(µ) ≥ 0. For the rest of this section, in addition, we assume that c′(λ/2) < 1/λ. (Recall that this is identical to the sufficient condition c′(λ/N) < 1/λ which we introduced in Section 3.2, on substituting N = 2.)⁵
Theorem 10. Consider an M/M/2 queue with strategic servers. Then, FSF and SSF do not admit a symmetric equilibrium.
Moving beyond FSF and SSF, we continue our equilibrium analysis (for a finite r) by using the
first order conditions to show that whenever an r-routing policy admits a symmetric equilibrium,
it is unique. Furthermore, we provide an expression for the corresponding symmetric equilibrium
service rate in terms of r, which brings out a useful monotonicity property.
⁵ The sufficient condition c′(λ/2) < 1/λ might seem rather strong, but it can be shown that it is necessary for the symmetric first order condition to have a unique solution. This is because, if c′(λ/2) > 1/λ, then the function ϕ(µ), defined in (22), ceases to be monotonic, and as a result, for any given r, the first order condition ϕ(µ) = r could have more than one solution.
Theorem 11. Consider an M/M/2 queue with strategic servers. Then, any r-routing policy that admits a symmetric equilibrium admits a unique symmetric equilibrium, given by µ⋆ = ϕ^{−1}(r), where ϕ : (λ/2, ∞) → R is the function defined by

ϕ(µ) = (4(λ + µ) / (λ(λ − 2µ))) (µ(λ + 2µ)c′(µ) − λ).    (22)

Furthermore, among all such policies, µ⋆ is decreasing in r, and therefore E[T], the mean response time (a.k.a. sojourn time) at symmetric equilibrium, is increasing in r.
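Theorem 11 can be used numerically by inverting ϕ with a root finder. The sketch below (ours, with an assumed quadratic effort cost satisfying c′(λ/2) < 1/λ) computes µ⋆ = ϕ^{−1}(r) for several values of r and the corresponding symmetric-equilibrium mean response time in the M/M/2 queue, illustrating that µ⋆ decreases and E[T] increases in r.

```python
from scipy.optimize import brentq

def phi(mu, lam, c_prime):
    # the function of Theorem 11; defined for mu > lam / 2
    return 4 * (lam + mu) / (lam * (lam - 2 * mu)) * (mu * (lam + 2 * mu) * c_prime(mu) - lam)

def mu_star(r, lam, c_prime, mu_hi=50.0):
    # invert phi by root finding on (lam/2, mu_hi); assumes the root lies in this bracket
    return brentq(lambda mu: phi(mu, lam, c_prime) - r, lam / 2 + 1e-9, mu_hi)

def mean_response_time(mu, lam):
    # M/M/2 with both servers at rate mu: E[T] = 1/mu + ErlC(2, rho) / (2 mu - lam),
    # where ErlC(2, rho) = rho^2 / (2 + rho) and rho = lam / mu
    rho = lam / mu
    erlc = rho**2 / (2 + rho)
    return 1 / mu + erlc / (2 * mu - lam)

lam = 0.25
c_prime = lambda mu: 2 * mu                # assumed cost c(mu) = mu^2
for r in (-2, -1, 0, 1):
    mu_eq = mu_star(r, lam, c_prime)
    print(r, round(mu_eq, 4), round(mean_response_time(mu_eq, lam), 4))
```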
In light of the inverse relationship between r and µ⋆ that is established by this theorem, the system
manager would ideally choose the smallest r such that the corresponding r-routing policy admits
a symmetric equilibrium, which is in line with the intuition that a bias towards SSF (the limiting
r-routing policy as r → −∞) incentivizes servers to work harder. However, there is a hard limit
on how small an r can be chosen (concurrently, how large an equilibrium service rate µ⋆ can be
achieved) so that there exists a symmetric equilibrium, as evidenced by our next theorem.
Theorem 12. Consider an M/M/2 queue with strategic servers. Then, there exist thresholds µ̄ and r̲ = ϕ(µ̄) such that no service rate µ > µ̄ can be a symmetric equilibrium under any r-routing policy, and no r-routing policy with r < r̲ admits a symmetric equilibrium.
The proof of this theorem is constructive and we do exhibit such an r̲; however, it is not clear whether this bound is tight, that is, whether there exists a symmetric equilibrium for all r-routing policies with r ≥ r̲. We provide a partial answer to the question of which r-routing policies do admit symmetric equilibria in the following theorem.
Theorem 13. Consider an M/M/2 queue with strategic servers. Then, there exists a unique symmetric equilibrium under any r-routing policy with r ∈ {−2, −1, 0, 1}.
Notice that we show equilibrium existence for four integral values of r. It is challenging to show that all r-routing policies in the interval [−2, 1] admit a symmetric equilibrium. This theorem provides an upper bound on the r̲ of the previous theorem, that is, r̲ ≤ −2. Therefore, if the specific cost function c is unknown, then the system manager can guarantee better performance than Random (r = 0) by setting r = −2. If the specific cost function is known, the system manager may be able to employ a lower r to obtain even better performance. For example, consider a 2-server system with λ = 1/4 and one of three different effort cost functions: c(µ) = µ, c(µ) = µ², and c(µ) = µ³. Figure 4 shows the corresponding equilibrium mean response times (in red, blue, and green, respectively). It is worth noting that the more convex the effort cost function, the larger the range of r (and the smaller the minimum value of r) for which a symmetric equilibrium exists.
6. Concluding remarks. The rate at which each server works in a service system has important consequences for service system design. However, traditional models of large service systems
do not capture the fact that human servers respond to incentives created by scheduling and staffing
policies, because traditional models assume each server works at a given fixed service rate. In this
paper, we initiate the study of a class of strategic servers that seek to optimize a utility function
which values idle time and includes an effort cost.
Our focus is on the analysis of staffing and routing policies for an M/M/N queue with strategic
servers, and our results highlight that strategic servers have a dramatic impact on the optimal
policies in both cases. In particular, policies that are optimal in the classical, nonstrategic setting
can perform quite poorly when servers act strategically.
For example, a consequence of the strategic server behavior is that the cost-minimizing staffing
level is order λ larger than square-root staffing, the cost minimizing staffing level for systems with
fixed service rate. In particular, any system with strategic servers operates in the quality-driven
[Figure 4 plot: log₁₀(E[T]) at symmetric equilibrium versus the policy parameter r, with curves for c(µ) = µ, c(µ) = µ², and c(µ) = µ³.]
Figure 4. Equilibrium mean response time (a.k.a. sojourn time) as a function of the policy parameter, r, when the arrival rate is λ = 1/4, for three different effort cost functions: linear, quadratic, and cubic.
regime at equilibrium (as opposed to the quality-and-efficiency-driven regime that arises under
square-root staffing), in which the servers all enjoy non-negligible idle time.
The intuitive reason square-root staffing is not feasible in the context of strategic servers is
that the servers do not value their idleness enough in comparison to their effort cost. This causes
the servers to work too slowly, making idle time scarce. In the economics literature [9, 36], it is
common to assume that scarce goods are more highly valued. If we assumed that the servers valued their idle time more heavily as idle time became scarcer, then the servers would work faster in order to make sure they obtained some. This suggests the following interesting direction for future
research: what is the relationship between the assumed value of idle time in (2) and the resulting
cost minimizing staffing policy? Another situation in which servers may not care about idle time
becoming scarce is when their compensation depends on their service volume (which is increasing
in their service rate). Then, it is reasonable to expect the servers prefer to have negligible idle time.
It would be interesting to be able to identify a class of compensation schemes under which that is
the case.
The aforementioned two future research directions become even more interesting when the class
of routing policies is expanded to include rate-based policies. This paper solves the joint routing
and staffing problem within the class of idle-time-order-based policies. Section 5 suggests that
by expanding the class of routing policies to also include rate-based policies we should be able
to achieve better system performance (although it is clear that the analysis becomes much more
difficult). The richer question also aspires to understand the relationship between the server idle
time value, the compensation scheme, the (potentially) rate-based routing policy, and the number
of strategic servers to staff.
Finally, it is important to note that we have focused on symmetric equilibrium service rates. We
have not proven that asymmetric equilibria do not exist. Thus, it is natural to wonder if there are
routing and staffing policies that result in an asymmetric equilibrium. Potentially, there could be
one group of servers that have low effort costs but negligible idle time and another group of servers
that enjoy plentiful idle time but have high effort costs. The question of asymmetric equilibria
becomes even more interesting when the servers have different utility functions. For example, more
experienced servers likely have lower effort costs than new hires. Also, different servers can value
their idle time differently. How do we design routing and staffing policies that are respectful of
such considerations?
Acknowledgments. This work was supported by NSF grant #CCF-1101470, AFOSR grant
#FA9550-12-1-0359, and ONR grant #N00014-09-1-0751. An important source of inspiration for
this work came from discussions with Mor Armony regarding how to fairly assign workload to
employees. We would also like to thank Bert Zwart for an illuminating discussion regarding asymptotically optimal staffing policies.
References
[1] Aksin, Z., M. Armony, V. Mehrotra. 2007. The modern call-center: A multi-disciplinary perspective on
operations management research. Prod. Oper. Manag. 16(6) 665–688.
[2] Allon, G., I. Gurvich. 2010. Pricing and dimensioning competing large-scale service providers. M&SOM
12(3) 449–469.
[3] Anton, J. 2005. One-minute survey report #488: Agent compensation & advancement. Document
Tracking Number SRV488-080305.
[4] Armony, M. 2005. Dynamic routing in large-scale service systems with heterogeneous servers. Queueing
Syst. Theory Appl. 51(3-4) 287–329.
[5] Armony, M., A. R. Ward. 2010. Fair dynamic routing in large-scale heterogeneous-server systems. Oper.
Res. 58 624–637.
[6] Atar, R. 2005. Scheduling control for queueing systems with many servers: Asymptotic optimality in
heavy traffic. Ann. Appl. Probab. 15(4) 2606–2650.
[7] Atar, R., Y. Y. Shaki, A. Shwartz. 2011. A blind policy for equalizing cumulative idleness. Queueing
Syst. 67(4) 275–293.
[8] Borst, S., A. Mandelbaum, M. I. Reiman. 2004. Dimensioning large call centers. Oper. Res. 52(1) 17–34.
[9] Brock, T. C. 1968. Implications of commodity theory for value change. Psychological Foundations of
Attitudes 243–275.
[10] Cachon, G. P., P. T. Harker. 2002. Competition and outsourcing with scale economies. Manage. Sci.
48(10) 1314–1333.
[11] Cachon, G. P., F. Zhang. 2007. Obtaining fast service in a queueing system via performance-based
allocation of demand. Manage. Sci. 53(3) 408–420.
[12] Cahuc, P., A. Zylberberg. 2004. Labor Economics. MIT Press.
[13] Cheng, S.-F., D. M. Reeves, Y. Vorobeychik, M. P. Wellman. 2004. Notes on equilibria in symmetric
games. International Workshop On Game Theoretic And Decision Theoretic Agents (GTDT). 71–78.
[14] Cohen-Charash, Y., P. E. Spector. 2001. The role of justice in organizations: A meta-analysis. Organ.
Behav. and Hum. Dec. 86(2) 278–321.
[15] Colquitt, J. A., D. E. Conlon, M. J. Wesson, C. O. L. H. Porter, K. Y. Ng. 2001. Justice at the
millennium: A meta-analytic review of 25 years of organizational justice research. J. Appl. Psychol.
86(3) 425–445.
[16] de Véricourt, F., Y.-P. Zhou. 2005. Managing response time in a call-routing problem with service
failure. Oper. Res. 53(6) 968–981.
[17] Erlang, A. K. 1948. On the rational determination of the number of circuits. E. Brockmeyer, H. L.
Halstrom, A. Jensen, eds., The Life and Works of A. K. Erlang. The Copenhagen Telephone Company,
216–221.
[18] Gans, N., G. Koole, A. Mandelbaum. 2003. Telephone call centers: Tutorial, review, and research
prospects. M&SOM 5(2) 79–141.
[19] Garnett, O., A. Mandelbaum, M. Reiman. 2002. Designing a call center with impatient customers.
M&SOM 4(3) 208–227.
[20] Geng, X., W. T. Huh, M. Nagarajan. 2013. Strategic and fair routing policies in a decentralized service
system. Working paper.
[21] Gilbert, S. M., Z. K. Weng. 1998. Incentive effects favor nonconsolidating queues in a service system:
The principal-agent perspective. Manage. Sci. 44(12) 1662–1669.
[22] Gumbel, H. 1960. Waiting lines with heterogeneous servers. Oper. Res. 8(4) 504–511.
[23] Gurvich, I., W. Whitt. 2007. Scheduling flexible servers with convex delay costs in many-server service
systems. M&SOM 11(2) 237–253.
[24] Haji, B., S. M. Ross. 2013. A queueing loss model with heterogenous skill based servers under idle time
ordering policies. Working paper.
[25] Halfin, S., W. Whitt. 1981. Heavy-traffic limits for queues with many exponential servers. Oper. Res.
29(3) 567–588.
[26] Harchol-Balter, M. 2013. Performance Modeling and Design of Computer Systems: Queueing Theory
in Action. Cambridge University Press.
[27] Harel, A. 1988. Sharp bounds and simple approximations for the Erlang delay and loss formulas.
Manage. Sci. 34(8) 959–972.
[28] Hassin, R., M. Haviv. 2003. To Queue or Not to Queue: Equilibrium Behavior in Queueing Systems.
Kluwer.
[29] Hopp, W., W. Lovejoy. 2013. Hospital Operations: Principles of High Efficiency Health Care. Financial
Times Press.
[30] Janssen, A. J. E. M., J. S.H. van Leeuwaarden, B. Zwart. 2011. Refining square-root safety staffing by
expanding Erlang C. Oper. Res. 59(6) 1512–1522.
[31] Kalai, E., M. I. Kamien, M. Rubinovitch. 1992. Optimal service speeds in a competitive environment.
Manage. Sci. 38(8) 1154–1163.
[32] Kc, D. S., C. Terwiesch. 2009. Impact of workload on service time and patient safety: An econometric
analysis of hospital operations. Manage. Sci. 55(9) 1486–1498.
[33] Kocaga, L., M. Armony, A. R. Ward. 2013. Staffing call centers with uncertain arrival rates and cosourcing. Working paper.
[34] Krishnamoorthi, B. 1963. On Poisson queue with two heterogeneous servers. Oper. Res. 11(3) 321–330.
[35] Lin, W., P. Kumar. 1984. Optimal control of a queueing system with two heterogeneous servers. IEEE
Trans. Autom. Contr. 29(8) 696–703.
[36] Lynn, M. 1991. Scarcity effects on value: A quantitative review of the commodity theory literature.
Psychol. Market. 8(1) 43–57.
[37] Mokaddis, G. S., C. H. Matta, M. M. El Genaidy. 1998. On Poisson queue with three heterogeneous
servers. International Journal of Information and Management Sciences 9 53–60.
[38] Reed, J., Y. Shaki. 2013. A fair policy for the G/GI/N queue with multiple server pools. Preprint.
[39] Saaty, T. L. 1960. Time-dependent solution of the many-server Poisson queue. Oper. Res. 8(6) 755–772.
[40] Tezcan, T. 2008. Optimal control of distributed parallel server systems under the Halfin and Whitt
regime. Math. Oper. Res. 33 51–90.
[41] Tezcan, T., J. Dai. 2010. Dynamic control of N-systems with many servers: Asymptotic optimality of
a static priority policy in heavy traffic. Oper. Res. 58(1) 94–110.
[42] Ward, A. R., M. Armony. 2013. Blind fair routing in large-scale service systems with heterogeneous
customers and servers. Oper. Res. 61 228–243.
[43] Whitt, W. 2002. IEOR 6707: Advanced Topics in Queueing Theory: Focus on Customer Contact Centers. Homework 1e Solutions, see http://www.columbia.edu/~ww2040/ErlangBandCFormulas.pdf.
Routing and Staffing when Servers are Strategic: Technical
Appendix
Ragavendran Gopalakrishnan, Sherwin Doroudi, Amy R. Ward, and Adam Wierman
In this technical appendix, we provide proofs for the results stated in the main body of the
manuscript titled: “Routing and Staffing when Servers are Strategic”. The proofs of these results
are in the order in which they appear in the main body.
PROOFS FROM SECTION 3
Proof of Theorem 1. The starting point of this proof is the expression for the steady
state probabilities of a general heterogeneous M /M /N system with Random routing, which was
derived in [22]. Before stating this more general result, we first set up the required notation. Let
µ_1, µ_2, . . . , µ_N denote the service rates of the N servers, and let ρ_j = λ/µ_j, 1 ≤ j ≤ N. We assume that ∑_{j=1}^{N} ρ_j^{−1} > 1 for stability. Let (a_1, a_2, . . . , a_k) denote the state of the system when there are k jobs in
the system (0 < k < N ) and the busy servers are {a1 , a2 , . . . , ak }, where 1 ≤ a1 < a2 < · · · < ak ≤ N .
Let P (a1 , a2 , . . . , ak ) denote the steady state probability of the system being in state (a1 , a2 , . . . , ak ).
Also, let Pk denote the steady state probability of k jobs in the system. Then,
P(a_1, a_2, . . . , a_k) = ((N − k)!/N!) P_0 ρ_{a_1}ρ_{a_2} · · · ρ_{a_k},    (EC.1)

where P_0, the steady state probability that the system is empty, is given by:

P_0 = N! C_N^N / D_N,    (EC.2)
where, for 1 ≤ j ≤ N,

C_j^N = sum of combinations of j of the ρ_i^{−1} values chosen from the N ρ_i^{−1} values
      = ∑_{a_1=1}^{N−j+1} ∑_{a_2=a_1+1}^{N−j+2} · · · ∑_{a_j=a_{j−1}+1}^{N} ρ_{a_1}^{−1} ρ_{a_2}^{−1} · · · ρ_{a_j}^{−1},    (EC.3)

and

D_N = ∑_{j=1}^{N} j! C_j^N + C_0^N · C_1^N/(C_1^N − 1).    (EC.4)

Note that

C_N^N = ∏_{i=1}^{N} ρ_i^{−1}  and  C_1^N = ∑_{i=1}^{N} ρ_i^{−1}.

Also, by convention, we write C_0^N = 1. The steady state probability that a tagged server, say server
1, is idle is obtained by summing up the steady state probabilities of every state in which server 1
is idle:
I(µ_1, µ_2, . . . , µ_N; λ, N) = P_0 + ∑_{k=1}^{N−1} ∑_{2 ≤ a_1 < · · · < a_k ≤ N} P(a_1, a_2, . . . , a_k).    (EC.5)
We now simplify the expressions above for our special system where the tagged server works at
a rate µ1 and all other servers work at rate µ. Without loss of generality, we pick server 1 to be
the tagged server, and we set µ_2 = µ_3 = · · · = µ_N = µ, and therefore, ρ_2 = ρ_3 = · · · = ρ_N = ρ = λ/µ. Then, (EC.1) simplifies to:

P(a_1, a_2, . . . , a_k) = ((N − k)!/N!) P_0 ρ^k,  2 ≤ a_1 < · · · < a_k ≤ N.    (EC.6)
In order to simplify (EC.3), we observe that

C_j^N = ρ_1^{−1} C_{j−1}^{N−1} + C_j^{N−1},

where the terms C_{j−1}^{N−1} and C_j^{N−1} are obtained by applying (EC.3) to a homogeneous M/M/(N−1) system with arrival rate λ and all servers operating at rate µ. This results in:

C_j^N = (N choose j) ρ^{−j} [ (j/N)(ρ/ρ_1) + (N − j)/N ].    (EC.7)

The corresponding special cases are given by: C_0^N = 1, C_1^N = ρ_1^{−1} + (N − 1)ρ^{−1}, and C_N^N = ρ^{−N}(ρ/ρ_1). We
then simplify (EC.4) by substituting for C_j^N from (EC.7), to obtain:
!
N
−1 j
X
N! ρ
ρ
ρ
1
ρ
ρ
DN =
+ −1 + 1+ N
+
1−
ρN ρ1 N
ρ1
j! ρ1
C1 − 1
j=0
N
−1 j
X
ρ1
ρ
ρ
ρ
ρ N!
ρ1
+1+
1−
=
1−
ρ1 ρN
N
ρ
j!
ρ N − ρ+1− ρ
j=0
ρ −N
ρ .
ρ1
We
(EC.8)
ρ1
Next, we simplify (EC.2) by substituting for DN from (EC.8), to obtain:
N
−1 j
N
X
ρ
ρ
ρ
ρ
ρ1
1
+
1+
P0 = 1 −
1−
N
ρ
j! N !
N − ρ+1−
j=0
ρ
ρ1
−1
To express P0 in terms of ErlC(N, ρ), the Erlang C formula, we add and subtract the term
within, to obtain:
ρ
ρ1
P0 =
1−
1−
N
ρ
N−1
X
j=0
ρj
N ρN
+
j!
N − ρ N!
!
ρN
ρ
1
+
1+
N!
N − ρ+1−
ρ
ρ1
N
−
N −ρ
ρ
1−
N
which reduces to:
!
N
−1 j
N ρN
N
X
ρ
N
ρ
ρ
ρ
ρ1
ρ
1
N −ρ N !
−
+
1−
1−
P0 =
1−
N
ρ
j! N − ρ N !
N
ρ N − ρ+1−
j=0
=
N
−1
X
j=0
j
N
ρ
N ρ
+
j! N − ρ N !
!−1
Finally, (EC.5) simplifies to:
1 − ρ
N
1−
ρ1
ρ
I(µ1 , µ, µ, . . . , µ; λ, N ) = P0 +
1 +
N
−1
X
k=1
−1
ErlC(N, ρ)
N − ρ + 1 − ρρ1
N −1
P (2, 3, . . . , k + 1)
k
ρ
ρ1
ρ1
1−
ρ
N ρN
N −ρ N !
−1
−1
(EC.9)
Substituting for P0 from (EC.9) and P (2, 3, . . . , k + 1) from (EC.6), we get:
N
−1
X
(N − k)! P0 ρk
I(µ1 , µ; λ, N ) = P0 +
N!
k=1
!
N −1
N ρN
ρ X ρk
P0
+
= 1−
N
k! N − ρ N !
k=0
−1
ρ
ErlC(N, ρ)
ρ1
ρ
1−
1+
1−
= 1−
N
N
ρ
N − ρ+1− ρ
N −1
k
ρ1
−1
ρ
ErlC(N, ρ)
µ
ρ
1−
1+
1−
= 1−
,
N
N
µ1
N − ρ + 1 − µ1
as desired.
µ
Proof of Theorem 2. We start with the expression for I from (4), and take its first partial
derivative with respect to µ1 :
−2
∂I
ρ
ρ
µ
µ
∂
ErlC(N, ρ)
ρ
ErlC(N,
ρ)
1 −
1 +
1−
=− 1−
1−
1−
1+
∂µ1
N
N
µ1
∂µ1
N
µ1
N − ρ + 1 − µµ1
N − ρ + 1 − µµ1
N
µ
ρ
ErlC(N, ρ)
ErlC(N, ρ)
µ
ρ
2 ∂
2 ∂
=−
1−
I
I
1−
1+
1+
1−
=
N − ρ ∂µ1
N
µ1
N − ρ ∂µ1
µ1
N − ρ + 1 − µ1
N − ρ + 1 − µ1
µ
µ
Applying the product rule, and simplifying the expression, we get (5). Next, for convenience, we
rewrite (5) as:
ErlC(N, ρ)
I2
µ1 µ1
N − ρ ∂I
ErlC(N, ρ)
+ 1−
= 2 1 +
λ ∂µ1
µ1
µ
µ N − ρ + 1 − µ1 2
N − ρ + 1 − µµ1
µ
(EC.10)
Differentiating this equation once more with respect to µ1 by applying the product rule, we get:
ErlC(N, ρ)
ErlC(N, ρ)
µ1 µ1
+ 1−
µ
µ N − ρ + 1 − µ1 2
N − ρ + 1 − µµ1
µ
I2 ∂
ErlC(N, ρ)
ErlC(N, ρ)
µ1 µ1
+ 1−
+ 2
2
1 +
µ1
µ1
µ1 ∂µ1
µ
µ
N − ρ+1− µ
N − ρ+1− µ
2I ∂I
ErlC(N, ρ)
2I 2 µ21 N − ρ ∂I
I2 ∂
µ1 µ1
ErlC(N, ρ)
+ 1−
=
− 3
+ 2
1 +
µ21 ∂µ1
µ1
I 2 λ ∂µ1
µ1 ∂µ1
µ
µ N − ρ + 1 − µ1 2
N − ρ + 1 − µµ1
µ
N − ρ ∂2I
=
λ ∂µ21
2I ∂I
2I 2
− 3
2
µ1 ∂µ1
µ1
1 +
Applying the product rule for the second term, and simplifying the expression, we get:
2
∂2I
=
∂µ21
I
∂I
∂µ1
2
2
−
µ1
∂I
∂µ1
1
λ
ErlC(N, ρ)
2I 2
1 + 1 − µ1
−
2
µ1 µ2 N − ρ N − ρ + 1 − µ1
µ N − ρ+1−
µ
The expression in (6) is then obtained by substituting for
some incredibly messy (but straightforward) algebra.
∂I
∂µ1
µ1
µ
from (5), and carefully going through
Proof of Theorem 3.
In order to prove this theorem, we make the transformation
t = ρ + 1 − µ_1/µ.    (EC.11)
For example, when µ_1 equals its lower bound µ̲_1 = (λ − (N − 1)µ)⁺, t equals its upper bound t̄ = min(ρ + 1, N). Using this transformation,
the µI1 term that appears in the beginning of the expression for the second derivative of the idle
time (6) can be written in terms of t as follows.
I/µ_1 = (N − ρ)(N − t) / (µ g(t)),  where

g(t) = N(N − t)(ρ + 1 − t) − ρ(ρ − t)(N − t + ErlC(N, ρ)).
Note that g(t) > 0, since I > 0, N > ρ, and from stability, N > t. Substituting this in (6), and
using (EC.11) to complete the transformation, we get the following expression for the second
derivative of the idle time in terms of t.
∂²I/∂µ_1² = H(t) = − 2λ(N − ρ)² f(t) / (µ³ g³(t)),

where we use the notation g³(t) to denote (g(t))³, and

f(t) = ((N − t)² − ρ ErlC(N, ρ))(N − t + ErlC(N, ρ)) + (N − (ρ − t)²)(ρ + 1 − t) ErlC(N, ρ).
In order to prove the theorem, we now need
to show that
(a) There exists a threshold t† ∈ (−∞, t̄] such that H(t) < 0 for −∞ < t < t†, and H(t) > 0 for t† < t < t̄.
(b) H(t) > 0 ⇒ H ′ (t) > 0.
To show these statements, we prove the following three properties of f and g.
• f (t) is a decreasing function of t.
• g(t) is a decreasing function of t.
• f (0) > 0.
In what follows, for convenience, we denote ErlC(N, ρ) simply by C. Differentiating f(t), we get

f′(t) = −((N − t)² − ρC) − 2(N − t)(N − t + C) − (N − (ρ − t)²)C + 2(ρ − t)(ρ + 1 − t)C
      = −3((N − t)² + (−(ρ − t)² + (N − ρ))C)
      = −3((N − t)²(1 − C) + ((N − t)² − (ρ − t)² + (N − ρ))C)
      = −3((N − t)²(1 − C) + ((N − t + ρ − t)(N − ρ) + (N − ρ))C)
      = −3((N − t)²(1 − C) + (N − t + ρ + 1 − t)(N − ρ)C)
      < 0.
when 0 < ρ < N . This shows that f (t) is a decreasing function of t. Next, differentiating g(t), we
get
g ′ (t) = −N (N − t) − N (ρ + 1 − t) + ρ(ρ − t) + ρ(N − t + C)
= −N (N − t + ρ + 1 − t) + ρ(ρ + 1 − t) + ρ(N − t) − ρ(1 − C)
= −(N − ρ)(N − t + ρ + 1 − t) − ρ(1 − C)
<0
The last step follows by noting that N − t > 0, ρ + 1 − t ≥ 0, N − ρ > 0, and 0 < ErlC(N, ρ) < 1
when 0 < ρ < N . This shows that g(t) is a decreasing function of t. Finally, evaluating f (0), we get
f (0) = (N 2 − ρC)(N + C) + (N − ρ2 )(ρ + 1)C
= N 3 − ρ3 C + N 2 C − ρ2 C + N C − ρC 2
= (N 3 − ρ3 ) + ρ3 (1 − C) + (N 2 − ρ2 )C + (N − ρ)C + ρC(1 − C)
>0
The last step follows by noting that N − ρ > 0, and 0 < ErlC(N, ρ) < 1 when 0 < ρ < N .
We are now ready to prove the statements (a-b).
(a) First, note that because f(t) is decreasing and f(0) > 0, there exists a threshold t† ∈ (0, t̄] such that f(t) > 0 for −∞ < t < t† and f(t) < 0 for t† < t < t̄. (Note that if f(t̄) > 0, then we let t† = t̄ so that f(t) < 0 on an empty interval.) Next, since g(t) > 0 for all t ∈ (−∞, t̄), the sign of H(t) is simply the opposite of the sign of f(t). Statement (a) now follows directly.
(b) Statement (b) is equivalent to showing that f(t) < 0 ⇒ H′(t) > 0. Differentiating H(t), we get

H′(t) = − (2λ(N − ρ)²/µ³) · (g³(t)f′(t) − 3f(t)g²(t)g′(t)) / g⁶(t)
      = − (2λ(N − ρ)²/µ³) · (g(t)f′(t) − 3f(t)g′(t)) / g⁴(t).

Since g(t) > 0, f′(t) < 0, and g′(t) < 0, it follows that H′(t) > 0 whenever f(t) < 0.
This concludes the proof.
Proof of Theorem 4. The “only if” direction is straightforward. Briefly, it follows from the
fact that, by definition, any symmetric equilibrium µ∗ > Nλ must be an interior global maximizer
of U (µ1 , µ⋆ ) in the interval µ1 ∈ ( Nλ , ∞).
The “if” direction requires more care. We first show that the utility function U (µ1 , µ⋆ ) inherits
the properties of the idle time function I(µ1 , µ⋆ ) as laid out in Theorem 3, and then consider the
λ
two cases when it is either increasing or decreasing ath µ1 = N
.
†
Recall that U (µ1 , µ⋆ ) = I(µ1 , µ⋆ ) − c(µ1 ). Let µ1 ∈ µ1 , ∞ be the threshold of Theorem 3. We
subdivide the interval (µ1 , ∞) as follows, in order to analyze U (µ1 , µ⋆ ).
• Consider the interval (µ1 , µ†1 ), where, from Theorem 3, we know that I ′′′ (µ1 , µ⋆ ) < 0. Therefore,
U ′′′ (µ1 , µ⋆ ) = I ′′′ (µ1 , µ⋆ ) − c′′′ (µ1 ) < 0. This means that U ′′ (µ1 , µ⋆ ) is decreasing in this interval.
(Note that this interval could be empty, i.e., it is possible that µ†1 = µ1 .)
• Consider the interval (µ†1 , ∞), where, from Theorem 3, we know that I ′′ (µ1 , µ⋆ ) < 0. Therefore,
U ′′ (µ1 , µ⋆ ) = I ′′ (µ1 , µ⋆ ) − c′′ (µ1 ) < 0. This means that U (µ1 , µ⋆ ) is concave in this interval.
Thus, the utility function U (µ1 , µ⋆ ), like the idle time function I(µ1 , µ⋆ ), may start out as a convex
function at µ1 = µ1 , but it eventually becomes concave, and stays concave thereafter. Moreover,
because the cost function c is increasing and convex, limµ1 →∞ U (µ1 , µ⋆ ) = −∞, which implies that
U (µ1 , µ⋆ ) must eventually be decreasing concave.
We now consider two possibilities for the behavior of U (µ1 , µ⋆ ) in the interval ( Nλ , ∞):
λ
. If µ†1 > Nλ (see Figure 1(a)), U (µ1 , µ⋆ ) would
Case (I): U(µ1 , µ⋆ ) is increasing at µ1 = N
start out being increasing convex, reach a rising point of inflection at µ1 = µ†1 , and then become
increasing concave. (Otherwise, if µ†1 ≤ Nλ , U (µ1 , µ⋆ ) would just be increasing concave to begin
with.) It would then go on to attain a (global) maximum, and finally become decreasing concave.
This means that the unique stationary point of U (µ1 , µ⋆ ) in this interval must be at this (interior)
global maximum. Since U ′ (µ⋆ , µ⋆ ) = 0 (from the symmetric first order condition (9)), µ1 = µ⋆ must
be the global maximizer of the utility function U (µ1 , µ⋆ ), and hence a symmetric equilibrium.
ec6
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
λ
Case (II): U(µ1 , µ⋆ ) is decreasing at µ1 = N
. Because U (µ⋆ , µ⋆ ) ≥ U ( Nλ , µ⋆ ), U (µ1 , µ⋆ ) must
eventually increase to a value at or above U ( Nλ , µ⋆ ), which means it must start out being decreasing
convex (see Figure 1(b)), attain a minimum, then become increasing convex. It would then follow
the same pattern as in the previous case, i.e., reach a rising point of inflection at µ1 = µ†1 , and then
become increasing concave, go on to attain a (global) maximum, and finally become decreasing
concave. This means that it admits two stationary points – a minimum and a maximum. Since
U ′ (µ⋆ , µ⋆ ) = 0 (from the symmetric first order condition (9)) and U (µ⋆ , µ⋆ ) ≥ U ( Nλ , µ⋆ ), µ1 = µ⋆
must be the (global) maximizer, and hence a symmetric equilibrium.
U (µ1 , µ⋆ )
U (µ1 , µ⋆ )
•
•
•
•
µ†
µ1
µ⋆
λ/N
µ†
µ⋆
µ1
λ/N
(a) Case (I)
(b) Case (II)
Figure EC.1. The graphic depiction of the proof of Theorem 4.
Note that if U (µ1 , µ⋆ ) is stationary at µ1 = Nλ , it could either start out increasing or decreasing
in the interval ( Nλ , ∞), and one of the two cases discussed above would apply accordingly.
Finally, to conclude the proof, note that (10) is equivalent to the inequality U (µ⋆ , µ⋆ ) ≥ U ( Nλ , µ⋆ ),
obtained by plugging in and evaluating the utilities using (7) and (4). This completes the proof.
Proof of Theorem 5.
The symmetric first order condition (9) can be rewritten as
λ
ErlC N,
µ
= µ2 c′ (µ)
N2 λ
+ −N
λ
µ
It suffices to show that
if λc′ Nλ < 1, then the left hand side and the right hand side intersect at
least once in Nλ , ∞ . We first observe that the left hand side, the Erlang-C function, is shown to
be convex and increasing in ρ = µλ (pages 8 and 11 of [43]). This means that it is decreasing and
convex in µ. Moreover, ErlC(N, N ) = 1 and ErlC(N, 0) = 0, which means that the left hand side
decreases from 1 to 0 in a convexfashion as µ runs from Nλ to ∞. The right hand side is clearly
∞. Therefore,
convex in µ, and is equal to λc′ Nλ when µ = Nλ , and approaches ∞ as µ approaches
if λc′ Nλ < 1, then the two curves must intersect at least once in Nλ , ∞ .
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec7
2
Next, it is sufficient to show that if 2 Nλ c′ Nλ + Nλ c′′ Nλ ≥ 1, then the right hand side is
non-decreasing in µ. In order to do so, it suffices to show that
∂
N2 λ
2 ′
+ −N ≥0
µ c (µ)
∂µ
λ
µ
N2
λ
− 2 ≥0
⇔ µ2 c′′ (µ) + 2µc′ (µ)
λ
µ
2
Nµ
⇔ µ2 c′′ (µ) + 2µc′ (µ)
≥1
λ
The left hand side is a non-decreasing function of µ, therefore, in the interval Nλ , ∞ , we have
2
Nµ 2
λ
λ
λ ′ λ
′′
µ c (µ) + 2µc (µ)
≥
c
+2 c
≥ 1.
λ
N
N
N
N
2 ′′
′
This completes the proof.
Proof of Theorem 6. The utility at any symmetric point can be evaluated as U (µ, µ) = 1 −
− c(µ), using (7) and (4). Therefore, it follows that showing U (µ⋆1 , µ⋆1 ) > U (µ⋆2 , µ⋆2 ) is equivalent
to showing that
c(µ⋆1 ) − c(µ⋆2 )
λ
<
.
⋆
⋆
µ1 − µ2
N µ⋆1 µ⋆2
λ
Nµ
The function c is convex by assumption. It follows that
c(µ⋆1 ) − c(µ⋆2 ) ≤ (µ⋆1 − µ⋆2 )c′ (µ⋆1 ).
(EC.12)
Therefore, rearranging and substituting for c′ (µ⋆1 ) from the symmetric first order condition (9),
λ
λ
λ
c(µ⋆1 ) − c(µ⋆2 )
≤ 2 ⋆ 2 N − ⋆ + ErlC N, ⋆
.
µ⋆1 − µ⋆2
N (µ1 )
µ1
µ1
λ
It has been shown (page 14 of [43], and [27]) that ErlC N, µ < Nλµ . Using this,
c(µ⋆1 ) − c(µ⋆2 )
λ
< 2 ⋆ 2
⋆
⋆
µ1 − µ2
N (µ1 )
λ
λ
λ
λ
1
.
<
N − ⋆ 1−
< 2 ⋆ 2 (N ) =
⋆ 2
µ1
N
N (µ1 )
N (µ1 )
N µ⋆1 µ⋆2
This completes the proof.
PROOFS FROM SECTION 4
Proof of Proposition 1.
We first observe that if f (λ) = ω(λ), then
C ⋆,λ (N λ )
Nλ
≥ cS
→ ∞ as λ → ∞.
λ
λ
Since Proposition 2 evidences a staffing policy under which C ⋆,λ (N λ )/λ has a finite limit, having
f (λ) = ω(λ) cannot result in an asymptotically optimal staffing policy.
Next, we consider the case f (λ) = o(λ). In this case, λ/N λ → ∞. Since any symmetric equilibrium
must have µ⋆,λ > λ/N λ from (3), it follows that if there exists a sequence of symmetric equilibria
{µ⋆,λ }, then µ⋆,λ → ∞ as λ → ∞. We conclude that such a staffing policy cannot be admissible.
ec8
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
Proof of Theorem 7.
We can rewrite (16) as
f (µ) = g(µ)
where
f (µ) =
µ2
1
1
and g(µ) = 2 c′ (µ) + .
a
a
µ
The two cases of interest (i) and (ii) are as shown in Figure EC.2. Our strategy for the proof is to
rewrite (14) in terms of functions f λ and g λ that are in some sense close to f and g. Then, in case
(i), the fact that g(µ) lies below f (µ) for µ ∈ [µ1 , µ2 ] implies that f λ and g λ intersect (at least)
twice. The case (ii) is more delicate, because the sign of o(λ) determines if the functions f λ and
g λ will cross (at least) twice or not at all. (We remark that it will become clear in that part of the
proof where the condition o(λ) < −3 is needed.)
1
a
+ c′ (a)
1
a
g(µ)
•
1
a
•
+ c′ (a)
g(µ)
f (µ)
•
1
a
0
µ1
µ2
µ
0
a
µ1
f (µ)
µ
a
(a) Case (i)
(b) Case (ii)
Figure EC.2. The limiting first order condition (16).
The first step is to rewrite (14) as
f λ (µ) = g λ (µ)
where
1
Nλ
λ λ
f (µ) = ErlC N ,
+
λ
µ
λ
λ 2
N
1
g λ (µ) = µ2 c′ (µ)
+ .
λ
µ
λ
The function g λ converges uniformly on compact sets to g since for any µ > 0, substituting for N λ
in (13) shows that
2 !
2
o(λ)
o(λ)
sup g λ (µ) − g(µ) ≤ µ2 c′ (µ)
→ 0,
(EC.13)
+
a λ
λ
µ∈[0,µ]
as λ → ∞. Next, recall ErlC(N, ρ) ≤ 1 whenever ρ/N < 1. Since
1
o(λ)
λ
1
λ
λ + o(λ),
+
f (µ) − f (µ) ≤ ErlC
λ
a
µ
λ
(EC.14)
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec9
and ErlC(λ/a + o(λ), λ/µ) ≤ 1 for all µ > a for all large enough λ, the function f λ converges
uniformly to f on any compact set [a + ǫ, µ] with µ > a + ǫ and ǫ arbitrarily small. The reason we
need only consider compact sets having lower bound a + ǫ is that it is straightforward to see any
solution to (16) has µ > a. It is also helpful to note that g λ is convex in µ because
1
d2 λ
g (µ) = 2c′ (µ) + 4µc′′ (µ) + µ2 c′′′ (µ) + 2 3 > 0 for µ ∈ (0, ∞),
2
dµ
µ
and f λ is convex decreasing in µ because ErlC(N, ρ) is convex increasing in ρ (pages 8 and 11
of [43]).
We prove (i) and then (ii).
Proof of (i): There exists µm ∈ (µ1 , µ2 ) for which f (µm ) > g(µm ). Then, it follows from (EC.13)
and (EC.14) that f λ (µm ) > g λ (µm ) for all large enough λ. Also,
lim f λ (µ) =
µ→∞
and
1
< lim g λ (µ) = ∞.
a µ→∞
1 Nλ
< gλ
lim f (µ) = +
λ
λ
µ↓λ/N λ
λ
λ
Nλ
=c
′
λ
Nλ
+
Nλ
λ
for all large enough λ, where the inequality follows because c is strictly increasing. Since f λ is
convex decreasing and g λ is convex, we conclude that there exist two solutions to (14).
Proof of (ii): We prove part (a) and then part (b). Recall that µ1 is the only µ > 0 for which
f (µ1 ) = g(µ1 ).
Proof of (ii)(a): For part (a), it is enough to show that for all large enough λ,
f λ (µ1 ) − g λ (µ1 ) > 0.
(EC.15)
The remainder of the argument follows as in the proof of part (i).
From the definition of f λ and g λ in the second paragraph of this proof, and substituting for N λ ,
f λ (µ1 ) − g λ (µ1 )
2
µ 2
1
2
1
o(λ)
1
λ
1
o(λ)
1
= −
1 − (µ1 )2 c′ (µ1 ) − (µ1 )2 c′ (µ1 )
c′ (µ1 ) + ErlC
+
−
λ + o(λ),
.
a µ1
a
λ
a
µ1
λ
a
λ
It follows from f (µ1 ) = g(µ1 ) that 1/a − 1/µ1 − (µ1 /a)2 c′ (µ1 ) = 0, and so, also noting that
ErlC(λ/a + o(λ), λ/µ1 ) > 0,
2
o(λ)
o(λ)
2
2 ′
2 ′
f (µ1 ) − g (µ1 ) >
.
1 − (µ1 ) c (µ1 ) − (µ1 ) c (µ1 )
λ
a
λ
λ
λ
(EC.16)
Again using the fact that f (µ1 ) = g(µ1 ),
1
1
a
2
2 ′
−
= −1 + 2 .
1 − (µ1 ) c (µ1 ) = 1 − 2a
a
a µ1
µ1
Then, the term multiplying o(λ)/λ in (EC.16) is positive if
−1+2
a
> 0,
µ1
which implies (EC.15) holds for all large enough λ.
(EC.17)
ec10
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
To see (EC.17), and so complete the proof of part (ii)(a), note that since µ1 solves (16), and the
left-hand side of (16) is convex increasing while the right-hand side is concave increasing, µ1 also
solves the result of differentiating (16), which is
Algebra shows that
1
1
= 2 µ21 c′′ (µ1 ) + 2µ1 c′ (µ1 ) .
2
µ1 a
2
1
µ1 ′
µ31 ′′
−2
c
(µ
)
=
c (µ1 ).
1
µ1
a2
a2
We next use (16) to substitute for
µ2
1 ′
c (µ1 )
a2
to find
3
2 µ3
− = 21 c′′ (µ1 ).
µ1 a a
Since c is convex,
2
3
− ≥ 0,
µ1 a
and so 1.5a ≥ µ1 , from which (EC.17) follows.
Proof of (ii)(b): Let µλ ∈ (0, ∞) be the minimizer of the function g λ . The minimizer exists because
g λ is convex and
Nλ 2
1
d λ
′
2 ′′
g (µ) = µc (µ) + µ c (µ)
− 2,
dµ
λ
µ
which is negative for all small enough µ, and positive for all large enough µ. It is sufficient to show
that for all large enough λ
g λ (µ) − f λ(µ) > 0 for all µ ∈ [a, µλ ].
(EC.18)
This is because for all µ > µλ , g λ is increasing and f λ is decreasing.
Suppose we can establish that for all large enough λ
2
ǫλ
1
≥
− , for all µ ∈ [a, µλ ],
µ 3a 2a
(EC.19)
where ǫλ satisfies ǫλ → 0 as λ → ∞. Since g(µ) ≥ f (µ) for all µ, it follows that
λ 2
λ 2
N
a2
N
1
1
g λ (µ) = µ2 c′ (µ)
+ ≥ a−
+ .
λ
µ
µ
λ
µ
Substituting for N λ and algebra shows that
λ 2
2
o(λ)
N
o(λ)
1 1
a
a
a2
.
+ = +
2 1−
+a 1−
a−
µ
λ
µ a
λ
µ
µ
λ
Then, from the definition of f λ and the above lower bound on g λ , also using the fact that the
assumption N λ − λ/a < 0 implies the term o(λ) is negative,
2
o(λ) 2a
1
λ
o(λ)
a
g λ (µ) − f λ(µ) ≥
.
− 1 − ErlC N λ ,
+a 1−
λ
µ
λ
µ
µ
λ
Since −ErlC(N λ , λ/µ) > −1 and 1/a − 1/µ > 0 from (16) implies 1 − a/µ > 0,
1
2a
λ
λ
g (µ) − f (µ) ≥
−1 −1 .
|o(λ)|
λ
µ
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
Next, from (EC.19),
and so
ec11
2a 4
≥ − ǫλ ,
µ
3
1
1
λ
g (µ) − f (µ) ≥
−ǫ −1 .
|o(λ)|
λ
3
λ
λ
The fact that |o(λ)| > 3 and ǫλ → 0 then implies that for all large enough λ, (EC.18) is satisfied.
Finally, to complete the proof, we show that (EC.19) holds. First note that µλ as the minimizer
of g λ satisfies
Nλ 2
1
′
2 ′′
2µc (µ) + µ c (µ)
− 2 = 0,
λ
µ
and that solution is unique and continuous in λ. Hence µλ → µ1 as λ → ∞. Then,
g λ (µλ ) → g(µ1 ) =
1
as λ → ∞.
a
Furthermore, g λ (µλ ) approaches g(µ1 ) from above; i.e.,
g λ (µλ ) ↓
1
as λ → ∞,
a
because, recalling that the term o(λ) is negative,
g λ (µ) = µ2 c′ (µ)
1 o(λ)
−
a
λ
2
+
1
> g(µ) for all µ > 0.
µ
Therefore, there exists ǫλ → 0 such that
g λ (µλ ) =
1 3 ǫλ
−
,
a 4a
where the 3/(4a) multiplier of ǫλ is chosen for convenience when obtaining the bound in the previous
paragraph. Finally,
1
1
≥ λ
µ µ
means that (EC.19) follows if
1
2 1 1 ǫλ
2 λ λ
g
(µ
)
=
−
.
≥
µλ 3
3a 2 a
To see the above display is valid, note that µλ solves
′
g λ (µ) = 0,
which from algebra is equivalent to
2g λ (µλ ) −
Hence
λ 2
N
3
λ 3 ′′
λ
+
µ
c
(µ
)
= 0.
µλ
λ
2g λ (µλ ) −
as required.
3
≤ 0,
µλ
ec12
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
Proof of Lemma 1. It is enough to show the inequality (10) of Theorem 4 holds. The function
c is convex by assumption. It follows that
λ
λ
λ
c(µλ) − c
≤
µ
−
c′ (µλ ).
(EC.20)
Nλ
Nλ
Plugging in for c′ (µλ ) from the symmetric first order condition (14) yields (after algebra)
ErlC (N λ , λ/µλ )
λ
λ
λ
λ
′
λ
λ
1− λ λ +
.
µ − λ c (µ ) = λ λ 1 − λ λ
N
µ N
µ N
µ N
Nλ
Hence, in order to show the inequality (10) is true, also substituting for ρλ = λ/µλ , it is enough to
verify that
λ
µλ N λ
λ
1− λ λ
µ N
⇐⇒
ErlC N λ , λ/µλ
λ
1− λ λ +
µ N
Nλ
1+
!
1
1−
λ
µλ N λ
+
ErlC(N λ ,λ/µλ )
N λ −1
Since N λ − 1 < N λ , it is enough to show that
λ
≤ 1− λ λ
µ N
≤
λ
µλ N λ
λ
ErlC(N λ , λ/µλ )
1+ 1− λ λ +
µ N
N −1
1
.
λ ,λ/µλ )
λ
1 − µλ N λ + ErlC(N
λ
N
−1 !−1
1
ErlC(N λ ,λ/µλ )
λ
λ
1 − µλλN λ +
1
−
+
λ
N
µλ N λ
µλ N λ
Nλ
1 − µλλN λ
⇐⇒
1≤
ErlC(N λ ,λ/µλ )
λ
λ
1
−
+
λ
λ
λ
λ
λ
µ N
µ N
N
λ
λ
λ ErlC(N λ , λ/µλ )
λ
≤ 1− λ λ
1− λ λ + λ λ
⇐⇒
µλ N λ
µ N
µ N
Nλ
µ N
2
λ λ
λ
λ
ErlC(N , λ/µ )
N µ
⇐⇒
−1 .
≤
λ/µλ
λ
1
1+
ErlC(N λ ,λ/µλ )
≤
Since N λ µλ /λ → d > 1 by assumption, the limit of the right-hand side of the above expression is
positive, and, since and ErlC(N λ , λ/µλ ) ≤ 1, the limit of the left-hand side of the above expression
is 0. We conclude that for all large enough λ, the above inequality is valid.
Proof of Proposition 2.
Let
µ⋆ = arg min{µ > 0 : (16) holds }.
Next, recalling that µ⋆ > a, also let
µ = µ⋆ −
1 ⋆
µ − a > a,
2
so that the system is stable if all servers were to work at rate µ (λ < µN λ for all large enough
λ). It follows from Theorem 7 that, for all large enough λ, any µλ that satisfies the first order
condition (14) also satisfies µλ > µ. Hence any symmetric equilibrium µ⋆,λ must also satisfy µ⋆,λ > µ
for all large enough λ, and so
⋆,λ
λ
W < W µ.
Therefore, also using the fact that W
cS
⋆,λ
> 0, it follows that
Nλ
N λ C ⋆,λ (N λ )
Nλ
⋆,λ
λ
<
= cS
+ wW < cS
+ wW µ .
λ
λ
λ
λ
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec13
Then, since N λ /λ → 1/a as λ → ∞ from (13), it is sufficient to show
λ
W µ → 0 as λ → ∞.
This follows from substituting the staffing N λ = λ/a + o(λ) in (13) into the well-known formula for
the steady state mean waiting time in a M /M /N λ queue with arrival rate λ and service rate µ as
follows
λ/µ
1
λ
λ λ
Wµ =
ErlC
N
,
λ N λ − λ/µ
µ
1/µ
λ
=
ErlC N λ ,
µ
1/a − 1/µ λ + o(λ)
→ 0, as λ → ∞,
since ErlC(N λ , λ/µ) ∈ [0, 1] for all λ.
Proof of Lemma 2. It follows from the equation
a(µ − a) = µ3 c′ (µ)
that
+ p
µ
′
a=
1 − 1 − 4µc (µ) .
2
The condition 4µc′ (µ) ≤ 1 is required to ensure that there is a real-valued solution for a. Hence
+ p
µ
′
′
1 − 1 − 4µc (µ) : 0 ≤ 4µc (µ) ≤ 1 .
A=
2
Since c′ (µ) is well-behaved, this implies that A is compact, and, in particular, closed. We conclude
that a⋆ = sup A ∈ A, which implies that a⋆ is finite.
Proof of Theorem 8.
It follows from Proposition 1 that
0 ≤ lim inf
λ→∞
N opt,λ
N opt,λ
≤ lim sup
< ∞,
λ
λ
λ→∞
because any staffing policy that is not asymptotically optimal also is not optimal for each λ. Consider any subsequence λ′ on which either lim inf λ→∞ N opt,λ /λ or lim supλ→∞ N opt,λ /λ is attained,
and suppose that
′
1
N opt,λ
(EC.21)
→ as λ′ → ∞, where a ∈ [0, ∞).
λ′
a
The definition of asymptotic optimality requires that for each λ′ , there exists a symmetric equi′
librium service rate µ⋆,λ . As in the proof of Lemma 1, it is enough to consider sequences {µλ }
that satisfy the first order condition (14). Then, by the last sentence of Theorem 7, any sequence
′
′
of solutions {µλ } to (14) must be such that |µλ − µ| is arbitrarily small, for λ′ large enough, for
some µ that solves (16), given a in (EC.21). In summary, the choice of a in (EC.21) is constrained
by the requirement that a symmetric equilibrium service rate must exist.
Given that there exists at least one symmetric equilibrium service rate for all large enough λ′ ,
it follows in a manner very similar to the proof of Proposition 2 that
W
⋆,λ′
→ 0 as λ′ → ∞,
ec14
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
even though when there are multiple equilibria we may not be able to guarantee which symmetric
′
equilibrium µ⋆,λ the servers choose for each λ′ . We conclude that
′
′
′
C ⋆,λ (N opt,λ )
N opt,λ
1
⋆,λ′
=
c
+ wW
→ cS , as λ′ → ∞.
S
′
′
λ
λ
a
(EC.22)
We argue by contradiction that a in (EC.22) must equal a⋆ . Suppose not. Then, since
C ⋆,λ (N ao,λ )
1
→ cS ⋆ as λ → ∞
λ
a
by Proposition 2 (and so the above limit is true on any subsequence), and a⋆ > a by its definition,
it follows that
′
′
′
′
C ⋆,λ (N ao,λ ) < C ⋆,λ (N opt,λ ) for all large enough λ′ .
′
The above inequality contradicts the definition of N opt,λ .
The previous argument did not depend on if λ′ was the subsequence on which lim inf λ→∞ N opt,λ /λ
or lim supλ→∞ N opt,λ /λ was attained. Hence
N opt,λ
1
= ⋆,
λ→∞
λ
a
lim
and, furthermore,
1
C ⋆,λ (N opt,λ )
= cS ⋆ .
λ→∞
λ
a
lim
Since also
,
1
C ⋆ λ (N ao,λ )
= cS ⋆ ,
lim
λ→∞
λ
a
the proof is complete.
Proof of Lemma 3. We first observe that (16) is equivalently written as:
0 = cE pµp+2 − aµ + a2 .
The function
f (µ) = cE pµp+2 − aµ + a2
attains its minimum value in (0, ∞) at
µ=
a
cE p(p + 2)
1/(p+1)
.
The function f is convex in (0, ∞) because f ′′ (µ) > 0 for all µ ∈ (0, ∞) and so µ is the unique
minimum. It follows that
<
there are 2 non-negative solutions to (16)
there is no non-negative solution to (16) .
if f (µ) > 0, then
there is exactly one solution to (16)
=
Since
p+2
p+2
f (µ) = a p+1 a2− p+1 − △
for
△ :=
1
cE p(p + 1)
1
p+1
1−
1
cE p
p+1
1
p+2
p+2 !
> 0,
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec15
it follows that
p
if a p+1
<
there are 2 non-negative solutions to (16)
there is no non-negative solution to (16) .
− △ > 0, then
there is exactly one solution to (16)
=
The expression for △ can be simplified so that
(p + 1)
△=
(p + 2)
1
cE p(p + 2)
1
p+1
.
Then, a⋆ follows by noting that a⋆ = △(p+1)/p and µ⋆ follows by noting that µ⋆ = µ and then
substituting for a⋆ .
To complete the proof, we must show that a⋆ and µ⋆ are both increasing in p. This is because we
have already observed that any solution to (16) has a < µ, and the fact that µ < 1 follows directly
from the expression for µ. We first show a⋆ is increasing in p, and then argue that this implies µ⋆
is increasing in p.
To see that a⋆ is increasing in p, we take the derivative of log a⋆ (p) and show that this is positive.
Since
log a⋆ (p) = log(p + 1) − log(p + 2) +
1
log(p + 1)
p
1
2
1
− log cE − log p − log(2 + p),
p
p
p
it follows that
1
p/(p + 1) − log(p + 1)
1
1
(log a (p)) =
−
+
+ 2 log cE
2
p+1 p+2
p
p
!
p
p
− log(p)
− log(p + 2)
p
p+2
−2
.
−
2
p
p2
⋆
′
After much simplification, we have
1
1
(log a (p)) = 2 log cE + 2
p
p
⋆
′
p(p + 2)2
p2 + p + 4
log
−
.
p+1
(p + 1)(p + 2)
Hence it is enough to show that
△(p) = log
p(p + 2)2
p+1
−
p2 + p + 4
≥ 0, for p ≥ 1.
(p + 1)(p + 2)
This follows because the first term is increasing in p, and has a value that exceeds 1 when p = 1;
on the other hand, the second term has a value that is strictly below 1 for all p ≥ 1.
Finally, it remains to argue that µ⋆ is increasing in p. At the value µ = µ⋆
g(µ) = µ3 c′ (µ) − aµ + a2 = 0.
At the unique point where the minimum is attained, it is also true that
g ′ (µ) = µ3 c′′ (µ) + 3µ2 c′ (µ) − a = 0.
Since µ3 c′′ (µ) + 3µ2 c′ (µ) is an increasing function of µ, it follows that if a increases, then µ must
increase.
ec16
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
PROOFS FROM SECTION 5
Proof of Theorem 9. It is sufficient to verify the detailed balance equations. For reference,
it is helpful to refer to Figure EC.3, which depicts the relevant portion of the Markov chain. We
require the following additional notation. For all I ⊆ {1, 2, . . . , N }, all states s = (s1 , s2 , . . . , s|I| ),
all servers s′ ∈ {1, 2, . . . , N }\I, and integers j ∈ {1, 2, . . . , |I| + 1}, we define the state s[s′ , j] by
s[s′ , j] ≡ (s1 , s2 , . . . , sj−1 , s′ , sj , . . . , s|I|).
We first observe that:
Rate into state s due to an arrival = λ
X
X |I|+1
′
πs[s′ ,j] pI∪{s } (j)
s′ 6∈I j=1
|I|
XX
µs′ πB Y µs I∪{s′ }
=λ
p
(j)
λ s∈I λ
s′ 6∈I j=0
Y µs X
X
=
µs′ πs
=
µs′ πB
λ
′
′
s∈I
s 6∈I
s 6∈I
= Rate out of state s due to a departure.
Then, to complete the proof, we next observe that for each s′ 6∈ I:
Rate into state s due to a departure = µs|I| π(s1 ,s2 ,...,s|I|−1 )
Y µs
= µs|I| πB
λ
s∈I\{s|I| }
s[s′ , 1]
s − s1
λp I
∪{
s′
}
I
λp
(1)
′
s[s′ , 2]
...
s + s′
λpI∪{s } (2)
(1)
λpI (2)
s
′
I
λp
s
∪{
}
|
(|I
µs ′
+1
)
λp I
(|I
|
µs|I|
)
s − s2
...
s − s|I|
For each s′ 6∈ I
Figure EC.3. Snippet of the Markov chain showing the rates into and out of state s = (s1 , . . . , s|I| ). For convenience, we use s − sj to denote the state (s1 , s2 , . . . , sj−1 , sj+1 , . . . , s|I| ) and s + s′ to denote the state s[s′ , |I| + 1] =
(s1 , s2 , . . . , s|I| , s′ ).
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec17
1 (1)
λp
λ
λ
µ2
µ1
0
λ
2
µ1
µ2
···
3
µ1 + µ2
µ1 + µ2
λ
λ(1 − p)
1 (2)
Figure EC.4. The M /M /2 Markov chain with probabilistic routing
= λπB
Y µs
s∈I
λ
= λπs
= Rate out of state s due to an arrival.
Proof of Proposition 3. In order to derive the steady state probability that a server is idle,
we first solve for the steady state probabilities of the M /M /2 system (with arrival rate λ and
service rates µ1 and µ2 respectively) under an arbitrary probabilistic routing policy where a job
that arrives to find an empty system is routed to server 1 with probabilityr p and server 2 with
1
probability 1 − p. Then, for an r-routing policy, we simply substitute p = µrµ+µ
r.
1
2
It should be noted that this analysis (and more) for 2 servers has been carried out by [34]. Prior
to that, [39] carried out a partial analysis (by analyzing an r-routing policy with r = 1). However,
we rederive the expressions using our notation for clarity.
The dynamics of this system can be represented by a continuous time Markov chain shown in
Figure EC.4 whose state space is simply given by the number of jobs in the system, except when
there is just a single job in the system, in which case the state variable also includes information
about which of the two servers is serving that job. This system is stable when µ1 + µ2 > λ and we
denote the steady state probabilities as follows:
• π0 is the steady state probability that the system is empty.
(j)
• π1 is the steady state probability that there is one job in the system, served by server j.
• For all k ≥ 2, πk is the steady state probability that there are k jobs in the system.
We can write down the balance equations of the Markov chain as follows:
(1)
(2)
λπ0 = µ1 π1 + µ2 π1
(1)
(λ + µ1 )π1 = λpπ0 + µ2 π2
(2)
(λ + µ2 )π1 = λ(1 − p)π0 + µ1 π2
(1)
(2)
(λ + µ1 + µ2 )π2 = λπ1 + λπ1 + (µ1 + µ2 )π3
∀k ≥ 3 :
(λ + µ1 + µ2 )πk = λπk−1 + (µ1 + µ2 )πk+1 ,
yielding the following solution to the steady state probabilities:
π0 =
µ1 µ2 (µ1 + µ2
)2
µ1 µ2 (µ1 + µ2 − λ)(µ1 + µ2 + 2λ)
+ λ(µ1 + µ2 )(µ22 + 2µ1 µ2 + (1 − p)(µ21 − µ22 )) + λ2 (µ21 + µ22 )
(EC.23)
ec18
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
λ(λ + p(µ1 + µ2 ))π0
µ1 (µ1 + µ2 + 2λ)
λ(λ
+ (1 − p)(µ1 + µ2 ))π0
(2)
.
π1 =
µ2 (µ1 + µ2 + 2λ)
(1)
π1 =
Consequently, the steady state probability that server 1 is idle is given by
λ(λ + (1 − p)(µ1 + µ2 ))
(2)
π0 .
I1 (µ1 , µ2 ; p) = π0 + π1 = 1 +
µ2 (µ1 + µ2 + 2λ)
Substituting for π0 , we obtain
I1 (µ1 , µ2 ; p) =
µ1 (µ1 + µ2 − λ) [(λ + µ2 )2 + µ1 µ2 + (1 − p)λ(µ1 + µ2 )]
. (EC.24)
µ1 µ2 (µ1 + µ2 )2 + λ(µ1 + µ2 ) [µ22 + 2µ1 µ2 + (1 − p)(µ21 − µ22 )] + λ2 (µ21 + µ22 )
Finally, for an r-routing policy, we let p =
I1r (µ1 , µ2 ) = I1 (µ1 , µ2 ; p =
µr1
µr1 +µr2
to obtain:
µr1
)
µr1 + µr2
h
i
r
2
µ1 (µ1 + µ2 − λ) (λ + µ2 )2 + µ1 µ2 + µrµ+µ
r λ(µ1 + µ2 )
1
2
h
i
=
.
µr2
2
2
µ1 µ2 (µ1 + µ2 ) + λ(µ1 + µ2 ) µ2 + 2µ1 µ2 + µr +µr (µ21 − µ22 ) + λ2 (µ21 + µ22 )
1
2
By symmetry of the r-routing policy, it can be verified that I2r (µ1 , µ2 ) = I1r (µ2 , µ1 ), completing the
proof.
Proof of
Theorem 10. We first highlight that when all servers operate at the same rate
µ ∈ Nλ , ∞ , both FSF and SSF are equivalent to Random routing. Henceforth, we refer to such
a configuration as a symmetric operating point µ. In order to prove that there does not exist a
symmetric equilibrium under either FSF or SSF, we show that at any symmetric operating point
µ, any one server can attain a strictly higher utility by unilaterally setting her service rate to be
slightly lower (in the case of FSF) or slightly higher (in the case of SSF) than µ.
We borrow some notation from the proof of Proposition 3 where we derived the expressions
for the steady state probability that a server is idle when there are only 2 servers under any
probabilistic policy, parameterized by a number p ∈ [0, 1] which denotes the probability that a job
arriving to an empty system is routed to server 1. Recall that I1 (µ1 , µ2 ; p) denotes the steady state
probability that server 1 is idle under such a probabilistic policy, and the corresponding utility
function for server 1 is U1 (µ1 , µ2 ; p) = I1 (µ1 , µ2 ; p) − c(µ1). Then, by definition, the utility function
for server 1 under FSF is given by:
U1 (µ1 , µ2 ; p = 0) , µ1 < µ2
F SF
U1 (µ1 , µ2 ) = U1 µ1 , µ2 ; p = 21
, µ1 = µ2
U1 (µ1 , µ2 ; p = 1) , µ1 > µ2 .
Similarly, under SSF, we have:
U1 (µ1 , µ2 ; p = 1)
SSF
U1 (µ1 , µ2 ) = U1 µ1 , µ2 ; p = 21
U1 (µ1 , µ2 ; p = 0)
, µ1 < µ2
, µ1 = µ2
, µ1 > µ2 .
Note that while the utility function under any probabilistic routing policy is continuous everywhere, the utility function under FSF or SSF is discontinuous at symmetric operating points. This
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec19
discontinuity turns out to be the crucial tool in the proof. Let the two servers be operating at a
symmetric operating point µ. Then, it is sufficient to show that there exists 0 < δ < µ − λ2 such that
U1F SF (µ − δ, µ) − U1F SF (µ, µ) > 0,
(EC.25)
U1SSF (µ + δ, µ) − U1F SF (µ, µ) > 0.
(EC.26)
and
We show (EC.25), and (EC.26) follows from a similar argument. Note that
1
F SF
F SF
U1 (µ − δ, µ) − U1 (µ, µ) = U1 (µ − δ,µ; p = 0) − U1 µ, µ; p =
2
= U1 (µ − δ,µ; p = 0) − U1 (µ, µ; p = 0)
1
+ U1 (µ, µ; p = 0) − U1 µ, µ; p =
2
Since the first difference, U1 (µ − δ, µ; p = 0) − U1 (µ, µ; p = 0), is zero when δ = 0, and is continuous
in δ, it is sufficient to show that the second difference, U1 (µ, µ; p = 0) − U1 (µ, µ; p = 12 ), is strictly
positive:
1
1
= I1 (µ, µ; p = 0) − I1 µ, µ; p =
U1 (µ, µ; p = 0) − U1 µ, µ; p =
2
2
λ(2µ − λ)
>0
using (EC.24) .
=
(µ + λ)(2µ + λ)
This completes the proof.
Proof of Theorem 11. The proof of this theorem consists of two parts. First, we show
that under any r-routing policy, any symmetric equilibrium µ⋆ ∈ ( λ2 , ∞) must satisfy the equation
ϕ(µ⋆ ) = r. This is a direct consequence of the necessary first order condition for the utility function
of server 1 to attain an interior maximum at µ⋆ . The second part of the proof involves using the
condition c′ ( λ2 ) < λ1 to show that ϕ is a strictly decreasing bijection onto R, which would lead to
the following implications:
• ϕ is invertible; therefore, if an r-routing policy admits a symmetric equilibrium, it is unique,
and is given by µ⋆ = ϕ−1 (r).
• ϕ−1 (r) is strictly decreasing in r; therefore, so is the unique symmetric equilibrium (if it exists).
Since the mean response time E[T ] is inversely related to the service rate, this establishes that
E[T ] at symmetric equilibrium (across r-routing policies that admit one) is increasing in r.
We begin with the first order condition for an interior maximum. The utility function of server
1 under an r-routing policy, from (2), is given by
U1r (µ1 , µ2 ) = I1r (µ1 , µ2 ) − c(µ1 )
For µ⋆ ∈ (λ/2, ∞) to be a symmetric equilibrium, the function U1r (µ1 , µ⋆ ) must attain a global
maximum at µ1 = µ⋆ . The corresponding first order condition is then given by:
∂I1r
(µ1 , µ⋆ )
∂µ1
= c′ (µ⋆ ),
(EC.27)
µ1 =µ⋆
where I1r is given by Proposition 3. The partial derivative of the idle time can be computed and
the left hand side of the above equation evaluates to
∂I1r
(µ1 , µ⋆ )
∂µ1
=
µ1 =µ⋆
λ(4λ + 4µ⋆ + λr − 2µ⋆ r)
.
4µ⋆ (λ + µ⋆ )(λ + 2µ⋆ )
(EC.28)
ec20
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
Substituting in (EC.27) and rearranging the terms, we obtain:
4(λ + µ⋆ )
(µ⋆ (λ + 2µ⋆ )c′ (µ⋆ ) − λ) = r.
λ(λ − 2µ⋆ )
The left hand side is equal to ϕ(µ⋆ ), thus yielding the necessary condition ϕ(µ⋆ ) = r.
Next, we proceed to show that if c′ ( λ2 ) < λ1 , then ϕ is a strictly decreasing bijection onto R. Note
that the function
4(λ + µ)
ϕ(µ) =
(µ(λ + 2µ)c′ (µ) − λ)
λ(λ − 2µ)
is clearly a continuous function in ( λ2 , ∞). In addition, it is a surjection onto R, as evidenced by
the facts that ϕ(µ) → −∞ as µ → ∞ and ϕ(µ) → ∞ as µ → λ2 + (using c′ ( λ2 ) < λ1 ).
To complete the proof, it is sufficient to show that ϕ′ (µ) < 0 for all µ ∈ ( λ2 , ∞). First, observe
that
4ψ(µ)
,
ϕ′ (µ) =
λ(λ − 2µ)2
where
ψ(µ) = µ(λ + µ)(λ2 − 4µ2 )c′′ (µ) + (λ3 + 6λ2 µ − 8µ3 )c′ (µ) − 3λ2 .
Since c′ ( λ2 ) < λ1 , as µ → λ2 +, ψ(µ) < 0. Moreover, since c′′′ (µ) > 0, for all µ > λ2 , we have
2 !
2 !
λ
λ
λ
c′′′ (µ) − 4 µ −
(λ2 + 6λµ + 6µ2)c′′ (µ) − 24 µ2 −
c′ (µ) < 0.
ψ ′ (µ) = −4µ(λ + µ) µ2 −
2
2
2
It follows that ψ(µ) < 0 for all µ > λ2 . Since ϕ′ (µ) has the same sign as ψ(µ), we conclude that
ϕ′ (µ) < 0, as desired.
Proof of Theorem 12. From Theorem 11, we know that if a symmetric equilibrium exists,
then it is unique, and is given by µ⋆ = ϕ−1 (r), where ϕ establishes a one-to-one correspondence
between r and µ⋆ (µ⋆ is strictly decreasing in r and vice versa). Therefore, it is enough to show
that there exists a finite upper bound µ > λ2 such that no service rate µ > µ can be a symmetric
equilibrium under any r-routing policy. It would then automatically follow that for r = ϕ(µ), no
r-routing policy with r ≤ r admits a symmetric equilibrium. We prove this by exhibiting a µ
and showing that if µ ≥ µ, then the utility function of server 1, U1r (µ1 , µ), cannot attain a global
maximum at µ1 = µ for any r ∈ R.
We begin by establishing a lower bound for the maximum utility U1r (µ1 , µ) that server 1 can
obtain under any r-routing policy:
λ
λ
λ
λ λ
λ
r
r
r
r
, µ = I1
,µ −c
,
≥ −c
= U1
.
(EC.29)
max U1 (µ1 , µ) ≥ U1
2
2
2
2
2 2
µ1 > λ
2
By definition, if µ⋆ is a symmetric equilibrium under any r-routing policy, then the utility function
of server 1, U1r (µ1 , µ⋆ ), is maximized at µ1 = µ⋆ , and hence, using (EC.29), we have
λ λ
U1r (µ⋆ , µ⋆ ) ≥ U1r ( , ).
2 2
(EC.30)
Next, we establish some properties on U1r (µ, µ) that help us translate this necessary condition for a
symmetric equilibrium into an upper bound on any symmetric equilibrium service rate. We have,
U1r (µ, µ) = 1 −
which has the following properties:
λ
− c(µ),
2µ
e-companion to Gopalakrishnan, Doroudi, Ward, and Wierman: Routing and Staffing when Servers are Strategic
ec21
• Since c′ ( λ2 ) < λ1 , U1r (µ, µ), as a function of µ, is strictly increasing at µ = λ2 .
• U1r (µ, µ) is a concave function of µ.
This means that U1r (µ, µ) is strictly increasing at µ = λ2 , attains a maximum at the unique µ† > λ2
that solves the first order condition µ2† c′ (µ† ) = λ2 , and then decreases forever. This shape of the
curve U1r (µ, µ) implies that there must exist a unique µ > µ† , such that U1r (µ, µ) = U1r ( λ2 , λ2 ).
Since U1r (µ, µ) is a strictly decreasing function for µ > µ† , it follows that if µ⋆ > µ, then,
r
U1 (µ⋆ , µ⋆ ) < U1r (µ, µ) = U1r ( λ2 , λ2 ), contradicting the necessary condition (EC.30). This establishes
the required upper bound µ on any symmetric equilibrium service rate, completing the proof.
Proof of Theorem 13. A useful tool for proving this theorem is Theorem 3 from [13], whose
statement we have adapted to our model:
Theorem EC.1. A symmetric game with a nonempty, convex, and compact strategy space, and
utility functions that are continuous and quasiconcave has a symmetric (pure-strategy) equilibrium.
We begin by verifying that our 2-server game meets the qualifying conditions of Theorem EC.1:
• Symmetry: First, all servers have the same strategy space of service rates, namely, ( λ2 , ∞).
Moreover, since an r-routing policy is symmetric and all servers have the same cost function,
their utility functions are symmetric as well. Hence, our 2-server game is indeed symmetric.
• Strategy space: The strategy space ( λ2 , ∞) is nonempty and convex, but not compact, as
required by Theorem EC.1. Hence, for the time being, we modify the strategy space to be
[ λ2 , µ + 1] so that it is compact, where µ is the upper bound on any symmetric equilibrium,
established in Theorem 12, and deal with the implications of this modification later.
• Utility function: U1r (µ1 , µ2 ) is clearly continuous. From Mathematica, it can be verified that the
idle time function I1r (µ1 , µ2 ) is concave in µ1 for r ∈ {−2, −1, 0, 1}, and since the cost function
is convex, this means the utility functions are also concave. (Unfortunately, we could not get
Mathematica to verify concavity for non-integral values of r, though we strongly suspect that
it is so for the entire interval [−2, 1].)
Therefore, we can apply Theorem EC.1 to infer that an r-routing policy with r ∈ {−2, −1, 0, 1}
admits a symmetric equilibrium in [ λ2 , µ+1]. We now show that the boundaries cannot be symmetric
equilibria. We already know from Theorem 12 that µ + 1 cannot be a symmetric equilibrium. (We
could have chosen to close the interval at any µ > µ. The choice µ + 1 was arbitrary.) To see that
λ
cannot be a symmetric equilibrium, observe that c′ ( λ2 ) < λ1 implies that U1r (µ1 , λ2 ) is increasing
2
at µ1 = λ2 (using the derivative of the idle time computed in (EC.28)), and hence server 1 would
have an incentive to deviate. Therefore, any symmetric equilibrium must be an interior point, and
from Theorem 11, such an equilibrium must be unique. This completes the proof.
| 3 |
Analyzing Adaptive Cache Replacement Strategies
Mario E. Consuegra2 , Wendy A. Martinez1 , Giri Narasimhan1 , Raju Rangaswami1 , Leo
Shao1 , and Giuseppe Vietri1
arXiv:1503.07624v2 [cs.DS] 24 Apr 2017
1
School of Computing and Information Sciences, Florida International University, Miami,
FL 33199, USA. {walem001,giri,raju,gviet001}@fiu.edu
2
Google Inc., Kirkland, WA, USA.
April 25, 2017
Abstract
Adaptive Replacement Cache (Arc) and CLOCK with Adaptive Replacement (Car) are state-of-theart “adaptive” cache replacement algorithms invented to improve on the shortcomings of classical cache
replacement policies such as Lru, Lfu and Clock. By separating out items that have been accessed
only once and items that have been accessed more frequently, both Arc and Car are able to control the
harmful effect of single-access items flooding the cache and pushing out more frequently accessed items.
Both Arc and Car have been shown to outperform their classical and popular counterparts in practice.
Both algorithms are complex, yet popular. Even though they can be treated as online algorithms with an
“adaptive” twist, a theoretical proof of the competitiveness of Arc and Car remained unsolved for over
a decade. We show that the competitiveness ratio of Car (and Arc) has a lower bound of N + 1 (where
N is the size of the cache) and an upper bound of 18N (4N for Arc). If the size of cache offered to Arc
or Car is larger than the one provided to Opt, then we show improved competitiveness ratios. The
important implication of the above results are that no “pathological” worst-case request sequences exist
that could deteriorate the performance of Arc and Car by more than a constant factor as compared to
Lru.
1
Introduction
Megiddo and Modha [MM03,MM04] engineered an amazing cache replacement algorithm that was self-tuning
and called it Adaptive Replacement Cache or Arc. Later, Bansal and Modha [BM04] designed another
algorithm called Clock with Adaptive Replacement (Car). Extensive experimentation suggested that Arc
and Car showed substantial improvements over previously known cache replacement algorithms, including
the well-known Least Recently Used or Lru and Clock. On the theoretical side, the seminal work of Sleator
and Tarjan [ST85] showed that Lru can be analyzed using the theory of online algorithms. They showed that
Lru has a competitiveness ratio of N (where N is the size of the cache). More surprisingly, they also showed
that with no prefetching, no online algorithm for cache replacement could achieve a competitiveness ratio less
than N , suggesting that under this measure, Lru is optimal. In other words, there exist worst-case request
sequences that would prevent any algorithm from being better than N -competitive. While these results
are significant, they highlight the difference between theory and practice. Sleator and Tarjan’s techniques
analyze online algorithms in terms of their worst-case behavior (i.e., over all possible inputs), which means
that other algorithms with poorer competitiveness ratios could perform better in practice. Another way
to state this is that the results assume an oblivious adversary who designs inputs for the online algorithms
in a way that make them perform as poorly as possible. The upper bound on performance ratio merely
guarantees that no surprises are in store, i.e., there is no input designed by an adversary that can make the
algorithm perform poorly.
1
Given a fixed size cache, the cache replacement problem is that of deciding which data item to evict
from the cache in order to make room for a newly requested data item with the objective of maximizing cache
hits in the future. The cache replacement problem has been referred to as a fundamental and practically
important online problem in computer science (see Irani and Karlin [Hoc97], Chapter 13) and a “fundamental
metaphor in modern computing” [MM04].
The Lru algorithm was considered the most optimal page replacement policy for a long time, but it had
the drawback of not being “scan-resistant”, i.e., items used only once could pollute the cache and diminish
its performance. Furthermore, Lru is difficult to implement efficiently, since moving an accessed item to
the front of the queue is an expensive operation, first requiring locating the item, and then requiring data
moves that could lead to unacceptable cache contention if it is to be implemented consistently and correctly.
The Clock algorithm was invented by Frank Corbató in 1968 as an efficient one-bit approximation to Lru
with minimum overhead [Cor68] and continues to be used in MVS, Unix, Linux, and Windows operating
systems [Fri99]. Like Lru, Clock is also not scan-resistant because it puts too much emphasis on “recency”
of access and pays no attention to “frequency” of access. So there are sequences in which many other
algorithms can have significantly less cost than the theoretically optimal Lru. Since then, many other cache
replacement strategies have been developed and have been showed to be better than Lru in practice. These
are discussed below in Section 2.
An important development in this area was the invention of adaptive algorithms. While regular
“online” algorithms are usually designed to respond to input requests in an optimal manner, these selftuning algorithms are capable of adapting to changes in the request sequence caused by changes in the
workloads. Megiddo and Modha’s Arc [MM03] is a self-tuning algorithm that is a hybrid of Lfu and
Lru. Bansal and Modha’s Car is an adaptivehybrid of Lfu and Clock [BM04]. Experiments show that
Arc and Car outperform Lru and Clock for many benchmark data sets [BM04]. Versions of Arc have
been deployed in commercial systems such as the IBM DS6000/DS8000, Sun Microsystems’s ZFS, and in
PostgreSQL.
Unfortunately, no theoretical analysis of the adaptive algorithms, Arc and Car, exist in the literature.
The main open question that remained unanswered was whether or not there exist “pathological” request
sequences that could force Arc or Car to perform poorly. In this document we show that these two
algorithms are O(N )-competitive, suggesting that they are not much worse than the optimal Lru. We also
prove a surprising lower bound on the competitiveness that is larger than N .
The main contributions of this paper are as follows:
1. For completeness, we provide proofs that Lru and Clock are N -competitive.
2. We prove a lower bound on the competitiveness of Arc and Car of N + 1, proving that there are
request sequences where they cannot outperform Lru and Clock.
3. We show that Arc is 4N -competitive.
4. We show that Car is 18N -competitive.
5. We obtain precise bounds for the competitiveness of all four algorithms when the sizes of the caches
maintained by them are different from that maintained by Opt.
6. We show that if the size of the cache is twice that of the one allowed for the optimal offline algorithm,
then the competitiveness ratio drops to a small constant.
We use the method of potential functions to analyze the algorithms. However, the main challenges in solving
these problems is that of carefully designing the potential function for the analysis. We discuss the role of
the adaptive parameter in the potential function. The contributions of this paper are summarized in Table
1. In this table, N is the size of the cache maintained by the algorithm, while NO is the size of the cache
maintained by Opt. The table provides lower bounds (LB) and upper bounds (UB) on the competitiveness
ratio when the cache sizes are equal, i.e., N = NO ; it also provides upper bounds when they are not equal.
2
Algorithm
Lru
Arc
Clock
Car
Compet. Ratio
LB
N
N +1
N
N +1
Compet. Ratio
UB
N
4N
N
18N
Compet. Ratio UB
w/ Unequal Sizes
N/(N − NO + 1)
12N/(N − NO + 1)
N/(N − NO + 1)
18N/(N − NO + 1)
[Ref]
[ST85]
This paper
This paper
This paper
Table 1: Summary of Results
After providing relevant background on cache replacement algorithms in Section 2, we discuss the lower
bounds on the competitiveness ratios of Arc and Car in Section 3. Next we prove upper bounds on the
competitiveness ratios of Lru, Clock, Arc, and Car in Section 4. Concluding remarks can be found in
Section 5.
2
Previous Work on Cache Replacement Strategies
Below we give brief descriptions of the four algorithms being discussed in this paper, after which we mention
a large collection of other closely related cache replacement algorithms.
The Lru Algorithm: Lru evicts the least recently used entry. It tends to perform well when there are
many items that are requested more than once in a relatively short period of time, and performs poorly on
“scans”. Lru is expensive to implement because it requires a queue with move-to-front operations whenever
a page is requested.
The Clock Algorithm: On the other hand, Clock was designed as an efficient approximation of Lru,
which it achieves by avoiding the move-to-front operation. Clock’s cache is organized as a single “circular”
list, instead of a queue. The algorithm maintains a pointer to the “head” of the list. The item immediately
counterclockwise to it is the “tail” of the list. Each item is associated with a “mark” bit. Some of the pages
in the cache are marked, and the rest are unmarked. When a page hit occurs that page is marked, but the
contents of the cache remain unchanged. When a page fault occurs, in order to make room for the requested
page, the head page is evicted if the page is unmarked. If the head page is marked, the page is unmarked and
the head is moved forward clockwise, making the previous head as the tail of the list. After a page is evicted,
the requested page is unmarked and placed at the tail of the list. Clock is inexpensive to implement, but
is not scan-resistant like Lru.
The Arc Algorithm To facilitate our discussion, we briefly describe the Arc algorithm. As mentioned
before, it combines ideas of recency and frequency. Arc’s cache is organized into a “main” part (of size N )
and a “history” part (of size N ). The main part is further divided into two lists, T1 and T2 , both maintained
as LRU lists (i.e., sorted by “recency”). T1 focuses on “recency” because it contains pages with short-term
utility. Consequently, when an item is accessed for the first time from the disk, it is brought into T1 . Items
“graduate” to T2 when they are accessed more than once. Thus, T2 deals with “frequency” and stores items
with potential for long-term utility. Additionally, Arc maintains a history of N more items, consisting of
B1 , i.e., items that have been recently deleted from T1 , and B2 , i.e., items that have been recently deleted
from T2 . History lists are also organized in the order of recency of access. The unique feature of Arc is its
self-tuning capability, which makes it scan-resistant. Based on a self-tuning parameter, p, the size of T1 may
grow or shrink relative to the size of T2 . The details of the algorithm are fairly complex and non-intuitive.
Detailed pseudocode for Arc (Figure 4 from [MM03]) is provided in the Appendix for convenience.
It is worth noting that Arc is considered a “universal” algorithm in the sense that it does not use any a
priori knowledge of its input, nor does it do any offline tuning. Furthermore, Arc is continuously adapting,
since adaptation can potentially happen at every step.
3
It must be noted that our results on Arc assume the “learning rate”, δ, to be equal to 1, while the Arc
algorithm as presented by Megiddo and Modha recommended a “faster” learning rate based on experiments
on real data. The learning rate is the rate at which the adaptive parameter p is changed as and when needed.
The Car Algorithm Inspired by Arc, Car’s cache is organized into two main lists, T1 and T2 , and two
history lists, B1 and B2 . Inspired by Clock, both T1 and T2 are organized as “circular” lists, with each
item associated with a mark bit. The history lists, B1 and B2 are maintained as simple FIFO lists. We let
t1 , t2 , b1 , b2 denote the sizes of T1 , T2 , B1 , B2 , respectively. Also, let t := t1 + t2 . Let lists L1 (and L2 , resp.)
be the list of size ℓ1 (ℓ2 , resp.) obtained by concatenating list B1 to the end of “linearized” T1 (concatenating
B2 to the tail of T2 , resp.). Note that circular lists are linearized from head to tail. We let T10 and T20 (T11
and T21 , resp.) denote the sequence of unmarked (marked, resp.) pages in T1 and T2 , respectively.
The following invariants are maintained by Car for the lists:
1. 0 ≤ t1 + t2 ≤ N
2. 0 ≤ ℓ1 = t1 + b1 ≤ N
3. 0 ≤ ℓ1 + ℓ2 = t1 + t2 + b1 + b2 ≤ 2N
4. t1 + t2 < N =⇒ b1 + b2 = 0
5. t1 + t2 + b1 + b2 ≥ N =⇒ t1 + t2 = N
6. Once t1 + t2 = N and/or ℓ1 + ℓ2 = 2N , they remain true from that point onwards.
Car maintains an adaptive parameter p, which it uses as a target for t1 , the size of list T1 . Consequently,
N − p is the target for t2 . Using this guiding principle, it decides whether to evict an item from T1 or T2 in
the event that a miss requires one of the pages to be replaced. The replacement policy can be summarized
into two main points:
1. If the number of items in T1 (barring the marked items at the head of the list) exceeds the target size,
p, then evict an unmarked page from T1 , else evict an unmarked page from T2 .
2. If ℓ1 = t1 + b1 = N , then evict a history page from B1 , else evict a history page from B2 . Since the
details of the algorithm are complex, the actual pseudocode is provided (Figure 2 from [BM04]) in the
Appendix.
Other Cache Replacement Algorithms The DuelingClock algorithm [JIPP10] is like Clock but
keeps the clock hand at the newest page rather than the oldest one, which allows it to be scan-resistant. More
recent algorithms try to improve over Lru by implementing multiple cache levels and leveraging history.
In [OOW93] the Lru-K algorithm was introduced. Briefly, the Lru-K algorithm estimates interarrival
times from observed requests, and favors retaining pages with shorter interarrival times. Experiments have
shown Lru-2 performs better than Lru, and that Lru-K does not show increase in performance over Lru2 [OOW93], but has a higher implementation overhead. It was also argued that Lru-K is optimal under the
independence reference model (IRM) among all algorithms A that have limited knowledge of the K most
recent references to a page and no knowledge of the future [OOW93].
In essence, the Lru-K algorithm tries to efficiently approximate Least Frequently Used (Lfu) cache
replacement algorithm. As K becomes larger, it gets closer and closer to Lfu. It has been argued that Lfu
cannot adapt well to changing workloads because it may replace currently “hot” blocks instead of “cold”
blocks that had been “hot” in the past. Lfu is implemented as a heap and takes O(log N ) time per request.
Another cache replacement algorithm is Lirs [JZ02]. The Lirs algorithm evicts the page with the
largest IRR (inter-reference recency). It attempts to keep a small (≈ 1%) portion of the cache for HIR
(high inter-reference) pages, and a large (≈ 99%) portion of the cache for LIR (low inter-reference) pages.
The Clock-Pro algorithm approximates Lirs efficiently using Clock [JCZ05]. The 2q [JS94] algorithm
4
is scan-resistant. It keeps a FIFO buffer A1 of pages that have been accessed once and a main Lru buffer
Am of pages accessed more than once. 2q admits only hot pages to the main buffer. The buffer A1 is
divided into a main component that keeps the pages in A1 that still reside in cache, and a history component
that remembers pages that have been evicted after one access. The relative sizes of the main and history
components are tunable parameters. 2q has time complexity of O(1). Another algorithm that tries to bridge
the gap between recency and frequency is Lrfu [LCK+ 01]. This is a hybrid of Lru and Lfu and is adaptive
to changes in workload. The time complexity ranges from O(1) for Lru to O(log n) for Lfu.
3
Lower Bounds on Competitiveness Ratio for Arc and Car
This section presents our results on the lower bounds for Arc and Car. We also show that the adaptive
parameter is critical to both Arc and Car by showing that their non-adaptive versions have an unbounded
competitiveness ratio.
3.1
Lower Bound for Arc
First, we show a lower bound on the competitiveness ratio for Arc.
Theorem 1. The competitiveness ratio of Algorithm Arc has a lower bound of N + 1.
Proof. We show that we can generate an unbounded request sequence that causes N + 1 page faults on
Arc for every page fault on Opt. The sequence only involves 2N + 1 pages denoted by 1, . . . , 2N + 1. Our
example, will take the contents of the cache managed by Arc from configurations 1 through configuration
5, which are shown in Table 2. Note that configuration 1 and configuration 5 are essentially the same to the
extent that the value of p is 0 in both, and the number of pages in each of the four parts of the cache are
identical.
Configuration
1
2
3
4
5
p
0
0
0
1
0
T1
∅
2N + 1
∅
∅
∅
T2
1, . . . , N
2, . . . , N
2, . . . , N, 1
3, . . . , N, 1, 2N + 1
1, 2N + 1, 2, . . . , N − 1
B1
∅
∅
2N + 1
∅
∅
B2
N + 1, . . . , 2N
N + 2, . . . , 2N, 1
N + 2, . . . , 2N
N + 2, . . . , 2N, 2
N + 2, . . . , 2N, N
Table 2: Example for Lower Bound on Arc’s competitiveness
We note that we can obtain configuration 1 from an empty cache with the following request sequence:
2N, 2N, 2N − 1, 2N − 1, . . . , 2, 2, 1, 1. Consider the first half of the above request sequence, which contains
a total of 4N requests to 2N new pages, each page requested twice in succession. The first time a page is
requested from the first N new pages, it will be put in T1 . The second time the page is requested, it will
get moved to T2 . In the second half, if a page not in Arc is requested, Replace will be called, which
will move a page from T2 to B2 , and the new page will be placed in T1 . When the same page is requested
again, it simply gets moved to T2 . The value of p remains unchanged in this process. It is clear that we get
Configuration 1 as a result of the request sequence.
We design our request sequence by following the steps below.
1. Make one request to a page 2N + 1 not in Arc. We will assume that this is a brand new page and
therefore also causes a fault for Opt and for Arc. The page 2N + 1 will go into T1 and a page in T2
will be demoted to B2 . The contents of Arc is given by Configuration 2 in Table 2.
2. Request any page in B2 . This decreases the value of p but since p is zero it will remain unchanged.
Since the size of T1 is more than p Arc will call Replace, which will act on T1 , hence 2N + 1 will be
demoted to B1 . Upon requesting page 1 in B2 , we get Configuration 3 in Table 2.
5
3. The next step is to request 2N + 1 again, which will move to T2 , p is increased and a page in T2 is
demoted to B2 . Configuration 4 reflects the contents of the cache at this stage.
4. Finally we make N − 2 requests to any pages from B2 . By requesting the pages 2, 3, . . . , N , we end up
in Configuration 5 from Table 2.
The steps outlined above cause N + 1 page faults for Arc and at most one page fault for Opt. Since
we are back to the initial configuration we can repeat this process over again. This concludes the proof that
the competitiveness ratio of Arc is at least N + 1.
3.2
Lower Bound for Car
Now we prove a similar lower bound for Car.
Theorem 2. The competitiveness ratio of Algorithm Car has a lower bound of N + 1.
Proof. We show that we can generate an infinite request sequence that causes N + 1 page faults in Car for
every page fault on Opt. The sequence only involves 2N + 1 pages denoted by 1, . . . , 2N + 1. Our example,
will take the contents of the cache managed by Car from configurations 1 through N + 2 as shown in Table
3. Note that a superscript of 1 on any page in T1 ∪ T2 indicates that it is marked. All others are unmarked.
Also note that configuration 1 and configuration N + 2 are essentially the same upon relabeling.
First, we show that configuration 1 is attainable, by showing that it can be reached from an empty cache.
This is formalized in the following lemma.
Lemma 1. We can obtain configuration 1 starting from an empty cache with the following request sequence:
2N, 2N, 2N − 1, 2N − 1, . . . , 2, 2, 1, 1.
Proof. The first half of the above request sequence calls each of the N pages 2N, 2N − 1, . . . , N + 1 twice
in succession. The first time they are called, they are moved into T1 unmarked. The second time the same
page is called it gets marked, but remains in T1 . At the end of the first half, all the N pages requested end
up in T1 and are all marked.
The next call to new page N , will trigger a call to Replace, which will move all the marked pages in T1
to T2 leaving them unmarked. It will also move one page from T2 to B2 . Finally, the requested page N will
be moved to T1 and left unmarked. When requested again, it simply gets marked. When the next page, i.e.,
N − 1 is requested, it moves marked page N to T2 , moves one more page from T2 to B2 . As the rest of the
pages from the request sequences are requested, the previous requested page gets moved to T2 , which in turn
demotes one of its pages to B2 . At the end of the process, we get a marked page 1 in T1 . Pages 2, . . . , N
are in T2 , unmarked, and pages N + 1, . . . , 2N end up in B2 . This is exactly what we need for configuration
1.
Continuing with the proof of Theorem 2, we show what happens when, starting from configuration 1, Car
processes the following request sequence.
Page 2N + 1: A page in T2 is demoted to B2 , which loses a page; the marked page from T1 is moved to T2
and the new page is moved into T1 .
MRU page in B2 : This would have decremented p, but p remains unchanged since it is already zero. Since the size of T1 is more than p, Car will call Replace and 2N + 1 will be demoted to B1 , resulting in
configuration 3 in Table 3.
Page 2N + 1: It will move to T2 , p is increased and a page in T2 is demoted to B2 . See configuration 4 in
Table 3.
MRU page from B2 , repeat N − 2 times: It results in configuration N + 2 in Table 3.
Config. | p | B1     | T1     | T2                                 | B2
1       | 0 | ∅      | 1^1    | 2, . . . , N                       | N + 1, . . . , 2N
2       | 0 | ∅      | 2N + 1 | 1, . . . , N − 1                   | N, . . . , 2N − 1
3       | 0 | 2N + 1 | ∅      | N, 1, . . . , N − 1                | N + 1, . . . , 2N − 1
4       | 1 | ∅      | ∅      | 2N + 1, N, 1, . . . , N − 2        | N − 1, N + 1, . . . , 2N − 1
5       | 0 | ∅      | ∅      | N − 1, 2N + 1, N, 1, . . . , N − 3 | N − 2, N + 1, . . . , 2N − 1
...     | ... | ...  | ...    | ...                                | ...
N + 2   | 0 | ∅      | ∅      | 2, . . . , N − 1, 2N + 1, N        | 1, N + 1, . . . , 2N − 1

Table 3: Example for Lower Bound on Car’s competitiveness
The request sequence detailed above generates N + 1 faults for Car while only N different pages are
requested. Thus, Opt can limit itself to at most one fault in this stretch: it faults at most once per stretch by always evicting the page whose next use is farthest in the future. Repeating the above steps
an unbounded number of times with appropriate relabeling proves that the competitiveness ratio of Car is
lower bounded by N + 1.
3.3
Non-Adaptive Arc and Car are not Competitive
It is particularly interesting to note that the non-adaptive versions of Car and Arc (called fixed replacement caches) [MM03] are not competitive. The following two theorems prove that the competitiveness ratios can be unbounded in this case.
Theorem 3. Algorithm Car with fixed p is not competitive.
Proof. Suppose that algorithm Car has p fixed instead of being adaptive and 0 < p < N − 1. Recall that
p is the target size of T1 and N − p is the target size of T2 . We design a request sequence such that with
less than N pages we can generate an infinite number of page faults for Car. The sequence is described as
follows:
Step 1: Fill up T2 with N − p unmarked pages as described above in the proof of Theorem 2.
Step 2: Request the MRU page in B2 . The requested page goes to the tail of T2 as an unmarked page. Since the size of T2 now exceeds its target size N − p, we discard the head of T2 .
Step 3: Request the MRU page in B2 , which is actually the page discarded from T2 in Step 2. This step is similar to Step 2, and we can continue to repeat it infinitely often, since the page that moves from B2 to T2 gets unmarked and the page that moves from T2 to B2 goes to the MRU position of B2 .
Therefore, we can cycle infinitely many times through N − p + 1 pages triggering an infinite number of faults,
while Opt can avoid faults altogether during the cycle.
Theorem 4. Algorithm Arc with fixed p is not competitive.
Proof. Suppose that algorithm Arc has p fixed instead of being adaptive and 0 < p < N . Recall that p is
the target size of T1 and N − p is the target size of T2 . We design a request sequence such that with less than
N pages we can generate an infinite number of page faults for Arc. The first step is to fill up T2 (size of
T2 = N − p). Next we request the MRU page in B2 . Every time we request a page from B2 , it goes into the
top of T2 and thus it increases the size of T2 beyond its target size. It follows that Arc will call Replace
and move a page from T2 to the MRU position in B2 . If the MRU page from B2 is repeatedly requested,
we will cycle through N − p pages, every time incurring a page fault for Arc, while Opt can avoid faults
altogether during the cycle.
4 Analyzing Lru using potential functions
4.1 The generic approach
The standard approach used here is as follows. First, we define a carefully crafted potential function, Φ. As
per the strategy of analyzing competitiveness ratios suggested by Sleator and Tarjan [ST85], we then try to
show the following inequality:
CA + ∆Φ ≤ f (N ) · CO + g(N ),
(1)
where CA and CO are the costs incurred by the algorithm and by Opt, respectively, ∆Φ is the change in potential, and f (N ) and g(N ) are functions of N , the size of the cache.
In all of our proofs, we assume that the work involves simultaneously maintaining Opt’s cache as well as
the algorithm’s cache. So we can break down the work into two steps, one where only Opt serves and one
where only the algorithm serves. When only Opt serves, there are 2 cases: first when Opt has a hit and
next when Opt has a miss. Next, we consider the cases when the algorithm serves, once when it has a hit
and once when it has a miss. In each case, our goal is to prove the inequality (1) mentioned above, which
establishes that f (N ) is the competitiveness ratio of algorithm A. There may be an additive term of g(N )
which is a function of the misses needed to get to some initial configuration for the cache.
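To spell out why establishing such a per-request bound suffices, here is our restatement of the standard potential-function argument (the symbols Φ_init and Φ_final are ours and denote the potential before the first and after the last analyzed request): summing the per-request bound C_A(σ) + ∆Φ ≤ f(N)·C_O(σ) over the requests served after the initial configuration is reached and telescoping the potential gives

\sum_{\sigma} C_A(\sigma) \;+\; \Phi_{\mathrm{final}} - \Phi_{\mathrm{init}} \;\le\; f(N)\,\sum_{\sigma} C_O(\sigma),
\qquad\text{hence}\qquad
C_A(S) \;\le\; f(N)\,C_O(S) \;+\; (\Phi_{\mathrm{init}} - \Phi_{\mathrm{final}}) \;+\; g(N),

where g(N) absorbs the cost of reaching the initial configuration. As long as the potential is bounded, the additive terms are constants independent of the length of the request sequence, so f(N) is the competitiveness ratio.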
4.2 Analyzing Lru using potential functions
Assuming that the size of cache given to the competing Opt algorithm is NO ≤ N , the following result was
proved by Sleator and Tarjan [ST85] (Theorem 6) for Lru.
Theorem 5 ([ST85]). Algorithm Lru is N/(N − NO + 1)-competitive.
Here we present a complete proof of this well-known result because we believe it is instructive for the
other proofs in this paper.
Proof. While this was not used in the proof in Sleator and Tarjan [ST85], a potential function that will
facilitate the proof of the above theorem is:
Φ = (Σ_{x∈D} r(x)) / (NL − NO + 1),    (2)
where D is the list of items in Lru’s cache but not in Opt’s cache, and r(x) is the rank of item x in Lru’s
list with the understanding that the LRU item has rank 1, while the MRU item has rank equal to the size
of the cache [Alb96].
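As a concrete illustration (ours, not from [ST85]), the following Python sketch computes the potential (2) from a snapshot of the two caches. The list lru_list is assumed ordered from LRU to MRU, and opt_cache is the set of pages held by Opt.

def lru_potential(lru_list, opt_cache, n_o):
    """Potential (2): ranks of pages in Lru's cache but outside Opt's cache,
    summed and divided by N_L - N_O + 1.  Rank 1 = LRU end, rank N_L = MRU end."""
    n_l = len(lru_list)
    total = 0
    for rank, page in enumerate(lru_list, start=1):   # LRU end first
        if page not in opt_cache:
            total += rank
    return total / (n_l - n_o + 1)

# Example: Lru holds [5, 3, 8, 2] (5 is LRU) and Opt holds {3, 2, 9, 7}.
# Pages 5 (rank 1) and 8 (rank 3) are outside Opt, so Phi = (1 + 3)/(4 - 4 + 1) = 4.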
We now show the following inequality:
CA + ∆Φ ≤ (N / (N − NO + 1)) · CO + O(N ),    (3)

where CA and CO are the costs incurred by Lru and by Opt, respectively, and ∆Φ is the change in potential.
We assume that the work involves simultaneously maintaining Opt’s cache as well as Lru’s cache. So
we can break down the work of Lru into two steps, one where only Opt serves and one where only Lru
serves. When only Opt serves, there are 2 cases: first when Opt has a hit and next when Opt has a miss.
In either case, the cost for Lru is 0, since only Opt is serving. When Opt has a hit, the cost for Opt is also
0. Furthermore, since Lru’s cache remains untouched, and no changes take place in the contents of Opt’s
cache, the ranks of the items in Lru remain unchanged. Thus, ∆Φ = 0. Therefore, the inequality in (3) is
trivially satisfied in this case.
When Opt has a miss, CO = 1, as before. The item evicted by Opt may join D, so its rank is added to the sum; since any rank is at most NL , the potential increases by at most NL /(NL − NO + 1). Thus, the inequality in (3) is satisfied.
Next, we consider the step where Lru serves the request. As with Opt, when Lru is serving, the cost
for Opt is 0. We again consider two cases: first when Lru has a hit and next when Lru has a miss. When
Lru has a hit, the cost for Lru is 0. The contents of Lru’s cache may change. The item that was accessed is
moved to the MRU position. However, this item is already in Opt’s cache and therefore cannot contribute
to a change in potential. Several other items may move down in the cache, thus contributing to a decrease
in potential of at most (N − 1). In the worst case, the increase in potential is at most 0. Therefore, the
inequality in (3) is again satisfied.
Finally, we consider the case when Lru has a miss. As before, CL = 1. Following the previous arguments,
an item would be brought into MRU (which is already present in Opt’s cache), a bunch of items may be
demoted in rank, and the Lru item will be evicted. The only action that can contribute to an increase is
caused by the item that is brought into the MRU location. However, this item is already present in Opt’s
cache, and hence cannot contribute to an increase. All the demotions and eviction can only decrease the
potential function. Note that before the missed item is brought into Lru’s cache, the contents of Lru’s and
Opt’s cache agree in at most NO − 1 items, since Opt just finished serving the request and the item that
caused the miss is already in Opt’s cache. Thus there are at least NL − NO + 1 items that contribute their ranks to the sum in the potential function. These items either get demoted in rank or get evicted; either way, each contributes a decrease of at least 1, so the sum decreases by at least NL − NO + 1 (it could decrease by more if additional items of Lru’s cache lie outside Opt’s cache). Hence ∆Φ ≤ −1, and we have
CL + ∆Φ ≤ 1 − (NL − NO + 1)/(NL − NO + 1) ≤ 0 = (NL /(NL − NO + 1)) · CO .
Summarizing the costs, we have the following (the ∆Φ column bounds the change of the sum Σ_{x∈D} r(x); dividing by NL − NO + 1 gives the change in Φ):

Step                | CL | ∆Φ                   | CO
Opt serves request
  Opt has a hit     | 0  | 0                    | 0
  Opt has a miss    | 0  | ≤ NL                 | 1
Lru serves request
  Lru has a hit     | 0  | ≤ 0                  | 0
  Lru has a miss    | 1  | ≤ −(NL − NO + 1)     | 0
The analysis of Lru states that if the sizes of Lru’s and Opt’s caches are NL and NO respectively, and if NL ≥ NO , then the competitiveness ratio of Lru is NL /(NL − NO + 1). Thus Lru is 2-competitive if the size of Lru’s cache is roughly twice that of Opt’s cache.
4.3 Analyzing the competitiveness of Clock
Our result on the competitiveness of Clock is formalized in the following theorem. While this result appears
to be known, we have not been able to locate a full proof and we believe this is of value. We therefore present
it for the sake of completeness.
Theorem 6. Algorithm Clock is N/(N − NO + 1)-competitive.
Proof. Let M0 denote the subsequence of unmarked pages in Clock, ordered counterclockwise from head to tail. Let M1 denote the subsequence of marked pages in Clock, ordered counterclockwise from head to tail. Let q be any page in Clock’s cache. Let P^0[q] denote the position of an unmarked page q in the ordered sequence M0 , and let P^1[q] denote the position of a marked page q in M1 . Finally, let R[q] denote the rank of page q, defined as follows:
R[q] = P^0[q] if q is unmarked, and R[q] = P^1[q] + |M0 | otherwise.    (4)
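A small Python sketch of this rank (our illustration; pages is assumed to be the clock's contents listed counterclockwise starting at the head, and marked is the set of marked pages):

def clock_ranks(pages, marked):
    """Rank (4): unmarked pages ranked by position in M0 (unmarked pages,
    head first); marked pages get their position in M1 plus |M0|."""
    m0 = [q for q in pages if q not in marked]   # unmarked, head to tail
    m1 = [q for q in pages if q in marked]       # marked, head to tail
    ranks = {}
    for pos, q in enumerate(m0, start=1):
        ranks[q] = pos
    for pos, q in enumerate(m1, start=1):
        ranks[q] = pos + len(m0)
    return ranks

# With pages = [4, 7, 2] (4 at the head) and marked = {7}:
# clock_ranks(...) == {4: 1, 2: 2, 7: 3}, so every rank is at most N = 3.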
Thus, if q is an unmarked page at the head, then R[q] = 1. By the above definition, the following lemmas
are obvious.
Lemma 2. If q is any page in Clock’s cache, then R[q] ≤ N .
Lemma 3. If a marked page q at the head of Clock’s cache is unmarked and moved to the tail, then R[q]
does not change in the process.
Let D be the set of pages that are in the cache maintained by Clock, but not in the cache maintained
by Opt. We define the potential function as follows:
Φ = Σ_{q∈D} R[q].    (5)
We prove one more useful lemma about the ranks as defined above.
Lemma 4. If an unmarked page at the head of Clock’s cache is evicted from Clock’s cache, and if there is at least one page in D, then Φ decreases by at least 1 in the process.
Proof. All pages, marked or unmarked, will move down by at least one position (reducing the rank of each
by at least 1). The rank decrease of at least one page that is in D contributes to Φ, guaranteeing that ∆Φ ≤ −1.
Let CClock and COpt be the costs incurred by the algorithms Clock and Opt, and let S = σ1 , σ2 , . . . , σm be an arbitrary request sequence. Let S ′ denote the initial subsequence of requests that take place prior to the cache becoming full. Note that exactly N faults are incurred in S ′ , after which the cache remains full. Let S ′′ be the subsequence of S that comes after S ′ . We will prove that for every individual request σ ∈ S ′′ :

CClock (σ) + ∆Φ ≤ N · COpt (σ)    (6)
As before, we assume that request σ is processed in two distinct steps: first when Opt services the page
request and, next when Clock services the request. We will show that inequality (6) is satisfied for both
the steps.
When only Opt acts in this step, CClock = 0. If Opt does not fault on this request, then COpt = 0. No change occurs to the contents of the cache maintained by Opt or by Clock, and the clock hand does not move. Thus, ∆Φ = 0, satisfying inequality (6).
If Opt faults on request σ, then COpt = 1 and CClock = 0. The contents of the cache maintained by Opt do change, which could affect the potential function. The potential could increase due to the eviction of a page in Opt. Since by Lemma 2 the rank of the evicted page cannot exceed N , the potential will change by at most N . Thus, inequality (6) is satisfied.
Next we consider what happens when Clock services the request. For this case COpt = 0. If Clock does not fault, then CClock = 0 and the requested page may change from an unmarked status to a marked one. However, since the page is already in the cache maintained by Opt it is not in D and is therefore not considered for the potential function calculations in (5). Thus, inequality (6) is satisfied.
Finally, we consider the case when Clock faults, in which case CClock = 1 and COpt = 0. To satisfy inequality (6), ∆Φ needs to be less than or equal to −1. When Clock has a miss, if the head page happens to be
marked, then Clock will repeatedly unmark the marked head page, moving it to the tail position, until an
unmarked head page is encountered. The unmarked head page is then evicted. Each time a marked head
page becomes an unmarked tail page, by Lemma 3 its rank does not change. When finally an unmarked
head page is evicted, we know that there is at least one page in Opt’s cache that is not in Clock’s cache
(i.e., the page that caused the fault). Since there are N pages in the cache maintained by Clock, at least
one of those pages is guaranteed not to be part of the cache maintained by Opt. Since there is at least one
page in D, by Lemma 4 it is clear that evicting an unmarked head page will decrease the potential function
by at least one, which will pay for the Clock’s page fault.
We have therefore shown that inequality (6) is satisfied for every request σ ∈ S ′′ . Since at most N faults can be incurred on the requests in S ′ , summing up the above inequality for all requests in S ′′ and adding those initial faults, we get

CClock (S) ≤ N · COpt (S) + N.
This completes the proof of the theorem and the competitiveness analysis of the Clock algorithm.
4.4 Analyzing the Competitiveness of ARC
In this paper, we prove two different upper bounds for the competitiveness of Arc. These two proofs use very different potential functions. The first one allows the sizes of the caches maintained by Arc and Opt to differ, while the second one does not, but it provides a tighter bound. We provide both results below.
Our first result on the competitiveness of Arc is formalized in the following theorem:
Theorem 7. Algorithm Arc is 12N/(N − NO + 1)-competitive.
Proof. Let PX [q] be the position of page q in an arbitrary ordered sequence of pages X. When the set is
obvious, we will drop the subscript and denote PX [q] simply by P [q]. Each of the lists T1 , T2 , B1 , and B2 will be treated as an ordered sequence of pages, ordered from its LRU position to its MRU position. Let Opt denote the set of pages in the cache maintained by Opt, and let Arc denote the set of (main and history) pages T1 ∪ T2 ∪ B1 ∪ B2 tracked by Arc. Let D = Arc \ Opt. We associate each page with a rank value R[q], which is defined as follows:
R[q] =
    2 PB1 [q]          if q ∈ B1 ,
    2 PB2 [q]          if q ∈ B2 ,
    4 PT1 [q] + 2 b1   if q ∈ T1 ,
    4 PT2 [q] + 2 b2   if q ∈ T2 .    (7)
Finally, we define the potential function as follows:

Φ = p + 2 t1 + 2 (Σ_{q∈D} R[q]) / (N − NO + 1) − 3 |Arc|    (8)
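To make the bookkeeping in (7) and (8) concrete, here is an illustrative Python sketch (our own helper, not part of the proof). Lists are assumed ordered from LRU to MRU, so a page's 1-based position from the LRU end plays the role of P[q], and D is taken over all pages Arc tracks (main and history), which is how the case analysis below uses it.

def arc_potential(p, t1, t2, b1, b2, opt_cache, n, n_o):
    """Potential (8) for the Arc analysis of Theorem 7.

    t1, t2, b1, b2 -- lists ordered from LRU (index 0) to MRU
    opt_cache      -- set of pages currently held by Opt
    """
    def rank(q):
        if q in b1:
            return 2 * (b1.index(q) + 1)
        if q in b2:
            return 2 * (b2.index(q) + 1)
        if q in t1:
            return 4 * (t1.index(q) + 1) + 2 * len(b1)
        return 4 * (t2.index(q) + 1) + 2 * len(b2)        # q in t2

    arc_pages = t1 + t2 + b1 + b2
    d = [q for q in arc_pages if q not in opt_cache]       # D = Arc \ Opt
    rank_sum = sum(rank(q) for q in d)
    return p + 2 * len(t1) + 2 * rank_sum / (n - n_o + 1) - 3 * len(arc_pages)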
The initial value of Φ is 0. If the following inequality (9) is true for any request σ, where ∆Φ is the change
in potential caused by serving the request, then when summed over all requests, it proves Theorem 7.
CArc (σ) + ∆Φ ≤ 12N · COpt (σ) / (N − NO + 1).    (9)
As before, we assume that request σ is processed in two distinct steps: first when Opt serves and, next
when Arc serves. We will show that inequality (9) is satisfied for each of the two steps.
Step 1: Opt serves request σ
Since only Opt acts in this step, CArc = 0, and T1 ∪ T2 does not change. There are two possible cases:
either Opt faults on σ or it does not. If Opt does not fault on this request, then it is easy to see that
COpt = 0 and ∆Φ = 0, thus satisfying inequality (9).
If Opt faults on request σ, then COpt = 1 and some page q is evicted from the cache maintained by Opt; q will belong to D after this step, and thus its rank will contribute to the potential function, which increases by two times the rank of q divided by N − NO + 1. The maximal positive change in potential occurs when q is the MRU page of either T1 or T2 . In this case the rank of q is given by R[q] = 4P [q] + 2b1 (resp. R[q] = 4P [q] + 2b2 ). The maximum possible value of each of the terms P [q] and b1 is N , hence the maximum possible rank of q is 4N + 2N = 6N . Therefore the resulting potential change is at most 12N/(N − NO + 1).
Step 2: Arc serves request σ
We break down the analysis into four cases. Case 2.1 deals with the case when Arc finds the page in its
cache. The other three cases assume that Arc faults on this request because the item is not in T1 ∪T2 . Cases
2.2 and 2.3 assume that the missing page is found recorded in the history in lists B1 and B2 , respectively.
Case 2.4 assumes that the missing page is not recorded in history.
Case 2.1: Arc has a page hit. Clearly, the page was found in T1 ∪ T2 , and CArc = 0. We consider the change of each of the terms in the potential function individually.
1. As per the algorithm, p can only change when the page is found in history. (See lines 3 through 10 of
Arc(x).) Since the page is not found in Arc’s history, ∆p = 0.
2. If the hit happens in T1 , the page moves to the top of T2 (see Line 2 of Arc(x)), which decreases t1 by 1. If the hit happens in T2 , t1 remains the same. In either case the change in t1 is at most 0.
3. Since Opt has already served the page, the page is in Opt’s cache. Therefore, even if the page’s rank
could change when moved from T1 to MRU position of T2 , this rank will not affect the potential since
the page is not in D.
We therefore conclude that ∆Φ ≤ 0, satisfying inequality (9).
Next we will analyze the 3 cases when the requested page is not in Arc’s cache. Since CArc = 1, the
change in potential must be ≤ −1 in each case in order for inequality (9) to be satisfied.
Case 2.2: Arc has a page miss and the missing page is in B1 We consider the two cases, first
when Replace moves an item from T1 to B1 and second when it moves an item from T2 to B2 .
1. Case 1: We consider the change in the potential function by analyzing each of the three terms.
• The value of p either increases by 1 or, if p = N , stays the same; we account for the worst case, ∆p = 1.
• A new page is added at the MRU position of T2 , and Replace moves the LRU page of T1 to B1 , so 2∆t1 = −2.
• The page that moved from B1 to T2 is not in D, therefore the change in its rank does not affect the potential; the other pages can only decrease their rank, meaning that 2∆(Σ_{q∈D} R[q]) ≤ 0.
Since p increases by at most 1 and 2t1 decreases by 2, the total change in potential is at most −1.
2. Case 2: Once again, we consider the change in the potential function by analyzing each of the three terms.
• The value of p either increases by 1 or, if p = N , stays the same; we account for the worst case, ∆p = 1.
• A new page is added at the MRU position of T2 , and Replace moves the LRU page of T2 to B2 . Thus, there is no change in T1 .
• The page that moved from B1 to T2 is not in D, therefore the change in its rank does not affect the potential. Since t1 + t2 = N , at least N − NO + 1 pages are not in Opt. The pages in T1 have their ranks decrease by at least 2, since b1 decreases by 1; the pages in T2 also have their ranks decrease by at least 2, since b2 increases by 1 but the LRU page of T2 moves to B2 , reducing P [q] for all remaining pages of T2 . Hence the term 2 (Σ_{q∈D} R[q]) / (N − NO + 1) decreases by at least 4.
Since p increases by at most 1 and 2 (Σ_{q∈D} R[q]) / (N − NO + 1) decreases by at least 4, the total change in potential is at most −3.
Case 2.3: Arc has a page miss and the missing page is in B2 . When the missing page is in B2 , Arc decrements p (unless it is already equal to 0), calls Replace, and moves the page to the top of T2 (Lines 8–10 of Arc(x)). We consider two sub-cases: ∆p ≤ −1 and ∆p = 0.
∆p ≤ −1: The call to Replace does not increase t1 , and by an analysis similar to Case 2.2 it does not increase the rank term. Since the change in p is at most −1, the change in the potential function is at most −1.
∆p = 0: Unlike the sub case above when p decreases by 1, the change in p cannot guarantee the required
reduction in the potential. We therefore need a tighter argument. We know that there is a call to Replace.
Two cases arise and are discussed below.
• Replace moves an item in T1 to B1 : Since the LRU page of T1 is moved to the MRU position of B1 ,
2∆t1 = −2 and there is no movement of a page in D that could increase the rank. Therefore the total
change in the potential function is at most -2.
• Replace moves an item in T2 to B2 : p = 0 indicates that T2 has N pages, so it is guaranteed that at least N − NO + 1 of them are not part of Opt and contribute to the change in potential. The page being moved from T2 to B2 decreases its rank by at least 2, and the rest of the pages in T2 move down one position (P [q] decreases by 1) while b2 remains the same, resulting in a change in the potential function of at most −4.
Thus, in each case the change in the potential function is at most −2.
Case 2.4: Arc has a page miss and the missing page is not in B1 ∪ B2
1. t1 + b1 = N ; t1 < N ; The LRU page in B1 is evicted. Assume Replace moves a page from T1 to B1
and a new page is brought into T1 (∆t1 = 0, ∆b1 = 0, ∆t2 = 0, ∆b2 = 0).
• The term p is not affected.
• The term t1 is not affected.
• Since t1 + b1 = N , at least N − NO + 1 pages in T1 ∪ B1 are not in Opt. If the page is in B1 \ Opt then its rank decreases by 2; if the page is in T1 \ Opt its rank decreases by 4.
2. t1 + b1 = N ; t1 < N ; the LRU page in B1 is evicted. Assume Replace moves a page from T2 to B2 and a new page is brought into T1 (∆t1 = 1, ∆b1 = −1, ∆t2 = −1, ∆b2 = 1).
• The term p is not affected.
• The term t1 is increased by 1.
• Since t1 + t2 = N , there are at least N − NO + 1 pages in T1 ∪ T2 that are not in Opt. If a page q is in T1 \ Opt then its rank decreases by 2 (∆R[q] = 4∆P [q] + 2∆b1 = −2); if the page q is in T2 \ Opt its rank decreases by 2 (∆R[q] = 4∆P [q] + 2∆b2 = −4 + 2 = −2).
3. t1 + b1 < N ; t1 + t2 + b1 + b2 = 2N ; Assume that the LRU page in B2 is evicted and Replace moves
a page from T1 to B1 and a new page is brought into T1 (∆t1 = 0, ∆b1 = 1, ∆t2 = 0, ∆b2 = −1).
• The term p is not affected.
• The term t1 is not affected.
• Here we use the fact that t2 + b2 > N , so at least N − NO + 1 pages in T2 ∪ B2 are not in Opt. If a page q is in T2 \ Opt then its rank decreases by 2 (∆R[q] = 4∆P [q] + 2∆b2 = 4·0 + 2·(−1) = −2); if the page q is in B2 \ Opt its rank decreases by 2 (∆R[q] = 2∆P [q] = 2·(−1) = −2).
4. t1 + b1 < N ; t1 + t2 + b1 + b2 = 2N ; assume that the LRU page in B2 is evicted and Replace moves a page from T2 to B2 and a new page is brought into T1 (∆t1 = 1, ∆b1 = 0, ∆t2 = −1, ∆b2 = 0).
• The term p is not affected.
• The term t1 is increased by 1.
• Here we use the fact that t2 + b2 > N , so at least N − NO + 1 pages in T2 ∪ B2 are not in Opt. If a page q is in T2 \ Opt then its rank decreases by at least 2 (the LRU page of T2 moves to B2 , so ∆P [q] = −1 while ∆b2 = 0); if the page q is in B2 \ Opt its rank decreases by 2 (∆R[q] = 2∆P [q] = 2·(−1) = −2).
5. t1 + b1 < N ; t1 + t2 + b1 + b2 < 2N ; in this case, no pages are evicted from history. Assume that Replace moves a page from T1 to B1 and a new page is brought into T1 (∆t1 = 0, ∆b1 = 1, ∆t2 = 0, ∆b2 = 0).
• The term p is not affected.
• The term t1 is not affected (∆t1 = 0).
• Here we cannot say that the rank decreases; hence the rank term changes by at most 0.
• The term |Arc| increases by 1.
6. t1 + b1 < N ; t1 + t2 + b1 + b2 < 2N ; in this case, no pages are evicted from history. Assume Replace moves a page from T2 to B2 and a new page is brought into T1 (∆t1 = 1, ∆b1 = 0, ∆t2 = −1, ∆b2 = 1).
• The term p is not affected.
• The term t1 is increased by 1, so 2t1 increases by 2.
• Here we cannot say that the rank decreases; hence the rank term changes by at most 0.
• The term |Arc| increases by 1.
In each of the six sub-cases the terms above combine to a total change in potential of at most −1, as required.
Wrapping up the proof of Theorem 7: Combining the four cases (2.1 through 2.4) proves that inequality (9) is satisfied when Arc serves request σ. This completes the proof of Theorem 7, establishing that the upper bound on the competitiveness of Arc is 12N when the caches of Opt and Arc have the same size. For the cases where the size of Arc’s cache is larger than Opt’s, the bound is 12N/(N − NO + 1): the greater the size of Arc’s cache relative to the size of Opt’s cache, the smaller the competitiveness bound for Arc.
4.5 Alternative Analysis of Competitiveness of Arc
Below, we prove an improved upper bound on the competitiveness ratio of Arc. As seen below, the potential
function is considerably different. Let CA and CO be the costs incurred by the algorithms Arc and Opt.
We start with some notation and definitions. If X is the set of pages in a cache, then let M RU (X) and
LRU (X) be the most recently and least recently used pages from X. Let M RUk (X) and LRUk (X) be the
k most recently and k least recently used pages from X.
Let lists L1 (and L2 ) be the lists obtained by concatenating lists T1 and B1 (T2 and B2 , resp.). Let list L be
obtained by concatenating lists L1 and L2 . We let ℓ1 , ℓ2 , t1 , t2 , b1 , b2 denote the sizes of L1 , L2 , T1 , T2 , B1 , B2 ,
respectively. Finally, let t := t1 + t2 and ℓ := ℓ1 + ℓ2 .
At any instant of time during the parallel simulation of Opt and Arc, and for any list X, we let
M RUk (X) be denoted by T OP (X), where k is the largest integer such that all pages of M RUk (X) are also
in the cache maintained by OPT. We let L′1 , L′2 , T1′ , T2′ denote the T OP s of L1 , L2 , T1 , T2 , respectively, with
sizes ℓ′1 , ℓ′2 , t′1 , t′2 , respectively. We let b′1 and b′2 denote the sizes of the B1′ = L′1 ∩ B1 and B2′ = L′2 ∩ B2 ,
respectively. Note that if b′1 > 0 (b′2 > 0, resp.), then all of T1 (T2 , resp.) is in Opt. Finally, we let
ℓ′ := ℓ′1 + ℓ′2 . The Arc algorithm ensures that 0 ≤ t ≤ N , 0 ≤ ℓ ≤ 2N and 0 ≤ ℓ1 ≤ N , thus making
0 ≤ ℓ2 ≤ 2N .
We assume that the algorithm being analyzed is provided an arbitrary request sequence σ = σ1 , σ2 , . . . , σm .
We define the potential function as follows:
Φ = p − (b′1 + 2 · t′1 + 3 · b′2 + 4 · t′2 ).    (10)
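The following Python sketch (ours, for illustration only) computes the primed quantities and the potential (10) from a snapshot. Lists are assumed ordered from LRU to MRU, so TOP(X) corresponds to the longest suffix of X whose pages are all in Opt's cache, and L1 (resp. L2) is represented as B1 followed by T1 (resp. B2 followed by T2).

def top_size(pages_lru_to_mru, opt_cache):
    """|TOP(X)|: length of the longest run of most-recently-used pages of X
    that are all in Opt's cache."""
    k = 0
    for q in reversed(pages_lru_to_mru):          # MRU end first
        if q in opt_cache:
            k += 1
        else:
            break
    return k

def arc_potential_v2(p, t1, t2, b1, b2, opt_cache):
    """Potential (10): Phi = p - (b1' + 2*t1' + 3*b2' + 4*t2')."""
    l1 = b1 + t1                                  # LRU ... MRU
    l2 = b2 + t2
    l1p, l2p = top_size(l1, opt_cache), top_size(l2, opt_cache)
    t1p, t2p = min(l1p, len(t1)), min(l2p, len(t2))
    b1p, b2p = l1p - t1p, l2p - t2p               # B1' = L1' ∩ B1, etc.
    return p - (b1p + 2 * t1p + 3 * b2p + 4 * t2p)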
The main result of this section is the following theorem:
Theorem 8. Algorithm ARC is 4N -competitive.
We say that the cache is full if t = N and either t1 + b1 = N or t2 + b2 ≥ N . We will prove the above
theorem by proving the following inequality for any request σ that is requested after the cache is full:
CA (σ) + ∆Φ ≤ 4N · CO (σ) + 2N,
(11)
where ∆X represents the change in any quantity X. Summing up the above inequality for all requests would
prove the theorem as long as the number of faults prior to the cache becoming full is bounded by the additive
term 2N .
We make the following useful observation about a full cache.
Lemma 5. When the request sequence requests the N -th distinct page, we have t = N , and this remains
an invariant from that point onward. No items are discarded from the cache (main or history) until either
t1 + b1 = N or ℓ1 + ℓ2 = 2N . By the time the request sequence requests the 2N -th distinct page, we have
either t1 + b1 = N or ℓ1 + ℓ2 = 2N .
Proof. Once the request sequence requests the N -th distinct page, it is obvious that we will have t = N ,
since until then, no item is evicted from T1 ∪ T2 ∪ B1 ∪ B2 . (Note that Replace only moves items from
the main part to the history, i.e., from T1 ∪ T2 to B1 ∪ B2 .) Also, until then, p does not change. From that
point forward, the algorithm never evicts any item from T1 ∪ T2 without replacing it with some other item.
Thus, t = N is an invariant once it is satisfied. The history remains empty until the main cache is filled, i.e.,
t = N.
From the pseudocode it is clear that items are discarded from the cache in statements 14, 17, and 21; no
discards happen from the cache until either t1 + b1 = N (statement 12) or ℓ1 + ℓ2 = 2N (statement 20). If
ℓ1 + ℓ2 = 2N is reached, since t1 + b1 ≤ N , we are guaranteed that t2 + b2 ≥ N and b1 + b2 = N , both of
which will remain true from that point onward. Thus, by the time the 2N -th distinct page is requested, we
have reached either t1 + b1 = N or ℓ1 + ℓ2 = 2N .
We assume that request σ is processed in two distinct steps: first when Opt services the page request
and, next when Arc services the request. We will show that inequality (11) is satisfied for each of the two
steps.
Step 1: Opt services request σ
Since only Opt acts in this step, CA = 0, and the contents of Arc’s cache does not change. There are two
possible cases: either Opt faults on σ or it does not. Assume that page x is requested on request σ.
If Opt does not fault on this request, then CO = 0. Since the contents of the cache maintained by Opt
does not change, and neither do the lists L1 and L2 , we have ∆Φ = 0, and CA (σ) + ∆Φ ≤ 4N · CO (σ) ≤ 0.
If Opt faults on request σ, then CO = 1. The contents of the cache maintained by Opt does change,
which will affect the potential function. Opt will bring in page x into its cache. Assume that it evicts page
y from its cache. The entry of page x into Opt’s cache can only decrease the potential function. The exit
of page y from Opt’s cache can increase the potential function by at most 4N . The reason is as follows.
Since the sum of b′1 , b′2 , t′1 , t′2 cannot exceed the size of Opt’s cache, we have 0 ≤ b′1 + t′1 + b′2 + t′2 ≤ N .
Since b′1 + 2t′1 + 3b′2 + 4t′2 ≤ 4(b′1 + t′1 + b′2 + t′2 ) ≤ 4N , this quantity cannot decrease by more than 4N , so Φ cannot increase by more than 4N . Thus, CA (σ) + ∆Φ ≤ 4N , proving inequality (11).
Step 2: Arc services request σ
There are four possible cases, which correspond to the four cases in Arc’s replacement algorithm. Case 1
deals with the case when Arc finds the page in its cache. The other three cases assume that Arc faults
on this request because the item is not in T1 ∪ T2 . Cases 2 and 3 assume that the missing page is found
recorded in the history in lists B1 and B2 , respectively. Case 4 assumes that the missing page is not recorded
in history.
Case I: Arc has a page hit.
Clearly, CA = 0. We consider several subcases. In each case, the requested page will be moved to M RU (T2 )
while shifting other pages in T2 down.
Case I.1 If the requested page is in T1′ , the move of this page from T1′ to T2′ implies ∆t′1 = −1; ∆t′2 = +1
and ∆Φ = −(2 · ∆t′1 + 4 · ∆t′2 ) = −2.
Case I.2 If the requested page is in T2′ , the move of this page to M RU (T2 ) does not change the set of items
in T2′ . Thus, ∆t′1 = ∆t′2 = 0 and ∆Φ = 0.
Case I.3 If the requested page is in T1 − T1′ , then ∆t′1 = 0; ∆t′2 = +1 and ∆Φ = −4. One subtle point to
note is that moving x from T1 − T1′ could potentially increase t′1 if the following conditions are met: x
is located just below T1′ in T1 , it is not in Opt’s cache, and the items in T1 immediately below it are
in Opt. However, x is already in Opt’s cache and there must be some item above it in T1 that is not
in Opt.
Case I.4 If the requested page is in T2 − T2′ , then ∆t′2 = +1 and ∆Φ = −4. The subtle point mentioned in
Case I.3 also applies here.
Next we will analyze the three cases when the requested page is not in Arc’s cache. Since CA = 1, the
change in potential must be at most -1 in order for inequality (11) to be satisfied. We make the following
useful observations in the form of lemmas.
Lemma 6. If Arc has a miss and if the page is not in Arc’s history, we have ℓ′ = t′1 + t′2 + b′1 + b′2 < N .
Consequently, we also have ℓ′1 < N and ℓ′2 < N .
Proof. Since Opt has just finished serving the request, the page is present in the cache maintained by Opt
just before Arc starts to service the request. If Arc has a miss, there is at least one page in the cache
maintained by Opt that is not present in the cache maintained by Arc, implying that l′ < N . By definition,
ℓ′ = ℓ′1 + ℓ′2 = t′1 + t′2 + b′1 + b′2 . Thus, the lemma holds.
Lemma 7. A call to procedure Replace either causes an element to be moved from T1 to B1 or from T2
to B2 . In either case, the change in potential due to Replace, denoted by ∆ΦR , has an upper bound of 1.
Proof. Procedure Replace is only called when Arc has a page miss. Clearly, it causes an item to be moved
from T1 to B1 or from T2 to B2 . If that item is in T1′ (or T2′ ), then T1 = T1′ (T2 = T2′ , resp.) and the moved
item becomes part of B1′ (B2′ , resp.). Because the coefficients of b′1 and t′1 (b′2 and t′2 , resp.) differ by 1, we
have ∆ΦR = +1. On the other hand, if that element is in T1 − T1′ (T2 − T2′ , resp.), then B1′ (B2′ , resp.) was
empty before the move and remains empty after the move, and thus, ∆ΦR = 0.
Lemma 8. On an Arc miss after the cache is full, if T1 = T1′ then the Replace step will not move a page from T2′ to B2 . On the other hand, if T2 = T2′ then Replace will not move a page from T1′ to B1 .
Proof. In an attempt to prove by contradiction, let us assume that T1 = T1′ and T2 = T2′ are simultaneously true and Arc has a miss. By Lemma 5, we know that once the cache is full we have t = t1 + t2 = N , which by our assumption means that t′1 + t′2 = N ; this is impossible by Lemma 6. Thus, if T1 = T1′ , then T2 ≠ T2′ . Consequently, if LRU (T2 ) is moved to B2 , this item cannot be from T2′ . By a symmetric argument, if T2′ = T2 , then T1 ≠ T1′ , and LRU (T1 ) is not in T1′ .
Case II: Arc has a miss and the missing page is in B1
Note that in this case the value of p will change by +1, unless its value equals N , in which case it has no
change. Thus ∆p ≤ 1.
If the missing item is in B1′ , then ∆b′1 = −1 and ∆t′2 = +1. Adding the change due to Replace, we get

∆Φ ≤ 1 − (∆b′1 + 4 · ∆t′2 ) + ∆ΦR ≤ −1.

If the missing item is in B1 − B1′ , then we have ∆t′2 = 1 and ∆b′1 = 0. Thus, we have

∆Φ ≤ 1 − (∆b′1 + 4 · ∆t′2 ) + ∆ΦR ≤ −2.
Case III: Arc has a miss and the missing page is in B2 .
Note that in this case the value of p will change by -1, if its value was positive, otherwise it has no change.
Thus ∆p ≤ 0.
If the requested item is in B2′ , then ∆t′2 = 1 and ∆b′2 = −1. Thus, we have

∆Φ = ∆p − (3 · ∆b′2 + 4 · ∆t′2 ) + ∆ΦR ≤ 0.
But this is not good enough since we need the potential change to be at most −1. When ∆p = −1, we get the required inequality ∆Φ ≤ −1. Clearly, the difficulty is when ∆p = 0, which happens when p = 0. Since the missing item is from B2′ , it follows that B2′ is non-empty and T2′ = T2 . By Lemma 8 above, there must be at least one item in T1 − T1′ , which means that t1 > 0. As per the algorithm, since T1 is non-empty and p = 0, we are guaranteed to replace LRU (T1 ), and not an element from T1′ . Therefore, Replace will leave t′1 and b′1 unchanged, implying that ∆ΦR = 0. Thus, we have

∆Φ = ∆p − (3 · ∆b′2 + 4 · ∆t′2 ) + ∆ΦR ≤ −1.
If the requested item is from B2 − B2′ , then ∆t′2 = 1 and ∆b′2 = 0. Thus, we have

∆Φ ≤ ∆p − (4 · ∆t′2 ) + ∆ΦR ≤ −3.
Case IV: Arc has a miss and the missing page is not in B1 ∪ B2
We consider two cases. First, when ℓ1 = N , Arc will evict LRU (L1 ). Since by Lemma 6, ℓ′1 < N , we know that for this case b′1 remains unchanged at 0 and ∆t′1 = +1. Thus,

∆Φ ≤ −(2 · ∆t′1 ) + ∆ΦR ≤ −1.
On the other hand, if ℓ1 < N , then Arc will evict the LRU (L2 ). Again, if the cache is full (i.e.,
t1 + t2 = N and ℓ1 + ℓ2 = 2N ), then we know that ℓ2 > N , which means that L′2 6= L2 and LRU (L2 ) is
not in L′2 . Thus, deletion of LRU (L2 ) = LRU (B2 ) will not affect b′2 or any of the other quantities in the
potential function. Then comes the Replace step, for which a bound has been proved earlier. Finally, a
new item is brought in and placed in M RU (T1 ). Thus ∆t′1 ≤ 1. Putting it all together, we have
∆Φ ≤ −(2 · ∆t′1 ) + ∆ΦR ≤ −1.
Wrapping up the proof of Theorem 8: Tying it all up, we have shown that inequality (11) holds for every request made after the cache is full, i.e.,

CA (σ) + ∆Φ ≤ 4N · CO (σ).

If we assume that the caches started empty, then the initial potential is 0, while the final potential is at least −4N . Summing the inequality over all requests of S, we have

CA (S) ≤ 4N · CO (S) + 4N,

thus proving Theorem 8.
4.6 Analyzing the Competitiveness of CAR
Next, we analyze the competitiveness of Car. The main result of this section is the following:
Theorem 9. Algorithm Car is 18N -competitive.
Proof. Let PX [q] be the position of page q in an arbitrary ordered sequence of pages X. When the set is
obvious, we will drop the subscript and denote PX [q] simply by P [q]. The set of history pages B1 and B2
will be treated as an ordered sequence of pages ordered from its LRU position to its MRU position. The
set of main pages T10 (resp., T20 , T11 , and T21 ) will be treated as an ordered sequence of unmarked (resp.,
unmarked, marked, and marked) pages in T1 (resp, T2 , T1 , and T2 ) ordered from head to tail. Let Opt and
Car be the set of (main and history) pages stored in the caches for algorithms Opt and Car respectively.
Let D = (T1 ∪ T2 ∪ B1 ∪ B2 ) \ Opt. Thus D consists of pages in Car but not in Opt.
We associate each page with a rank value R[q], which is defined as follows:
R[q] =
    PB1 [q]                 if q ∈ B1 ,
    PB2 [q]                 if q ∈ B2 ,
    2 PT10 [q] + b1         if q ∈ T10 ,
    2 PT20 [q] + b2         if q ∈ T20 ,
    3N + 2 PT11 [q] + b1    if q ∈ T11 ,
    3N + 2 PT21 [q] + b2    if q ∈ T21 .    (12)
Finally, we define the potential function as follows:

Φ = (p + 2(b1 + t1 ) + 3 Σ_{q∈D} R[q]) / (N − NO + 1)    (13)
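An illustrative Python sketch of the rank (12) and potential (13) (our helper, not part of the proof; B1 and B2 are assumed ordered from LRU to MRU, T1 and T2 from head to tail, and marked is the set of marked pages):

def car_rank(q, t1, t2, b1, b2, marked, n):
    """Rank (12).  Positions are 1-based; for T1/T2 the position is taken
    within the subsequence of pages with the same mark status, head first."""
    if q in b1:
        return b1.index(q) + 1
    if q in b2:
        return b2.index(q) + 1
    t, b = (t1, len(b1)) if q in t1 else (t2, len(b2))
    same_mark = [x for x in t if (x in marked) == (q in marked)]
    pos = same_mark.index(q) + 1
    return (3 * n if q in marked else 0) + 2 * pos + b

def car_potential(p, t1, t2, b1, b2, marked, opt_cache, n, n_o):
    """Potential (13)."""
    d = [q for q in t1 + t2 + b1 + b2 if q not in opt_cache]
    rank_sum = sum(car_rank(q, t1, t2, b1, b2, marked, n) for q in d)
    return (p + 2 * (len(b1) + len(t1)) + 3 * rank_sum) / (n - n_o + 1)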
The initial value of Φ is 0. If the following inequality (14) is true for any request σ, where ∆Φ is the change
in potential caused by serving the request, then when summed over all requests, it proves Theorem 9.
CCar (σ) + ∆Φ ≤ 18N · COpt (σ) / (N − NO + 1).    (14)
As before, we assume that request σ is processed in two distinct steps: first when Opt serves and, next
when Car serves. We will show that inequality (14) is satisfied for each of the two steps.
Step 1: Opt serves request σ
Since only Opt acts in this step, CCar = 0, and T1 ∪ T2 does not change. There are two possible cases:
either Opt faults on σ or it does not. If Opt does not fault on this request, then it is easy to see that
COpt = 0 and ∆Φ = 0, thus satisfying inequality (14).
If Opt faults on request σ, then COpt = 1 and some page q is evicted from the cache maintained by Opt. If q is tracked by Car then q will belong to D after this step, and thus its rank will contribute to the potential function, which increases by three times the rank of q divided by N − NO + 1. The maximal positive change in potential occurs when q is the marked head page in T2 . In this case the rank of q is given by R[q] = 3N + 2P [q] + b2 . The maximal possible values of the terms P [q] and b2 are N , hence the maximum possible rank of q is 3N + 2N + N = 6N . Therefore the resulting potential change is at most 3(6N )/(N − NO + 1) = 18N/(N − NO + 1).
Step 2: Car serves request σ
We break down the analysis into four cases. Case 2.1 deals with the case when Car finds the page in its
cache. The other three cases assume that Car faults on this request because the item is not in T1 ∪T2 . Cases
2.2 and 2.3 assume that the missing page is found recorded in the history in lists B1 and B2 , respectively.
Case 2.4 assumes that the missing page is not recorded in history.
Case 2.1: Car has a page hit. Clearly, the page was found in T1 ∪ T2 , and CCar = 0. We consider the change of each of the terms in the potential function individually.
1. As per the algorithm, p can only change when the page is found in history. (See lines 14 through 20 of
Car(x).) Since the page is not found in Car’s history, ∆p = 0.
2. Neither the cache nor the history lists maintained by Car will change. Thus, the contribution to the
second term in Φ, i.e., 2(b1 + t1 ) does not change.
3. Since Opt has already served the page, the page is in Opt’s cache. Therefore, even if the page gets
marked during this hit, its rank value does not change. Thus, the contribution to the last term in Φ,
also remains unchanged.
We, therefore, conclude that ∆Φ = 0, satisfying inequality (14).
Next we will analyze the three cases when the requested page is not in Car’s cache. Since CCar = 1, the change in potential must be at most −1 in each case in order for inequality (14) to be satisfied. Before tackling the three cases, the following lemmas (9 and 10) are useful for understanding the potential change caused by the last term in the potential function, i.e., Σ_{q∈D} R[q]. It is worth pointing out that a call to Replace moves either an item from T1 to B1 or from T2 to B2 , which is exactly the premise of Lemma 9 below.
Lemma 9. When a page is moved from T1 to B1 (or from T2 to B2 ) its rank decreases by at least 1.
Proof. Let q be any page in T1 . In order for q to be moved from T1 to B1 it must have been unmarked
and located at the head of T1 . Since PT1 [q] = 1, the rank of q prior to the move must have been R[q] =
2PT1 [q] + b1 = b1 + 2, where b1 is the size of B1 prior to moving q.
After q is moved to the MRU position of B1 , R[q] = PB1 [q] = b1 + 1. Thus its rank decreased by 1. The
arguments for the move from T2 to B2 are identical with the appropriate changes in subscripts.
Lemma 10. When Car has a page miss, the term Σ_{q∈D} R[q] in the potential function Φ cannot increase.
Proof. We examine the rank change based on the original location of the page(s) whose ranks changed and
in each case show that the rank change is never positive. Wherever appropriate we have provided references
to line numbers in Pseudocode Car(x) from Appendix.
Case A: q ∈ B1 ∪ B2
The rank of q ∈ B1 , which is simply its position in B1 , can change in one of three different ways.
1. Some page x less recently used than q (i.e., PB1 [x] < PB1 [q]) was evicted (Line 7). In this case,
it is clear that PB1 [q] decreases by at least 1.
2. The page q is the requested page and is moved to T2 (Line 16). In this case, q ∈ Opt and hence
its rank cannot affect the potential function.
3. Some page x is added to MRU of B1 (Line 27). Since pages are ordered from LRU to MRU, the
added page cannot affect the rank of q.
Using identical arguments for q ∈ B2 , we conclude that a miss will not increase the rank of any page
in B1 ∪ B2 .
Case B: q ∈ T10 ∪ T20
The rank of page q ∈ T10 , defined as R[q] = 2PT10 [q] + b1 , may be affected in four different ways.
1. If page q is the head of T1 and gets moved to B1 (Line 27), by lemma 9, the change in rank of q
is at most −1.
2. If an unmarked page x is added to the tail of T1 (Line 13), then since the ordering is from head
to tail, it does not affect the position of page q. Since there was no change in b1 , it is clear that
the change in R[q] is 0.
3. If the marked page x ≠ q at the head of T1 is unmarked and moved to the tail of T2 (Line 29), then P [q] decreases by at least 1. Since the content of B1 is unchanged, the change in R[q] = 2P [q] + b1 is at most -2.
4. If the unmarked page x ≠ q at the head of T1 is moved to B1 (Line 27), then P [q] decreases by at least 1, and b1 increases by 1. Hence the change in R[q] = 2P [q] + b1 is at most -1.
The arguments are identical for q ∈ T20 . In each case, we have shown that a miss will not increase the
rank of any page in T10 ∪ T20 .
Case C: q ∈ T11
The rank of page q ∈ T11 , defined as R[q] = 3N + 2PT11 [q] + b1 , may be affected in four different ways.
1. If an unmarked page x is added to the tail of T1 (Line 13), then since the ordering is from head
to tail, it does not affect the position of page q. Since there was no change in b1 , it is clear that
the change in R[q] is 0.
2. If the marked page x ≠ q at the head of T1 is unmarked and moved to the tail of T2 (Line 29), then P [q] decreases by at least 1. Since B1 is unchanged, the change in R[q] = 3N + 2P [q] + b1 is at most -2.
3. If the unmarked page x ≠ q at the head of T1 is moved to B1 (Line 27), then P [q] decreases by at least 1, and b1 increases by 1. Hence the change in R[q] = 3N + 2P [q] + b1 is at most -1.
4. Next, we consider the case when the marked page q is the head of T1 and gets unmarked and
moved to T2 (Line 29). Prior to the move, the rank of q is given by R[q] = 3N + 2PT11 [q] + b1 .
Since B1 could be empty, we know that R[q] ≥ 3N + 2. After page q is unmarked and moved to
T2 , its rank is given by R[q] = 2PT20 [q] + b2 . Since P [q] ≤ N and b2 ≤ N , we know that the new
R[q] ≤ 3N . Thus, the rank of page q does not increase.
In each case, we have shown that a miss will not increase the rank of any page in T11 .
Case D: q ∈ T21
The rank of page q ∈ T21 , defined as R[q] = 3N + 2PT21 [q] + b2 , may be affected in four different ways.
1. If an unmarked page x is added to the tail of T2 (Lines 16, 19, or 29), and if b2 does not change,
it is once again clear that the change in R[q] is 0.
2. If a marked page x ≠ q at the head of T2 gets unmarked and moved to the tail of T2 (Line 36), the position of q will decrease by 1 and there is no change in b2 . Thus R[q] changes by at most -2.
3. If an unmarked page x at the head of T2 is moved to B2 (Line 34), P [q] decreases by 1 and b2
increases by 1. Thus R[q] changes by at most -1.
4. Finally, we consider the case when the marked page q is the head of T2 and gets unmarked
and moved to the tail of T2 (Line 36). Prior to the move, the rank of q is given by R[q] =
3N + 2PT21 [q] + b2 . Even if B2 is empty, we know that R[q] ≥ 3N + 2. After page q is unmarked
and moved to T2 , its rank is given by R[q] = 2PT20 [q] + b2 . Since P [q] ≤ N and b2 ≤ N , we know
that the new R[q] ≤ 3N . Thus, the rank of page q does not increase.
In each case, we have shown that a miss will not increase the rank of any page in T21 .
The four cases (A through D) together complete the proof of Lemma 10.
We continue with the remaining cases for the proof of Theorem 9.
Case 2.2: Car has a page miss and the missing page is in B1 We consider the change in the
potential function (defined in Eq. 13) by analyzing each of its three terms.
1. Value of p increases by 1, except when it is equal to N , in which case it remains unchanged. (See Line
15.) Thus, the first term increases by at most 1.
2. The call to Replace has no effect on the value of (t1 + b1 ) because an item is moved either from T1
to B1 or from T2 to B2 . Since the requested page in B1 is moved to T2 , (t1 + b1 ) decreases by 1.
3. By Lemma 10, we already know that the last term increases by at most 0.
Since p increases by at most 1 and the term 2(t1 + b1 ) decreases by at least 2, the total change in the potential function is at most −1.
Case 2.3: Car has a page miss and the missing page is in B2 When the missing page is in B2 ,
Car makes a call to Replace (Line 5) and then executes Lines 18-19. Thus, p is decremented except if it
is already equal to 0. We consider two subcases: ∆p < 0 and ∆p = 0.
∆p < 0: As in Case 2.2, the call to Replace has no effect on (t1 + b1 ). Since Lines 18–19 do not affect T1 ∪ B1 , the second term does not change. By Lemma 10, we know that the last term does not increase. Since ∆p ≤ −1, the total change in the potential function is at most −1.
∆p = 0: Unlike the subcase above when p decreases by 1, the change in p cannot guarantee the required
reduction in the potential. We therefore need a tighter argument. We know that there is a call to Replace.
Three cases arise and are discussed below.
• If T1 is empty, then T2 must have N pages, at least one of which must be in D. Also, Replace must
act on T2 , eventually evicting an unmarked page from head of T2 , causing the rank of any page from
T2 \ Opt to decrease by 1.
• If T1 is not empty and has at least one page from D, then the condition in Line 24 passes and Replace
must act on T1 , eventually evicting an unmarked page from head of T1 , causing the rank of at least
one page from T1 \ Opt to decrease by 1.
• Finally, if T1 is not empty and all its pages are in Opt, then T2 must have a page q ∈ D. Since the
requested page x was found in B2 and is moved to the tail of T2 , even though the position of q in T2
does not change, b2 decreased by 1 and consequently the rank of q decreases by 1.
Thus, in each case, even though neither p nor the quantity (t1 + b1 ) changed, the third term involving ranks, and consequently the potential function, decreased by at least 3.
The following lemma, consisting of two claims, is useful for Case 2.4, when the missing page is not in T1 ∪ T2 ∪ B1 ∪ B2 .
Lemma 11. We make two claims:
1. If t1 + b1 = N and the LRU page of B1 is evicted from the cache on Line 7, then Σ_{q∈D} R[q] will decrease by at least one.
2. If t2 + b2 > N and the LRU page of B2 is evicted from the cache on Line 9, then Σ_{q∈D} R[q] will decrease by at least one.
Proof. We tackle the first claim. Assume that y is the LRU page of B1 that is being evicted on Line 7. Then
Car must have had a page miss on x 6∈ B1 ∪ B2 , and the requested page x is added to the tail of T1 . Since
t1 + b1 = N , there is at least one page q ∈ T1 ∪ B1 that is not in Opt’s cache and whose rank contributes
to the potential function. First, we assume that q ∈ T1 \ Opt, whose rank is given by: R[q] = 2 ∗ P [q] + b1 .
For each of the three cases, we show that the potential function does decrease by at least 1.
• If Replace acts on T1 and the unmarked head of T1 , different from q, is moved to B1 then the size of
B1 remains the same (because a page gets added to B1 while another page is evicted) but the position
of q in T1 decreases by one. Therefore R[q] decreases by 2.
• If Replace acts on T1 and q itself is moved to B1 then by Lemma 9, R[q] decreases by at least 1.
• If Replace acts on T2 , then we use the fact that a page is evicted from B1 , and the b1 term in R[q]
must decrease by 1.
Next, we assume that q ∈ B1 \ Opt. Since LRU (B1 ) is evicted, the position of the page q will decrease by
one. Thus R[q] = PB1 [q] must decrease by at least 1, completing the proof of the first claim in the lemma.
The proof of the second claim is very similar and only requires appropriate changes to the subscripts.
Next we tackle the last case in the proof of Theorem 9.
Case 2.4: Car has a page miss and the missing page is not in B1 ∪ B2 We assume that Car’s
cache is full (i.e., l1 + l2 = 2N ). We consider two cases below – first, if l1 = N and the next when l1 < N .
If l1 = t1 + b1 = N , Car will call Replace, evict LRU (B1 ) and then add the requested page to the tail
of T1 . Below, we analyze the changes to the three terms in the potential function.
• Since p is not affected, the first term does not change.
• Since a page is added to T1 and a page is evicted from B1 , the net change in the second term is 0.
• Since the conditions of Lemma 11 apply, the total rank will decrease by at least 1.
Adding up all the changes, we conclude that the potential function decreases by at least 3.
If l1 < N , Car will call Replace, evict LRU (B2 ) and then add a page to the tail of T1 . As above, we
analyze the changes to the three terms in the potential function.
• Since p is not affected, the first term does not change.
• A page is added to T1 and a page is evicted from B2 , hence (t1 + b1 ) increases by 1.
• Since l2 > N , the conditions of Lemma 11 apply, so the total rank will decrease by at least 1.
Adding up all the changes, we conclude that the potential function decreases by at least 1, thus completing
Case 2.4.
Wrapping up the proof of Theorem 9: Combining the four cases (2.1 through 2.4) proves that inequality (14) is satisfied when Car serves request σ. This completes the proof of Theorem 9, establishing that the upper bound on the competitiveness of Car is 18N .
5 Conclusions and Future Work
Adaptive algorithms are tremendously important in situations where inputs are infinite online sequences
and no single optimal algorithm exists for all inputs. Thus, different portions of the input sequence require
different algorithms to provide optimal responses. Consequently, it is incumbent upon the algorithm to sense
changes in the nature of the input sequence and adapt to these changes. Unfortunately, these algorithms are
harder to analyze. We present analyses of two important adaptive algorithms, Arc and Car, showing that they are competitive and proving lower bounds on their competitiveness ratios.
Two important open questions remain unanswered. Given that there is a gap between the lower and
upper bounds on the competitiveness ratios of the two adaptive algorithms, Arc and Car, what is the true
ratio? More importantly, is there an “expected” competitiveness ratio for request sequences that come from
real applications? The second question would help explain why Arc and Car perform better in practice
than Lru and Clock, respectively.
Acknowledgments This work was partly supported by two NSF Grants (CNS-1018262 and CNS-1563883)
and the NSF Graduate Research Fellowship (DGE-1038321). We are grateful to Kirk Pruhs for suggesting
enhancing our results with the assumption of unequal cache sizes.
References

[Alb96] S. Albers. Competitive online algorithms. Technical report, BRICS Lecture Series, Computer Science Department, University of Aarhus, 1996.
[BM04] S. Bansal and D. S. Modha. CAR: CLOCK with adaptive replacement. In Proceedings of the 3rd USENIX Conference on File and Storage Technologies, FAST '04, pages 187–200, Berkeley, CA, USA, 2004. USENIX Association.
[Cor68] F. J. Corbato. A paging experiment with the MULTICS system. Technical report, DTIC Document, 1968.
[Fri99] M. B. Friedman. Windows NT page replacement policies. In Proceedings of the Intl. CMG Conference, pages 234–244, 1999.
[Hoc97] D. S. Hochbaum, editor. Approximation Algorithms for NP-hard Problems. PWS Publishing Co., Boston, MA, USA, 1997.
[JCZ05] S. Jiang, F. Chen, and X. Zhang. CLOCK-Pro: An effective improvement of the CLOCK replacement. In USENIX Annual Technical Conference, General Track, pages 323–336, 2005.
[JIPP10] A. Janapsatya, A. Ignjatovic, J. Peddersen, and S. Parameswaran. Dueling CLOCK: Adaptive cache replacement policy based on the CLOCK algorithm. In Design, Automation & Test in Europe Conference & Exhibition (DATE), 2010, pages 920–925. IEEE, 2010.
[JS94] T. Johnson and D. Shasha. 2Q: A low overhead high performance buffer management replacement algorithm. In Proc. of VLDB, pages 297–306, 1994.
[JZ02] S. Jiang and X. Zhang. LIRS: An efficient low inter-reference recency set replacement policy to improve buffer cache performance. In Proc. ACM Sigmetrics Conf., pages 297–306. ACM Press, 2002.
[LCK+01] D. Lee, J. Choi, J. H. Kim, S. H. Noh, S. L. Min, Y. Cho, and C. S. Kim. LRFU: A spectrum of policies that subsumes the least recently used and least frequently used policies. IEEE Trans. Comput., 50(12):1352–1361, December 2001.
[MM03] N. Megiddo and D. S. Modha. ARC: A self-tuning, low overhead replacement cache. In Proceedings of the 2nd USENIX Conference on File and Storage Technologies, FAST '03, pages 115–130, Berkeley, CA, USA, 2003. USENIX Association.
[MM04] N. Megiddo and D. S. Modha. Outperforming LRU with an adaptive replacement cache algorithm. IEEE Computer, 37(4):58–65, 2004.
[OOW93] E. J. O'Neil, P. E. O'Neil, and G. Weikum. The LRU-K page replacement algorithm for database disk buffering. SIGMOD Rec., 22(2):297–306, June 1993.
[ST85] D. D. Sleator and R. E. Tarjan. Amortized efficiency of list update and paging rules. Commun. ACM, 28(2):202–208, February 1985.
6 Appendix
We reproduce the pseudocode for Arc and Car below.
Pseudocode: Arc(x)
INPUT: The requested page x
INITIALIZATION: Set p = 0 and set lists T1 , B1 , T2 , and B2 to empty

1:  if (x is in T1 ∪ T2 ) then                                        ⊲ cache hit
2:      Move x to the top of T2
3:  else if (x is in B1 ) then                                        ⊲ cache history hit
4:      Adaptation: Update p = min{p + 1, N }                         ⊲ learning rate = 1
5:      Replace()                                                     ⊲ make space in T1 or T2
6:      Fetch x and move to the top of T2
7:  else if (x is in B2 ) then                                        ⊲ cache history hit
8:      Adaptation: Update p = max{p − 1, 0}                          ⊲ learning rate = 1
9:      Replace()                                                     ⊲ make space in T1 or T2
10:     Fetch x and move to the top of T2
11: else                                                              ⊲ cache and history miss
12:     if (t1 + b1 = N ) then
13:         if (t1 < N ) then
14:             Discard LRU item in B1
15:             Replace()                                             ⊲ make space in T1 or T2
16:         else
17:             Discard LRU page in T1 and remove from cache
18:         end if
19:     else if ((t1 + b1 < N ) and (t1 + t2 + b1 + b2 ≥ N )) then
20:         if (t1 + t2 + b1 + b2 = 2N ) then
21:             Discard LRU item in B2
22:         end if
23:         Replace()                                                 ⊲ make space in T1 or T2
24:     end if
25:     Fetch x and move to the top of T1
26: end if

Replace()
26: if ((t1 ≥ 1) and ((x ∈ B2 and t1 = p) or (t1 > p))) then
27:     Discard LRU page in T1 and insert as MRU history item in B1
28: else
29:     Discard LRU page in T2 and insert as MRU history item in B2
30: end if
Pseudocode: Car(x)
INPUT: The requested page x
INITIALIZATION: Set p = 0 and set lists T1, B1, T2, and B2 to empty

if (x is in T1 ∪ T2) then                                        ⊲ cache hit
    Mark page x
else                                                             ⊲ cache miss
    if (t1 + t2 = N) then                                        ⊲ cache full, replace a page from cache
        Replace()                                                ⊲ make space in T1 or T2
        if ((x ∉ B1 ∪ B2) and (t1 + b1 = N)) then
            Discard LRU page in B1
        else if ((x ∉ B1 ∪ B2) and (t1 + t2 + b1 + b2 = 2N)) then
            Discard LRU page in B2
        end if
    end if
    if (x ∉ B1 ∪ B2) then                                        ⊲ cache miss
        Insert x at the tail of T1; Unmark page x
    else if (x ∈ B1) then                                        ⊲ cache history hit
        Adaptation: Update p = min{p + 1, N}                     ⊲ learning rate = 1
        Move x to the tail of T2; Unmark page x
    else                                                         ⊲ cache history hit
        Adaptation: Update p = max{p − 1, 0}                     ⊲ learning rate = 1
        Move x to the tail of T2; Unmark page x
    end if
end if

Replace()
found = false
repeat
    if (t1 ≥ max{1, p}) then
        if (head page in T1 is unmarked) then
            found = true
            Discard head page in T1 and insert as MRU history item in B1
        else
            Unmark head page in T1, move page as tail page in T2, and move head of T1 clockwise
        end if
    else
        if (head page in T2 is unmarked) then
            found = true
            Discard head page in T2 and insert as MRU history item in B2
        else
            Unmark head page in T2, and move head of T2 clockwise
        end if
    end if
until (found)
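To make the clock sweep in Car's Replace() concrete, the following Python fragment is a minimal executable sketch of just that procedure. It is our own illustration rather than code from the Car authors: the clock lists T1 and T2 are modeled as deques of [page, marked] pairs whose left end plays the role of the clock head, B1 and B2 as deques of history pages, and all other bookkeeping of Car(x) is omitted.

from collections import deque

def car_replace(T1, T2, B1, B2, p):
    """Sweep the clock hands of T1/T2 until an unmarked page is evicted
    into the corresponding history list (cf. Replace() in the Car pseudocode)."""
    while True:
        if len(T1) >= max(1, p):
            page, marked = T1.popleft()       # head page of T1
            if not marked:
                B1.append(page)               # discard and insert as MRU history item in B1
                return
            T2.append([page, False])          # unmark and move to the tail of T2
        else:
            page, marked = T2.popleft()       # head page of T2
            if not marked:
                B2.append(page)               # discard and insert as MRU history item in B2
                return
            T2.append([page, False])          # unmark; the head of T2 moves clockwise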
| 8 |
Further and stronger analogy between sampling and optimization:
Langevin Monte Carlo and gradient descent
Arnak S. Dalalyan    ARNAK.DALALYAN@ENSAE.FR
ENSAE/CREST/Université Paris Saclay
arXiv:1704.04752v2 [math.ST] 28 Jul 2017
Abstract
In this paper, we revisit the recently established theoretical guarantees for the convergence of the
Langevin Monte Carlo algorithm of sampling from a smooth and (strongly) log-concave density.
We improve the existing results when the convergence is measured in the Wasserstein distance and
provide further insights on the very tight relations between, on the one hand, the Langevin Monte
Carlo for sampling and, on the other hand, the gradient descent for optimization. Finally, we also
establish guarantees for the convergence of a version of the Langevin Monte Carlo algorithm that
is based on noisy evaluations of the gradient.
Keywords: Markov Chain Monte Carlo, Approximate sampling, Rates of convergence, Langevin
algorithm, Gradient descent
1. Introduction
p
Let
R p be a positive integer and f : R → R be a measurable function such that the integral
Rp exp{−f (θ)} dθ is finite. In various applications, one is faced with the problems of finding
the minimum point of f or computing the average with respect to the probability density
π(θ) = R
e−f (θ)
.
−f (u) du
Rp e
In other words, one often looks for approximating the values θ* and θ̄ defined as
$$\bar{\theta} = \int_{\mathbb{R}^p} \theta\, \pi(\theta)\, d\theta, \qquad \theta^* \in \arg\min_{\theta \in \mathbb{R}^p} f(\theta).$$
In most situations, the approximations of these values are computed using iterative algorithms which
share many common features. There is a vast variety of such algorithms for solving both tasks,
see for example (Boyd and Vandenberghe, 2004) for optimization and (Atchadé et al., 2011) for
approximate sampling. The similarities between the task of optimization and that of averaging have
been recently exploited in the papers (Dalalyan, 2014; Durmus and Moulines, 2016; Durmus et al.,
2016) in order to establish fast and accurate theoretical guarantees for sampling from and averaging
with respect to the density π using the Langevin Monte Carlo algorithm. The goal of the present
work is to push further this study both by improving the existing bounds and by extending them in
some directions.
1. This paper has been published in proceedings of COLT 2017. However, this version is more recent. We have corrected
some typos (2/(m + M ) instead of 1/(m + M ) on pages 3-4) and slightly improved the upper bound of Theorem 3.
We will focus on strongly convex functions f having a Lipschitz continuous gradient. That is,
we assume that there exist two positive constants m and M such that
$$f(\theta) - f(\theta') - \nabla f(\theta')^\top (\theta - \theta') \ge \frac{m}{2}\,\|\theta - \theta'\|_2^2, \qquad \|\nabla f(\theta) - \nabla f(\theta')\|_2 \le M\,\|\theta - \theta'\|_2, \qquad \forall\, \theta, \theta' \in \mathbb{R}^p, \tag{1}$$
where ∇f stands for the gradient of f and k · k2 is the Euclidean norm. We say that the density π(θ) ∝ e−f (θ) is log-concave (resp. strongly log-concave) if the function f satisfies the first
inequality of (1) with m = 0 (resp. m > 0).
The Langevin Monte Carlo (LMC) algorithm studied throughout this work is the analogue of
the gradient descent algorithm for optimization. Starting from an initial point ϑ(0) ∈ Rp that may
be deterministic or random, the iterations of the algorithm are defined by the update rule
$$\vartheta^{(k+1,h)} = \vartheta^{(k,h)} - h\,\nabla f(\vartheta^{(k,h)}) + \sqrt{2h}\; \xi^{(k+1)}; \qquad k = 0, 1, 2, \ldots \tag{2}$$
where h > 0 is a tuning parameter, referred to as the step-size, and ξ (1) , . . . , ξ (k) , . . . is a sequence
of mutually independent, and independent of ϑ(0) , centered Gaussian vectors with covariance matrices equal to identity. Under the assumptions imposed on f , when h is small and k is large (so that
the product kh is large), the distribution of ϑ(k,h) is close in various metrics to the distribution with
density π(θ), hereafter referred to as the target distribution. An important question is to quantify
this closeness; this might be particularly useful for deriving a stopping rule for the LMC algorithm.
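The update rule (2) is only a few lines of code. The snippet below is a minimal numpy sketch of the LMC iterations, written for illustration; the standard Gaussian target used in the example (for which m = M = 1) is our own choice and is not taken from the paper.

import numpy as np

def lmc(grad_f, theta0, h, K, rng=None):
    """Langevin Monte Carlo, rule (2):
    theta_{k+1} = theta_k - h * grad_f(theta_k) + sqrt(2h) * xi_{k+1}."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    for _ in range(K):
        xi = rng.standard_normal(theta.shape)
        theta = theta - h * grad_f(theta) + np.sqrt(2.0 * h) * xi
    return theta

# Example: one approximate draw from N(0, I_p), where f(theta) = ||theta||^2 / 2.
p = 10
draw = lmc(grad_f=lambda th: th, theta0=np.zeros(p), h=0.1, K=1000)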
The measure of approximation used in this paper is the Wasserstein-Monge-Kantorovich distance W2 . For two measures µ and ν defined on (Rp , B(Rp )), W2 is defined by
$$W_2(\mu, \nu) = \Big(\inf_{\gamma \in \Gamma(\mu,\nu)} \int_{\mathbb{R}^p \times \mathbb{R}^p} \|\theta - \theta'\|_2^2\; d\gamma(\theta, \theta')\Big)^{1/2},$$
where the inf is with respect to all joint distributions γ having µ and ν as marginal distributions.
This distance is perhaps more suitable for quantifying the quality of approximate sampling schemes
than other metrics such as the total variation. Indeed, on the one hand, bounds on the Wasserstein
distance (unlike the bounds on the total-variation distance) directly provide the level of approximation of the first order moment. For instance, if µ and ν are two Dirac measures at the points θ and θ′, respectively, then the total-variation distance D_TV(δ_θ, δ_{θ′}) equals one whenever θ ≠ θ′, whereas W2(δ_θ, δ_{θ′}) = ‖θ − θ′‖₂ is a smoothly increasing function of the Euclidean distance between θ and θ′. This seems to better correspond to the intuition on the closeness of two distributions.
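As a small illustration of the definition (our own, and restricted to the one-dimensional case, where the optimal coupling simply matches sorted samples), the W2 distance between two empirical measures with the same number of equally weighted atoms can be computed as follows.

import numpy as np

def w2_empirical_1d(x, y):
    """W2 between two empirical measures on the real line with equally many
    atoms of equal weight: the optimal coupling pairs the sorted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape
    return np.sqrt(np.mean((x - y) ** 2))

# Two Dirac masses: W2(delta_a, delta_b) = |a - b| grows smoothly with the gap,
# whereas the total-variation distance jumps to 1 as soon as a differs from b.
print(w2_empirical_1d([0.0], [0.3]))   # 0.3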
2. Improved guarantees for the Wasserstein distance
The rationale behind the LMC algorithm (2) is simple: the Markov chain {ϑ(k,h) }k∈N is the Euler
discretization of a continuous-time diffusion process {Lt : t ∈ R+ }, known as Langevin diffusion,
that has π as invariant density (Bhattacharya, 1978, Thm. 3.5). The Langevin diffusion is defined
by the stochastic differential equation
√
dLt = −∇f (Lt ) dt + 2 dW t ,
t ≥ 0,
(3)
where {W t : t ≥ 0} is a p-dimensional Brownian motion. When f satisfies condition (1), equation
(3) has a unique strong solution which is a Markov process. Let νk be the distribution of the k-th
iterate of the LMC algorithm, that is ϑ(k,h) ∼ νk .
Theorem 1 Assume that h ∈ (0, 2/M). The following claims hold:
(a) If h ≤ 2/(m+M) then $W_2(\nu_K, \pi) \le (1 - mh)^K\, W_2(\nu_0, \pi) + 1.82\,(M/m)\,(hp)^{1/2}$.
(b) If h ≥ 2/(m+M) then $W_2(\nu_K, \pi) \le (Mh - 1)^K\, W_2(\nu_0, \pi) + \dfrac{1.82\, Mh}{2 - Mh}\,(hp)^{1/2}$.
The proof of this theorem is postponed to Section 6. We content ourselves here by discussing
the relation of this result to previous work. Note that if the initial value ϑ(0) = θ (0) is deterministic
then, according to (Durmus and Moulines, 2016, Theorem 1), we have
$$W_2^2(\nu_0, \pi) = \int_{\mathbb{R}^p} \|\theta^{(0)} - \theta\|_2^2\; \pi(d\theta) = \|\theta^{(0)} - \bar{\theta}\|_2^2 + \int_{\mathbb{R}^p} \|\bar{\theta} - \theta\|_2^2\; \pi(d\theta) \le \|\theta^{(0)} - \bar{\theta}\|_2^2 + p/m. \tag{4}$$
First of all, let us remark that if we choose h and K so that
$$h \le \frac{2}{m+M}, \qquad e^{-mhK}\, W_2(\nu_0, \pi) \le \varepsilon/2, \qquad 1.82\,(M/m)\,(hp)^{1/2} \le \varepsilon/2, \tag{5}$$
then we have W2(νK, π) ≤ ε. In other words, conditions (5) are sufficient for the density of the output of the LMC algorithm with K iterations to be within the precision ε of the target density when the precision is measured using the Wasserstein distance. This readily yields
$$h \le \frac{m^2 \varepsilon^2}{14\, M^2 p} \wedge \frac{2}{m+M} \qquad \text{and} \qquad hK \ge \frac{1}{m}\,\log\Big(\frac{2\,(\|\theta^{(0)} - \bar{\theta}\|_2^2 + p/m)^{1/2}}{\varepsilon}\Big).$$
Assuming m, M and ‖θ^(0) − θ̄‖₂²/p to be constants, we can deduce from the last display that it suffices to perform K = C p ε⁻² log(p/ε) iterations in order to reach the precision level ε. This
fact has been first established in (Dalalyan, 2014) for the LMC algorithm with a warm start and
the total-variation distance. It was later improved by Durmus and Moulines (2016), who showed
that the same result holds for any starting point and established similar bounds for the Wasserstein
distance.
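Conditions (5), combined with the bound (4) on W2(ν0, π), translate directly into a sufficient step-size and iteration count. The helper below is a small numerical sketch of this computation (our own; it uses the constant 14 from the display above and the crude bound W2(ν0, π) ≤ (‖θ^(0) − θ̄‖₂² + p/m)^{1/2}).

import math

def lmc_schedule(m, M, p, eps, dist0_sq):
    """Sufficient (h, K) implied by conditions (5);
    dist0_sq stands for ||theta^(0) - theta_bar||_2^2."""
    h = min(m**2 * eps**2 / (14.0 * M**2 * p), 2.0 / (m + M))
    w2_0 = math.sqrt(dist0_sq + p / m)               # bound (4) on W2(nu_0, pi)
    K = math.ceil(math.log(2.0 * w2_0 / eps) / (m * h))
    return h, K

print(lmc_schedule(m=4.0, M=5.0, p=100, eps=0.1, dist0_sq=100.0))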
In order to make the comparison easier, let us recall below the corresponding result from² (Durmus and Moulines, 2016). It asserts that under condition (1), if h ≤ 2/(m+M) then
$$W_2^2(\nu_K, \pi) \le 2\Big(1 - \frac{mMh}{m+M}\Big)^{K} W_2^2(\nu_0, \pi) + \frac{Mhp}{m}\,(m+M)\Big(h + \frac{m+M}{2mM}\Big)\Big(2 + \frac{M^2 h}{m} + \frac{M^2 h^2}{6}\Big). \tag{6}$$
When we compare this inequality with the claims of Theorem 1, we see that
i) Theorem 1 holds under weaker conditions: h ≤ 2/M instead of h ≤ 2/(m+M ).
ii) The analytical expressions of the upper bounds on the Wasserstein distance in Theorem 1 are
not as involved as those of (6).
2. We slightly adapt the original result taking into account the fact that we are dealing with the LMC algorithm with a
constant step.
Figure 1: The curves of the functions p ↦ log K(p), where K(p) is the number of steps (derived either from our bound or from the bound (6) of (Durmus and Moulines, 2016)) sufficing for reaching the precision level ε (for ε = 0.1 and ε = 0.3).
iii) If we take a closer look, we can check that when h ≤ 2/(m+M ), the upper bound in part (a) of
Theorem 1 is sharper than that of (6).
In order to better illustrate the claim in iii) above, we consider a numerical example in which
m = 4, M = 5 and ‖θ^(0) − θ̄‖₂² = p. Let F_our(h, K, p) and F_DM(h, K, p) be the upper bounds on W2(νK, π) provided by Theorem 1 and (6). For different values of p, we compute
$$K_{\mathrm{our}}(p) = \min\big\{K : \exists\, h \le 2/(m+M) \text{ such that } F_{\mathrm{our}}(h, K, p) \le \varepsilon\big\},$$
$$K_{\mathrm{DM}}(p) = \min\big\{K : \exists\, h \le 2/(m+M) \text{ such that } F_{\mathrm{DM}}(h, K, p) \le \varepsilon\big\}.$$
The curves of the functions p ↦ log K_our(p) and p ↦ log K_DM(p), for ε = 0.1 and ε = 0.3 are
plotted in Figure 1. We can deduce from these plots that the number of iterations yielded by our
bound is more than 5 times smaller than the number of iterations recommended by bound (6) of
Durmus and Moulines (2016).
Remark 2 Although the upper bound on W2(ν0, π) provided by (4) is relevant for understanding the order of magnitude of W2(ν0, π), it has limited applicability since the distance ‖θ^(0) − θ̄‖₂ might be hard to evaluate. An attractive alternative to that bound is the following³:
$$W_2^2(\nu_0, \pi) = \int_{\mathbb{R}^p} \|\theta^{(0)} - \theta\|_2^2\; \pi(d\theta) \le \frac{2}{m} \int_{\mathbb{R}^p} \big(f(\theta^{(0)}) - f(\theta) - \nabla f(\theta)^\top (\theta^{(0)} - \theta)\big)\, \pi(d\theta) = \frac{2}{m}\Big(f(\theta^{(0)}) - \int_{\mathbb{R}^p} f(\theta)\, \pi(d\theta) + p\Big).$$
If f is lower bounded by some known constant, for instance if f ≥ 0, the last inequality provides the computable upper bound $W_2^2(\nu_0, \pi) \le \frac{2}{m}\big(f(\theta^{(0)}) + p\big)$.
3. Relation with optimization
We have already mentioned that the LMC algorithm is very close to the gradient descent algorithm
for computing the minimum θ ∗ of the function f . However, when we compare the guarantees
of Theorem 1 with those available for the optimization problem, we remark the following striking
difference. The approximate computation of θ ∗ requires a number of steps of the order of log(1/ε)
to reach the precision ε, whereas, for reaching the same precision in sampling from π, the LMC
algorithm needs a number of iterations proportional to (p/ε2 ) log(p/ε). The goal of this section
is to explain that this, at first sight very disappointing behavior of the LMC algorithm is, in fact,
continuously connected to the exponential convergence of the gradient descent.
The main ingredient for the explanation is that the function f(θ) and the function f_τ(θ) = f(θ)/τ have the same point of minimum θ*, whatever the real number τ > 0. In addition, if we define the density function π_τ(θ) ∝ exp(−f_τ(θ)), then the average value
$$\bar{\theta}_\tau = \int_{\mathbb{R}^p} \theta\, \pi_\tau(\theta)\, d\theta$$
tends to the minimum point θ* when τ goes to zero. Furthermore, the distribution π_τ(dθ) tends to
Therefore, on the one hand, we can apply to πτ claim (a) of Theorem 1, which tells us that if we
choose h = 1/Mτ = τ /M , then
$$W_2(\nu_K, \pi_\tau) \le \Big(1 - \frac{m}{M}\Big)^{K} W_2(\delta_{\theta^{(0)}}, \pi_\tau) + \frac{2M}{m}\Big(\frac{p\tau}{M}\Big)^{1/2}. \tag{7}$$
On the other hand, the LMC algorithm with the step-size h = τ /M applied to fτ reads as
$$\vartheta^{(k+1,h)} = \vartheta^{(k,h)} - \frac{1}{M}\,\nabla f(\vartheta^{(k,h)}) + \sqrt{\frac{2\tau}{M}}\; \xi^{(k+1)}; \qquad k = 0, 1, 2, \ldots \tag{8}$$
When the parameter τ goes to zero, the LMC sequence (8) tends to the gradient descent sequence
θ (k) . Therefore, the limiting case of (7) corresponding to τ → 0 writes as
$$\|\theta^{(K)} - \theta^*\|_2 \le \Big(1 - \frac{m}{M}\Big)^{K}\, \|\theta^{(0)} - \theta^*\|_2,$$
which is a well-known result in Optimization. This clearly shows that Theorem 1 is a natural
extension of the results of convergence from optimization to sampling.
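The continuous passage from sampling to optimization described above is easy to visualize numerically. The sketch below (our own illustration) runs the iterations (8) for a temperature parameter τ; setting τ = 0 reproduces the gradient descent iterates exactly.

import numpy as np

def tempered_lmc(grad_f, theta0, M, tau, K, rng=None):
    """Iterations (8) for f_tau = f / tau with step h = tau / M:
    theta <- theta - (1/M) * grad_f(theta) + sqrt(2 * tau / M) * xi.
    With tau = 0 this is gradient descent with step 1/M."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    for _ in range(K):
        noise = np.sqrt(2.0 * tau / M) * rng.standard_normal(theta.shape)
        theta = theta - grad_f(theta) / M + noise
    return theta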
3. The second line follows from strong convexity whereas the third line is a consequence of the two identities $\int_{\mathbb{R}^p} \nabla f(\theta)\, \pi(d\theta) = 0$ and $\int_{\mathbb{R}^p} \theta^\top \nabla f(\theta)\, \pi(d\theta) = p$. These identities follow from the fundamental theorem of calculus and the integration by parts formula, respectively.
4. Guarantees for the noisy gradient version
In some situations, the precise evaluation of the gradient ∇f (θ) is computationally expensive or
practically impossible, but it is possible to obtain noisy evaluations of ∇f at any point. This is the
setting considered in the present section. More precisely, we assume that at any point ϑ(k,h) ∈ Rp
of the LMC algorithm, we can observe the value
Y (k,h) = ∇f (ϑ(k,h) ) + σ ζ (k) ,
where {ζ (k) : k = 0, 1, . . .} is a sequence of independent zero mean random vectors such that
E[kζ (k) k22 ] ≤ p and σ > 0 is a deterministic noise level. Furthermore, the noise vector ζ (k) is
independent of the past states ϑ(1,h) , . . . , ϑ(k,h) . The noisy LMC (nLMC) algorithm is then defined
as
$$\vartheta^{(k+1,h)} = \vartheta^{(k,h)} - h\, Y^{(k,h)} + \sqrt{2h}\; \xi^{(k+1)}; \qquad k = 0, 1, 2, \ldots \tag{9}$$
where h > 0 and ξ (k+1) are as in (2). The next theorem extends the guarantees of Theorem 1 to the
noisy-gradient setting and to the nLMC algorithm.
Theorem 3 Let ϑ^(K,h) be the K-th iterate of the nLMC algorithm (9) and νK be its distribution. If the function f satisfies condition (1) and h ≤ 2/M then the following claims hold:
(a) If h ≤ 2/(m+M) then
$$W_2(\nu_K, \pi) \le \Big(1 - \frac{mh}{2}\Big)^{K} W_2(\nu_0, \pi) + \Big(\frac{2hp}{m}\Big)^{1/2} \Big\{\sigma^2 + \frac{3.3\, M^2}{m}\Big\}^{1/2}. \tag{10}$$
(b) If h ≥ 2/(m+M) then
$$W_2(\nu_K, \pi) \le \Big(\frac{Mh}{2}\Big)^{K} W_2(\nu_0, \pi) + \Big(\frac{2h^2 p}{2 - Mh}\Big)^{1/2} \Big\{\sigma^2 + \frac{6.6\, M}{2 - Mh}\Big\}^{1/2}.$$
To understand the potential scope of applicability of this result, let us consider a typical statistical problem in which f (θ) is the negative log-likelihood of n independent random variables
X_1, . . . , X_n. Then, if ℓ(θ, x) is the log-likelihood of one variable, we have
$$f(\theta) = \sum_{i=1}^{n} \ell(\theta, X_i).$$
In such a situation, if the Fisher information is not degenerate, both m and M are proportional to
the sample size n. When the gradient of `(θ, Xi ) with respect to parameter θ is hard to compute,
one can replace the evaluation of ∇f(ϑ^(k,h)) at each step k by that of Y_k = n∇_θ ℓ(ϑ^(k,h), X_k).
Under suitable assumptions, this random vector satisfies the conditions of Theorem 3 with a σ 2
proportional to n. Therefore, if we analyze the expression between curly brackets in (10), we see
that the additional term, σ 2 , due to the subsampling is of the same order of magnitude as the term
3.3M 2 /m. Thus, using the subsampled gradient in the LMC algorithm does not cause a significant
deterioration of the precision while reducing considerably the computational burden.
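In the statistical setting just described, the noisy LMC update (9) with the subsampled gradient Y_k = n∇_θℓ(ϑ^(k,h), X_k) takes the following form. This is a minimal sketch under our own conventions (grad_l is assumed to return the gradient of ℓ(·, x) for a single observation x); it is meant only to illustrate the scheme.

import numpy as np

def nlmc_subsampled(grad_l, X, theta0, h, K, rng=None):
    """Noisy LMC (9): the full gradient is replaced at each step by
    Y = n * grad_l(theta, X_k) for one randomly chosen observation X_k."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    theta = np.array(theta0, dtype=float)
    for _ in range(K):
        k = rng.integers(n)
        Y = n * grad_l(theta, X[k])                    # noisy gradient evaluation Y^{(k,h)}
        xi = rng.standard_normal(theta.shape)
        theta = theta - h * Y + np.sqrt(2.0 * h) * xi
    return theta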
5. Discussion and outlook
We have established simple guarantees for the convergence of the Langevin Monte Carlo algorithm
under the Wasserstein metric. These guarantees are valid under strong convexity and Lipschitzgradient assumptions on the log-density function, for a step-size smaller than 2/M , where M is
the constant in the Lipschitz condition. These guarantees are sharper than previously established
analogous results and in perfect agreement with the analogous results in Optimization. Furthermore,
we have shown that similar results can be obtained in the case where only noisy evaluations of the
gradient are possible.
There are a number of interesting directions in which this work can be extended. One relevant
and closely related problem is the approximate computation of the volume of a convex body, or,
the problem of sampling from the uniform distribution on a convex body. This problem has been
analyzed by other Monte Carlo methods such as “Hit and Run” in a series of papers by Lovász
and Vempala (2006b,a), see also the more recent paper (Bubeck et al., 2015). Numerical experiments reported in (Bubeck et al., 2015) suggest that the LMC algorithm might perform better in
practice than “Hit and Run”. It would be interesting to have a theoretical result corroborating this
observation.
Other interesting avenues for future research include the possible adaptation of the Nesterov
acceleration to the problem of sampling, extensions to second-order methods as well as the alleviation of the strong-convexity assumptions. We also plan to investigate in more depth the applications
in high-dimensional statistics (see, for instance, Dalalyan and Tsybakov (2012)). Some results in
these directions are already obtained in (Dalalyan, 2014; Durmus and Moulines, 2016; Durmus
et al., 2016). It is a stimulating question whether we can combine ideas of the present work and the
aforementioned earlier results to get improved guarantees.
6. Proofs
The first part of the proofs of Theorem 1 and Theorem 3 is the same. We start this section by this
common part and then we proceed with the proofs of the two theorems separately.
Let W be a p-dimensional Brownian motion such that $W_{(k+1)h} - W_{kh} = \sqrt{h}\, \xi^{(k+1)}$. We define the stochastic process L so that L0 ∼ π and
$$L_t = L_0 - \int_0^t \nabla f(L_s)\, ds + \sqrt{2}\, W_t, \qquad \forall\, t > 0. \tag{11}$$
It is clear that this equation implies that
$$L_{(k+1)h} = L_{kh} - \int_{kh}^{(k+1)h} \nabla f(L_s)\, ds + \sqrt{2}\,\big(W_{(k+1)h} - W_{kh}\big) = L_{kh} - \int_{kh}^{(k+1)h} \nabla f(L_s)\, ds + \sqrt{2h}\; \xi^{(k+1)}.$$
Furthermore, {Lt : t ≥ 0} is a diffusion process having π as the stationary distribution. Since the
initial value L0 is drawn from π, we have Lt ∼ π for every t ≥ 0.
Let us denote $\Delta_k = L_{kh} - \vartheta^{(k,h)}$ and $I_k = (kh, (k+1)h]$. We have
$$\Delta_{k+1} = \Delta_k + h\, Y^{(k,h)} - \int_{I_k} \nabla f(L_t)\, dt = \Delta_k - h \underbrace{\big(\nabla f(\vartheta^{(k,h)} + \Delta_k) - \nabla f(\vartheta^{(k,h)})\big)}_{:= U_k} + \sigma h \zeta^{(k)} - \underbrace{\int_{I_k} \big(\nabla f(L_t) - \nabla f(L_{kh})\big)\, dt}_{:= V_k}.$$
In view of the triangle inequality, we get
$$\|\Delta_{k+1}\|_2 \le \|\Delta_k - h U_k + \sigma h \zeta^{(k)}\|_2 + \|V_k\|_2. \tag{12}$$
For the first norm in the right hand side, we can use the following inequalities:
$$\mathbb{E}[\|\Delta_k - h U_k + \sigma h \zeta^{(k)}\|_2^2] = \mathbb{E}[\|\Delta_k - h U_k\|_2^2] + \mathbb{E}[\|\sigma h \zeta^{(k)}\|_2^2] = \mathbb{E}[\|\Delta_k - h U_k\|_2^2] + \sigma^2 h^2 p. \tag{13}$$
We need now three technical lemmas the proofs of which are postponed to Section 6.3.
Lemma 1 Let us introduce the constant γ that equals |1 − mh| if h ≤ 2/(m+M) and |1 − Mh| if h ≥ 2/(m+M). (Since h ∈ (0, 2/M), this value γ satisfies 0 < γ < 1.) It holds that
$$\|\Delta_k - h U_k\|_2 \le \gamma\, \|\Delta_k\|_2. \tag{14}$$
Lemma 2 If the function f is continuously differentiable and the gradient of f is Lipschitz with constant M, then
$$\int_{\mathbb{R}^p} \|\nabla f(x)\|_2^2\, \pi(x)\, dx \le M p.$$
Lemma 3 If the function f has a Lipschitz-continuous gradient with the Lipschitz constant M, L is the Langevin diffusion (11) and $V(a) = \int_a^{a+h} \big(\nabla f(L_t) - \nabla f(L_a)\big)\, dt$ for some a ≥ 0, then
$$\mathbb{E}\big[\|V(a)\|_2^2\big]^{1/2} \le \Big(\tfrac{1}{3}\, h^4 M^3 p\Big)^{1/2} + (h^3 p)^{1/2} M.$$
This completes the common part of the proof. We present below the proofs of the theorems.
6.1. Proof of Theorem 1
Using (12) with σ = 0 and Lemma 1, we get
$$\|\Delta_{k+1}\|_2 \le \gamma\, \|\Delta_k\|_2 + \|V_k\|_2, \qquad \forall\, k \in \mathbb{N}.$$
In view of the Minkowski inequality and Lemma 3, this yields
$$\big(\mathbb{E}[\|\Delta_{k+1}\|_2^2]\big)^{1/2} \le \gamma\, \big(\mathbb{E}[\|\Delta_k\|_2^2]\big)^{1/2} + \big(\mathbb{E}[\|V_k\|_2^2]\big)^{1/2} \le \gamma\, \big(\mathbb{E}[\|\Delta_k\|_2^2]\big)^{1/2} + 1.82\, (h^3 M^2 p)^{1/2},$$
where we have used the fact that h ≤ 2/M. Using this inequality iteratively with k − 1, . . . , 0 instead of k, we get
$$\big(\mathbb{E}[\|\Delta_{k+1}\|_2^2]\big)^{1/2} \le \gamma^{k+1}\, \big(\mathbb{E}[\|\Delta_0\|_2^2]\big)^{1/2} + 1.82\, (h^3 M^2 p)^{1/2} \sum_{j=0}^{k} \gamma^j \le \gamma^{k+1}\, \big(\mathbb{E}[\|\Delta_0\|_2^2]\big)^{1/2} + 1.82\, (h^3 M^2 p)^{1/2}\, (1 - \gamma)^{-1}. \tag{15}$$
Since $\Delta_{k+1} = L_{(k+1)h} - \vartheta^{(k+1,h)}$ and $L_{(k+1)h} \sim \pi$, we readily get the inequality $W_2(\nu_{k+1}, \pi) \le \big(\mathbb{E}[\|\Delta_{k+1}\|_2^2]\big)^{1/2}$. In addition, one can choose L0 so that $W_2(\nu_0, \pi) = \big(\mathbb{E}[\|\Delta_0\|_2^2]\big)^{1/2}$. Using these relations and substituting γ by its expression in (15), we get the two claims of the theorem.
6.2. Proof of Theorem 3
Using (12), (13) and Lemma 1, we get (for every t > 0)
$$\mathbb{E}[\|\Delta_{k+1}\|_2^2] = \mathbb{E}[\|\Delta_k - h U_k + V_k\|_2^2] + \mathbb{E}[\|\sigma h \zeta^{(k)}\|_2^2] \le (1+t)\,\mathbb{E}[\|\Delta_k - h U_k\|_2^2] + (1+t^{-1})\,\mathbb{E}[\|V_k\|_2^2] + \sigma^2 h^2 p \le (1+t)\gamma^2\,\mathbb{E}[\|\Delta_k\|_2^2] + (1+t^{-1})\,\mathbb{E}[\|V_k\|_2^2] + \sigma^2 h^2 p.$$
Since h ≤ 2/M, Lemma 3 implies that
$$\mathbb{E}[\|\Delta_{k+1}\|_2^2] \le (1+t)\gamma^2\,\mathbb{E}[\|\Delta_k\|_2^2] + (1+t^{-1})(1.82)^2 h^3 M^2 p + \sigma^2 h^2 p$$
for every t > 0. Let us choose $t = \big(\tfrac{1+\gamma}{2\gamma}\big)^2 - 1$ so that $(1+t)\gamma^2 = \big(\tfrac{1+\gamma}{2}\big)^2$. By recursion, this leads to
$$W_2^2(\nu_{k+1}, \pi) \le \Big(\frac{1+\gamma}{2}\Big)^{2(k+1)} W_2^2(\nu_0, \pi) + \frac{2}{1-\gamma^2}\Big\{\sigma^2 h^2 p + (1+t^{-1})(1.82)^2 h^3 M^2 p\Big\}.$$
In the case h ≤ 2/(m+M), γ = 1 − mh and we get $\tfrac{1+\gamma}{2} = 1 - \tfrac{1}{2}mh$. Furthermore,
$$(1+t^{-1})\, h^3 M^2 p = \frac{(1+\gamma)^2\, h^3 M^2 p}{(1-\gamma)(1+3\gamma)} \le \frac{h^2 M^2 p}{m}.$$
This readily yields
$$W_2(\nu_{k+1}, \pi) \le \Big(1 - \frac{mh}{2}\Big)^{k+1} W_2(\nu_0, \pi) + \Big(\frac{2hp}{m}\Big)^{1/2} \Big\{\sigma^2 + \frac{3.3\, M^2}{m}\Big\}^{1/2}.$$
Similarly, in the case h ≥ 2/(m+M), γ = Mh − 1 and we get $\tfrac{1+\gamma}{2} = \tfrac{1}{2}Mh$. Furthermore,
$$(1+t^{-1})\, h^3 M^2 p = \frac{(1+\gamma)^2\, h^3 M^2 p}{(1-\gamma)(1+3\gamma)} \le \frac{h^3 M^2 p}{2 - Mh} \le \frac{2h^2 M p}{2 - Mh}.$$
This implies the inequality
$$W_2(\nu_{k+1}, \pi) \le \Big(\frac{Mh}{2}\Big)^{k+1} W_2(\nu_0, \pi) + \Big(\frac{2h^2 p}{2 - Mh}\Big)^{1/2} \Big\{\sigma^2 + \frac{6.6\, M}{2 - Mh}\Big\}^{1/2},$$
which completes the proof.
6.3. Proofs of lemmas
Proof [Proof of Lemma 1] Since f is m-strongly convex, it satisfies the inequality
$$\Delta^\top\big(\nabla f(\vartheta + \Delta) - \nabla f(\vartheta)\big) \ge \frac{mM}{m+M}\,\|\Delta\|_2^2 + \frac{1}{m+M}\,\|\nabla f(\vartheta + \Delta) - \nabla f(\vartheta)\|_2^2,$$
for all ∆, ϑ ∈ R^p. Therefore, simple algebra yields
$$\|\Delta_k - h U_k\|_2^2 = \|\Delta_k\|_2^2 - 2h\,\Delta_k^\top U_k + h^2 \|U_k\|_2^2 = \|\Delta_k\|_2^2 - 2h\,\Delta_k^\top\big(\nabla f(\vartheta^{(k,h)} + \Delta_k) - \nabla f(\vartheta^{(k,h)})\big) + h^2 \|U_k\|_2^2 \le \|\Delta_k\|_2^2 - \frac{2hmM}{m+M}\,\|\Delta_k\|_2^2 - \frac{2h}{m+M}\,\|U_k\|_2^2 + h^2 \|U_k\|_2^2 = \Big(1 - \frac{2hmM}{m+M}\Big)\|\Delta_k\|_2^2 + h\Big(h - \frac{2}{m+M}\Big)\|U_k\|_2^2. \tag{16}$$
Note that, thanks to the strong convexity of f, the inequality $\|U_k\|_2 = \|\nabla f(\vartheta^{(k,h)} + \Delta_k) - \nabla f(\vartheta^{(k,h)})\|_2 \ge m\,\|\Delta_k\|_2$ is true. If h ≤ 2/(m+M), this inequality can be combined with (16) to obtain
$$\|\Delta_k - h U_k\|_2^2 \le (1 - hm)^2\, \|\Delta_k\|_2^2.$$
Similarly, when h ≥ 2/(m+M), we can use the Lipschitz property of ∇f to infer that $\|U_k\|_2 \le M\,\|\Delta_k\|_2$. Combining with (16), this yields
$$\|\Delta_k - h U_k\|_2^2 \le (hM - 1)^2\, \|\Delta_k\|_2^2, \qquad \text{if } h \ge 2/(m+M).$$
Thus, we have checked that (14) is true for every h ∈ (0, 2/M).
Proof [Proof of Lemma 2] To simplify notations, we prove the lemma for p = 1. The function x ↦ f′(x) being Lipschitz continuous is almost surely differentiable. Furthermore, it is clear that |f″(x)| ≤ M for every x for which this second derivative exists. The result of (Rudin, 1987, Theorem 7.20) implies that
$$f'(x) - f'(0) = \int_0^x f''(y)\, dy.$$
Therefore, using f′(x) π(x) = −π′(x), we get
$$\int_{\mathbb{R}} f'(x)^2\, \pi(x)\, dx = f'(0) \int_{\mathbb{R}} f'(x)\, \pi(x)\, dx + \int_{\mathbb{R}} \Big(\int_0^x f''(y)\, dy\Big) f'(x)\, \pi(x)\, dx = -f'(0) \int_{\mathbb{R}} \pi'(x)\, dx - \int_{\mathbb{R}} \Big(\int_0^x f''(y)\, dy\Big)\, \pi'(x)\, dx = -\int_0^{\infty}\!\!\int_0^x f''(y)\, \pi'(x)\, dy\, dx + \int_{-\infty}^{0}\!\!\int_x^0 f''(y)\, \pi'(x)\, dy\, dx.$$
In view of Fubini's theorem, we arrive at
$$\int_{\mathbb{R}} f'(x)^2\, \pi(x)\, dx = \int_0^{\infty} f''(y)\, \pi(y)\, dy + \int_{-\infty}^{0} f''(y)\, \pi(y)\, dy \le M.$$
This completes the proof.
Proof [Proof of Lemma 3] Since the process L is stationary, V(a) has the same distribution as V(0). For this reason, it suffices to prove the claim of the lemma for a = 0 only. Using the Lipschitz continuity of ∇f, we get
$$\mathbb{E}[\|V(0)\|_2^2] = \mathbb{E}\Big[\Big\|\int_0^h \big(\nabla f(L_t) - \nabla f(L_0)\big)\, dt\Big\|_2^2\Big] \le h \int_0^h \mathbb{E}\big[\|\nabla f(L_t) - \nabla f(L_0)\|_2^2\big]\, dt \le h M^2 \int_0^h \mathbb{E}\big[\|L_t - L_0\|_2^2\big]\, dt.$$
Combining this inequality with the stationarity of L_t, we arrive at
$$\mathbb{E}[\|V(0)\|_2^2]^{1/2} \le \Big(h M^2 \int_0^h \mathbb{E}\Big[\Big\|-\int_0^t \nabla f(L_s)\, ds + \sqrt{2}\, W_t\Big\|_2^2\Big]\, dt\Big)^{1/2} \le \Big(h M^2 \int_0^h \mathbb{E}\Big[\Big\|\int_0^t \nabla f(L_s)\, ds\Big\|_2^2\Big]\, dt\Big)^{1/2} + \Big(2hpM^2 \int_0^h t\, dt\Big)^{1/2} \le \Big(h M^2\, \mathbb{E}\big[\|\nabla f(L_0)\|_2^2\big] \int_0^h t^2\, dt\Big)^{1/2} + \Big(2hpM^2 \int_0^h t\, dt\Big)^{1/2} = \Big(\tfrac{1}{3}\, h^4 M^2\, \mathbb{E}\big[\|\nabla f(L_0)\|_2^2\big]\Big)^{1/2} + \big(h^3 M^2 p\big)^{1/2}.$$
To complete the proof, it suffices to apply Lemma 2.
Acknowledgments
The work of the author was partially supported by the grant Investissements d'Avenir (ANR-11-IDEX-0003/Labex Ecodec/ANR-11-LABX-0047). The author would like to thank Nicolas Brosse, who suggested an improvement in Theorem 3.
References
Y. Atchadé, G. Fort, E. Moulines, and P. Priouret. Adaptive Markov chain Monte Carlo: theory
and methods. In Bayesian time series models, pages 32–51. Cambridge Univ. Press, Cambridge,
2011.
R. N. Bhattacharya. Criteria for recurrence and existence of invariant measures for multidimensional
diffusions. Ann. Probab., 6(4):541–553, 08 1978.
S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge,
2004.
S. Bubeck, R. Eldan, and J. Lehec. Sampling from a log-concave distribution with Projected
Langevin Monte Carlo. ArXiv e-prints, July 2015.
A. S. Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave
densities. ArXiv e-prints, December 2014.
A. S. Dalalyan and A. B. Tsybakov. Sparse regression learning by aggregation and Langevin Monte-Carlo. J. Comput. System Sci., 78(5):1423–1443, 2012.
A. Durmus and E. Moulines. High-dimensional Bayesian inference via the Unadjusted Langevin
Algorithm. ArXiv e-prints, May 2016.
Alain Durmus, Eric Moulines, and Marcelo Pereyra. Sampling from convex non continuously
differentiable functions, when Moreau meets Langevin. February 2016. URL https://hal.archives-ouvertes.fr/hal-01267115.
L. Lovász and S. Vempala. Hit-and-run from a corner. SIAM J. Comput., 35(4):985–1005 (electronic), 2006a.
L. Lovász and S. Vempala. Fast algorithms for logconcave functions: Sampling, rounding, integration and optimization. In 47th Annual IEEE Symposium on Foundations of Computer Science
(FOCS 2006), 21-24 October 2006, Berkeley, California, USA, Proceedings, pages 57–68, 2006b.
Walter Rudin. Real and complex analysis. McGraw-Hill Book Co., New York, third edition, 1987.
| 1 |
Model Accuracy and Runtime Tradeoff in
Distributed Deep Learning: A Systematic Study
Suyog Gupta¹·*, Wei Zhang¹·*, Fei Wang²
¹ IBM T. J. Watson Research Center, Yorktown Heights, NY. * Equal Contribution.
² Department of Healthcare Policy and Research, Weill Cornell Medical College, New York City, NY
arXiv:1509.04210v3 [stat.ML] 5 Dec 2016
Abstract—Deep learning with a large number of parameters
requires distributed training, where model accuracy and runtime
are two important factors to be considered. However, there has
been no systematic study of the tradeoff between these two factors
during the model training process. This paper presents Rudra, a
parameter server based distributed computing framework tuned
for training large-scale deep neural networks. Using variants of
the asynchronous stochastic gradient descent algorithm we study
the impact of synchronization protocol, stale gradient updates,
minibatch size, learning rates, and number of learners on runtime
performance and model accuracy. We introduce a new learning
rate modulation strategy to counter the effect of stale gradients
and propose a new synchronization protocol that can effectively
bound the staleness in gradients, improve runtime performance
and achieve good model accuracy. Our empirical investigation
reveals a principled approach for distributed training of neural
networks: the mini-batch size per learner should be reduced
as more learners are added to the system to preserve the model
accuracy. We validate this approach using commonly-used image
classification benchmarks: CIFAR10 and ImageNet.
I. INTRODUCTION
Deep neural network based models have achieved unparalleled accuracy in cognitive tasks such as speech recognition,
object detection, and natural language processing [18]. For
certain image classification benchmarks, deep neural networks have been touted to even surpass human-level performance [13, 11]. Such accomplishments are made possible by
the ability to perform fast, supervised training of complex
neural network architectures using large quantities of labeled
data. Training a deep neural network translates into solving a
non-convex optimization problem in a very high dimensional
space, and in the absence of a solid theoretical framework
to solve such problems, practitioners are forced to rely on
trial-and-error empirical observations to design heuristics that
help obtain a well-trained model [1]. Naturally, fast training of
deep neural network models can enable rapid evaluation of
different network architectures and facilitate a more thorough
hyper-parameter optimization for these models. Recent years
have seen a resurgence of interest in deploying large-scale
computing infrastructure designed specifically for training
deep neural networks. Some notable efforts in this direction
include distributed computing infrastructure using thousands
of CPU cores [3, 6], high-end graphics processors (GPUs)[16],
or a combination of CPUs and GPUs [4].
The large-scale deep learning problem can hence be viewed
as a confluence of elements from machine learning (ML) and
high-performance computing (HPC). Much of the work in the
ML community is focused on non-convex optimization, model
selection, and hyper-parameter tuning to improve the neural
network’s performance (measured as classification accuracy)
while working under the constraints of the computational
resources available in a single computing node (CPU with
or without GPU acceleration). From a HPC perspective, prior
work has addressed, to some extent, the problem of accelerating the neural network training by mapping the asynchronous version of mini-batch stochastic gradient descent
(SGD) algorithm onto multiple computing nodes. Contrary
to the popular belief that asynchrony necessarily improves
model accuracy, we find that adopting the approach of scale-out deep learning using asynchronous SGD gives rise to
complex interdependencies between the training algorithm’s
hyperparameters and the distributed implementation’s design
choices (synchronization protocol, number of learners), ultimately impacting the neural network’s accuracy and the overall
system’s runtime performance.
In this paper we present Rudra, a parameter server based
deep learning framework to study these interdependencies
and undertake an empirical evaluation with public image
classification benchmarks. Our key contributions are:
1) A systematic technique (vector clock) for quantifying the
staleness of gradient descent parameter updates.
2) An investigation of the impact of the interdependence
of training algorithm’s hyperparameters (mini-batch size,
learning rate (gradient descent step size)) and distributed
implementation’s parameters (gradient staleness, number
of learners) on the neural network’s classification accuracy and training time.
3) A new learning rate tuning strategy that reduces the effect
of stale parameter updates.
4) A new synchronization protocol to reduce network bandwidth overheads while achieving good classification accuracy and runtime performance.
5) An observation that to maintain a given level of model
accuracy, it is necessary to reduce the mini-batch size as
the number of learners is increased. This suggests a hard
limit on the amount of parallelism that can be exploited
in training a given model.
II. BACKGROUND
A neural network computes a parametric, non-linear transformation fθ : X ↦ Y, where θ represents a set of adjustable
parameters (or weights). In a supervised learning context
(such as image classification), X is the input image and Y
corresponds to the label assigned to the image. A deep neural
network organizes the parameters θ into multiple layers, each
of which consists of a linear transformation followed by a nonlinear function such as sigmoid, tanh, etc. In a feed-forward
deep neural network, the layers are arranged hierarchically
such that the output of the layer l − 1 feeds into the input
of layer l. The terminal layer generates the network’s output
Ŷ = fθ (X), corresponding to the input X.
A neural network training algorithm seeks to find a set
of parameters θ∗ that minimizes the discrepancy between Ŷ
and the ground truth Y . This is usually accomplished by
defining a differentiable cost function C(Ŷ , Y ) and iteratively
updating each of the model parameters using some variant of
the gradient descent algorithm:
$$E_m = \frac{1}{m} \sum_{s=1}^{m} C\big(\hat{Y}_s, Y_s\big), \tag{1a}$$
$$\nabla \theta^{(k)}(t) = \partial E_m / \partial \theta^{(k)}(t), \tag{1b}$$
$$\theta^{(k)}(t+1) = \theta^{(k)}(t) - \alpha(t)\, \nabla \theta^{(k)}(t) \tag{1c}$$
where θ(k) (t) represents the k th parameter at iteration t, α
is the step size (also known as the learning rate) and m is
the batch size. The batch gradient descent algorithm sets m
to be equal to the total number of training examples N . Due
to the large amount of training data, deep neural networks
are typically trained using the Stochastic Gradient Descent
(SGD), where the parameters are updated with a randomly
selected training example (Xs , Ys ). The performance of SGD
can be improved by computing the gradients using a mini-batch containing m = µ ≪ N training examples.
Deep neural networks are generally considered hard to
train [1, 10, 23] and the trained model’s generalization error
depends strongly on hyperparameters such as the initializations, learning rates, mini-batch size, network architecture,
etc. In addition, neural networks are prone to overfit the data.
Regularization methods (e.g., weight decay and dropout) [16]
applied during training have been shown to combat overfitting
and reduce the generalization error.
Scale-out deep learning: A typical implementation of distributed training of deep neural networks consists of a master (parameter server) that orchestrates the work among one
or more slaves (learners). Each learner does the following (a single-process sketch of the full loop is given right after the parameter-server functions below):
1) getMinibatch: Select randomly a mini-batch of examples from the training data.
2) pullWeights: Request the parameter server for the
current set of weights/parameters.
3) calcGradient: Compute gradients based on the training error for the current mini-batch (equation 1b).
4) pushGradient: Send the computed gradients to the
parameter server
The parameter server maintains a global view of the model
weights and performs the following functions:
1) sumGradients: Receive and accumulate the gradients
from the learners.
2) applyUpdate: Multiply the accumulated gradient by
the learning rate and update the weights (equation 1c)
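The interplay between the learner steps and the parameter-server functions listed above can be captured in a few lines. The following Python fragment is a single-process simulation of a hardsync training pass, written purely for illustration; grad_fn, the data array, and all other names are our own conventions and are not part of Rudra.

import numpy as np

def train_hardsync(grad_fn, data, theta0, alpha, mu, lam, epochs, rng=None):
    """Simulate lam learners: each draws a mini-batch of size mu (getMinibatch),
    reads the current weights (pullWeights), computes a gradient (calcGradient)
    and sends it (pushGradient); the server averages the lam gradients
    (sumGradients) and applies one update (applyUpdate)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    steps_per_epoch = max(1, len(data) // (mu * lam))
    for _ in range(epochs):
        for _ in range(steps_per_epoch):
            grads = []
            for _ in range(lam):
                batch = data[rng.choice(len(data), size=mu, replace=False)]
                grads.append(grad_fn(theta, batch))
            theta = theta - alpha * np.mean(grads, axis=0)
    return theta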
Learners exploit data parallelism by each maintaining a copy of the entire model, and training independently over a unique mini-batch. The model parallelism approach augments this framework by splitting the neural network model across multiple learners. With model parallelism, each learner trains only a portion of the network; edges that cross learner boundaries must be synchronized before gradients can be computed for the entire model.
Several different synchronization strategies are possible. The most commonly used one is the asynchronous protocol, in which the learners work completely independently of each other and the parameter server. Section III will discuss three different synchronization strategies, each associated with a unique tradeoff between model accuracy and runtime.
III. DESIGN AND IMPLEMENTATION
A. Terminology
Throughout the paper, we use the following definitions:
• Parameter Server: a server that holds the model weights.
[22] describes a typical parameter server using a distributed key-value store to synchronize state between
processes. The parameter server collects gradients from
learners and updates the weights accordingly.
• Learner: A computing process that can calculate weight
updates (gradients).
• µ: mini-batch size.
• α: learning rate.
• λ: number of learners.
• Epoch: a pass through the entire training dataset.
• Timestamp: we use a scalar clock [20] to represent
weights timestamp tsi , starting from i = 0. Each weight
update increments the timestamp by 1. The timestamp of
a gradient is the same as the timestamp of the weight
used to compute the gradient.
• σ: staleness of the gradient. A gradient with timestamp
tsi is pushed to the parameter server with current weight
timestamp tsj , where tsj ≥ tsi . We define the staleness
of this gradient σ as j − i.
• ⟨σ⟩: average staleness of gradients. The timestamps of the set of n gradients that trigger the advancement of the weights timestamp from ts_{i−1} to ts_i form a vector clock [17] ⟨ts_{i1}, ts_{i2}, ..., ts_{in}⟩, where max{i1, i2, ..., in} < i. The average staleness of gradients ⟨σ⟩ is defined as:
$$\langle\sigma\rangle = (i - 1) - \mathrm{mean}(i_1, i_2, \ldots, i_n)$$
• Hardsync protocol: To advance the weights timestamp from ts_i to ts_{i+1}, each learner calculates exactly one mini-batch and sends its gradient ∇θ_l to the parameter server. The parameter server averages the gradients and updates the weights according to Equation (3), then broadcasts the new weights to all learners. Staleness in the hardsync protocol is always zero.
$$\nabla \theta^{(k)}(i) = \frac{1}{\lambda} \sum_{l=1}^{\lambda} \nabla \theta_l^{(k)} \tag{2}$$
$$\theta^{(k)}(i+1) = \theta^{(k)}(i) - \alpha\, \nabla \theta^{(k)}(i) \tag{3}$$
[Fig. 1: Rudra-base architecture. Fig. 2: Rudra-adv architecture, (a) Rudra-adv and (b) Rudra-adv∗. Legend: W = model weights; ΔW = gradient; @MB = per mini-batch; @E = per epoch.]
• Async protocol: Each learner calculates the gradients and asynchronously pushes/pulls the gradients/weights to/from the parameter server. The Async weight update rule is given by:
$$\nabla \theta^{(k)}(i) = \nabla \theta_l^{(k)}, \quad L_l \in \{L_1, \ldots, L_\lambda\}; \qquad \theta^{(k)}(i+1) = \theta^{(k)}(i) - \alpha\, \nabla \theta^{(k)}(i) \tag{4}$$
Gradient staleness may be hard to control due to the
asynchrony in the system. [6] describe Downpour SGD,
an implementation of the Async protocol for a commodity
scale-out system in which the staleness can be as large
as hundreds.
• n-softsync protocol: Each learner pulls the weights from the parameter server, calculates the gradients and pushes the gradients to the parameter server. The parameter server updates the weights after collecting at least c = ⌊λ/n⌋ gradients. The splitting parameter n can vary from 1 to λ. The n-softsync weight update rule is given by (a code sketch follows this protocol list):
$$c = \lfloor \lambda / n \rfloor, \qquad \nabla \theta^{(k)}(i) = \frac{1}{c} \sum_{l=1}^{c} \nabla \theta_l^{(k)}, \quad L_l \in \{L_1, \ldots, L_\lambda\}, \qquad \theta^{(k)}(i+1) = \theta^{(k)}(i) - \alpha\, \nabla \theta^{(k)}(i) \tag{5}$$
In Section V-A we will show that in a homogeneous
cluster where each learner proceeds at roughly the same
speed, the staleness of the model can be empirically
bounded at 2n. Note that when n is equal to λ, the
weight update rule at the parameter server is exactly
the same as in Async protocol.
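A compact way to see how the protocols differ is to write the parameter server's update loop down. The class below is a minimal single-process sketch of the n-softsync rule (5), including the timestamp bookkeeping used to measure staleness and the staleness-dependent learning rate α0/n described later in the paper; the names and the in-memory queue are our own simplifications, not the MPI-based Rudra implementation.

import numpy as np
from collections import deque

class SoftsyncServer:
    """n-softsync parameter server: accumulate c = floor(lam / n) gradients,
    average them, apply one update, and advance the weights timestamp by one."""
    def __init__(self, theta0, alpha0, lam, n):
        self.theta = np.array(theta0, dtype=float)
        self.ts = 0                               # weights timestamp
        self.c = max(1, lam // n)
        self.alpha = alpha0 / n                   # learning rate divided by <sigma> = n
        self.pending = deque()
        self.staleness = []                       # staleness of every received gradient

    def pull_weights(self):
        return self.theta.copy(), self.ts

    def push_gradient(self, grad, grad_ts):
        self.staleness.append(self.ts - grad_ts)  # sigma = current ts minus gradient's ts
        self.pending.append(grad)
        if len(self.pending) >= self.c:
            batch = [self.pending.popleft() for _ in range(self.c)]
            self.theta -= self.alpha * np.mean(batch, axis=0)
            self.ts += 1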
B. Rudra-base System Architecture
Figure 1 illustrates the parameter server design that we use
to study the interplay of hyperparameter tuning and system
scale-out factor. This system implements both hardsync and nsoftsync protocols. The arrows between each entity represent
a (group of) MPI message(s), except the communication
between Learner and Data Server, which is achieved by a
global file system. We describe each entity’s role and its
implementation below.
Learner is a single-process multithreaded SGD solver. Before
training each mini-batch, a learner pulls the weights and
the corresponding timestamp from the parameter server. A
learner reduces the pullWeights traffic by first inquiring
the timestamp from the parameter server: if the timestamp is
as old as the local weights’, then this learner does not pull
the weights. After training the mini-batch, learner sends the
gradients along with gradients’ timestamp to parameter server.
The size of pull and push messages is the same as the model
size plus the size of scalar timestamp equal to one.
Data Server is hosted on IBM GPFS, a global file system.
Each learner has an I/O thread, which prefetches the minibatch via random sampling prior to training. Prefetching is
completely overlapped with the computation.
Parameter Server is a multithreaded process that accumulates
gradients from each learner and applies update rules according
to Equations (3–5). In this study, we implemented hardsync
protocol and n-softsync protocol. Learning rate is configured
differently in either protocol. Inphardsync protocol, the learning
rate is multiplied by a factor λµ/B, where B is the batch
size of the reference model. In the n-softsync protocol, the
learning rate is multiplied by the reciprocal of staleness. We
demonstrate in Section V-A that this treatment of learning rate
in n-softsync can significantly improve the model accuracy.
Parameter server records the vector clock of each weight
update to keep track of the average staleness. When a
specified number of epochs are trained, parameter server shuts
down each learner.
Statistics Server is a multithreaded process that receives the
training error from each learner and receives the model from
the parameter server at the end of each epoch and tests the
model. It monitors the model training quality.
This architecture is non-blocking everywhere except for
pushing up gradients and pushing down weights, which are
blocking MPI calls (e.g. MPI_Send). Parameter server handles each incoming message one by one (the message handling
itself is multithreaded). In this way, we can precisely control
how each learner’s gradients are received and handled by the
parameter server. The purpose of Rudra-base is to control the
noise of the system, so that we can study the interplay of scale-out factor and the hyperparameter selection. For a moderately-sized dataset like CIFAR-10, Rudra-base shows good scale-out
factor (see Section V-B).
C. Rudra-adv and Rudra-adv∗ System Architecture
To achieve high classification accuracy, the required model
size may be quite large (e.g. hundreds of MBs). In many cases,
to achieve best possible model accuracy, mini-batch size µ
must be small, as we will demonstrate in Section V-B. In order
to meet these requirements with acceptable performance, we
implemented Rudra-adv and Rudra-adv∗ .
Rudra-adv system architecture. Rudra-base clearly is not
a scalable solution when the model gets large. Under ideal
circumstances (see Section IV-A for our experimental hardware system specification), a single learner pushing a model
of 300 MB (size of a typical deep neural network, see section IV-B) would take more than 10 ms to transfer this data. If
16 tasks are sending 300 MB to the same receiver and there is
link contention, it would take over a second for the messages
to be delivered.
To alleviate the network traffic to parameter server, we
build a parameter server group that forms a tree. We colocate each tree leaf node on the same node as the learners
for which it is responsible. Each node in the parameter server
group is responsible for averaging the gradients sent from its
learners and relaying the averaged gradient to its parent. The
root node in the parameter server group is responsible for
applying weight update and broadcasting the updated weights.
Each non-leaf node pulls the weights from its parent and
responds to its children’s weight pulling requests. Rudra-adv
can significantly improve performance compared to Rudra-base and manages to scale out to large models and small µ, while maintaining control of the gradients' staleness. Figure 2(a)
illustrates the system architecture for Rudra-adv. Red boxes
represent the parameter server group, in which the gradients
are pushed and aggregated upwards. Green boxes represent
learners, each learner pushes the gradient to its parameter
server parent and receives weights from its parameter server
parent. The key difference between Rudra-adv and a sharded
parameter server design (e.g., Distbelief [6] and Adam [3])
is that the weights maintained in Rudra-adv have the same
timestamps whereas shared parameter servers maintain the
weights with different timestamps. Having consistent weights
makes the analysis of hyperparameter/scale-out interplay much
more tractable.
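The gradient flow up the parameter-server tree can be sketched as a simple recursion. The fragment below is our own illustration (a dict-based tree with learners at the leaves); the real Rudra-adv nodes exchange these messages over MPI and only the root applies the weight update.

import numpy as np

def aggregate_up(node):
    """Each parameter-server node averages the gradients pushed by its children
    and relays the averaged gradient to its parent; leaves hold learner gradients."""
    if not node["children"]:
        return node["gradient"]
    child_grads = [aggregate_up(child) for child in node["children"]]
    return np.mean(child_grads, axis=0)

# Two leaf servers: one with a single learner, one relaying the average of two learners.
tree = {"children": [
    {"children": [], "gradient": np.array([1.0, 0.0])},
    {"children": [
        {"children": [], "gradient": np.array([0.0, 1.0])},
        {"children": [], "gradient": np.array([0.0, 3.0])},
    ], "gradient": None},
], "gradient": None}
print(aggregate_up(tree))   # [0.5, 1.0]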
Rudra-adv∗ system architecture. We built Rudra-adv∗ to
further improve the runtime performance in two ways:
Broadcast weights within learners. To further reduce the
traffic to the parameter server group, we form a tree within
all learners and broadcast the weights down this tree. In this
way the network links to/from learners are also utilized.
Asynchronous pushGradient and pullWeights. Ideally, one would use MPI non-blocking send calls to asynchronously send gradients and weights. However, depending on the MPI implementation, it is difficult to guarantee
if the non-blocking send calls make progress in the background [9]. Therefore we open additional communication
threads and use MPI blocking send calls in the threads. Each
learner process runs two additional communication threads:
the pullWeights thread and pushGradient thread. In
this manner, computing can continue without waiting for the
communication. Note that since we need to control µ (the
smaller µ is, the better the model converges, as we demonstrate in Section V-B), we must guarantee that the learner
pushes each calculated gradient to the server. Alternatively,
one could locally accrue gradients and send the sum, as
in [6], however that will effectively increase µ. For this
Implementation    Communication overlap (%)
Rudra-base        11.52
Rudra-adv         56.75
Rudra-adv∗        99.56
TABLE I: Communication overlap measured in Rudra-base, Rudra-adv, and Rudra-adv∗ for an adversarial scenario, where the mini-batch size is the smallest possible for 4-way multi-threaded learners, the model size is 300 MB, and there are about 60 learners.
reason, the pushGradient thread cannot start sending the
current gradient before the previous one has been delivered.
As demonstrated in Table I, as long as we can optimize
the use of network links, this constraint has no bearing on
the runtime performance, even when µ is extremely small.
In contrast, pullWeights thread has no such constraint
– we maintain a computation buffer and a communication
buffer for pullWeights thread, and the communication
always happens in the background. To use the newly received
weights only requires a pointer swap. Figure 2(b) illustrates the
system architecture for Rudra-adv∗ . Different from Rudra-adv,
each learner continuously receives weights from the weights
downcast tree, which consists of the top level parameter server
node and all the learners.
We measure the communication overlap by calculating the
ratio between computation time and the sum of computation
and communication time. Table I records the the communication overlap for Rudra-base, Rudra-adv, and Rudra-adv∗ in
an adversarial scenario. Rudra-adv∗ can almost completely
overlap computation with communication. Rudra-adv∗ can
scale out to very large model size and work with smallest
possible size of mini-batch. In Section V-E, we demonstrate
Rudra-adv∗ ’s effectiveness in improving runtime performance
while achieving good model accuracy.
IV. METHODOLOGY
A. Hardware and software environment
We deploy the Rudra distributed deep learning framework
on a P775 supercomputer. Each node of this system contains
four eight-core 3.84 GHz POWER7 processors, one optical
connect controller chip and 128 GB of memory. A single
node has a theoretical floating point peak performance of
982 Gflop/s, memory bandwidth of 512 GB/s and bi-directional
interconnect bandwidth of 192 GB/s.
The cluster operating system is Red Hat Enterprise Linux
6.4. To compile and run Rudra we used the IBM xlC compiler
version 12.1 with the -O3 -q64 -qsmp options, ESSL for
BLAS subroutines, and IBM MPI (IBM Parallel Operating
Environment 1.2).
B. Benchmark datasets and neural network architectures
To evaluate Rudra’s scale-out performance we employ
two commonly used image classification benchmark datasets:
CIFAR10 [15] and ImageNet [8]. The CIFAR10 dataset
comprises a total of 60,000 RGB images of size 32 ×
32 pixels partitioned into the training set (50,000 images) and
the test set (10,000 images). Each image belongs to one of the
10 classes, with 6000 images per class. For this dataset, we
construct a deep convolutional neural network (CNN) with 3
convolutional layers each followed by a subsampling/pooling
layer. The output of the 3rd pooling layer connects, via a
fully-connected layer, to a 10-way softmax output layer that
generates a probability distribution over the 10 output classes.
This neural network architecture closely mimics the CIFAR10
model (cifar10_full.prototxt) available as a part of the open-source Caffe deep learning package [14]. The total number of
trainable parameters in this network is ∼90 K, resulting
in the model size of ∼350 kB when using 32-bit floating
point data representation. The neural network is trained using
momentum-accelerated mini-batch SGD with a batch size of
128 and momentum set to 0.9. As a data preprocessing step,
the per-pixel mean is computed over the entire training dataset
and subtracted from the input to the neural network. The
training is performed for 140 epochs and results in a model
that achieves 17.9% misclassification error rate on the test
dataset. The base learning rate α0 is set to 0.001 are reduced
by a factor of 10 after the 120th and 130th epoch. This learning
rate schedule proves to be quite essential in obtaining the low
test error of 17.9%.
Our second benchmark dataset is a collection of natural
images used as a part of the 2012 edition of the ImageNet
Large Scale Visual Recognition Challenge (ILSVRC 2012).
The training set is a subset of the hand-labeled ImageNet
database and contains 1.2 million images. The validation
dataset has 50,000 images. Each image maps to one of
the 1000 non-overlapping object categories. The images are
converted to a fixed resolution of 256×256 to be used as input to a deep convolutional neural network. For this dataset, we
consider the neural network architecture introduced in [16]
consisting of 5 convolutional layers and 3 fully-connected
layers. The last layer outputs the probability distribution over
the 1000 object categories. In all, the neural network has
∼72 million trainable parameters and the total model size is
289 MB. The network is trained using momentum-accelerated
SGD with a batch size of 256 and momentum set to 0.9.
Similar to the CIFAR10 benchmark, per-pixel mean computed
over the entire training dataset is subtracted from the input
image feeding into the neural network. To prevent overfitting,
a weight regularization penalty of 0.0005 is applied to all the
layers in the network and a dropout of 50% is applied to the
1st and 2nd fully-connected layers. The initial learning rate α0
is set equal to 0.01 and reduced by a factor of 10 after the
15th and 25th epoch. Training for 30 epochs results in a top-1
error of 43.95% and top-5 error¹ of 20.55% on the validation
set.
¹ The top-5 error corresponds to the fraction of samples where the correct label does not appear in the top-5 labels considered most probable by the model.
[Fig. 3: Average staleness ⟨σ⟩ of the gradients as a function of the weight update step at the parameter server when using (a) 1-softsync, 2-softsync and (b) λ-softsync protocol. Inset in (b) shows the distribution of the gradient staleness values for λ-softsync protocol. Number of learners λ is set to 30.]
V. EVALUATION
In this section we present results of evaluation of our scale-out deep learning training implementation. For an initial design
space exploration, we use the CIFAR10 dataset and the Rudra-base system architecture. Subsequently we extend our findings
to the ImageNet dataset using the Rudra-adv and Rudra-adv∗
system architectures.
A. Stale gradients
In the hardsync protocol introduced in section III-A, the
transition from θ(i) to θ(i + 1) involves aggregating the gradients calculated with θ(i). As a result, each of the gradients
∇θl carries with it a staleness σ equal to 0. However, departing
from the hardsync protocol towards either the n-softsync or
the Async protocol inevitably adds staleness to the gradients,
as a subset of the learners contribute gradients calculated using
weights with timestamp earlier than the current timestamp of
the weights at the parameter server.
To measure the effect of gradient staleness when using the
n-softsync protocol, we use the CIFAR10 dataset and train
the neural network described in section IV-B using λ = 30
learners. For the 1-softsync protocol, the parameter server
updates the current set of weights when it has received a total
of 30 gradients from the learners. Similarly, the 2-softsync
protocol forces the parameter server to accumulate λ/2 = 15
gradient contributions from the learners before updating the
weights. As shown in Figure 3(a) the average staleness hσi for
the 1-softsync and 2-softsync protocols remains close to 1 and
2, respectively. In the 1-softsync protocol, the staleness σLl for
the gradients computed by the learner Ll takes values 0, 1, or
2, whereas σLl ∈ {0, 1, 2, 3, 4} for the 2-softsync protocol.
Figure 3(b) shows the gradient staleness for the n-softsync
protocol where n = λ = 30. In this case, the parameter server
updates the weights after receiving a gradient from any of the
learners. A large fraction of the gradients have staleness close
to 30, and only with a very low probability (< 0.0001) does σ
exceed 2n = 60. These measurements show that, in general,
σLl ∈ {0, 1, . . . , 2n} and hσi = n for our implementation of
the n-softsync protocol.
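The average staleness measured here follows directly from the vector-clock definition in Section III-A. The helper below is a small sketch of that computation (our own illustration, not Rudra code).

def average_staleness(contributing_ts, new_ts):
    """<sigma> for the weight update that advances the timestamp to new_ts,
    given the vector clock of timestamps of the contributing gradients:
    <sigma> = (new_ts - 1) - mean(ts_i1, ..., ts_in)."""
    return (new_ts - 1) - sum(contributing_ts) / len(contributing_ts)

# Hardsync: every gradient was computed with the previous weights -> staleness 0.
print(average_staleness([4, 4, 4], new_ts=5))    # 0.0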
Modifying the learning rate for stale gradients: In our experiments with the n-softsync protocol we found it beneficial,
and at times necessary, to modulate the learning rate α to take
into account the staleness of the gradients. For the n-softsync
protocol, we set the learning rate as:
$$\alpha = \alpha_0 / \langle\sigma\rangle = \alpha_0 / n \tag{6}$$
where α0 is the learning rate used for the baseline (control)
experiment: µ = 128, λ = 1. Figure 4 shows a set of
Fig. 4. Effect of learning rate modulation strategy: Dividing the learning
rate by the average staleness aids in better convergence and achieves lower
test error when using the n-softsync protocol. Number of learners, λ = 30;
mini-batch size µ = 128.
Fig. 6. (σ, µ, λ) tradeoff curves for (a) λ-softsync protocol and (b) 1-softsync
protocol. Shaded region shows the region bounded by µ = 128, λ = 30,
and µ = 4 contours for the hardsync protocol. λ ∈ {1, 2, 4, 10, 18, 30}
and µ ∈ {4, 8, 16, 32, 64, 128}. Note that for λ = 1, n-softsync protocol
degenerates to the hardsync protocol
Fig. 5. (σ, µ, λ) tradeoff curves for the hardsync protocol. The dashed black
line represents the 17.9% test error achieved by the baseline model (σ, µ, λ)
= (0, 128, 1) on the CIFAR10 dataset.
representative results illustrating the benefits of adopting this
learning rate modulation strategy. We show the evolution
of the test error on the CIFAR10 dataset as a function of
the training epoch for two different configurations of the n-softsync protocol (n = 4, n = 30) and set the number of
learners, λ = 30. In both these configurations, setting the
learning rate in accordance with equation (6) results in lower
test error as compared with the cases where the learning rate is
set to α0 . Surprisingly, the configuration 30-softsync, λ = 30,
α = α0 fails to converge and shows a constant high error rate
of 90%. Reducing the learning rate by a factor ⟨σ⟩ = n = 30 yields a model with a much lower test error².
B. (σ, µ, λ) tradeoff curves
Hyperparameter optimization plays a central role in obtaining good accuracy from neural network models [2]. For
the SGD training algorithm, this includes a search over the
neural network’s training parameters such as learning rates,
weight regularization, depth of the network, mini-batch size
etc. in order to improve the quality of the trained neural
network model (quantified as the error on the validation
dataset). Additionally, when distributing the training problem
across multiple learners, parameters such as the number of
learners and the synchronization protocol enforced amongst
the learners impact not only the runtime of the algorithm but
also the quality of the trained model.
An exhaustive search over the space defined by these
parameters for joint optimization of the runtime performance
and the model quality can prove to be a daunting task even for a small model such as that used for the CIFAR10 dataset, and is clearly intractable for models and datasets at the scale of ImageNet. To develop a better understanding of the
interdependence among the various tunable parameters in the
distributed deep learning problem, we introduce the notion of
(σ, µ, λ) tradeoff curves. A (σ, µ, λ) tradeoff curve plots the
error on the validation set (or test set) and the total time to
train the model (wall clock time) for different configurations
of average gradient staleness hσi, mini-batch size per learner
µ, and the number of learners λ. The configurations (σ, µ, λ)
that achieve the virtuous combination of low test error and
small training time are preferred and form ideal candidates
for further hyperparameter optimization.
We run³ the CIFAR10 benchmark for λ ∈ {1, 2, 4, 10, 18, 30} and µ ∈ {4, 8, 16, 32, 64, 128}.
Figure 5 shows a set of (σ, µ, λ) curves for the hardsync
protocol i.e. σ = 0. The baseline configuration with λ = 1
learner and mini-batch size µ = 128 achieves a test error
of 17.9%. With the exception of modifying the learning rate as α = α0·√(µλ/128), all the other neural network's
hyperparameters were kept unchanged from the baseline
configuration while generating the data points for different
values of µ and λ. Note that it is possible to achieve test
error lower than the baseline by reducing the mini-batch size
from 128 to 4. However, this configuration (indicated on the
plot as (σ, µ, λ) = (0, 4, 1)) increases training time compared
with the baseline. This is primarily due to the fact that the
dominant computation performed by the learners involves
multiple calls to matrix multiplication (GEMM) to compute
W X where samples in a mini-batch form columns of the
matrix X. Reducing the mini-batch size causes a proportionate
decrease in the GEMM throughput and slower processing of
the mini-batch by the learner.
In Figure 5, the contour labeled µ = 128 corresponds to configurations in which the mini-batch size per learner is kept constant at
128 and the number of learners is varied from λ = 1 to λ = 30.
3 The mapping between λ and the number of computing nodes η is (λ, η) =
{(1, 1), (2, 1), (4, 2), (10, 4), (18, 4), (30, 8)}
The training time reduces monotonically with λ, albeit at the
expense of an increase in the test error. Traversing along the
λ = 30 contour from configuration (σ, µ, λ) = (0, 128, 30) to
(σ, µ, λ) = (0, 4, 30) (i.e. reducing the mini-batch size from
128 to 4) helps restore much of this degradation in the test
error by partially sacrificing the speed-up obtained by the
virtue of scaling out to 30 learners.
Fig. 7. Speed-up in the training time for mini-batch size (a) µ = 128 and (b) µ = 4 for 3 different protocols: hardsync, λ-softsync, and 1-softsync. Speed-up numbers in (a) and (b) are calculated relative to (σ, µ, λ) = (0, 128, 1) and (σ, µ, λ) = (0, 4, 1), respectively.
Figure 6(a) shows (σ, µ, λ) tradeoff curves for the λ-softsync protocol. In this protocol, the parameter server updates the weights as soon as it receives a gradient from any of the learners. Therefore, as shown in section V-A, the average gradient staleness ⟨σ⟩ = λ and σmax ≤ 2λ with high probability. The learning rate is set in accordance with equation (6). All
the other hyperparameters are left unchanged from the baseline
configuration. Qualitatively, the (σ, µ, λ) tradeoff curves for
λ-softsync look similar to those observed for the hardsync
protocol. In this case, however, the degradation in the test error
relative to the baseline for the (σ, µ, λ) = (30, 128, 30) configuration is much more pronounced. As observed previously, this
increase in the test error can largely be mitigated by reducing
the size of mini-batch processed by each learner (λ = 30
contour). Note, however, the sharp increase in the training
time for the configuration (σ, µ, λ) = (30, 4, 30) as compared
with (σ, µ, λ) = (30, 128, 30). The smaller mini-batch size not
only reduces the computational throughput at each learner,
but also increases the frequency of pushGradient and
pullWeights requests at the parameter server. In addition, small mini-batch size increases the frequency of weight
updates at the parameter server. Since in the Rudra-base
architecture (section III-B), the learner does not proceed with
the computation on the next mini-batch till it has received the
updated weights, the traffic at the parameter server and the
more frequent weight updates can delay servicing the learner’s
pullWeights request, potentially stalling the gradient computation at the learner. Interestingly, all the configurations
along the µ = 4 contour show similar, if not better, test error
as the baseline. For these configurations, the average staleness
varies between 2 and 30. From this empirical observation, we
infer that a small mini-batch size per learner confers upon the
training algorithm a fairly high degree of immunity to stale
gradients.
The 1-softsync protocol shows (σ, µ, λ) tradeoff curves
(Figure 6(b)) that appear nearly identical to those observed
for the λ-softsync protocol. In this case, the average staleness
is 1 irrespective of the number of learners. Since the parameter
server waits for λ gradients to arrive before updating the
weights, there is a net reduction in the pullWeights traffic
at the parameter server (see section III-B). As a result, the
1-softsync protocol avoids the degradation in runtime observed in the λ-softsync protocol for the configuration with
µ = 4 and λ = 30. The distinction in terms of the runtime
performance between these two protocols becomes obvious
when comparing the speed-ups obtained for different mini-batch sizes (Figure 7). For µ = 128, the 1-softsync and λ-softsync protocols demonstrate similar speed-ups over the baseline configuration for up to λ = 30 learners. In this case,
the communication between the learners and the parameter
server is sporadic enough to prevent the learners from waiting
on the parameter server for updated weights. However, as the
number of learners is increased beyond 30, the bottlenecks
at the parameter server are expected to diminish the speed-up
obtainable using the λ-softsync protocol. The effect of frequent pushGradient and pullWeights requests due to smaller mini-batches manifests clearly at the parameter server as the mini-batch size is reduced to 4, in which case the λ-softsync protocol shows subdued speed-up compared with the 1-softsync protocol. In either
scenario, the hardsync protocol fares the worst in terms of
runtime performance improvement when scaling out to a large number of learners. The upside of adopting the hardsync
protocol, however, is that it achieves substantially lower test
error, even for large mini-batch sizes.
C. µλ = constant
In the hardsync protocol, given a configuration with λ
learners and mini-batch size µ per learner, the parameter server
averages the λ gradients reported to it by the
learners. Using equations (1) and (3):
∇θ^(k)(i) = (1/λ) Σ_{l=1}^{λ} ∇θ_l^(k) = (1/λ) Σ_{l=1}^{λ} (1/µ) Σ_{s=1}^{µ} ∂C(Ŷ_s, Y_s)/∂θ_l^(k)(i) = (1/(µλ)) Σ_{s=1}^{µλ} ∂C(Ŷ_s, Y_s)/∂θ^(k)(i)    (7)
The last step in equation (7) is valid since each training example (X_s, Y_s) is drawn independently from the training set and also because the hardsync protocol ensures that all the learners compute gradients on an identical set of weights, i.e. θ_l^(k)(i) = θ^(k)(i) ∀ l ∈ {1, 2, . . . , λ}. According to equation (7), the configurations (σ, µ, λ) = (0, µ0λ0, 1) and (σ, µ, λ) = (0, µ0, λ0) are equivalent from the perspective of stochastic gradient descent optimization. In general, hardsync configurations with the same µλ product are expected to give nearly⁴ the same test error.
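The equivalence asserted by equation (7) can also be checked numerically. The R sketch below (ours, not from the paper) uses a logistic loss as a stand-in for the cost C and illustrative sizes; it verifies that averaging λ per-learner gradients, each computed on a mini-batch of µ samples at identical weights, coincides with a single gradient over the pooled µλ samples:

set.seed(1)
mu <- 4; lambda <- 3; p <- 5
theta <- rnorm(p)
X <- matrix(rnorm(mu * lambda * p), ncol = p); y <- rbinom(mu * lambda, 1, 0.5)
grad <- function(X, y, theta) t(X) %*% (1 / (1 + exp(-X %*% theta)) - y) / nrow(X)
per_learner <- lapply(split(seq_len(mu * lambda), rep(1:lambda, each = mu)),
                      function(idx) grad(X[idx, , drop = FALSE], y[idx], theta))
g_avg    <- Reduce(`+`, per_learner) / lambda  # left-hand side of (7)
g_pooled <- grad(X, y, theta)                  # right-hand side of (7)
max(abs(g_avg - g_pooled))                     # zero up to floating-point error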
⁴ Small differences in the final test error achieved may arise due to the inherent nondeterminism of random sampling in stochastic gradient descent and the random initialization of the weights.

TABLE II. Results on CIFAR10 benchmark: test error at the end of 140 epochs and training time for (σ, µ, λ) configurations with µλ = constant.

µλ ≈ 128:
  σ    µ    λ    Test error   Training time (s)
  1    4    30   18.09%       1573
  30   4    30   18.41%       2073
  18   8    18   18.92%       2488
  10   16   10   18.79%       3396
  4    32   4    18.82%       7776
  2    64   2    17.96%       13449

µλ ≈ 256:
  σ    µ    λ    Test error   Training time (s)
  1    8    30   20.04%       1478
  30   8    30   19.65%       1509
  18   16   18   20.33%       2938
  10   32   10   20.82%       3518
  4    64   4    20.70%       6631
  2    128  2    19.52%       11797
  1    128  2    19.59%       11924

µλ ≈ 512:
  σ    µ    λ    Test error   Training time (s)
  1    16   30   23.25%       1469
  30   16   30   22.14%       1502
  18   32   18   23.63%       2255
  10   64   10   24.08%       2683
  4    128  4    23.01%       7089

µλ ≈ 1024:
  σ    µ    λ    Test error   Training time (s)
  1    32   30   27.16%       1299
  30   32   30   27.27%       1420
  18   64   18   28.31%       1713
  1    128  10   29.83%       2551
  10   128  10   29.90%       2626

TABLE III. Results on CIFAR10 benchmark: top-5 best (σ, µ, λ) configurations that achieve a combination of low test error and small training time.

  σ    µ    λ    Synchronization protocol   Test error   Training time (s)
  1    4    30   1-softsync                 18.09%       1573
  0    8    30   Hardsync                   18.56%       1995
  30   4    30   30-softsync                18.41%       2073
  0    4    30   Hardsync                   18.15%       2235
  18   8    18   18-softsync                18.92%       2488

In Table II we report the test error at the end of 140 epochs for configurations with µλ = constant. Interestingly,
we find that even for the n-softsync protocol, configurations
that maintain µλ = constant achieve comparable test errors. At
the same time, the test error turns out to be rather independent
of the staleness in the gradients for a given µλ product. For
instance, Table II shows that when µλ ≈ 128, but the (average)
gradient staleness is allowed to vary between 1 and 30, the test
error stays ∼18-19%. Although this result may seem counterintuitive, a plausible explanation emerges when considering
the measurements shown earlier in Figure 3, that our implementation of the n-softsync protocol achieves an average
gradient staleness of n while bounding the maximum staleness
at 2n. Consequently, at any stage in the gradient descent
algorithm, the weights being used by the different learners
(θl (t)) do not differ significantly and can be considered to
be approximately the same. The quality of this approximation
improves when each update
θ(k) (i + 1) = θ(k) (i) − α∇θ(k) (i)
creates only a small displacement in the weight space. This
can be controlled by suitably tuning the learning rate α.
Qualitatively, the learning rate should decrease as the staleness
in the system increases in order to reduce the divergence across
the weights seen by the learners. The learning rate modulation
of equation (6) achieves precisely this effect.
These results help define a principled approach for distributed training of neural networks: the mini-batch size per
learner should be reduced as more learners are added to the
system in way that keeps µλ product constant. In addition,
the learning rate should be modulated to account for stale
gradients. In Table II, 1-softsync (σ = 1) protocol invariably
shows the smallest training time for any µλ. This is expected,
since the 1-softsync protocol minimizes the traffic at the
parameter server. Table II also shows that the test error
increases monotonically with the µλ product. These results
reveal the scalability limits under the constraints of preserving
the model accuracy. Since the smallest possible mini-batch size
is 1, the maximum number of learners is bounded. This upper
bound on the maximum number of learners can be relaxed
if an inferior model is acceptable. Alternatively, it may be
possible to reduce the test error for higher µλ by running
for more number of epochs. In such a scenario, adding more
learners to the system may give diminishing improvements
in the overall runtime. From machine learning perspective,
this points to an interesting research direction on designing
optimization algorithm and learning strategies that perform
well with large mini-batch sizes.
D. Summary of results on CIFAR10 benchmark
Table III summarizes the results obtained on the CIFAR10
dataset using the Rudra-base system architecture. As a reference for comparison, the baseline configuration (σ, µ, λ) =
(0, 128, 1) achieves a test error of 17.9% and takes 22,392
seconds to finish training 140 epochs.
E. Results on ImageNet benchmark
The large model size of the neural network used for the
ImageNet benchmark and the associated computational cost
of training this model prohibits an exhaustive state space
exploration. The baseline configuration (µ = 256, λ = 1)
takes 54 hours/epoch. Guided by the results of section V-C,
we first consider a configuration with µ = 16, λ = 18 and
employ the Rudra-base architecture with hardsync protocol
(base-hardsync). This configuration performs training at the
speed of ∼330 minutes/epoch and achieves a top-5 error of
20.85%, matching the accuracy of the baseline configuration
(µ = 256, λ = 1, section IV-B).
The synchronization overheads associated with the hardsync
protocol deteriorate the runtime performance and the training
speed can be further improved by switching over to the 1-softsync protocol. Training using the 1-softsync protocol with
mini-batch size of µ = 16 and 18 learners takes ∼270
minutes/epoch, reaching a top-1 (top-5) accuracy of 45.63%
(22.08%) by the end of 30 epochs (base-softsync).

TABLE IV. Results on ImageNet benchmark: validation error at the end of 30 epochs and training time per epoch for different configurations.

  Configuration   Architecture   µ    λ    Synchronization protocol   Validation error (top-1)   Validation error (top-5)   Training time (minutes/epoch)
  base-hardsync   Rudra-base     16   18   Hardsync                   44.35%                     20.85%                     330
  base-softsync   Rudra-base     16   18   1-softsync                 45.63%                     22.08%                     270
  adv-softsync    Rudra-adv      4    54   1-softsync                 46.09%                     22.44%                     212
  adv∗-softsync   Rudra-adv∗     4    54   1-softsync                 46.53%                     23.38%                     125

For this particular benchmark, the training setup for the 1-softsync protocol differs from the hardsync protocol in certain subtle,
but important ways. First, we use an adaptive learning rate
method (AdaGrad [7, 6]) to improve the stability of SGD when
training using the 1-softsync protocol. Second, to improve
convergence we adopt the strategy of warmstarting [21] the
training procedure by initializing the network’s weights from
a model trained with hardsync for 1 epoch.
Further improvement in the runtime performance may be
obtained by adding more learners to the system. However,
as observed in the previous section, increase in the number
of learners needs to be accompanied by a reduction in the
mini-batch size to prevent degradation in the accuracy of the
trained model. The combination of a large number of learners
and a small mini-batch size represents a scenario where the
Rudra-base architecture may suffer from a bottleneck at the
parameter server due to the frequent pushGradient and
pullWeights requests. These effects are expected to be
more pronounced for large models such as ImageNet. The
Rudra-adv architecture alleviates these bottlenecks, to some
extent, by implementing a parameter server group organized in
a tree structure. With λ = 54 learners, each processing a mini-batch of size µ = 4, training proceeds at ∼212 minutes/epoch when using the Rudra-adv architecture and the 1-softsync protocol (adv-softsync). As in
the case of Rudra-base, the average staleness in the gradients
is close to 1 and this configuration also achieves a top-1(top-5)
error of 46.09%(22.44%).
The Rudra-adv∗ architecture improves the runtime further
by preventing the computation at the learner from stalling
on the parameter server. However, this improvement in performance comes at the cost of increasing the average staleness in the gradients, which may deteriorate the quality of
the trained model. The previous configuration runs at ∼125
minutes/epoch, but suffers an increase in the top-1 validation
error (46.53%) when using the Rudra-adv∗ architecture (adv∗-softsync). Table IV summarizes the results obtained for the
4 configurations discussed above. It is worth mentioning that
the configuration µ = 8, λ = 54 performs significantly
worse, producing a model that gives top-1 error of > 50%
but trains at a speed of ∼96 minutes/epoch. This supports our
observation that scaling out to large number of learners must
be accompanied by reducing the mini-batch size per learner
so the quality of the trained model can be preserved.
Fig. 8. Results on ImageNet benchmark: Error on the validation set as a function of training time for the configurations listed in Table IV.
Figure 8 compares the evolution of the top-1 validation error during training for the 4 different configurations summarized in Table IV. The training speed follows the order adv∗-softsync > adv-softsync > base-softsync > base-hardsync. As a result, adv∗-softsync is the first configuration to hit the 48% validation
error mark. Configurations other than base-hardsync show
marginally higher validation error compared with the baseline.
As mentioned earlier, the experiments with the 1-softsync protocol use AdaGrad to achieve stable convergence. It is well-documented [24, 21] that AdaGrad is sensitive to the initial setting of the learning rates. We speculate that tuning the
initial learning rate can help recover the slight loss of accuracy
for the 1-softsync runs.
VI. RELATED WORKS
To accelerate the training of deep neural networks and to handle large dataset and model sizes, many researchers have adopted GPU-based solutions, either single-GPU [16] or multi-GPU [26]. GPUs provide enormous computing power and are particularly suited for the matrix multiplications that are at the core of many deep learning implementations. However,
the relatively limited memory available on GPUs may restrict
their applicability to large model sizes.
Distbelief [6] pioneered scale-out deep learning on CPUs.
Distbelief is built on tens of thousands of commodity PCs and
employs both model parallelism via dividing a single model
between learners, and data parallelism via model replication.
To reduce traffic to the parameter server, Distbelief shards
parameters over a parameter server group. Learners asynchronously push gradients and pull weights from the parameter
server. The frequency of communication can be tuned via
npush and nfetch parameters.
More recently, Adam [3] employs a similar system architecture to DistBelief, while improving on Distbelief in two
respects: (1) better system tuning, e.g. customized concurrent
memory allocator, better linear algebra library implementation,
and passing activation and error gradient vector instead of the
weights update; and (2) leveraging the recent improvement in
machine learning, in particular convolutional neural network
to achieve better model accuracy.
In any parameter server based deep learning system [12],
staleness will negatively impact model accuracy. Orthogonal to
the system design, many researchers have proposed solutions
to counter staleness in the system, such as bounding the
staleness in the system [5] or changing the optimization objective function, as in elastic averaging SGD [25]. In this paper, we empirically study how staleness affects the model accuracy and discover the heuristic that a smaller mini-batch size can effectively counter system staleness. In our experiments, we derive this heuristic from a small problem size (e.g., CIFAR10) and find that it remains applicable even at a much larger problem size (e.g., ImageNet). Our finding coincides with a very
recent theoretical paper [19], in which the authors prove that in
order to achieve linear speedup using an asynchronous protocol while maintaining good model accuracy, one needs to increase the number of weight updates conducted at the parameter server. In our system, this theoretical finding is equivalent to keeping the number of training epochs constant while reducing the
mini-batch size. To the best of our knowledge, our work is the
first systematic study of the tradeoff between model accuracy
and runtime performance for distributed deep learning.
VII. CONCLUSION
In this paper, we empirically studied the interplay of hyperparameter tuning and scale-out in three protocols for communicating model weights in asynchronous stochastic gradient
descent. We divide the learning rate by the average staleness of
gradients, resulting in faster convergence and lower test error.
Our experiments show that the 1-softsync protocol (in which
the parameter server accumulates λ gradients before updating
the weights) minimizes gradient staleness and achieves the
lowest runtime for a given test error. We found that to maintain
model accuracy, it is necessary to reduce the mini-batch
size as the number of learners is increased. This suggests an
upper limit on the level of parallelism that can be exploited
for a given model, and consequently a need for algorithms that
permit training over larger batch sizes.
ACKNOWLEDGEMENT
The work of Fei Wang is partially supported by National
Science Foundation under Grant Number IIS-1650723.
REFERENCES
[1] Y. Bengio. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the
Trade, pages 437–478. Springer, 2012.
[2] T. M. Breuel. The effects of hyperparameters on SGD training
of neural networks. arXiv:1508.02788, 2015.
[3] T. Chilimbi, Y. Suzue, J. Apacible, and K. Kalyanaraman.
Project Adam: Building an efficient and scalable deep learning
training system. OSDI’14, pages 571–582, 2014.
[4] A. Coates, B. Huval, T. Wang, D. Wu, B. Catanzaro, and N. Andrew. Deep learning with cots hpc systems. In Proceedings of
the 30th ICML, pages 1337–1345, 2013.
[5] H. Cui et al. Exploiting bounded staleness to speed up big data analytics. In USENIX ATC'14, pages 37–48, 2014.
[6] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V.
Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang,
and A. Y. Ng. Large scale distributed deep networks. In NIPS,
2012.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient
methods for online learning and stochastic optimization. The
Journal of Machine Learning Research, 12:2121–2159, 2011.
[8] O. Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. IJCV, pages 1–42, 2015.
[9] M. P. I. Forum.
Mpi 3.0 standard.
www.mpiforum.org/docs/mpi-3.0/mpi30-report.pdf, 2012.
[10] X. Glorot and Y. Bengio. Understanding the difficulty of
training deep feedforward neural networks. In AISTATS, pages
249–256, 2010.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into
rectifiers: Surpassing human-level performance on imagenet
classification. arXiv preprint arXiv:1502.01852, 2015.
[12] Q. Ho, J. Cipar, H. Cui, S. Lee, J. K. Kim, P. B. Gibbons, G. A.
Gibson, G. Ganger, and E. P. Xing. More effective distributed
ML via a stale synchronous parallel parameter server. In NIPS
26, pages 1223–1231. 2013.
[13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. arXiv
preprint arXiv:1502.03167, 2015.
[14] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long,
R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint
arXiv:1408.5093, 2014.
[15] A. Krizhevsky and G. Hinton. Learning multiple layers of
features from tiny images. Computer Science Department,
University of Toronto, Tech. Rep, 1(4):7, 2009.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in
neural information processing systems, pages 1097–1105, 2012.
[17] L. Lamport. Time, clocks, and the ordering of events in a
distributed system. Commun. ACM, 21(7):558–565, 1978.
[18] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature,
521(7553):436–444, 2015.
[19] X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous Parallel
Stochastic Gradient for Nonconvex Optimization. ArXiv eprints, June 2015.
[20] M. Raynal and M. Singhal. Logical time: Capturing causality
in distributed systems. Computer, 29(2):49–56, 1996.
[21] A. Senior, G. Heigold, M. Ranzato, and K. Yang. An empirical
study of learning rates in deep neural networks for speech
recognition. In ICASSP, pages 6724–6728. IEEE, 2013.
[22] A. Smola and S. Narayanamurthy. An architecture for parallel
topic models. Proc. VLDB Endow., 3(1-2):703–710, 2010.
[23] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the
importance of initialization and momentum in deep learning.
In Proceedings of the 30th ICML, pages 1139–1147, 2013.
[24] M. D. Zeiler. Adadelta: An adaptive learning rate method. arXiv
preprint arXiv:1212.5701, 2012.
[25] S. Zhang, A. Choromanska, and Y. LeCun. Deep learning with
elastic averaging SGD. CoRR, abs/1412.6651, 2014.
[26] Y. Zou, X. Jin, Y. Li, Z. Guo, E. Wang, and B. Xiao. Mariana:
Tencent deep learning platform and its applications. Proc. VLDB
Endow., 7(13):1772–1777, 2014.
The Likelihood Ratio Test in High-Dimensional Logistic
Regression Is Asymptotically a Rescaled Chi-Square
arXiv:1706.01191v1 [math.ST] 5 Jun 2017
Pragya Sur∗
Yuxin Chen†
Emmanuel J. Candès∗‡
June 2017
Abstract
Logistic regression is used thousands of times a day to fit data, predict future outcomes, and assess the
statistical significance of explanatory variables. When used for the purpose of statistical inference, logistic
models produce p-values for the regression coefficients by using an approximation to the distribution of
the likelihood-ratio test. Indeed, Wilks’ theorem asserts that whenever we have a fixed number p of
variables, twice the log-likelihood ratio (LLR) 2Λ is distributed as a χ2k variable in the limit of large
sample sizes n; here, χ2k is a chi-square with k degrees of freedom and k the number of variables being
tested. In this paper, we prove that when p is not negligible compared to n, Wilks’ theorem does not
hold and that the chi-square approximation is grossly incorrect; in fact, this approximation produces
p-values that are far too small (under the null hypothesis).
Assume that n and p grow large in such a way that p/n → κ for some constant κ < 1/2. We prove that
for a class of logistic models, the LLR converges to a rescaled chi-square, namely, 2Λ →d α(κ)χ²_k, where
the scaling factor α(κ) is greater than one as soon as the dimensionality ratio κ is positive. Hence, the
LLR is larger than classically assumed. For instance, when κ = 0.3, α(κ) ≈ 1.5. In general, we show how
to compute the scaling factor by solving a nonlinear system of two equations with two unknowns. Our
mathematical arguments are involved and use techniques from approximate message passing theory, from
non-asymptotic random matrix theory and from convex geometry. We also complement our mathematical
study by showing that the new limiting distribution is accurate for finite sample sizes.
Finally, all the results from this paper extend to some other regression models such as the probit
regression model.
Keywords. Logistic regression, likelihood-ratio tests, Wilks’ theorem, high-dimensionality, goodness of
fit, approximate message passing, concentration inequalities, convex geometry, leave-one-out analysis
1
Introduction
Logistic regression is by far the most widely used tool for relating a binary response to a family of explanatory
variables. This model is used to infer the importance of variables and nearly all standard statistical softwares
have inbuilt packages for obtaining p-values for assessing the significance of their coefficients. For instance,
one can use the snippet of R code below to fit a logistic regression model from a vector y of binary responses
and a matrix X of covariates:
fitted <- glm(y ~ X + 0, family = 'binomial')   # logistic fit without an intercept
pvals <- summary(fitted)$coefficients[, 4]      # p-values for each coefficient
∗ Department
of Statistics, Stanford University, Stanford, CA 94305, U.S.A.
of Electrical Engineering, Princeton University, Princeton, NJ 08544, U.S.A.
‡ Department of Mathematics, Stanford University, Stanford, CA 94305, U.S.A.
† Department
The vector pvals stores p-values for testing whether a variable belongs to a model or not, and it is well known
that the underlying calculations used to produce these p-values can also be used to construct confidence
intervals for the regression coefficients. Since logistic models are used hundreds of times every day for
inference purposes, it is important to know whether these calculations—e.g. these p-values—are accurate
and can be trusted.
1.1
Binary regression
Imagine we have n samples of the form (yi , Xi ), where yi ∈ {0, 1} and Xi ∈ Rp . In a generalized linear
model, one postulates the existence of a link function µ(·) relating the conditional mean of the response
variable to the linear predictor Xi> β,
E[yi |Xi ] = µ(Xi> β),
(1)
where β = [β1 , β2 , . . . , βp ]> ∈ Rp is an unknown vector of parameters. We focus here on the two most
commonly used binary regression models, namely, the logistic and the probit models for which
µ(t) := e^t/(1 + e^t) in the logistic model, and µ(t) := Φ(t) in the probit model;    (2)
here, Φ is the cumulative distribution function (CDF) of a standard normal random variable. In both cases,
the Symmetry Condition
µ(t) + µ(−t) = 1
(3)
holds, which says that the two types yi = 0 and yi = 1 are treated in a symmetric fashion. Assuming that
the observations are independent, the negative log-likelihood function is given by [1, Section 4.1.2]
ℓ(β) := − Σ_{i=1}^{n} [ yi log( µi/(1 − µi) ) + log(1 − µi) ],    µi := µ(Xi>β).
Invoking the symmetry condition, a little algebra reveals an equivalent expression
ℓ(β) := Σ_{i=1}^{n} ρ(−ỹi Xi>β),    (4)
where
ỹi := 1 if yi = 1, and ỹi := −1 if yi = 0,
and
ρ(t) := log(1 + e^t) in the logistic case, and ρ(t) := −log Φ(−t) in the probit case.    (5)
Throughout we refer to this function ρ as the effective link.
1.2
The likelihood-ratio test and Wilks’ phenomenon
Researchers often wish to determine which covariates are of importance, or more precisely, to test whether
the jth variable belongs to the model or not: formally, we wish to test the hypothesis
Hj : βj = 0 versus βj ≠ 0.    (6)
Arguably, one of the most commonly deployed techniques for testing Hj is the likelihood-ratio test (LRT),
which is based on the log-likelihood ratio (LLR) statistic
Λj := ℓ(β̂(−j)) − ℓ(β̂).    (7)
Here, β̂ and β̂(−j) denote respectively the maximum likelihood estimates (MLEs) under the full model and
the reduced model on dropping the jth predictor; that is,
β̂ = arg min_{β∈Rp} ℓ(β)   and   β̂(−j) = arg min_{β∈Rp, βj=0} ℓ(β).
`(β).
Inference based on such log-likelihood ratio statistics has been studied extensively in prior literature [12, 37,
52]. Arguably, one of the most celebrated results in the large-sample regime is Wilks' theorem.
To describe Wilks' phenomenon, imagine we have a sequence of observations (yi , Xi ) where yi ∈ {0, 1},
Xi ∈ Rp with p fixed. Since we are interested in the limit of large samples, we may want to assume that the
covariates are i.i.d. drawn from some population with non-degenerate covariance matrix so that the problem
is fully p-dimensional. As before, we assume a conditional logistic model for the response. In this setting,
Wilks’ theorem [52] calculates the asymptotic distribution of Λj (n) when n grows to infinity:
(Wilks’ phenomenon) Under suitable regularity conditions which, for instance, guarantee that the
MLE exists and is unique,¹ the LLR statistic for testing Hj : βj = 0 vs. βj ≠ 0 has asymptotic
distribution under the null given by
2Λj(n) →d χ²₁,   as n → ∞.    (8)
This fixed-p large-n asymptotic result, which is a consequence of asymptotic normality properties of the
MLE [50, Theorem 5.14], applies to a much broader class of testing problems in parametric models; for
instance, it applies to the probit model as well. We refer the readers to [34, Chapter 12] and [50, Chapter
16] for a thorough exposition and details on the regularity conditions under which Wilks’ theorem holds.
Finally, there is a well-known extension which states that if we were to drop k variables from the model,
then the LLR would converge to a chi-square distribution with k degrees of freedom under the hypothesis
that the reduced model is correct.
1.3
Inadequacy of Wilks’ theorem in high dimensions
The chi-square approximation to the distribution of the LLR statistic is used in standard statistical softwares
to provide p-values for the single or multiple coefficient likelihood ratio tests. Here, we perform a simple
experiment on synthetic data to study the accuracy of the chi-square approximation when p and n are
both decently large. Specifically, we set β = 0 and test β1 = 0 vs. β1 6= 0 using the LRT in a setting
i.i.d.
where p = 1200. In each trial, n = 4000 observations are produced with yi ∼ Bernoulli(1/2), and X :=
[X1 , · · · , Xn ]> ∈ Rn×p is obtained by generating a random matrix composed of i.i.d. N (0, 1) entries. We
fit a logistic regression of y on X using R, and extract the p-values for each coefficient. Figure 1 plots the
pooled histogram that aggregates 4.8 × 105 p-values in total (400 trials with 1200 p-values obtained in each
trial).
If the χ21 approximation were true, then we would expect to observe uniformly distributed p-values. The
histogram from Fig. 1 is, however, far from uniform. This is an indication of the inadequacy of Wilks’
theorem when p and n are both large. The same issue was also reported in [11], where the authors observed
that this discrepancy is highly problematic since the distribution is skewed towards smaller values. Hence,
such p-values cannot be trusted to construct level-α tests and the problem is increasingly severe when we
turn attention to smaller p-values as in large-scale multiple testing applications.
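The experiment just described is easy to reproduce. The following R sketch (ours, not the authors' code) generates one trial and computes the classical LRT p-value for H1 : β1 = 0; smaller n and p can be substituted to keep the run fast.

set.seed(1)
n <- 4000; p <- 1200
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, 0.5)                                   # beta = 0, so y ~ Bernoulli(1/2)
full    <- glm(y ~ X + 0, family = binomial())
reduced <- glm(y ~ X[, -1] + 0, family = binomial())
llr <- deviance(reduced) - deviance(full)                # 2 * Lambda_1
pchisq(llr, df = 1, lower.tail = FALSE)                  # classical chi-square(1) p-value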
1.4
The Bartlett correction?
A natural question that arises immediately is whether the observed discrepancy could be an outcome of
a finite-sample effect. It has been repeatedly observed that the chi-square approximation does not yield
accurate results with finite sample size. One correction to the LRT that is widely used in finite samples is
the Bartlett correction, which dates back to Bartlett [5] and has been extensively studied over the past few
decades (e.g. [8, 10, 13, 15, 33]). In the context of testing for a single coefficient in the logistic model, this
correction can be described as follows [38]: compute the expectation of the LLR statistic up to terms of
order 1/n2 ; that is, compute a parameter α such that
E[2Λj] = 1 + α/n + O(1/n²),
1 Such
conditions would also typically imply asymptotic normality of the MLE.
Figure 1: Histogram of p-values for logistic regression under i.i.d. Gaussian design, when β = 0, n = 4000,
p = 1200, and κ = 0.3: (a) classically computed p-values; (b) Bartlett-corrected p-values; (c) adjusted
p-values.
which suggests a corrected LLR statistic
2Λj / (1 + αn/n)    (9)
with αn being an estimator of α. With a proper choice of αn , one can ensure
E[ 2Λj / (1 + αn/n) ] = 1 + O(1/n²)
in the classical setting where p is fixed and n diverges. In expectation, this corrected statistic is closer to a
χ21 distribution than the original LLR for finite samples. Notably, the correction factor may in general be a
function of the unknown β and, in that case, must be estimated from the null model via maximum likelihood
estimation.
In the context of GLMs, Cordeiro [13] derived a general formula for the Bartlett corrected LLR statistic,
see [14, 17] for a detailed survey. In the case where there is no signal (β = 0), one can compute αn for the
logistic regression model following [13] and [38], which yields
αn = (n/2) [ Tr(Dp²) − Tr(Dp−1²) ].    (10)
Here, Dp is the diagonal part of X(X>X)⁻¹X> and Dp−1 is that of X(−j)(X(−j)>X(−j))⁻¹X(−j)>, in
which X(−j) is the design matrix X with the jth column removed. Comparing the adjusted LLRs to a χ21
distribution yields adjusted p-values. In the setting of Fig. 1(a), the histogram of Bartlett corrected p-values
is shown in Fig. 1(b). As we see, these p-values are still far from uniform.
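For completeness, the correction factor in (10) is straightforward to compute from the design matrix. The R sketch below (ours) evaluates αn for testing the jth coefficient under the global null; only the diagonals of the two hat-type matrices are needed.

diag_hat <- function(X) rowSums((X %*% solve(crossprod(X))) * X)   # diag of X (X'X)^{-1} X'
bartlett_alpha_n <- function(X, j) {
  (nrow(X) / 2) * (sum(diag_hat(X)^2) - sum(diag_hat(X[, -j, drop = FALSE])^2))
}
# Bartlett-corrected statistic: 2 * Lambda_j / (1 + bartlett_alpha_n(X, j) / nrow(X))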
If the mismatch is not due to finite sample-size effects, what is the distribution of the LLR in high
dimensions? Our main contribution is to provide a very precise answer to this question; below, we derive the
high-dimensional asymptotic distribution of the log-likelihood ratios, i.e. in situations where the dimension
p is not necessarily negligible compared to the sample size n.
2
Main results
2.1
Modelling assumptions
In this paper, we focus on the high-dimensional regime where the sample size is not much larger than the
number of parameters to be estimated—a setting which has attracted a flurry of activity in recent years.
In particular, we assume that the number p(n) of covariates grows proportionally with the number n of
observations; that is,
lim_{n→∞} p(n)/n = κ,    (11)
where κ > 0 is a fixed constant independent of n and p(n). In fact, we shall also assume κ < 1/2 for both
the logistic and the probit models, as the MLE is otherwise at ∞; see Section 2.2.
To formalize the notion of high-dimensional asymptotics when both n and p(n) diverge, we consider a
sequence of instances {X(n), y(n)}n≥0 such that for any n,
• X(n) ∈ Rn×p(n) has i.i.d. rows Xi (n) ∼ N (0, Σ), where Σ ∈ Rp(n)×p(n) is positive definite;
• yi (n) | X(n) ∼ yi (n) | Xi (n) ∼ Bernoulli( µ(Xi (n)>β(n)) ), independently across i, where µ satisfies the Symmetry Condition;
• we further assume β(n) = 0. From the Symmetry Condition it follows that µ(0) = 1/2, which directly
implies that y(n) is a vector with i.i.d Bernoulli(1/2) entries.
The MLE is denoted by β̂(n) and there are p(n) LLR statistics Λj (n) (1 ≤ j ≤ p(n)), one for each of the
p(n) regression coefficients. In the sequel, the dependency on n shall be suppressed whenever it is clear from
the context.
2.2
When does the MLE exist?
Even though we are operating in the regime where n > p, the existence of the MLE cannot be guaranteed
for all p and n. Interestingly, the norm of the MLE undergoes a sharp phase transition in the sense that
‖β̂‖ = ∞ if κ > 1/2   and   ‖β̂‖ < ∞ if κ < 1/2.
Here, we develop some understanding about this phenomenon. Given that ρ(t) ≥ ρ(−∞) = 0 for both
the logistic and probit models, each summand in (4) is minimized if ỹi Xi> β = ∞, which occurs when
sign(Xi>β) = sign(ỹi) and ‖β‖ = ∞. As a result, if there exists a nontrivial ray β such that
Xi>β > 0 if ỹi = 1   and   Xi>β < 0 if ỹi = −1    (12)
for any 1 ≤ i ≤ n, then pushing ‖β‖ to infinity leads to an optimizer of (4). In other words, the solution to
(4) becomes unbounded (the MLE is at ∞) whenever there is a hyperplane perfectly separating the two sets
of samples {i | ỹi = 1} and {i | ỹi = −1}.
Under the assumptions from Section 2.1, ỹi is independent of X and the distribution of X is symmetric.
Hence, to calculate the chance that there exists a separating hyperplane, we can assume ỹi = 1 (1 ≤ i ≤ n)
without loss of generality. In this case, the event (12) becomes
{Xβ | β ∈ Rp} ∩ Rn++ ≠ ∅,    (13)
where Rn++ is the positive orthant. Write X = ZΣ1/2 so that Z is an n × p matrix with i.i.d. standard
Gaussian entries, and θ = Σ1/2 β. Then the event (13) is equivalent to
{Zθ | θ ∈ Rp} ∩ Rn++ ≠ ∅.    (14)
Now the probability that (14) occurs is the same as that
{Zθ | θ ∈ Rp} ∩ Rn+ ≠ {0}    (15)
occurs, where Rn+ denotes the non-negative orthant. From the approximate kinematic formula [3, Theorem
I] in the literature on convex geometry, the event (15) happens with high probability if and only if the total
statistical dimension of the two closed convex cones exceeds the ambient dimension, i.e.
δ({Zθ | θ ∈ Rp}) + δ(Rn+) > n + o(n).    (16)
Here, the statistical dimension of a closed convex cone K is defined as
δ(K) := E_{g∼N(0,I)} ‖ΠK(g)‖²    (17)
with ΠK(g) := arg min_{z∈K} ‖g − z‖ the Euclidean projection. Recognizing that [3, Proposition 2.4]
δ ({Zθ | θ ∈ Rp }) = p and δ(Rn+ ) = n/2,
we reduce the condition (16) to
p + n/2 > n + o(n)
or
p/n > 1/2 + o(1),
thus indicating that ‖β̂‖ = ∞ when κ = lim p/n > 1/2.
This argument only reveals that ‖β̂‖ = ∞ in the regime where κ > 1/2. If κ = p/n < 1/2, then ‖Σ1/2β̂‖ = O(1) with high probability, a fact we shall prove in Section 5. In light of these observations we
work with the additional condition
κ < 1/2.
(18)
2.3
The high-dimensional limiting distribution of the LLR
In contrast to the classical Wilks’ result, our findings reveal that the LLR statistic follows a rescaled chisquare distribution with a rescaling factor that can be explicitly pinned down through the solution to a
system of equations.
2.3.1
A system of equations
We start by setting up the crucial system of equations. Before proceeding, we first recall the proximal
operator
prox_{bρ}(z) := arg min_{x∈R} { bρ(x) + (1/2)(x − z)² }    (19)
defined for any b > 0 and convex function ρ(·). As in [18], we introduce the operator
Ψ(z; b) := b ρ′(prox_{bρ}(z)),    (20)
which is simply the proximal operator of the conjugate (bρ)∗ of bρ.2 To see this, we note that Ψ satisfies the
relation [18, Proposition 6.4]
Ψ(z; b) + proxbρ (z) = z.
(21)
The claim that Ψ(·; b) = prox(bρ)∗ (·) then follows from the Moreau decomposition
prox_f(z) + prox_{f∗}(z) = z,   ∀z,    (22)
² The conjugate f∗ of a function f is defined as f∗(x) = sup_{u∈dom(f)} { ⟨u, x⟩ − f(u) }.
Figure 2: Rescaling constant τ∗2 /b∗ as a function of κ for the logistic model. Note the logarithmic scale in
the right panel. The curves for the probit model are nearly identical.
which holds for a closed convex function f [39, Section 2.5]. Interested readers are referred to [18, Appendix
1] for more properties of proxbρ and Ψ.
We are now in position to present the system of equations that plays a crucial role in determining the
distribution of the LLR statistic in high dimensions:
τ² = (1/κ) E[ (Ψ(τZ; b))² ],    (23)
κ = E[ Ψ′(τZ; b) ],    (24)
where Z ∼ N(0, 1), and Ψ′(·, ·) denotes differentiation with respect to the first variable. The fact that this
system of equations would admit a unique solution in R2+ is not obvious a priori. We shall establish this for
the logistic and the probit models later in Section 6.
2.3.2
Main result
Theorem 1. Consider a logistic or probit regression model under the assumptions from Section 2.1. If
κ ∈ (0, 1/2), then for any 1 ≤ j ≤ p, the log-likelihood ratio statistic Λj as defined in (7) obeys
2Λj →d (τ∗²/b∗) χ²₁,   as n → ∞,    (25)
where (τ∗ , b∗ ) ∈ R2+ is the unique solution to the system of equations (23) and (24). Furthermore, the LLR
statistic obtained by dropping k variables for any fixed k converges to (τ∗2 /b∗ )χ2k . Finally, these results extend
to all binary regression models with links obeying the assumptions listed in Section 2.3.3.
Hence, the limiting distribution is a rescaled chi-square with a rescaling factor τ∗2 /b∗ that only depends
on the aspect ratio κ. Fig. 2 illustrates the dependence of the rescaling factor on the limiting aspect ratio κ
for logistic regression. The figures for the probit model are similar as the rescaling constants actually differ
by very small values.
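The rescaling factor can be computed numerically from (23) and (24). The R sketch below (ours, not the authors' code) does this for the logistic model, using the relation Ψ(z; b) = z − prox_{bρ}(z) from (21); the quadrature limits, root-finding brackets and the fixed-point iteration on τ² are our own choices, and convergence of the iteration is not guaranteed in general.

rho_p  <- function(x) 1 / (1 + exp(-x))                       # rho'(x) for the logistic link
rho_pp <- function(x) rho_p(x) * (1 - rho_p(x))               # rho''(x)
prox   <- function(z, b)                                      # prox_{b rho}(z): solves x + b rho'(x) = z
  uniroot(function(x) x + b * rho_p(x) - z, c(z - b - 1, z + 1))$root
Egauss <- function(f) integrate(function(z) sapply(z, f) * dnorm(z), -8, 8)$value
rescaling_factor <- function(kappa, iters = 25) {
  tau2 <- 1
  for (it in seq_len(iters)) {
    tau <- sqrt(tau2)
    # b solving (24), using Psi'(z; b) = b rho''(prox) / (1 + b rho''(prox))
    b <- uniroot(function(b)
      Egauss(function(z) { x <- prox(tau * z, b); b * rho_pp(x) / (1 + b * rho_pp(x)) }) - kappa,
      c(1e-3, 100))$root
    # update via (23), using Psi(z; b) = z - prox(z; b)
    tau2 <- Egauss(function(z) (tau * z - prox(tau * z, b))^2) / kappa
  }
  c(tau2 = tau2, b = b, rescaling = tau2 / b)
}
# rescaling_factor(0.3)   # the paper reports alpha(0.3) ~ 1.5 (cf. Fig. 2)
# adjusted p-value: pchisq(2 * Lambda_j / rescaling, df = 1, lower.tail = FALSE)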
To study the quality of approximation for finite samples, we repeat the same numerical experiments as
before but now obtain the p-values by comparing the LLR statistic with the rescaled chi-square suggested by
Theorem 1. For a particular run of the experiment (n = 4000, p = 1200, κ = 0.3), we compute the adjusted
LLR statistic (2b∗ /τ∗2 )Λj for each coefficient and obtain the p-values based on the χ21 distribution. The
pooled histogram that aggregates 4.8 × 105 p-values in total is shown in Fig. 1(c).
As we clearly see, the p-values are much closer to a uniform distribution now. One can compute the
chi-square goodness of fit statistic to test the closeness of the above distribution to uniformity. To this end,
we divide the interval [0, 1] into 20 equally spaced bins of width 0.05 each. For each bin we compute the
observed number of times a p-value falls in the bin out of the 4.8 × 105 values. Then a chi-square goodness of
fit statistic is computed, noting that the expected frequency is 24000 for each bin. The chi-square statistic
in this case is 16.049, which gives a p-value of 0.654 in comparison with a χ²₁₉ variable. The same test when
performed with the Bartlett corrected p-values (Fig. 1(b)) yields a chi-square statistic 5599 with a p-value
of 0.³ Thus, our correction gives the desired uniformity in the p-values when the true signal β = 0.
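The goodness-of-fit computation described above is reproduced in the short R sketch below (ours); it bins the pooled p-values into 20 equal-width bins and compares the observed counts with the uniform expectation.

gof_uniform <- function(pvals, bins = 20) {
  observed <- table(cut(pvals, breaks = seq(0, 1, length.out = bins + 1), include.lowest = TRUE))
  expected <- length(pvals) / bins
  stat <- sum((observed - expected)^2 / expected)     # chi-square goodness-of-fit statistic
  pchisq(stat, df = bins - 1, lower.tail = FALSE)     # compared with a chi-square(19) variable
}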
Practitioners would be concerned about the validity of p-values when they are small—again, think about
multiple testing applications. In order to study whether our correction yields valid results for small p-values,
we compute the proportion of times the p-values (in all the three cases) lie below 5%, 1%, 0.5%, 0.1% out
of the 4.8 × 105 times. The results are summarized in Table 1. This further illustrates the deviation from
uniformity for the classical and Bartlett corrected p-values, whereas the “adjusted” p-values obtained invoking
Theorem 1 are still valid.
                         Classical             Bartlett-corrected    Adjusted
P{p-value ≤ 5%}          11.1044% (0.0668%)    6.9592% (0.0534%)     5.0110% (0.0453%)
P{p-value ≤ 1%}          3.6383% (0.038%)      1.6975% (0.0261%)     0.9944% (0.0186%)
P{p-value ≤ 0.5%}        2.2477% (0.0292%)     0.9242% (0.0178%)     0.4952% (0.0116%)
P{p-value ≤ 0.1%}        0.7519% (0.0155%)     0.2306% (0.0078%)     0.1008% (0.0051%)
P{p-value ≤ 0.05%}       0.4669% (0.0112%)     0.124% (0.0056%)      0.0542% (0.0036%)
P{p-value ≤ 0.01%}       0.1575% (0.0064%)     0.0342% (0.0027%)     0.0104% (0.0014%)

Table 1: Estimates of p-value probabilities with estimated Monte Carlo standard errors in parentheses under i.i.d. Gaussian design.
2.3.3
Extensions
As noted in Section 1.1, the Symmetry Condition (3) allows to express the negative log-likelihood in the
form (4), which makes use of the effective link ρ(·). Theorem 1 applies to any ρ(·) obeying the following
properties:
1. ρ is non-negative, has up to three derivatives, and obeys ρ(t) ≥ t.
2. ρ′ may be unbounded but it should grow sufficiently slowly; in particular, we assume |ρ′(t)| = O(|t|) and ρ′(prox_{cρ}(Z)) is a sub-Gaussian random variable for any constant c > 0 and any Z ∼ N(0, σ²) for some finite σ > 0.
3. ρ″(t) > 0 for any t, which implies that ρ is convex, and sup_t ρ″(t) < ∞.
4. sup_t |ρ‴(t)| < ∞.
5. Given any τ > 0, the equation (24) has a unique solution in b.
6. The map V(τ²) as defined in (57) has a fixed point.
3 Note that the p-values obtained at each trial are not exactly independent. However, they are exchangeable, and weakly
dependent (see the proof of Corollary 1 for a formal justification of this fact). Therefore, we expect the goodness of fit test to
be an approximately valid procedure in this setting.
It can be checked that the effective links for both the logistic and the probit models (5) obey all of the above.
The last two conditions are assumed to ensure existence of a unique solution to the system of equations (23)
and (24) as will be seen in Section 6; we shall justify these two conditions for the logistic and the probit
models in Section 6.1.
2.4
Reduction to independent covariates
In order to derive the asymptotic distribution of the LLR statistics, it in fact suffices to consider the special
case Σ = Ip .
Lemma 1. Let Λj(X) be the LLR statistic based on the design matrix X, whose rows are i.i.d. N(0, Σ), and let Λj(Z) be the corresponding statistic when the rows are i.i.d. N(0, Ip). Then
Λj(X) =d Λj(Z).
Proof: Recall from (4) that the LLR statistic for testing the jth coefficient can be expressed as
Λj(X) = min_β Σ_{i=1}^{n} ρ(−ỹi ei>Xβ) − min_{β: βj=0} Σ_{i=1}^{n} ρ(−ỹi ei>Xβ).
Write Z′ = XΣ^{−1/2} so that the rows of Z′ are i.i.d. N(0, Ip) and set θ′ = Σ^{1/2}β. With this reparameterization, we observe that the constraint βj = 0 is equivalent to aj>θ′ = 0 for some non-zero vector aj ∈ Rp. This gives
Λj(X) = min_{θ′} Σ_{i=1}^{n} ρ(−ỹi ei>Z′θ′) − min_{θ′: aj>θ′=0} Σ_{i=1}^{n} ρ(−ỹi ei>Z′θ′).
Now let Q be an orthogonal matrix mapping aj ∈ Rp into the vector ‖aj‖ej ∈ Rp, i.e. Qaj = ‖aj‖ej. Additionally, set Z = Z′Q (the rows of Z are still i.i.d. N(0, Ip)) and θ = Qθ′. Since aj>θ′ = 0 occurs if and only if θj = 0, we obtain
Λj(X) = min_θ Σ_{i=1}^{n} ρ(−ỹi ei>Zθ) − min_{θ: θj=0} Σ_{i=1}^{n} ρ(−ỹi ei>Zθ) = Λj(Z),
which proves the lemma.
In the remainder of the paper we, therefore, assume Σ = Ip .
2.5
Proof architecture
This section presents the main steps for proving Theorem 1. We will only prove the theorem for {Λj }, the
LLR statistic obtained by dropping a single variable. The analysis for the LLR statistic obtained on dropping
k variables (for some fixed k) follows very similar steps and is hence omitted for the sake of conciseness. As
discussed before, we are free to work with any configuration of the yi ’s. For the two steps below, we will
adopt two different configurations for convenience of presentation.
2.5.1
Step 1: characterizing the asymptotic distributions of β̂j
Without loss of generality, we assume here that yi = 1 (and hence ỹi = 1) for all 1 ≤ i ≤ n and, therefore,
the MLE problem reduces to
minimize_{β∈Rp} Σ_{i=1}^{n} ρ(−Xi>β).
We would first like to characterize the marginal distribution of β̂, which is crucial in understanding the LLR
statistic. To this end, our analysis follows by a reduction to the setup of [18–21], with certain modifications
that are called for due to the specific choices of ρ(·) we deal with here. Specifically, consider the linear model
y = Xβ + w,
(26)
and prior work [18–21] investigating the associated M-estimator
minimize_{β∈Rp} Σ_{i=1}^{n} ρ(yi − Xi>β).    (27)
Our problem reduces to (27) on setting y = w = 0 in (27). When ρ(·) satisfies certain assumptions
(e.g. strong convexity), the asymptotic distribution of kβ̂k has been studied in a series of works [19–21] using
a leave-one-out analysis and independently in [18] using approximate message passing (AMP) machinery.
An outline of their main results is described in Section 2.7. However, the function ρ(·) in our cases has
vanishing curvature and, therefore, lacks the essential strong convexity assumption that was utilized in both
the aforementioned lines of work. To circumvent this issue, we propose to invoke the AMP machinery as
in [18], in conjunction with the following critical additional ingredients:
• (Norm Bound Condition) We utilize results from the conic geometry literature (e.g. [3]) to establish
that
‖β̂‖ = O(1)
with high probability as long as κ < 1/2. This will be elaborated in Theorem 4.
• (Likelihood Curvature Condition) We establish some regularity conditions on the Hessian of the loglikelihood function, generalizing the strong convexity condition, which will be detailed in Lemma 4.
• (Uniqueness of the Solution to (23) and (24)) We establish that for both the logistic and the probit
case, the system of equations (23) and (24) admits a unique solution.
We emphasize that these elements are not straightforward, require significant effort and a number of novel
ideas, which form our primary technical contributions for this step.
These ingredients enable the use of the AMP machinery even in the absence of strong convexity on ρ(·),
finally leading to the following theorem:
Theorem 2. Under the conditions of Theorem 1,
lim_{n→∞} ‖β̂‖² =a.s. τ∗².    (28)
This theorem immediately implies that the marginal distribution of β̂j is normal.
Corollary 1. Under the conditions of Theorem 1, for every 1 ≤ j ≤ p, it holds that
√p β̂j →d N(0, τ∗²),   as n → ∞.    (29)
Proof: From the rotational invariance of our i.i.d. Gaussian design, it can be easily verified that β̂/‖β̂‖ is uniformly distributed on the unit sphere S^{p−1} and is independent of ‖β̂‖. Therefore, β̂j has the same distribution as ‖β̂‖Zj/‖Z‖, where Z = (Z1, . . . , Zp) ∼ N(0, Ip) is independent of ‖β̂‖. Since √p‖β̂‖/‖Z‖ converges in probability to τ∗, we have, by Slutsky's theorem, that √p β̂j converges to N(0, τ∗²) in distribution.
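Theorem 2 and Corollary 1 can be checked empirically on a single draw. In the R sketch below (ours), with n = 4000, p = 1200 and Σ = Ip as in the earlier experiment, the squared norm of the null MLE should be close to τ∗² obtained from the system (23)–(24); the fit is slow at this size and smaller dimensions can be substituted.

set.seed(2)
n <- 4000; p <- 1200
X <- matrix(rnorm(n * p), n, p)                       # Sigma = I_p
y <- rbinom(n, 1, 0.5)
beta_hat <- coef(glm(y ~ X + 0, family = binomial()))
sum(beta_hat^2)                                       # compare with tau_*^2 from (23)-(24)
# sqrt(p) * beta_hat[1] is approximately N(0, tau_*^2) across repetitions (Corollary 1)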
2.5.2
Step 2: connecting Λj with β̂j
Now that we have derived the asymptotic distribution of β̂j , the next step involves a reduction of the LLR
statistic to a function of the relevant coordinate of the MLE. Before continuing, we note that the distribution
of Λj is the same for all 1 ≤ j ≤ p due to exchangeability. As a result, going forward we will only analyze
Λ1 without loss of generality. In addition, we introduce the following convenient notations and assumptions:
• the design matrix on dropping the first column is written as X̃ and the MLE in the corresponding
reduced model as β̃;
• write X = [X1 , · · · , Xn ]> ∈ Rn×p and X̃ = [X̃1 , · · · , X̃n ]> ∈ Rn×(p−1) ;
• without loss of generality, assume that ỹi = −1 for all i in this subsection, and hence the MLEs under
the full and the reduced models reduce to
Xn
β̂ = arg minβ∈Rp
`(β) :=
ρ(Xi> β),
(30)
i=1
Xn
˜
β̃ = arg minβ∈Rp−1 `(β)
:=
ρ(X̃i> β).
(31)
i=1
With the above notations in place, the LLR statistic for testing β1 = 0 vs. β1 6= 0 can be expressed as
n n
o
X
˜ β̃) − `(β̂) =
Λ1 := `(
ρ(X̃i> β̃) − ρ(Xi> β̂) .
(32)
i=1
To analyze Λ1 , we invoke Taylor expansion to reach
Λ1 = Qlin + (1/2) Σ_{i=1}^{n} ρ″(Xi>β̂)(X̃i>β̃ − Xi>β̂)² + (1/6) Σ_{i=1}^{n} ρ‴(γi)(X̃i>β̃ − Xi>β̂)³,    (33)
with Qlin := Σ_{i=1}^{n} ρ′(Xi>β̂)(X̃i>β̃ − Xi>β̂),
where γi lies between X̃i> β̃ and Xi> β̂. A key observation is that the linear term Qlin in the above equation
vanishes. To see this, note that the first-order optimality condition for the MLE β̂ is given by
Σ_{i=1}^{n} ρ′(Xi>β̂) Xi = 0.    (34)
Replacing X̃i>β̃ with Xi>β̃′ in Qlin (where β̃′ ∈ Rp denotes β̃ with a zero filled in for the first coordinate, so that X̃i>β̃ = Xi>β̃′) and using the optimality condition, we obtain
Qlin = Σ_{i=1}^{n} ρ′(Xi>β̂) Xi>(β̃′ − β̂) = 0.
Consequently, Λ1 simplifies to the following form
Λ1 = (1/2) Σ_{i=1}^{n} ρ″(Xi>β̂)(X̃i>β̃ − Xi>β̂)² + (1/6) Σ_{i=1}^{n} ρ‴(γi)(X̃i>β̃ − Xi>β̂)³.    (35)
Thus, computing the asymptotic distribution of Λ1 boils down to analyzing Xi> β̂ − X̃i> β̃. Our argument is
inspired by the leave-one-predictor-out approach developed in [19, 20].
We re-emphasize that our setting is not covered by that of [19,20], due to the violation of strong convexity
and some other technical assumptions. We sidestep this issue by utilizing the Norm Bound Condition and
the Likelihood Curvature Condition. In the end, our analysis establishes the equivalence of Λ1 and β̂1 up to
some explicit multiplicative factors modulo negligible error terms. This is summarized as follows.
Theorem 3. Under the assumptions of Theorem 1,
2Λ1 − (p/b∗) β̂1² →P 0,   as n → ∞.    (36)
Theorem 3 reveals a simple yet surprising connection between the LLR statistic Λ1 and the MLE β̂. As
we shall see in the proof of the theorem, the quadratic term in (35) is (1/2)(p/b∗)β̂1² + o(1), while the remaining
third-order term of (35) is vanishingly small. Finally, putting Corollary 1 and Theorem 3 together directly
establishes Theorem 1.
2.6
Comparisons with the classical regime
We pause to shed some light on the interpretation of the correction factor τ∗2 /b∗ in Theorem 1 and understand
the differences from classical results. Classical theory (e.g. [30,31]) asserts that when p is fixed and n diverges,
the MLE for a fixed design X is asymptotically normal, namely,
√n(β̂ − β) →d N(0, Iβ⁻¹),    (37)
where
Iβ = (1/n) X>Dβ X   with   Dβ := diag( ρ″(X1>β), . . . , ρ″(Xn>β) )    (38)
is the normalized Fisher information at the true value β. In particular, under the global null and i.i.d. Gaussian design, this converges to
EX[Iβ] = (1/4) I for the logistic model, and EX[Iβ] = (2/π) I for the probit model,
as n tends to infinity [50, Example 5.40].
The behavior in high dimensions is different. In particular, Corollary 1 states that under the global null,
we have
    √p (β̂j − βj) →d N(0, τ∗²).                                                 (39)

Comparing the variances in the logistic model, we have that

    lim_{n→∞} Var(√p β̂j) = 4κ   in classical large-sample theory;
    lim_{n→∞} Var(√p β̂j) = τ∗²  in high dimensions.
Fig. 3 illustrates the behavior of the ratio τ∗2 /κ as a function of κ. Two observations are immediate:
• First, in Fig. 3(a) we have τ∗2 ≥ 4κ for all κ ≥ 0. This indicates an inflation in variance or an “extra
Gaussian noise” component that appears in high dimensions, as discussed in [18]. The variance of the
“extra Gaussian noise” component increases as κ grows.
• Second, as κ → 0, we have τ∗2 /4κ → 1 in the logistic model, which indicates that classical theory
becomes accurate in this case. In other words, our theory recovers the classical prediction in the
regime where p = o(n).
Further, for the testing problem considered here, the LLR statistic in the classical setup can be expressed, through Taylor expansion, as

    2Λ1 = n(β̂ − β̃)⊤ Iβ (β̂ − β̃) + oP(1),                                        (40)

where β̃ is defined in (31). In the high-dimensional setting, we will also establish a quadratic approximation of the form

    2Λ1 = n(β̂ − β̃)⊤ G (β̂ − β̃) + oP(1),    G = (1/n) X⊤ Dβ̂ X.

In Theorem 7, we shall see that b∗ is the limit of (1/n) Tr(G⁻¹), the Stieltjes transform of the empirical spectral distribution of G evaluated at 0. Thus, this quantity in some sense captures the spread in the eigenvalues of G one would expect in high dimensions.
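To build intuition for b∗, one can estimate (1/n) Tr(G⁻¹) directly from a fitted model under the global null. The sketch below (ours) simulates a Gaussian design, fits an unpenalized logistic regression with scikit-learn (penalty=None requires a recent version; older releases use penalty='none'), and evaluates the trace; it is illustrative only.

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 4000, 1200                        # kappa = p/n = 0.3
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, size=n)           # global null: y independent of X

fit = LogisticRegression(penalty=None, fit_intercept=False,
                         solver="lbfgs", max_iter=2000).fit(X, y)
beta_hat = fit.coef_.ravel()             # (approximate) MLE

mu = expit(X @ beta_hat)
d = mu * (1.0 - mu)                      # rho''(x_i' beta_hat) for the logistic link
G = (X.T * d) @ X / n                    # G = (1/n) X' D X
print(np.trace(np.linalg.inv(G)) / n)    # empirical proxy for b_*
```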
Figure 3: Ratio of asymptotic variance and dimensionality factor κ as a function of κ: (a) logistic regression; (b) probit regression. [Scatter plots omitted; vertical axis shows τ∗²/κ.]
2.7
Prior art
Wilks’ type of phenomenon in the presence of a diverging dimension p has received much attention in the past.
For instance, Portnoy [43] investigated simple hypotheses in regular exponential families, and established
the asymptotic chi-square approximation for the LLR test statistic as long as p3/2 /n → 0. This phenomenon
was later extended in [46] to accommodate the MLE with a quadratic penalization, and in [53] to account
for parametric models underlying several random graph models. Going beyond parametric inference, Fan
et al. [22, 24] explored extensions to infinite-dimensional non-parametric inference problems, for which the
MLE might not even exist or might be difficult to derive. While the classical Wilks’ phenomenon fails to
hold in such settings, Fan et al. [22, 24] proposed a generalization of the likelihood ratio statistics based
on suitable non-parametric estimators and characterized the asymptotic distributions. Such results have
further motivated Boucheron and Massart [9] to investigate the non-asymptotic Wilks’ phenomenon or,
more precisely, the concentration behavior of the difference between the excess empirical risk and the true
risk, from a statistical learning theory perspective. The Wilks’ phenomenon for penalized empirical likelihood
has also been established [48]. However, the precise asymptotic behavior of the LLR statistic in the regime
that permits p to grow proportional to n is still beyond reach.
On the other hand, as demonstrated in Section 2.5.1, the MLE here under the global null can be viewed as
an M-estimator for a linear regression problem. Questions regarding the behavior of robust linear regression
estimators in high dimensions—where p is allowed to grow with n—were raised in Huber [30], and have
been extensively studied in subsequent works, e.g. [36, 40–42]. When it comes to logistic regression, the
behavior of the MLE was studied for a diverging number of parameters by [28], which characterized the
squared estimation error of the MLE if (p log p)/n → 0. In addition, the asymptotic normality properties of
the MLE and the penalized MLE for logistic regression have been established by [35] and [23], respectively.
A very recent paper by Fan et al. [25] studied the logistic model under the global null β = 0, and investigated
the classical asymptotic normality as given in (37). It was discovered in [25] that the convergence property
(37) breaks down even in terms of the marginal distribution, namely,

    √n β̂i [Iβ⁻¹]_{i,i}^{−1/2}  does not converge in distribution to  N(0, 1),    Iβ = (1/(4n)) X⊤X,
Figure 4: Histogram of p-values for logistic regression under i.i.d. Bernoulli design, when β = 0, n = 4000, p = 1200, and κ = 0.3: (a) classically computed p-values; (b) Bartlett corrected p-values; (c) adjusted p-values. [Histograms omitted; axes show p-values vs. counts.]
as soon as p grows at a rate exceeding n^{2/3}. This result, however, does not imply the asymptotic distribution of the likelihood-ratio statistic in this regime. In fact, our theorem implies that the LLR statistic 2Λj goes to χ²₁ (and hence the Wilks phenomenon remains valid) when κ = p/n → 0.
The line of work that is most relevant to the present paper was initially started by El Karoui et al. [21].
Focusing on the regime where p is comparable to n, the authors uncovered, via a non-rigorous argument, that the asymptotic ℓ2 error of the MLE could be characterized by a system of nonlinear equations. This seminal result was later made rigorous independently by Donoho et al. [18] under i.i.d. Gaussian design and by El Karoui [19, 20] under more general i.i.d. random design as well as certain assumptions on the error distribution. Both approaches rely on strong convexity of the function ρ(·) that defines the M-estimator, which does not hold in the models considered herein.
2.8
Notations
We adopt the standard notation f(n) = O(g(n)) or f(n) ≲ g(n) to mean that there exists a constant c > 0 such that |f(n)| ≤ c|g(n)|. Likewise, f(n) = Ω(g(n)) or f(n) ≳ g(n) means that there exists a constant c > 0 such that |f(n)| ≥ c|g(n)|; f(n) ≍ g(n) means that there exist constants c1, c2 > 0 such that c1|g(n)| ≤ |f(n)| ≤ c2|g(n)|; and f(n) = o(g(n)) means that lim_{n→∞} f(n)/g(n) = 0. Any mention of C, Ci, c, ci for i ∈ N refers to some positive universal constants whose value may change from line to line. For a square symmetric matrix M, the minimum eigenvalue is denoted by λmin(M). Logarithms are base e.
3  Numerics

3.1  Non-Gaussian covariates
In this section we first study the sensitivity of our result to the Gaussianity assumption on the design matrix. To this end, we consider a high-dimensional binary regression setup with a Bernoulli design matrix. We simulate n = 4000 i.i.d. observations (yi, Xi) with yi ∼ Bernoulli(1/2), and Xi generated independently of yi, such that each entry takes on values in {1, −1} with probability 1/2 each. At each trial, we fit a logistic regression model to the data and obtain the classical, Bartlett-corrected, and adjusted p-values (using the rescaling factor τ∗²/b∗). Figure 4 plots the histograms for the pooled p-values obtained across 400 trials.
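A single trial of this experiment can be sketched as follows (ours; the statsmodels-based fit is a convenience choice, and the τ∗² and b∗ passed to the adjusted p-value must come from solving the system (23)–(24) at κ = 0.3 rather than the placeholders shown).

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def one_trial(n=4000, p=1200, j=0, seed=1):
    """One Bernoulli-design trial; returns the LLR statistic Lambda_j for coordinate j."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)                  # y_i ~ Bernoulli(1/2), independent of X
    X = rng.choice([-1.0, 1.0], size=(n, p))        # entries are +/-1 with probability 1/2
    ll_full = sm.Logit(y, X).fit(disp=0, method="lbfgs", maxiter=1000).llf
    ll_red = sm.Logit(y, np.delete(X, j, axis=1)).fit(disp=0, method="lbfgs", maxiter=1000).llf
    return ll_full - ll_red

def pvalues(llr, tau_star_sq, b_star):
    """Classical (Wilks) and adjusted (Theorem 1) p-values for 2*Lambda_j."""
    return chi2.sf(2 * llr, df=1), chi2.sf(2 * llr * b_star / tau_star_sq, df=1)

llr = one_trial()
print(pvalues(llr, tau_star_sq=3.0, b_star=2.0))    # placeholder rescaling factors
```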
It is instructive to compare these histograms to those obtained in the Gaussian case (Figure 1). The classical
and Bartlett corrected p-values exhibit similar deviations from uniformity as in the Gaussian design case,
whereas our adjusted p-values continue to have an approximate uniform distribution. We test for deviations
from uniformity using a formal chi-squared goodness of fit test as in Section 2.3.2. For the Bartlett corrected
p-values, the chi-squared statistic turns out to be 5885, with a p-value of 0. For the adjusted p-values, the chi-squared statistic is 24.1024, with a p-value of 0.1922.⁴
Once again, the Bartlett correction fails to provide valid p-values whereas the adjusted p-values are
consistent with a uniform distribution. These findings indicate that the distribution of the LLR statistic
under the i.i.d. Bernoulli design is in agreement with the rescaled χ²₁ derived under the Gaussian design in
Theorem 1, suggesting that the distribution is not too sensitive to the Gaussianity assumption. Estimates
of p-value probabilities for our method are provided in Table 2.
                              Adjusted
    P{p-value ≤ 5%}           5.0222%  (0.0412%)
    P{p-value ≤ 1%}           1.0048%  (0.0174%)
    P{p-value ≤ 0.5%}         0.5123%  (0.0119%)
    P{p-value ≤ 0.1%}         0.1108%  (0.005%)
    P{p-value ≤ 0.05%}        0.0521%  (0.0033%)
    P{p-value ≤ 0.01%}        0.0102%  (0.0015%)

Table 2: Estimates of p-value probabilities with estimated Monte Carlo standard errors in parentheses under i.i.d. Bernoulli design.
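The entries of Table 2 are empirical proportions of pooled p-values falling below each threshold, with binomial (Monte Carlo) standard errors; a sketch of that computation (ours, ignoring any within-trial dependence) is given below.

```python
import numpy as np

def tail_estimates(pvals, thresholds=(0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001)):
    """Estimate P{p-value <= alpha} with binomial Monte Carlo standard errors."""
    pvals = np.asarray(pvals)
    m = pvals.size
    for alpha in thresholds:
        prop = np.mean(pvals <= alpha)
        se = np.sqrt(prop * (1 - prop) / m)
        print(f"P{{p-value <= {alpha:g}}} = {prop:.4%} ({se:.4%})")

# Demo with uniform p-values, i.e. what a well-calibrated test should produce:
tail_estimates(np.random.default_rng(2).uniform(size=480_000))
```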
3.2
Quality of approximations for finite sample sizes
In the rest of this section, we report some numerical experiments which study the applicability of our theory
in finite sample setups.
Validity of tail approximation. The first experiment explores the efficacy of our correction for extremely small p-values. This is particularly important in the context of multiple comparisons, where practitioners care about the validity of exceedingly small p-values. To this end, the empirical cumulative distribution of the adjusted p-values is estimated under a standard Gaussian design with n = 4000, p = 1200, and 4.8 × 10⁵ p-values. The range [0.1/p, 12/p] is divided into points which are equispaced with a distance of 1/p between any two consecutive points. The estimated empirical CDF at each of these points is shown in blue in Figure 5. The estimated CDF is in near-perfect agreement with the diagonal, suggesting that the adjusted p-values computed using the rescaled chi-square distribution are remarkably close to uniform, even when we zoom in at very small resolutions, as would be the case when applying Bonferroni-style corrections.
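Estimating this zoomed-in empirical CDF requires nothing more than sorting the pooled p-values and counting; a minimal sketch (ours) is below.

```python
import numpy as np

def zoomed_cdf(pvals, p=1200):
    """Empirical CDF of pooled p-values on the grid [0.1/p, 12/p] with spacing 1/p."""
    pvals = np.sort(np.asarray(pvals))
    grid = np.arange(0.1 / p, 12.0 / p, 1.0 / p)
    cdf = np.searchsorted(pvals, grid, side="right") / pvals.size
    return grid, cdf   # plot cdf against grid; the diagonal is the uniform reference

grid, cdf = zoomed_cdf(np.random.default_rng(3).uniform(size=480_000))
```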
Moderate sample sizes. The final experiment studies the accuracy of our asymptotic result for moderately large samples. This is especially relevant for applications where the sample sizes are not too large. We repeat our numerical experiments with n = 200, p = 60 for an i.i.d. Gaussian design and 4.8 × 10⁵ p-values. The empirical CDF for these p-values is estimated, and Figure 6 shows that the adjusted p-values are nearly uniformly distributed even for moderate sample sizes such as n = 200.
4
Preliminaries
This section gathers a few preliminary results that will be useful throughout the paper. We start by collecting
some facts regarding i.i.d. Gaussian random matrices.
⁴ Recall our earlier footnote about the use of a χ² test.
Figure 5: Empirical CDF of adjusted p-values for logistic regression when β = 0, n = 4000, p = 1200. The blue points represent the empirical CDF (t vs. the fraction of p-values below t), and the red line is the diagonal. [Plot omitted.]
Figure 6: Empirical CDF of adjusted p-values for logistic regression when β = 0, n = 200, p = 60. The blue points represent the empirical CDF (t vs. the fraction of p-values below t), and the red line is the diagonal. [Plot omitted.]
Lemma 2. Let X = [X1, X2, . . . , Xn]⊤ be an n × p matrix with i.i.d. standard Gaussian entries. Then

    P( ‖X⊤X‖ ≤ 9n ) ≥ 1 − 2 exp(−n/2);                                         (41)
    P( sup_{1≤i≤n} ‖Xi‖ ≤ 2√p ) ≥ 1 − 2n exp( −(√p − 1)²/2 ).                   (42)

Proof: This is a straightforward application of [51, Corollary 5.35] and the union bound.
Lemma 3. Suppose X is an n × p matrix with i.i.d. N(0, 1) entries. Then there exists a constant ε0 such that whenever 0 ≤ ε ≤ ε0 and 0 ≤ t ≤ √(1 − ε) − √(p/n),

    λmin( (1/n) ∑_{i∈S} Xi Xi⊤ ) ≥ ( √(1 − ε) − √(p/n) − t )²   for all S ⊆ [n] with |S| = (1 − ε)n   (43)

with probability exceeding 1 − 2 exp( −( (1 − ε)t²/2 − H(ε) ) n ). Here, H(ε) = −ε log ε − (1 − ε) log(1 − ε).

Proof: See Appendix A.1.
The above facts are useful in establishing an eigenvalue lower bound on the Hessian of the log-likelihood function. Specifically, recall that

    ∇²ℓ(β) = ∑_{i=1}^n ρ″(Xi⊤β) Xi Xi⊤,                                         (44)

and the result is this:

Lemma 4 (Likelihood Curvature Condition). Suppose that p/n < 1 and that ρ″(·) ≥ 0. Then there exists a constant ε0 such that whenever 0 ≤ ε ≤ ε0, with probability at least 1 − 2 exp(−nH(ε)) − 2 exp(−n/2), the matrix inequality

    (1/n) ∇²ℓ(β) ⪰ ( inf_{z: |z| ≤ 3‖β‖/√ε} ρ″(z) ) ( √(1 − ε) − √(p/n) − 2√( H(ε)/(1 − ε) ) )² I   (45)

holds simultaneously for all β ∈ R^p.

Proof: See Appendix A.2.

The message of Lemma 4 is this: take ε > 0 to be a sufficiently small constant. Then

    (1/n) ∇²ℓ(β) ⪰ ω(‖β‖) I

for some non-increasing and positive function ω(·) independent of n. This is a generalization of the strong convexity condition.
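As a quick numerical sanity check of this curvature condition, one can evaluate the smallest eigenvalue of the normalized Hessian at a few random points β and verify that it stays bounded away from zero (and shrinks as ‖β‖ grows); the sketch below (ours, logistic link) does exactly that.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)
n, p = 2000, 400
X = rng.standard_normal((n, p))

def hessian_min_eig(beta):
    """Smallest eigenvalue of (1/n) * sum_i rho''(x_i' beta) x_i x_i' for the logistic link."""
    mu = expit(X @ beta)
    w = mu * (1.0 - mu)                  # rho''(x_i' beta)
    H = (X.T * w) @ X / n
    return np.linalg.eigvalsh(H)[0]

for norm in (0.5, 1.0, 2.0):
    beta = rng.standard_normal(p)
    beta *= norm / np.linalg.norm(beta)
    print(norm, hessian_min_eig(beta))   # stays positive; decreases as ||beta|| grows
```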
5  When is the MLE bounded?

5.1  Phase transition
In Section 2.2, we argued that the MLE is at infinity if we have fewer than two observations per dimension, or κ > 1/2. In fact, a stronger version of the phase transition phenomenon occurs, in the sense that ‖β̂‖ = O(1) as soon as κ < 1/2. This is formalized in the following theorem.
Theorem 4 (Norm Bound Condition). Fix any small constant ε > 0, and let β̂ be the MLE for a model with effective link satisfying the conditions from Section 2.3.3.

(i) If p/n ≥ 1/2 + ε, then ‖β̂‖ = ∞ with probability exceeding 1 − 4 exp(−ε²n/8).

(ii) There exist universal constants c1, c2, C2 > 0 such that if p/n < 1/2 − c1 ε^{3/4}, then⁵

    ‖β̂‖ < 4 log 2 / ε²

with probability at least 1 − C2 exp(−c2 ε² n).

These conclusions clearly continue to hold if β̂ is replaced by β̃ (the MLE under the restricted model obtained on dropping the first predictor).

⁵ When Xi ∼ N(0, Σ) for a general Σ ≻ 0, one has ‖Σ^{1/2}β̂‖ ≲ 1/ε² with high probability.
The rest of this section is devoted to proving this theorem. As we will see later, the fact that ‖β̂‖ = O(1) is crucial for utilizing the AMP machinery in the absence of strong convexity.
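The geometric condition behind part (i) — whether {Xβ : β ∈ R^p} meets the nonnegative orthant at a nonzero point, in which case the (reduced, ỹi ≡ 1) MLE is at infinity — can be checked with a small linear program. The sketch below (ours) uses scipy and makes the phase transition around p/n = 1/2 visible empirically.

```python
import numpy as np
from scipy.optimize import linprog

def mle_is_infinite(X, tol=1e-8):
    """Check whether {X beta} intersects the nonnegative orthant nontrivially.

    Solves  max 1' X beta  s.t.  X beta >= 0,  -1 <= beta <= 1.
    A strictly positive optimum means a direction of divergence exists,
    i.e. the MLE of the reduced problem is at infinity.
    """
    n, p = X.shape
    c = -(np.ones(n) @ X)                            # maximize 1' X beta
    res = linprog(c, A_ub=-X, b_ub=np.zeros(n),
                  bounds=[(-1, 1)] * p, method="highs")
    return res.status == 0 and -res.fun > tol

rng = np.random.default_rng(4)
n = 1000
for kappa in (0.3, 0.45, 0.55, 0.7):
    X = rng.standard_normal((n, int(kappa * n)))
    print(kappa, mle_is_infinite(X))                 # flips from False to True near 1/2
```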
5.2
Proof of Theorem 4
As in Section 2.5.1, we assume ỹi ≡ 1 throughout this section, and hence the MLE reduces to

    minimize_{β∈R^p}  ℓ0(β) := ∑_{i=1}^n ρ(−Xi⊤β).                              (46)

5.2.1  Proof of Part (i)
Invoking [3, Theorem I] yields that if

    δ( {Xβ | β ∈ R^p} ) + δ( R^n_+ ) ≥ (1 + ε) n,

or equivalently, if p/n ≥ 1/2 + ε, then

    P( {Xβ | β ∈ R^p} ∩ R^n_+ ≠ {0} ) ≥ 1 − 4 exp(−ε²n/8).

As is seen in Section 2.2, ‖β̂‖ = ∞ when {Xβ | β ∈ R^p} ∩ R^n_+ ≠ {0}, establishing Part (i) of Theorem 4.
5.2.2
Proof of Part (ii)
We now turn to the regime in which p/n ≤ 1/2 − O(ε^{3/4}), where 0 < ε < 1 is any fixed constant. Begin by observing that the least singular value of X obeys

    σmin(X) ≥ √n / 4                                                            (47)

with probability at least 1 − 2 exp( −(1/2)(3/4 − 1/√2)² n ) (this follows from Lemma 3 using ε = 0). Then for any β ∈ R^p obeying

    ℓ0(β) = ∑_{j=1}^n ρ(−Xj⊤β) ≤ n log 2 = ℓ0(0)                                 (48)

and

    ‖β‖ ≥ 4 log 2 / ε²,                                                          (49)
we must have

    ∑_{j=1}^n max{−Xj⊤β, 0} = ∑_{j: Xj⊤β<0} (−Xj⊤β)  ≤(a)  ∑_{j: Xj⊤β<0} ρ(−Xj⊤β)  ≤(b)  n log 2;

here (a) follows since t ≤ ρ(t) and (b) is a consequence of (48). Continuing, (47) and (49) give

    n log 2 ≤ (ε²/4) n ‖β‖ ≤ ε² √n ‖Xβ‖ ≤ 2ε √n ‖Xβ‖.
This implies the following proposition: if the solution β̂—which necessarily satisfies ℓ0(β̂) ≤ ℓ0(0)—has norm exceeding ‖β̂‖ ≥ 4 log 2/ε², then Xβ̂ must fall within the cone

    A := { u ∈ R^n : ∑_{j=1}^n max{−uj, 0} ≤ 2ε √n ‖u‖ }.                       (50)

Therefore, if one wishes to rule out the possibility of having ‖β̂‖ ≥ 4 log 2/ε², it suffices to show that with high probability,

    {Xβ | β ∈ R^p} ∩ A = {0}.                                                   (51)

This is the content of the remaining proof.
We would like to utilize tools from conic geometry [3] to analyze the probability of the event (51). Note,
however, that A is not convex, while the theory developed in [3] applies only to convex cones. To bypass the
non-convexity issue, we proceed in the following three steps:
1. Generate a set of N = exp(2ε²p) closed convex cones {Bi | 1 ≤ i ≤ N} such that it forms a cover of A with probability exceeding 1 − exp(−Ω(ε²p)).

2. Show that if p < ( 1/2 − 2√2 ε^{3/4} − √(2H(2ε)) ) n and if n is sufficiently large, then

    P( {Xβ | β ∈ R^p} ∩ Bi ≠ {0} ) ≤ 4 exp{ −(1/8)( 1/2 − 2√2 ε^{3/4} − √(10H(2ε)) − p/n )² n }

for each 1 ≤ i ≤ N.

3. Invoke the union bound to reach

    P( {Xβ | β ∈ R^p} ∩ A ≠ {0} )
        ≤ P( {Bi | 1 ≤ i ≤ N} does not form a cover of A ) + ∑_{i=1}^N P( {Xβ | β ∈ R^p} ∩ Bi ≠ {0} )
        ≤ exp(−Ω(ε²p)),

where we have used the fact that

    ∑_{i=1}^N P( {Xβ | β ∈ R^p} ∩ Bi ≠ {0} )
        ≤ 4N exp{ −(1/8)( 1/2 − 2√2 ε^{3/4} − √(10H(2ε)) − p/n )² n }
        < 4 exp{ −[ (1/8)( 1/2 − 2√2 ε^{3/4} − √(10H(2ε)) − p/n )² − 2ε² ] n }
        < 4 exp(−ε² n).

Here, the last inequality holds if ( 1/2 − 2√2 ε^{3/4} − √(10H(2ε)) − p/n )² > 24ε², or equivalently, p/n < 1/2 − 2√2 ε^{3/4} − √(10H(2ε)) − √24 ε.
Taken collectively, these steps establish the following claim: if p/n < 1/2 − 2√2 ε^{3/4} − √(10H(2ε)) − √24 ε, then

    P( ‖β̂‖ > 4 log 2/ε² ) < exp(−Ω(ε² n)),

thus establishing Part (ii) of Theorem 4. We defer the complete details of the preceding steps to Appendix D.
6  Asymptotic ℓ2 error of the MLE
This section aims to establish Theorem 2, which characterizes precisely the asymptotic squared error of the
MLE β̂ under the global null β = 0. As described in Section 2.5.1, it suffices to assume that β̂ is the solution
to the following problem

    minimize_{β∈R^p}  ∑_{i=1}^n ρ(−Xi⊤β).                                       (52)
In what follows, we derive the asymptotic convergence of ‖β̂‖ under the assumptions from our main theorem.

Theorem 5. Under the assumptions of Theorem 1, the solution β̂ to (52) obeys

    lim_{n→∞} ‖β̂‖² =a.s. τ∗².                                                   (53)
Theorem 5 is derived by invoking the AMP machinery [6, 7, 32]. The high-level idea is the following: in order to study β̂, one introduces an iterative algorithm (called AMP) in which a sequence of iterates β̂t is formed at each time t. The algorithm is constructed so that the iterates asymptotically converge to the MLE in the sense that

    lim_{t→∞} lim_{n→∞} ‖β̂t − β̂‖² =a.s. 0.                                      (54)

On the other hand, the asymptotic behavior (asymptotic in n) of β̂t for each t can be described accurately by a scalar sequence {τt}—called the state evolution (SE) sequence—following certain update equations [6]. This, in turn, provides a characterization of the ℓ2 loss of β̂.

Further, in order to prove Theorem 2, one still needs to justify

(a) the existence of a solution to the system of equations (23) and (24), and

(b) the existence of a fixed point for the iterative map governing the SE sequence updates.

We will elaborate on these steps in the rest of this section.
6.1  State evolution

We begin with the SE sequence {τt} introduced in [18]. Starting from some initial point τ0, we produce two sequences {bt} and {τt} following a two-step procedure.

• For t = 0, 1, . . .:

    – Set bt to be the solution in b to
          κ = E[ Ψ′(τt Z; b) ];                                                  (55)

    – Set τt+1 via
          τt+1² = (1/κ) E[ ( Ψ(τt Z; bt) )² ].                                   (56)
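A small numerical sketch of this recursion (ours, for the logistic link) is given below: the proximal operator prox_{bρ} is computed by root finding, the expectations in (55)–(56) by Gauss–Hermite quadrature, and the map is iterated until it settles at a fixed point (τ∗², b∗). It is illustrative only, and the fixed-point iteration is assumed (and empirically observed) to converge for these values of κ.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

rho1 = expit                                          # rho'(t) for rho(t) = log(1 + e^t)
rho2 = lambda t: expit(t) * (1.0 - expit(t))          # rho''(t)

# Gauss-Hermite nodes/weights for E[f(Z)], Z ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / np.sqrt(2.0 * np.pi)

def prox(z, b):
    """prox_{b rho}(z): root of x + b*rho'(x) = z (rho' lies in (0,1), so the bracket works)."""
    return brentq(lambda x: x + b * rho1(x) - z, z - b, z)

def Psi(z, b):                                        # Psi(z; b) = z - prox_{b rho}(z)
    return z - prox(z, b)

def G(b, tau):                                        # E[Psi'(tau Z; b)], cf. (55)
    vals = [1.0 - 1.0 / (1.0 + b * rho2(prox(tau * x, b))) for x in nodes]
    return np.dot(weights, vals)

def variance_map(tau_sq, kappa):                      # V(tau^2) and b(tau), cf. (56)-(57)
    tau = np.sqrt(tau_sq)
    b = brentq(lambda bb: G(bb, tau) - kappa, 1e-6, 1e6)
    second_moment = np.dot(weights, [Psi(tau * x, b) ** 2 for x in nodes])
    return second_moment / kappa, b

kappa, tau_sq = 0.3, 1.0
for _ in range(200):                                  # iterate the SE map to its fixed point
    tau_sq, b = variance_map(tau_sq, kappa)
print(tau_sq, b)                                      # numerical (tau_*^2, b_*) for kappa = 0.3
```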
Figure 7: The variance map for both the logistic and the probit models when κ = 0.3: (blue line) variance map V(τ²) as a function of τ²; (red line) diagonal. Panels: (a) logistic regression; (b) probit regression. [Plots omitted.]
Suppose that for any given τ > 0, the solution in b to (55) with τt = τ exists and is unique; then one can denote the solution as b(τ), which in turn allows one to write the sequence {τt} as

    τt+1² = V(τt²)

with the variance map

    V(τ²) = (1/κ) E[ Ψ²(τZ; b(τ)) ].                                            (57)

As a result, if there exists a fixed point τ∗ obeying V(τ∗²) = τ∗² and if we start with τ0 = τ∗, then by induction,

    τt ≡ τ∗   and   bt ≡ b∗ := b(τ∗),    t = 0, 1, . . . .

Notably, (τ∗, b∗) solves the system of equations (23) and (24). We shall work with this choice of initial condition throughout our proof.
The preceding arguments hold under two conditions: (i) the solution to (24) exists and is unique for any τt > 0; (ii) the variance map (57) admits a fixed point. To verify these two conditions, we make two observations.

• Condition (i) holds if one can show that the function

    G(b) := E[ Ψ′(τZ; b) ],    b > 0,                                           (58)

is strictly monotone for any given τ > 0, and that lim_{b→0} G(b) < κ < lim_{b→∞} G(b).

• Since V(·) is a continuous function, Condition (ii) becomes self-evident once we show that V(0) > 0 and that there exists τ > 0 obeying V(τ²) < τ². The behavior of the variance map is illustrated in Figure 7 for logistic and probit regression when κ = 0.3. One can in fact observe that the fixed point is unique. For other values of κ, the variance map shows the same behavior.
In fact, the aforementioned properties can be proved for a certain class of effective links, as summarized
in the following lemmas. In particular, they can be shown for the logistic and the probit models.
Lemma 5. Suppose the effective link ρ satisfies the following two properties:

(a) ρ′ is log-concave;

(b) for any fixed τ > 0 and any fixed z, bρ″(prox_{bρ}(τz)) → ∞ as b → ∞.

Then for any τ > 0, the function G(b) defined in (58) is increasing in b (b > 0), and the equation G(b) = κ has a unique positive solution.

Proof: See Appendix B.
Lemma 6. Suppose that 0 < κ < 1/2 and that ρ(t) = log(1 + e^t) or ρ(t) = −log Φ(−t). Then

(i) V(0) > 0;

(ii) V(τ²) < τ² for some sufficiently large τ².

Proof: See Appendix C and the supplemental material [47].
Remark 1. A byproduct of the proof is that the following relations hold for any constant 0 < κ < 1/2:

• In the logistic case,

    lim_{τ→∞} V(τ²)/τ² = [ x² P{Z > x} + E[Z² 1{0 < Z < x}] ] / P{0 < Z < x}  evaluated at x = Φ⁻¹(κ + 0.5);
    lim_{τ→∞} b(τ)/τ = Φ⁻¹(κ + 0.5).

• In the probit case,

    lim_{τ→∞} b(τ) = 2κ/(1 − 2κ)   and   lim_{τ→∞} V(τ²)/τ² = 2κ.               (59)
Remark 2. Lemma 6 is proved for the two special effective link functions, the logistic and the probit cases.
However, the proof sheds light on general conditions on the effective link that suffice for the lemma to hold.
Such general sufficient conditions are also discussed in the supplemental material [47].
6.2  AMP recursion

In this section, we construct the AMP trajectory, tracked by two sequences {β̂t(n) ∈ R^p} and {ηt(n) ∈ R^n} for t ≥ 0. Going forward we suppress the dependence on n to simplify the presentation. Picking β̂0 such that

    lim_{n→∞} ‖β̂0‖² = τ0² = τ∗²

and taking η⁻¹ = 0 and b₋₁ = 0, the AMP path is obtained via Algorithm 1, which is adapted from the algorithm in [18, Section 2.2].
Algorithm 1  Approximate message passing.
For t = 0, 1, · · · :

  1. Set
         ηt = X β̂t + Ψ(ηt−1; bt−1);                                             (60)

  2. Let bt be the solution to
         κ = E[ Ψ′(τt Z; b) ],                                                   (61)
     where τt is the SE sequence value at that time;

  3. Set
         β̂t+1 = β̂t − (1/p) X⊤ Ψ(ηt; bt).                                        (62)

Here, Ψ(·) is applied entrywise, and Ψ′(·, ·) denotes the derivative with respect to the first argument.
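A direct transcription of Algorithm 1 might look as follows. This is a sketch under the same logistic-link assumptions as the state-evolution snippet above, and it reuses that snippet's prox/Psi/G helpers (they must be in scope); the initialization enforces ‖β̂⁰‖² = τ∗² as required.

```python
import numpy as np
from scipy.optimize import brentq

def amp(X, tau_star_sq, kappa, n_iter=50):
    """Sketch of Algorithm 1 (logistic link); Psi and G are the helpers defined above."""
    n, p = X.shape
    rng = np.random.default_rng(0)

    beta = rng.standard_normal(p)
    beta *= np.sqrt(tau_star_sq) / np.linalg.norm(beta)    # ||beta^0||^2 = tau_*^2
    eta_prev, b_prev = np.zeros(n), 0.0
    tau = np.sqrt(tau_star_sq)                             # SE stays at tau_* by construction

    for t in range(n_iter):
        # step 1, eq. (60); at t = 0 we have eta^{-1} = 0 and b_{-1} = 0, so Psi(...) = 0
        psi_prev = np.array([Psi(e, b_prev) for e in eta_prev]) if b_prev > 0 else np.zeros(n)
        eta = X @ beta + psi_prev
        # step 2, eq. (61)
        b = brentq(lambda bb: G(bb, tau) - kappa, 1e-6, 1e6)
        # step 3, eq. (62)
        beta = beta - (X.T @ np.array([Psi(e, b) for e in eta])) / p
        eta_prev, b_prev = eta, b
    return beta
```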
As asserted by [18], the SE sequence {τt} introduced in Section 6.1 proves useful as it offers a formal procedure for predicting operating characteristics of the AMP iterates at any fixed iteration. In particular, it assigns predictions to two types of observables: observables which are functions of the β̂t sequence and those which are functions of ηt. Repeating an identical argument as in [18, Theorem 3.4], we obtain

    lim_{n→∞} ‖β̂t‖² =a.s. τt² ≡ τ∗²,    t = 0, 1, . . . .                        (63)

6.3  AMP converges to the MLE
We are now in a position to show that the AMP iterates {β̂t} converge to the MLE in the large-n and large-t limit. Before continuing, we state below two properties that are satisfied under our assumptions.

• The MLE β̂ obeys

    lim_{n→∞} ‖β̂‖ < ∞                                                           (64)

almost surely.

• There exists some non-increasing continuous function 0 < ω(·) < 1 independent of n such that

    P( (1/n) ∇²ℓ(β) ⪰ ω(‖β‖)·I,  ∀β ) ≥ 1 − c1 e^{−c2 n}.                        (65)

In fact, the norm bound (64) follows from Theorem 4 together with Borel–Cantelli, while the likelihood curvature condition (65) is an immediate consequence of Lemma 4. With this in place, we have:
Theorem 6. Suppose (64) and (65) hold. Let (τ∗, b∗) be a solution to the system (23) and (24), and assume that lim_{n→∞} ‖β̂0‖² = τ∗². Then the AMP trajectory as defined in Algorithm 1 obeys

    lim_{t→∞} lim_{n→∞} ‖β̂t − β̂‖ =a.s. 0.                                       (66)

Taken collectively, Theorem 6 and Eqn. (63) imply that

    lim_{n→∞} ‖β̂‖ =a.s. lim_{t→∞} lim_{n→∞} ‖β̂t‖ =a.s. τ∗,

thus establishing Theorem 5. In addition, an upshot of these theorems is a uniqueness result:

Corollary 2. The solution to the system of equations (23) and (24) is unique.
Proof: When the AMP trajectory β̂ t is started with the initial condition from Theorem 6, limn→∞ kβ̂k2 =a.s.
τ∗2 . This holds for any τ∗ such that (τ∗ , b∗ ) is a solution to (23) and (24). However, since the MLE problem
is strongly convex and hence admits a unique solution β̂, this implies that τ∗ must be unique, which together
with the monotonicity of G(·) (cf. (58)) implies that b∗ is unique as well.
Proof of Theorem 6: To begin with, repeating the arguments in [18, Lemma 6.9] we reach

    lim_{t→∞} lim_{n→∞} ‖β̂t+1 − β̂t‖² =a.s. 0;                                   (67)
    lim_{t→∞} lim_{n→∞} (1/n) ‖ηt+1 − ηt‖² =a.s. 0.                              (68)

To show that the AMP iterates converge to the MLE, we shall analyze the log-likelihood function. Recall from Taylor's theorem that

    ℓ(β̂) = ℓ(β̂t) + ⟨∇ℓ(β̂t), β̂ − β̂t⟩ + (1/2)(β̂ − β̂t)⊤ ∇²ℓ( β̂t + λ(β̂ − β̂t) )(β̂ − β̂t)

holds for some 0 < λ < 1. To deal with the quadratic term, we would like to control the Hessian of the likelihood at a point between β̂ and β̂t. Invoking the likelihood curvature condition (65), one has

    ℓ(β̂t) ≥ ℓ(β̂) ≥ ℓ(β̂t) + ⟨∇ℓ(β̂t), β̂ − β̂t⟩ + (n/2) ω( max{‖β̂‖, ‖β̂t‖} ) ‖β̂ − β̂t‖²   (69)

with high probability. Applying Cauchy–Schwarz yields that, with exponentially high probability,

    ‖β̂ − β̂t‖ ≤ [ 2 / ( n ω( max{‖β̂‖, ‖β̂t‖} ) ) ] ‖∇ℓ(β̂t)‖ ≤ [ 2 / ( n ω(‖β̂‖) ω(‖β̂t‖) ) ] ‖∇ℓ(β̂t)‖,

where the last inequality follows since 0 < ω(·) < 1 and ω(·) is non-increasing.

It remains to control ‖∇ℓ(β̂t)‖. The identity Ψ(z; b∗) = z − prox_{b∗ρ}(z) and (60) give

    prox_{b∗ρ}(ηt−1) = X β̂t + ηt−1 − ηt.                                         (70)

In addition, substituting Ψ(z; b) = bρ′(prox_{bρ}(z)) into (62) yields

    (p/b∗)(β̂t − β̂t−1) = −X⊤ ρ′( prox_{b∗ρ}(ηt−1) ) = −X⊤ ρ′( X β̂t + ηt−1 − ηt ).

We are now ready to bound ‖∇ℓ(β̂t)‖. Recalling that

    ∇ℓ(β̂t) = X⊤ ρ′(X β̂t) = X⊤ ρ′( X β̂t + ηt−1 − ηt ) + X⊤ [ ρ′(X β̂t) − ρ′( X β̂t + ηt−1 − ηt ) ]

and that sup_z ρ″(z) < ∞, we have

    ‖∇ℓ(β̂t)‖ ≤ ‖ −X⊤ ρ′( X β̂t + ηt−1 − ηt ) ‖ + ‖X‖ ‖ ρ′( X β̂t + ηt−1 − ηt ) − ρ′(X β̂t) ‖
             ≤ (p/b∗) ‖β̂t − β̂t−1‖ + ‖X‖ ( sup_z ρ″(z) ) ‖ηt−1 − ηt‖.

This establishes that with probability at least 1 − c1 e^{−c2 n},

    ‖β̂ − β̂t‖ ≤ [ 2 / ( ω(‖β̂‖) ω(‖β̂t‖) ) ] [ (p/(b∗ n)) ‖β̂t − β̂t−1‖ + (1/n)( sup_z ρ″(z) ) ‖X‖ ‖ηt−1 − ηt‖ ].   (71)

Using (42) together with Borel–Cantelli yields lim_{n→∞} ‖X‖/√n < ∞ almost surely. Further, it follows from (63) that lim_{n→∞} ‖β̂t‖ is finite almost surely as τ∗ < ∞. These taken together with (64), (67) and (68) yield

    lim_{t→∞} lim_{n→∞} ‖β̂ − β̂t‖ =a.s. 0                                        (72)

as claimed.
7  Likelihood ratio analysis

This section presents the analytical details for Section 2.5.2, which relates the log-likelihood ratio statistic Λi with β̂i. Recall from (35) that the LLR statistic for testing β1 = 0 vs. β1 ≠ 0 is given by

    Λ1 = (1/2) (X̃β̃ − Xβ̂)⊤ Dβ̂ (X̃β̃ − Xβ̂) + (1/6) ∑_{i=1}^n ρ‴(γi)(X̃i⊤β̃ − Xi⊤β̂)³,   (73)

where

    Dβ̂ := diag( ρ″(X1⊤β̂), . . . , ρ″(Xn⊤β̂) )                                     (74)

and γi lies between Xi⊤β̂ and X̃i⊤β̃. The asymptotic distribution of Λ1 claimed in Theorem 3 immediately follows from the result below, whose proof is the subject of the rest of this section.
Theorem 7. Let (τ∗, b∗) be the unique solution to the system of equations (23) and (24), and define

    G̃ = (1/n) X̃⊤ Dβ̃ X̃    and    α̃ = (1/n) Tr(G̃⁻¹).                              (75)

Suppose p/n → κ ∈ (0, 1/2). Then

(a) the log-likelihood ratio statistic obeys

    2Λ1 − p β̂1²/α̃  →  0  in probability;                                         (76)

(b) and the scalar α̃ converges,

    α̃ → b∗  in probability.                                                      (77)

7.1  More notations and preliminaries
Before proceeding, we introduce some notations that will be used throughout. For any matrix X, denote by Xij and X·j its (i, j)-th entry and jth column, respectively. We denote an analogue r = {ri}_{1≤i≤n} (resp. r̃ = {r̃i}_{1≤i≤n}) of residuals in the full (resp. reduced) model by

    ri := −ρ′(Xi⊤β̂)   and   r̃i := −ρ′(X̃i⊤β̃).                                     (78)

As in (74), set

    Dβ̃ := diag( ρ″(X̃1⊤β̃), . . . , ρ″(X̃n⊤β̃) )   and   Dβ̂,b̃ := diag( ρ″(γ1∗), . . . , ρ″(γn∗) ),   (79)

where γi∗ is between Xi⊤β̂ and Xi⊤b̃, and b̃ is to be defined later in Section 7.2. Further, as in (75), introduce the Gram matrices

    G := (1/n) X⊤ Dβ̂ X   and   Gβ̂,b̃ := (1/n) X⊤ Dβ̂,b̃ X.                           (80)

Let G̃(i) denote the version of G̃ without the term corresponding to the ith observation, that is,

    G̃(i) = (1/n) ∑_{j: j≠i} ρ″(X̃j⊤β̃) X̃j X̃j⊤.                                      (81)
Additionally, let β̂[−i] be the MLE when the ith observation is dropped and let G[−i] be the corresponding Gram matrix,

    G[−i] = (1/n) ∑_{j: j≠i} ρ″(Xj⊤β̂[−i]) Xj Xj⊤.                                  (82)

Further, let β̃[−i] be the MLE when the first predictor and the ith observation are removed, i.e.

    β̃[−i] := arg min_{β∈R^{p−1}} ∑_{j: j≠i} ρ(X̃j⊤β).

Below, G̃[−i] is the corresponding version of G̃,

    G̃[−i] = (1/n) ∑_{j: j≠i} ρ″(X̃j⊤β̃[−i]) X̃j X̃j⊤.                                  (83)

For these different versions of G, their least eigenvalues are all bounded away from 0, as asserted by the following lemma.

Lemma 7. There exist some absolute constants λlb, C, c > 0 such that

    P( λmin(G) > λlb ) ≥ 1 − C e^{−cn}.

Moreover, the same result holds for G̃, Gβ̂,b̃, G̃(i), G[−i] and G̃[−i] for all i ∈ [n].

Proof: This result follows directly from Lemma 2, Lemma 4, and Theorem 4.

Throughout the rest of this section, we restrict ourselves (for any given n) to the following event:

    An := {λmin(G̃) > λlb} ∩ {λmin(G) > λlb} ∩ {λmin(Gβ̂,b̃) > λlb}
          ∩ { ∩_{i=1}^n λmin(G̃(i)) > λlb } ∩ { ∩_{i=1}^n λmin(G̃[−i]) > λlb } ∩ { ∩_{i=1}^n λmin(G[−i]) > λlb }.   (84)

By Lemma 7, An arises with exponentially high probability, i.e.

    P(An) ≥ 1 − exp(−Ω(n)).                                                      (85)

7.2  A surrogate for the MLE
of (73), the main step in controlling Λ1 consists of characterizing the differences X β̂ − X̃ β̃ or
0
β̂ −
. Since the definition of β̂ is implicit and not amenable to direct analysis, we approximate β̂ by
β̃
a more amenable surrogate b̃, an idea introduced in [19–21]. We collect some properties of the surrogate
which will prove valuable in the subsequent analysis.
To begin with, our surrogate is
0
1
b̃ =
+ b̃1
,
(86)
−G̃−1 w
β̃
where G̃ is defined in (80),
w :=
1 Xn
1
ρ00 (X̃i> β̃)Xi1 X̃i = X̃ > Dβ̃ X·1 ,
i=1
n
n
and b̃1 is a scalar to be specified later. This vector is constructed in the hope that
(
β̂1 ≈ b̃1 ,
β̂ ≈ b̃,
or equivalently,
β̂2:p − β̃ ≈ −b̃1 G̃−1 w,
26
(87)
(88)
where β̂2:p contains the 2nd through pth components of β̂.
Before specifying b̃1, we shall first shed some insight into the remaining terms in b̃. By definition,

    ∇²ℓ( (0; β̃) ) = X⊤ Dβ̃ X = [ X·1⊤Dβ̃X·1   X·1⊤Dβ̃X̃ ;  X̃⊤Dβ̃X·1   X̃⊤Dβ̃X̃ ] = [ X·1⊤Dβ̃X·1   n w⊤ ;  n w   n G̃ ].

Employing the first-order approximation of ∇ℓ(·) gives

    ∇²ℓ( (0; β̃) ) ( β̂ − (0; β̃) ) ≈ ∇ℓ(β̂) − ∇ℓ( (0; β̃) ).                          (89)

Suppose β̂2:p is well approximated by β̃. Then all but the first coordinates of ∇ℓ( (0; β̃) ) and ∇ℓ(β̂) are also very close to each other. Therefore, taking the 2nd through pth components of (89) and approximating them by zero gives

    [ w   G̃ ] ( β̂ − (0; β̃) ) ≈ 0.

This together with a little algebra yields

    β̂2:p − β̃ ≈ −β̂1 G̃⁻¹w ≈ −b̃1 G̃⁻¹w,

which coincides with (88). In fact, for all but the 1st entries, b̃ is constructed by moving β̃ one step in the direction which takes it closest to β̂.
Next, we come to discussing the scalar b̃1. Introduce the projection matrix

    H := I − (1/n) Dβ̃^{1/2} X̃ G̃⁻¹ X̃⊤ Dβ̃^{1/2},                                    (90)

and define b̃1 as

    b̃1 := X·1⊤ r̃ / ( X·1⊤ Dβ̃^{1/2} H Dβ̃^{1/2} X·1 ),                                (91)

where r̃ comes from (78). In fact, the expression for b̃1 is obtained through a similar (but slightly more complicated) first-order approximation as for b̃2:p, in order to make sure that b̃1 ≈ β̂1; see [21, pages 14560–14561] for a detailed description.
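Given a reduced-model fit, the surrogate is entirely explicit. The sketch below (ours, logistic link) assembles b̃ from (86)–(91); the residual vector is taken to be the usual score residual for 0/1-coded responses, which plays the role of r̃ in (78) only up to the response coding used in the paper's global-null reduction, and the denominator of (91) uses the identity X·1⊤Dβ̃^{1/2}HDβ̃^{1/2}X·1 = X·1⊤Dβ̃X·1 − n w⊤G̃⁻¹w.

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

def surrogate(X, y):
    """Assemble the surrogate b_tilde of (86)-(91) for testing beta_1 = 0 (logistic link)."""
    n, p = X.shape
    x1, X_red = X[:, 0], X[:, 1:]

    fit = LogisticRegression(penalty=None, fit_intercept=False,
                             solver="lbfgs", max_iter=2000).fit(X_red, y)
    beta_tilde = fit.coef_.ravel()                   # reduced-model MLE

    mu = expit(X_red @ beta_tilde)
    d = mu * (1.0 - mu)                              # rho''(x_tilde_i' beta_tilde)
    r_tilde = y - mu                                 # score residual; stand-in for (78)
    G_tilde = (X_red.T * d) @ X_red / n              # (75)
    w = (X_red.T * d) @ x1 / n                       # (87)
    Ginv_w = np.linalg.solve(G_tilde, w)

    # Denominator of (91): x1' D^{1/2} H D^{1/2} x1 = x1' D x1 - n * w' G_tilde^{-1} w.
    denom = np.dot(x1 * d, x1) - n * np.dot(w, Ginv_w)
    b1 = np.dot(x1, r_tilde) / denom                 # (91)

    b_tilde = np.concatenate(([0.0], beta_tilde)) + b1 * np.concatenate(([1.0], -Ginv_w))
    return b_tilde, b1
```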
We now formally justify that the surrogate b̃ and the MLE β̂ are close to each other.

Theorem 8. The MLE β̂ and the surrogate b̃ from (86) obey

    ‖β̂ − b̃‖ ≲ n^{−1+o(1)},                                                       (92)
    |b̃1| ≲ n^{−1/2+o(1)},                                                        (93)
    sup_{1≤i≤n} |Xi⊤b̃ − X̃i⊤β̃| ≲ n^{−1/2+o(1)}                                     (94)

with probability tending to one as n → ∞.

Proof: See Section 7.4.
The global accuracy (92) immediately leads to a coordinate-wise approximation result between β̂1 and
b̃1 .
Corollary 3. With probability tending to one as n → ∞,

    √n |b̃1 − β̂1| ≲ n^{−1/2+o(1)}.                                                 (95)

Another consequence of Theorem 8 is that the value Xi⊤β̂ in the full model and its counterpart X̃i⊤β̃ in the reduced model are uniformly close.

Corollary 4. The values Xi⊤β̂ and X̃i⊤β̃ are uniformly close in the sense that

    sup_{1≤i≤n} |Xi⊤β̂ − X̃i⊤β̃| ≲ n^{−1/2+o(1)}                                      (96)

holds with probability approaching one as n → ∞.

Proof: Note that

    sup_{1≤i≤n} |Xi⊤β̂ − X̃i⊤β̃| ≤ sup_{1≤i≤n} |Xi⊤(β̂ − b̃)| + sup_{1≤i≤n} |Xi⊤b̃ − X̃i⊤β̃|.

The second term on the right-hand side is upper bounded by n^{−1/2+o(1)} with probability 1 − o(1) according to Theorem 8. Invoking Lemma 2 and Theorem 8 and applying the Cauchy–Schwarz inequality yield that the first term is O(n^{−1/2+o(1)}) with probability 1 − o(1). This establishes the claim.
7.3
Analysis of the likelihood-ratio statistic
We are now positioned to use our surrogate b̃ to analyze the likelihood-ratio statistic. In this subsection we
establish Theorem 7(a). The proof for Theorem 7(b) is deferred to Appendix I.
Recall from (35) that

    2Λ1 = (X̃β̃ − Xβ̂)⊤ Dβ̂ (X̃β̃ − Xβ̂) + (1/3) ∑_{i=1}^n ρ‴(γi)(X̃i⊤β̃ − Xi⊤β̂)³,

where we denote the second term by I3. To begin with, Corollary 4 together with the assumption sup_z ρ‴(z) < ∞ implies that

    I3 ≲ n^{−1/2+o(1)}

with probability 1 − o(1). Hence, I3 converges to zero in probability.
Reorganize the quadratic term as follows:

    (X̃β̃ − Xβ̂)⊤ Dβ̂ (X̃β̃ − Xβ̂)
        = ∑_i ρ″(Xi⊤β̂) ( Xi⊤β̂ − X̃i⊤β̃ )²
        = ∑_i ρ″(Xi⊤β̂) [ Xi⊤(β̂ − b̃) + (Xi⊤b̃ − X̃i⊤β̃) ]²
        = ∑_i ρ″(Xi⊤β̂) (Xi⊤(β̂ − b̃))² + 2 ∑_i ρ″(Xi⊤β̂) Xi⊤(β̂ − b̃)(Xi⊤b̃ − X̃i⊤β̃)
          + ∑_i ρ″(Xi⊤β̂) (Xi⊤b̃ − X̃i⊤β̃)².                                          (97)

We control each of the three terms on the right-hand side of (97).

• Since sup_z ρ″(z) < ∞, the first term in (97) is bounded by

    ∑_i ρ″(Xi⊤β̂)(Xi⊤(β̂ − b̃))² ≲ ‖β̂ − b̃‖² ‖ ∑_i Xi Xi⊤ ‖ ≲ n^{−1+o(1)}

with probability 1 − o(1), by an application of Theorem 8 and Lemma 2.
• From the definition of b̃, the second term can be upper bounded by

    2 | ∑_i ρ″(Xi⊤β̂) Xi⊤(β̂ − b̃)(Xi⊤b̃ − X̃i⊤β̃) | ≤ 2 |b̃1| · ‖β̂ − b̃‖ · ‖ ∑_i Xi Xi⊤ ‖ · √(1 + w⊤G̃⁻²w) ≲ n^{−1/2+o(1)}

with probability 1 − o(1), where the last step follows from a combination of Theorem 8, Lemma 2 and the following lemma.

Lemma 8. Let G̃ and w be as defined in (80) and (87), respectively. Then

    P( w⊤G̃⁻²w ≲ 1 ) ≥ 1 − exp(−Ω(n)).                                            (98)

Proof: See Appendix E.
• The third term in (97) can be decomposed as

    ∑_i ρ″(Xi⊤β̂)(Xi⊤b̃ − X̃i⊤β̃)²
        = ∑_i [ ρ″(Xi⊤β̂) − ρ″(X̃i⊤β̃) ] (Xi⊤b̃ − X̃i⊤β̃)² + ∑_i ρ″(X̃i⊤β̃)(Xi⊤b̃ − X̃i⊤β̃)²
        = ∑_i ρ‴(γ̃i)(Xi⊤β̂ − X̃i⊤β̃)(Xi⊤b̃ − X̃i⊤β̃)² + ∑_i ρ″(X̃i⊤β̃)(Xi⊤b̃ − X̃i⊤β̃)²      (99)

for some γ̃i between Xi⊤β̂ and X̃i⊤β̃. From Theorem 8 and Corollary 4, the first term in (99) is O(n^{−1/2+o(1)}) with probability 1 − o(1). Hence, the only remaining term is the second.

In summary, we have

    2Λ1 − ∑_i ρ″(X̃i⊤β̃)(Xi⊤b̃ − X̃i⊤β̃)²  →  0  in probability,                       (100)

where the sum equals v⊤X⊤Dβ̃Xv with v := b̃1 (1; −G̃⁻¹w) according to (86). On simplification, the quadratic form reduces to

    v⊤X⊤Dβ̃Xv = b̃1² (X·1 − X̃G̃⁻¹w)⊤ Dβ̃ (X·1 − X̃G̃⁻¹w)
              = b̃1² [ X·1⊤Dβ̃X·1 − 2X·1⊤Dβ̃X̃G̃⁻¹w + w⊤G̃⁻¹X̃⊤Dβ̃X̃G̃⁻¹w ]
              = b̃1² [ X·1⊤Dβ̃X·1 − n w⊤G̃⁻¹w ]
              = n b̃1² · ( (1/n) X·1⊤ Dβ̃^{1/2} H Dβ̃^{1/2} X·1 ) =: n b̃1² ξ,

recalling the definitions (80), (87), and (90). Hence, the log-likelihood ratio 2Λ1 simplifies to n b̃1² ξ + oP(1) on An.
Finally, rewrite v⊤X⊤Dβ̃Xv as n(b̃1² − β̂1²)ξ + nβ̂1²ξ. To analyze the first term, note that

    n|b̃1² − β̂1²| = n|b̃1 − β̂1|·|b̃1 + β̂1| ≤ n|b̃1 − β̂1|² + 2n|b̃1|·|b̃1 − β̂1| ≲ n^{−1/2+o(1)}   (101)

with probability 1 − o(1), in view of Theorem 8 and Corollary 3. It remains to analyze ξ. Recognize that X·1 is independent of Dβ̃^{1/2}HDβ̃^{1/2}. Applying the Hanson–Wright inequality [27, 44] and the Sherman–Morrison–Woodbury formula (e.g. [26]) leads to the following lemma:

Lemma 9. Let α̃ = (1/n)Tr(G̃⁻¹), where G̃ = (1/n)X̃⊤Dβ̃X̃. Then one has

    | (1/n) X·1⊤ Dβ̃^{1/2} H Dβ̃^{1/2} X·1 − (p − 1)/(nα̃) | ≲ n^{−1/2+o(1)}            (102)

with probability approaching one as n → ∞.

Proof: See Appendix F.

In addition, if one can show that α̃ is bounded away from zero with probability 1 − o(1), then it is seen from Lemma 9 that

    ξ − p/(nα̃)  →  0  in probability.                                             (103)

To justify the above claim, we observe that since ρ″ is bounded, λmax(G̃) ≲ λmax(X̃⊤X̃)/n ≲ 1 with exponentially high probability (Lemma 2). This yields

    α̃ = Tr(G̃⁻¹)/n ≳ p/n

with probability 1 − o(1). On the other hand, on An one has

    α̃ ≤ p/( n λmin(G̃) ) ≲ p/n.

Hence, it follows that ξ = Ω(1) with probability 1 − o(1). Putting this together with (101) gives the approximation

    v⊤X⊤Dβ̃Xv = nβ̂1²ξ + o(1).                                                      (104)

Taken collectively, (100), (103) and (104) yield the desired result

    2Λ1 − pβ̂1²/α̃  →  0  in probability.
7.4  Proof of Theorem 8

This subsection outlines the main steps for the proof of Theorem 8. To begin with, we shall express the difference β̂ − b̃ in terms of the gradient of the negative log-likelihood function. Note that ∇ℓ(β̂) = 0, and hence

    ∇ℓ(b̃) = ∇ℓ(b̃) − ∇ℓ(β̂) = ∑_{i=1}^n Xi [ ρ′(Xi⊤b̃) − ρ′(Xi⊤β̂) ] = ∑_{i=1}^n ρ″(γi∗) Xi Xi⊤ (b̃ − β̂),

where γi∗ is between Xi⊤β̂ and Xi⊤b̃. Recalling the notation introduced in (80), this can be rearranged as

    b̃ − β̂ = (1/n) Gβ̂,b̃⁻¹ ∇ℓ(b̃).

Hence, on An, this yields

    ‖β̂ − b̃‖ ≤ ‖∇ℓ(b̃)‖ / (λlb n).                                                  (105)

The next step involves expressing ∇ℓ(b̃) in terms of the difference b̃ − (0; β̃).
Lemma 10. On the event An (84), the negative log-likelihood evaluated at the surrogate b̃ obeys

    ∇ℓ(b̃) = ∑_{i=1}^n [ ρ″(γi∗) − ρ″(X̃i⊤β̃) ] Xi Xi⊤ ( b̃ − (0; β̃) ),

where γi∗ is some quantity between Xi⊤b̃ and X̃i⊤β̃.

Proof: The proof follows exactly the same argument as in the proof of [20, Proposition 3.11], and is thus omitted.
0
The point of expressing ∇`(b̃) in this way is that the difference b̃ −
is known explicitly from the
β̃
definition of b̃. Invoking Lemma 10 and the definition (86) allows one to further upper bound (105) as
kβ̂ − b̃k .
1
∇`(b̃)
n
. sup ρ00 (γi∗ ) − ρ00 (X̃i> β̃)
i
. sup Xi> b̃ − X̃i> β̃ |ρ000 |∞
i
n
1X
0
Xi Xi> b̃ −
β̃
n i=1
p
1 Xn
Xi Xi> · |b̃1 | 1 + w> G̃−2 w
i=1
n
. |b̃1 | sup Xi> b̃ − X̃i> β̃
(106)
i
with probability at least 1 − exp(−Ω(n)). The last inequality here comes from our assumption that
supz |ρ000 (z)| < ∞ together with Lemmas 2 and 8.
In order to bound (106), we first make use of the definition of b̃ to reach
sup Xi> b̃ − X̃i> β̃ = |b̃1 | sup |Xi1 − X̃i> G̃−1 w|.
i
(107)
i
The following lemma provides an upper bound on sup_i |Xi1 − X̃i⊤G̃⁻¹w|.

Lemma 11. With G̃ and w as defined in (80) and (87),

    P( sup_{1≤i≤n} | Xi1 − X̃i⊤G̃⁻¹w | ≤ n^{o(1)} ) ≥ 1 − o(1).                        (108)

Proof: See Appendix G.

In view of Lemma 11, the second factor on the right-hand side of (107) is bounded above by n^{o(1)} with high probability. Thus, in both the bounds (106) and (107), it only remains to analyze the term b̃1. To this end, we control the numerator and the denominator of b̃1 separately.

• Recall from the definition (91) that the numerator of b̃1 is given by X·1⊤r̃ and that r̃ is independent of X·1. Thus, conditional on X̃, the quantity X·1⊤r̃ is distributed as a Gaussian with mean zero and variance

    σ² = ∑_{i=1}^n ( ρ′(X̃i⊤β̃) )².

Since |ρ′(x)| = O(|x|), the variance is bounded by

    σ² ≲ β̃⊤ ( ∑_{i=1}^n X̃i X̃i⊤ ) β̃ ≲ n ‖β̃‖² ≲ n                                     (109)

with probability at least 1 − exp(−Ω(n)), a consequence of Theorem 4 and Lemma 2. Therefore, with probability 1 − o(1), we have

    (1/√n) | X·1⊤ r̃ | ≲ n^{o(1)}.                                                    (110)

• We now move on to the denominator of b̃1 in (91). In the discussion following Lemma 9 we showed that (1/n) X·1⊤ Dβ̃^{1/2} H Dβ̃^{1/2} X·1 = Ω(1) with probability 1 − o(1).

Putting the above bounds together, we conclude

    P( |b̃1| ≲ n^{−1/2+o(1)} ) = 1 − o(1).                                            (111)

Substitution into (106) and (107) yields

    ‖β̂ − b̃‖ ≲ n^{−1+o(1)}   and   sup_i | Xi⊤b̃ − X̃i⊤β̃ | ≲ n^{−1/2+o(1)}

with probability 1 − o(1), as claimed.
8
Discussion
In this paper, we derived the high-dimensional asymptotic distribution of the LLR under our modelling
assumptions. In particular, we showed that the LLR is inflated vis-à-vis the classical Wilks’ approximation
and that this inflation grows as the dimensionality κ increases. This inflation is typical of high-dimensional
problems, and one immediate practical consequence is that it explains why classically computed p-values are
completely off since they tend to be far too small under the null hypothesis. In contrast, we have shown
in our simulations that our new limiting distribution yields reasonably accurate p-values in finite samples.
Having said this, our work raises a few important questions that we have not answered and we conclude this
paper with a couple of them.
• We expect that our results continue to hold when the covariates are not normally distributed, see
Section 3 for some numerical evidence in this direction. To be more precise, we expect the same
limiting distribution to hold when the variables are simply sub-Gaussian. If this were true, then this
would imply that our rescaled chi-square has a form of universal validity.
• The major limitation of our work is arguably the fact that our limiting distribution holds under the
global null; that is, under the assumption that all the regression coefficients vanish. It is unclear to
us how the distribution would change in the case where the coefficients are not all zero. In particular,
would the limiting distribution depend upon the unknown values of these coefficients? Are there
assumptions under which it would not? Suppose for instance that we model the regression coefficients
as i.i.d. samples from the mixture model
(1 − )δ0 + Π? ,
where 0 < < 1 is a mixture parameter, δ0 is a point mass at zero and Π? is a distribution with
vanishing mass at zero. Then what would we need to know about and Π? to compute the asymptotic
distribution of the LLR under the null?
Acknowledgements
E. C. was partially supported by the Office of Naval Research under grant N00014-16-1-2712, and by the
Math + X Award from the Simons Foundation. Y. C. and P. S. are grateful to Andrea Montanari for his help
in understanding AMP and [18]. Y. C. thanks Kaizheng Wang and Cong Ma for helpful discussion about [20],
and P. S. thanks Subhabrata Sen for several helpful discussions regarding this project. E. C. would like to
thank Iain Johnstone for a helpful discussion as well.
A  Proofs for Eigenvalue Bounds

A.1  Proof of Lemma 3
Fix ε ≥ 0 sufficiently small. For any given S ⊆ [n] obeying |S| = (1 − ε)n and 0 ≤ t ≤ √(1 − ε) − √(p/n), it follows from [51, Corollary 5.35] that

    λmin( (1/n) ∑_{i∈S} Xi Xi⊤ ) < (1/n) ( √|S| − √p − t√n )² = ( √(1 − ε) − √(p/n) − t )²

holds with probability at most 2 exp(−t²|S|/2) = 2 exp(−(1 − ε)t²n/2). Taking the union bound over all possible subsets S of size (1 − ε)n gives

    P( ∃ S ⊆ [n] with |S| = (1 − ε)n s.t. λmin( (1/n) ∑_{i∈S} Xi Xi⊤ ) < ( √(1 − ε) − √(p/n) − t )² )
        ≤ ( n choose (1 − ε)n ) · 2 exp( −(1 − ε)t²n/2 )
        ≤ 2 exp( nH(ε) − (1 − ε)t²n/2 ),

where the last line is a consequence of the inequality ( n choose (1 − ε)n ) ≤ e^{nH(ε)} [16, Example 11.1.3].

A.2  Proof of Lemma 4
Define

    SB(β) := { i : |Xi⊤β| ≤ B‖β‖ }

for any B > 0 and any β. Then

    ∑_{i=1}^n ρ″(Xi⊤β) Xi Xi⊤ ⪰ ∑_{i∈SB(β)} ρ″(Xi⊤β) Xi Xi⊤ ⪰ ( inf_{z:|z|≤B‖β‖} ρ″(z) ) ∑_{i∈SB(β)} Xi Xi⊤.

If one also has |SB(β)| ≥ (1 − ε)n (for ε ≥ 0 sufficiently small), then this together with Lemma 3 implies that

    (1/n) ∑_{i=1}^n ρ″(Xi⊤β) Xi Xi⊤ ⪰ ( inf_{z:|z|≤B‖β‖} ρ″(z) ) ( √(1 − ε) − √(p/n) − t )² I

with probability at least 1 − 2 exp( −( (1 − ε)t²/2 − H(ε) ) n ).

Thus, if we can ensure that with high probability |SB(β)| ≥ (1 − ε)n holds simultaneously for all β, then we are done. From Lemma 2 we see that (1/n)‖X⊤X‖ ≤ 9 with probability exceeding 1 − 2 exp(−n/2). On this event,

    ‖Xβ‖² ≤ 9n‖β‖²,   ∀β.                                                          (112)

On the other hand, the definition of SB(β) gives

    ‖Xβ‖² ≥ ∑_{i ∉ SB(β)} (Xi⊤β)² ≥ ( n − |SB(β)| )(B‖β‖)² = n ( 1 − |SB(β)|/n ) B²‖β‖².   (113)

Taken together, (112) and (113) yield

    |SB(β)| ≥ ( 1 − 9/B² ) n,   ∀β,

with probability at least 1 − 2 exp(−n/2). Therefore, with probability 1 − 2 exp(−n/2), |S_{3/√ε}(β)| ≥ (1 − ε)n holds simultaneously for all β. Putting the above results together and setting t = 2√( H(ε)/(1 − ε) ) gives

    ∑_{i=1}^n ρ″(Xi⊤β) Xi Xi⊤ ⪰ ( inf_{z:|z|≤3‖β‖/√ε} ρ″(z) ) ( √(1 − ε) − √(p/n) − 2√( H(ε)/(1 − ε) ) )² n I

simultaneously for all β with probability at least 1 − 2 exp(−nH(ε)) − 2 exp(−n/2).
B  Proof of Lemma 5
Applying an integration by parts leads to
Z ∞
1
E [Ψ0 (τ Z; b)] =
Ψ0 (τ z; b)φ(z)dz = Ψ(τ z; b)φ(z)
τ
−∞
Z
1 ∞
Ψ(τ z; b)φ0 (z)dz
= −
τ −∞
with φ(z) =
√1
2π
∞
−∞
−
1
τ
Z
∞
Ψ(τ z; b)φ0 (z)dz
−∞
exp(−z 2 /2). This reveals that
0
G (b)
=
=
Z
Z
ρ0 proxbρ (τ z)
1 ∞ ∂Ψ(τ z; b) 0
1 ∞
φ0 (z)dz
−
φ (z)dz = −
τ −∞
∂b
τ −∞ 1 + bρ00 proxbρ (τ z)
!
Z
ρ0 proxbρ (−τ z)
ρ0 proxbρ (τ z)
1 ∞
−
φ0 (z)dz,
τ 0
1 + xρ00 proxbρ (−τ z)
1 + xρ00 proxbρ (τ z)
(114)
where the second identity comes from [18, Proposition 6.4], and the last identity holds since φ0 (z) = −φ0 (−z).
Next, we claim that
(a) The function h (z) :=
ρ0 (z)
1+bρ00 (z)
is increasing in z;
(b) proxbρ (z) is increasing in z.
These two claims imply that
ρ0 proxbρ (−τ z)
1 + bρ00 proxbρ (−τ z)
−
ρ0 proxbρ (τ z)
< 0,
1 + bρ00 proxbρ (τ z)
∀z > 0,
which combined with the fact φ0 (z) < 0 for z > 0 reveals
sign
ρ0 proxbρ (−τ z)
ρ0 proxbρ (τ z)
−
1 + bρ00 proxbρ (−τ z)
1 + bρ00 proxbρ (τ z)
!
!
0
φ (z)
= 1,
∀z > 0.
In other words, the integrand in (114) is positive, which allows one to conclude that G0 (b) > 0.
We then move on to justify (a) and (b). For the first, the derivative of h is given by
h0 (z) =
ρ00 (z) + b(ρ00 (z))2 − bρ0 (z)ρ000 (z)
2
(1 + bρ00 (z))
.
Since ρ0 is log concave, this directly yields (ρ00 )2 −ρ0 ρ000 > 0. As ρ00 > 0 and b ≥ 0, the above implies h0 (z) > 0
∂proxbρ (z)
for all z. The second claim follows from
≥ 1+bkρ1 00 k∞ > 0 (cf. [18, Equation (56)]).
∂z
It remains to analyze the behavior of G in the limits b → 0 and b → ∞. From [18, Proposition 6.4], G(b) can also be expressed as

    G(b) = 1 − E[ 1 / ( 1 + bρ″( prox_{bρ}(τZ) ) ) ].

Since ρ″ is bounded and the integrand is at most 1, the dominated convergence theorem gives

    lim_{b→0} G(b) = 0.

When b → ∞, bρ″(prox_{bρ}(τz)) → ∞ for fixed z. Again by the dominated convergence theorem,

    lim_{b→∞} G(b) = 1.

It follows that lim_{b→0} G(b) < κ < lim_{b→∞} G(b) and, therefore, G(b) = κ has a unique positive solution.
Remark 3. Finally, we show that the logistic and the probit effective links obey the assumptions of Lemma
5. We work with a fixed τ > 0.
• A direct computation shows that ρ0 is log-concave for the logistic model. For the probit, it is well-known
that the reciprocal of the hazard function (also known as Mills’ ratio) is strictly log-convex [4].
• To check the other condition, recall that the proximal mapping operator satisfies
bρ0 (proxbρ (τ z)) + proxbρ (τ z) = τ z.
(115)
For a fixed z, we claim that if b → ∞, proxbρ (τ z) → −∞. To prove this claim, we start by assuming
that this is not true. Then either proxbρ (τ z) is bounded or diverges to ∞. If it is bounded, the LHS
above diverges to ∞ while the RHS is fixed, which is a contradiction. Similarly if proxbρ (τ z) diverges
to ∞, the left-hand side of (115) diverges to ∞ while the right-hand side is fixed, which cannot be
true as well. Further, when b → ∞, we must have proxbρ (τ z) → −∞, bρ0 (proxbρ (τ z)) → ∞, such that
the difference of these two is τ z. Observe that for the logistic, ρ00 (x) = ρ0 (x)(1 − ρ0 (x)) and for the
probit, ρ00 (x) = ρ0 (x)(ρ0 (x) − x) [45]. Hence, combining the asymptotic behavior of proxbρ (τ z) and
bρ0 (proxbρ (τ z)), we obtain that bρ00 (proxbρ (τ z)) diverges to ∞ in both models when b → ∞.
C  Proof of Lemma 6

C.1  Proof of Part (i)
Recall from [18, Proposition 6.4] that

    κ = E[ Ψ′(τZ; b(τ)) ] = 1 − E[ 1 / ( 1 + b(τ)ρ″( prox_{b(τ)ρ}(τZ) ) ) ].

If we denote c := prox_{bρ}(0), then b(0) is given by the following relation:

    1 − κ = 1 / ( 1 + b(0)ρ″(c) )   ⟹   b(0) = κ / ( ρ″(c)(1 − κ) ) > 0,            (116)

as ρ″(c) > 0 for any given c > 0. In addition, since ρ′(c) > 0, we have

    V(0) = Ψ(0, b(0))²/κ  =(a)  b(0)²ρ′(c)²/κ > 0,

where (a) comes from (20).
C.2
Proof of Part (ii)
We defer the proof of this part to the supplemental materials [47].
D
Proof of Part (ii) of Theorem 4
As discussed in Section 5.2.2, it suffices to (1) construct a set {Bi | 1 ≤ i ≤ N } that forms a cover of the
cone A defined in (50), and (2) upper bound P {{Xβ | β ∈ Rp } ∩ Bi 6= {0}}. In what follows, we elaborate
on these two steps.
• Step 1. Generate N = exp 22 p i.i.d. points z (i) ∼ N (0, p1 Ip ), 1 ≤ i ≤ N , and construct a collection
of convex cones
z (i)
p
u, (i)
Ci := u ∈ R
≥ kuk ,
1 ≤ i ≤ N.
kz k
In words, Ci consists of all directions that have nontrivial positive correlation with z (i) . With high probability, this collection {Ci | 1 ≤ i ≤ N } forms a cover of Rp , a fact which is an immediate consequence
of the following lemma.
Lemma 12. Consider any given constant 0 < < 1, and let N = exp 22 p . Then there exist
some
positive universal constants c5 , C5 > 0 such that with probability exceeding 1 − C5 exp −c5 2 p ,
N
X
1{hx,z(i) i≥kxkkz(i) k} ≥ 1
i=1
holds simultaneously for all x ∈ Rp .
With our family {Ci | 1 ≤ i ≤ N } we can introduce
n
(i)
X
√
z
Bi := Ci ∩ u ∈ Rn |
max {−uj , 0} ≤ n u, (i)
,
kz k
1 ≤ i ≤ N,
(117)
j=1
which in turn forms a cover of the nonconvex cone A defined in (50). DTo justifyE this, note that for
(i)
any u ∈ A, one can find i ∈ {1, · · · , N } obeying u ∈ Ci , or equivalently, u, kzz(i) k ≥ kuk, with high
probability. Combined with the membership to A this gives
n
X
√
√
z (i)
max {−uj , 0} ≤ 2 nkuk ≤ n u, (i)
,
kz k
j=1
indicating that u is contained within some Bi .
• Step 2. We now move on to control P {{Xβ | β ∈ Rp } ∩ Bi 6= {0}}. If the statistical dimensions of
the two cones obey δ (Bi ) < n − δ ({Xβ | β ∈ Rp }) = n − p, then an application of [3, Theorem I] gives
(
2 )
1 n − δ ({Xβ | β ∈ Rp }) − δ (Bi )
p
√
P {{Xβ | β ∈ R } ∩ Bi 6= {0}} ≤ 4 exp −
8
n
(
)
2
(n − p − δ(Bi ))
≤ 4 exp −
.
(118)
8n
It then comes down to upper bounding δ(Bi ), which is the content of the following lemma.
Lemma 13. Fix > 0. When n is sufficiently large, the statistical dimension of the convex cone Bi
defined in (117) obeys
√ 3
√
1
4
δ(Bi ) ≤
(119)
+ 2 2 + 10H(2 ) n,
2
where H(x) := −x log x − (1 − x) log(1 − x).
Substitution into (118) gives
P {{Xβ | β ∈ Rp } ∩ Bi 6= {0}}
2
√ 3
√
1
4 − 10H(2
−
2
2
)
n
−
p
2
≤ 4 exp −
8n
(
2 )
√ 3
√
p
1 1
4
− 2 2 − 10H(2 ) −
n .
= 4 exp −
8 2
n
(120)
Finally, we prove Lemmas 12-13 in the next subsections. These are the only remaining parts for the proof
of Theorem 4.
D.1
Proof of Lemma 12
To begin with, it is seen that all kz (i) k concentrates around 1. Specifically, apply [29, Proposition 1] to get
r
2t
t
(i) 2
P kz k > 1 + 2
+
≤ e−t ,
p
p
and set t = 32 p to reach
o
n
o
n
√
2
P kz (i) k2 > 1 + 10 ≤ P kz (i) k2 > 1 + 2 3 + 62 ≤ e−3 p .
Taking the union bound we obtain
n
o
P ∃1 ≤ i ≤ N s.t. kz (i) k2 > 1 + 10
2
≤ N e−3
p
2
= e− p .
(121)
Next, we note that it suffices to prove Lemma 12 for all unit vectors x. The following lemma provides a
bound on z (i) , x for any fixed unit vector x ∈ Rp .
Lemma 14. Consider any fixed unit vector x ∈ Rp and any given constant 0 < < 1, and set N =
exp 22 p . There exist positive universal constants c5 , c6 , C6 > 0 such that
P
)
7 2
7
1{hz(i) ,xi≥ 1 } ≤ exp (1 − o (1)) p
≤ exp −2 exp (1 − o (1)) 2 p
.
2
4
4
i=1
(N
X
(122)
Recognizing that Lemma 12 is a uniform result, we need to extend Lemma 14 to all
x simultaneously,
which we achieve via the standard covering argument. Specifically, one can find a set C := x(j) ∈ Rp | 1 ≤ j ≤ K
p
of unit vectors with cardinality K = 1 + 2p2 to form a cover of the unit ball of resolution p−2 [51, Lemma
5.2]; that is, for any unit vector x ∈ Rp , there exists a x(j) ∈ C such that
kx(j) − xk ≤ p−2 .
Apply Lemma 14 and take the union bound to arrive at
N
X
7
1{hz(i) ,x(j) i≥ 1 } ≥ exp (1 − o(1)) 2 p > 1,
1≤j≤K
(123)
2
4
i=1
with probability exceeding 1−K exp −2 exp (1 − o(1)) 47 2 p ≥ 1−exp −2 (1 − o (1)) exp (1 − o(1)) 47 2 p .
This guarantees that for each x(j) , one can find at least one z (i) obeying
D
E 1
z (i) , x(j) ≥ .
2
This result together with (121) yields that with probability exceeding 1 − C exp −c2 p , for some universal
constants C, c > 0.
E
E D
E
D
D
E D
≥
z (i) , x(j) − kz (i) k · kx(j) − xk
z (i) , x ≥ z (i) , x(j) − z (i) , x(j) − x
1
1
1
1
kz (i) k − 2 kz (i) k
− 2 kz (i) k ≥ √ 2
2
p
p
1 + 10
1
kz (i) k
≥
30
holds simultaneously for all unit vectors x ∈ Rp . Since > 0 can be an arbitrary constant, this concludes
the proof.
≥
Proof of Lemma 14: Without loss of generality, it suffices to consider x = e1 = [1, 0, · · · , 0]> . For any
t > 0 and any constant ζ > 0, it comes from [2, Theorem A.1.4] that
(
)
N
1 X
√
√
1{hz(i) ,e1 i<ζ } > (1 + t) Φ (ζ p) ≤ exp −2t2 Φ2 (ζ p) N .
P
N i=1
√
Setting t = 1 − Φ ζ p gives
(
)
N
1 X
√
√
√ 2
√
P
1{hz(i) ,e1 i<ζ } > (2 − Φ (ζ p)) Φ (ζ p) ≤ exp −2 (1 − Φ (ζ p)) Φ2 (ζ p) N .
N i=1
Recall that for any t > 1, one has (t−1 − t−3 )φ(t) ≤ 1 − Φ(t) ≤ t−1 φ(t) which implies that
(1 + o (1)) ζ 2 p
√
1 − Φ (ζ p) = exp −
.
2
Taking ζ = 12 , we arrive at
√
√
(2 − Φ (ζ p)) Φ (ζ p)
=
√ 2
√
(1 − Φ (ζ p)) Φ2 (ζ p)
=
1
1 − exp − (1 + o (1)) ζ 2 p = 1 − exp − (1 + o (1)) 2 p ,
4
1
1
exp − (1 + o (1)) ζ 2 p = exp − (1 + o (1)) 2 p .
4
N
This justifies that
(N
(
)
)
N
X
1 2
1 X
√
√
P
1{hz(i) ,e1 i≥ 1 } ≤ N exp − (1 + o (1)) p
=P
1 (i)
> (2 − Φ (ζ p)) Φ (ζ p)
2
4
N i=1 {hz ,e1 i<ζ }
i=1
1 2
≤ exp −2 exp − (1 + o (1)) p N
4
7
= exp −2 exp (1 − o (1)) 2 p
4
as claimed.
D.2
Proof of Lemma 13
First of all, recall from the definition (17) that
h
i
2
2
2
2
δ(Bi ) = E kΠBi (g)k = E kgk − min kg − uk = n − E min kg − uk
u∈Bi
u∈Bi
2
≤ n − E min kg − uk ,
u∈Di
where g ∼ N (0, In ), and Di is a superset of Bi defined by
n
o
Xn
√
Di := u ∈ Rn |
max {−uj , 0} ≤ nkuk .
(124)
j=1
Recall from the triangle inequality that
kg − uk
≥
kuk − kgk > kgk = kg − 0k,
∀u : kuk > 2kgk.
Since 0 ∈ Di , this implies that
arg min kg − uk ≤ 2kgk,
u∈Di
revealing that
2
E min kg − uk = E
u∈Di
2
kg − uk
min
.
u∈Di ,kuk≤2kgk
In what follows, it suffices to look at the set of u’s within Di obeying kuk ≤ 2kgk, which verify
Xn
√
√
max {−uj , 0} ≤ nkuk ≤ 2 nkgk.
(125)
j=1
It is seen that
kg − uk2
≥
X
2
(gi − ui ) =
≥
gi2
+
i:gi <0,ui ≥0
≥
X
i:gi <0, −
i:gi <0, −
X
i:gi <0, ui >−
√
X
+
i:gi <0, −
i:gi <0, ui ≤−
√
n
(gi − ui )
n kgk<ui <0
gi2 − 2ui gi
√
n kgk<ui <0
X
i:gi <0, −
2ui gi .
√
(126)
n kgk<ui <0
1. Regarding the first term of (126), we first recognize that
r
Pn
P
√
max {−ui , 0}
i: ui <0 |ui |
i | ui ≤ −
kgk ≤ p
= i=1 p
≤ 2 n,
n
n kgk
n kgk
where the last inequality follows from the constraint (125). As a consequence,
X
X
X
gi2 ≥
gi2 −
gi2
√
√
i:gi <0
i:gi <0, ui >−
2
(gi − ui )
kgk
2
√
gi2 −
n kgk
n kgk<ui <0
X
+
√
X
gi2 +
i:gi <0,ui ≥0
≥
X
i:gi <0,ui ≥0
X
i:gi <0
X
n kgk
i:ui ≤−
X
≥
i:gi <0
gi2 −
n kgk
max
√
S⊆[n]: |S|=2 n
X
i∈S
gi2 .
2. Next, we turn to the second term of (126), which can be bounded by
v
u
u
X
X
u
ui gi ≤ u
u2i
t
√
√
i:gi <0, −
n kgk<ui <0
i:gi <0, −
≤
v
u
u
t
√
i:gi <0, −
!
i:−
≤
n kgk<ui <0
X
n kgk<ui <0
v
ur
u
t
kgk
n
· kgk2
|ui |
i:ui <0
!
X
n kgk<ui <0
!
X
|ui |
√ max
gi2
· kgk2 ≤
|ui |
√
3
2 4 kgk2 ,
i:ui <0
where the last inequality follows from the constraint (125).
Putting the above results together, we have
X
2
kg − uk ≥
gi2 −
i:gi <0
√
S⊆[n]: |S|=2 n
for any u ∈ Di obeying kuk ≤ 2kgk, whence
X
2
≥ E
gi2 −
E min kg − uk
u∈Di
i:gi <0
=
X
max
√ 3
1
− 2 2 4
2
√ 3
gi2 − 2 2 4 kgk2
i∈S
max √
S⊆[n]: |S|=2 n
X
√
3
gi2 − 2 2 4 kgk2
i∈S
#
"
n−E
max √
S⊆[n]: |S|=2 n
X
gi2
.
(127)
i∈S
√
Finally, it follows from [29, Proposition 1] that for any t > 2 n,
(
)
)
(
X
X
p
2
2
P
gi ≥ 5t ≤ P
gi ≥ |S| + 2 |S|t + 2t ≤ e−t ,
i∈S
i∈S
which together with the union bound gives
(
)
X
2
P
max √
gi ≥ 5t ≤
S⊆[n]: |S|=2 n
i∈S
(
X
√
S⊆[n]: |S|=2 n
P
)
X
gi2
≥ 5t
√
≤ exp H 2 n − t .
i∈S
This gives
"
E
#
max √
S⊆[n]: |S|=2 n
X
gi2
Z
=
P
0
i∈S
≤
(
∞
)
max √
S⊆[n]: |S|=2 n
√
5H 2 n +
√
< 10H 2 n,
Z
X
gi2
≥ t dt
i∈S
∞
√
1
exp
H
2
n
−
t
dt
√
5
5H (2 )n
for any given > 0 with the proviso that n is sufficiently large. This combined with (127) yields
√ 3
√
1
2
4
E min kg − uk
≥
− 2 2 − 10H(2 ) n
u∈Di
2
as claimed.
(128)
E
Proof of Lemma 8
Throughout, we shall restrict ourselves on the event An as defined in (84), on which G̃ λlb I. Recalling
the definitions of G̃ and w from (80) and (87), we see that
w> G̃−2 w =
≤
−2
1 >
X̃ Dβ̃ X̃
X̃ > Dβ̃ X·1
n
−2
1
1 >
D X̃
X̃ Dβ̃ X̃
X̃ > Dβ̃ .
n β̃
n
1 >
X D X̃
n2 ·1 β̃
>
X·1
n
2
1/2
If we let the singular value decomposition of √1n Dβ̃ X̃ be U ΣV > , then a little algebra gives Σ
and
−2
1 1/2
1 0
1/2
D X̃
X̃ Dβ̃ X̃
X̃ > Dβ̃ = U Σ−2 U > λ−1
lb I.
n β̃
n
(129)
√
λlb I
Substituting this into (129) and using the fact kX·1 k2 . n with high probability (by Lemma 2), we obtain
w> G̃−2 w .
1
kX·1 k2 . 1
nλL
with probability at least 1 − exp(−Ω(n)).
F    Proof of Lemma 9

Throughout this and the subsequent sections, we consider H_n and K_n to be two diverging sequences with the following properties:

    H_n = o(n^ε),   K_n = o(n^ε),   n² exp(−c₁ H_n²) = o(1),   n exp(−c₂ K_n²) = o(1),               (130)

for any constants c_i > 0, i = 1, 2, and any ε > 0. This lemma is an analogue of [20, Proposition 3.18]. We modify and adapt the proof ideas to establish the result in our setup. Throughout we shall restrict ourselves to the event A_n, on which G̃ ⪰ λ_lb I.

Due to independence between X_{·1} and {D_β̃, H}, one can invoke the Hanson-Wright inequality [44, Theorem 1.1] to yield

    P( | (1/n) X_{·1}^⊤ D_β̃^{1/2} H D_β̃^{1/2} X_{·1} − (1/n) Tr( D_β̃^{1/2} H D_β̃^{1/2} ) | > t  |  H, D_β̃ )
        ≤ 2 exp( −c min( t² / ( (K⁴/n²) ‖D_β̃^{1/2} H D_β̃^{1/2}‖_F² ),  t / ( (K²/n) ‖D_β̃^{1/2} H D_β̃^{1/2}‖ ) ) )
        ≤ 2 exp( −c min( t² / ( (K⁴/n) ‖D_β̃^{1/2} H D_β̃^{1/2}‖² ),  t / ( (K²/n) ‖D_β̃^{1/2} H D_β̃^{1/2}‖ ) ) ),

where ‖·‖_F denotes the Frobenius norm. Choose t = C² ‖D_β̃^{1/2} H D_β̃^{1/2}‖ H_n / √n with C > 0 a sufficiently large constant, and take H_n to be as in (130). Substitution into the above inequality and unconditioning give

    P( | (1/n) X_{·1}^⊤ D_β̃^{1/2} H D_β̃^{1/2} X_{·1} − (1/n) Tr( D_β̃^{1/2} H D_β̃^{1/2} ) | > ( C² H_n / √n ) ‖D_β̃^{1/2} H D_β̃^{1/2}‖ )
        ≤ 2 exp( −c min( C⁴ H_n² / K⁴,  C² √n H_n / K² ) ) = C exp( −c H_n² ) = o(1),                (131)

for some universal constants C, c > 0.
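As a side illustration (not part of the proof), the concentration quantified by the Hanson-Wright inequality above is easy to see numerically: for a Gaussian vector and a fixed symmetric matrix A standing in for D_β̃^{1/2} H D_β̃^{1/2}, the quadratic form (1/n) x^⊤ A x fluctuates around (1/n) Tr(A) at the 1/√n scale. All names below are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
B = rng.standard_normal((n, n)) / np.sqrt(n)
A = (B + B.T) / 2                                  # a fixed symmetric matrix (stand-in for D^{1/2} H D^{1/2})

x = rng.standard_normal((n, 200))                  # 200 independent N(0, I_n) vectors
quad = np.sum(x * (A @ x), axis=0) / n             # (1/n) x^T A x for each vector
dev = quad - np.trace(A) / n                       # deviation from (1/n) Tr(A)
print(np.abs(dev).max(), "vs the O(1/sqrt(n)) scale", 1 / np.sqrt(n))
```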
We are left with analyzing Tr( D_β̃^{1/2} H D_β̃^{1/2} ). Recall from the definition (90) of H that

    D_β̃^{1/2} H D_β̃^{1/2} = D_β̃ − (1/n) D_β̃ X̃ G̃^{−1} X̃^⊤ D_β̃,

and, hence,

    Tr( D_β̃^{1/2} H D_β̃^{1/2} ) = Σ_{i=1}^n ( ρ''(X̃_i^⊤ β̃) − ρ''(X̃_i^⊤ β̃)² X̃_i^⊤ G̃^{−1} X̃_i / n ).      (132)

This requires us to analyze G̃^{−1} carefully. To this end, recall that the matrix G̃_{(i)} defined in (81) obeys

    G̃_{(i)} = G̃ − (1/n) ρ''(X̃_i^⊤ β̃) X̃_i X̃_i^⊤.

Invoking the Sherman-Morrison-Woodbury formula (e.g. [26]), we have

    G̃^{−1} = G̃_{(i)}^{−1} − ( ρ''(X̃_i^⊤ β̃)/n ) · G̃_{(i)}^{−1} X̃_i X̃_i^⊤ G̃_{(i)}^{−1} / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ).      (133)

It follows that

    X̃_i^⊤ G̃^{−1} X̃_i = X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − ( ρ''(X̃_i^⊤ β̃)/n ) ( X̃_i^⊤ G̃_{(i)}^{−1} X̃_i )² / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ),

which implies that

    X̃_i^⊤ G̃^{−1} X̃_i = X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ).      (134)

The relations (132) and (134) taken collectively reveal that

    (1/n) Tr( D_β̃^{1/2} H D_β̃^{1/2} ) = (1/n) Σ_{i=1}^n ρ''(X̃_i^⊤ β̃) / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ).      (135)
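As a quick sanity check of the rank-one updates above (not part of the proof), the snippet below builds a synthetic G̃ under a logistic link and verifies the Sherman-Morrison identity (133) and the leverage identity (134) numerically; all names (rho2 for ρ'', Xt for X̃) are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 40
Xt = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
rho2 = lambda t: np.exp(t) / (1 + np.exp(t)) ** 2     # rho''(t) for rho(t) = log(1 + e^t)

w = rho2(Xt @ beta)                                    # rho''(Xt_i^T beta), all positive
G = (Xt * w[:, None]).T @ Xt / n                       # G-tilde
Ginv = np.linalg.inv(G)

i = 0
Gi = G - w[i] * np.outer(Xt[i], Xt[i]) / n             # G_(i) as in (81)
Gi_inv = np.linalg.inv(Gi)

# Sherman-Morrison (133): recover G^{-1} from G_(i)^{-1}
num = (w[i] / n) * Gi_inv @ np.outer(Xt[i], Xt[i]) @ Gi_inv
den = 1 + (w[i] / n) * Xt[i] @ Gi_inv @ Xt[i]
assert np.allclose(Ginv, Gi_inv - num / den)

# identity (134): X_i^T G^{-1} X_i = X_i^T G_(i)^{-1} X_i / (1 + (rho''_i / n) X_i^T G_(i)^{-1} X_i)
assert np.allclose(Xt[i] @ Ginv @ Xt[i], (Xt[i] @ Gi_inv @ Xt[i]) / den)
```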
We shall show that the trace above is close to Tr(I − H) up to some factors. For this purpose we analyze the latter quantity in two different ways. To begin with, observe that

    Tr(I − H) = Tr( (1/n) D_β̃^{1/2} X̃ G̃^{−1} X̃^⊤ D_β̃^{1/2} ) = Tr( G̃ G̃^{−1} ) = p − 1.      (136)

On the other hand, it directly follows from the definition of H and (134) that the ith diagonal entry of H is given by

    H_{i,i} = 1 / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ).

Applying this relation, we can compute Tr(I − H) analytically as follows:

    Tr(I − H) = Σ_i ( ρ''(X̃_i^⊤ β̃) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / n ) / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i )      (137)
              = Σ_i ρ''(X̃_i^⊤ β̃) α̃ / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i )
                + Σ_i ρ''(X̃_i^⊤ β̃) ( (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − α̃ ) / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i )
              = Σ_i ρ''(X̃_i^⊤ β̃) α̃ H_{i,i}
                + Σ_i ρ''(X̃_i^⊤ β̃) ( (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − α̃ ) / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ),      (138)

where α̃ := (1/n) Tr( G̃^{−1} ). Observe that the first quantity in the right-hand side above is simply α̃ Tr( D_β̃^{1/2} H D_β̃^{1/2} ). For simplicity, denote

    η_i = (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − α̃.      (139)

Note that G̃_{(i)} ⪰ 0 on A_n and that ρ'' > 0. Hence the denominator in the second term in (138) is greater than 1 for all i. Comparing (136) and (138), we deduce that

    | (p − 1)/n − α̃ (1/n) Tr( D_β̃^{1/2} H D_β̃^{1/2} ) | ≤ sup_i |η_i| · (1/n) Σ_i |ρ''(X̃_i^⊤ β̃)| ≲ sup_i |η_i|      (140)

on A_n. It thus suffices to control sup_i |η_i|. The above bounds together with (85) and the proposition below complete the proof.

Proposition 1. Let η_i be as defined in (139). Then there exist universal constants C₁, C₂, C₃ > 0 such that

    P( sup_i |η_i| ≤ C₁ K_n² H_n / √n ) ≥ 1 − C₂ n² exp(−c₂ H_n²) − C₃ n exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1),

where K_n, H_n are diverging sequences as specified in (130).
Proof of Proposition 1: Fix any index i. Recall that β̃_{[−i]} is the MLE when the 1st predictor and ith observation are removed. Also recall the definition of G̃_{[−i]} in (83). The proof essentially follows three steps.

First, note that X̃_i and G̃_{[−i]} are independent. Hence, an application of the Hanson-Wright inequality [44] yields that

    P( | (1/n) X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i − (1/n) Tr( G̃_{[−i]}^{−1} ) | > t  |  G̃_{[−i]} )
        ≤ 2 exp( −c min( t² / ( (K⁴/n²) ‖G̃_{[−i]}^{−1}‖_F² ),  t / ( (K²/n) ‖G̃_{[−i]}^{−1}‖ ) ) )
        ≤ 2 exp( −c min( t² / ( (K⁴/n) ‖G̃_{[−i]}^{−1}‖² ),  t / ( (K²/n) ‖G̃_{[−i]}^{−1}‖ ) ) ).

We choose t = C² ‖G̃_{[−i]}^{−1}‖ H_n / √n, where C > 0 is a sufficiently large constant. Now marginalizing gives

    P( | (1/n) X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i − (1/n) Tr( G̃_{[−i]}^{−1} ) | > C² ‖G̃_{[−i]}^{−1}‖ H_n / √n )
        ≤ 2 exp( −c min( C⁴ H_n² / K⁴,  C² √n H_n / K² ) ) ≤ 2 exp( −C' H_n² ),

where C' > 0 is a sufficiently large constant. On A_n, the spectral norm ‖G̃_{(i)}^{−1}‖ is bounded above by 1/λ_lb for all i. Invoking (85) we obtain that there exist universal constants C₁, C₂, C₃ > 0 such that

    P( sup_i | (1/n) X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i − (1/n) Tr( G̃_{[−i]}^{−1} ) | > C₁ H_n / √n ) ≤ C₂ n exp( −C₃ H_n² ).      (141)

The next step consists of showing that Tr( G̃_{[−i]}^{−1} ) (resp. X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i) and Tr( G̃_{(i)}^{−1} ) (resp. X̃_i^⊤ G̃_{(i)}^{−1} X̃_i) are uniformly close across all i. This is established in the following lemma.

Lemma 15. Let G̃_{(i)} and G̃_{[−i]} be defined as in (81) and (83), respectively. Then there exist universal constants C₁, C₂, C₃, C₄, c₂, c₃ > 0 such that

    P( sup_i | (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − (1/n) X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i | ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n² exp(−c₂ H_n²) − C₃ n exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1),      (142)

    P( sup_i | (1/n) Tr( G̃_{(i)}^{−1} ) − (1/n) Tr( G̃_{[−i]}^{−1} ) | ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n² exp(−c₂ H_n²) − C₃ n exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1),      (143)

where K_n, H_n are diverging sequences as defined in (130).

This together with (141) yields that

    P( sup_i | (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − (1/n) Tr( G̃_{(i)}^{−1} ) | > C₁ K_n² H_n / √n )
        ≤ C₂ n² exp(−c₂ H_n²) + C₃ n exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).      (144)

The final ingredient is to establish that (1/n) Tr( G̃_{(i)}^{−1} ) and (1/n) Tr( G̃^{−1} ) are uniformly close across i.

Lemma 16. Let G̃ and G̃_{(i)} be as defined in (80) and (81), respectively. Then one has

    P( | Tr( G̃_{(i)}^{−1} ) − Tr( G̃^{−1} ) | ≤ 1/λ_lb ) ≥ 1 − exp(−Ω(n)).      (145)

This completes the proof.
Proof of Lemma 15: For two invertible matrices A and B of the same dimensions, the difference of their inverses can be written as

    A^{−1} − B^{−1} = A^{−1} (B − A) B^{−1}.

Applying this identity, we have

    G̃_{[−i]}^{−1} − G̃_{(i)}^{−1} = G̃_{[−i]}^{−1} ( G̃_{(i)} − G̃_{[−i]} ) G̃_{(i)}^{−1}.

From the definition of these matrices, it follows directly that

    G̃_{[−i]} − G̃_{(i)} = (1/n) Σ_{j: j≠i} ( ρ''( X̃_j^⊤ β̃_{[−i]} ) − ρ''( X̃_j^⊤ β̃ ) ) X̃_j X̃_j^⊤.      (146)

As ρ''' is bounded, by the mean-value theorem, it suffices to control the differences X̃_j^⊤ β̃_{[−i]} − X̃_j^⊤ β̃ uniformly across all j. This is established in the following lemma, the proof of which is deferred to Appendix H.

Lemma 17. Let β̂ be the full model MLE and β̂_{[−i]} be the MLE when the ith observation is dropped. Let q_i be as described in Lemma 18 and K_n, H_n be as in (130). Then there exist universal constants C₁, C₂, C₃, C₄, c₂, c₃ > 0 such that

    P( sup_{j≠i} | X_j^⊤ β̂_{[−i]} − X_j^⊤ β̂ | ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n exp(−c₂ H_n²) − C₃ exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1),      (147)

    P( sup_i | X_i^⊤ β̂ − prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) | ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n exp(−c₂ H_n²) − C₃ exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1).      (148)

Invoking this lemma, we see that the spectral norm of (146) is bounded above by some constant times

    ( K_n² H_n / √n ) · ‖ Σ_{j: j≠i} X̃_j X̃_j^⊤ / n ‖

with high probability as specified in (147). From Lemma 2, the spectral norm here is bounded by some constant with probability at least 1 − c₁ exp(−c₂ n). These observations together with (85) and the fact that on A_n the minimum eigenvalues of G̃_{(i)} and G̃_{[−i]} are bounded below by λ_lb yield that

    P( ‖ G̃_{(i)}^{−1} − G̃_{[−i]}^{−1} ‖ ≤ C₁ K_n² H_n / √n ) ≥ 1 − C₂ n exp(−c₂ H_n²) − C₃ exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))).

This is true for any i. Hence, taking the union bound we obtain

    P( sup_i ‖ G̃_{(i)}^{−1} − G̃_{[−i]}^{−1} ‖ ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n² exp(−c₂ H_n²) − C₃ n exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))).      (149)

In order to establish the first result, note that

    sup_i | (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − (1/n) X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i | ≤ sup_i ( ‖X̃_i‖² / n ) · sup_i ‖ G̃_{(i)}^{−1} − G̃_{[−i]}^{−1} ‖.

To obtain the second result, note that

    sup_i | (1/n) Tr( G̃_{(i)}^{−1} ) − (1/n) Tr( G̃_{[−i]}^{−1} ) | ≤ ( (p − 1)/n ) · sup_i ‖ G̃_{(i)}^{−1} − G̃_{[−i]}^{−1} ‖.

Therefore, combining (149) and Lemma 2 gives the desired result.
Proof of Lemma 16: We restrict ourselves to the event A_n throughout. Recalling (133), one has

    Tr( G̃_{(i)}^{−1} ) − Tr( G̃^{−1} ) = ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−2} X̃_i / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ).

In addition, on A_n we have

    (1/λ_lb) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − X̃_i^⊤ G̃_{(i)}^{−2} X̃_i = (1/λ_lb) X̃_i^⊤ G̃_{(i)}^{−1} ( G̃_{(i)} − λ_lb I ) G̃_{(i)}^{−1} X̃_i ≥ 0.

Combining these results and recognizing that ρ'' > 0, we get

    Tr( G̃_{(i)}^{−1} ) − Tr( G̃^{−1} ) ≤ ( ρ''(X̃_i^⊤ β̃)/n ) (1/λ_lb) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / ( 1 + ( ρ''(X̃_i^⊤ β̃)/n ) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i ) ≤ 1/λ_lb      (150)

as claimed.
G    Proof of Lemma 11

Again, we restrict ourselves to the event A_n on which G̃ ⪰ λ_lb I. Note that

    X̃_i^⊤ G̃^{−1} w = (1/n) X̃_i^⊤ G̃^{−1} X̃^⊤ D_β̃ X_{·1}.

Note that {G̃, X̃} and X_{·1} are independent. Conditional on X̃, the left-hand side is Gaussian with mean zero and variance (1/n²) X̃_i^⊤ G̃^{−1} X̃^⊤ D_β̃² X̃ G̃^{−1} X̃_i. The variance is bounded above by

    σ_X² := (1/n²) X̃_i^⊤ G̃^{−1} X̃^⊤ D_β̃² X̃ G̃^{−1} X̃_i ≤ sup_i ρ''(X̃_i^⊤ β̃) · (1/n²) X̃_i^⊤ G̃^{−1} X̃^⊤ D_β̃ X̃ G̃^{−1} X̃_i
         = sup_i ρ''(X̃_i^⊤ β̃) · (1/n) X̃_i^⊤ G̃^{−1} X̃_i ≲ (1/n) ‖X̃_i‖².      (151)

In turn, Lemma 2 asserts that n^{−1} ‖X̃_i‖² is bounded by a constant with high probability. As a result, applying Gaussian concentration results [49, Theorem 2.1.12] gives

    | X̃_i^⊤ G̃^{−1} w | ≲ H_n

with probability exceeding 1 − C exp(−c H_n²), where C, c > 0 are universal constants.

In addition, sup_i |X_{i1}| ≲ H_n holds with probability exceeding 1 − C exp(−c H_n²). Putting the above results together, applying the triangle inequality |X_{i1} − X̃_i^⊤ G̃^{−1} w| ≤ |X_{i1}| + |X̃_i^⊤ G̃^{−1} w|, and taking the union bound, we obtain

    P( sup_{1≤i≤n} | X_{i1} − X̃_i^⊤ G̃^{−1} w | ≲ H_n ) ≥ 1 − C n exp(−c H_n²) = 1 − o(1).
H    Proof of Lemma 17

The goal of this section is to prove Lemma 17, which relates the full-model MLE β̂ and the MLE β̂_{[−i]}. To this end, we establish the key lemma below.

Lemma 18. Suppose β̂_{[−i]} denotes the MLE when the ith observation is dropped. Further let G_{[−i]} be as in (82), and define q_i and b̂ as follows:

    q_i = (1/n) X_i^⊤ G_{[−i]}^{−1} X_i;
    b̂ = β̂_{[−i]} − (1/n) G_{[−i]}^{−1} X_i ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ).      (152)

Suppose K_n, H_n are diverging sequences as in (130). Then there exist universal constants C₁, C₂, C₃ > 0 such that

    P( ‖β̂ − b̂‖ ≤ C₁ K_n² H_n / n ) ≥ 1 − C₂ n exp(−c₂ H_n²) − C₃ exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1)));      (153)

    P( sup_{j≠i} | X_j^⊤ β̂_{[−i]} − X_j^⊤ b̂ | ≤ C₁ K_n H_n / √n )
        ≥ 1 − C₂ n exp(−c₂ H_n²) − C₃ exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))).      (154)
The proof ideas are inspired by the leave-one-observation-out approach of [20]. We however emphasize once more that the adaptation of these ideas to our setup is not straightforward and crucially hinges on Theorem 4, Lemma 7 and properties of the effective link function.

Proof of Lemma 18: Invoking techniques similar to those used for establishing Lemma 7, it can be shown that

    (1/n) Σ_{i=1}^n ρ''(γ_i*) X_i X_i^⊤ ⪰ λ_lb I      (155)

with probability at least 1 − exp(−Ω(n)), where γ_i* is between X_i^⊤ b̂ and X_i^⊤ β̂. Denote by B_n the event where (155) holds. Throughout this proof, we work on the event C_n := A_n ∩ B_n, which has probability 1 − exp(−Ω(n)). As in (105) then,

    ‖β̂ − b̂‖ ≤ ( 1/(n λ_lb) ) ‖∇ℓ(b̂)‖.      (156)

Next, we simplify (156). To this end, recall the defining relation of the proximal operator

    b ρ'( prox_{bρ}(z) ) + prox_{bρ}(z) = z,

which together with the definitions of b̂ and q_i gives

    X_i^⊤ b̂ = prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ).      (157)
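For readers who want to evaluate prox_{bρ} numerically, here is a small Newton iteration based solely on the defining relation above, assuming the logistic effective link ρ(t) = log(1 + eᵗ); it is an illustrative helper, not code from the paper. With b = q_i it returns exactly the quantity appearing in (157).

```python
import numpy as np

def prox_brho(z, b, iters=50):
    """Solve b*rho'(x) + x = z for x by Newton's method, with rho(t) = log(1 + e^t)."""
    x = 0.0
    for _ in range(iters):
        s = 1.0 / (1.0 + np.exp(-x))                   # rho'(x), the sigmoid
        # residual of the defining relation; its derivative b*rho''(x) + 1 is always >= 1
        x -= (b * s + x - z) / (b * s * (1.0 - s) + 1.0)
    return x

z, b = 1.7, 0.8                                        # arbitrary test values
x = prox_brho(z, b)
assert abs(b / (1.0 + np.exp(-x)) + x - z) < 1e-10     # b*rho'(x) + x = z holds
```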
Now, let ℓ_{[−i]} denote the negative log-likelihood function when the ith observation is dropped, and hence ∇ℓ_{[−i]}( β̂_{[−i]} ) = 0. Expressing ∇ℓ(b̂) as ∇ℓ(b̂) − ∇ℓ_{[−i]}( β̂_{[−i]} ), applying the mean value theorem, and using an analysis similar to that in [20, Proposition 3.4], we obtain

    (1/n) ∇ℓ(b̂) = (1/n) Σ_{j: j≠i} [ ρ''(γ_j*) − ρ''( X_j^⊤ β̂_{[−i]} ) ] X_j X_j^⊤ ( b̂ − β̂_{[−i]} ),      (158)

where γ_j* is between X_j^⊤ b̂ and X_j^⊤ β̂_{[−i]}. Combining (156) and (158) leads to the upper bound

    ‖β̂ − b̂‖ ≤ (1/λ_lb) ‖ (1/n) Σ_{j: j≠i} X_j X_j^⊤ ‖ · sup_{j≠i} | ρ''(γ_j*) − ρ''( X_j^⊤ β̂_{[−i]} ) | · ‖ (1/n) G_{[−i]}^{−1} X_i ‖ · | ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) |.      (159)

We need to control each term in the right-hand side. To start with, the first term is bounded by a universal constant with probability 1 − exp(−Ω(n)) (Lemma 2). For the second term, since γ_j* is between X_j^⊤ b̂ and X_j^⊤ β̂_{[−i]} and ‖ρ'''‖_∞ < ∞, we get

    sup_{j≠i} | ρ''(γ_j*) − ρ''( X_j^⊤ β̂_{[−i]} ) | ≤ ‖ρ'''‖_∞ sup_{j≠i} | X_j^⊤ b̂ − X_j^⊤ β̂_{[−i]} |      (160)
        = ‖ρ'''‖_∞ sup_{j≠i} (1/n) | X_j^⊤ G_{[−i]}^{−1} X_i ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) |      (161)
        = ‖ρ'''‖_∞ · | ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) | · sup_{j≠i} (1/n) | X_j^⊤ G_{[−i]}^{−1} X_i |.      (162)

Given that {X_j, G_{[−i]}} and X_i are independent for all j ≠ i, conditional on {X_j, G_{[−i]}} one has

    X_j^⊤ G_{[−i]}^{−1} X_i ∼ N( 0, X_j^⊤ G_{[−i]}^{−2} X_j ).

In addition, the variance satisfies

    | X_j^⊤ G_{[−i]}^{−2} X_j | ≤ ‖X_j‖² / λ_lb² ≲ n      (163)

with probability at least 1 − exp(−Ω(n)). Applying standard Gaussian concentration results [49, Theorem 2.1.12], we obtain

    P( (1/√n) | X_j^⊤ G_{[−i]}^{−1} X_i | ≥ C₁ H_n ) ≤ C₂ exp(−c₂ H_n²) + exp(−C₃ n (1 + o(1))).      (164)

By the union bound,

    P( (1/√n) sup_{j≠i} | X_j^⊤ G_{[−i]}^{−1} X_i | ≤ C₁ H_n ) ≥ 1 − n C₂ exp(−c₂ H_n²) − exp(−C₃ n (1 + o(1))).      (165)

Consequently,

    sup_{j≠i} | ρ''(γ_j*) − ρ''( X_j^⊤ β̂_{[−i]} ) | ≲ sup_{j≠i} | X_j^⊤ b̂ − X_j^⊤ β̂_{[−i]} | ≲ ( H_n/√n ) | ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) |.      (166)

In addition, the third term in the right-hand side of (159) can be upper bounded as well, since

    ‖ (1/n) G_{[−i]}^{−1} X_i ‖ = (1/n) √( X_i^⊤ G_{[−i]}^{−2} X_i ) ≲ 1/√n      (167)

with high probability.

It remains to bound | ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) |. To do this, we begin by considering ρ'( prox_{cρ}(Z) ) for any constant c > 0 (rather than a random variable q_i). Recall that for any constant c > 0 and any Z ∼ N(0, σ²) with finite variance, the random variable ρ'( prox_{cρ}(Z) ) is sub-Gaussian. Conditional on β̂_{[−i]}, one has X_i^⊤ β̂_{[−i]} ∼ N( 0, ‖β̂_{[−i]}‖² ). This yields

    P( ρ'( prox_{cρ}( X_i^⊤ β̂_{[−i]} ) ) ≥ C₁ K_n ) ≤ C₂ E[ exp( − C₃² K_n² / ‖β̂_{[−i]}‖² ) ] ≤ C₂ exp(−C₃ K_n²) + C₄ exp(−C₅ n)      (168)

for some constants C₁, C₂, C₃, C₄, C₅ > 0, since ‖β̂_{[−i]}‖ is bounded with high probability (see Theorem 4).

Note that ∂prox_{bρ}(z)/∂b ≤ 0 by [18, Proposition 6.3]. Hence, in order to move over from the above concentration result established for a fixed constant c to the random variables q_i, it suffices to establish a uniform lower bound for q_i with high probability. Observe that for each i,

    q_i ≥ ‖X_i‖² / ( n ‖G_{[−i]}‖ ) ≥ C*

with probability 1 − exp(−Ω(n)), where C* is some universal constant. On this event, one has

    ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) ≤ ρ'( prox_{C* ρ}( X_i^⊤ β̂_{[−i]} ) ).      (169)

This taken collectively with (168) yields

    P( ρ'( prox_{q_i ρ}( X_i^⊤ β̂_{[−i]} ) ) ≤ C₁ K_n ) ≥ P( ρ'( prox_{C* ρ}( X_i^⊤ β̂_{[−i]} ) ) ≤ C₁ K_n )
        ≥ 1 − C₂ exp(−C₃ K_n²) − C₄ exp(−C₅ n).      (170)

This controls the last term.

To summarize, if {K_n} and {H_n} are diverging sequences satisfying the assumptions in (130), combining (159) and the bounds for each term in the right-hand side finally gives (153). On the other hand, combining (165) and (170) yields (154).
With the help of Lemma 18 we are ready to prove Lemma 17. Indeed, observe that

    | X_j^⊤ ( β̂_{[−i]} − β̂ ) | ≤ | X_j^⊤ ( b̂ − β̂ ) | + | X_j^⊤ ( β̂_{[−i]} − b̂ ) |,

and hence by combining Lemma 2 and Lemma 18, we establish the first claim (147). The second claim (148) follows directly from Lemmas 2, 18 and (157).
I    Proof of Theorem 7(b)

This section proves that the random sequence α̃ = Tr( G̃^{−1} )/n converges in probability to the constant b_* defined by the system of equations (23) and (24). To begin with, we claim that α̃ is close to a set of auxiliary random variables {q̃_i} defined below.

Lemma 19. Define q̃_i to be

    q̃_i = (1/n) X̃_i^⊤ G̃_{[−i]}^{−1} X̃_i,

where G̃_{[−i]} is defined in (83). Then there exist universal constants C₁, C₂, C₃, C₄, c₂, c₃ > 0 such that

    P( sup_i | q̃_i − α̃ | ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n² exp(−c₂ H_n²) − C₃ n exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1),

where K_n, H_n are as in (130).

Proof: This result follows directly from Proposition 1 and equation (142).

A consequence is that prox_{q̃_i ρ}( X_i^⊤ β̂_{[−i]} ) becomes close to prox_{α̃ ρ}( X_i^⊤ β̂_{[−i]} ).

Lemma 20. Let q̃_i and α̃ be as defined earlier. Then one has

    P( sup_i | prox_{q̃_i ρ}( X_i^⊤ β̂_{[−i]} ) − prox_{α̃ ρ}( X_i^⊤ β̂_{[−i]} ) | ≤ C₁ K_n³ H_n / √n )
        ≥ 1 − C₂ n² exp(−c₂ H_n²) − C₃ n exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))) = 1 − o(1),      (171)

where K_n, H_n are as in (130).
The key idea behind studying prox_{α̃ ρ}( X_i^⊤ β̂_{[−i]} ) is that it is connected to a random function δ_n(·) defined below, which happens to be closely related to the equation (24). In fact, we will show that δ_n(α̃) converges in probability to 0; the proof relies on the connection between prox_{α̃ ρ}( X_i^⊤ β̂_{[−i]} ) and the auxiliary quantity prox_{q̃_i ρ}( X_i^⊤ β̂_{[−i]} ). The formal result is this:

Proposition 2. For any index i, let β̂_{[−i]} be the MLE obtained on dropping the ith observation. Define δ_n(x) to be the random function

    δ_n(x) := p/n − 1 + (1/n) Σ_{i=1}^n 1 / ( 1 + x ρ''( prox_{xρ}( X_i^⊤ β̂_{[−i]} ) ) ).      (172)

Then one has δ_n(α̃) →_P 0.

Furthermore, the random function δ_n(x) converges to a deterministic function ∆(x) defined by

    ∆(x) = κ − 1 + E_Z[ 1 / ( 1 + x ρ''( prox_{xρ}( τ_* Z ) ) ) ],      (173)

where Z ∼ N(0, 1), and τ_* is such that (τ_*, b_*) is the unique solution to (23) and (24).

Proposition 3. With ∆(x) as in (173), ∆(α̃) →_P 0.

In fact, one can easily verify that

    ∆(x) = κ − E[ Ψ'( τ_* Z; x ) ],      (174)

and hence by Lemma 5, the solution to ∆(x) = 0 is exactly b_*. As a result, putting the above claims together, we show that α̃ converges in probability to b_*.

It remains to formally prove the preceding lemmas and propositions, which is the goal of the rest of this section.
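Purely for intuition, the deterministic map ∆ in (173) can be evaluated by Monte Carlo and its root located numerically. The sketch below assumes the logistic link and uses placeholder values of κ and τ_* (in the theory, (τ_*, b_*) jointly solve (23) and (24), so these inputs are illustrative only); the bisection brackets and the brentq interval are likewise ad hoc choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

rho1 = lambda t: 1.0 / (1.0 + np.exp(-t))              # rho'
rho2 = lambda t: rho1(t) * (1.0 - rho1(t))              # rho''

def prox(z, b, iters=80):
    """Vectorized bisection for prox_{b*rho}(z), i.e. the root in x of b*rho'(x) + x - z."""
    lo, hi = z - b - 1.0, z + 1.0                        # valid brackets since 0 <= rho' <= 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        pos = b * rho1(mid) + mid - z > 0
        lo, hi = np.where(pos, lo, mid), np.where(pos, mid, hi)
    return 0.5 * (lo + hi)

def Delta(x, kappa, tau, n_mc=100_000, seed=0):
    Z = np.random.default_rng(seed).standard_normal(n_mc)
    return kappa - 1.0 + np.mean(1.0 / (1.0 + x * rho2(prox(tau * Z, x))))

kappa, tau = 0.1, 1.5                                    # placeholder values for illustration only
b_star = brentq(lambda x: Delta(x, kappa, tau), 1e-3, 50.0)
print(b_star)                                            # the root of Delta, playing the role of b_*
```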
Proof of Lemma 20: By [18, Proposition 6.3], one has

    ∂prox_{bρ}(z) / ∂b = − ρ'(x) / ( 1 + b ρ''(x) ) |_{x = prox_{bρ}(z)},

which yields

    sup_i | prox_{q̃_i ρ}( X_i^⊤ β̂_{[−i]} ) − prox_{α̃ ρ}( X_i^⊤ β̂_{[−i]} ) |
        = sup_i | ρ'(x) / ( 1 + q_{α̃,i} ρ''(x) ) |_{x = prox_{q_{α̃,i} ρ}( X_i^⊤ β̂_{[−i]} )} | · | q̃_i − α̃ |
        ≤ sup_i | ρ'( prox_{q_{α̃,i} ρ}( X_i^⊤ β̂_{[−i]} ) ) | · sup_i | q̃_i − α̃ |,      (175)

where q_{α̃,i} is between q̃_i and α̃. Here, the last inequality holds since q_{α̃,i}, ρ'' ≥ 0.

In addition, just as in the proof of Lemma 18, one can show that q̃_i is bounded below by some constant C* > 0 with probability 1 − exp(−Ω(n)). Since q_{α̃,i} ≥ min{q̃_i, α̃}, on the event sup_i |q̃_i − α̃| ≤ C₁ K_n² H_n / √n, which happens with high probability (Lemma 19), q_{α̃,i} ≥ C_α for some universal constant C_α > 0. Hence, by an argument similar to that establishing (170), we have

    P( sup_i ρ'( prox_{q_{α̃,i} ρ}( X_i^⊤ β̂_{[−i]} ) ) ≥ C₁ K_n )
        ≤ C₂ n² exp(−c₂ H_n²) + C₃ n exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).

This together with (175) and Lemma 19 concludes the proof.
Proof of Proposition 2: To begin with, recall from (136) and (137) that on A_n,

    (p − 1)/n = (1/n) Σ_{i=1}^n ( ρ''(X̃_i^⊤ β̃) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / n ) / ( 1 + ρ''(X̃_i^⊤ β̃) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / n )
              = 1 − (1/n) Σ_{i=1}^n 1 / ( 1 + ρ''(X̃_i^⊤ β̃) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / n ).      (176)

Using the fact that | 1/(1+x) − 1/(1+y) | ≤ |x − y| for x, y ≥ 0, we obtain

    | (1/n) Σ_{i=1}^n 1 / ( 1 + ρ''(X̃_i^⊤ β̃) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i / n ) − (1/n) Σ_{i=1}^n 1 / ( 1 + ρ''(X̃_i^⊤ β̃) α̃ ) |
        ≤ (1/n) Σ_{i=1}^n ρ''(X̃_i^⊤ β̃) | (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − α̃ | ≤ ‖ρ''‖_∞ sup_i | (1/n) X̃_i^⊤ G̃_{(i)}^{−1} X̃_i − α̃ |
        = ‖ρ''‖_∞ sup_i |η_i| ≤ C₁ K_n² H_n / √n,

with high probability (Proposition 1). This combined with (176) yields

    P( | (p − 1)/n − 1 + (1/n) Σ_{i=1}^n 1 / ( 1 + ρ''(X̃_i^⊤ β̃) α̃ ) | ≥ C₁ K_n² H_n / √n )
        ≤ C₂ n² exp(−c₂ H_n²) + C₃ n exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).

The above bound concerns (1/n) Σ_{i=1}^n 1/( 1 + ρ''(X̃_i^⊤ β̃) α̃ ), and it remains to relate it to (1/n) Σ_{i=1}^n 1/( 1 + ρ''( prox_{α̃ρ}( X̃_i^⊤ β̃_{[−i]} ) ) α̃ ). To this end, we first get from the uniform boundedness of ρ''' and Lemma 17 that

    P( sup_i | ρ''(X̃_i^⊤ β̃) − ρ''( prox_{q̃_i ρ}( X̃_i^⊤ β̃_{[−i]} ) ) | ≥ C₁ K_n² H_n / √n )
        ≤ C₂ n exp(−c₂ H_n²) + C₃ exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).      (177)

Note that

    | (1/n) Σ_{i=1}^n 1/( 1 + ρ''(X̃_i^⊤ β̃) α̃ ) − (1/n) Σ_{i=1}^n 1/( 1 + ρ''( prox_{α̃ρ}( X̃_i^⊤ β̃_{[−i]} ) ) α̃ ) |
        ≤ |α̃| sup_i | ρ''(X̃_i^⊤ β̃) − ρ''( prox_{α̃ρ}( X̃_i^⊤ β̃_{[−i]} ) ) |
        ≤ |α̃| sup_i { | ρ''(X̃_i^⊤ β̃) − ρ''( prox_{q̃_iρ}( X̃_i^⊤ β̃_{[−i]} ) ) | + | ρ''( prox_{q̃_iρ}( X̃_i^⊤ β̃_{[−i]} ) ) − ρ''( prox_{α̃ρ}( X̃_i^⊤ β̃_{[−i]} ) ) | }.

By the bound (177), an application of Lemma 20, and the fact that α̃ ≤ p/(n λ_lb) (on A_n), we obtain

    P( | p/n − 1 + (1/n) Σ_{i=1}^n 1/( 1 + ρ''( prox_{α̃ρ}( X_i^⊤ β̂_{[−i]} ) ) α̃ ) | ≥ C₁ K_n³ H_n / √n )
        ≤ C₂ n² exp(−c₂ H_n²) + C₃ n exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).

This establishes that δ_n(α̃) →_P 0.
Proof of Proposition 3: Note that since 0 < α̃ ≤ p/(n λ_lb) := B on A_n, it suffices to show that

    sup_{x∈[0,B]} | δ_n(x) − ∆(x) | →_P 0.

We do this by following three steps. Below, M > 0 is some sufficiently large constant.

1. First we truncate the random function δ_n(x) and define

       δ̃_n(x) := p/n − 1 + (1/n) Σ_{i=1}^n 1 / ( 1 + x ρ''( prox_{xρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖ ≤ M} ) ) ).

   The first step is to show that sup_{x∈[0,B]} | δ̃_n(x) − δ_n(x) | →_P 0. We stress that this truncation does not arise in [20], and we keep track of the truncation throughout the rest of the proof.

2. Show that sup_{x∈[0,B]} | δ̃_n(x) − E[ δ̃_n(x) ] | →_P 0.

3. Show that sup_{x∈[0,B]} | E[ δ̃_n(x) ] − ∆(x) | → 0.

To argue about the first step, observe that | 1/(1+y) − 1/(1+z) | ≤ |y − z| for any y, z > 0 and that | ∂prox_{cρ}(x)/∂x | ≤ 1 [18, Proposition 6.3]. Then

    | δ_n(x) − δ̃_n(x) | ≤ |x| · ‖ρ'''‖_∞ · sup_i | X_i^⊤ β̂_{[−i]} − X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖ ≤ M} |.

For a sufficiently large constant M > 0, P( ‖β̂_{[−i]}‖ ≥ M ) ≤ exp(−Ω(n)) by Theorem 4. Hence, for any ε > 0,

    P( sup_{x∈[0,B]} | δ_n(x) − δ̃_n(x) | ≥ ε ) ≤ P( sup_i | X_i^⊤ β̂_{[−i]} | 1_{‖β̂_{[−i]}‖ ≥ M} ≥ ε / ( B ‖ρ'''‖_∞ ) )
        ≤ Σ_{i=1}^n P( ‖β̂_{[−i]}‖ ≥ M ) = o(1),      (178)

establishing Step 1.

To argue about Step 2, note that for any x and z,

    | δ̃_n(x) − δ̃_n(z) |
        ≤ (1/n) Σ_{i=1}^n | 1/( 1 + x ρ''( prox_{xρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) ) − 1/( 1 + z ρ''( prox_{zρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) ) |
        ≤ (1/n) Σ_{i=1}^n | x ρ''( prox_{xρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) − z ρ''( prox_{zρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) |
        ≤ (1/n) Σ_{i=1}^n ( ‖ρ''‖_∞ |x − z| + |z| · ‖ρ'''‖_∞ · | ρ'(x̄) / ( 1 + z̃ ρ''(x̄) ) |_{x̄ = prox_{z̃ρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} )} · |x − z| )
        ≤ |x − z| ( ‖ρ''‖_∞ + |z| · ‖ρ'''‖_∞ · (1/n) Σ_{i=1}^n | ρ'( prox_{z̃ρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) | ),

where z̃ ∈ (x, z). Setting

    Y_n := ‖ρ''‖_∞ + B ‖ρ'''‖_∞ · (1/n) Σ_{i=1}^n | ρ'( prox_{z̃ρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) |,

then for any ε, η > 0 we have

    P( sup_{x,z∈(0,B], |x−z|≤η} | δ̃_n(x) − δ̃_n(z) | ≥ ε ) ≤ P( Y_n ≥ ε/η ) ≤ (η/ε) E[Y_n] ≤ η C₁(ε),      (179)

where C₁(ε) is some function independent of n. The inequality (179) is an analogue of [20, Lemma 3.24]. We remark that the truncation is particularly important here in guaranteeing that E[Y_n] < ∞.

Set

    G_n(x) := E[ δ̃_n(x) ],

and observe that

    | G_n(x) − G_n(z) | ≤ |x − z| ( ‖ρ''‖_∞ + |z| ‖ρ'''‖_∞ E[ | ρ'( prox_{z̃ρ}( X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} ) ) | ] ).

A similar inequality applies to ∆(x), in which X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M} is replaced by τ_* Z. In either case,

    sup_{x,z∈(0,B], |x−z|≤η} | G_n(x) − G_n(z) | ≤ C₂ η    and    sup_{x,z∈(0,B], |x−z|≤η} | ∆(x) − ∆(z) | ≤ C₃ η      (180)

for any η > 0.

For any ε' > 0, set K = max{C₁(ε'), C₂}. Next, divide [0, B] into finitely many segments

    [0, x₁), [x₁, x₂), . . . , [x_{K−1}, x_K := B]

such that the length of each segment is η/K for any η > 0. Then for every x ∈ [0, B], there exists l such that |x − x_l| ≤ η/K. As a result, for any x ∈ [0, B],

    sup_{x∈(0,B]} | δ̃_n(x) − G_n(x) | ≤ η + sup_{x, x_l ∈(0,B], |x−x_l|≤η/K} | δ̃_n(x) − δ̃_n(x_l) | + sup_{1≤l≤K} | δ̃_n(x_l) − G_n(x_l) |.

Now fix δ > 0, ε > 0. Applying the above inequality gives

    P( sup_{x∈(0,B]} | δ̃_n(x) − G_n(x) | ≥ δ )
        ≤ P( sup_{x, x_l ∈(0,B], |x−x_l|≤η/K} | δ̃_n(x) − δ̃_n(x_l) | ≥ (δ − η)/2 ) + P( sup_{1≤l≤K} | δ̃_n(x_l) − G_n(x_l) | ≥ (δ − η)/2 )
        ≤ (η/K) C₁( (δ − η)/2 ) + P( sup_{1≤l≤K} | δ̃_n(x_l) − G_n(x_l) | ≥ (δ − η)/2 ).      (181)

Choose η < min{ε/2, δ} and K = max{C₁((δ − η)/2), C₂}. Then the first term in the right-hand side is at most ε/2. Furthermore, suppose one can establish that for any fixed x,

    | δ̃_n(x) − G_n(x) | →_P 0.      (182)

Since the second term in the right-hand side is a supremum over finitely many points, there exists an integer N such that for all n ≥ N, the second term is less than or equal to ε/2. Hence, for all n ≥ N, the right-hand side is at most ε, which proves Step 2. Thus, it remains to prove (182) for any fixed x ≥ 0. We do this after the analysis of Step 3.

We argue about Step 3 in a similar fashion: letting K = max{C₂, C₃}, divide (0, B] into segments of length η/K. For every x ∈ (0, B] there exists x_l such that |x − x_l| ≤ η/K. Then

    | G_n(x) − ∆(x) | ≤ | G_n(x) − G_n(x_l) | + | G_n(x_l) − ∆(x_l) | + | ∆(x_l) − ∆(x) | ≤ 2η + | G_n(x_l) − ∆(x_l) |,

    ⟹   sup_{x∈[0,B]} | G_n(x) − ∆(x) | ≤ 2η + sup_{1≤l≤K} | G_n(x_l) − ∆(x_l) |.

Hence, it suffices to show that for any fixed x, | G_n(x) − ∆(x) | → 0. To this end, observe that for any fixed x,

    | G_n(x) − ∆(x) | ≤ | p/n − κ | + | E_X[ 1 / ( 1 + x ρ''( prox_{xρ}( X_1^⊤ β̂_{[−1]} 1_{‖β̂_{[−1]}‖≤M} ) ) ) ] − E_Z[ 1 / ( 1 + x ρ''( prox_{xρ}( τ_* Z ) ) ) ] |.

Additionally,

    X_1^⊤ β̂_{[−1]} 1_{‖β̂_{[−1]}‖≤M} = ‖β̂_{[−1]}‖ 1_{‖β̂_{[−1]}‖≤M} Z̃,

where Z̃ ∼ N(0, 1). Since ‖β̂_{[−1]}‖ 1_{‖β̂_{[−1]}‖≤M} →_P τ_*, by Slutsky's theorem, X_1^⊤ β̂_{[−1]} 1_{‖β̂_{[−1]}‖≤M} converges weakly to τ_* Z̃. Since t ↦ 1/( 1 + x ρ''( prox_{xρ}(t) ) ) is bounded, one directly gets

    G_n(x) − ∆(x) → 0

for every x.

Finally, we establish (182). To this end, we will prove instead that for any given x,

    δ̃_n(x) − G_n(x) →_{L₂} 0.

Define

    M_i := X_i^⊤ β̂_{[−i]} 1_{‖β̂_{[−i]}‖≤M}    and    f(M_i) := 1 / ( 1 + x ρ''( prox_{xρ}(M_i) ) ) − E[ 1 / ( 1 + x ρ''( prox_{xρ}(M_i) ) ) ].

Then δ̃_n(x) − G_n(x) = (1/n) Σ_{i=1}^n f(M_i). Hence, for any x ∈ [0, B],

    Var[ δ̃_n(x) ] = (1/n²) Σ_{i=1}^n E[ f²(M_i) ] + (1/n²) Σ_{i≠j} E[ f(M_i) f(M_j) ]
                  = E[ f²(M₁) ]/n + ( n(n−1)/n² ) E[ f(M₁) f(M₂) ].

The first term in the right-hand side is at most 1/n. Hence, it suffices to show that E[ f(M₁) f(M₂) ] → 0.

Let β̂_{[−12]} be the MLE when the 1st and 2nd observations are dropped. Define

    G_{[−12]} := (1/n) Σ_{j≠1,2} ρ''( X_j^⊤ β̂_{[−12]} ) X_j X_j^⊤,     q₂ := (1/n) X₂^⊤ G_{[−12]}^{−1} X₂,
    b̂_{[−1]} := β̂_{[−12]} + (1/n) G_{[−12]}^{−1} X₂ ( − ρ'( prox_{q₂ρ}( X₂^⊤ β̂_{[−12]} ) ) ).

By an application of Lemma 18,

    P( ‖ β̂_{[−1]} − b̂_{[−1]} ‖ ≥ C₁ K_n² H_n / n ) ≤ C₂ n exp(−c₂ H_n²) + C₃ exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).      (183)

Also, by the triangle inequality,

    | X₁^⊤ ( β̂_{[−1]} − β̂_{[−12]} ) | ≤ | X₁^⊤ ( β̂_{[−1]} − b̂_{[−1]} ) | + (1/n) | X₁^⊤ G_{[−12]}^{−1} X₂ | · | ρ'( prox_{q₂ρ}( X₂^⊤ β̂_{[−12]} ) ) |.

Invoking Lemma 2, (183), and an argument similar to that leading to (170) and (164), we obtain

    P( | X₁^⊤ ( β̂_{[−1]} − β̂_{[−12]} ) | ≥ C₁ K_n² H_n / √n ) ≤ C₂ n exp(−c₂ H_n²) + C₃ exp(−c₃ K_n²) + exp(−C₄ n (1 + o(1))).

The event { ‖β̂_{[−1]}‖ ≤ M } ∩ { ‖β̂_{[−12]}‖ ≤ M } occurs with probability at least 1 − C exp(−cn). Hence, one obtains

    P( | X₁^⊤ ( β̂_{[−1]} 1_{‖β̂_{[−1]}‖≤M} − β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) | ≤ C₁ K_n² H_n / √n )
        ≥ 1 − C₂ n exp(−c₂ H_n²) − C₃ exp(−c₃ K_n²) − exp(−C₄ n (1 + o(1))).      (184)

A similar statement continues to hold with X₁ replaced by X₂ and β̂_{[−1]} replaced by β̂_{[−2]}. Some simple computation yields that ‖f'‖_∞ is bounded by some constant times |x|. By the mean value theorem and the fact that ‖f‖_∞ ≤ 1,

    | f(M₁) f(M₂) − f( X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) f( X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) |
        ≤ ‖f‖_∞ { | f(M₁) − f( X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) | + | f(M₂) − f( X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) | }
        ≤ C |x| · | X₁^⊤ ( β̂_{[−1]} 1_{‖β̂_{[−1]}‖≤M} − β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) | + C |x| · | X₂^⊤ ( β̂_{[−2]} 1_{‖β̂_{[−2]}‖≤M} − β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) |.

Consequently,

    f(M₁) f(M₂) − f( X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) f( X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) →_P 0.

As ‖f‖_∞ ≤ 1, this implies convergence in L₁. Thus, it simply suffices to show that

    E[ f( X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) f( X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) ] → 0.

Denote the design matrix on dropping the first and second rows by X_{[−12]}. Note that conditional on X_{[−12]}, X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} and X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} are independent and have distribution

    N( 0, ‖β̂_{[−12]}‖² 1_{‖β̂_{[−12]}‖≤M} ).

Using this and by arguments similar to [20, Lemma 3.23], one can show that

    E[ e^{ i ( t X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} + w X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) } ]
        − E[ e^{ i t X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} } ] E[ e^{ i w X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} } ] → 0.      (185)

On repeated application of the multivariate inversion theorem for obtaining densities from characteristic functions, we get that

    E[ f( X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) f( X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) ]
        − E[ f( X₁^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) ] E[ f( X₂^⊤ β̂_{[−12]} 1_{‖β̂_{[−12]}‖≤M} ) ] → 0.

Since f is centered, this completes the proof.
References
[1] Alan Agresti and Maria Kateri. Categorical data analysis. Springer, 2011.
[2] Noga Alon and Joel H Spencer. The probabilistic method (3rd edition). John Wiley & Sons, 2008.
[3] Dennis Amelunxen, Martin Lotz, Michael B McCoy, and Joel A Tropp. Living on the edge: Phase
transitions in convex programs with random data. Information and Inference, page iau005, 2014.
[4] Árpád Baricz. Mills’ ratio: monotonicity patterns and functional inequalities. Journal of Mathematical
Analysis and Applications, 340(2):1362–1370, 2008.
[5] Maurice S Bartlett. Properties of sufficiency and statistical tests. Proceedings of the Royal Society of
London. Series A, Mathematical and Physical Sciences, pages 268–282, 1937.
[6] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Transactions on Information Theory, 57(2):764–785, 2011.
[7] Mohsen Bayati and Andrea Montanari. The LASSO risk for Gaussian matrices. IEEE Transactions on
Information Theory, 58(4):1997–2017, 2012.
[8] Peter J Bickel and JK Ghosh. A decomposition for the likelihood ratio statistic and the bartlett
correction–a bayesian argument. The Annals of Statistics, pages 1070–1090, 1990.
[9] Stéphane Boucheron and Pascal Massart. A high-dimensional Wilks phenomenon. Probability theory
and related fields, 150(3-4):405–433, 2011.
[10] George Box. A general distribution theory for a class of likelihood criteria. Biometrika, 36(3/4):317–346,
1949.
[11] Emmanuel Candès, Yingying Fan, Lucas Janson, and Jinchi Lv. Panning for gold: Model-free knockoffs
for high-dimensional controlled variable selection. arXiv preprint arXiv:1610.02351, 2016.
[12] Herman Chernoff. On the distribution of the likelihood ratio. The Annals of Mathematical Statistics,
pages 573–578, 1954.
[13] Gauss M Cordeiro. Improved likelihood ratio statistics for generalized linear models. Journal of the
Royal Statistical Society. Series B (Methodological), pages 404–413, 1983.
[14] Gauss M Cordeiro and Francisco Cribari-Neto. An introduction to Bartlett correction and bias reduction.
Springer, 2014.
[15] Gauss M Cordeiro, Franciso Cribari-Neto, Elisete CQ Aubin, and Silvia LP Ferrari. Bartlett corrections
for one-parameter exponential family models. Journal of Statistical Computation and Simulation, 53(34):211–231, 1995.
[16] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
[17] Francisco Cribari-Neto and Gauss M Cordeiro. On Bartlett and Bartlett-type corrections. Econometric Reviews, 15(4):339–367, 1996.
[18] David Donoho and Andrea Montanari. High dimensional robust M-estimation: Asymptotic variance
via approximate message passing. Probability Theory and Related Fields, pages 1–35, 2013.
[19] Noureddine El Karoui. Asymptotic behavior of unregularized and ridge-regularized high-dimensional
robust regression estimators: rigorous results. arXiv preprint arXiv:1311.2445, 2013.
[20] Noureddine El Karoui. On the impact of predictor geometry on the performance on high-dimensional
ridge-regularized generalized robust regression estimators. Probability Theory and Related Fields, pages
1–81, 2017.
[21] Noureddine El Karoui, Derek Bean, Peter J Bickel, Chinghway Lim, and Bin Yu. On robust regression
with high-dimensional predictors. Proceedings of the National Academy of Sciences, 110(36):14557–
14562, 2013.
[22] Jianqing Fan and Jiancheng Jiang. Nonparametric inference with generalized likelihood ratio tests.
Test, 16(3):409–444, 2007.
[23] Jianqing Fan and Jinchi Lv. Nonconcave penalized likelihood with NP-dimensionality. IEEE Transactions on Information Theory, 57(8):5467–5484, 2011.
[24] Jianqing Fan, Chunming Zhang, and Jian Zhang. Generalized likelihood ratio statistics and Wilks
phenomenon. Annals of statistics, pages 153–193, 2001.
[25] Yingying Fan, Emre Demirkaya, and Jinchi Lv. Nonuniformity of p-values can occur early in diverging
dimensions. https://arxiv.org/abs/1705.03604, May 2017.
[26] William W Hager. Updating the inverse of a matrix. SIAM review, 31(2):221–239, 1989.
[27] David L. Hanson and Farroll T. Wright. A bound on tail probabilities for quadratic forms in independent
random variables. The Annals of Mathematical Statistics, 42(3):1079–1083, 1971.
[28] Xuming He and Qi-Man Shao. On parameters of increasing dimensions. Journal of Multivariate Analysis,
73(1):120–135, 2000.
[29] Daniel Hsu, Sham Kakade, and Tong Zhang. A tail inequality for quadratic forms of subgaussian random
vectors. Electron. Commun. Probab, 17(52):1–6, 2012.
[30] Peter J Huber. Robust regression: asymptotics, conjectures and Monte Carlo. The Annals of Statistics,
pages 799–821, 1973.
[31] Peter J Huber. Robust statistics. Springer, 2011.
[32] Adel Javanmard and Andrea Montanari. State evolution for general approximate message passing
algorithms, with applications to spatial coupling. Information and Inference, page iat004, 2013.
[33] DN Lawley. A general method for approximating to the distribution of likelihood ratio criteria.
Biometrika, 43(3/4):295–303, 1956.
[34] Erich L Lehmann and Joseph P Romano. Testing statistical hypotheses. Springer Science & Business
Media, 2006.
[35] Hua Liang, Pang Du, et al. Maximum likelihood estimation in logistic regression models with a diverging
number of covariates. Electronic Journal of Statistics, 6:1838–1846, 2012.
[36] Enno Mammen. Asymptotics with increasing dimension for robust regression with applications to the
bootstrap. The Annals of Statistics, pages 382–400, 1989.
[37] Peter McCullagh and James A Nelder. Generalized linear models. Monograph on Statistics and Applied
Probability, 1989.
[38] Lawrence H Moulton, Lisa A Weissfeld, and Roy T St Laurent. Bartlett correction factors in logistic
regression models. Computational statistics & data analysis, 15(1):1–11, 1993.
[39] Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127–239, 2014.
[40] Stephen Portnoy. Asymptotic behavior of M-estimators of p regression parameters when p2 /n is large.
i. consistency. The Annals of Statistics, pages 1298–1309, 1984.
[41] Stephen Portnoy. Asymptotic behavior of M-estimators of p regression parameters when p2 /n is large;
ii. normal approximation. The Annals of Statistics, pages 1403–1417, 1985.
[42] Stephen Portnoy. Asymptotic behavior of the empiric distribution of m-estimated residuals from a
regression model with many parameters. The Annals of Statistics, pages 1152–1170, 1986.
[43] Stephen Portnoy et al. Asymptotic behavior of likelihood methods for exponential families when the
number of parameters tends to infinity. The Annals of Statistics, 16(1):356–366, 1988.
[44] Mark Rudelson, Roman Vershynin, et al. Hanson-Wright inequality and sub-gaussian concentration.
Electron. Commun. Probab, 18(82):1–9, 2013.
[45] Michael R Sampford. Some inequalities on mill’s ratio and related functions. The Annals of Mathematical
Statistics, 24(1):130–132, 1953.
[46] Vladimir Spokoiny. Penalized maximum likelihood estimation and effective dimension. arXiv preprint
arXiv:1205.0498, 2012.
[47] Pragya Sur, Yuxin Chen, and Emmanuel Candès. Supplemental materials for “The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled chi-square”. http://statweb.stanford.edu/~candes/papers/supplement_LRT.pdf, 2017.
[48] Cheng Yong Tang and Chenlei Leng. Penalized high-dimensional empirical likelihood. Biometrika, pages
905–919, 2010.
[49] Terence Tao. Topics in random matrix theory, volume 132. American Mathematical Society Providence,
RI, 2012.
[50] A. W. Van der Vaart. Asymptotic statistics, volume 3. Cambridge university press, 2000.
[51] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. Compressed Sensing: Theory and Applications, pages 210 – 268, 2012.
[52] Samuel S Wilks. The large-sample distribution of the likelihood ratio for testing composite hypotheses.
The Annals of Mathematical Statistics, 9(1):60–62, 1938.
[53] Ting Yan, Yuanzhuang Li, Jinfeng Xu, Yaning Yang, and Ji Zhu. High-dimensional Wilks phenomena
in some exponential random graph models. arXiv preprint arXiv:1201.0058, 2012.
| 1 |
Visual Data Augmentation through Learning
arXiv:1801.06665v1 [cs.CV] 20 Jan 2018
Grigorios G. Chrysos1 , Yannis Panagakis1,2 , Stefanos Zafeiriou1
1
Department of Computing, Imperial College London, UK
2
Department of Computer Science, Middlesex University London, UK
{g.chrysos, i.panagakis, s.zafeiriou}@imperial.ac.uk
Abstract
The rapid progress in machine learning methods has
been empowered by i) huge datasets that have been collected and annotated, ii) improved engineering (e.g. data
pre-processing/normalization). The existing datasets typically include several million samples, which makes their extension a colossal task. In addition, the state-of-the-art data-driven methods demand a vast amount of data,
hence a standard engineering trick employed is artificial
data augmentation for instance by adding into the data
cropped and (affinely) transformed images. However, this
approach does not correspond to any change in the natural
3D scene. We propose instead to perform data augmentation through learning realistic local transformations. We
learn a forward and an inverse transformation that maps an
image from the high-dimensional space of pixel intensities
to a latent space which varies (approximately) linearly with
the latent space of a realistically transformed version of the
image. Such transformed images can be considered two
successive frames in a video. Next, we utilize these transformations to learn a linear model that modifies the latent
spaces and then use the inverse transformation to synthesize
a new image. We argue that this procedure produces
powerful invariant representations. We perform both qualitative and quantitative experiments that demonstrate our
proposed method creates new realistic images.
Figure 1: (Preferably viewed in color) We want to augment arbitrary images (left column) by learning local transformations. We find a low-dimensional space and learn
the forward and inverse transformations from the image
to the representation space. Then, we can perform a simple linear transformation in the (approximately) linear lowdimensional space and acquire a new synthesized image
(the middle column). The same procedure can be repeated
with the latest synthesized image (e.g. from the middle to
the right columns).
work, we propose a new data augmentation technique that
finds a low-dimensional space in which performing a simple linear change results in a nonlinear change in the image
space.
Data augmentation methods are used as label-preserving
transformations with twofold goals: a) avoid over-fitting,
b) ensure that enough samples have been provided to the
network for learning. A plethora of label-preserving transformations have been proposed, however the majority is
classified into either a) model-based methods or b) generic augmentations. The model-based ones demand an elaborate
model to augment the data, e.g. the 3DMM-based [2] face
profiling of [29], the novel-view synthesis from 3D models [26]. Such models are available for only a small number of classes and the realistic generation from 3D models/synthetic data is still an open problem [3]. The second
augmentation category is comprised of methods defined artificially; these methods do not correspond to any natural
1. Introduction
The lack of training data has till recently been an impediment for training machine learning methods. The latest breakthroughs of Neural Networks (NNs) can be partly
attributed to the increased amount of (public) databases
with massive number of labels/meta-data. Nevertheless, the
state-of-the-art networks include tens or hundreds of millions of parameters [8, 37], i.e. they require more labelled
examples than we have available. To ameliorate the lack
of sufficient labelled examples, different data augmentation
methods have become commonplace during training. In this
movement in the scene/object. For instance, a 2D image rotation does not correspond to any actual change in the 3D
scene space; it is purely a computational method for encouraging rotation invariance.
We argue that a third category of augmentations consists
of local transformations. We learn a nonlinear transformation that maps the image to a low-dimensional space that
is assumed to be (approximately) linear. This linear property allows us to perform a linear transformation and map
the original latent representation to the representation of a
slightly transformed image (e.g. a pair of successive frames
in a video). If we can learn the inverse transform, i.e. mapping from the low-dimensional space to the transformed image, then we can modify the latent representation of the image linearly and this results in a nonlinear change in the
image domain.
We propose a three-stage approach that learns a forward
transformation (from image to low-dimensional representation) and an inverse transformation (from latent to image
representation) so that a linear change in the latent space results in a nonlinear change in the image space. The forward
and the inverse learned transformations are approximated
by an Adversarial Autoencoder and a GAN respectively.
In our work, we learn object-specific transformations
while we do not introduce any temporal smoothness. Even
though learning a generic model for all classes is theoretically plausible, we advocate that with the existing methods,
there is not sufficient capacity to learn such generic transformations for all the objects. Instead we introduce objectspecific transformations. Even though we have not explicitly constrained our low-dimensional space to be temporally
smooth, e.g. by using the cosine distance, we have observed
that the transformations learned are powerful enough to linearize the space. As a visual illustration, we have run TSNE [17] with the latent representations of the first video
of 300VW [28] against the rest 49 videos of the published
training set; Fig. 2 validates our hypothesis that the latent
representations of that video reside in a discrete cluster over
the rest of the representations. In a similar experiment with
the collected videos of cats, the same conclusion is reached,
i.e. the representations of the first video form a discrete
cluster.
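A rough sketch of the visual check described above (not the authors' code): embed the Stage-I latent vectors with t-SNE and color the frames of one video against all others. The file names and array shapes are hypothetical placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

latents = np.load("latents.npy")        # shape (num_frames, 1024), hypothetical dump of encoder outputs
video_ids = np.load("video_ids.npy")    # shape (num_frames,), hypothetical per-frame video labels

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(latents)
first = video_ids == video_ids[0]
plt.scatter(emb[~first, 0], emb[~first, 1], s=2, c="tab:blue", label="other videos")
plt.scatter(emb[first, 0], emb[first, 1], s=2, c="tab:red", label="first video")
plt.legend(); plt.show()
```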
We have opted to report the results in the facial space that
is highly nonlinear, while the representations are quite rich.
To assess further our approach, we have used two ad-hoc
objects, i.e. cat faces, dog faces, that have far less data labelled available online. Additionally, in both ad-hoc objects
the shape/appearance presents greater variation than that of
human faces, hence more elaborate transformations should
be learned.
In the following Sections we review the neural networks
based on which we have developed our method (Sec. 2.1),
introduce our method in Sec. 3. Subsequently, we demon-
strate our experimental results in Sec. 4. Due to the restricted space, additional visualizations are deferred to the
supplementary material, including indicative figures of the
cats’, dogs’ videos, additional (animated) visual results of
our method, an experiment illustrating that few images
suffice to learn object-specific deformable models. We
strongly encourage the reviewers to check the supplementary material.
Notation: A small (capital) bold letter represents a vector (matrix); a plain letter designates a scalar number. A
vectorized image of a dynamic scene at time t is denoted as i^{(t)}, while i_k^{(t_k)} refers to the k-th training sample.
2. Background
The following lines of research are related with our proposed method:
Model-based augmentation for faces: The methods
in this category utilize 2D/3D geometric information. In
T-CNN [32] the authors introduce an alignment-sensitive
method tailored to their task. Namely, they warp a face from
its original shape (2D landmarks) to a similar shape (based
on their devised clustering). Recently, Zhu et al. [36] use a
3D morphable model (3DMM) [2] to simulate the effect of
profiling for synthesizing images in large poses. Tran et al.
in [29] fit a 3DMM to estimate the facial pose and learn a
GAN conditioned on the pose. During inference, new facial
images are synthesized by sampling different poses. The
major limitation of the model-based methods is that they
require elaborate 2D/3D models. Such models have been
studied only for the human face1 or the human body, while
the rest objects, e.g. animals faces, have not attracted such
attention yet. On the contrary, our method is not limited
to any object (we have learned models with cats’ faces and
1 18 years since the original 3DMM model and the problem is not solved
for all cases.
Figure 2: (Preferably viewed in color) T-SNE [17] in the
latent representations of a) 300VW [28] (left Fig.), b) cats’
videos. In both cases the representations of the first video
(red dots) are compared against the rest videos (blue dots)).
To avoid cluttering the graphs every second frame is skipped
(their representation is similar to the previous/next frame).
For further emphasis, a green circle is drawn around the red
points.
dogs’ faces) and does not require elaborate 3D/2D shape
models.
Unconditional image synthesis: The successful application of GANs [5] in a variety of tasks including photorealistic image synthesis[14], style transfer [34], inpainting [25], image-to-image mapping tasks [11] has led to
a proliferation of works on unconditional image synthesis [1, 35]. Even though unconditional image generation
has significant applications, it cannot be used for conditional generation when labels are available. Another line
of research is directly approximating the conditional distribution over pixels [23]. The generation of a single pixel
is conditioned on all the previously generated pixels. Even
though realistic samples are produced, it is costly to sample
from them; additionally such models do not provide access
to the latent representation.
Video frames’ prediction: The recent (experimental)
breakthroughs of generative models have accelerated the
progress in video frames prediction. In [30] the authors
learn a model that captures the scene dynamics and synthesizes new frames. To generalize the deterministic prediction of [30], the authors of [33] propose a probabilistic model, however they show only a single frame prediction in low-resolution objects. In addition, the unified latent code z (learned for all objects) does not allow particular motion patterns, e.g. of an object of interest in
the video, to be distinguished. Lotter et al. [15] approach
the task as a conditional generation. They employ a Recurrent Neural Network (RNN) to condition future frames
on previously seen ones, which implicitly imposes temporal smoothness. A core differentiating factor of these approaches from our work is that they i) impose temporal
smoothness, ii) make simplifying assumptions (e.g. stationary camera [30]); these restrictions constrain their solution space and allow for realistic video frames’ prediction. In addition, the techniques for future prediction often
result in blurry frames, which can be attributed to the multimodal distributions of unconstrained natural images, however our end-goal consists in creating realistic images for
highly-complex images, e.g. animals’ faces.
The work of [6] is the most similar to our work. The
authors construct a customized architecture and loss to linearize the feature space and then perform frame prediction
to demonstrate that they have successfully achieved the linearization. Their highly customized architecture (in comparison to our off-the-shelves networks) have not been applied to any highly nonlinear space, in [6] mostly synthetic,
simple examples are demonstrated. Apart from the highly
nonlinear objects we experiment with, we provide several
experimental indicators that our proposed method achieves
this linearization in challenging cases.
An additional differentiating factor from the aforementioned works is that, to the best of our knowledge, this three-
stage approach has not been used in the past for a related
task.
2.1. cGAN and Adversarial Autoencoder
Let us briefly describe the two methods that consist our
workhorse for learning the transformations. These are the
conditional GAN and the Adversarial Autoencoder.
A Generative Adversarial Network (GAN) [5] is a generative network that has been very successfully employed
for learning probability distributions [14]. A GAN is comprised of a generator G and a discriminator D network,
where the generator samples from a pre-defined distribution in order to approximate the probability distribution of
the training data, while the discriminator tries to distinguish
between the samples originating from the model distribution to those from the data distribution. Conditional GAN
(cGAN) [20] extends the formulation by conditioning the
distributions with additional labels. More formally, if we
denote with pd the true distribution of the data, with pz the
distribution of the noise, with s the conditioning label and
y the data, then the objective function is:
    L_cGAN(G, D) = E_{s,y∼p_d(s,y)}[log D(s, y)] + E_{s∼p_d(s), z∼p_z(z)}[log(1 − D(s, G(s, z)))]        (1)

This objective function is optimized in an iterative manner, as

    min_{w_G} max_{w_D} L_cGAN(G, D) = E_{s,y∼p_d(s,y)}[log D(s, y; w_D)] + E_{s∼p_d(s), z∼p_z(z)}[log(1 − D(s, G(s, z; w_G); w_D))],
where wG , wD denote the generator’s, discriminator’s
parameters respectively.
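To make the alternating optimization of Eq. (1) concrete, here is a minimal PyTorch-style sketch of the two per-batch losses. It is illustrative only (the paper provides no code), it assumes D outputs a probability, and it uses the common non-saturating generator objective in place of literally minimizing log(1 − D(s, G(s, z))).

```python
import torch
import torch.nn.functional as F

def cgan_losses(G, D, s, y, z):
    """s: conditioning label, y: real sample, z: noise; D is assumed to output a probability."""
    fake = G(s, z)
    d_real, d_fake = D(s, y), D(s, fake.detach())
    # discriminator ascends E[log D(s, y)] + E[log(1 - D(s, G(s, z)))]
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    # non-saturating generator loss, a standard stand-in for minimizing E[log(1 - D(s, G(s, z)))]
    g_loss = F.binary_cross_entropy(D(s, fake), torch.ones_like(d_real))
    return d_loss, g_loss
```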
An Autoencoder (AE) [9, 19] is a neural network with
two parts (an encoder and a decoder) and aims to learn a
latent representation z of their input y. Autoencoders are
mostly used in an unsupervised learning context [12] with
the loss being the reconstruction error. On the other hand,
an Adversarial Autoencoder (AAE) [18] consists of two
sub-networks: i) a generator (an AE network), ii) a discriminator. The discriminator, which is motivated by GAN’s
discriminator, accepts the latent vector (generated by the
encoder) and tries to match the latent space representation
with a pre-defined distribution.
3. Method
The core idea of our approach consists in finding a lowdimensional space that is (approximately) linear with respect to the projected representations. We aim to learn the
(forward and inverse) transformations from the image space
to the low-dimensional space. We know that an image i(t)
is an instance of a dynamic scene at time t, hence the difference between the representations of two temporally close
Figure 3: The architectures used in (a) separate training per step (the network for Stage I is on the top, for Stage III on the
bottom), (b) fine-tuning of the unified model, (c) prediction. The ‘[]’ symbol denotes concatenation.
moments should be small and linear. We can learn the linear
transitions of the representations and transform our image
to i(t+x) . We perform this linearization in 2-steps; an additional step is used to synthesize images of the same object
with slightly different representations. The synthesized image can be thought of as a locally transformed image, e.g.
the scene at t + x moment with x sufficiently small.
3.1. Stage I: Latent image representation
Our goal consists in learning the transformations to the
linearized space, however for the majority of the objects
there are not enough videos annotated that can express a
sufficient percent of the variation. For instance, it is not
straightforward to find long videos of all breeds of dogs
where the full body is visible. However, there are far more
static images available online, which are faster to collect
and can be used to learn the transformation from the image
space to the latent space.
In an unsupervised setting a single image i(t) (per step)
suffices for learning latent representations, no additional labels are required, which is precisely the task that Autoencoders were designed for. The latent vector of the Autoencoder lies in the latent space we want to find.
We experimentally noticed that the optimization converged faster if we used an adversarial learning procedure.
We chose an Adversarial Autoencoder (AAE) [18] with a
customized loss function. The encoder feI accepts an image
i(t) , encodes it to d(t) ; the decoder fdI reconstructs i(t) . We
modify the discriminator to accept both the latent representation and the reconstructed image as input (fake example)
and try to distinguish those from the distribution sample and
the input image respectively. Moreover, we add a loss term
that captures the reconstruction loss, which in our case consists of i) an `1 norm and ii) `1 in the image gradients. Con-
sequently, the final loss function is comprised of the following two terms: i) the adversarial loss, ii) the reconstruction
loss or:
    L^I = L_adver + λ_I L^I_rec                                                     (2)

with

    L^I_rec = ||f_d^I(f_e^I(y)) − y||_{ℓ1} + ||∇f_d^I(f_e^I(y)) − ∇y||_{ℓ1}.        (3)

The vector y in this case is a training sample i_k^{(t_k)}, while λ_I is a hyper-parameter.
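As a concrete reading of Eq. (3), the sketch below (assumed PyTorch, NCHW image tensors, with ∇ approximated by horizontal/vertical finite differences and means used in place of sums — these are assumptions, not details stated in the paper) computes the ℓ1 pixel term plus the ℓ1 image-gradient term.

```python
import torch

def stage1_rec_loss(recon, target):
    """l1 reconstruction loss on pixels and on finite-difference image gradients."""
    grad = lambda t: (t[..., :, 1:] - t[..., :, :-1],   # horizontal differences
                      t[..., 1:, :] - t[..., :-1, :])   # vertical differences
    (rx, ry), (tx, ty) = grad(recon), grad(target)
    return (recon - target).abs().mean() + (rx - tx).abs().mean() + (ry - ty).abs().mean()
```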
3.2. Stage II: Linear Model Learning
In this stage the latent representation d(t) of an
image i(t) (as learned from stage I) is used to learn
a mapping to the latent representation d(t+x) of the
image i(t+x) ; the simple method of linear regression
is chosen as a very simple transformation we can perform in a linear space. Given N pairs of images2
{(i_1^{(t_1)}, i_1^{(t_1+x_1)}), (i_2^{(t_2)}, i_2^{(t_2+x_2)}), . . . , (i_N^{(t_N)}, i_N^{(t_N+x_N)})}, the set of the respective latent representations D = {(d_1^{(t_1)}, d_1^{(t_1+x_1)}), (d_2^{(t_2)}, d_2^{(t_2+x_2)}), . . . , (d_N^{(t_N)}, d_N^{(t_N+x_N)})}; the set D is used to learn the linear mapping:

    d^{(t_j+x_j)} = A · [d^{(t_j)}; 1] + ε,                                          (4)

where ε is the noise; the Frobenius norm of the residual forms the error term:

    L = ||d^{(t_j+x_j)} − A · [d^{(t_j)}; 1]||²_F.                                   (5)
To ensure the stability of the linear transformation we
add a Tikhonov regularization term (i.e, Frobenius norm)
2 Each pair includes two highly correlated images, i.e. two nearby
frames from a video sequence.
3.4. End-to-end fine-tuning
on Eq. 5. That is,

    L^{II} = ||d^{(t_j+x_j)} − A · [d^{(t_j)}; 1]||²_F + λ_{II} ||A||²_F,            (6)
with λII a regularization hyper-parameter. The closed-form
solution to Eq. 6 is
A = Y · X T · (X · X T + λII · I)−1 ,
(7)
where I denotes an identity matrix, X, Y two matrices that
contain column-wise the initial and target representations
respectively, i.e. for the k-th sample X(:, k) = [d_k^{(t_k)}; 1] and Y(:, k) = d_k^{(t_k+x_k)}.
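Eq. (7) can be transcribed directly; the sketch below is a NumPy version under the assumption that the paired latent codes are stored row-wise in two arrays, with `lam` standing for λ_II (the value shown is an arbitrary placeholder, not one reported in the paper).

```python
import numpy as np

def fit_linear_dynamics(D_src, D_tgt, lam=1e-3):
    """D_src, D_tgt: arrays of shape (N, 1024) holding paired latent codes d^(t) and d^(t+x)."""
    X = np.vstack([D_src.T, np.ones((1, D_src.shape[0]))])    # (1025, N), columns [d^(t); 1]
    Y = D_tgt.T                                                # (1024, N), columns d^(t+x)
    A = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(X.shape[0]))   # closed form of Eq. (7)
    return A                                                   # shape (1024, 1025)

# usage: d_next_hat = A @ np.append(d_t, 1.0)
```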
3.3. Stage III: Latent representation to image
In this step, we want to learn a transformation from the
latent space to the image space, i.e. the inverse transformation of Stage I. In particular, we aim to map the regressed
representation dˆ(t+x) to the image i(t+x) . Our prior distribution consists of a low-dimensional space, which we want
to map to a high-dimensional space; GANs have experimentally proven very effective in such mappings [14, 25].
A conditional GAN is employed for this step; we condition GAN in both the (regressed) latent representation
dˆ(t+x) and the original image i(t) . Conditioning on the
original image has experimentally resulted in faster convergence and it might be a significant feature in case of limited
amount of training samples. Inspired by the work of [11],
we form the generator as an autoencoder denoting the encoder as feIII , the decoder as fdIII . Skip connections are
added from the second and fourth layers of the encoder to
the respective layers in the decoder with the purpose of allowing the low-level features of the original images to be
propagated to the result.
In conjunction with [11] and Sec. 3.1, we add a reconstruction loss term as

    L^{III}_rec = ||f_d^{III}(f_e^{III}(y)) − s||_{ℓ1} + ||∇f_d^{III}(f_e^{III}(y)) − ∇s||_{ℓ1},      (8)

where y is a training sample i_k^{(t_k)} and s is the conditioning label (original image) i_k^{(t_k−x_k)}. In addition, we add a loss
term that encourages the features of the real/fake samples
to be similar. Those features are extracted from the penultimate layer of the AAE’s discriminator. Effectively, this
leads the fake (i.e. synthesized) images to have representations that are close to the original image. The final objective
function for this step includes three terms, i.e. the adversarial, the reconstruction and the feature loss:
    L^{III} = L_cGAN + λ_{III} L^{III}_rec + λ_{III,feat} L_feat
Even though the training in each of the aforementioned
three stages is performed separately, all the components
are differentiable with respect to their parameters. Hence,
Stochastic Gradient Descent (SGD) can be used to fine-tune
the pipeline.
Not all of the components are required for the finetuning, for instance the discriminator of the Adversarial
Autoencoder is redundant. From the network in Stage I,
only the encoder is utilized for extracting the latent representations, then linear regression (learned matrix A) can be
thought of as a linear fully-connected layer. From network
in Stage III, all the components are kept. The overall architecture for fine-tuning is depicted in Fig. 3.
3.5. Prediction
The structure of our three-stage pipeline is simplified for performing predictions. The image i^(t) is encoded (only the encoder of the network in Stage I is required); the resulting representation d^(t) is multiplied by A to obtain d̂^(t+x), which is fed into the conditional GAN to synthesize a new image î^(t+x). This procedure is visually illustrated in Fig. 3, while more formally:
î^(t+x) = f_d^III(f_e^III(A · [f_e^I(i^(t)); 1], i^(t)))    (10)
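A short sketch of the prediction path of Eq. 10, assuming the trained components are available as callables; encoder_I, encoder_III and decoder_III stand for f_e^I, f_e^III and f_d^III, and A is the matrix learned in Stage II.

```python
import numpy as np

def predict_next(image_t, encoder_I, A, encoder_III, decoder_III):
    """i_hat^{(t+x)} = f_d^III( f_e^III( A [f_e^I(i^{(t)}); 1], i^{(t)} ) )  (Eq. 10)"""
    d_t = encoder_I(image_t)                # latent code d^{(t)}
    d_hat = A @ np.append(d_t, 1.0)         # regressed code d_hat^{(t+x)}
    z = encoder_III(d_hat, image_t)         # condition on both d_hat and i^{(t)}
    return decoder_III(z)                   # synthesized image i_hat^{(t+x)}
```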
3.6. Network architectures
Our method includes two networks, i.e. an Adversarial Autoencoder for Stage I and a conditional GAN for Stage III. The encoder/decoder of both networks share the same architecture, i.e. 8 convolutional layers followed by batch normalization [10] and LeakyReLU [16]. The discriminator consists of 5 layers in both cases, while the dimensionality of the latent space is 1024 in all cases. Please refer to the table in the supplementary material for further details about the layers.
4. Experiments
In this section we provide the details of the training procedure along with qualitative and quantitative results for all three objects, i.e. human faces, cats' faces and dogs' faces. Our objective is to demonstrate that this augmentation leads to learning invariances, e.g. deformations, not covered by commonly used techniques.
4.1. Implementation details
The pairs of images required by the second and third stages were obtained from sequential frames of the same object. A different sampling of x was allowed per frame to increase the variation. To avoid abrupt changes between pairs of frames, the structural similarity (SSIM) of a pair was required to lie in an interval, i.e. frames with i) zero or ii) excessive movement were omitted.
Figure 4: Average variance in the dynamics representation per video for (a) the case of human faces, (b) cats' faces, (c) dogs' faces.
Figure 5: (Preferably viewed in color) Conditional, iterative prediction from our proposed method. The images on the left are the original ones; then from left to right the i-th column depicts the (i − 1)-th synthesized image (iteration (i − 1)). In both rows, the image on the left is animated, hence if opened with Adobe Acrobat Reader the transitions will be auto-played.
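A possible implementation of this pair-filtering rule is sketched below using scikit-image; the exact interval endpoints are not reported in the paper, so the thresholds here are illustrative assumptions.

```python
from skimage.metrics import structural_similarity as ssim

def keep_pair(frame_a, frame_b, lo=0.4, hi=0.95):
    """Keep a training pair only if its SSIM lies in (lo, hi):
    a very high score indicates (near-)zero movement, a very low score
    indicates excessive movement.  channel_axis requires scikit-image >= 0.19."""
    score = ssim(frame_a, frame_b, channel_axis=-1)
    return lo < score < hi
```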
Each of the aforementioned stages was trained separately; after training all of them, we performed fine-tuning on the combined model (all stages consist of convolutions). However, as is visually illustrated in the figures in the supplementary material, there are only minor differences between the two models. The results of the fine-tuned model are marginally more photo-realistic, which makes fine-tuning optional.
4.2. Datasets
A brief description of the databases utilized for training
is provided below:
Human faces: The recent dataset of MS Celeb [7] was employed for Stage I (Sec. 3.1). MS Celeb includes 8.5 million facial images of 100 thousand celebrities, making it one of the largest public datasets of static facial images. In our case, the grayscale images were excluded, while from the remaining images a subset of 2 million random images was sampled. For the following two stages, which require pairs of images, the dataset of 300 Videos in-the-wild (300VW) [28] was employed. This dataset includes 114 videos with approximately 1 minute duration each. The total amount of frames sampled for Stage II (Sec. 3.2) is 13 thousand frames; 10 thousand frames are sampled for validation, while the rest are used for training the network in Stage III (Sec. 3.3).
Figure 6: (Preferably viewed in color) Visual results of the synthesized images. There are four columns from left to right (split into left and right parts) which depict: (a) the original image, (b) the linear model (PCA + regression), (c) our proposed method, (d) the difference in intensities between the proposed method and the original image. The difference does not depict accurately the pose variation; the gif images in the supplementary material demonstrate the animated movement. Nevertheless, some noticeable changes are the following: a) in the left part, in the second and fifth images there is a considerable 3D rotation (pose variation), b) in the first, third and sixth images in the left split there are several deformations (eyes closing, mouth opening etc.), c) in the second image on the right part, the person has moved towards the camera.
Cat faces: The pet dataset of [24] was employed for
learning representations of cats’ faces. The dataset includes
37 different breeds of cats and dogs (12 for cats) with approximately 200 images each³.
³Each image is annotated with a head bounding box.
In addition to those, we collected 1000 additional images, for a total of 2000 images. For the subsequent stages of our pipeline, pairs of images were required, hence we collected 20 videos with an average duration of 200 frames. The head was detected with the DPM detector of [4] in the first frame and the rest were tracked with the robust MDNet tracker of [22]. Since the
images of cats are limited, the prior weights learned for the
(human) facial experiment were employed (effectively the
pre-trained model includes a prior which we adapt for cats).
Dog faces: The Stanford dog dataset [13] includes 20 thousand images of dogs from 120 breeds. The annotations are at the body level, hence the DPM detector was utilized to detect a bounding box of the head. The detected images, i.e. 8 thousand, constituted the input for Stage I of our
pipeline. Similarly to the procedure for cats, 30 videos (with
average duration of 200 frames) were collected and tracked
for Stages II and III.
4.3. Variance in the latent space
As a quantitative self-evaluation experiment, we measure the variance of the latent representations per video. The latent representations of sequential frames should be highly correlated; hence the variance within a video containing the same object should be low.
A PCA was learned per video and the cumulative eigenvalue ratio was computed. We repeated the same procedure for all the videos (per object) and then averaged the results. The resulting plots with the average cumulative ratio are visualized in Fig. 4. In the videos of the cats and the dogs, we observe that the first 30 components express 90% of the variance. In the facial videos, which are longer (over 1500 frames), the variance is greater; however, the first 50 components explain over 90% of the variance.
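The per-video statistic can be reproduced with a few lines of scikit-learn; the sketch below assumes each video is given as an array of per-frame latent codes, which is an assumption about data layout rather than something specified in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def average_cumulative_ratio(videos, n_components=50):
    """For each video (array of shape (num_frames, latent_dim) of per-frame latent
    codes), fit a PCA and take the cumulative explained-variance ratio; then
    average the curves over videos, as in Fig. 4."""
    curves = []
    for codes in videos:
        pca = PCA(n_components=min(n_components, *codes.shape))
        pca.fit(codes)
        curve = np.cumsum(pca.explained_variance_ratio_)
        curves.append(np.pad(curve, (0, n_components - len(curve)),
                             constant_values=curve[-1]))
    return np.mean(curves, axis=0)
```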
4.4. Qualitative assessment
Considering the sub-space defined by PCA as the latent space and learning a linear regression in that space constitutes the linear counterpart of our proposed method. To demonstrate the complexity of the task, we have learned a PCA per object⁴; the representations of each pair were extracted, linear regression was performed, and then the regressed representations were used to create the new sample.
⁴To provide a fair comparison, PCA received the same input as our method (i.e. there was no effort to provide further (geometric) details about the image; the pixel values are the only input).
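A compact sketch of this linear baseline using scikit-learn is shown below; the number of principal components and the ridge penalty are illustrative choices, not values reported in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def linear_baseline(train_src, train_tgt, test_src, n_components=200):
    """PCA + linear regression baseline: project raw pixels to a PCA sub-space,
    regress target coefficients from source coefficients, and reconstruct the
    predicted image by inverting the PCA."""
    flat_src = train_src.reshape(len(train_src), -1)
    flat_tgt = train_tgt.reshape(len(train_tgt), -1)
    pca = PCA(n_components=n_components).fit(np.vstack([flat_src, flat_tgt]))
    reg = Ridge(alpha=1.0).fit(pca.transform(flat_src), pca.transform(flat_tgt))
    pred = reg.predict(pca.transform(test_src.reshape(len(test_src), -1)))
    return pca.inverse_transform(pred).reshape(test_src.shape)
```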
In Fig. 6, we have visualized some results for all three
cases (human, cats' and dogs' faces). In all cases the images were not seen during training, with the cats' and dogs' images being downloaded from the web (all were recently uploaded), while the faces are from the WIKI-DB dataset [27]. The visualizations verify our claim that a linear transformation in the latent space can produce a realistic non-linear
transformation in the image domain. In all of the facial images there is a deformation of the mouth, while in the majority of them there is a 3D movement. On the contrary,
on the dogs’ and the cats’ faces, the major source of deformation seems to be the 3D rotation. An additional remark
is that the linear model, i.e. regressing the components of
PCA, does not result in realistic new images, which can be
attributed to the linear assumptions of PCA.
Aside from the visual assessment of the synthesized images, we have considered whether the new synthesized image is realistic enough to be used as input itself to the pipeline. Hence, we have run an iterative procedure of
applying our method, i.e. the outcome of iteration k becomes the input to iteration k + 1. Such an iterative procedure essentially creates a collection of different images
(constrained to include the same object of interest but with
slightly different latent representations). Two such collections are depicted in Fig. 5, where the person in the first
row performs a 3D movement, while in the second different
deformations of the mouth are observed. The image on the
left is animated, hence if opened with Adobe Acrobat reader
the transitions will be auto-played. We strongly encourage
the reviewers to view the animated images and check the
supplementary animations.
4.5. Age estimation with augmented data
To ensure that a) our method does not simply reproduce the input at the output, and b) the synthesized images stay close to the originals (small change in the representations), we have validated our method by performing age estimation with the augmented data.
We utilized as a testbed the AgeDB dataset of [21],
which includes 16 thousand manually selected images. As
the authors of [21] report, the annotations of AgeDB are accurate to the year, unlike the semi-automatic IMDB-WIKI
dataset of [27]. For the aforementioned reasons, we selected
AgeDB to perform age estimation with i) the original data,
ii) the original plus the new synthesized samples. The first
80% of the images were used as the training set and the rest as the test set. We augmented only the training set images with
our method by generating one new image for every original one. We discarded the examples that have a structural
similarity (SSIM) [31] of less than 0.4 with the original image; this resulted in synthesizing 6 thousand new frames
(approximately 50% augmentation).
We trained a ResNet-50 [8] with i) the original training images, ii) the augmented images, and report here the Mean Absolute Error (MAE). The pre-trained DEX [27] resulted in an MAE of 12.8 years on our test subset [21], the ResNet trained on the original data in an MAE of 11.4 years, while training on the augmented data resulted in an MAE of 10.3 years, a 9.5% relative decrease in the MAE. This indicates that our proposed method can generate new samples that are not trivially replicated by affine transformations.
5. Conclusion
In this work, we have introduced a method that finds a low-dimensional (approximately) linear space. We have proposed a three-stage approach that learns the transformations from the highly non-linear image space to the latent space along with the inverse transformation. This approach enables us to make linear changes in the space of representations, and these result in non-linear changes in the image space. The first transformation was approximated by an Adversarial Autoencoder, while a conditional GAN was employed for learning the inverse transformation and acquiring the synthesized image. The middle step consists of a simple linear regression to transform the representations. We have visually illustrated that i) the representations of a video form a discrete cluster (t-SNE in Fig. 2) and ii) the representations of a single video are highly correlated (average cumulative eigenvalue ratio for all videos).
References
[1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan.
arXiv preprint arXiv:1701.07875, 2017. 3
[2] V. Blanz and T. Vetter. A morphable model for the synthesis
of 3d faces. In Proceedings of the 26th annual conference on
Computer graphics and interactive techniques, 1999. 1, 2
[3] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with
generative adversarial networks. CVPR, 2017. 1
[4] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained partbased models. T-PAMI, 32(9):1627–1645, 2010. 7
[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu,
D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014. 3
[6] R. Goroshin, M. F. Mathieu, and Y. LeCun. Learning to linearize under uncertainty. In NIPS, pages 1234–1242, 2015.
3
[7] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao. Ms-celeb-1m:
A dataset and benchmark for large-scale face recognition. In
ECCV, pages 87–102, 2016. 6
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning
for image recognition. In CVPR. IEEE, 2016. 1, 8
[9] G. E. Hinton and R. S. Zemel. Autoencoders, minimum description length and helmholtz free energy. In NIPS, 1994.
3
[10] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. In
ICML, 2015. 5
[11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image
translation with conditional adversarial networks. CVPR,
2017. 3, 5
[12] M. Kan, S. Shan, H. Chang, and X. Chen. Stacked progressive auto-encoders (spae) for face recognition across poses.
In CVPR, 2014. 3
[13] A. Khosla, N. Jayadevaprakash, B. Yao, and F.-F. Li. Novel
dataset for fine-grained image categorization: Stanford dogs.
In CVPR Workshops, volume 2, page 1, 2011. 7
[14] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham,
A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al.
Photo-realistic single image super-resolution using a generative adversarial network. CVPR, 2017. 3, 5
[15] W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning.
ICLR, 2017. 3
[16] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML,
2013. 5
[17] L. v. d. Maaten and G. Hinton. Visualizing data using t-sne.
JMLR, 9(Nov):2579–2605, 2008. 2
[18] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey.
Adversarial autoencoders. arXiv preprint arXiv:1511.05644,
2015. 3, 4
[19] J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber. Stacked
convolutional auto-encoders for hierarchical feature extraction. ICANN, 2011. 3
[20] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. 3
[21] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou. Agedb: the first manually collected,
in-the-wild age database. In CVPR Workshops, 2017. 8
[22] H. Nam and B. Han. Learning multi-domain convolutional
neural networks for visual tracking. In CVPR, 2016. 7
[23] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel
recurrent neural networks. arXiv preprint arXiv:1601.06759,
2016. 3
[24] O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. Jawahar.
Cats and dogs. In CVPR, 2012. 7
[25] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A.
Efros. Context encoders: Feature learning by inpainting. In
CVPR, pages 2536–2544, 2016. 3, 5
[26] K. Rematas, T. Ritschel, M. Fritz, and T. Tuytelaars. Imagebased synthesis and re-synthesis of viewpoints guided by 3d
models. In CVPR, 2014. 1
[27] R. Rothe, R. Timofte, and L. Van Gool. Deep expectation
of real and apparent age from a single image without facial
landmarks. International Journal of Computer Vision, pages
1–14, 2016. 8
[28] J. Shen, S. Zafeiriou, G. Chrysos, J. Kossaifi, G. Tzimiropoulos, and M. Pantic. The first facial landmark tracking inthe-wild challenge: Benchmark and results. In 300-VW in
ICCV-W, December 2015. 2, 7
[29] L. Tran, X. Yin, and X. Liu. Disentangled representation
learning gan for pose-invariant face recognition. In CVPR,
volume 4, page 7, 2017. 1, 2
[30] C. Vondrick, H. Pirsiavash, and A. Torralba. Generating
videos with scene dynamics. In NIPS, pages 613–621, 2016.
3
[31] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli.
Image quality assessment: from error visibility to structural
similarity. TIP, 13(4):600–612, 2004. 8
[32] Y. Wu, T. Hassner, K. Kim, G. Medioni, and P. Natarajan.
Facial landmark detection with tweaked convolutional neural
networks. arXiv preprint arXiv:1511.04031, 2015. 2
[33] T. Xue, J. Wu, K. Bouman, and B. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, pages 91–99, 2016. 3
[34] D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon. Pixellevel domain transfer. In European Conference on Computer
Vision, pages 517–532. Springer, 2016. 3
[35] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126,
2016. 3
[36] X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li. Face alignment
across large poses: A 3d solution. In CVPR, pages 146–155,
2016. 2
[37] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition.
arXiv preprint arXiv:1707.07012, 2017. 1
The power of sum-of-squares for detecting hidden structures
arXiv:1710.05017v1 [cs.DS] 13 Oct 2017
Samuel B. Hopkins∗
Prasad Raghavendra
Pravesh K. Kothari †
Tselil Schramm‡
Aaron Potechin
David Steurer§
October 31, 2017
Abstract
We study planted problems—finding hidden structures in random noisy inputs—through
the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems,
often achieving the best known polynomial-time guarantees in terms of accuracy of recovered
solutions and robustness to noise. One theme in recent work is the design of spectral algorithms which match the guarantees of SoS algorithms for planted problems. Classical spectral
algorithms are often unable to accomplish this: the twist in these new spectral algorithms is
the use of spectral structure of matrices whose entries are low-degree polynomials of the input
variables.
We prove that for a wide class of planted problems, including refuting random constraint
satisfaction problems, tensor and sparse PCA, densest-k-subgraph, community detection in
stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials
are as powerful as SoS semidefinite programs of size roughly n^d. For such problems it is
therefore always possible to match the guarantees of SoS without solving a large semidefinite
program.
Using related ideas on SoS algorithms and low-degree matrix polynomials (and inspired
by recent work on SoS and the planted clique problem [BHK+ 16]), we prove new nearly-tight
SoS lower bounds for the tensor and sparse principal component analysis problems. Our
lower bounds are the first to suggest that improving upon the signal-to-noise ratios handled by
existing polynomial-time algorithms for these problems may require subexponential time.
∗ Cornell
University, [email protected] Partially supported by an NSF GRFP under grant no. 1144153, by a
Microsoft Research Graduate Fellowship, and by David Steurer’s NSF CAREER award.
† Princeton University and IAS, [email protected]
‡ UC Berkeley, [email protected]. Supported by an NSF Graduate Research Fellowship (1106400).
§ Cornell University, [email protected]. Supported by a Microsoft Research Fellowship, an Alfred P. Sloan
Fellowship, an NSF CAREER award, and the Simons Collaboration for Algorithms and Geometry.
Contents
1 Introduction
1.1 SoS and spectral algorithms for robust inference
1.2 SoS and information-computation gaps
1.3 Exponential lower bounds for sparse PCA and tensor PCA
1.4 Related work
1.5 Organization
2 Distinguishing Problems and Robust Inference
3 Moment-Matching Pseudodistributions
4 Proof of Theorem 2.6
4.1 Handling Inequalities
5 Applications to Classical Distinguishing Problems
6 Exponential lower bounds for PCA problems
6.1 Predicting SoS lower bounds from low-degree distinguishers for Tensor PCA
6.2 Main theorem and proof overview for Tensor PCA
6.3 Main theorem and proof overview for sparse PCA
References
A Bounding the sum-of-squares proof ideal term
B Lower bounds on the nonzero eigenvalues of some moment matrices
C From Boolean to Gaussian lower bounds
1 Introduction
Recent years have seen a surge of progress in algorithm design via the sum-of-squares (SoS)
semidefinite programming hierarchy. Initiated by the work of [BBH+ 12], who showed that
polynomial time algorithms in the hierarchy solve all known integrality gap instances for
Unique Games and related problems, a steady stream of works has developed efficient algorithms for both worst-case [BKS14, BKS15, BKS17, BGG+ 16] and average-case problems
[HSS15, GM15, BM16, RRS16, BGL16, MSS16a, PS17]. The insights from these works extend
beyond individual algorithms to characterizations of broad classes of algorithmic techniques. In
addition, for a large class of problems (including constraint satisfaction), the family of SoS semidefinite programs is now known to be as powerful as any semidefinite program (SDP) [LRS15].
In this paper we focus on recent progress in using Sum of Squares algorithms to solve average-case, and especially planted, problems—problems that ask for the recovery of a planted signal
perturbed by random noise. Key examples are finding solutions of random constraint satisfaction
problems (CSPs) with planted assignments [RRS16] and finding planted optima of random polynomials over the n-dimensional unit sphere [RRS16, BGL16]. The latter formulation captures a wide
range of unsupervised learning problems, and has led to many unsupervised learning algorithms
with the best-known polynomial time guarantees [BKS15, BKS14, MSS16b, HSS15, PS17, BGG+ 16].
In many cases, classical algorithms for such planted problems are spectral algorithms—i.e.,
using the top eigenvector of a natural matrix associated with the problem input to recover a
planted solution. The canonical algorithms for the planted clique [AKS98], principal components
analysis (PCA) [Pea01], and tensor decomposition (which is intimately connected to optimization of
polynomials on the unit sphere) [Har70] are all based on this general scheme. In all of these cases,
the algorithm employs the top eigenvector of a matrix which is either given as input (the adjacency
matrix, for planted clique), or is a simple function of the input (the empirical covariance, for PCA).
Recent works have shown that one can often improve upon these basic spectral methods
using SoS, yielding better accuracy and robustness guarantees against noise in recovering planted
solutions. Furthermore, for worst case problems—as opposed to the average-case planted problems
we consider here—semidefinite programs are strictly more powerful than spectral algorithms.1 A
priori one might therefore expect that these new SoS guarantees for planted problems would not
be achievable via spectral algorithms. But curiously enough, in numerous cases these stronger
guarantees for planted problems can be achieved by spectral methods! The twist is that the
entries of these matrices are low-degree polynomials in the input to the algorithm. The result is a new family of low-degree spectral algorithms with guarantees matching SoS but requiring only eigenvector computations instead of general semidefinite programming [HSSS16, RRS16,
AOW15a].
This leads to the following question which is the main focus of this work.
Are SoS algorithms equivalent to low-degree spectral methods for planted problems?
We answer this question affirmatively for a wide class of distinguishing problems which includes refuting random CSPs, tensor and sparse PCA, densest-k-subgraph, community detection
in stochastic block models, planted clique, and more. Our positive answer to this question implies
1For example, consider the contrast between the SDP algorithm for Max-Cut of Goemans and Williamson, [GW94],
and the spectral algorithm of Trevisan [Tre09]; or the SDP-based algorithms for coloring worst-case 3-colorable graphs
[KT17] relative to the best spectral methods [AK97] which only work for random inputs.
that a light-weight algorithm—computing the top eigenvalue of a single matrix whose entries are
low-degree polynomials in the input—can recover the performance guarantees of an often bulky
semidefinite programming relaxation.
To complement this picture, we prove two new SoS lower bounds for particular planted problems, both variants of component analysis: sparse principal component analysis and tensor principal component analysis (henceforth sparse PCA and tensor PCA, respectively) [ZHT06, RM14].
For both problems there are nontrivial low-degree spectral algorithms, which have better noise
tolerance than naive spectral methods [HSSS16, DM14b, RRS16, BGL16]. Sparse PCA, which
is used in machine learning and statistics to find important coordinates in high-dimensional
data sets, has attracted much attention in recent years for being apparently computationally intractable to solve with a number of samples which is more than sufficient for brute-force algorithms
[KNV+ 15, BR13b, MW15a]. Tensor PCA appears to exhibit similar behavior [HSS15]. That is, both
problems exhibit information-computation gaps.
Our SoS lower bounds for both problems are the strongest formal evidence yet for information-computation gaps for these problems. We rule out the possibility of subexponential-time SoS
algorithms which improve by polynomial factors on the signal-to-noise ratios tolerated by the
known low-degree spectral methods. In particular, in the case of sparse PCA, it appeared plausible prior to this work that it might be possible in quasipolynomial time to recover a k-sparse unit vector v in p dimensions from O(k log p) samples from the distribution N(0, Id + vv^⊤). Our lower bounds suggest that this is extremely unlikely; in fact this task probably requires polynomial SoS degree and hence exp(n^{Ω(1)}) time for SoS algorithms. This demonstrates that (at least with regard to SoS
algorithms) both problems are much harder than the planted clique problem, previously used as a
basis for reductions in the setting of sparse PCA [BR13b].
Our lower bounds for sparse and tensor PCA are closely connected to the failure of low-degree
spectral methods in high noise regimes of both problems. We prove them both by showing that
with noise beyond what known low-degree spectral algorithms can tolerate, even low-degree scalar
algorithms (the result of restricting low-degree spectral algorithms to 1 × 1 matrices) would require
subexponential time to detect and recover planted signals. We then show that in the restricted
settings of tensor and sparse PCA, ruling out these weakened low-degree spectral algorithms is
enough to imply a strong SoS lower bound.
1.1 SoS and spectral algorithms for robust inference
We turn to our characterization of SoS algorithms for planted problems in terms of low-degree
spectral algorithms. First, a word on planted problems. Many planted problems have several
formulations: search, in which the goal is to recover a planted solution, refutation, in which the goal
is to certify that no planted solution is present, and distinguishing, where the goal is to determine
with good probability whether an instance contains a planted solution or not. Often an algorithm
for one version can be parlayed into algorithms for the others, but distinguishing problems are
often the easiest, and we focus on them here.
A distinguishing problem is specified by two distributions on instances: a planted distribution
supported on instances with a hidden structure, and a uniform distribution, where samples w.h.p.
contain no hidden structure. Given an instance drawn with equal probability from the planted or
the uniform distribution, the goal is to determine with probability greater than 1/2 whether or not
the instance comes from the planted distribution. For example:
Planted clique. Uniform distribution: G(n, 1/2), the Erdős–Rényi distribution, which w.h.p. contains no clique of size ω(log n). Planted distribution: the uniform distribution on graphs containing an n^ε-size clique, for some ε > 0. (The problem gets harder as ε gets smaller, since the distance between the distributions shrinks.)
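For concreteness, a small NumPy sketch of sampling from these two distributions is given below; the function name and interface are illustrative.

```python
import numpy as np

def sample_clique_instance(n, eps, planted, rng=None):
    """Sample an adjacency matrix from G(n, 1/2), optionally planting an
    n^eps-size clique on a random vertex subset (the planted distribution)."""
    rng = np.random.default_rng(rng)
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T                          # symmetric 0/1 adjacency, zero diagonal
    if planted:
        k = int(np.ceil(n ** eps))
        clique = rng.choice(n, size=k, replace=False)
        A[np.ix_(clique, clique)] = 1    # force all edges inside the clique
        np.fill_diagonal(A, 0)
    return A
```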
Planted 3xor. Uniform distribution: a 3xor instance on n variables and m > n equations x_i x_j x_k = a_{ijk}, where all the triples (i, j, k) and the signs a_{ijk} ∈ {±1} are sampled uniformly and independently. No assignment to x will satisfy more than a 0.51-fraction of the equations, w.h.p. Planted distribution: the same, except the signs a_{ijk} are sampled to correlate with b_i b_j b_k for a randomly chosen b ∈ {±1}^n, so that the assignment x = b satisfies a 0.9-fraction of the equations. (The problem gets easier as m/n gets larger, and the contradictions in the uniform case become more locally apparent.)
We now formally define a family of distinguishing problems, in order to give our main theorem. Let I be a set of instances corresponding to a product space (for concreteness one may think of I as the set of graphs on n vertices, indexed by {0, 1}^{\binom{n}{2}}, although the theorem applies more broadly). Let ν, our uniform distribution, be a product distribution on I.
With some decision problem P in mind (e.g. does G contain a clique of size > n^ε?), let X be a set of solutions to P; again for concreteness one may think of X as being associated with cliques in a graph, so that X ⊂ {0, 1}^n is the set of all indicator vectors on at least n^ε vertices.
For each solution x ∈ X, let µ_{|x} be the uniform distribution over instances I ∈ I that contain x. For example, in the context of planted clique, if x is a clique on vertices 1, . . . , n^ε, then µ_{|x} would be the uniform distribution on graphs containing the clique 1, . . . , n^ε. We define the planted distribution µ to be the uniform mixture over the µ_{|x}, i.e. µ = E_{x∼X} µ_{|x}.
The following is our main theorem on the equivalence of sum of squares algorithms for distinguishing problems and spectral algorithms employing low-degree matrix polynomials.
Theorem 1.1 (Informal). Let N, n ∈ ℕ, and let A, B be sets of real numbers. Let I be a family of instances over A^N, and let P be a decision problem over I with X ⊆ B^n the set of possible solutions to P over I. Let {g_j(x, I)} be a system of n^{O(d)} polynomials of degree at most d in the variables x and constant degree in the variables I that encodes P, so that
• for I ∼_ν I, with high probability the system is unsatisfiable and admits a degree-d SoS refutation, and
• for I ∼_µ I, with high probability the system is satisfiable by some solution x ∈ X, and x remains feasible even if all but an n^{−0.01}-fraction of the coordinates of I are re-randomized according to ν.
Then there exists a matrix Q : I → ℝ^{[n]^{≤d} × [n]^{≤d}} whose entries are degree-O(d) polynomials, such that
E_{I∼ν} λ⁺_max(Q(I)) ≤ 1,   while   E_{I∼µ} λ⁺_max(Q(I)) ≥ n^{10d},
where λ⁺_max denotes the maximum non-negative eigenvalue.
The condition that a solution x remain feasible if all but a fraction of the coordinates of I ∼ µ_{|x} are re-randomized should be interpreted as a noise-robustness condition. To see an example, in the context of planted clique, suppose we start with a planted distribution over graphs with a clique x of size n^{ε+0.01}. If a random subset of n^{0.99} vertices is chosen, and all edges not entirely contained in that subset are re-randomized according to the G(n, 1/2) distribution, then with high probability at least n^ε of the vertices in x remain in a clique, and so x remains feasible for the problem P: does G have a clique of size > n^ε?
1.2 SoS and information-computation gaps
Computational complexity of planted problems has become a rich area of study. The goal is to
understand which planted problems admit efficient (polynomial time) algorithms, and to study the
information-computation gap phenomenon: many problems have noisy regimes in which planted
structures can be found by inefficient algorithms, but (conjecturally) not by polynomial time
algorithms. One example is the planted clique problem, where the goal is to find a large clique in
a sample from the uniform distribution over graphs containing a clique of size n^ε for a small constant ε > 0. While the problem is solvable for any ε > 0 by a brute-force algorithm requiring n^{Ω(log n)} time, polynomial time algorithms are conjectured to require ε ≥ 1/2.
A common strategy to provide evidence for such a gap is to prove that powerful classes of
efficient algorithms are unable to solve the planted problem in the (conjecturally) hard regime.
SoS algorithms are particularly attractive targets for such lower bounds because of their broad
applicability and strong guarantees.
In a recent work, Barak et al. [BHK+ 16] show an SoS lower bound for the planted clique problem, demonstrating that when ε < 1/2, SoS algorithms require n^{Ω(log n)} time to solve planted clique. Intriguingly, they show in the case of planted clique that SoS algorithms requiring ≈ n^d time can distinguish planted from random graphs only when there is a scalar-valued degree-(≈ d · log n) polynomial p : ℝ^{n×n} → ℝ (here A is the adjacency matrix of a graph) with
E_{G(n,1/2)} p(A) = 0,   E_{planted} p(A) ≥ n^{Ω(1)} · (E_{G(n,1/2)} p(A)²)^{1/2}.
That is, such a polynomial p has much larger expectation under the planted distribution than its standard deviation under the uniform distribution. (The choice of n^{Ω(1)} is somewhat arbitrary, and could be replaced with Ω(1) or n^{Ω(d)} with small changes in the parameters.) By showing that as long as ε < 1/2 any such polynomial p must have degree Ω(log n)², they rule out efficient SoS algorithms when ε < 1/2. Interestingly, this matches the spectral distinguishing threshold—the spectral algorithm of [AKS98] is known to work when ε > 1/2.
This stronger characterization of SoS for the planted clique problem, in terms of scalar distinguishing algorithms rather than spectral distinguishing algorithms, may at first seem insignificant. To see why the scalar characterization is more powerful, we point out that if the degree-d moments of the planted and uniform distributions are known, determining the optimal scalar distinguishing polynomial is easy: given a planted distribution µ and a random distribution ν over instances I, one just solves a linear algebra problem in the n^{d log n} coefficients of p to maximize the expectation over µ relative to ν:
max_p E_{I∼µ}[p(I)]   s.t.   E_{I∼ν}[p²(I)] = 1.
It is not difficult to show that the optimal solution to the above program has a simple form: it is the projection of the relative density of µ with respect to ν onto the degree-(d log n) polynomials. So given a pair of distributions µ, ν, in n^{O(d log n)} time, it is possible to determine whether there
exists a degree-d log n scalar distinguishing polynomial. Answering the same question about the
existence of a spectral distinguisher is more complex, and to the best of our knowledge cannot be
done efficiently.
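To make the linear-algebra viewpoint concrete, the following NumPy sketch estimates the optimal scalar distinguisher from samples: over polynomials p(I) = ⟨c, φ(I)⟩ spanned by a fixed feature map φ (e.g. all monomials of degree at most d), it maximizes E_µ[p] subject to E_ν[p²] = 1, whose maximizer is proportional to M⁻¹a with M = E_ν[φφ^⊤] and a = E_µ[φ]. The Monte-Carlo estimation and all function names are illustrative assumptions; in the setting above the moments would be computed exactly.

```python
import numpy as np

def best_scalar_distinguisher(features, sample_planted, sample_uniform, n_mc=20000):
    """Estimate the optimal scalar distinguisher over span(features)."""
    Phi_u = np.array([features(sample_uniform()) for _ in range(n_mc)])
    Phi_p = np.array([features(sample_planted()) for _ in range(n_mc)])
    M = Phi_u.T @ Phi_u / n_mc            # second moments under the uniform distribution
    a = Phi_p.mean(axis=0)                # first moments under the planted distribution
    c = np.linalg.solve(M + 1e-8 * np.eye(len(a)), a)
    c /= np.sqrt(c @ M @ c)               # normalize so that E_uniform[p^2] = 1
    advantage = c @ a                      # E_planted[p] under this normalization
    return c, advantage
```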
Given this powerful theorem for the case of the planted clique problem, one may be tempted
to conjecture that this stronger, scalar distinguisher characterization of the SoS algorithm applies
more broadly than just to the planted clique problem, and perhaps as broadly as Theorem 1.1. If
this conjecture is true, given a pair of distributions ν and µ with known moments, it would be
possible in many cases to efficiently and mechanically determine whether polynomial-time SoS
distinguishing algorithms exist!
Conjecture 1.2. In the setting of Theorem 1.1, the conclusion may be replaced with the conclusion that
there exists a scalar-valued polynomial p : I → ℝ of degree O(d · log n) so that
E_uniform p(I) = 0   and   E_planted p(I) ≥ n^{Ω(1)} (E_uniform p(I)²)^{1/2}.
To illustrate the power of this conjecture, in the beginning of Section 6 we give a short and
self-contained explanation of how this predicts, via simple linear algebra, our n^{Ω(1)}-degree SoS
lower bound for tensor PCA. As evidence for the conjecture, we verify this prediction by proving
such a lower bound unconditionally.
We also note why Theorem 1.1 does not imply Conjecture 1.2. While, in the notation of that
theorem, the entries of Q(I) are low-degree polynomials in I, the function M ↦ λ⁺_max(M) is not (to the best of our knowledge) a low-degree polynomial in the entries of M (even approximately). (This stands in contrast to, say, the operator norm or Frobenius norm of M, both of which are exactly
or approximately low-degree polynomials in the entries of M.) This means that the final output
of the spectral distinguishing algorithm offered by Theorem 1.1 is not a low-degree polynomial in
the instance I.
1.3 Exponential lower bounds for sparse PCA and tensor PCA
Our other main results are strong exponential lower bounds on the sum-of-squares method (specifically, against 2^{n^{Ω(1)}}-time or n^{Ω(1)}-degree algorithms) for the tensor and sparse principal component analysis (PCA) problems. We prove the lower bounds by extending the techniques pioneered in [BHK+ 16].
In the present work we describe the proofs informally, leaving full details to a forthcoming full
version.
Tensor PCA. We start with the simpler case of tensor PCA, introduced by [RM14].
Problem 1.3 (Tensor PCA). Given an order-k tensor in (ℝ^n)^{⊗k}, determine whether it comes from:
• Uniform Distribution: each entry of the tensor sampled independently from N(0, 1).
• Planted Distribution: a spiked tensor, T = λ · v^{⊗k} + G, where v is sampled uniformly from the unit sphere S^{n−1}, and where G is a random tensor with each entry sampled independently from N(0, 1).
Here, we think of v as a signal hidden by Gaussian noise. The parameter λ is a signal-to-noise
ratio. In particular, as λ grows, we expect the distinguishing problem above to get easier.
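A minimal NumPy sampler for the two distributions of Problem 1.3 is sketched below (names and interface are illustrative).

```python
import numpy as np

def sample_tensor_pca(n, k, lam, planted, rng=None):
    """Sample from the tensor PCA distinguishing problem: Gaussian noise G,
    plus lam * v^{(tensor power k)} for a uniformly random unit vector v if planted."""
    rng = np.random.default_rng(rng)
    G = rng.standard_normal(size=(n,) * k)
    if not planted:
        return G
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    spike = v
    for _ in range(k - 1):
        spike = np.tensordot(spike, v, axes=0)   # build v^{⊗k} by repeated outer products
    return lam * spike + G
```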
Tensor PCA is a natural generalization of the PCA problem in machine learning and statistics.
Tensor methods in general are useful when data naturally has more than two modalities: for
example, one might consider a recommender system which factors in not only people and movies
but also time of day. Many natural tensor problems are NP hard in the worst-case. Though this is
not necessarily an obstacle to machine learning applications, it is important to have average-case
models in which to study algorithms for tensor problems. The spiked tensor setting we consider
here is one such simple model.
Turning to algorithms: consider first the ordinary PCA problem in a spiked-matrix model.
Given an n × n matrix M, the problem is to distinguish between the case where every entry of M
is independently drawn from the standard Gaussian distribution N(0, 1) and the case when M is
drawn from a distribution as above with an added rank one shift λvv ⊤ in a uniformly random
direction v. A natural and well-studied algorithm, which solves this problem to information-theoretic optimality, is to threshold on the largest singular value/spectral norm of the input matrix. Equivalently, one thresholds on the maximum of the degree-two polynomial ⟨x, Mx⟩ over x ∈ S^{n−1}.
A natural generalization of this algorithm to the tensor PCA setting (restricting for simplicity to k = 3 for this discussion) is the maximum of the degree-three polynomial ⟨T, x^{⊗3}⟩ over the unit sphere—equivalently, the (symmetric) injective tensor norm of T. This maximum can be shown to be much larger in the case of the planted distribution so long as λ ≫ √n. Indeed, this approach to distinguishing between planted and uniform distributions is information-theoretically optimal [PWB16, BMVX16]. Since recovering the spike v and optimizing the polynomial ⟨T, x^{⊗3}⟩ on the
sphere are equivalent, tensor PCA can be thought of as an average-case version of the problem of
optimizing a degree-3 polynomial on the unit sphere (this problem is NP hard in the worst case,
even to approximate [HL09, BBH+12]).
Even in this average-case model, it is believed that there is a gap between which signal strengths
λ allow recovery of v by brute-force methods and which permit polynomial time algorithms. This is
quite distinct from the vanilla PCA setting, where eigenvector algorithms solve the spike-recovery
problem to information-theoretic optimality. Nevertheless, the best-known algorithms for tensor
PCA arise from computing convex relaxations of this degree-3 polynomial optimization problem.
Specifically, the SoS method captures the state of the art algorithms for the problem; it is known
to recover the vector v to o(1) error in polynomial time whenever λ ≫ n^{3/4} [HSS15]. A major open question in this direction is to understand the complexity of the problem for λ ≤ n^{3/4−ε}. Algorithms (again captured by SoS) are known which run in 2^{n^{O(ε)}} time [RRS16, BGG+ 16]. We show the following theorem, which establishes that the sub-exponential algorithm above is in fact nearly optimal among SoS algorithms.
Theorem 1.4. For a tensor T, let
SoS_d(T) = max_{Ẽ} Ẽ[⟨T, x^{⊗k}⟩]   such that Ẽ is a degree-d pseudoexpectation and satisfies {‖x‖² = 1}.²
For every small enough constant ε > 0, if T ∈ (ℝ^n)^{⊗k} has iid Gaussian or {±1} entries, then
E_T SoS_d(T) ≥ n^{k/4−ε},   for every d ≤ n^{c·ε}, for some universal c > 0.
²For definitions of pseudoexpectations and related matters, see the survey [BS14].
In particular, for third-order tensors (i.e. k = 3), since degree-n^{Ω(ε)} SoS is unable to certify that a random 3-tensor has maximum value much less than n^{3/4−ε}, this SoS relaxation cannot be used to distinguish the planted and random distributions above when λ ≪ n^{3/4−ε}.³
Sparse PCA. We turn to sparse PCA, which we formalize as the following planted distinguishing
problem.
Problem 1.5 (Sparse PCA (λ, k)). Given an n × n symmetric real matrix A, determine whether A comes from:
• Uniform Distribution: each upper-triangular entry of the matrix A is sampled iid from N(0, 1); other entries are filled in to preserve symmetry.
• Planted Distribution: a random k-sparse unit vector v with entries in {±1/√k, 0} is sampled, and B is sampled from the uniform distribution above; then A = B + λ · vv^⊺.
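For reference, a NumPy sketch of sampling from the spiked Wigner model of Problem 1.5 (names and interface are illustrative):

```python
import numpy as np

def sample_sparse_pca(n, k, lam, planted, rng=None):
    """Spiked Wigner model: symmetric Gaussian noise, plus lam * v v^T for a
    random k-sparse unit vector v with entries in {+-1/sqrt(k), 0} if planted."""
    rng = np.random.default_rng(rng)
    B = np.triu(rng.standard_normal((n, n)), 1)
    B = B + B.T + np.diag(rng.standard_normal(n))   # symmetric noise matrix
    if not planted:
        return B
    support = rng.choice(n, size=k, replace=False)
    v = np.zeros(n)
    v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
    return B + lam * np.outer(v, v)
```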
We defer significant discussion to Section 6, noting just a few things before stating our main
theorem on sparse PCA. First, the planted model above is sometimes called the spiked Wigner
model—this refers to the independence of the entries of the matrix B. An alternative model for
sparse PCA is the spiked Wishart model: A is replaced by Σ_{i≤m} x_i x_i^⊺, where each x_i ∼ N(0, Id + β·vv^⊺), for some number m ∈ ℕ of samples and some signal strength β ∈ ℝ. Though there are technical
differences between the models, to the best of our knowledge all known algorithms with provable
guarantees are equally applicable to either model; we expect that our SoS lower bounds also apply
in the spiked Wishart model.
We generally think of k, λ as small powers of n, i.e. n^ρ for some ρ ∈ (0, 1); this allows us to generally ignore logarithmic factors in our arguments. As in the tensor PCA setting, a natural and information-theoretically optimal algorithm for sparse PCA is to maximize the quadratic form ⟨x, Ax⟩, this time over k-sparse unit vectors. For A from the uniform distribution, standard techniques (ε-nets and union bounds) show that the maximum value achievable is O(√(k log n)) with high probability, while for A from the planted model of course ⟨v, Av⟩ ≈ λ. So, when λ ≫ √(k log n) one may distinguish the two models by this maximum value.
However, this maximization problem is NP hard for general quadratic forms A [CPR16]. So,
efficient algorithms must use some other distinguisher which leverages the randomness in the
instances. Essentially only two polynomial-time-computable distinguishers are known.⁴ If λ ≫ √n then the maximum eigenvalue of A distinguishes the models. If λ ≫ k then the planted model can be distinguished by the presence of large diagonal entries of A. Notice both of these distinguishers fail for some choices of λ (that is, √k ≪ λ ≪ min(√n, k)) for which brute-force methods (optimizing ⟨x, Ax⟩ over sparse x) could successfully distinguish planted from uniform A's. The theorem below should be interpreted as an impossibility result for SoS algorithms in the √k ≪ λ ≪ min(√n, k) regime. This is the strongest known impossibility result for sparse PCA among those ruling out classes of efficient algorithms (one reduction-based result is also known, which shows sparse PCA is at least as hard as the planted clique problem [BR13a]). It is also the first evidence that the problem may require subexponential (as opposed to merely quasi-polynomial) time.
3In fact, our proof for this theorem will show somewhat more: that a large family of constraints—any valid constraint
which is itself a low-degree polynomial of T—could be added to this convex relaxation and the lower bound would still
obtain.
4If one studies the problem at much finer granularity than we do here, in particular studying λ up to low-order
additive terms and how precisely it is possible to estimate the planted signal v, then the situation is more subtle [DM14a].
Theorem 1.6. If A ∈ ℝ^{n×n}, let
SoS_{d,k}(A) = max_{Ẽ} Ẽ⟨x, Ax⟩   s.t. Ẽ is degree d and satisfies {x_i³ = x_i, ‖x‖² = k}.
There are absolute constants c, ε* > 0 so that for every ρ ∈ (0, 1) and ε ∈ (0, ε*), if k = n^ρ, then for d ≤ n^{c·ε},
E_{A∼{±1}^{\binom{n}{2}}} SoS_{d,k}(A) ≥ min(n^{1/2−ε} k, n^{ρ−ε} k).
For more thorough discussion of the theorem, see Section 6.3.
1.4 Related work
On the interplay of SoS relaxations and spectral methods. As we have already alluded to, many
prior works explore the connection between SoS relaxations and spectral algorithms, beginning
with the work of [BBH+ 12] and including the followup works [HSS15, AOW15b, BM16] (plus many
more). Of particular interest are the papers [HSSS16, MS16b], which use the SoS algorithms to
obtain fast spectral algorithms, in some cases running in time linear in the input size (smaller even
than the number of variables in the associated SoS SDP).
In light of our Theorem 1.1, it is particularly interesting to note cases in which the known SoS
lower bounds matching the known spectral algorithms—these problems include planted clique
(upper bound: [AKS98], lower bound:5 [BHK+ 16]), strong refutations for random CSPs (upper
bound:6 [AOW15b, RRS16], lower bounds: [Gri01b, Sch08, KMOW17]), and tensor principal
components analysis (upper bound: [HSS15, RRS16, BGG+ 16], lower bound: this paper).
We also remark that our work applies to several previously-considered distinguishing and
average-case problems within the sum-of-squares algorithmic framework: block models [MS16a],
densest-k-subgraph [BCC+ 10]; for each of these problems, we have by Theorem 1.1 an equivalence
between efficient sum-of-squares algorithms and efficient spectral algorithms, and it remains to
establish exactly what the tradeoff is between efficiency of the algorithm and the difficulty of
distinguishing, or the strength of the noise.
To the best of our knowledge, no previous work has attempted to characterize SoS relaxations for planted problems by simpler algorithms in the generality we do here. Some works have considered characterizing degree-2 SoS relaxations (i.e. basic semidefinite programs) in terms of
simpler algorithms. One such example is recent work of Fan and Montanari [FM16] who showed
that for some planted problems on sparse random graphs, a class of simple procedures called local
algorithms performs as well as semidefinite programming relaxations.
On strong SoS lower bounds for planted problems. By now, there’s a large body of work that
establishes lower bounds on SoS SDP for various average case problems. Beginning with the work
of Grigoriev [Gri01a], a long line of work has established tight lower bounds for random constraint
satisfaction problems [Sch08, BCK15, KMOW17] and planted clique [MPW15, DM15, HKP15, RS15,
5SDP lower bounds for the planted clique problem were known for smaller degrees of sum-of-squares relaxations
and for other SDP relaxations before; see the references therein for details.
6There is a long line of work on algorithms for refuting random CSPs, and 3SAT in particular; the listed papers
contain additional references.
BHK+ 16]. The recent SoS lower bound for planted clique of [BHK+ 16] was particularly influential
to this work, setting the stage for our main line of inquiry. We also draw attention to previous
work on lower bounds for the tensor PCA and sparse PCA problems in the degree-4 SoS relaxation
[HSS15, MW15b]—our paper improves on this and extends our understanding of lower bounds
for tensor and sparse PCA to any degree.
Tensor principal component analysis was introduced by Montanari and Richard [RM14], who identified the information-theoretic threshold for recovery of the planted component and analyzed the maximum likelihood estimator for the problem. The work of [HSS15] began the effort to analyze the sum-of-squares method for the problem and showed that it yields an efficient algorithm for recovering the planted component with signal strength ω̃(n^{3/4}). They also established that this
threshold is tight for the sum of squares relaxation of degree 4. Following this, Hopkins et al.
[HSSS16] showed how to extract a linear time spectral algorithm from the above analysis. Tomioka
and Suzuki derived tight information theoretic thresholds for detecting planted components by
establishing tight bounds on the injective tensor norm of random tensors [TS14]. Finally, very
recently, Raghavendra et al. and Bhattiprolu et al. independently showed sub-exponential time algorithms for tensor PCA [RRS16, BGL16]. Their algorithms are spectral and are captured by the sum-of-squares method.
1.5 Organization
In Section 2 we set up and state our main theorem on SoS algorithms versus low-degree spectral
algorithms. In Section 5 we show that the main theorem applies to numerous planted problems—
we emphasize that checking each problem is very simple (and barely requires more than a careful
definition of the planted and uniform distributions). In Section 3 and Section 4 we prove the main
theorem on SoS algorithms versus low-degree spectral algorithms.
In Section 7 we prepare to prove our lower bound for tensor PCA by proving a structural theorem on factorizations of low-degree matrix polynomials with well-behaved Fourier transforms. In Section 8 we prove our lower bound for tensor PCA, using some tools proved in Section 9.
Notation. For two matrices A, B, let ⟨A, B⟩ ≜ Tr(AB). Let ‖A‖_Fr denote the Frobenius norm, and ‖A‖ its spectral norm. For matrix-valued functions A, B over I and a distribution ν over I ∈ I, we will denote ⟨A, B⟩_ν ≜ E_{I∼ν} ⟨A(I), B(I)⟩ and ‖A‖_{Fr,ν} ≜ (E_{I∼ν} ⟨A(I), A(I)⟩)^{1/2}.
For a vector of formal variables x = (x_1, . . . , x_n), we use x^{≤d} to denote the vector consisting of all monomials of degree at most d in these variables. Furthermore, let us denote X^{≤d} ≜ (x^{≤d})(x^{≤d})^T.
2 Distinguishing Problems and Robust Inference
In this section, we set up the formal framework within which we will prove our main result.
Uniform vs. Planted Distinguishing Problems
We begin by describing a class of distinguishing problems. For A a set of real numbers, we will use I ⊆ A^N to denote a space of instances indexed by N variables—for the sake of concreteness, it will be useful to think of I as {0, 1}^N; for example, we could have N = \binom{n}{2} and I as the set of all graphs on n vertices. However, the results that we will show here continue to hold in other contexts, where the space of all instances is ℝ^N or [q]^N.
Definition 2.1 (Uniform Distinguishing Problem). Suppose that I is the space of all instances, and
suppose we have two distributions over I, a product distribution ν (the “uniform” distribution),
and an arbitrary distribution µ (the “planted” distribution).
In a uniform distinguishing problem, we are given an instance I ∈ I which is sampled with probability 1/2 from ν and with probability 1/2 from µ, and the goal is to determine with probability greater than 1/2 + ε which distribution I was sampled from, for any constant ε > 0.
Polynomial Systems
In the uniform distinguishing problems that we are interested in, the planted distribution µ will be
a distribution over instances that obtain a large value for some optimization problem of interest (e.g. the max clique problem). We define polynomial systems in order to formally capture optimization
problems.
Program 2.2 (Polynomial System). Let A, B be sets of real numbers, let n, N ∈ ℕ, and let I ⊆ A^N be a space of instances and X ⊆ B^n be a space of solutions. A polynomial system is a set of polynomial equalities
g_j(x, I) = 0   ∀ j ∈ [m],
where {g_j}_{j=1}^m are polynomials in the program variables {x_i}_{i∈[n]}, representing x ∈ X, and in the instance variables {I_j}_{j∈[N]}, representing I ∈ I. We define deg_prog(g_j) to be the degree of g_j in the program variables, and deg_inst(g_j) to be the degree of g_j in the instance variables.
Remark 2.3. For the sake of simplicity, the polynomial system Program 2.2 has no inequalities.
Inequalities can be incorporated into the program by converting each inequality into an equality with an additional slack variable. Our main theorem still holds, but for some minor modifications
of the proof, as outlined in Section 4.
A polynomial system allows us to capture problem-specific objective functions as well as
problem-specific constraints. For concreteness, consider a quadratic program which checks if a graph on n vertices contains a clique of size k. We can express this with the polynomial system over program variables x ∈ ℝ^n and instance variables I ∈ {0, 1}^{\binom{n}{2}}, where I_{ij} = 1 iff there is an edge from i to j, as follows:
{ Σ_{i∈[n]} x_i − k = 0 } ∪ { x_i(x_i − 1) = 0 }_{i∈[n]} ∪ { (1 − I_{ij}) x_i x_j = 0 }_{i,j ∈ \binom{[n]}{2}}.
Planted Distributions
We will be concerned with planted distributions of a particular form; first, we fix a polynomial
system of interest S = {g_j(x, I)}_{j∈[m]} and some set X ⊆ B^n of feasible solutions for S, so that the program variables x represent elements of X. Again, for concreteness, if I is the set of graphs on n vertices, we can take X ⊆ {0, 1}^n to be the set of indicators for subsets of at least n^ε vertices.
For each fixed x ∈ X, let µ_{|x} denote the uniform distribution over I ∈ I for which the polynomial system {g_j(x, I)}_{j∈[m]} is feasible. The planted distribution µ is given by taking the uniform mixture over the µ_{|x}, i.e., µ = E_{x∼U(X)}[µ_{|x}].
SoS Relaxations
If we have a polynomial system {g_j}_{j∈[m]} where deg_prog(g_j) ≤ 2d for every j ∈ [m], then the degree-2d sum-of-squares SDP relaxation for the polynomial system Program 2.2 can be written as follows.
Program 2.4 (SoS Relaxation for Polynomial System). Let S = {g_j(x, I)}_{j∈[m]} be a polynomial system in instance variables I ∈ I and program variables x ∈ X. If deg_prog(g_j) ≤ 2d for all j ∈ [m], then an SoS relaxation for S is
⟨G_j(I), X⟩ = 0   ∀ j ∈ [m]
X ⪰ 0
where X is an [n]^{≤d} × [n]^{≤d} matrix containing the variables of the SDP and G_j : I → ℝ^{[n]^{≤d} × [n]^{≤d}} are matrices containing the coefficients of g_j(x, I) in x, so that the constraint ⟨G_j(I), X⟩ = 0 encodes the constraint g_j(x, I) = 0 in the SDP variables. Note that the entries of G_j are polynomials of degree at most deg_inst(g_j) in the instance variables.
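As an illustration of Program 2.4, the sketch below writes the degree-2 (d = 1) relaxation of the k-clique polynomial system from earlier in this section as a feasibility SDP in CVXPY; it assumes cvxpy with the SCS solver is available, and is meant only to show how each constraint g_j(x, I) = 0 becomes a linear constraint ⟨G_j(I), X⟩ = 0 on the moment matrix.

```python
import cvxpy as cp
import numpy as np

def degree2_sos_clique_feasible(adj, k):
    """Degree-2 SoS relaxation of the k-clique system for adjacency matrix `adj`.

    The SDP variable X is indexed by the monomials {1, x_1, ..., x_n} of degree
    at most 1; row/column 0 corresponds to the constant monomial.
    Returns True if the relaxation is feasible (no degree-2 refutation found)."""
    n = adj.shape[0]
    X = cp.Variable((n + 1, n + 1), PSD=True)
    constraints = [X[0, 0] == 1]
    # x_i^2 - x_i = 0  ->  X[i, i] == X[0, i]
    constraints += [X[i, i] == X[0, i] for i in range(1, n + 1)]
    # (sum_i x_i) - k = 0, multiplied by 1 and by each x_j
    constraints += [cp.sum(X[0, 1:]) == k]
    constraints += [cp.sum(X[j, 1:]) == k * X[0, j] for j in range(1, n + 1)]
    # (1 - I_ij) x_i x_j = 0  ->  X[i, j] == 0 for every non-edge
    constraints += [X[i + 1, j + 1] == 0
                    for i in range(n) for j in range(i + 1, n) if adj[i, j] == 0]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```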
Sub-instances
Suppose that I ⊆ A^N is a family of instances; then given an instance I ∈ I and a subset S ⊆ [N], let I_S denote the sub-instance consisting of the coordinates within S. Further, for a distribution Θ over subsets of [N], let I_S ∼_Θ I denote a sub-instance generated by sampling S ∼ Θ. Let I↓ denote
the set of all sub-instances of an instance I, and let I↓ denote the set of all sub-instances of all
instances.
Robust Inference
Our result will pertain to polynomial systems that define planted distributions whose solutions to
sub-instances generalize to feasible solutions over the entire instance. We call this property “robust
inference.”
Definition 2.5. Let I ⊆ A^N be a family of instances, let Θ be a distribution over subsets of [N], let S be a polynomial system as in Program 2.2, and let µ be a planted distribution over instances feasible for S. Then the polynomial system S is said to satisfy the robust inference property for probability distribution µ on I and subsampling distribution Θ if, given a subsample I_S of an instance I from µ, one can infer a setting of the program variables x* that remains feasible for S for most completions of I_S.
Formally, there exists a map x : I↓ → ℝ^n such that
Pr_{I∼µ, S∼Θ, Ĩ∼ν|I_S} [x(I_S) is feasible for S on I_S ◦ Ĩ] ≥ 1 − ε(n, d)
for some negligible function ε(n, d). To specify the error probability, we will say that the polynomial system is ε(n, d)-robustly inferable.
Main Theorem
We are now ready to state our main theorem.
Theorem 2.6. Suppose that S is a polynomial system as defined in Program 2.2, of degree at most 2d in the program variables and degree at most k in the instance variables. Let B ≥ d · k ∈ ℕ be such that
1. The polynomial system S is 1/n^{8B}-robustly inferable with respect to the planted distribution µ and the sub-sampling distribution Θ.
2. For I ∼ ν, the polynomial system S admits a degree-d SoS refutation with numbers bounded by n^B with probability at least 1 − 1/n^{8B}.
Let D ∈ ℕ be such that for any subset α ⊆ [N] with |α| ≥ D − 2dk,
Pr_{S∼Θ}[α ⊆ S] ≤ 1/n^{8B}.
Then there exists a degree-2D matrix polynomial Q : I → ℝ^{[n]^{≤d} × [n]^{≤d}} such that
E_{I∼µ}[λ⁺_max(Q(I))] / E_{I∼ν}[λ⁺_max(Q(I))] ≥ n^{B/2}.
Remark 2.7. Our argument implies a stronger result that can be stated in terms of the eigenspaces of the subsampling operator. Specifically, suppose we define
  S_ε := {α | Pr_{S∼Θ}[α ⊆ S] ≤ ε}.
Then the distinguishing polynomial exhibited by Theorem 2.6 satisfies Q ∈ span{monomials I_α | α ∈ S_ε}. This refinement can yield tighter bounds in cases where all monomials of a given degree are not equivalent to each other. For example, in the Planted Clique problem, each monomial consists of a subgraph, and the right measure of the degree of a subgraph is the number of vertices in it, as opposed to the number of edges in it.
In Section 5, we will make the routine verifications that the conditions of this theorem hold for a variety of distinguishing problems: planted clique (Lemma 5.2), refuting random CSPs (Lemma 5.4), stochastic block models (Lemma 5.6), densest-k-subgraph (Lemma 5.8), tensor PCA (Lemma 5.10), and sparse PCA (Lemma 5.12). Now we will proceed to prove the theorem.
3 Moment-Matching Pseudodistributions
We assume the setup from Section 2: we have a family of instances I = A^N, a polynomial system S = {g_j(x, I)}_{j∈[m]} with a family of solutions X ⊆ B^n, a "uniform" distribution ν which is a product distribution over I, and a "planted" distribution µ over I defined by the polynomial system S as described in Section 2.
The contrapositive of Theorem 2.6 is that if S is robustly inferable with respect to µ and a distribution over sub-instances Θ, and if there is no spectral algorithm for distinguishing µ and ν, then with high probability there is no degree-d SoS refutation for the polynomial system S (as defined in Program 2.4). To prove the theorem, we will use duality to argue that if no spectral algorithm exists, then there must exist an object which is in some sense close to a feasible solution to the SoS SDP relaxation.
Since each I in the support of µ is feasible for S by definition, a natural starting point is the SoS SDP solution for instances I ∼ µ. With this in mind, we let Λ : I → (ℝ^{[n]^{≤d}×[n]^{≤d}})_+ be an arbitrary function from the support of µ over I to PSD matrices. In other words, we take
  Λ(I) = µ̂(I) · M(I),
where µ̂ is the relative density of µ with respect to ν, so that µ̂(I) = µ(I)/ν(I), and M is some matrix-valued function such that M(I) ⪰ 0 and ‖M(I)‖ ≤ B for all I ∈ I. Our goal is to find a PSD matrix-valued function P that matches the low-degree moments of Λ in the variables I, while being supported over most of I (rather than just over the support of µ).
The function P : I → (ℝ^{[n]^{≤d}×[n]^{≤d}})_+ is given by the following exponentially large convex program over matrix-valued functions.
Program 3.1 (Pseudodistribution Program).
  min  ‖P‖²_{Fr,ν}                                                              (3.1)
  s.t. ⟨Q, P⟩_ν = ⟨Q, Λ′⟩_ν   ∀ Q : I → ℝ^{[n]^{≤d}×[n]^{≤d}}, deg_inst(Q) ≤ D   (3.2)
       P ⪰ 0
       Λ′ = Λ + η · Id,   2^{−2^{2n}} ≥ η > 0                                    (3.3)
The constraint (3.2) fixes Tr(P), and so the objective function (3.1) can be viewed as minimizing Tr(P²), a proxy for the collision probability of the distribution, which is a measure of entropy.
Remark 3.2. We have perturbed Λ in (3.3) so that we can easily show that strong duality holds in
the proof of Claim 3.4. For the remainder of the paper we ignore this perturbation, as we can
accumulate the resulting error terms and set η to be small enough so that they can be neglected.
The dual of the above program will allow us to relate the existence of an SoS refutation to the
existence of a spectral algorithm.
Program 3.3 (Low-Degree Distinguisher).
  max  ⟨Λ, Q⟩_ν
  s.t. ‖Q₊‖²_{Fr,ν} ≤ 1,
       Q : I → ℝ^{[n]^{≤d}×[n]^{≤d}},  deg_inst(Q) ≤ D
where Q₊ is the projection of Q to the PSD cone.
Claim 3.4. Program 3.3 is a manipulation of the dual of Program 3.1, so that if Program 3.1 has optimum c ≥ 1, then Program 3.3 has optimum at least Ω(√c).
Before we present the proof of the claim, we summarize its central consequence in the following theorem: if Program 3.1 has a large objective value (and therefore does not provide a feasible SoS solution), then there is a spectral algorithm.
Theorem 3.5. Fix a function M : I → ℝ^{[n]^{≤d}×[n]^{≤d}}_+ such that Id ⪰ M ⪰ 0. Let λ⁺_max(·) be the function that gives the largest non-negative eigenvalue of a matrix. Suppose Λ = µ̂ · M; then the optimum of Program 3.1 is equal to opt ≥ 1 only if there exists a low-degree matrix polynomial Q such that
  𝔼_{I∼µ}[λ⁺_max(Q(I))] ≥ Ω(√opt / n^d)
while
  𝔼_{I∼ν}[λ⁺_max(Q(I))] ≤ 1.
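Theorem 3.5 is what makes the phrase "spectral algorithm" concrete: given the matrix polynomial Q it guarantees, one distinguishes µ from ν by comparing the average largest non-negative eigenvalue of Q(I) under the two distributions. The sketch below shows how such a test would be run; the sampler functions sample_planted / sample_null and the matrix polynomial Q are hypothetical inputs, not objects constructed in the paper.

```python
import numpy as np

def lambda_max_plus(M):
    """Largest non-negative eigenvalue of a symmetric matrix."""
    return max(0.0, float(np.linalg.eigvalsh(M)[-1]))

def spectral_statistic(Q, sample_instance, trials=200):
    """Monte Carlo estimate of E[lambda^+_max(Q(I))] for I drawn
    from the given sampler."""
    return float(np.mean([lambda_max_plus(Q(sample_instance()))
                          for _ in range(trials)]))

# Distinguishing rule: declare "planted" when the statistic exceeds a
# threshold placed between the two estimated means, e.g.
#   thr = 0.5 * (spectral_statistic(Q, sample_null)
#                + spectral_statistic(Q, sample_planted))
```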
Proof. By Claim 3.4, if the value of Program 3.1 is opt ≥ 1, then there is a polynomial Q that achieves a value of Ω(√opt) for the dual. It follows that
  𝔼_{I∼µ}[λ⁺_max(Q(I))] ≥ (1/n^d) · 𝔼_{I∼µ}[⟨Id, Q(I)⟩] ≥ (1/n^d) · ⟨Λ, Q⟩_ν = Ω(√opt / n^d),
while
  𝔼_{I∼ν}[λ⁺_max(Q(I))] ≤ (𝔼_{I∼ν}[λ⁺_max(Q(I))²])^{1/2} ≤ (𝔼_{I∼ν} ‖Q₊(I)‖²_Fr)^{1/2} ≤ 1.
It is interesting to note that the specific structure of the PSD matrix-valued function M plays no role in the above argument: since M serves as a proxy for monomials in the solution as represented by the program variables x^{⊗d}, it follows that the choice of how to represent the planted solution is not critical. Although seemingly counterintuitive, this is natural because the property of being distinguishable by low-degree distinguishers or by SoS SDP relaxations is a property of ν and µ.
We wrap up the section by presenting a proof of Claim 3.4.
Proof of Claim 3.4. We take the Lagrangian dual of Program 3.1. Our dual variables will be some combination of low-degree matrix polynomials Q and a PSD matrix A:
  L(P, Q, A) = ‖P‖²_{Fr,ν} − ⟨Q, P − Λ′⟩_ν − ⟨A, P⟩_ν,   with A ⪰ 0.
It is easy to verify that if P is not PSD, then A can be chosen so that the value of L is ∞. Similarly, if there exists a low-degree polynomial upon which P and Λ differ in expectation, Q can be chosen as a multiple of that polynomial so that the value of L is ∞.
Now, we argue that Slater's conditions are met for Program 3.1, as P = Λ′ is strictly feasible. Thus strong duality holds, and therefore
  min_P max_{A⪰0, Q} L(P, Q, A) ≤ max_{A⪰0, Q} min_P L(P, Q, A).
Taking the partial derivative of L(P, Q, A) with respect to P, we have
  ∂L/∂P (P, Q, A) = 2·P − Q − A,
where the derivative is taken in the space of functions from I to ℝ^{[n]^{≤d}×[n]^{≤d}}. By the convexity of L as a function of P, it follows that setting ∂L/∂P = 0 gives the minimizer P = ½(A + Q). Substituting, it follows that
  min_P max_{A⪰0, Q} L(P, Q, A) ≤ max_{A⪰0, Q} ¼‖A + Q‖²_{Fr,ν} − ⟨Q, ½(A + Q) − Λ′⟩_ν − ½⟨A, A + Q⟩_ν
                               = max_{A⪰0, Q} ⟨Q, Λ′⟩_ν − ¼‖A + Q‖²_{Fr,ν}.   (3.4)
Now it is clear that the maximizing choice of A is to set A = −Q₋, the negation of the negative-semidefinite projection of Q. Thus (3.4) simplifies to
  min_P max_{A⪰0, Q} L(P, Q, A) ≤ max_Q ⟨Q, Λ′⟩_ν − ¼‖Q₊‖²_{Fr,ν}
                               ≤ max_Q ⟨Q, Λ⟩_ν + η·Tr_ν(Q₊) − ¼‖Q₊‖²_{Fr,ν},   (3.5)
where we have used the shorthand Tr_ν(Q₊) := 𝔼_{I∼ν} Tr(Q(I)₊). Now suppose that the low-degree matrix polynomial Q* achieves a right-hand-side value of
  ⟨Q*, Λ⟩_ν + η·Tr_ν(Q*₊) − ¼‖Q*₊‖²_{Fr,ν} ≥ c.
Consider Q′ = Q*/‖Q*₊‖_{Fr,ν}. Clearly ‖Q′₊‖_{Fr,ν} = 1. Now, multiplying the above inequality through by the scalar 1/‖Q*₊‖_{Fr,ν}, we have that
  ⟨Q′, Λ⟩_ν ≥ c/‖Q*₊‖_{Fr,ν} − η·Tr_ν(Q*₊)/‖Q*₊‖_{Fr,ν} + ¼‖Q*₊‖_{Fr,ν}
            ≥ c/‖Q*₊‖_{Fr,ν} − η·n^d + ¼‖Q*₊‖_{Fr,ν}.
Therefore ⟨Q′, Λ⟩_ν is at least Ω(c^{1/2}), since if ‖Q*₊‖_{Fr,ν} ≥ √c then the third term gives the lower bound, and otherwise the first term gives the lower bound.
Thus, by substituting Q′, the square root of the maximum of (3.5) (within an additive η·n^d) lower-bounds the maximum of the program
  max  ⟨Q, Λ⟩_ν
  s.t. ‖Q₊‖²_{Fr,ν} ≤ 1,   Q : I → ℝ^{[n]^{≤d}×[n]^{≤d}},   deg_inst(Q) ≤ D.
This concludes the proof.
4 Proof of Theorem 2.6
We will prove Theorem 2.6 by contradiction. Let us assume that there exists no degree-2D matrix polynomial that distinguishes ν from µ. First, the lack of distinguishers implies the following fact about scalar polynomials.
Lemma 4.1. Under the assumption that there are no degree-2D distinguishers, for every degree-D scalar polynomial Q,
  ‖Q‖²_{Fr,µ} ≤ n^B · ‖Q‖²_{Fr,ν}.
Proof. Suppose not; then the degree-2D, 1×1 matrix polynomial Tr(Q(I)²) would be a distinguisher between µ and ν.
Constructing Λ. First, we will use the robust inference property of µ to construct a pseudodistribution Λ. Recall again that we have defined µ̂ to be the relative density of µ with respect to ν, so that µ̂(I) = µ(I)/ν(I). For each subset S ⊆ [N], define a PSD matrix-valued function Λ_S : I → (ℝ^{[n]^{≤d}×[n]^{≤d}})_+ as
  Λ_S(I) = 𝔼_{I′_{S̄}∼ν}[µ̂(I_S ◦ I′_{S̄})] · x(I_S)^{≤d} (x(I_S)^{≤d})^⊤,
where we use I_S to denote the restriction of I to S ⊆ [N], and I_S ◦ I′_{S̄} to denote the instance given by completing the sub-instance I_S with the setting I′_{S̄}. Notice that Λ_S is a function depending only on I_S; this fact will be important to us. Define Λ := 𝔼_{S∼Θ} Λ_S. Observe that Λ is a PSD matrix-valued function that satisfies
  ⟨Λ_{∅,∅}, 1⟩_ν = 𝔼_{I∼ν} 𝔼_{S∼Θ} 𝔼_{I′_{S̄}∼ν}[µ̂(I_S ◦ I′_{S̄})] = 𝔼_{S∼Θ} 𝔼_{I_S ◦ I′_{S̄}∼ν}[µ̂(I_S ◦ I′_{S̄})] = 1.   (4.1)
Since Λ(I) is an average over the Λ_S(I), each of which is a feasible solution with high probability, Λ(I) is close to a feasible solution to the SDP relaxation for I. The following lemma formalizes this intuition.
Define G := span{χ_S · G_j | j ∈ [m], S ⊆ [N]}, and use Π_G to denote the orthogonal projection onto G.
Lemma 4.2. Suppose Program 2.2 satisfies the ε-robust inference property with respect to planted distribution µ and subsampling distribution Θ, and suppose ‖x(I_S)^{≤d}‖²₂ ≤ K for all I_S. Then for every G ∈ G, we have
  ⟨Λ, G⟩_ν ≤ √ε · K · (𝔼_{S∼Θ} 𝔼_{Ĩ_{S̄}∼ν} 𝔼_{I∼µ} ‖G(I_S ◦ Ĩ_{S̄})‖²₂)^{1/2}.
Proof. We begin by expanding the left-hand side by substituting the definition of Λ. We have
  ⟨Λ, G⟩_ν = 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨Λ_S(I_S), G(I)⟩ = 𝔼_{S∼Θ} 𝔼_{I∼ν} 𝔼_{I′_{S̄}∼ν} µ̂(I_S ◦ I′_{S̄}) · ⟨x(I_S)^{≤d}(x(I_S)^{≤d})^⊤, G(I)⟩.
And because the inner product is zero if x(I_S) is a feasible solution,
  ≤ 𝔼_{S∼Θ} 𝔼_{I∼ν} 𝔼_{I′_{S̄}∼ν} µ̂(I_S ◦ I′_{S̄}) · 𝟙[x(I_S) is infeasible for S(I)] · ‖x(I_S)^{≤d}‖²₂ · ‖G(I)‖_Fr
  ≤ 𝔼_{S∼Θ} 𝔼_{I∼ν} 𝔼_{I′_{S̄}∼ν} µ̂(I_S ◦ I′_{S̄}) · 𝟙[x(I_S) is infeasible for S(I)] · K · ‖G(I)‖_Fr.
And now, letting Ĩ_{S̄} denote the completion of I_S to I, so that I_S ◦ Ĩ_{S̄} = I, we note that the above is like sampling I′_{S̄}, Ĩ_{S̄} independently from ν and then reweighting by µ̂(I_S ◦ I′_{S̄}), or equivalently taking the expectation over I_S ◦ I′_{S̄} = I′ ∼ µ and Ĩ_{S̄} ∼ ν:
  = 𝔼_{S∼Θ} 𝔼_{I′∼µ} 𝔼_{Ĩ_{S̄}∼ν} 𝟙[x(I_S) is infeasible for S(I_S ◦ Ĩ_{S̄})] · K · ‖G(I_S ◦ Ĩ_{S̄})‖_Fr,
and by Cauchy–Schwarz,
  ≤ K · (𝔼_{S∼Θ} 𝔼_{I′∼µ} 𝔼_{Ĩ_{S̄}∼ν} 𝟙[x(I_S) is infeasible for S(I_S ◦ Ĩ_{S̄})])^{1/2} · (𝔼_{S∼Θ} 𝔼_{I′∼µ} 𝔼_{Ĩ_{S̄}∼ν} ‖G(I_S ◦ Ĩ_{S̄})‖²_Fr)^{1/2}.
The lemma follows by observing that the first term in the product above is exactly the non-robustness-of-inference probability ε.
Corollary 4.3. If G ∈ G is a degree-D polynomial in I, then under the assumption that there are no degree-2D distinguishers for ν, µ,
  ⟨Λ, G⟩_ν ≤ √ε · K · n^B · ‖G‖_{Fr,ν}.
Proof. For each fixing of Ĩ_{S̄}, ‖G(I_S ◦ Ĩ_{S̄})‖²₂ is a degree-2D scalar polynomial in I. Therefore by Lemma 4.1 we have that
  𝔼_{I∼µ} ‖G(I_S ◦ Ĩ_{S̄})‖²_Fr ≤ n^B · 𝔼_{I∼ν} ‖G(I_S ◦ Ĩ_{S̄})‖²_Fr.
Substituting back into the bound in Lemma 4.2, the corollary follows.
Now, since there are no degree-D matrix distinguishers Q, for each S in the support of Θ we can apply reasoning similar to Theorem 3.5 to conclude that there is a high-entropy PSD matrix-valued function P_S that matches the degree-D moments of Λ_S.
Lemma 4.4. If there are no degree-D matrix distinguishers Q for µ, ν, then for each S ∼ Θ, there exists a solution P_S to Program 3.1 (with the variable Λ := Λ_S) and
  ‖P_S‖_{Fr,ν} ≤ n^{(B+d)/4} ≤ n^{B/2}.   (4.2)
This does not follow directly from Theorem 3.5, because a priori a distinguisher for some specific S may only apply to a small fraction of the support of µ. However, we can show that Program 3.1 has large value for Λ_S only if there is a distinguisher for µ, ν.
Proof. By Claim 3.4, it suffices to argue that there is no degree-D matrix polynomial Q which has large inner product with Λ_S relative to its Frobenius norm. So, suppose by way of contradiction that Q is a degree-D matrix polynomial that distinguishes Λ_S, so that ⟨Q, Λ_S⟩_ν ≥ n^{B+d} but ‖Q‖_{Fr,ν} ≤ 1.
It follows by the definition of Λ_S that
  n^{B+d} ≤ ⟨Q, Λ_S⟩_ν = 𝔼_{I∼ν} 𝔼_{I′_{S̄}∼ν} µ̂(I_S ◦ I′_{S̄}) · ⟨Q(I), x(I_S)^{≤d}(x(I_S)^{≤d})^⊤⟩
          = 𝔼_{I_S ◦ I′_{S̄}∼µ} 𝔼_{Ī_{S̄}∼ν} ⟨Q(I_S ◦ Ī_{S̄}), x(I_S)^{≤d}(x(I_S)^{≤d})^⊤⟩
          ≤ 𝔼_µ [λ⁺_max(𝔼_{Ī_{S̄}∼ν} Q(I_S ◦ Ī_{S̄})) · ‖x(I_S)^{≤d}‖²₂].
So, we will show that Q_S(I) := 𝔼_{I′_{S̄}∼ν} Q(I_S ◦ I′_{S̄}) is a degree-D distinguisher for µ. The degree of Q_S is at most D, since averaging over settings of the variables cannot increase the degree. Applying our assumption that ‖x(I_S)^{≤d}‖²₂ ≤ K ≤ n^d, we already have 𝔼_µ λ⁺_max(Q_S) ≥ n^B. It remains to show that 𝔼_ν λ⁺_max(Q_S) is bounded. For this, we use the following fact about the trace.
Fact 4.5 (See e.g. Theorem 2.10 in [CC09]). For a function f : ℝ → ℝ and a symmetric matrix A with eigendecomposition A = Σ λ·vv^⊤, define f(A) = Σ f(λ)·vv^⊤. If f : ℝ → ℝ is continuous and convex, then the map A ↦ Tr(f(A)) is convex for symmetric A.
The function f(t) = (max{0, t})² is continuous and convex over ℝ, so the fact above implies that the map A ↦ ‖A₊‖²_Fr is convex for symmetric A. We can take Q_S to be symmetric without loss of generality, as in the argument above we only consider the inner product of Q_S with symmetric matrices. Now we have that
  ‖(Q_S(I))₊‖²_Fr = ‖(𝔼_{I′_{S̄}}[Q(I_S ◦ I′_{S̄})])₊‖²_Fr ≤ 𝔼_{I′_{S̄}} ‖(Q(I_S ◦ I′_{S̄}))₊‖²_Fr,
where the inequality is by convexity. Taking the expectation over I ∼ ν gives us that ‖(Q_S)₊‖²_{Fr,ν} ≤ ‖Q₊‖²_{Fr,ν} ≤ 1, which gives us our contradiction.
Now, analogously to Λ, set P := 𝔼_{S∼Θ} P_S.
Random Restriction. We will exploit the crucial property that Λ and P are averages over functions that depend on subsets of variables. This has the same effect as a random restriction, in that ⟨P, R⟩_ν essentially depends only on the low-degree part of R. Formally, we will show the following lemma.
Lemma 4.6 (Random Restriction). Fix D, ℓ ∈ ℕ. For matrix-valued functions R : I → ℝ^{ℓ×ℓ} and a family of functions {P_S : I_S → ℝ^{ℓ×ℓ}}_{S⊆[N]}, and a distribution Θ over subsets of [N],
  𝔼_{I∼ν} 𝔼_{S∼Θ} ⟨P_S(I_S), R(I)⟩ ≥ 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R_S^{<D}(I_S)⟩ − ρ(D, Θ)^{1/2} · (𝔼_{S∼Θ} ‖P_S‖²_{Fr,ν})^{1/2} · ‖R‖_{Fr,ν},
where
  ρ(D, Θ) := max_{α : |α| ≥ D} Pr_{S∼Θ}[α ⊆ S].
Proof. We first re-express the left-hand side as
  𝔼_{I∼ν} 𝔼_{S∼Θ} ⟨P_S(I_S), R(I)⟩ = 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R_S(I_S)⟩,
where R_S(I_S) := 𝔼_{I_{S̄}}[R(I)] is obtained by averaging out all coordinates outside S. Splitting the function R_S into its low-degree and high-degree parts, R_S = R_S^{<D} + R_S^{≥D}, and then applying a Cauchy–Schwarz inequality, we get
  𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R_S(I_S)⟩ ≥ 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R_S^{<D}(I_S)⟩ − (𝔼_{S∼Θ} ‖P_S‖²_{Fr,ν})^{1/2} · (𝔼_{S∼Θ} ‖R_S^{≥D}‖²_{Fr,ν})^{1/2}.
Expressing R^{≥D} in the Fourier basis, we have that over a random choice of S ∼ Θ,
  𝔼_{S∼Θ} ‖R_S^{≥D}‖²_{Fr,ν} = Σ_{α : |α| ≥ D} Pr_{S∼Θ}[α ⊆ S] · R̂²_α ≤ ρ(D, Θ) · ‖R‖²_{Fr,ν}.
Substituting into the above inequality, the conclusion follows.
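For the independent-inclusion subsampling distribution used throughout Section 5 (each coordinate kept with probability ρ, a concrete choice, not the general Θ of the lemma), Pr_{S∼Θ}[α ⊆ S] = ρ^{|α|}, so ρ(D, Θ) = ρ^D. A small Monte Carlo sanity check of this identity, offered only as an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def rho_D_estimate(N, rho, D, trials=100000):
    """Estimate max over |alpha| >= D of Pr[alpha subseteq S] for the
    iid-inclusion subsampling distribution; the max is attained at |alpha| = D."""
    alpha = np.arange(D)                      # any fixed set of size D
    hits = 0
    for _ in range(trials):
        S = rng.random(N) < rho
        hits += S[alpha].all()
    return hits / trials

print(rho_D_estimate(N=100, rho=0.5, D=5), 0.5 ** 5)   # both approximately 0.031
```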
Equality Constraints. Since Λ is close to satisfying all the equality constraints G of the SDP, the function P approximately satisfies the low-degree part of G. Specifically, we can prove the following.
Lemma 4.7. Let k ≥ deg_inst(G_j) for all G_j ∈ S. With P defined as above and under the conditions of Theorem 2.6, for any function G ∈ G,
  ⟨P, G^{≤D}⟩_ν ≤ ‖G‖²_{Fr,ν} / n^{2B}.
Proof. Recall that G = span{χ_S · G_j | j ∈ [m], S ⊆ [N]}, and let Π_G be the orthogonal projection onto G. Now, since G ∈ G,
  G^{≤D} = (Π_G G)^{≤D} = (Π_G G^{≤D−2k})^{≤D} + (Π_G G^{>D−2k})^{≤D}.   (4.3)
Now we make the following claim regarding the effect of projection onto the ideal G on the degree of a polynomial.
Claim 4.8. For every polynomial Q, deg(Π_G Q) ≤ deg(Q) + 2k. Furthermore, for all α, Π_G Q^{>α} has no monomials of degree ≤ α − k.
Proof. To establish the first part of the claim, it suffices to show that Π_G Q ∈ span{χ_S · G_j | |S| ≤ deg(Q) + k}, since deg(G_j) ≤ k for all j ∈ [m]. To see this, observe that Π_G Q ∈ G and is orthogonal to every χ_S · G_j with |S| > deg(Q) + k:
  ⟨Π_G Q, χ_S · G_j⟩_ν = ⟨Q, Π_G χ_S · G_j⟩_ν = ⟨Q, χ_S · G_j⟩_ν = ⟨Q·G_j, χ_S⟩_ν = 0,
where the final equality is because deg(χ_S) > deg(G_j) + deg(Q). On the other hand, for every subset S with deg(χ_S) ≤ α − k,
  ⟨Π_G Q^{>α}, χ_S · G_j⟩ = ⟨Q^{>α}, Π_G χ_S · G_j⟩ = ⟨Q^{>α}, χ_S · G_j⟩ = 0,   since α ≥ deg(G_j) + deg(χ_S).
This implies that Π_G Q^{>α} ∈ span{χ_S · G_j | |S| > α − k}, which implies that Π_G Q^{>α} has no monomials of degree ≤ α − k.
Incorporating the above claim into (4.3), we have that
  G^{≤D} = Π_G G^{≤D−2k} + (Π_G G^{>D−2k})^{[D−3k, D]},
where the superscript [D−3k, D] denotes the degree range. Now,
  ⟨P, G^{≤D}⟩_ν = ⟨P, Π_G G^{≤D−2k}⟩_ν + ⟨P, (Π_G G^{>D−2k})^{[D−3k, D]}⟩_ν,
and since Π_G G^{≤D−2k} is of degree at most D we can replace P by Λ:
  = ⟨Λ, Π_G G^{≤D−2k}⟩_ν + ⟨P, (Π_G G^{>D−2k})^{[D−3k, D]}⟩_ν.
Now bounding the first term using Corollary 4.3 with an n^B bound on K,
  ≤ (1/n^{8B})^{1/2} · n^B · n^B · ‖Π_G G^{≤D−2k}‖_{Fr,ν} + ⟨P, (Π_G G^{>D−2k})^{[D−3k, D]}⟩_ν,
and for the latter term we use Lemma 4.6,
  ≤ (1/n^{2B}) · ‖Π_G G^{≤D−2k}‖_{Fr,ν} + (1/n^{4B}) · (𝔼_S ‖P_S‖²_{Fr,ν})^{1/2} · ‖G‖_{Fr,ν},
where we have used the fact that (Π_G G^{>D−2k})^{[D−3k, D]} has only high-degree monomials. By properties of orthogonal projections, ‖Π_G G^{>D−2k}‖_{Fr,ν} ≤ ‖G^{>D−2k}‖_{Fr,ν} ≤ ‖G‖_{Fr,ν}. Along with the bound on ‖P_S‖_{Fr,ν} from (4.2), this implies the claim of the lemma.
Finally, we have all the ingredients to complete the proof of Theorem 2.6.
Proof of Theorem 2.6. Sample an instance I ∼ ν, and suppose by way of contradiction that with high probability the SoS SDP relaxation is infeasible. In particular, this implies that there is a degree-d sum-of-squares refutation of the form
  −1 = a_I(x) + Σ_{j∈[m]} g^I_j(x) · q^I_j(x),
where a_I is a sum of squares of polynomials of degree at most 2d in x, and deg(q^I_j) + deg(g^I_j) ≤ 2d. Let A_I ∈ ℝ^{[n]^{≤d}×[n]^{≤d}} be the matrix of coefficients of a_I(x) on input I, and let G_I be defined similarly for Σ_{j∈[m]} g^I_j(x) · q^I_j(x). We can rewrite the sum-of-squares refutation as a matrix equality,
  −1 = ⟨X^{≤d}, A_I⟩ + ⟨X^{≤d}, G_I⟩,
where G_I ∈ G, the span of the equality constraints of the SDP.
Define s : I → {0, 1} as
  s(I) := 𝟙[∃ a degree-2d SoS refutation for S(I)].
By assumption, 𝔼_{I∼ν}[s(I)] ≥ 1 − 1/n^{8B}. Define matrix-valued functions A, G : I → ℝ^{[n]^{≤d}×[n]^{≤d}} by
  A(I) := s(I) · A_I   and   G(I) := s(I) · G_I.
With this notation, we can rewrite the SoS-refutation identity as a polynomial identity in X and I,
  −s(I) = ⟨X^{≤d}, A(I)⟩ + ⟨X^{≤d}, G(I)⟩.
Let e_{∅,∅} denote the [n]^{≤d} × [n]^{≤d} matrix whose entry corresponding to (∅, ∅) equals 1, while the remaining entries are zero. We can rewrite the above equality as
  −⟨X^{≤d}, s(I) · e_{∅,∅}⟩ = ⟨X^{≤d}, A(I)⟩ + ⟨X^{≤d}, G(I)⟩
for all I and formal variables X.
Now, let P = 𝔼_{S∼Θ} P_S, where each P_S is obtained from Program 3.1 with Λ_S. Substituting X^{≤d} with P(I) and taking an expectation over I ∼ ν,
  −⟨P, s(I) · e_{∅,∅}⟩_ν = ⟨P, A⟩_ν + ⟨P, G⟩_ν   (4.4)
                        ≥ ⟨P, G⟩_ν,              (4.5)
where the inequality follows because A, P ⪰ 0. We will show that the above equation is a contradiction by proving that the left-hand side is less than −0.9, while the right-hand side is at least −0.5. First, the right-hand side of (4.4) can be bounded as follows:
  ⟨P, G⟩_ν = 𝔼_{I∼ν} 𝔼_{S∼Θ} ⟨P_S(I_S), G(I)⟩
           ≥ 𝔼_{I∼ν} 𝔼_{S∼Θ} ⟨P_S(I_S), G^{≤D}(I)⟩ − (1/n^{4B}) · (𝔼_S ‖P_S‖²_{Fr,ν})^{1/2} · ‖G‖_{Fr,ν}   (random restriction, Lemma 4.6)
           ≥ −(1/n^{2B}) · ‖G‖²_{Fr,ν} − (1/n^{4B}) · (𝔼_S ‖P_S‖²_{Fr,ν})^{1/2} · ‖G‖_{Fr,ν}   (using Lemma 4.7)
           ≥ −1/2,
where the last step used the bounds on ‖P_S‖_{Fr,ν} from (4.2) and on ‖G‖_{Fr,ν} from the n^B bound assumed on the SoS proofs in Theorem 2.6.
Now, the negation of the left-hand side of (4.4) is
  𝔼_{I∼ν} ⟨P(I), s(I) · e_{∅,∅}⟩ ≥ 𝔼_{I∼ν}[P_{∅,∅}(I) · 1] − 𝔼[(s − 1)²]^{1/2} · ‖P‖_{Fr,ν}.
The latter term can be simplified by noticing that the expectation of the square of a 0/1 indicator equals the expectation of the indicator, which in this case is 1/n^{8B} by assumption. Also, since 1 is a constant (degree-0) polynomial, the expectations of P_{∅,∅} and Λ_{∅,∅} against it agree:
  = 𝔼_{I∼ν}[Λ_{∅,∅}(I) · 1] − (1/n^{4B}) · ‖P‖_{Fr,ν}
  = 1 − (1/n^{4B}) · ‖P‖_{Fr,ν}   (using (4.1))
  ≥ 1 − 1/n^{3B}   (using (4.2)).
We have the desired contradiction in (4.4).
4.1 Handling Inequalities
Suppose the polynomial system Program 2.2 includes inequalities of the form h(I, x) ≥ 0. A natural approach would be to introduce a slack variable z and set h(I, x) − z² = 0. Now, we can view the vector (x, z), consisting of the original variables along with the slack variables, as the hidden planted solution. The proof of Theorem 2.6 can be carried out as described earlier in this section with this setup. However, in many cases of interest, the inclusion of slack variables invalidates the robust inference property. This is because, although a feasible solution x can be recovered from a sub-instance I_S, the value of the corresponding slack variables could potentially depend on the coordinates outside I_S. For instance, in a random CSP, the value of the objective function on the assignment x generated from I_S depends on all the constraints outside of S too.
The proof we described is to be modified as follows.
• As earlier, construct Λ_S using only the robust inference property of the original variables x, and the corresponding matrix functions P_S.
• Convert each inequality of the form h_i(I, x) ≥ 0 into an equality by setting h_i(I, x) = z_i².
• Now we define a pseudo-distribution Λ̃_S(I_S) over the original variables x and slack variables z as follows. It is convenient to describe the pseudo-distribution in terms of the corresponding pseudo-expectation operator. Specifically, if x(I_S) is a feasible solution for Program 2.2, then define
    Ẽ[z^σ x^α] := 0 if σ_i is odd for some i, and Ẽ[z^σ x^α] := Π_{i∈σ} (h_i(I, x(I_S)))^{σ_i/2} · x(I_S)_α otherwise.
  Intuitively, the pseudo-distribution picks the sign for each z_i uniformly at random, independently of all other variables. Therefore, all moments involving an odd power of z_i are zero. On the other hand, the moments of even powers of z_i are picked so that the equalities h_i(I, x) = z_i² are satisfied. It is easy to check that Λ̃ is PSD matrix-valued, satisfies (4.1), and satisfies all the equalities.
• While Λ_S in the original proof was a function of I_S alone, Λ̃_S is not. However, the key observation is that Λ̃_S has degree at most k·d in the variables outside of S: each function h_i(I, x(I_S)) has degree at most k in I, and the entries of Λ̃_S are products of at most d of these polynomials.
• The main ingredient of the proof that differs from the case of equalities is the random restriction lemma, which we state below. The error in the random restriction is multiplied by D^{dk/2} ≤ n^{B/2}; however, this does not substantially change our results, since Theorem 2.6 requires ρ(D, Θ) < n^{−8B}, which leaves us enough slack to absorb this factor (and in every application ρ(D, Θ) = p^{O(D)} for some p < 1 sufficiently small that we meet the requirement that D^{dk} ρ(D − dk, Θ) is monotone non-increasing in D).
Lemma 4.9 (Random Restriction for Inequalities). Fix D, ℓ ∈ ℕ. Consider a matrix-valued function R : I → ℝ^{ℓ×ℓ} and a family of functions {P_S : I → ℝ^{ℓ×ℓ}}_{S⊆[N]} such that each P_S has degree at most dk in the variables outside S. If Θ is a distribution over subsets of [N] with
  ρ(D, Θ) := max_{α : |α| ≥ D} Pr_{S∼Θ}[α ⊆ S],
and the additional requirement that D^{dk} · ρ(D − dk, Θ) is monotone non-increasing in D, then
  𝔼_{I∼ν} 𝔼_{S∼Θ} ⟨P_S(I_S), R(I)⟩ ≥ 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R̃_S^{<D}(I_S)⟩ − D^{dk/2} · ρ(D − dk, Θ)^{1/2} · (𝔼_{S∼Θ} ‖P_S‖²_{Fr,ν})^{1/2} · ‖R‖_{Fr,ν}.
Proof. We have
  𝔼_{I∼ν} 𝔼_{S∼Θ} ⟨P_S(I_S), R(I)⟩ = 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R̃_S(I)⟩,
where R̃_S(I) is now obtained by averaging out all monomials whose degree in the variables outside S exceeds dk. Writing R̃_S = R̃_S^{<D} + R̃_S^{≥D} and applying a Cauchy–Schwarz inequality, we get
  𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R̃_S(I)⟩ ≥ 𝔼_{S∼Θ} 𝔼_{I∼ν} ⟨P_S(I_S), R̃_S^{<D}(I)⟩ − (𝔼_{S∼Θ} ‖P_S‖²_{Fr,ν})^{1/2} · (𝔼_{S∼Θ} ‖R̃_S^{≥D}‖²_{Fr,ν})^{1/2}.
Over a random choice of S,
  𝔼_{S∼Θ} ‖R̃_S^{≥D}‖²_{Fr,ν} = Σ_{α : |α| ≥ D} Pr_{S∼Θ}[|α ∩ S̄| ≤ dk] · R̂²_α ≤ D^{dk} · ρ(D − dk, Θ) · ‖R‖²_Fr,
where we have used that D^{dk} ρ(D − dk, Θ) is a monotone non-increasing function of D. Substituting this into the earlier inequality, the lemma follows.
5 Applications to Classical Distinguishing Problems
In this section, we verify that the conditions of Theorem 2.6 hold for a variety of canonical distinguishing problems. We rely upon the (simple) proofs in Appendix A, which show that the ideal term of the SoS proof is well-conditioned.
Problem 5.1 (Planted clique with clique of size n^δ). Given a graph G = (V, E) on n vertices, determine whether it comes from:
• Uniform Distribution: the uniform distribution over graphs on n vertices (G(n, 1/2)).
• Planted Distribution: the uniform distribution over n-vertex graphs with a clique of size at least n^δ.
The usual polynomial program for planted clique in variables x₁, ..., x_n is:
  obj ≤ Σ_i x_i
  x_i² = x_i   ∀ i ∈ [n]
  x_i x_j = 0   ∀ (i, j) ∉ E
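A sketch of samplers for the two distributions in Problem 5.1 (the uniform G(n, 1/2) model versus a planted clique on a random vertex subset), useful for experimenting with candidate distinguishers; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_gnp(n, p=0.5):
    """Symmetric 0/1 adjacency matrix of G(n, p) with an empty diagonal."""
    A = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return A + A.T

def sample_planted_clique(n, clique_size, p=0.5):
    """G(n, p) with a clique planted on a uniformly random vertex subset."""
    A = sample_gnp(n, p)
    C = rng.choice(n, size=clique_size, replace=False)
    A[np.ix_(C, C)] = 1
    np.fill_diagonal(A, 0)
    return A
```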
Lemma 5.2. Theorem 2.6 applies to the above planted clique program, so long as obj ≤ n^{δ−ε} for any ε > c·d/(D − 6d), for a fixed constant c.
Proof. For planted clique, rather than the multiplicity of instance variables, our notion of "instance degree" of a monomial I_α will be the number of distinct vertices incident on the edges in α. The proof of Theorem 2.6 proceeds identically with this notion of degree, but we will be able to achieve better bounds on D relative to d.
In this case, the instance degree of the SoS relaxation is k = 2. We have from Corollary A.3 that the degree-d SoS refutation is well-conditioned, with numbers bounded by n^{c₁·d} for some constant c₁. Define B = c₁d ≥ dk.
Our subsampling distribution Θ is given by including every vertex with probability ρ, producing an induced subgraph on ≈ ρn vertices. For any set of edges α of instance degree at least D − 6d,
  Pr_{S∼Θ}[α ⊆ S] ≤ ρ^{D−6d},
since the instance degree corresponds to the number of vertices incident on α.
This subsampling operation satisfies the subsample inference condition for the clique constraints with probability 1, since a clique in any subgraph of G is also a clique in G. Also, if there is a clique of size n^δ in G, then by a Chernoff bound
  Pr_{S∼Θ}[∃ clique of size ≥ (1 − β)ρn^δ in S] ≥ 1 − exp(−β²ρn^δ/2).
Choosing β = √(10B log n / (ρn^δ)), this gives us that Θ gives n^{−10B}-robust inference for the planted clique problem, so long as obj ≤ ρn^δ/2. Choosing ρ = n^{−ε} for ε so that
  ρ^{D−6d} ≤ n^{−8B}  ⇒  ε ≥ c₂·d/(D − 6d)
for some constant c₂, all of the conditions required by Theorem 2.6 now hold.
Problem 5.3 (Random CSP Refutation at clause density α). Given an instance of a Boolean k-CSP with predicate P : {±1}^k → {±1} on n variables with clause set C, determine whether it comes from:
• Uniform Distribution: m ≈ αn constraints are generated as follows. Each k-tuple of variables S ∈ [n]^k is independently, with probability p = αn^{−k+1}, given the constraint P(x_S ◦ z_S) = b_S (where ◦ is the entry-wise multiplication operation) for a uniformly random z_S ∈ {±1}^k and b_S ∈ {±1}.
• Planted Distribution: a planted solution y ∈ {±1}^n is chosen, and then m ≈ αn constraints are generated as follows. Each k-tuple of variables S ∈ [n]^k is independently, with probability p = αn^{−k+1}, given the constraint P(x_S ◦ z_S) = b_S for a uniformly random z_S ∈ {±1}^k, but with b_S = P(y_S ◦ z_S) with probability 1 − δ and b_S uniformly random otherwise.
The usual polynomial program for random CSP refutation in variables x₁, ..., x_n is:
  obj ≤ Σ_{S∈[n]^k} 𝟙[∃ constraint on S] · (1 + P(x_S ◦ z_S)·b_S)/2
  x_i² = 1   ∀ i ∈ [n]
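A sketch of a sampler for Problem 5.3. The predicate here is fixed to parity (k-XOR) purely for illustration, and constraints are placed on k-subsets rather than ordered k-tuples for simplicity; both are our choices, not requirements of the problem statement.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def sample_csp(n, k, alpha, planted=False, delta=0.1):
    """Sample a random k-CSP instance as in Problem 5.3, with the parity
    predicate P(w) = prod(w) as a concrete choice. Returns a list of
    (variable tuple S, literal signs z_S, right-hand side b_S)."""
    p = alpha * n ** (-k + 1)
    y = rng.choice([-1, 1], size=n) if planted else None
    instance = []
    for S in itertools.combinations(range(n), k):
        if rng.random() >= p:
            continue
        z = rng.choice([-1, 1], size=k)
        if planted and rng.random() < 1 - delta:
            b = int(np.prod(y[list(S)] * z))      # constraint satisfied by y
        else:
            b = int(rng.choice([-1, 1]))          # uniformly random side
        instance.append((S, z, b))
    return instance
```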
Lemma 5.4. If α ≥ 1, then Theorem 2.6 applies to the above random k-CSP refutation problem, so long as obj ≤ (1 − δ − ε)m for any ε > c·d log n/(D − 3d), where c is a fixed constant.
Proof. In this case, the instance degree of the SoS relaxation is k = 1. We have from Corollary A.3 that the degree-d SoS refutation is well-conditioned, with numbers bounded by n^{c₁d} for some constant c₁. Define B = c₁d.
Our subsampling distribution Θ is given by including each constraint independently with probability ρ, producing an induced CSP instance on n variables with approximately ρm constraints. Since each constraint survives the subsampling with probability ρ, for any set α of at least D − 3d constraints,
  Pr_{S∼Θ}[α ⊆ S] ≤ ρ^{D−3d}.
The subsample inference property clearly holds for the Boolean constraints {x_i² = 1}_{i∈[n]}, as a Boolean assignment to the variables is valid regardless of the number of constraints. Before subsampling there are at least (1 − δ)m satisfied constraints, and so, letting O_S be the number of satisfied constraints in the sub-instance S, we have by a Chernoff bound
  Pr_{S∼Θ}[O_S ≥ (1 − β)·ρ(1 − δ)m] ≥ 1 − exp(−β²ρ(1 − δ)m/2).
Choosing β = √(10B log n / (ρ(1 − δ)m)) = o(1) (valid with overwhelming probability, since α ≥ 1 ⇒ 𝔼[m] ≥ n), we have that Θ gives us n^{−10B}-robust inference for the random CSP refutation problem, so long as obj ≤ (1 − o(1))ρ(1 − δ)m. We choose ρ = (1 − ε) so that
  ρ^{D−3d} ≤ n^{−8B}  ⇒  ε ≥ c₂·d log n/(D − 3d)
for some constant c₂. The conclusion follows (after making appropriate adjustments to the constant).
Problem 5.5 (Community detection with average degree b (stochastic block model)). Given a graph G = (V, E) on n vertices, determine whether it comes from:
• Uniform Distribution: G(n, b/n), the distribution over graphs in which each edge is included independently with probability b/n.
• Planted Distribution: the stochastic block model: there is a partition of the vertices into two equally-sized sets Y and Z, and the edge (u, v) is present with probability a/n if u, v ∈ Y or u, v ∈ Z, and with probability (b − a)/n otherwise.
Let x₁, ..., x_n be variables encoding the community membership of each vertex, and let A be the adjacency matrix of the graph. The canonical polynomial optimization problem is
  obj ≤ x^⊤ A x
  x_i² = 1   ∀ i ∈ [n]
  Σ_i x_i = 0.
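A sketch of a sampler for the two distributions in Problem 5.5; returning the planted labels alongside the graph is our convenience, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_block_model(n, a, b, planted=False):
    """Adjacency matrix for Problem 5.5: G(n, b/n) under the null, or the
    two-community block model (within-community edge probability a/n,
    across-community probability (b - a)/n) under the planted distribution."""
    labels = None
    if planted:
        labels = np.ones(n, dtype=int)
        labels[rng.permutation(n)[: n // 2]] = -1          # balanced partition
        same = np.equal.outer(labels, labels)
        probs = np.where(same, a / n, (b - a) / n)
    else:
        probs = np.full((n, n), b / n)
    A = np.triu((rng.random((n, n)) < probs).astype(int), 1)
    return A + A.T, labels
```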
Lemma 5.6. Theorem 2.6 applies to the community detection problem so long as obj ≤ (1 − ε)·(2a − b)n/4, for ε > c·d log n/(D − 3d) where c is a fixed constant.
Proof. The degree of the SoS relaxation in the instance is k = 1. Since we have only hypercube and balancedness constraints, we have from Corollary A.3 that the SoS ideal matrix is well-conditioned, with no number in the SoS refutation larger than n^{c₁d} for some constant c₁. Let B = c₁d.
Consider the solution x which assigns x_i = 1 for i ∈ Y and x_i = −1 for i ∈ Z. Our subsampling operation is to remove every edge independently with probability 1 − ρ. The resulting distribution Θ and the corresponding restriction of x clearly satisfy the Booleanity and balancedness constraints with probability 1. Since each edge is included independently with probability ρ, for any set α of at least D − 3d edges,
  Pr_{S∼Θ}[α ⊆ S] ≤ ρ^{D−3d}.
In the sub-instance, the expected value (over the choice of planted instance and over the choice of sub-instance) of the restricted solution x is
  (ρa/n)·(\binom{|Y|}{2} + \binom{|Z|}{2}) − ρ·((b − a)/n)·|Y|·|Z| = (2a − b)ρn/4 − ρa,
and by a Chernoff bound, the value in the sub-instance is within a (1 − β) factor of this with probability 1 − n^{−10B} for β = √(10B log n / n). On resampling the edges outside the sub-instance from the uniform distribution, this value can only decrease by at most (1 − ρ)(1 + β)nb/2 w.h.p. over the choice of the outside edges.
If we set ρ = 1 − ε(2a − b)/10b, then ρ^{D−3d} ≤ n^{−8B} for ε ≥ c₂(2a − b) log n/(D − 3d) for some constant c₂, while the objective value is at least (1 − ε)(2a − b)n/4. The conclusion follows (after making appropriate adjustments to the constant).
Problem 5.7 (Densest-k-subgraph). Given a graph G = (V, E) on n vertices, determine whether it comes from:
• Uniform Distribution: G(n, p).
• Planted Distribution: a graph from G(n, p) with an instance of G(k, q) planted on a random subset of k vertices, p < q.
Letting A be the adjacency matrix, the usual polynomial program for densest-k-subgraph in variables x₁, ..., x_n is:
  obj ≤ x^⊤ A x
  x_i² = x_i   ∀ i ∈ [n]
  Σ_i x_i = k
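A sketch of a sampler for Problem 5.7; planting is implemented by redrawing the edges inside the chosen k-subset with probability q, one natural reading of "an instance of G(k, q) planted".

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_dks(n, k, p, q, planted=False):
    """Problem 5.7: G(n, p) under the null; under the planted distribution,
    a denser G(k, q) block is planted on a random k-subset of vertices."""
    A = (rng.random((n, n)) < p).astype(int)
    if planted:
        C = rng.choice(n, size=k, replace=False)
        A[np.ix_(C, C)] = (rng.random((k, k)) < q).astype(int)
    A = np.triu(A, 1)
    A = A + A.T
    np.fill_diagonal(A, 0)
    return A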
Lemma 5.8. When k²(p + q) ≫ d log n, Theorem 2.6 applies to the densest-k-subgraph problem with obj ≤ (1 − ε)(p + q)\binom{k}{2} for any ε > c·d log n/(D − 3d), for a fixed constant c.
Proof. The degree of the SoS relaxation in the instance is k = 1 (here k refers to the instance degree, not the subgraph size). We have from Corollary A.3 that the SoS proof has no values larger than n^{c₁d} for a constant c₁; fix B = c₁d.
Our subsampling operation is to include each edge independently with probability ρ and take the subgraph induced by the included edges. Clearly, the Booleanity and sparsity constraints are preserved by this subsampling distribution Θ. Since each edge is included independently with probability ρ, for any set α of at least D − 3d edges,
  Pr_{S∼Θ}[α ⊆ S] ≤ ρ^{D−3d}.
Now, the expected objective value (over the instance and the sub-sampling) is at least ρ(p + q)\binom{k}{2}, and applying a Chernoff bound, we have that the probability that the sub-sampled instance has value less than (1 − β)ρ(p + q)\binom{k}{2} is at most n^{−10B} if we choose β = √(10B log n / (ρ(p + q)\binom{k}{2})) (which is valid since we assumed that d log n ≪ (p + q)k²). Further, a dense subgraph on a subset of the edges is still dense when more edges are added back, so we have the n^{−10B}-robust inference property.
Thus, choosing ρ = (1 − ε) and setting
  ρ^{D−3d} ≤ n^{−8B}  ⇒  ε ≥ c₂·d log n/(D − 3d)
for some constant c₂, which concludes the proof (after making appropriate adjustments to the constant).
Problem 5.9 (Tensor PCA). Given an order-k tensor in (ℝ^n)^{⊗k}, determine whether it comes from:
• Uniform Distribution: each entry of the tensor sampled independently from N(0, 1).
• Planted Distribution: a spiked tensor, T = λ·v^{⊗k} + G, where v is sampled uniformly from {±1/√n}^n, and where G is a random tensor with each entry sampled independently from N(0, 1).
Given the tensor T, the canonical program for the tensor PCA problem in variables x₁, ..., x_n is:
  obj ≤ ⟨x^{⊗k}, T⟩
  ‖x‖²₂ = 1
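A sketch of a sampler for the spiked tensor model of Problem 5.9; the helper names are ours.

```python
from functools import reduce
import numpy as np

rng = np.random.default_rng(6)

def sample_tensor_pca(n, k, lam, planted=False):
    """Problem 5.9: an order-k Gaussian tensor, optionally spiked with
    lam * v^{tensor k} for v uniform in {+-1/sqrt(n)}^n."""
    G = rng.standard_normal(size=(n,) * k)
    if not planted:
        return G
    v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
    spike = reduce(np.multiply.outer, [v] * k)    # v tensored with itself k times
    return lam * spike + G
```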
Lemma 5.10. For λn^{−ε} ≫ log n, Theorem 2.6 applies to the tensor PCA problem with obj ≤ λn^{−ε} for any ε > c·d/(D − 3d), for a fixed constant c.
Proof. The degree of the SoS relaxation in the instance is k = 1. Since the entries of the noise component of the tensor are standard normal variables, with exponentially good probability over the input tensor T no entry has magnitude greater than n^d. This, together with Corollary A.3, gives us that except with exponentially small probability the SoS proof will have no values exceeding n^{c₁d} for a fixed constant c₁.
Our subsampling operation is to set to zero every entry of T independently with probability 1 − ρ, obtaining a sub-instance T′ on the nonzero entries. Also, for any set α of at least D − 3d entries of [n]^k,
  Pr_{S∼Θ}[α ⊆ S] ≤ ρ^{D−3d}.
This subsampling operation clearly preserves the unit-sphere constraint on the planted solution. Additionally, let R be the operator that restricts a tensor to the nonzero entries. We have that ⟨R(λ·v^{⊗k}), v^{⊗k}⟩ has expectation λ·ρ, since every entry of v^{⊗k} has magnitude n^{−k/2}. Applying a Chernoff bound, we have that this quantity will be at least (1 − β)λρ with probability at least 1 − n^{−10B} if we choose β = √(10B log n / (λρ)).
It remains to address the noise introduced by G on T′ and by resampling all the entries outside of the sub-instance T′. Each of these entries is a standard normal variable. The quantity ⟨(Id − R)(G), v^{⊗k}⟩ is a sum over at most n^k i.i.d. Gaussian entries, each scaled by n^{−k/2} (since that is the magnitude of (v^{⊗k})_α). The entire quantity is thus a Gaussian random variable with mean 0 and variance at most 1, and therefore with probability at least 1 − n^{−10B} this quantity will not exceed √(10B log n). So long as √(10B log n) ≪ λρ, the signal term will dominate, and the solution will have value at least λρ/2.
Now, we set ρ = n^{−ε} so that
  ρ^{D−3d} ≤ n^{−8B}  ⇒  ε ≥ 2c₁d/(D − 3d),
which concludes the proof (after making appropriate adjustments to the constant c₁).
Problem 5.11 (Sparse PCA). Given an n × m matrix M ∈ ℝ^{n×m}, determine whether it comes from:
• Uniform Distribution: each entry of the matrix sampled independently from N(0, 1).
• Planted Distribution: a random vector with k non-zero entries v ∈ {0, ±1/√k}^n is chosen, and then the i-th column of the matrix is sampled independently as s_i·v + γ_i for a uniformly random sign s_i ∈ {±1} and a standard Gaussian vector γ_i ∼ N(0, Id).
The canonical program for the sparse PCA problem in variables x₁, ..., x_n is:
  obj ≤ ‖M^⊤ x‖²₂
  x_i² = x_i   ∀ i ∈ [n]
  ‖x‖²₂ = k
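A sketch of a sampler for the spiked column model of Problem 5.11; returning the planted vector is our convenience for experiments.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_sparse_pca(n, m, k, planted=False):
    """Problem 5.11: an n x m matrix with iid N(0,1) entries; under the
    planted distribution the i-th column is s_i * v + gamma_i for a random
    k-sparse v in {0, +-1/sqrt(k)}^n and random signs s_i."""
    M = rng.standard_normal(size=(n, m))
    if not planted:
        return M, None
    v = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
    s = rng.choice([-1.0, 1.0], size=m)
    return M + np.outer(v, s), v
```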
Lemma 5.12. For kn^{−ε/2} ≫ log n, Theorem 2.6 applies to the sparse PCA problem with obj ≤ k^{2−ε}m for any ε > c·d/(D − 6d), for a fixed constant c.
Proof. The degree of the SoS relaxation in the instance is 2. Since the entries of the noise are standard normal variables, with exponentially good probability over the input matrix M no entry has magnitude greater than n^d. This, together with Corollary A.3, gives us that except with exponentially small probability the SoS proof will have no values exceeding n^{c₁d} for a fixed constant c₁.
Our subsampling operation is to set to zero every entry of M independently with probability 1 − ρ, obtaining a sub-instance on the nonzero entries. Also, for any set α of at least D − 6d entries of M,
  Pr_{S∼Θ}[α ⊆ S] ≤ ρ^{D−6d}.
This subsampling operation clearly preserves the constraints on the solution variables.
We take our sub-instance solution y = √k·v, which is feasible. Let R be the subsampling operator that zeroes out a set of columns. On subsampling, and then resampling the zeroed-out columns from the uniform distribution, we can write the resulting M̃ as
  M̃^⊤ = R(s v^⊤) + G^⊤,
where G is a random Gaussian matrix. Therefore, the objective value obtained by the solution y = √k·v is
  M̃^⊤ y = √k·R(s v^⊤)v + √k·G^⊤ v.
The first term is a vector u_signal with m entries, each of which is a sum of k Bernoulli random variables, all of the same sign, each nonzero with probability ρ. The second term is a vector u_noise with m entries, each an independent Gaussian variable with variance bounded by k. We have that
  𝔼_Θ[‖u_signal‖²₂] = (ρk)²m,
and by Chernoff bounds we have that this concentrates within a (1 − β) factor with probability 1 − n^{−10B} if we take β = √(10B log n / ((ρk)²m)).
The expectation of ⟨u_signal, u_noise⟩ is zero, and applying similar concentration arguments we have that with probability 1 − n^{−10B}, |⟨u_signal, u_noise⟩| ≤ (1 + β)ρk. Taking the union bound over these events and applying Cauchy–Schwarz, we have that
  ‖R(M)y‖²₂ ≥ (ρk)²m − 2km = ρ²k²m − 2km;
so long as ρk ≫ 1, the first term dominates.
Now, we set ρ = n^{−ε} for ε < 1 so that
  ρ^{D−6d} ≤ n^{−8B}  ⇒  ε ≥ c₂·d/(D − 6d),
for some constant c₂, which concludes the proof.
Remark 5.13. For tensor PCA and sparse PCA, the underlying distributions were Gaussian. Applying Theorem 2.6 in these contexts yields the existence of distinguishers that are low-degree in a
non-standard sense. Specifically, the degree of a monomial will be the number of distinct variables
in it, irrespective of the powers to which they are raised.
6 Exponential lower bounds for PCA problems
In this section we give an overview of the proofs of our SoS lower bounds for the tensor and sparse
PCA problems. We begin by showing how Conjecture 1.2 predicts such a lower bound in the tensor
PCA setting. Following this we state the key lemmas to prove the exponential lower bounds; since
these lemmas can be proved largely by techniques present in the work of Barak et al. on planted
clique [BHK+ 16], we leave the details to a forthcoming full version of the present paper.
6.1 Predicting SoS lower bounds from low-degree distinguishers for Tensor PCA
In this section we demonstrate how to predict, using Conjecture 1.2, that when λ ≪ n^{3/4−ε} for ε > 0, SoS algorithms cannot solve Tensor PCA. This prediction is borne out in Theorem 1.4.
Theorem 6.1. Let µ be the distribution on ℝ^{n×n×n} which places a standard Gaussian in each entry. Let ν be the density of the Tensor PCA planted distribution with respect to µ, where we take the planted vector v to have each entry uniformly chosen from {±1/√n}.⁷ If λ ≤ n^{3/4−ε}, there is no polynomial p of degree n^{o(1)} with 𝔼_µ p(A) = 0 such that
  𝔼_planted p(A) ≥ n^{Ω(1)} · (𝔼_µ p(A)²)^{1/2}.
We sketch the proof of this theorem. The theorem follows from two claims.
Claim 6.2.
  max_{deg p ≤ n^{o(1)}, 𝔼_µ p(T) = 0}  𝔼_ν p(T) / (𝔼_µ p(T)²)^{1/2}  =  (𝔼_µ (ν^{≤d}(T) − 1)²)^{1/2},   (6.1)
where ν^{≤d} is the orthogonal projection (with respect to µ) of the density ν to the degree-d polynomials. Note that the last quantity is just the 2-norm, or the variance, of the truncation to low-degree polynomials of the density ν of the planted distribution.
Claim 6.3. (𝔼_µ (ν^{≤d}(T) − 1)²)^{1/2} ≪ 1 when λ ≤ n^{3/4−ε} for ε ≥ Ω(1) and d = n^{o(1)}.
The theorem follows immediately. We sketch proofs of the claims in order.
Sketch of proof for Claim 6.2. By the definition of ν, the maximization is equivalent to maximizing 𝔼_µ ν(T)·p(T) among all p of degree d = n^{o(1)} with 𝔼_µ p(T)² = 1 and 𝔼_µ p(T) = 0. Standard Fourier analysis shows that this maximum is achieved by the orthogonal projection of ν − 1 into the span of degree-d polynomials.
To make this more precise, recall that the Hermite polynomials provide an orthonormal basis for real-valued polynomials under the multivariate Gaussian distribution. (For an introduction to Hermite polynomials, see the book [O'D14].) The tensor T ∼ µ is an n³-dimensional multivariate Gaussian. For a (multi-)set W ⊆ [n]³, let H_W be the W-th Hermite polynomial, so that 𝔼_µ H_W(T)·H_{W′}(T) = δ_{WW′}.
Then the best p (ignoring normalization momentarily) will be the function
  p(A) = ν^{≤d}(A) − 1 = Σ_{1 ≤ |W| ≤ d} (𝔼_{T∼µ} ν(T)·H_W(T)) · H_W(A).
Here 𝔼_µ ν(T)·H_W(T) = ν̂(W) is the W-th Fourier coefficient of ν. What value for (6.1) is achieved by this p? Again by standard Fourier analysis, in the numerator we have
  𝔼_ν p(T) = 𝔼_ν (ν^{≤d}(T) − 1) = 𝔼_µ ν(T)·(ν^{≤d}(T) − 1) = 𝔼_µ (ν^{≤d}(T) − 1)².
Comparing this to the denominator, the maximum value of (6.1) is (𝔼_µ (ν^{≤d}(T) − 1)²)^{1/2}. This is nothing more than the 2-norm of the projection of ν − 1 to degree-d polynomials.
The following fact, used to prove Claim 6.3, is an elementary computation with Hermite polynomials.
Fact 6.4. Let W ⊆ [n]³. Then ν̂(W) = λ^{|W|}·n^{−3|W|/2} if W, thought of as a 3-uniform hypergraph, has all even degrees, and ν̂(W) = 0 otherwise.
⁷ This does not substantially modify the problem, but it will make calculations in this proof sketch more convenient.
To see that this calculation is straightforward, note that ν̂(W) = 𝔼_µ ν(T)·H_W(T) = 𝔼_ν H_W(T), so it is enough to understand the expectations of the Hermite polynomials under the planted distribution.
Sketch of proof for Claim 6.3. Working in the Hermite basis (as described above), we get 𝔼_µ (ν^{≤d}(T) − 1)² = Σ_{1 ≤ |W| ≤ d} ν̂(W)². For the sake of exposition, we will restrict attention in the sum to W in which no element appears with multiplicity larger than 1 (other terms can be treated similarly).
What is the contribution to Σ_{1 ≤ |W| ≤ d} ν̂(W)² of terms W with |W| = t? By the fact above, to contribute a nonzero term to the sum, W, considered as a 3-uniform hypergraph, must have all even degrees. So, if it has t hyperedges, it contains at most 3t/2 nodes. There are n^{3t/2} choices for these nodes, and, having chosen them, at most t^{O(t)} 3-uniform hypergraphs on those nodes. Hence,
  Σ_{1 ≤ |W| ≤ d} ν̂(W)² ≤ Σ_{t=1}^{d} n^{3t/2}·t^{O(t)}·λ^{2t}·n^{−3t}.
So long as λ² ≤ n^{3/2−ε} for some ε = Ω(1) and t ≤ d ≤ n^{O(ε)}, this is o(1).
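To see the bound in action, the following sketch evaluates Σ_{t=1}^{d} n^{3t/2}·t^{c·t}·λ^{2t}·n^{−3t} for λ = n^{3/4−ε}, treating the implicit constant in t^{O(t)} as c = 1; that value of c is an assumption made only to produce concrete numbers.

```python
def low_degree_norm_bound(n, eps, d, c=1.0):
    """Evaluate sum_{t=1}^{d} n^{3t/2} * t^{c t} * lam^{2t} * n^{-3t}
    with lam = n^{3/4 - eps}; each term equals (t^c * n^{-2 eps})^t."""
    lam = n ** (0.75 - eps)
    return sum((t ** c * lam ** 2 * n ** -1.5) ** t for t in range(1, d + 1))

for n in [10 ** 6, 10 ** 8, 10 ** 10]:
    print(n, low_degree_norm_bound(n, eps=0.1, d=10))   # decreases toward 0 as n grows
```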
6.2 Main theorem and proof overview for Tensor PCA
In this section we give an overview of the proof of Theorem 1.4. The techniques involved in proving the main lemmas are technical refinements of techniques used in the work of Barak et al. on SoS lower bounds for planted clique [BHK+16]; we therefore leave full proofs to a forthcoming full version of this paper.
To state and prove our main theorem on tensor PCA it is useful to define a Boolean version of the problem. For technical convenience we actually prove an SoS lower bound for this problem; then standard techniques (see Section C) allow us to prove the main theorem for Gaussian tensors.
Problem 6.5 (k-Tensor PCA, signal strength λ, Boolean version). Distinguish the following two distributions on Ω_k := {±1}^{\binom{n}{k}}:
• the uniform distribution: A ∼ Ω chosen uniformly at random.
• the planted distribution: choose v ∼ {±1}^n and let B = v^{⊗k}. Sample A by rerandomizing every coordinate of B with probability 1 − λn^{−k/2}.
We show that the natural SoS relaxation of this problem suffers from a large integrality gap when λ is slightly less than n^{k/4}, even when the degree of the SoS relaxation is n^{Ω(1)}. (When λ ≫ n^{k/4−ε}, algorithms with running time 2^{n^{O(ε)}} are known for k = O(1) [RM14, HSS15, HSSS16, BGL16, RRS16].)
Theorem 6.6. Let k = O(1). For A ∈ Ω_k, let
  SoS_d(A) := max_{Ẽ} Ẽ⟨x^{⊗k}, A⟩   s.t. Ẽ is a degree-d pseudoexpectation satisfying {‖x‖² = 1}.
There is a constant c so that for every small enough ε > 0, if d ≤ n^{c·ε}, then for large enough n,
  Pr_{A∼Ω}{SoS_d(A) ≥ n^{k/4−ε}} ≥ 1 − o(1)
and
  𝔼_{A∼Ω} SoS_d(A) ≥ n^{k/4−ε}.
Moreover, the latter also holds for A with iid entries from N(0, 1).⁸
To prove the theorem we will exhibit, for a typical sample A from the uniform distribution, a degree-n^{Ω(ε)} pseudodistribution Ẽ which satisfies {‖x‖² = 1} and has Ẽ⟨x^{⊗k}, A⟩ ≥ n^{k/4−ε}. The following lemma ensures that the pseudo-distribution we exhibit will be PSD.
Lemma 6.7. Let d ∈ ℕ and let N_d = Σ_{s≤d} n(n − 1)···(n − (s − 1)) be the number of ≤ d-tuples with unique entries from [n]. There is a constant ε* independent of n such that for any ε < ε*, also independent of n, the following is true. Let λ = n^{k/4−ε}. Let µ(A) be the density of the following distribution (with respect to the uniform distribution on Ω = {±1}^{\binom{n}{k}}).
The Planted Distribution: Choose v ∼ {±1}^n uniformly. Let B = v^{⊗k}. Sample A by
• replacing every coordinate of B with a random draw from {±1} independently with probability 1 − λn^{−k/2},
• then choosing a subset S ⊆ [n] by including every coordinate with probability n^{−ε},
• then replacing every entry of B with some index outside S independently with a uniform draw from {±1}.
Let Λ : Ω → ℝ^{N_d×N_d} be the following function:
  Λ(A) = µ(A) · 𝔼_{v|A} v^{⊗≤2d}.
Here we abuse notation and denote by v^{⊗≤2d} the matrix indexed by tuples of length ≤ d with unique entries from [n]. For D ∈ ℕ, let Λ^{≤D} be the projection of Λ into the degree-D real-valued polynomials on {±1}^{\binom{n}{k}}. There is a universal constant C so that if Cd/ε < D < n^{ε/C}, then for large enough n,
  Pr_{A∼Ω}{Λ^{≤D}(A) ⪰ 0} ≥ 1 − o(1).
For a tensor A, the moment matrix of the pseudodistribution we exhibit will be Λ^{≤D}(A). We will need it to satisfy the constraint {‖x‖² = 1}. This follows from the following general lemma. (The lemma is much more general than what we state here, and uses only the vector space structure of the space of real matrices and matrix-valued functions.)
Lemma 6.8. Let k ∈ ℕ. Let V be a linear subspace of ℝ^{N×M}. Let Ω = {±1}^{\binom{n}{k}}. Let Λ : Ω → V. Let Λ^{≤D} be the entrywise orthogonal projection of Λ to polynomials of degree at most D. Then for every A ∈ Ω, the matrix Λ^{≤D}(A) ∈ V.
Proof. The function Λ is an element of the vector space ℝ^{N×M} ⊗ ℝ^Ω. The projection Π_V : ℝ^{N×M} → V and the projection Π^{≤D} from ℝ^Ω to the degree-D polynomials commute as projections on ℝ^{N×M} ⊗ ℝ^Ω, since they act on separate tensor coordinates. It follows that Λ^{≤D} ∈ V ⊗ (ℝ^Ω)^{≤D} takes values in V.
⁸ For technical reasons we do not prove a tail-bound-type statement for Gaussian A, but we conjecture that this is also true.
Last, we will require a couple of scalar functions of Λ^{≤D} to be well concentrated.
Lemma 6.9. Let Λ, d, ε, D be as in Lemma 6.7. The function Λ^{≤D} satisfies
• Pr_{A∼Ω}{Λ^{≤D}_{∅,∅}(A) = 1 ± o(1)} ≥ 1 − o(1) (here Λ_{∅,∅} is the upper-left-most entry of Λ);
• Pr_{A∼Ω}{⟨Λ^{≤D}(A), A⟩ = (1 ± o(1))·n^{3k/4−ε}} ≥ 1 − o(1) (here we abuse notation and write ⟨Λ^{≤D}(A), A⟩ for the inner product of the part of Λ^{≤D} indexed by monomials of degree k with A).
The Boolean case of Theorem 6.6 follows from combining the lemmas. The Gaussian case can be proved in a black-box fashion from the Boolean case following the argument in Section C.
The proofs of all the lemmas in this section follow analogous lemmas in the work of Barak et al. on planted clique [BHK+16]; we defer them to the full version of the present work.
6.3 Main theorem and proof overview for sparse PCA
In this section we prove the following main theorem. Formally, the theorem shows that, with high probability for a random n × n matrix A, even high-degree SoS relaxations are unable to certify that no k-sparse vector v has large quadratic form ⟨v, Av⟩.
Theorem 6.10 (Restatement of Theorem 1.6). If A ∈ ℝ^{n×n}, let
  SoS_{d,k}(A) := max_{Ẽ} Ẽ⟨x, Ax⟩   s.t. Ẽ is degree d and satisfies {x_i³ = x_i, ‖x‖² = k}.
There are absolute constants c, ε* > 0 so that for every ρ ∈ (0, 1) and ε ∈ (0, ε*), if k = n^ρ, then for d ≤ n^{c·ε},
  Pr_{A∼{±1}^{\binom{n}{2}}}{SoS_{d,k}(A) ≥ min(n^{1/2−ε}k, n^{ρ−ε}k)} ≥ 1 − o(1)
and
  𝔼_{A∼{±1}^{\binom{n}{2}}} SoS_{d,k}(A) ≥ min(n^{1/2−ε}k, n^{ρ−ε}k).
Furthermore, the latter is true also if A is symmetric with iid entries from N(0, 1).⁹
⁹ For technical reasons we do not prove a tail-bound-type statement for Gaussian A, but we conjecture that this is also true.
We turn to some discussion of the theorem statement. First of all, though it is technically
convenient for A in the theorem statement above to be a ±1 matrix, the entries may be replaced by
standard Gaussians (see Section C).
Remark 6.11 (Relation to the spiked-Wigner model of sparse principal component analysis). To get some intuition for the theorem statement, it is useful to return to a familiar planted problem: the spiked-Wigner model of sparse principal component analysis. Let W be a symmetric matrix with iid entries from N(0, 1), and let v be a random k-sparse unit vector with entries in {±1/√k, 0}. Let B = W + λvv^⊤. The problem is to distinguish between a single sample from B and a sample from W. There are two main algorithms for this problem, both captured by the SoS hierarchy. The first, applicable when λ ≫ √n, is vanilla PCA: the top eigenvalue of B will be larger than the top eigenvalue of W. The second, applicable when λ ≫ k, is diagonal thresholding: the diagonal entries of B which correspond to nonzero coordinates of v will be noticeably large. The theorem statement above (transferred to the Gaussian setting, though this has little effect) shows that once λ is well outside these parameter regimes, i.e. when λ < n^{1/2−ε}, k^{1−ε} for arbitrarily small ε > 0, even degree-n^{Ω(ε)} SoS programs do not distinguish between B and W.
Remark 6.12 (Interpretation as an integrality gap). A second interpretation of the theorem statement, independent of any planted problem, is as a strong integrality gap for random instances of the problem of maximizing a quadratic form over k-sparse vectors. Consider the actual maximum of ⟨x, Ax⟩ for random ({±1} or Gaussian) A over k-sparse unit vectors x. There are roughly 2^{k log n} points in a ½-net for such vectors, meaning that by standard arguments,
  max_{‖x‖=1, x is k-sparse} ⟨x, Ax⟩ ≤ O(√(k log n)).
With the parameters of the theorem, this means that the integrality gap of the degree-n^{Ω(ε)} SoS relaxation is at least min(n^{ρ/2−ε}, n^{1/2−ρ/2−ε}) when k = n^ρ.
Remark 6.13 (Relation to spiked-Wishart model). Theorem 1.6 most closely concerns the spiked-Wigner model of sparse PCA; this refers to the independence of the entries of the matrix A. Often, sparse PCA is instead studied in the (perhaps more realistic) spiked-Wishart model, where the input is m samples x₁, ..., x_m from an n-dimensional Gaussian vector N(0, Id + λ·vv^⊤), where v is a unit-norm k-sparse vector. Here the question is: as a function of the sparsity k, the ambient dimension n, and the signal strength λ, how many samples m are needed to recover the vector v?
The natural approach to recovering v in this setting is to solve a convex relaxation of the problem of maximizing the quadratic form of the empirical covariance M = Σ_{i≤m} x_i x_i^⊤ over k-sparse unit vectors (the maximization problem itself is NP-hard even to approximate [CPR16]).
Theoretically, one may apply our proof technique for Theorem 1.6 directly to the spiked-Wishart model, but this carries the expense of substantial technical complication. We may, however, make intelligent guesses about the behavior of SoS relaxations for the spiked-Wishart model on the basis of Theorem 1.6 alone. As in the spiked-Wigner model, there are essentially two known algorithms to recover a planted sparse vector v in the spiked-Wishart model: vanilla PCA and diagonal thresholding [DM14b]. We conjecture that, as in the spiked-Wigner model, the SoS hierarchy requires degree n^{Ω(1)} to improve the number of samples required by these algorithms by any polynomial factor. Concretely, considering the case λ = 1 for simplicity, we conjecture that there are constants c, ε* such that for every ε ∈ (0, ε*), if m ≤ min(k^{2−ε}, n^{1−ε}) and x₁, ..., x_m ∼ N(0, Id) are iid, then with high probability, for every ρ ∈ (0, 1), if k = n^ρ,
  SoS_{d,k}(Σ_{i≤m} x_i x_i^⊤) ≥ min(n^{1−ε}k, k^{2−ε})
for all d ≤ n^{c·ε}.
Lemmas for Theorem 1.6. Our proof of Theorem 1.6 is very similar to the analogous proof for
Tensor PCA, Theorem 6.6. We state the analogues of Lemma 6.7 and Lemma 6.9. Lemma 6.8 can
be used unchanged in the sparse PCA setting.
The main lemma, analogous to Lemma 6.7, is as follows.
Lemma 6.14. Let d ∈ ℕ and let N_d = Σ_{s≤d} n(n − 1)···(n − (s − 1)) be the number of ≤ d-tuples with unique entries from [n]. Let µ(A) be the density of the following distribution on n × n matrices A with respect to the uniform distribution on {±1}^{\binom{n}{2}}.
Planted distribution: Let k = k(n) ∈ ℕ and λ = λ(n) ∈ ℝ, and γ > 0, and assume λ ≤ k. Sample a uniformly random k-sparse vector v ∈ ℝ^n with entries ±1, 0. Form the matrix B = vv^⊤. For each nonzero entry of B independently, replace it with a uniform draw from {±1} with probability 1 − λ/k (maintaining the symmetry B = B^⊤). For each zero entry of B, replace it with a uniform draw from {±1} (maintaining the same symmetry). Finally, choose every i ∈ [n] with probability n^{−γ} independently; for those indices that were not chosen, replace every entry in the corresponding row and column of B with random ±1 entries.¹⁰ Output the resulting matrix A. (We remark that this matrix is a Boolean version of the more standard spiked-Wigner model B + λvv^⊤, where B has iid standard normal entries and v is a random k-sparse unit vector with entries from {±1/√k, 0}.)
Let Λ : {±1}^{\binom{n}{2}} → ℝ^{N_d×N_d} be the following function:
  Λ(A) = µ(A) · 𝔼_{v|A} v^{⊗≤2d},
where the expectation is with respect to the planted distribution above. For D = D(n) ∈ ℕ, let Λ^{≤D} be the entrywise projection of Λ into the Boolean functions of degree at most D.
There are constants C, ε* > 0 such that for every γ > 0 and ρ ∈ (0, 1) and every ε ∈ (0, ε*) (all independent of n), if k = n^ρ and λ ≤ min{n^{ρ−ε}, n^{1/2−ε}}, and if Cd/ε < D < n^{ε/C}, then for large enough n,
  Pr_{A∼{±1}^{\binom{n}{2}}}{Λ^{≤D}(A) ⪰ 0} ≥ 1 − o(1).
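A loose sampler sketch for the planted distribution of Lemma 6.14, useful for experimentation. Symmetry is imposed only at the end by reflecting the upper triangle, which slightly simplifies the per-entry resampling described in the lemma, and the diagonal is zeroed; these simplifications are ours.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_boolean_spiked_wigner(n, k, lam, gamma):
    """Boolean spiked-Wigner matrix roughly as in Lemma 6.14: nonzero
    entries of vv^T survive with probability lam / k, everything else is
    uniform +-1, and rows/columns outside a random n^{-gamma}-density
    index set are rerandomized."""
    v = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    v[support] = rng.choice([-1.0, 1.0], size=k)
    A = rng.choice([-1.0, 1.0], size=(n, n))
    vvT = np.outer(v, v)
    keep = (vvT != 0) & (rng.random((n, n)) < lam / k)
    A[keep] = vvT[keep]                       # signal entries that survive
    dead = rng.random(n) >= n ** -gamma       # indices rerandomized at the end
    A[dead, :] = rng.choice([-1.0, 1.0], size=(int(dead.sum()), n))
    A[:, dead] = rng.choice([-1.0, 1.0], size=(n, int(dead.sum())))
    A = np.triu(A, 1)
    return A + A.T                            # symmetric, zero diagonal
```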
Remark 6.15. We make a few remarks about the necessity of some of the assumptions above. A useful intuition is that the function Λ^{≤D}(A) is (with high probability) positive semidefinite when the parameters ρ, ε, γ of the planted distribution are such that there is no degree-D polynomial f : {±1}^{\binom{n}{2}} → ℝ whose values distinguish a typical sample from the planted distribution from a null model: a random symmetric matrix with iid entries.
At this point it is useful to consider a more familiar planted model, which the lemma above mimics. Let W be an n × n symmetric matrix with iid entries from N(0, 1). Let v ∈ ℝ^n be a k-sparse unit vector, with entries in {±1/√k, 0}. Let A = W + λvv^⊤. Notice that if λ ≫ k, then diagonal thresholding on the matrix A identifies the nonzero coordinates of v. (This is the analogue of the covariance-thresholding algorithm in the spiked-Wishart version of sparse PCA.) On the other hand, if λ ≫ √n then (since typically ‖W‖ ≈ √n) ordinary PCA identifies v. The lemma captures computational hardness for the problem of distinguishing a single sample from A from a sample from the null model W when both diagonal thresholding and ordinary PCA fail.
Next we state the analogue of Lemma 6.9.
Lemma 6.16. Let Λ, d, k, λ, γ, D be as in Lemma 6.14. The function Λ^{≤D} satisfies
• Pr_{A∼{±1}^{\binom{n}{2}}}{Λ^{≤D}_{∅,∅}(A) = 1 ± o(1)} ≥ 1 − o(1);
• Pr_{A∼{±1}^{\binom{n}{2}}}{⟨Λ^{≤D}(A), A⟩ = (1 ± o(1))·λ·n^{−Θ(γ)}} ≥ 1 − o(1).
¹⁰ This additional n^{−γ} noising step is a technical convenience which has the effect of somewhat decreasing the number of nonzero entries of v and decreasing the signal strength λ.
References
[AK97]
Noga Alon and Nabil Kahale, A spectral technique for coloring random 3-colorable graphs,
SIAM J. Comput. 26 (1997), no. 6, 1733–1748. 1
[AKS98]
Noga Alon, Michael Krivelevich, and Benny Sudakov, Finding a large hidden clique in a
random graph, Random Struct. Algorithms 13 (1998), no. 3-4, 457–466. 1, 4, 8
[AOW15a] Sarah R. Allen, Ryan O’Donnell, and David Witmer, How to refute a random CSP, 2015
IEEE 56th Annual Symposium on Foundations of Computer Science—FOCS 2015,
IEEE Computer Soc., Los Alamitos, CA, 2015, pp. 689–708. MR 3473335 1
[AOW15b] Sarah R. Allen, Ryan O’Donnell, and David Witmer, How to refute a random CSP, FOCS,
IEEE Computer Society, 2015, pp. 689–708. 8
[BBH+ 12]
Boaz Barak, Fernando G. S. L. Brandão, Aram Wettroth Harrow, Jonathan A. Kelner, David Steurer, and Yuan Zhou, Hypercontractivity, sum-of-squares proofs, and their
applications, STOC, ACM, 2012, pp. 307–326. 1, 6, 8
[BCC+ 10]
Aditya Bhaskara, Moses Charikar, Eden Chlamtac, Uriel Feige, and Aravindan Vijayaraghavan, Detecting high log-densities—an O(n^{1/4}) approximation for densest k-subgraph,
STOC’10—Proceedings of the 2010 ACM International Symposium on Theory of Computing, ACM, New York, 2010, pp. 201–210. MR 2743268 8
[BCK15]
Boaz Barak, Siu On Chan, and Pravesh K. Kothari, Sum of squares lower bounds from
pairwise independence [extended abstract], STOC’15—Proceedings of the 2015 ACM Symposium on Theory of Computing, ACM, New York, 2015, pp. 97–106. MR 3388187
8
[BGG+ 16]
Vijay V. S. P. Bhattiprolu, Mrinal Kanti Ghosh, Venkatesan Guruswami, Euiwoong Lee,
and Madhur Tulsiani, Multiplicative approximations for polynomial optimization over the
unit sphere, Electronic Colloquium on Computational Complexity (ECCC) 23 (2016),
185. 1, 6, 8
[BGL16]
Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee, Certifying
random polynomials over the unit sphere via sum of squares hierarchy, CoRR abs/1605.00903
(2016). 1, 2, 9, 31
[BHK+ 16]
Boaz Barak, Samuel B. Hopkins, Jonathan A. Kelner, Pravesh Kothari, Ankur Moitra,
and Aaron Potechin, A nearly tight sum-of-squares lower bound for the planted clique
problem, FOCS, IEEE Computer Society, 2016, pp. 428–437. 1, 4, 5, 8, 9, 29, 31, 33
[BKS14]
Boaz Barak, Jonathan A. Kelner, and David Steurer, Rounding sum-of-squares relaxations,
STOC, ACM, 2014, pp. 31–40. 1
[BKS15]
, Dictionary learning and tensor decomposition via the sum-of-squares method, STOC,
ACM, 2015, pp. 143–151. 1
[BKS17]
Boaz Barak, Pravesh Kothari, and David Steurer, Quantum entanglement, sum of squares,
and the log rank conjecture, CoRR abs/1701.06321 (2017). 1
[BM16]
Boaz Barak and Ankur Moitra, Noisy tensor completion via the sum-of-squares hierarchy,
COLT, JMLR Workshop and Conference Proceedings, vol. 49, JMLR.org, 2016, pp. 417–
445. 1, 8
[BMVX16] Jess Banks, Cristopher Moore, Roman Vershynin, and Jiaming Xu, Information-theoretic
bounds and phase transitions in clustering, sparse pca, and submatrix localization, CoRR
abs/1607.05222 (2016). 6
[BR13a]
Quentin Berthet and Philippe Rigollet, Complexity theoretic lower bounds for sparse principal component detection, COLT, JMLR Workshop and Conference Proceedings, vol. 30,
JMLR.org, 2013, pp. 1046–1066. 7
[BR13b]
Quentin Berthet and Philippe Rigollet, Computational lower bounds for sparse pca, COLT
(2013). 2
[BS14]
Boaz Barak and David Steurer, Sum-of-squares proofs and the quest toward optimal algorithms, CoRR abs/1404.5236 (2014). 6
[CC09]
Eric Carlen, Trace inequalities and quantum entropy: an introductory course, 2009. 18
[CPR16]
Siu On Chan, Dimitris Papailliopoulos, and Aviad Rubinstein, On the approximability
of sparse PCA, COLT, JMLR Workshop and Conference Proceedings, vol. 49, JMLR.org,
2016, pp. 623–646. 7, 34
[DM14a]
Yash Deshpande and Andrea Montanari, Information-theoretically optimal sparse PCA,
ISIT, IEEE, 2014, pp. 2197–2201. 7
[DM14b]
, Sparse PCA via covariance thresholding, NIPS, 2014, pp. 334–342. 2, 34
[DM15]
, Improved sum-of-squares lower bounds for hidden clique and hidden submatrix
problems, COLT, JMLR Workshop and Conference Proceedings, vol. 40, JMLR.org,
2015, pp. 523–562. 9
[DX13]
Feng Dai and Yuan Xi, Spherical harmonics, arXiv preprint arXiv:1304.2585 (2013). 44
[Fil16]
Yuval Filmus, An orthogonal basis for functions over a slice of the boolean hypercube, Electr.
J. Comb. 23 (2016), no. 1, P1.23. 44
[FM16]
Zhou Fan and Andrea Montanari, How well do local algorithms solve semidefinite programs?, CoRR abs/1610.05350 (2016). 8
[GM15]
Rong Ge and Tengyu Ma, Decomposing overcomplete 3rd order tensors using sum-of-squares
algorithms, APPROX-RANDOM, LIPIcs, vol. 40, Schloss Dagstuhl - Leibniz-Zentrum
fuer Informatik, 2015, pp. 829–849. 1
[Gri01a]
Dima Grigoriev, Complexity of positivstellensatz proofs for the knapsack, Computational
Complexity 10 (2001), no. 2, 139–154. 8
[Gri01b]
, Linear lower bound on degrees of positivstellensatz calculus proofs for the parity,
Theor. Comput. Sci. 259 (2001), no. 1-2, 613–622. 8
[GW94]
Michel X. Goemans and David P. Williamson, .879-approximation algorithms for MAX
CUT and MAX 2sat, STOC, ACM, 1994, pp. 422–431. 1
[Har70]
Richard A. Harshman, Foundations of the PARAFAC procedure: models and conditions for an "explanatory" multi-modal factor analysis. 1
[HKP15]
Samuel B. Hopkins, Pravesh K. Kothari, and Aaron Potechin, Sos and planted clique:
Tight analysis of MPW moments at all degrees and an optimal lower bound at degree four,
CoRR abs/1507.05230 (2015). 9
[HL09]
Christopher J. Hillar and Lek-Heng Lim, Most tensor problems are NP hard, CoRR
abs/0911.1393 (2009). 6
[HSS15]
Samuel B. Hopkins, Jonathan Shi, and David Steurer, Tensor principal component analysis
via sum-of-square proofs, COLT, JMLR Workshop and Conference Proceedings, vol. 40,
JMLR.org, 2015, pp. 956–1006. 1, 2, 6, 8, 9, 31
[HSSS16]
Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, and David Steurer, Fast spectral
algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors,
STOC, ACM, 2016, pp. 178–191. 1, 2, 8, 9, 31
[KMOW17] Pravesh K. Kothari, Ryuhei Mori, Ryan O’Donnell, and David Witmer, Sum of squares
lower bounds for refuting any CSP, CoRR abs/1701.04521 (2017). 8
[KNV+ 15]
Robert Krauthgamer, Boaz Nadler, Dan Vilenchik, et al., Do semidefinite relaxations solve
sparse pca up to the information limit?, The Annals of Statistics 43 (2015), no. 3, 1300–1322.
2
[KT17]
Ken-ichi Kawarabayashi and Mikkel Thorup, Coloring 3-colorable graphs with less than n^{1/5} colors, J. ACM 64 (2017), no. 1, 4:1–4:23. 1
[LRS15]
James R. Lee, Prasad Raghavendra, and David Steurer, Lower bounds on the size of
semidefinite programming relaxations, STOC, ACM, 2015, pp. 567–576. 1
[MPW15]
Raghu Meka, Aaron Potechin, and Avi Wigderson, Sum-of-squares lower bounds for
planted clique [extended abstract], STOC’15—Proceedings of the 2015 ACM Symposium
on Theory of Computing, ACM, New York, 2015, pp. 87–96. MR 3388186 9
[MS16a]
Andrea Montanari and Subhabrata Sen, Semidefinite programs on sparse random graphs
and their application to community detection, STOC, ACM, 2016, pp. 814–827. 8
[MS16b]
Andrea Montanari and Nike Sun, Spectral algorithms for tensor completion, CoRR
abs/1612.07866 (2016). 8
[MSS16a]
Tengyu Ma, Jonathan Shi, and David Steurer, Polynomial-time tensor decompositions with
sum-of-squares, CoRR abs/1610.01980 (2016). 1
[MSS16b]
, Polynomial-time tensor decompositions with sum-of-squares, FOCS, IEEE Computer Society, 2016, pp. 438–446. 1
[MW15a]
Tengyu Ma and Avi Wigderson, Sum-of-squares lower bounds for sparse PCA, NIPS, 2015,
pp. 1612–1620. 2
[MW15b]
, Sum-of-squares lower bounds for sparse PCA, CoRR abs/1507.06370 (2015). 9
[O’D14]
Ryan O’Donnell, Analysis of boolean functions, Cambridge University Press, 2014. 30
[Pea01]
Karl Pearson, On lines and planes of closest fit to systems of points in space, Philosophical Magazine 2 (1901), 559–572. 1
[PS17]
Aaron Potechin and David Steurer, Exact tensor completion with sum-of-squares, CoRR
abs/1702.06237 (2017). 1
[PWB16]
Amelia Perry, Alexander S. Wein, and Afonso S. Bandeira, Statistical limits of spiked
tensor models, CoRR abs/1612.07728 (2016). 6
[RM14]
Emile Richard and Andrea Montanari, A statistical model for tensor PCA, NIPS, 2014,
pp. 2897–2905. 2, 5, 9, 31
[RRS16]
Prasad Raghavendra, Satish Rao, and Tselil Schramm, Strongly refuting random csps
below the spectral threshold, CoRR abs/1605.00058 (2016). 1, 2, 6, 8, 9, 31
[RS15]
Prasad Raghavendra and Tselil Schramm, Tight lower bounds for planted clique in the
degree-4 SOS program, CoRR abs/1507.05136 (2015). 9
[RW17]
Prasad Raghavendra and Benjamin Weitz, On the bit complexity of sum-of-squares proofs,
CoRR abs/1702.05139 (2017). 40, 42
[Sch08]
Grant Schoenebeck, Linear level lasserre lower bounds for certain k-csps, FOCS, IEEE
Computer Society, 2008, pp. 593–602. 8
[Tre09]
Luca Trevisan, Max cut and the smallest eigenvalue, STOC, ACM, 2009, pp. 263–272. 1
[TS14]
Ryota Tomioka and Taiji Suzuki, Spectral norm of random tensors, arXiv preprint
arXiv:1407.1870 (2014). 9
[Wei17]
Benjamin Weitz, Polynomial proof systems, effective derivations, and their applications in the
sum-of-squares hierarchy, Ph.D. thesis, UC Berkeley, 2017. 42
[ZHT06]
Hui Zou, Trevor Hastie, and Robert Tibshirani, Sparse principal component analysis,
Journal of Computational and Graphical Statistics 15 (2006), no. 2, 265–286. 2
A
Bounding the sum-of-squares proof ideal term
We give conditions under which sum-of-squares proofs are well-conditioned, using techniques
similar to those that appear in [RW17] for bounding the bit complexity of SoS proofs. We begin
with some definitions.
Definition A.1. Let P be a polynomial optimization problem and let D be the uniform distribution over the set of feasible solutions S for P. Define the degree-2d moment matrix of D to be X_D = 𝔼_{s∼D}[ ŝ^{⊗2d} ], where ŝ = [1 s]^⊤.
• We say that P is k-complete on D up to degree 2d if every zero eigenvector of X_D has a degree-k derivation from the ideal constraints of P.
Theorem A.2. Let P be a polynomial optimization problem over variables x ∈ ℝ^n of degree at most 2d, with objective function f(x) and ideal constraints {g_j(x) = 0}_{j∈[m]}. Suppose also that P is 2d-complete up to degree 2d. Let G be the matrix of ideal constraints in the degree-2d SoS proof for P. Then if
• the SDP optimum value is bounded by n^{O(d)},
• the coefficients of the objective function are bounded by n^{O(d)},
• there is a set of feasible solutions S ⊆ ℝ^n with the property that for each α ⊆ [n]^d, |α| ≤ d, for which χ_α is not identically zero over the solution space, there exists some s ∈ S such that the square monomial χ_α(s)^2 ≥ n^{−O(d)},
it follows that the SoS certificate for the problem is well-conditioned, with no value larger than n^{O(d)}.
To prove this, we essentially reproduce the proof of the main theorem of [RW17], up to the very
end of the proof at which point we slightly deviate to draw a different conclusion.
Proof. Following our previous convention, the degree-2d sum-of-squares proof for P is of the form
    sdpOpt − f(x) = a(x) + g(x),
where g(x) is a polynomial in the span of the ideal constraints, and a(x) is a sum of squares of polynomials. Alternatively, we have the matrix characterization
    sdpOpt − ⟨F, x̂^{⊗2d}⟩ = ⟨A, x̂^{⊗2d}⟩ + ⟨G, x̂^{⊗2d}⟩,
where x̂ = [1 x]^⊤, and F, A, and G are matrix polynomials corresponding to f, a, and g respectively, with A ⪰ 0.
Now let s ∈ S be a feasible solution. Then we have that
    sdpOpt − ⟨F, s^{⊗2d}⟩ = ⟨A, s^{⊗2d}⟩ + ⟨G, s^{⊗2d}⟩ = ⟨A, s^{⊗2d}⟩,
where the second equality follows because each s ∈ S is feasible. By assumption the left-hand side is bounded by n^{O(d)}.
We will now argue that the diagonal entries of A cannot be too large. Our first step is to argue that A cannot have a nonzero diagonal entry unless some feasible solution assigns the corresponding monomial a nonzero value.
Let X_D = 𝔼_{x∼D}[x̂^{⊗2d}] be the 2d-moment matrix of the uniform distribution D over feasible solutions to P. Define Π to be the orthogonal projection onto the zero eigenspace of X_D. By linearity and orthonormality, we have that
    ⟨X_D, A⟩ = ⟨X_D, (Π + Π^⊥)A(Π + Π^⊥)⟩
             = ⟨X_D, Π^⊥AΠ^⊥⟩ + ⟨X_D, ΠAΠ^⊥⟩ + ⟨X_D, Π^⊥AΠ⟩ + ⟨X_D, ΠAΠ⟩.
By assumption P is 2d-complete on D up to degree 2d, and therefore Π is derivable in degree 2d from the ideal constraints {g_j}_{j∈[m]}. Therefore, the latter three terms may be absorbed into G; more formally, we can set A′ = Π^⊥AΠ^⊥, G′ = G + (Π + Π^⊥)A(Π + Π^⊥) − Π^⊥AΠ^⊥, and re-write the original proof as
    sdpOpt − ⟨F, x̂^{⊗2d}⟩ = ⟨A′, x̂^{⊗2d}⟩ + ⟨G′, x̂^{⊗2d}⟩.    (A.1)
The left-hand side remains unchanged, so we still have that it is bounded by n^{O(d)} for any feasible solution s ∈ S. Furthermore, the nonzero eigenspaces of X_D and A′ are identical, and so A′ cannot be nonzero on any diagonal entry which is orthogonal to the space of feasible solutions.
Now, we argue that every diagonal entry of A′ is at most n^{O(d)}. To see this, for each diagonal term χ_α^2, we choose the solution s ∈ S for which χ_α(s)^2 ≥ n^{−O(d)}. We then have by the PSDness of A′ that
    A′_{α,α} · χ_α(s)^2 ≤ ⟨s^{⊗2d}, A′⟩ ≤ n^{O(d)},
which then implies that A′_{α,α} ≤ n^{O(d)}. It follows that Tr(A′) ≤ n^{O(d)}, and again since A′ is PSD,
    ‖A′‖_F ≤ √(Tr(A′)) ≤ n^{O(d)}.    (A.2)
Putting things together, we have from our original matrix identity (A.1) that
    ‖G′‖_F = ‖sdpOpt − A′ − F‖_F
           ≤ ‖sdpOpt‖_F + ‖A′‖_F + ‖F‖_F    (triangle inequality)
           ≤ ‖sdpOpt‖_F + n^{O(d)} + ‖F‖_F    (from (A.2)).
Therefore, by our assumptions that ‖sdpOpt‖, ‖F‖_F ≤ n^{O(d)}, the conclusion follows.
We now argue that the conditions of this theorem are met by several general families of
problems.
Corollary A.3. The following problems have degree-2d SoS proofs with all coefficients bounded by n^{O(d)}:
1. The hypercube: Any polynomial optimization problem with the only constraints being {x_i^2 = x_i}_{i∈[n]} or {x_i^2 = 1}_{i∈[n]} and objective value at most n^{O(d)} over the set of integer feasible solutions. (Including max k-CSP.)
2. The hypercube with balancedness constraints: Any polynomial optimization problem with the only constraints being {x_i^2 = 1}_{i∈[n]} ∪ {Σ_i x_i = 0}. (Including community detection.)
3. The unit sphere: Any polynomial optimization problem with the only constraints being {Σ_{i∈[n]} x_i^2 = 1} and objective value at most n^{O(d)} over the set of integer feasible solutions. (Including tensor PCA.)
4. The sparse hypercube: As long as 2d ≤ k, any polynomial optimization problem with the only constraints being {x_i^2 = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i = k}, or {x_i^3 = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i^2 = k}, and objective value at most n^{O(d)} over the set of integer feasible solutions. (Including densest k-subgraph and the Boolean version of sparse PCA.)
5. The max clique problem.
We prove this corollary below. For each of the above problems, it is clear that the objective value
is bounded and the objective function has no large coefficients. To prove this corollary, we need to
verify the completeness of the constraint sets, and then demonstrate a set of feasible solutions so
that each square term receives non-negligible mass from some solution.
A large family of completeness conditions were already verified by [RW17] and others (see the
references therein):
Proposition A.4 (Completeness of canonical polynomial optimization problems (from Corollary 3.5 of [RW17])). The following pairs of polynomial optimization problems P and distributions over solutions D are complete:
1. If the feasible set is x ∈ ℝ^n with {x_i^2 = 1}_{i∈[n]} or {x_i^2 = x_i}_{i∈[n]}, P is d-complete up to degree d (e.g. if P is a CSP). This is still true of the constraints {x_i^2 = 1}_{i∈[n]} ∪ {Σ_i x_i = 0} (e.g. if P is a community detection problem).
2. If the feasible set is x ∈ ℝ^n with Σ_{i∈[n]} x_i^2 = α, then P is d-complete on D up to degree d (e.g. if P is the tensor PCA problem).
3. If P is the max clique problem with feasible set x ∈ ℝ^n with {x_i^2 = x_i}_{i∈[n]} ∪ {x_i x_j = 0}_{(i,j)∉E}, then P is d-complete on D up to degree d.
A couple of additional examples can be found in the upcoming thesis of Benjamin Weitz
[Wei17]:
Proposition A.5 (Completeness of additional polynomial optimization problems [Wei17]). The following pairs of polynomial optimization problems P and distributions over solutions D are complete:
1. If P is the densest k-subgraph relaxation, with feasible set x ∈ ℝ^n with {x_i^2 = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i = k}, P is d-complete on D up to degree d ≤ k.
2. If P is the sparse PCA relaxation with sparsity k, with feasible set x ∈ ℝ^n with {x_i^3 = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i^2 = k}, P is d-complete up to degree d ≤ k/2.
Proof of Corollary A.3. We verify the conditions of Theorem A.2 separately for each case.
1. The hypercube: the completeness conditions are satisfied by Proposition A.4. We choose the set of feasible solutions to contain a single point, s = 1⃗, for which χ_α^2(s) = 1 always.
2. The hypercube with balancedness constraints: the completeness conditions are satisfied by Proposition A.4. We choose the set of feasible solutions to contain a single point, s, some perfectly balanced vector, for which χ_α^2(s) = 1 always.
3. The unit sphere: the completeness conditions are satisfied by Proposition A.4. We choose the set of feasible solutions to contain a single point, s = (1/√n) · 1⃗, for which χ_α^2(s) ≥ n^{−d} as long as |α| ≤ d, which meets the conditions of Theorem A.2.
4. The sparse hypercube: the completeness conditions are satisfied by Proposition A.5. Here, we choose the set of solutions S = {x ∈ {0, 1}^n | Σ_i x_i = k}. As long as k ≥ d, for any |α| ≤ d we have that χ_α(s)^2 = 1 when s is 1 on α.
5. The max clique problem: the completeness conditions are satisfied by Proposition A.4. We choose the solution set S to be the set of 0/1 indicators for cliques in the graph. Any α that corresponds to a non-clique in the graph has χ_α identically zero in the solution space. Otherwise, χ_α(s)^2 = 1 when s ∈ S is the indicator vector for the clique on α.
This concludes the proof.
B Lower bounds on the nonzero eigenvalues of some moment matrices
In this appendix, we prove lower bounds on the magnitude of nonzero eigenvalues of covariance
matrices for certain distributions over solutions. Many of these bounds are well-known, but we
re-state and re-prove them here for completeness. We first define the property we want:
Definition B.1. Let P be a polynomial optimization problem and let D be the uniform distribution over the set of feasible solutions S for P. Define the degree-2d moment matrix of D to be X_D = 𝔼_{x∼D}[ x̂^{⊗2d} ], where x̂ = [1 x]^⊤.
• We say that D is δ-spectrally rich up to degree 2d if every nonzero eigenvalue of X_D is at least δ.
Proposition B.2 (Spectral richness of polynomial optimization problems). The following distributions over solutions D are polynomially spectrally rich:
1. If D is the uniform distribution over {±1}^n, then D is polynomially spectrally rich up to degree d ≤ n.
2. If D is the uniform distribution over α · S^{n−1}, then D is polynomially spectrally rich up to degree d ≤ n.
3. If D is the uniform distribution over x ∈ {1, 0}^n with ‖x‖_0 = k, then if 2d ≤ k, D is polynomially spectrally rich up to degree d.
4. If D is the uniform distribution over x ∈ {±1, 0}^n with ‖x‖_0 = k, then if 2d ≤ k, D is polynomially spectrally rich up to degree d.
Proof. In the proof of each statement, denote the 2d-th moment matrix of D by X_D := 𝔼_{x∼D}[x^{⊗2d}]. Because X_D is a sum of rank-1 outer products, an eigenvector of X_D has eigenvalue 0 if and only if it is orthogonal to every solution in the support of D, and therefore the zero eigenvectors correspond exactly to the degree at most d constraints that can be derived from the ideal constraints.
Now, let p_1(x), . . . , p_r(x) be a basis for polynomials of degree at most 2d in x which is orthonormal with respect to D, so that
    𝔼_{x∼D}[ p_i(x) p_j(x) ] = 1 if i = j, and 0 otherwise.
If p̂_i is the representation of p_i in the monomial basis, we have that
    (p̂_i)^⊤ X_D p̂_j = 𝔼_{x∼D}[ p_i(x) p_j(x) ].
Therefore, the matrix R = Σ_i e_i (p̂_i)^⊤ diagonalizes X_D:
    R X_D R^⊤ = Id.
It follows that the minimum nonzero eigenvalue of X_D is equal to the smallest eigenvalue of (RR^⊤)^{−1}, which is in turn equal to 1/σ_max(R)^2, where σ_max(R) is the largest singular value of R. Therefore, for each of these cases it suffices to bound the singular values of the change-of-basis matrix between the monomial basis and an orthogonal basis over D. We now proceed to handle each case separately.
1. D uniform over the hypercube: In this case, the monomial basis is an orthogonal basis, so R is the identity on the space orthogonal to the ideal constraints, and σ_max(R) = 1, which completes the proof.
2. D uniform over the sphere: Here, the canonical orthonormal basis is the spherical harmonic polynomials. Examining an explicit characterization of the spherical harmonic polynomials (given for example in [DX13], Theorem 5.1), we have that when expressing p_i in the monomial basis, no coefficient of a monomial (and thus no entry of p̂_i) exceeds n^{O(d)}, and since there are at most n^d polynomials each with Σ_{i=0}^{d} \binom{n}{i} ≤ n^d coefficients, employing the triangle inequality we have that σ_max(R) ≤ n^{O(d)}, which completes the proof.
3. D uniform over {x ∈ {0, 1}^n | ‖x‖_0 = k}: In this case, the canonical orthonormal basis is the correctly normalized Young's basis (see e.g. [Fil16], Theorems 3.1, 3.2 and 5.1), and again we have that when expressing an orthonormal basis polynomial p_i in the monomial basis, no coefficient exceeds n^{O(d)}. As in the above case, this implies that σ_max(R) ≤ n^{O(d)} and completes the proof.
4. D uniform over {x ∈ {±1, 0}^n | ‖x‖_0 = k}: Again the canonical orthonormal basis is Young's basis with a correct normalization. We again apply [Fil16], Theorems 3.1 and 3.2, but this time we calculate the normalization by hand: we have that in expressing each p_i, no element of the monomial basis has coefficient larger than n^{O(d)} multiplied by the quantity
    𝔼_{x∼D}[ Π_{i=1}^{d} (x_{2i−1} − x_{2i})^2 ] = O(1).
This gives the desired conclusion.
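The change-of-basis argument above is easy to check numerically in a small instance. The following sketch (our own illustration, not from the paper) builds the degree-d moment matrix of the uniform distribution over {±1}^n and confirms that its smallest nonzero eigenvalue is bounded away from zero, as in case 1.

```python
import itertools
import numpy as np

def moment_matrix_hypercube(n, d):
    """X_D = E_{x ~ Unif({+-1}^n)} of all products of pairs of monomials of degree <= d
    (multilinear monomials suffice over the hypercube); illustrative sketch."""
    monos = [s for r in range(d + 1) for s in itertools.combinations(range(n), r)]
    points = list(itertools.product([-1.0, 1.0], repeat=n))
    V = np.array([[np.prod([x[i] for i in s]) if s else 1.0 for s in monos]
                  for x in points])
    return V.T @ V / len(points)

X = moment_matrix_hypercube(n=4, d=2)
eigs = np.linalg.eigvalsh(X)
print(min(eigs[eigs > 1e-9]))   # = 1: the monomial basis is already orthonormal over the hypercube
```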
C
From Boolean to Gaussian lower bounds
In this section we show how to prove our SoS lower bounds for Gaussian PCA problems using the
lower bounds for Boolean problems in a black-box fashion. The techniques are standard and more
broadly applicable than the exposition here but we prove only what we need.
The following proposition captures what is needed for tensor PCA; the argument for sparse
PCA is entirely analogous so we leave it to the reader.
Proposition C.1. Let k ∈ ℕ and let A ∼ {±1}^{\binom{n}{k}} be a symmetric random Boolean tensor. Suppose that for every A ∈ {±1}^{\binom{n}{k}} there is a degree-d pseudodistribution 𝔼̃ satisfying {‖x‖^2 = 1} such that
    𝔼_A 𝔼̃ ⟨x^{⊗k}, A⟩ ≥ C.
Let T ∼ N(0, 1)^{\binom{n}{k}} be a Gaussian random tensor. Then
    𝔼_T max_{𝔼̃} 𝔼̃ ⟨x^{⊗k}, T⟩ ≥ Ω(C),
where the maximization is over pseudodistributions of degree d which satisfy {‖x‖^2 = 1}.
Proof. For a tensor T ∈ (ℝ^n)^{⊗k}, let A(T) have entries A(T)_α = sign(T_α). Now consider
    𝔼_T 𝔼̃_{A(T)} ⟨x^{⊗k}, T⟩ = Σ_α 𝔼_T [ 𝔼̃_{A(T)}[x_α] · T_α ],
where α ranges over multi-indices of size k over [n]. We rearrange each term above to
    𝔼_{A(T)} [ (𝔼̃_{A(T)} x_α) · 𝔼[T_α | A(T)] ] = 𝔼_{A(T)} [ (𝔼̃_{A(T)} x_α) · A(T)_α · 𝔼|g| ],
where g ∼ N(0, 1). Since 𝔼|g| is a constant independent of n, all of this is
    Ω(1) · Σ_α 𝔼_A [ 𝔼̃_A x_α · A_α ] ≥ Ω(C).
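The key fact used above, that the conditional mean of a Gaussian entry given its sign is the sign times an absolute constant, is easy to verify numerically; the short check below is our own illustration and is not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=1_000_000)
print(np.abs(g).mean(), np.sqrt(2 / np.pi))   # both ~0.7979: E|g| is an absolute constant,
# so E[T_alpha | sign(T_alpha)] = sign(T_alpha) * E|g|, which is the identity used above
```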
| 8 |
arXiv:1709.08605v1 [cs.CV] 25 Sep 2017
Muon Trigger for Mobile Phones
M Borisyak1,2 , M Usvyatsov2,3,4 , M Mulhearn5 , C Shimmin6,7 and
A Ustyuzhanin1,2
1
National Research University Higher School of Economics, 20 Myasnitskaya st., Moscow
101000, Russia
2
Yandex School of Data Analysis, 11/2, Timura Frunze st., Moscow 119021, Russia
3
Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow
Region, 141700, Russia
4
Skolkovo Institute of Science and Technology, Skolkovo Innovation Center, Building 3,
Moscow 143026, Russia
5
University of California, Davis, 1 One Shields Avenue, Davis, CA 95616, USA
6
University of California, Irvine, 4129 Frederick Reines Hall, Irvine, CA 92697-4575, USA
7
Yale University, 217 Prospect Street, New Haven, CT 06520, USA
E-mail: [email protected]
Abstract. The CRAYFIS experiment proposes to use privately owned mobile phones as a
ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth’s
atmosphere, these events produce extensive particle showers which can be detected by cameras
on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As
these particles interact with CMOS image sensors, they may leave tracks of faintly-activated
pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely
on the presence of very bright pixels within an image frame are not efficient in this case.
We present a trigger algorithm based on Convolutional Neural Networks which selects images
containing such tracks and which is evaluated in a lazy manner: the response of each successive layer
is computed only if the activation of the current layer satisfies a continuation criterion. The usage
of neural networks increases the sensitivity considerably compared with image thresholding,
while the lazy evaluation allows for execution of the trigger under the limited computational
power of mobile phones.
1. Introduction
The problem of pattern detection over a set of images arises in many applications. The CRAYFIS
experiment is dedicated to observations of Ultra-High-Energy Cosmic Rays (UHECR) by a
distributed network of mobile phones provided by volunteers. In the process of interaction with
the Earth’s atmosphere, UHECRs produce cascades of particles called Extensive Air Showers
(EAS). Some of the particles reach the ground, affecting areas of up to several hundreds of
meters in radius. These particles can be detected by cameras on mobile phones, and a localized
coincidence of particle detection by several phones can be used to observe very rare UHECR
events [1].
This approach presents a number of challenges. In order to observe an EAS, each active
smartphone needs to continuously monitor its camera output by scanning megapixel-scale images
at rates of 15-60 frames per second. This generates a vast amount of raw data, which presents
problems both for volunteers1 and experimenters if transmitted to data processing servers for
later analysis. However, the recorded data contains almost entirely random camera noise, as
signals from cosmic ray interactions are expected to occur in fewer than 1 in 10,000 image frames.
As there would be potentially millions of smartphones operating simultaneously, it is critical to
utilize the local processing power available on each device to select only the most interesting
data. Hence, a trigger algorithm is required to filter out background data and identify possible
candidates for cosmic rays traces. It is also important that the camera monitoring is subject
to negligible dead time; therefore any trigger must operate with an average evaluation response
time on the order of 30ms to track with the raw data rate.
Some constituents of an EAS, such as electrons and gamma-ray photons, leave bright traces
in the camera [1]. In this case, the simplest trigger strategy is a cut on brightness (if there
are bright pixels in a fragment, then this fragment is considered interesting). This is usually
enough to provide an acceptable background rejection rate in the case of bright traces, and given
a target background rejection rate it is possible to automatically determine the threshold value
for decision making. However, this strategy is much less effective against another component of
the shower, comprising minimally-ionizing particles such as high-energy muons. These particles
may leave relatively faint signals in the pixels they traverse, possibly at a level comparable to
the sensor’s intrinsic noise.
Nevertheless, these minimally-ionizing particles traverse the sensor’s pixels in distinctively
straight lines. If these tracks are long enough in the plane of the sensor, there is a low probability
of the same pattern emerging from intrinsic random camera noise. Thus it is still possible to
discriminate even these faintly-interacting particles from background.
In this work, we propose a novel approach for fast visual pattern detection, realized here as
a trigger designed for fast identification of muon traces on mobile phone’s camera. This method
is based on Convolutional Neural Networks and does not contain any specific assumptions for
identification of muon traces, hence, in principle, it can be applied to any visual pattern detection
problem.
2. Related Work
Minimally-ionizing particles are characterized by the pattern of activated pixels they leave over
a small region of an exposure. Hence the problem of minimally-ionizing particle detection can
be transformed into the problem of pattern detection over the image. Several attempts have been
made to solve the pattern detection problem in different setups. The solution proposed in
works [2] and [3] utilizes Convolutional Neural Networks (CNNs). Certain properties of CNNs
such as translation invariance, locality, and correspondingly few weights to be learned, make
them particularly well suited to extracting features from images. The performance of even
simple CNNs on image classification tasks has been shown to be very good compared to
other methods [4]. However, the training and evaluation of CNNs requires relatively intensive
computation to which smartphones are not particularly well suited. Viola and Jones in [5]
introduced the idea of a simple feature-based cascading classifier. They proposed using small
binary filters as feature detectors and to increase computational power from cascade to cascade.
Bagherinezhad et al. in [6] enumerated a wide range of methods which have been proposed
to address efficient training and inference in deep neural networks. Although these methods are
orthogonal to our approach, they may be incorporated with the method described here in order
to improve efficiency in related tasks.
1
For example, it may quickly exceed smartphone’s storage capacity or introduce a considerable load on networks.
3. CNN trigger
The key insight of the proposed method is to view a Deep Convolutional Neural Network (CNN)
as a chain of triggers, or cascades: each trigger filters its input stream of data before passing
it further down the chain. The main feature of such chains is that amount of data passing
successfully through the chain gradually decreases, while the complexity of triggers gradually
increases, allowing finer selection with each subsequent trigger. This architecture allows one to
effectively control the computational complexity of the overall chain, and usually, to substantially
decrease the amount of computational resources required [5].
Convolutional Neural Networks are particularly well suited for adopting this approach
as instead of passing an image itself throughout the chain, the CNN computes a series of
intermediate representations (activations of hidden layers) of the image [4]. Following the
same reasoning as in Deeply Supervised Nets (DSN) [7], one can build a network for which
discriminative power of intermediate representations grows along the network, making it possible
to treat such CNN as a progressive trigger chain 2 .
In order to build a trigger chain from a CNN, we propose a method similar to DSN: each
layer of the CNN is extended with a binary trigger based on the image representation obtained
by this layer. In the present work we use logistic regression as model for the trigger, although,
any binary classifier with a differentiable model would be sufficient.
The trigger is applied to each region of the output image to determine if that region should
proceed further down the trigger chain. We call these layers with their corresponding triggers
a convolutional cascade, by analogy with Viola-Jones cascades [5]. The output of the trigger at
each stage produces what we refer to as an activation map, to be passed to the next layer, as
illustrated in Fig. 1a.
From another perspective, this approach can be seen as an extension of the CNN architecture
in which network computations concerning regions of the input image are halted as soon as a
negative outcome for a region becomes evident3 . This is effectively accomplished by generalizing
the convolution operator to additionally accept as input an activation map indicating the regions
to be computed. In the case where sparse regions of activation are expected, this lazy application
can result in much fewer computations being performed.
After each application of the lazy convolution, the activation map for the subsequent layer
is furnished by the trigger associated with the current layer. The whole chain is illustrated in
Fig. 1b.
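A minimal sketch of the lazy application idea is given below (our own NumPy illustration, not the authors' implementation): a convolution is evaluated only at positions flagged by the incoming activation map, and the associated trigger produces the activation map passed to the next cascade. The function name, bias, and threshold are placeholder assumptions.

```python
import numpy as np

def lazy_conv_cascade(image, active, kernel, bias=0.0, threshold=0.5):
    """One convolutional cascade with lazy evaluation (illustrative sketch).

    image  : 2D float array, the current intermediate representation
    active : 2D boolean array, the incoming activation map
    kernel : 2D convolution filter
    Returns (response, new_active): the cascade output and the refined activation map.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    response = np.zeros_like(image)
    new_active = np.zeros_like(active)
    ys, xs = np.nonzero(active)                  # only visit active positions
    for y, x in zip(ys, xs):
        patch = padded[y:y + kh, x:x + kw]
        r = float(np.sum(patch * kernel)) + bias
        response[y, x] = r
        # trigger: logistic regression on the response, thresholded to a binary decision
        new_active[y, x] = (1.0 / (1.0 + np.exp(-r))) > threshold
    return response, new_active
```

When the activation map is sparse, the loop touches only a small fraction of the pixels, which is the source of the computational saving discussed in the text.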
Training of the CNN trigger may be problematic for gradient methods, since prediction is
no longer a continuous function of network parameters. This is because the lazy convolution,
described above, is in general non-differentiable.
In order to overcome this limitation, we propose to use a slightly different network architecture
during training by substituting a differentiable approximation of the lazy convolution operator.
The basic idea is that instead of skipping the evaluation of unlikely regions, we simply ensure
that any region which has low activation on a given cascade will continue to have low activation
on all subsequent cascades. In this scheme, the evaluation is no longer lazy, but since training
may be performed on much more powerful hardware, this is not a concern.
To accomplish this, we first replace the binary activation maps (which result from the
trigger classification) with continuous probability estimates. Secondly, we introduce intermediate
activation maps, which are computed by the trigger function at each layer. The intermediate
map is multiplied by the previous layer’s activation map to produce a refined activation map4 .
In this way, the activation probability for any region is nonincreasing with each cascade layer.
2
However, in contrast to DSN, the growth of discriminative power is a requirement for an effective trigger chain
rather than for network regularization.
3
Lazy application can be viewed as a variation of attention mechanisms recently proposed in Deep Learning
literature, see e.g. [8].
(a) convolutional cascade    (b) CNN trigger structure
Figure 1: Fig. 1a shows the building block of the CNN trigger, an individual convolutional
cascade. In contrast to conventional convolutional layers, the convolutional cascade has an
additional input, the activation map, which indicates regions to which the convolutional operator
should be applied (lazy application, denoted by dashed lines). The activation map is updated
by the associated trigger (represented by the s-shaped node), which may be passed on to the
subsequent cascade or interpreted as the final output indicating regions of interest. Fig. 1b
shows the full structure of CNN trigger as a sequence of convolutional cascades. Initially the
whole image is activated (red areas). As the image proceeds through the chain, the activated
area becomes smaller as each cascade refines the activation map.
The process is depicted schematically in Fig. 2.
This differentiable version of the lazy application operation for the ith cascade is described
by the following equations:
    I^i = h^i(I^{i−1});                (1)
    Â^i_{x,y} = σ^i(I^i_{x,y});        (2)
    A^i = Â^i ⊗ A^{i−1};               (3)
    A^0_{x,y} := 1,                    (4)
where I i is the intermediate representation of the input image I 0 after successive applications
of CNN layers, and hi represents the transformation associated with the ith layer of the CNN
(typically this would be convolution, nonlinearity, and pooling). σ i is the function associated
with the trigger (in our case, logistic regression), and its result Âi is the intermediate activation
map of the ith cascade. Finally, Ai is the differentiable version of the activation map, given
by the element-wise product of the intermediate activation with the previous layer’s activation.
That elements of the initial activation map A0 are set to 1, and the subscripts x, y denote region
position.
Note that the dimensions of the activation map define the granularity of regions-of-interest
that may be triggered, and may in general be smaller than the dimensions of the input image.
In this case, the trigger function σ should incorporate some downsampling such as max-pooling.
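The differentiable surrogate of equations (1)–(4) can be sketched as a small PyTorch-style module; this is our own illustration under the assumption that the trigger σ^i is realised as a 1 × 1 convolution (per-region logistic regression) and that pooling halves the resolution at each cascade, as in footnote 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableCascade(nn.Module):
    """Training-time cascade implementing (1)-(4): non-lazy, fully differentiable sketch."""
    def __init__(self, in_ch, out_ch, ksize):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, ksize, padding=ksize // 2)   # h^i
        self.trigger = nn.Conv2d(out_ch, 1, 1)                            # logistic regression sigma^i

    def forward(self, image, act_prev):
        rep = F.max_pool2d(F.relu(self.conv(image)), 2)        # I^i = h^i(I^{i-1}), with pooling
        act_inter = torch.sigmoid(self.trigger(rep))           # intermediate activation map \hat A^i
        act_prev = F.max_pool2d(act_prev, 2)                   # match the pooled resolution (footnote 4)
        act = act_inter * act_prev                             # A^i = \hat A^i (x) A^{i-1}, elementwise
        return rep, act

# Equation (4): the initial activation map A^0 is all ones, e.g.
# act0 = torch.ones(batch, 1, H, W)
```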
We also note that since similar but still technically different networks are used for training
and prediction, special care should be taken while transitioning from probability estimations
to binary classification. In particular, additional adjustment of classifier thresholds may be
4
If layers of underlying CNN contains pooling, i.e. change size of the image, pooling should be applied to
intermediate activation maps as well.
Figure 2: Schematic of the CNN trigger used for training. To make the network differentiable,
lazy application is replaced by its approximation, that does not involve any “laziness”.
Activation maps are approximated as the elementwise product of unconditional trigger response
(intermediate activation maps) and the previous activation map.
required. Nevertheless, in the present work no significant differences in the two networks’
behaviors were found (see Fig. 4).
To train the network we utilize the cross-entropy loss. Since activation maps are also intermediate results, and the activation map A^n of the last cascade is the final result for a network of n cascades, the loss can be written as:
    L_n = − (1/(W × H)) Σ_{x,y} [ Y_{x,y} log I^n_{x,y} + γ^n (1 − Y_{x,y}) log(1 − I^n_{x,y}) ]    (5)
where Y ∈ ℝ^{W×H} denotes the ground truth map with width W and height H. The truth map is defined with Y_{x,y} = 1 if the region at coordinates (x, y) contains a target object, otherwise Y_{x,y} = 0. The coefficient γ^n is introduced to provide control over the penalty for triggering on background.
If the cross-entropy term (5) is the only component of the loss, the network will have no particular incentive to assign regions a low activation on early cascades, limiting the benefit of the lazy evaluation architecture. One approach to force the network to produce good results on intermediate layers is to directly introduce a penalty term C for unnecessary computations:
    C = Σ_{i=1}^{n} c_i Σ_{x,y} (1 − Y_{x,y}) A^{i−1}_{x,y}    (6)
where c_i represents the per-region cost of performing the convolution and trigger in the i-th cascade. We use a naive estimation of the coefficients c_i, assuming, for simplicity, that convolution is performed by elementwise multiplication of the convolutional kernel with a corresponding image patch. In this case, for l filters of size (k, k) applied to an image with m channels:
    c_i = mlk^2 [multiplications] + l(mk^2 − 1) [summations] + (2l − 1) [trigger]    (7)
Combining these terms, the resulting total loss function is given by:
L = Ln + βC
(8)
where the parameter β is introduced to regulate the trade-off between computational efficiency
and classification quality.
Another approach is to apply a DSN technique:
    L = L_n + Σ_{i=1}^{n−1} α_i L_i.    (9)
Here, Li is the loss associated with ith cascade (i.e. companion loss in DSN terminology)
defined by analogy with (5). The coefficients αi regulate the trade-off between losses on different
cascades5 .
In the present work, we find that the objectives defined by (8) and (9) are highly correlated.
However, (9) seems to propagate gradients more effectively, resulting in faster training.
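The objectives (5)–(9) can be sketched in code as follows (our own PyTorch-style illustration; it is written in terms of the per-cascade activation maps, and the handling of resolutions, the companion weights α_i, and the cost coefficients c_i are placeholder assumptions).

```python
import torch

def cascade_loss(act, truth, gamma=1.0, eps=1e-7):
    """Cross-entropy between an activation map and a ground-truth map, as in (5)."""
    pos = truth * torch.log(act + eps)
    neg = gamma * (1.0 - truth) * torch.log(1.0 - act + eps)
    return -(pos + neg).mean()

def total_loss_dsn(acts, truths, alphas):
    """DSN-style objective (9): final loss plus weighted companion losses."""
    loss = cascade_loss(acts[-1], truths[-1])
    for a, y, alpha in zip(acts[:-1], truths[:-1], alphas):
        loss = loss + alpha * cascade_loss(a, y)
    return loss

def computation_penalty(acts_prev, truths, costs):
    """Penalty term (6); acts_prev[i] is the activation map A^{i-1} feeding cascade i,
    truths[i] is the ground-truth map downsampled to the same resolution."""
    return sum(c * ((1.0 - y) * a).sum() for c, a, y in zip(costs, acts_prev, truths))
```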
4. Experiments
4.1. Dataset
As of this writing, no labeled dataset of CMOS images containing true muon tracks is available6 .
Instead, an artificial dataset was constructed with properties similar to those expected from real
data, in order to assess the CNN trigger concept.
(a) original traces
(b) composition
Figure 3: Test dataset creation steps: 3a selection of bright photon tracks, 3b track brightness
is lowered and superimposed on noisy background.
To construct the artificial dataset, images were taken from a real mobile phone exposed to radioactive ²²⁶Ra, an X-ray photon source. These photons interact in the sensor primarily via Compton scattering, producing energetic electrons which leave tracks as seen in Fig. 3a. These tracks are similar to those expected from muons, the main difference being that the electron tracks
tend to be much brighter than the background noise, rendering the classification problem almost
trivial.
Therefore, the selected particle tracks are renormalized such that their average brightness is approximately at the level of the camera's intrinsic noise. The dimmed traces are then superimposed on a background with Gaussian noise that models the intrinsic camera sensor noise. An example of the resulting artificial data is shown in Fig. 3b. After these measures, the dataset better emulates the case of low-brightness muons, and also forces the classifier to use more sophisticated (geometric) features for classification.
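A sketch of this dataset construction is given below (our own illustration; the frame size, noise level, and target brightness are placeholder values, not numbers from the paper).

```python
import numpy as np

def make_artificial_frame(track_patch, frame_shape=(64, 64), noise_sigma=4.0,
                          target_brightness=4.0, rng=np.random.default_rng()):
    """Superimpose a dimmed particle track on Gaussian sensor noise (illustrative sketch)."""
    frame = rng.normal(0.0, noise_sigma, size=frame_shape)          # intrinsic camera noise
    # renormalise the track so its mean brightness is comparable to the noise level
    scale = target_brightness / max(track_patch.mean(), 1e-6)
    dimmed = track_patch * scale
    y = rng.integers(0, frame_shape[0] - track_patch.shape[0])
    x = rng.integers(0, frame_shape[1] - track_patch.shape[1])
    frame[y:y + track_patch.shape[0], x:x + track_patch.shape[1]] += dimmed
    truth = np.zeros(frame_shape, dtype=bool)
    truth[y:y + track_patch.shape[0], x:x + track_patch.shape[1]] = track_patch > 0
    return frame, truth
```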
5
One may find α_i ∼ c_i to be a relatively good heuristic.
6
To obtain real data and fully validate the performance of the algorithm, an experimental setup with muon scintillators is scheduled for this year.
(a) intermediate activation maps and ground truth map
(b) activation maps and ground truth map
(c) binary activation maps and ground truth map
Figure 4: Evaluation of the trigger CNN (using the input image from Fig. 3b). Figs. 4a and 4b
are activation maps for training regime; Fig. 4c are binary activation maps for the application
regime. The resolution of the map is reduced after each cascade to match the downsampling of
the internal image representation.
4.2. CNN trigger evaluation
To evaluate the performance of the method, we consider the case of a CNN trigger with 4
cascades. The first cascade has a single filter of size 1 × 1, equivalent to simple thresholding.
The second, third, and fourth cascades have 1, 3, and 6 filters of size 3 × 3, respectively. Within
each cascade, convolutions are followed by 2 × 2 max-pooling.
Due to the simple structure of the first cascade, its coefficient c1 from (6) is set to 1.
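The per-cascade costs implied by equation (7) for this configuration can be written down explicitly. The following back-of-the-envelope sketch is our own restatement, with c1 fixed to 1 as in the text and under the assumption that the input is single-channel and that each later cascade sees as many channels as the previous cascade had filters.

```python
def cascade_cost(m, l, k):
    """Per-region cost estimate from equation (7)."""
    return m * l * k ** 2 + l * (m * k ** 2 - 1) + 2 * l - 1

# (filters l, kernel k) for cascades 2-4; cascade 1 is a single 1x1 filter with cost set to 1.
cascades = [(1, 3), (3, 3), (6, 3)]
costs = [1]
m = 1                      # channels seen by cascade 2 (output of the 1x1 cascade), assumed
for l, k in cascades:
    costs.append(cascade_cost(m, l, k))
    m = l                  # the next cascade sees l channels
print(costs)               # [1, 18, 56, 329] under these assumptions
```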
As motivated in Sec. 1, a successful trigger must run rapidly on hardware with limited capabilities. One of the simplest algorithms that satisfies this restriction, thresholding by brightness, is chosen as the baseline for comparison.7 This strategy yields a background rejection rate of around 10^{−2} (mainly due to assumptions built into the dataset) with perfect signal efficiency.
Two versions of the CNN trigger with average computational costs8 of 1.4 and 2.0 operations
per pixel were trained. This computational cost is controlled by varying coefficients in the loss
function (9). For each, signal efficiency and background rejection rates at three working points
7
In order to obtain comparable results, the output of thresholding was max-pooled to match the size of the CNN
trigger output.
8
As estimated by (6).
are presented in Table 1. Fig. 4 shows some typical examples of activation maps for different
network regimes.
complexity          | signal efficiency | background rejection
1.4 op. per pixel   | 0.90              | 0.60
                    | 0.95              | 0.39
                    | 0.99              | 0.12
2.0 op. per pixel   | 0.90              | 0.65
                    | 0.95              | 0.44
                    | 0.99              | 0.15
Table 1: CNN trigger performance for two models with computational costs 1.4 and 2.0
operations per pixel. Different points for signal efficiency and background rejection were obtained
by varying threshold on output of the CNN trigger (i.e. activation map of the last cascade).
These results indicate a significant improvement of background rejection rate relative to the
baseline strategy, even for nearly perfect signal efficiency.
Another performance metric that is interesting to consider is the normalized computational complexity:
    Ĉ = C · [ Σ_{i=1}^{n} c_i Σ_{x,y} (1 − Y_{x,y}) ]^{−1}.    (10)
For the models described above, the normalized computational complexity, Ĉ, is around 4-5
percent, which indicates that a significant amount of computational resources is saved due to
lazy application, as compared to a conventional CNN with the same structure.
5. Conclusion
We have introduced a novel approach to construct a CNN trigger for fast visual pattern detection
of rare events, designed particularly for the use case of fast identification of muon tracks on mobile
phone cameras. Nevertheless, the proposed method does not contain any application-specific
assumptions and can be, in principle, applied to a wide range of problems.
The method extends Convolutional Neural Networks by introducing lazy application of
convolutional operators, which can achieve comparable performance with lower computational
costs. The CNN trigger was evaluated on an artificial dataset with properties similar to those
expected from real data. Our results show significant improvement of background rejection rate
relative to a simple baseline strategy with nearly perfect signal efficiency, while the per-pixel
computational cost of the algorithm is increased by less than a factor of 2.
The effective computational cost is equivalent to 4-5 percent of the cost required by a
conventional CNN of the same size. Therefore the method can enable the evaluation of powerful
CNNs in instances where time and resources are limited, or where the network is very large. This
is a promising result for CNNs in many other possible applications, such as very fast triggering
with radiation-hard electronics, or power-efficient realtime processing of high resolution sensors.
References
[1] Whiteson D, Mulhearn M, Shimmin C, Cranmer K, Brodie K and Burns D 2016 Astroparticle Physics 79 1–9
[2] Li H, Lin Z, Shen X, Brandt J and Hua G 2015 A convolutional neural network cascade for face detection
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pp 5325–5334
[3] Ren S, He K, Girshick R and Sun J 2015 Faster r-cnn: Towards real-time object detection with region proposal
networks Advances in neural information processing systems pp 91–99
[4] LeCun Y and Bengio Y 1995 The handbook of brain theory and neural networks 3361 1995
[5] Viola P and Jones M 2001 Rapid object detection using a boosted cascade of simple features Computer Vision
and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference
on vol 1 (IEEE) pp I–511
[6] Bagherinezhad H, Rastegari M and Farhadi A 2016 arXiv preprint arXiv:1611.06473
[7] Lee C Y, Xie S, Gallagher P, Zhang Z and Tu Z 2015 Deeply-supervised nets. AISTATS vol 2 p 6
[8] Xu K, Ba J, Kiros R, Cho K, Courville A C, Salakhutdinov R, Zemel R S and Bengio Y 2015 Show, attend
and tell: Neural image caption generation with visual attention. ICML vol 14 pp 77–81
| 1 |
arXiv:1708.09047v1 [math.GR] 29 Aug 2017
Computable groups which do not embed into
groups with decidable conjugacy problem
Arman Darbinyan
1
Abstract
We show the existence of finitely generated torsion-free groups with decidable
word problem that cannot be embedded into groups with decidable conjugacy
problem. This answers a well-known question of Collins from the early 1970’s.
2
Introduction
Two of the most central decision problems associated with finitely generated
groups are the word and conjugacy problems. One of the important questions about these problems concerns the relation between them. For example, if the conjugacy problem is decidable for a finitely generated group G, then the word problem is decidable as well. However, in general, the converse is far from being true [16, 6, 17, 15, 8].
If G is a finitely generated group and H ≤ G is a subgroup of finite index,
then the word problem in G is decidable if and only if it is decidable for H.
However, it is shown by Goryaga–Kirkinskii [10], and independently by Collins–Miller [7], that subgroups of index 2 of some specific finitely generated groups
have decidable (respectively, undecidable) conjugacy problem, while the groups
themselves have undecidable (respectively, decidable) conjugacy problem.
Another important type of question about word and conjugacy problems in groups is the following: Is it true that every finitely generated group with decidable word problem (respectively, conjugacy problem) embeds in a finitely presented group with decidable word problem (respectively, conjugacy problem)? Both of these questions have a positive answer. The answer for the word problem was obtained by Clapham in 1967 [5], based on the classical embedding theorem of Higman (see [13]), while the analogous question for the conjugacy problem was a long-standing open problem until it received a positive answer in 2004 through a work of Olshanskii and Sapir. See [18] and also [19].
In light of the aforementioned, a natural question about the connection of
word and conjugacy problems in groups is the following question, asked by
Donald Collins in the early 1970s.
Question 1. Can every torsion-free group with solvable word problem
be embedded in a group with solvable conjugacy problem?
This question appears in the 1976 edition of The Kourovka Notebook as Problem 5.21, [12]. Probably, the first source where this problem was posed in a
written form is [3]. For yet another source, see [2].
It was mentioned by Collins in [12] that, due to an example by A. Macintyre, there exists a group with torsion which cannot be embedded into a
finitely generated group with decidable conjugacy problem. However, the case
for torsion-free groups remained open until now. Indeed, one of the reasons
why the torsion and torsion-free cases are different is based on the observation
that conjugate elements in a group must have the same order, and since in a
torsion-free group all non trivial elements have the same (infinite) order, one
cannot make use of this observation in order to answer Question 1.
In [19], Olshanskii and Sapir showed the following theorem which gives a
positive answer to Question 1 under the stronger assumption of decidability of
the power problem.
Theorem 1 (Olshanskii-Sapir, [19]). Every countable group with solvable power
and order problems is embeddable into a 2-generated finitely presented group with
solvable conjugacy and power problems.
Note that as it is defined in [19], for a given group G the power problem is
said to be decidable, if there exists an algorithm which for any given pair (g, h)
of elements from G decides whether or not g is equal to some power of h in G.
The order problem is decidable in G if there exists an algorithm which for each
input g ∈ G computes the order of g.
The main result of the current work is the negative answer to Question 1 in
the general case.
Theorem 2. There exists a finitely presented torsion-free group G with decidable word problem such that G cannot be embedded into a group with decidable
conjugacy problem.
A remarkable theorem of Osin (see [20]) says that every torsion-free countable group can be embedded into a two generated group with exactly two conjugacy classes. In the context of this theorem, it is very natural to ask whether
or not every torsion-free countable group with decidable word problem (= computable group) can be embedded into a group with exactly two conjugacy classes
and with decidable word problem. A more relaxed version of this question would
be whether or not every torsion-free countable group with decidable word problem can be embedded in a finitely generated recursively presented group with
finitely many conjugacy classes.
It turns out that a direct consequence of Theorem 2 gives negative answer
to both of these questions.
In fact, the decidability of the conjugacy problem for groups with exactly two
conjugacy classes is equivalent to the decidability of the word problem. Even
more, as it is shown in a recent paper of Miasnikov and Schupp [15], a finitely
generated recursively presented group with finitely many conjugacy classes has
decidable conjugacy problem. Therefore, a direct corollary from Theorem 2 is
the following.
Theorem 3. There exists a torsion-free finitely presented group with decidable word problem that cannot be embedded into a finitely generated recursively
presented group with finitely many conjugacy classes.
Proof. Just take the group G from Theorem 2.
Remark 1. In fact, the mentioned result of Miasnikov and Schupp is true not
only for finitely generated recursively presented groups, but for all recursively
presented groups in general. Therefore, Theorem 3 stays true after dropping
the assumption that the group in which the initial group is embedded is finitely
generated. (The exact definition of recursive presentations of groups is given in
the next section.)
Acknowledgements. I would like to thank Alexander Olshanskii for his
thoughtful comments on this work.
3
Preliminaries
3.1
Groups with decidable word problem
A countable group G is said to have a recursive presentation if G can be presented as G = ⟨X | R⟩ such that X and R are enumerable by some algorithm (i.e. a Turing machine). See [11]. If, in addition, there is an algorithm which for each pair of words (w, w′) from (X ∪ X^{−1})* verifies whether or not w and w′ represent the same element of G, then the presentation G = ⟨X | R⟩ is called computable, and in case G possesses such a presentation, the group G itself is called computable as well. Modulo some slight variances, the original definition
of the concept of computable groups is due to Rabin [21] and Mal’cev [14].
In case the group G is finitely generated (i.e. |X| < ∞) computability property of G is the same as saying that G has decidable word problem. It is not
hard to notice that decidability of the word problem does not depend on the
finite generating sets. From the computability perspective, the last observation
is one of the main advantages of finitely generated groups over countably generated ones, because in case of finitely generated groups decidability of the word
problem is an intrinsic property of a group, rather than of its presentation.
However, in this paper, to keep the notations as uniform as possible, we say
that G has decidable word problem if it is given by a computable presentation.
Let G = ⟨x_1, x_2, . . . | r_1, r_2, . . .⟩, where {x_1, x_2, . . .} and {r_1, r_2, . . .} are recursive enumerations of X and R, respectively. Then, the embedding constructions of [9] and [19] imply the following theorem.
Theorem 4. If G = ⟨x_1, x_2, . . . | r_1, r_2, . . .⟩ has decidable word problem, then
there exists an embedding Φ : G → H of G into a two generated group H such
that the following holds.
(1). The word problem is decidable in H;
(2). The map i 7→ Φ(xi ) is computable;
(3). An element of H is of finite order if and only if it is conjugate to an image
under Φ of an element of finite order in G.
3.2
HNN-extensions
In the proof of the existence of the group G from Theorem 2 we use some
group theoretical constructions based on HNN-extensions. Therefore, in this
subsection we would like to recall some well-known basic facts about HNNextensions. The basics of the theory of HNN-extensions can also be found in
[13].
Suppose that A, B ≤ H and φ : A → B is a group isomorphism from A to
B. Then the HNN-extension H ′ of H with respect to A and B (and φ) and
with stable letter t is defined as
    H′ = ⟨H, t | t^{−1}at = φ(a), a ∈ A⟩.
In the current text, the isomorphism φ will be clear from the context, hence we will simply use the notation H′ = ⟨H, t | t^{−1}At = B⟩.
Clearly, every element h′ of H ′ can be decomposed as a product
    h′ = h_0 t^{ε_1} h_1 . . . t^{ε_n} h_n,    (1)
where ε_i ∈ {±1} and h_j ∈ H for 1 ≤ i ≤ n, 0 ≤ j ≤ n.
The decomposition (1) is said to be in reduced form if it does not contain a subproduct of one of the forms t^{−1} g_i t, g_i ∈ A, or t g_i t^{−1}, g_i ∈ B, for 1 ≤ i ≤ n.
Analogously, if H = ⟨X⟩, then the word u′ ∈ (X ∪ X^{−1} ∪ {t^{±1}})* given by
    u′ = u_0 t^{ε_1} u_1 t^{ε_2} . . . t^{ε_n} u_n,
where ε_i ∈ {±1}, u_j ∈ (X ∪ X^{−1})*, is said to be a reduced word with respect to the HNN-extension H′ if the decomposition h_0 t^{ε_1} h_1 . . . t^{ε_n} h_n is in reduced form, where h_i corresponds to the word u_i in H.
The following well-known lemma is attributed to Britton in [13].
Lemma 1 (Britton's Lemma). If the decomposition (1) is reduced and n ≥ 1, then h′ ≠ 1 in H′.
Lemma 2 (See Theorem 2.1 in [13]). Let H′ = ⟨H, t | t^{−1}At = B⟩ be an HNN-extension with respect to isomorphic subgroups A and B. Then H embeds in H′ by the map h ↦ h, h ∈ H.
Lemma 3 (The Torsion Theorem for HNN-extensions; see Theorem 2.4 in [13]). Let H′ = ⟨H, t | t^{−1}At = B⟩ be an HNN-extension. Then every element of finite order in H′ is a conjugate of an element of finite order in the base H. Thus H′ has elements of finite order n if and only if H has elements of order n.
4
Proof of Theorem 2
In order to show the existence of G from Theorem 2, first, we will construct a
special countable group Ġ with decidable word problem, then G will be defined
as a group in which Ġ embeds in a certain way.
Two disjoint sets of natural numbers S1 , S2 ⊂ N are called recursively inseparable if there is no recursive set T ⊂ N such that S1 ⊆ T and S2 ⊆ N \ T . The
set T is called separating set. Clearly, if two disjoint sets are recursively inseparable, then they cannot be recursive. Indeed, if, say, S1 and S2 are disjoint and,
say, S1 is recursive, then as a recursive separating set one could simply take
S1 . Nevertheless, it is a well-known fact that there exist disjoint recursively
enumerable and recursively inseparable sets. See, for example, [22].
Let us fix two disjoint recursively enumerable and recursively inseparable
sets N = {n_1, n_2, . . .} ⊂ ℕ and M = {m_1, m_2, . . .} ⊂ ℕ such that the maps i ↦ n_i and i ↦ m_i are computable.
Now, for all n ∈ N, define An as a torsion-free abelian additive group of rank
two with basis {an,0 , an,1 }, i.e.
$$A_n = \langle a_{n,0} \rangle \oplus \langle a_{n,1} \rangle$$
and such that the groups A1 , A2 , . . . are disjoint.
For all $n \in \mathbb{N}$, define the groups $\dot{A}_n$ as follows:
$$\dot{A}_n = \begin{cases} A_n & \text{if } n \notin N \cup M, \\ A_n / \langle\langle\, a_{n,1} = 2^i a_{n,0} \,\rangle\rangle & \text{if } n = n_i \in N, \\ A_n / \langle\langle\, a_{n,1} = 3^i a_{n,0} \,\rangle\rangle & \text{if } n = m_i \in M. \end{cases}$$
For all n ∈ N and m ∈ {0, 1}, let us denote the images of an,m under the
natural homomorphisms An → Ȧn by ȧn,m .
Convention. In this text, whenever we deal with an additive group, say, $A$, with finite generating set, say, $\{a_1, \ldots, a_k\}$, by $\{\pm a_1, \ldots, \pm a_k\}^*$ we denote the set of formal finite sums of the form $w = \sum \lambda_i a_{j_i}$, where $\lambda_i \in \mathbb{Z}$ and $a_{j_i} \in \{a_1, \ldots, a_k\}$, and we say that $w$ is a word formed by the letters $\{\pm a_1, \ldots, \pm a_k\}$. Note that this is the additive analogue of the concept of words, central in combinatorial group theory, where the alphabet composing the words is a set of group generators. This is why we call the finite formal sums $w = \sum \lambda_i a_{j_i}$ words from $\{\pm a_1, \ldots, \pm a_k\}^*$.
Before moving forward, we prove the following important lemma.
Lemma 4. There exists an algorithm such that for each input n ∈ N and
w ∈ {±ȧn,0 , ±ȧn,1 }∗ , it decides whether or not w represents the trivial element
in the group Ȧn .
Proof. Indeed, since Ȧn is abelian with generating set {ȧn,0 , ȧn,1 }, each word w
from {±ȧn,0 , ±ȧn,1 }∗ can be effectively transformed to a word of the form
w′ = λ0 ȧn,0 + λ1 ȧn,1
which represents the same element in Ȧn as the initial word w, where λ0 , λ1 ∈ Z.
Now, assuming that $\lambda_0 \ne 0$ and $\lambda_1 \ne 0$, in order for $w'$ to represent the trivial element in $\dot{A}_n$ it must be that $n \in N \cup M$, because otherwise, by definition, the group $\dot{A}_n$ is torsion-free abelian of rank 2 with basis $\{\dot{a}_{n,0}, \dot{a}_{n,1}\}$.
In case $n \in N$, by definition we have that $\dot{a}_{n,1} = 2^x \dot{a}_{n,0}$, where $x$ is the index of $n$ in $N$, i.e. $n = n_x$. Similarly, in case $n \in M$, by definition we have that $\dot{a}_{n,1} = 3^x \dot{a}_{n,0}$, where $x$ is the index of $n$ in $M$, i.e. $n = m_x$.
Now, if λ0 = 0 and λ1 = 0, then clearly w′ (hence also w) represents the
trivial element in Ȧn . Therefore, without loss of generality we can assume that
at least one of λ0 and λ1 is not 0. Then, if we treat x as an unknown variable,
depending on whether n = nx or n = mx , the equality w′ = 0 would imply one
of the following equations:
$$\lambda_0 + \lambda_1 2^x = 0 \qquad (2)$$
or
$$\lambda_0 + \lambda_1 3^x = 0, \qquad (3)$$
respectively.
This observation suggests that in case $\lambda_0 \ne 0$ or $\lambda_1 \ne 0$, in order to verify
whether or not w′ = 0 in Ȧn , we can first try to find x satisfying (2) or (3),
and in case such an x does not exist, conclude that w′ (hence, also w) does not
represent the trivial element in Ȧn . Otherwise, if x is the root of the equation
(2), we can check whether or not n = nx (since N is recursively enumerable,
this checking can be done algorithmically). Similarly, if x is the root of the
equation (3), we can check whether or not n = mx .
If as a result of this checking, we get n = nx (respectively, n = mx ), then
the conclusion will be that w′ (hence, also w) represents the trivial element in
$\dot{A}_n$; otherwise, if $n \ne n_x$ (respectively, $n \ne m_x$), then the conclusion will be that $w'$ (hence, also $w$) does not represent the trivial element in $\dot{A}_n$.
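In other words (restating the case analysis above), for $n = n_i \in N$ the word $w' = \lambda_0 \dot{a}_{n,0} + \lambda_1 \dot{a}_{n,1}$ is trivial in $\dot{A}_n$ if and only if $\lambda_0 + 2^i \lambda_1 = 0$; for $n = m_i \in M$ the condition is $\lambda_0 + 3^i \lambda_1 = 0$; and for $n \notin N \cup M$ it is $\lambda_0 = \lambda_1 = 0$.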
Now, for all n ∈ N, define the group Bn as a torsion-free additive abelian
group of rank 2, that is
$$B_n = \langle b_{n,0} \rangle \oplus \langle b_{n,1} \rangle$$
such that B1 , B2 , . . . are disjoint.
Now, for all $n \in \mathbb{N}$, define the groups $\dot{B}_n$ as follows:
$$\dot{B}_n = \begin{cases} B_n & \text{if } n \notin N \cup M, \\ B_n / \langle\langle\, b_{n,1} = 2^i b_{n,0} \,\rangle\rangle & \text{if } n = n_i \in N \text{ or } n = m_i \in M. \end{cases}$$
For all n ∈ N, m ∈ {0, 1}, let us denote the images of bn,m under the natural
homomorphism Bn → Ḃn by ḃn,m .
It follows from the definitions of Ȧn and Ḃn that for all n ∈ N, these groups
are infinite and torsion free.
Lemma 5. There exists an algorithm such that for each input n ∈ N and
w ∈ {±ḃn,0 , ±ḃn,1 }∗ , it decides whether or not w represents the trivial element
in the group Ḃn .
Proof. Follows by repeating the arguments of the proof of Lemma 4.
Lemma 6. The map $\dot{a}_{n,0} \mapsto \dot{b}_{n,0}$, $\dot{a}_{n,1} \mapsto \dot{b}_{n,1}$ induces a group isomorphism between the groups $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle = \dot{A}_n$ and $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle = \dot{B}_n$ if and only if $n \in \mathbb{N} \setminus M$.
Proof. Indeed, in case $n \in N$, by definition, $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle = \langle \dot{a}_{n,0} \rangle$ and $\dot{a}_{n,1} = 2^i \dot{a}_{n,0}$, where $i$ is the index of $n$ in $N$. Also $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle = \langle \dot{b}_{n,0} \rangle$ and $\dot{b}_{n,1} = 2^i \dot{b}_{n,0}$. Therefore, in case $n \in N$, the map $\dot{a}_{n,0} \mapsto \dot{b}_{n,0}$, $\dot{a}_{n,1} \mapsto \dot{b}_{n,1}$ induces a group isomorphism between the groups $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle$ and $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle$.
In case $n \in \mathbb{N} \setminus (N \cup M)$, the groups $\dot{A}_n$ and $\dot{B}_n$ are torsion-free and abelian of rank 2 with generating sets $\{\dot{a}_{n,0}, \dot{a}_{n,1}\}$ and $\{\dot{b}_{n,0}, \dot{b}_{n,1}\}$, respectively. Therefore, if $n \in \mathbb{N} \setminus (N \cup M)$, the map $\dot{a}_{n,0} \mapsto \dot{b}_{n,0}$, $\dot{a}_{n,1} \mapsto \dot{b}_{n,1}$ induces a group isomorphism between the groups $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle$ and $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle$ as well.
Now suppose that $n \in M$. Then, $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle = \langle \dot{a}_{n,0} \rangle$ and $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle = \langle \dot{b}_{n,0} \rangle$; however, by definition, $\dot{a}_{n,1} = 3^i \dot{a}_{n,0}$ while $\dot{b}_{n,1} = 2^i \dot{b}_{n,0}$. Therefore, the map $\dot{a}_{n,0} \mapsto \dot{b}_{n,0}$, $\dot{a}_{n,1} \mapsto \dot{b}_{n,1}$ does not induce a group isomorphism between the groups $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle$ and $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle$ when $n \in M$.
Now, let $T = F(t_1, t_2, \ldots)$ be a free group with countable free basis $\{t_1, t_2, \ldots\}$. Denote the infinite free products $\dot{A}_1 * \dot{A}_2 * \cdots$ and $\dot{B}_1 * \dot{B}_2 * \cdots$ by $*_{n=1}^{\infty} \dot{A}_n$ and $*_{n=1}^{\infty} \dot{B}_n$, respectively. Then define
$$\dot{G} = \bigl( *_{n=1}^{\infty} \dot{A}_n \bigr) * \bigl( *_{n=1}^{\infty} \dot{B}_n \bigr) * T \,/\, \langle\langle R \rangle\rangle, \qquad (4)$$
where the set of defining relators $R$ is defined as
$$R = \bigl\{\, t_i^{-1} \dot{a}_{n_i,0} t_i = \dot{b}_{n_i,0} \;\big|\; i \in \mathbb{N} \,\bigr\}.$$
Define
$$\dot{G}_0 = \bigl( *_{n=1}^{\infty} \dot{A}_n \bigr) * \bigl( *_{n=1}^{\infty} \dot{B}_n \bigr),$$
and for all $k \in \mathbb{N}$, define $\dot{G}_k$ as
$$\dot{G}_k = \bigl( *_{n=1}^{\infty} \dot{A}_n \bigr) * \bigl( *_{n=1}^{\infty} \dot{B}_n \bigr) * F(t_1, \ldots, t_k) \,/\, \langle\langle R_k \rangle\rangle,$$
where the set of defining relators $R_k$ is defined as
$$R_k = \bigl\{\, t_i^{-1} \dot{a}_{n_i,0} t_i = \dot{b}_{n_i,0} \;\big|\; 1 \le i \le k \,\bigr\}.$$
Then, clearly, the group $\dot{G}$ is the direct limit of the sequence of groups $\{\dot{G}_k\}_{k=0}^{\infty}$ connected by the homomorphisms $\epsilon_k : \dot{G}_k \to \dot{G}_{k+1}$, where $\epsilon_k$ is induced by the identity map on $\{\dot{a}_{n,0}, \dot{a}_{n,1}, \dot{b}_{n,0}, \dot{b}_{n,1}, t_i \mid n \in \mathbb{N},\ i \in \{1, 2, \ldots, k\}\}$ for all $k \in \mathbb{N}$.
Let us denote
$$S_0 = \bigl\{\, \pm \dot{a}_{n,m},\ \pm \dot{b}_{n,m} \;\big|\; n \in \mathbb{N},\ m \in \{0, 1\} \,\bigr\}$$
and, for $k \in \mathbb{N}$,
$$S_k = \bigl\{\, \pm \dot{a}_{n,m},\ \pm \dot{b}_{n,m},\ t_1^{\pm 1}, \ldots, t_k^{\pm 1} \;\big|\; n \in \mathbb{N},\ m \in \{0, 1\} \,\bigr\}.$$
Note that since the sets N and M are recursively enumerable, the groups
Ġ and Ġk have recursive presentations with respect to the generating sets
S0 ∪ {t1 , t2 , . . .} and Sk , k ∈ N ∪ {0}, respectively.
Lemma 7. There exists an algorithm such that for each input w ∈ S0∗ it decides
whether or not w = 1 in Ġ0 .
Moreover, there exists an algorithm such that for each input (w, i), w ∈ S0∗ ,
i ∈ N, it decides whether or not w represents an element from hȧni ,0 i, and in
case it represents such an element, the algorithm returns λȧni ,0 , λ ∈ Z, such
that w = λȧni ,0 in Ġ0 . Analogous statement remains true when we replace ȧni ,0
with ḃni ,0 .
Proof. Indeed, these properties immediately follow from the basic properties of free products of groups combined with Lemmas 4 and 5.
Lemma 8. For all k ∈ N ∪ {0} and n ∈ N, the following holds.
(i). The groups $\dot{A}_n$ and $\dot{B}_n$ embed into $\dot{G}_k$ under the maps induced by $\dot{a}_{n,m} \mapsto \dot{a}_{n,m}$ and $\dot{b}_{n,m} \mapsto \dot{b}_{n,m}$ for $m \in \{0, 1\}$, respectively;
(ii). The group $\dot{G}_{k+1}$ is an HNN-extension of the group $\dot{G}_k$. More precisely,
$$\dot{G}_{k+1} = \langle \dot{G}_k, t_{k+1} \mid t_{k+1}^{-1} \dot{a}_{n_{k+1},0} t_{k+1} = \dot{b}_{n_{k+1},0} \rangle.$$
Proof. Indeed, if $k = 0$, then (i) and (ii) are obvious. Now, let us apply induction with respect to $k$. Suppose that for all $0 \le l < k$, the statements of (i) and (ii) are true. Then, since by the inductive assumption $\dot{G}_k$ is obtained from $\dot{G}_{k-1}$ as an HNN-extension with respect to the isomorphic subgroups $\langle \dot{a}_{n_k,0} \rangle \simeq \langle \dot{b}_{n_k,0} \rangle$, by the basic properties of HNN-extensions (see Lemma 2) we get that the statement of (i) holds for $\dot{G}_k$. Therefore, since the subgroups $\langle \dot{a}_{n_{k+1},0} \rangle \le \dot{G}_k$ and $\langle \dot{b}_{n_{k+1},0} \rangle \le \dot{G}_k$ are isomorphic, and in the definition of $\dot{G}_{k+1}$ the only defining relation which involves the letters $t_{k+1}^{\pm 1}$ is the relation $t_{k+1}^{-1} \dot{a}_{n_{k+1},0} t_{k+1} = \dot{b}_{n_{k+1},0}$, we get that the statement of (ii) holds as well.
Corollary 1. If k < l, then the group Ġk embeds into the group Ġl under the
map induced by
ȧn,m 7→ ȧn,m , ḃn,m 7→ ḃn,m for n ∈ N and m ∈ {0, 1}
and
t1 7→ t1 , . . . , tk 7→ tk .
Proof. Indeed, by Lemma 8, the group Ġl is obtained from the group Ġk by
(multiple) HNN-extensions. Therefore, the statement follows from the basic
properties of HNN-extensions, namely, by Lemma 2.
Corollary 2. The map $\dot{a}_{n,0} \mapsto \dot{b}_{n,0}$, $\dot{a}_{n,1} \mapsto \dot{b}_{n,1}$ induces a group isomorphism between the subgroups $\langle \dot{a}_{n,0}, \dot{a}_{n,1} \rangle = \dot{A}_n$ and $\langle \dot{b}_{n,0}, \dot{b}_{n,1} \rangle = \dot{B}_n$ of $\dot{G}$ if and only if $n \in \mathbb{N} \setminus M$.
Proof. By Corollary 1, Ġ0 embeds in Ġ by the map induced by ȧn,0 7→ ȧn,0 ,
ȧn,1 7→ ȧn,1 , ḃn,0 7→ ḃn,0 , ḃn,1 7→ ḃn,1 for n ∈ N. Therefore, the statement of the
corollary follows from Lemma 6.
Definition 1 (Reduced words over $S_k^*$). Let $k \in \mathbb{N}$. Then, for a given word $w \in S_k^*$, we say that $w$ is a reduced word over $S_k^*$ if the following properties hold.
(0). $w$ is freely reduced, i.e. $w$ does not contain subwords of the form $x x^{-1}$, $x \in S_k$;
(1). For all $1 \le i \le k$, $w$ does not contain subwords of the form $t_i^{-1} u t_i$, where $u \in S_0^*$ is such that $u = \lambda \dot{a}_{n_i,0}$ in $\dot{G}_0$ for some $\lambda \in \mathbb{Z}$;
(2). For all $1 \le i \le k$, $w$ does not contain subwords of the form $t_i v t_i^{-1}$, where $v \in S_0^*$ is such that $v = \lambda \dot{b}_{n_i,0}$ in $\dot{G}_0$ for some $\lambda \in \mathbb{Z}$.
Lemma 9. For all $k \in \mathbb{N}$, if $w \in S_k^* \setminus S_{k-1}^*$ is a reduced word over $S_k^*$, then $w \ne 1$ in $\dot{G}_k$. Moreover, $w \ne u$ in $\dot{G}_k$ for any word $u \in S_{k-1}^*$.
Proof. Let us prove by induction on $k$. If $k = 1$, then the group $\dot{G}_1 = \langle \dot{G}_0, t_1 \mid t_1^{-1} \dot{a}_{n_1,0} t_1 = \dot{b}_{n_1,0} \rangle$ is an HNN-extension of $\dot{G}_0$ with respect to the isomorphic subgroups $\langle \dot{a}_{n_1,0} \rangle \le \dot{G}_0$ and $\langle \dot{b}_{n_1,0} \rangle \le \dot{G}_0$. Therefore, by Britton's Lemma (see Lemma 1), $w \ne 1$ in $\dot{G}_1$ provided that it is a reduced word over $S_1^*$.
Also, for any $u \in S_0^*$, the word $w u^{-1}$ is a reduced word with respect to the HNN-extension $\dot{G}_1 = \langle \dot{G}_0, t_1 \mid t_1^{-1} \dot{a}_{n_1,0} t_1 = \dot{b}_{n_1,0} \rangle$. Therefore, by Britton's Lemma (see Lemma 1), $w u^{-1} \ne 1$ in $\dot{G}_1$ or, in other words, $w \ne u$ in $\dot{G}_1$.
Now assume that $k > 1$ and $w \in S_k^* \setminus S_{k-1}^*$ is a reduced word over $S_k^*$. Also, suppose that the statement of the lemma is true for all $l < k$. Then, first of all, note that from the definition of the reduced words over $S_k^*$ it follows that if $v$ is a subword of $w$ such that $v \in S_{k-1}^*$, then $v$ is a reduced word over $S_{k-1}^*$. Consequently, by the inductive hypothesis, if $t_k^{-1} u t_k$ (or $t_k u t_k^{-1}$) is a subword of $w$ such that $u \in S_{k-1}^*$ and $u$ represents an element from the image of $\dot{A}_{n_k}$ (or $\dot{B}_{n_k}$) in $\dot{G}_k$, then $u \in S_0^*$. However, this contradicts the assumption that $w$ is a reduced word over $S_k^*$. Therefore, since $\dot{G}_k = \langle \dot{G}_{k-1}, t_k \mid t_k^{-1} \dot{a}_{n_k,0} t_k = \dot{b}_{n_k,0} \rangle$ is an HNN-extension of $\dot{G}_{k-1}$ with respect to the isomorphic subgroups $\langle \dot{a}_{n_k,0} \rangle = \dot{A}_{n_k} \le \dot{G}_{k-1}$ and $\langle \dot{b}_{n_k,0} \rangle = \dot{B}_{n_k} \le \dot{G}_{k-1}$, we get that if $w$ is a reduced word over $S_k^*$, then $w$ is a reduced word over this HNN-extension. Hence, by Britton's Lemma, we get that $w \ne 1$ in $\dot{G}_k$. Similarly, for any $u \in S_0^*$, again by Britton's Lemma, we get that $w u^{-1} \ne 1$ in $\dot{G}_k$ or, in other words, $w \ne u$ in $\dot{G}_k$.
Lemma 10. There exists an algorithm such that for each input (k, w), k ∈
N ∪ {0}, w ∈ Sk∗ , it decides whether or not w = 1 in Ġk .
Proof. Let (k, w) be a fixed input. Without loss of generality assume that w is
a freely reduced word in Sk∗ .
If k = 0, then one can apply the word problem algorithm for the group
Ġ0 = hS0∗ i. See Lemma 7.
Otherwise, if $k \ge 1$, for each $k_1 \le k$ such that $w$ contains a letter from $\{t_{k_1}, t_{k_1}^{-1}\}$, do the following: find all subwords of $w$ which are of one of the forms $t_{k_1}^{-1} u t_{k_1}$ or $t_{k_1} v t_{k_1}^{-1}$, where $u, v \in S_0^*$ and $u = \lambda \dot{a}_{n_{k_1},0}$, $v = \lambda \dot{b}_{n_{k_1},0}$ in $\dot{G}_0$ for some $\lambda \in \mathbb{Z}$. (By Lemma 7, subwords of these forms can be found algorithmically.) Then, if, say, a subword of the form $t_{k_1}^{-1} u t_{k_1}$ is found, replace it with $\lambda \dot{b}_{n_{k_1},0}$.
Thanks to the identity $t_{k_1}^{-1} (\lambda \dot{a}_{n_{k_1},0}) t_{k_1} = \lambda \dot{b}_{n_{k_1},0}$, the newly obtained word is
equal to w in Ġk . Then repeat this procedure on the newly obtained word until
there are no more subwords of the mentioned forms. Let w1 be the word obtained
as a result of this procedure. Then, by Lemma 9, either w1 ∈ S0∗ or for some
k0 ≥ 1, w1 ∈ Sk∗0 \ Sk∗0 −1 . Then, in the last case, by Lemma 9, w1 is a reduced
word over Sk∗0 . Also in the first case (i.e. when w1 ∈ S0∗ ), w1 = 1 in Ġk if and
only if w1 = 1 in Ġ0 , hence by Lemma 7, in this case, the identity w1 = 1 can
be checked algorithmically. In the second case, by Lemma 9, w1 6= 1 in Ġk .
Lemma 11. The word problem in Ġ is decidable with respect to the presentation
(4).
Proof. Suppose that $w$ is a finite word with letters from
$$S_k = \bigl\{\, \pm \dot{a}_{n,m},\ \pm \dot{b}_{n,m},\ t_1^{\pm 1}, \ldots, t_k^{\pm 1} \;\big|\; n \in \mathbb{N},\ m \in \{0, 1\} \,\bigr\},$$
where $k$ is some natural number. Also suppose that $w$ represents the trivial element in $\dot{G}$. Then, since $\dot{G}$ is a direct limit of the groups $\{\dot{G}_i\}_{i=1}^{\infty}$, there exists a minimal integer $N \ge 0$ such that $w$ represents the trivial element in $\dot{G}_N$.
We claim that $N \le k$. Indeed, if $N > k$, then since $N$ was chosen as the minimal index such that $w = 1$ in $\dot{G}_N$, we get $w \ne 1$ in $\dot{G}_k$. However, by Corollary 1, $\dot{G}_k$ embeds into $\dot{G}_N$ under the map induced by $\dot{a}_{n,m} \mapsto \dot{a}_{n,m}$ and $t_1 \mapsto t_1, \ldots, t_k \mapsto t_k$, for $n \in \mathbb{N}$, $m \in \{0, 1\}$, which implies that if $w \ne 1$ in $\dot{G}_k$, then $w \ne 1$ in $\dot{G}_N$. A contradiction.
Thus, if w ∈ Sk∗ represents the trivial element in Ġ, then it represents the
trivial element in Ġk as well. In other words, in order to check whether or not
w represents the trivial element in Ġ it is enough to check its triviality in Ġk .
Therefore, since for each w ∈ S ∗ one can algorithmically find (the minimal)
k ∈ N such that w ∈ Sk∗ , the decidability of the word problem in Ġ follows from
Lemma 10.
Lemma 12. The group Ġ is torsion-free.
Proof. First of all, notice that by the properties of the groups Ȧk , Ḃk , k ∈ N,
and by the basic properties of direct products, the group Ġ0 is torsion free.
Now, suppose that u ∈ S ∗ is such that it represents a torsion element of
Ġ. Then, since Ġ is a direct limit of the groups {Ġi }∞
i=1 , there exists k ∈ N
such that u ∈ Sk∗ and u represents a torsion element in Ġk as well. Since Ġk
is obtained from Ġ0 by multiple HNN-extensions, then, by Lemma 3, Ġk is a
torsion free group. Therefore, u represents the trivial element in Ġk as well as
in Ġ.
Now suppose that $\Phi : \dot{G} \hookrightarrow \ddot{G}$ is an embedding of the group $\dot{G}$ into a finitely generated torsion-free group $\ddot{G}$ such that the maps
$$\varphi_1 : (n, m) \mapsto \Phi(\dot{a}_{n,m}), \quad \varphi_2 : (n, m) \mapsto \Phi(\dot{b}_{n,m}), \quad \text{and} \quad \varphi_3 : n \mapsto \Phi(t_n),$$
where $n \in \mathbb{N}$, $m \in \{0, 1\}$, are computable, and $\ddot{G}$ has decidable word problem. Then the next lemma shows that the group $\ddot{G}$ has the desirable properties we were looking for.
Lemma 13. The group G̈ cannot be embedded in a group with decidable conjugacy problem.
Proof. By contradiction, let us assume that G̈ embeds in a group Ḡ which has
decidable conjugacy problem. Then, for the purpose of convenience, without
loss of generality let us assume that G̈ is a subgroup of the group Ḡ.
Below we show that the decidability of the conjugacy problem in Ḡ contradicts the assumption that N and M are disjoint and recursively inseparable.
Let us define C ⊆ N as
$$C = \bigl\{\, n \in \mathbb{N} \;\big|\; \Phi(\dot{a}_{n,0}) \text{ is conjugate to } \Phi(\dot{b}_{n,0}) \text{ in } \bar{G} \,\bigr\}.$$
Then, the decidability of the conjugacy problem in Ḡ implies that the set C is
recursive, because, since the group Ḡ has decidable conjugacy problem, and since
by our assumptions, the above mentioned maps φ1 , φ2 and φ3 are computable,
for any input n ∈ N one can algorithmically verify whether or not Φ(ȧn,0 ) is
conjugate to Φ(ḃn,0 ) in Ḡ.
Therefore, since for groups with decidable conjugacy problem one can algorithmically find a conjugating element for each pair of conjugate elements of the
group, we also get that there exists a computable map
f : C → Ḡ
such that for all n ∈ C we have
f (n)−1 Φ(ȧn,0 )f (n) = Φ(ḃn,0 ).
For n ∈ C, let us denote
f (n) = gn ∈ Ḡ.
Now let us define
A = n ∈ C | gn−1 Φ(ȧn,1 )gn = Φ(ḃn,1 ) ⊆ N.
Since the word problem in Ḡ is decidable, the set C is recursive, and the maps
Φ and f are computable, we get that the set A is a recursive subset of N. Also
since the following identities $\dot{a}_{n_i,1} = 2^i \dot{a}_{n_i,0}$, $\dot{b}_{n_i,1} = 2^i \dot{b}_{n_i,0}$ and $t_i^{-1} \dot{a}_{n_i,0} t_i = \dot{b}_{n_i,0}$, for $i \in \mathbb{N}$, hold in $\dot{G}$, we get that in $\bar{G}$ the following identities hold:
$$\Phi(\dot{a}_{n_i,1}) = \Phi(\dot{a}_{n_i,0})^{2^i}, \qquad \Phi(\dot{b}_{n_i,1}) = \Phi(\dot{b}_{n_i,0})^{2^i}$$
and
$$\Phi(t_i)^{-1} \Phi(\dot{a}_{n_i,0}) \Phi(t_i) = \Phi(\dot{b}_{n_i,0}) \quad \text{for all } n_i \in N.$$
Therefore, we get that
N ⊆ A.
On the other hand, Corollary 2 implies that for any n ∈ M, the pairs of elements
Φ(ȧn,0 ), Φ(ḃn,0 ) and Φ(ȧn,1 ), Φ(ḃn,1 )
cannot be conjugate in Ḡ by the same conjugator. Therefore, we get that
A ∩ M = ∅.
Thus we get that N ⊆ A and A ∩ M = ∅, which implies that A ⊂ N is a
recursive separating set for N and M, which contradicts the assumption that
N and M are recursively inseparable.
Finally, the embedding $\Phi : \dot{G} \hookrightarrow \ddot{G}$ with the prescribed properties exists,
thanks to Theorem 4. Therefore, the group G̈ with the above mentioned properties exists. Also by a version of Higman’s embedding theorem described by
Aanderaa and Cohen in [1], the group G̈ can be embedded into a finitely presented group G with decidable word problem. By a recent result of Chiodo
and Vyas, [4], the group G defined this way will also inherit the property of
torsion-freeness from the group G̈.
Clearly, since G̈ cannot be embedded into a group with decidable conjugacy
problem, this property will be inherited by G. Thus Theorem 2 is proved.
References
[1] S. Aanderaa, D. E. Cohen, Modular machines I, II, in [2], Stud. Logic
Found. Math. 95 (1980), 1-18, 19-28.
[2] G. Baumslag, A. Myasnikov, V. Shpilrain et al., Open problems in Combinatorial and Geometric Group Theory, http://www.grouptheory.info/.
[3] W.W. Boone, F.B. Cannonito, and R.C. Lyndon, Word Problems: Decision
Problems and the Burnside Problem in Group Theory, Studies in Logic
and the Foundations of Mathematics, vol. 71, North-Holland, Amsterdam,
1973.
[4] M. Chiodo, R. Vyas, Torsion length and finitely presented groups,
arXiv:1604.03788, 2016.
[5] Clapham C.R.J., An embedding theorem for finitely generated groups.
Proc. London Math. Soc. (1967) 17:419-430.
[6] D.J. Collins, Representation of Turing reducibility by word and conjugacy
problems in finitely presented groups, Acta Math. 128 , (1972) no. 1-2,7390.
[7] D.J. Collins, C.F. Miller III , The conjugacy problem and subgroups of
finite index, Proc. London Math. Soc. (3) 34 (1977), no. 3, 535-556.
[8] A. Darbinyan, Word and conjugacy problems in lacunary hyperbolic
groups, arXiv:1708.04591.
[9] A. Darbinyan, Group embeddings with algorithmic properties, Communications in Algebra, 43:11 (2015), 4923-4935.
[10] A.V. Gorjaga, A.S. Kirkinski , The decidability of the conjugacy problem
cannot be transferred to finite extensions of groups. (Russian) Algebra i
Logika 14 (1975), no. 4, 393-406.
[11] G. Higman. Subgroups of finitely presented groups. Proc. Roy. Soc. Ser. A,
262:455-475, 1961.
[12] Kourovka Notebook. Unsolved Problems in Group Theory. 5th edition,
Novosibirsk, 1976.
[13] R.C. Lyndon, P.E. Schupp, Combinatorial group theory, Springer, Berlin,
1977.
[14] A. Mal’cev, Constructive algebras. I, Uspehi Mat. Nauk, vol. 16 (1961), no.
3 (99), pp. 3-60.
[15] A. Miasnikov, P. Schupp, Computational complexity and the conjugacy
problem. Computability, 2016.
[16] C. F. Miller III. On group-theoretic decision problems and their classification, volume 68 of Annals of Mathematics Studies. Princeton University
Press, 1971.
[17] C.F. Miller III, Decision Problems for Groups Survey and Reflections.
Algorithms and Classification in Combinatorial Group Theory (1992) 23:159.
[18] A.Yu. Olshanskii, M. Sapir, The conjugacy problem and Higman embeddings. Mem. Amer. Math. Soc. 170 (2004), no. 804
[19] A. Yu. Olshanskii, M.V. Sapir, Subgroups of finitely presented groups with
solvable conjugacy problem. Internat. J. Algebra Comput., 15(5-6):10751084, 2005.
[20] D. Osin, Small cancellations over relatively hyperbolic groups and embedding theorems, Ann. Math. 172 (2010), no. 1, 1-39.
[21] M. Rabin, Computable algebra, general theory and theory of computable
fields., Trans. Amer. Math. Soc., vol. 95 (1960), pp. 341-360.
[22] J. R. Shoenfield, Mathematical logic. Addison Wesley, 1967.
A. Darbinyan, Department of Mathematics, Vanderbilt University,
1326 Stevenson Center, Nashville, TN 37240
E-mail address: [email protected]
| 4 |
arXiv:1510.04156v1 [math.AC] 14 Oct 2015
A CHANGE OF RINGS RESULT FOR MATLIS REFLEXIVITY
DOUGLAS J. DAILEY AND THOMAS MARLEY
Abstract. Let R be a commutative Noetherian ring and E the minimal injective cogenerator of the category of R-modules. An R-module M is (Matlis) reflexive if the natural
evaluation map M −→ HomR (HomR (M, E), E) is an isomorphism. We prove that if S is a
multiplicatively closed subset of R and M is a reflexive R-module, then M is a reflexive
RS -module. The converse holds when S is the complement of the union of finitely many
nonminimal primes of R, but fails in general.
1. Introduction
Let $R$ be a commutative Noetherian ring and $E$ the minimal injective cogenerator of the category of $R$-modules; i.e., $E = \bigoplus_{m \in \Lambda} E_R(R/m)$, where $\Lambda$ denotes the set of maximal
ideals of R and ER (−) denotes the injective hull. An R-module M is said to be (Matlis)
reflexive if the natural evaluation map M −→ HomR (HomR (M, E), E) is an isomorphism.
In [1], the authors assert the following “change of rings” principle for Matlis reflexivity ([1,
Lemma 2]): Let S be a multiplicatively closed subset of R and suppose M is an RS -module.
Then M is reflexive as an R-module if and only if M is reflexive as an RS -module. However,
the proof given in [1] is incorrect (see Examples 3.1-3.3) and in fact the “if” part is false in
general (cf. Proposition 3.4). In this note, we prove the following:
Theorem 1.1. Let R be a Noetherian ring, S a multiplicatively closed subset of R, and M
an RS -module.
(a) If M is reflexive as an R-module then M is reflexive as an RS -module.
(b) If S = R \ (p1 ∪ . . . ∪ pr ) where each pi is a maximal ideal or a nonminimal prime ideal,
then the converse to (a) holds.
2. Main results
Throughout this section R will denote a Noetherian ring and S a multiplicatively closed
set of R. We let ER (or just E if the ring is clear) denote the minimal injective cogenerator
of the category of R-modules as defined in the introduction. A semilocal ring is said to
be complete if it is complete with respect to the J-adic topology, where J is the Jacobson
radical.
We will make use of the main result of [1]:
Date: October 14, 2015.
2010 Mathematics Subject Classification. Primary 13C05; Secondary 13C13.
Key words and phrases. Matlis reflexive, minimal injective cogenerator.
The first author was partially supported by U.S. Department of Education grant P00A120068 (GAANN).
Theorem 2.1. ([1, Theorem 12]) Let R be a Noetherian ring, M an R-module, and I =
AnnR M . Then M is reflexive if and only if R/I is a complete semilocal ring and there
exists a finitely generated submodule N of M such that M/N is Artinian.
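For instance, the criterion shows at once that over a complete semilocal Noetherian ring $R$, every finitely generated module is reflexive (take $N = M$) and every Artinian module is reflexive (take $N = 0$), recovering the classical statements of Matlis duality.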
We remark that the validity of this theorem does not depend on [1, Lemma 2], as the
proof of [1, Theorem 12] uses this lemma only in a special case where it is easily seen to
hold. (See the proof of [1, Theorem 9], which is the only instance [1, Lemma 2] is used
critically.)
Lemma 2.2. ([1, Lemma 1]) Let M be an R-module and I an ideal of R such that IM = 0.
Then M is reflexive as an R-module if and only if M is reflexive as an R/I-module.
Proof. Since ER/I = HomR (R/I, ER ), the result follows readily by Hom-tensor adjunction.
Lemma 2.3. Let R = R1 × · · · × Rk be a product of Noetherian local rings. Let M =
M1 × · · · × Mk be an R-module. Then M is reflexive as an R-module if and only if Mi is
reflexive as an Ri -module for all i.
Proof. Let $\rho_i : R \to R_i$ be the canonical projections for $i = 1, \ldots, k$. Let $n_i$ be the maximal ideal of $R_i$ and $m_i = \rho_i^{-1}(n_i)$ the corresponding maximal ideal of $R$. Then $m_i R_i = n_i$ and $m_i R_j = R_j$ for all $j \ne i$. Note that $R_{m_i} \cong R_i$ and $E_i := E_R(R/m_i) \cong E_{R_i}(R_i/n_i)$ for all $i$. Then $E_R = E_1 \oplus \cdots \oplus E_k$. It is easily seen that
$$\mathrm{Hom}_R(\mathrm{Hom}_R(M, E_R), E_R) \cong \bigoplus_{i=1}^{k} \mathrm{Hom}_{R_i}(\mathrm{Hom}_{R_i}(M_i, E_i), E_i),$$
and that this isomorphism commutes with the evaluation maps. The result now follows.
Theorem 2.4. Let S be a multiplicatively closed set of R and M an RS -module which is
reflexive as an R-module. Then M is reflexive as an RS -module.
Proof. By Lemma 2.2, we may assume AnnR M = AnnRS M = 0. Thus, R is semilocal and
complete by Theorem 2.1. Hence, R = R1 × · · · × Rk where each Ri is a complete local ring.
Then RS = (R1 )S1 × · · · × (Rk )Sk where Si is the image of S under the canonical projection
R−→Ri . Write M = M1 × · · · × Mk , where Mi = Ri M . As M is reflexive as an R-module,
Mi is reflexive as an Ri -module for all i. Thus, it suffices to show that Mi is reflexive as an
(Ri )Si -module for all i. Hence, we may reduce the proof to the case (R, m) is a complete
local ring with AnnR M = 0 by passing to R/ AnnR M , if necessary. As M is reflexive as
an R-module, we have by Theorem 2.1 that there exists an exact sequence
$$0 \to N \to M \to X \to 0$$
where $N$ is a finitely generated $R$-module and $X$ is an Artinian $R$-module. If $S \cap m = \emptyset$, then $R_S = R$ and there is nothing to prove. Otherwise, as $\mathrm{Supp}_R X \subseteq \{m\}$, we have $X_S = 0$. Hence, $M \cong N_S$, a finitely generated $R_S$-module. To see that $M$ is $R_S$-reflexive, it suffices to show that $R_S$ is Artinian (hence semilocal and complete). Since $\mathrm{Ann}_R N_S = \mathrm{Ann}_R M = 0$, we have that $\mathrm{Ann}_R N = 0$. Thus, $\dim R = \dim N$. Since $M$ is an $R_S$-module and $S \cap m \ne \emptyset$, we have $H^i_m(M) \cong H^i_{m R_S}(M) = 0$ for all $i$. Further, as $X$ is Artinian, $H^i_m(X) = 0$ for $i \ge 1$. Thus, from the long exact sequence on local cohomology, we conclude that $H^i_m(N) = 0$ for $i \ge 2$. Thus, $\dim R = \dim N \le 1$, and hence, $\dim R_S = 0$. Consequently, $R_S$ is Artinian, and $M$ is a reflexive $R_S$-module.
To prove part (b) of Theorem 1.1, we will need the following result on Henselian local
rings found in [2] (in which the authors credit it to F. Schmidt). As we need a slightly
different version of this result than what is stated in [2] and the proof is short, we include
it for the convenience of the reader:
Proposition 2.5. ([2, Satz 2.3.11]) Let (R, m) be a local Henselian domain which is not
a field and F the field of fractions of R. Let V be a discrete valuation ring with field of
fractions F . Then R ⊆ V .
Proof. Let k be the residue field of R and a ∈ m. As R is Henselian, for every positive
integer $n$ not divisible by the characteristic of $k$, the polynomial $x^n - (1 + a)$ has a root $b$ in
R. Let v be the valuation on F associated to V . Then nv(b) = v(1 + a). If v(a) < 0 then
v(1 + a) < 0 which implies v(b) ≤ −1. Hence, v(1 + a) ≤ −n. As n can be arbitrarily large,
this leads to a contradiction. Hence, v(a) ≥ 0 and a ∈ V . Thus, m ⊆ V . Now let c ∈ R be
arbitrary. Choose $d \in m$, $d \ne 0$. If $v(c) < 0$ then $v(c^{\ell} d) < 0$ for $\ell$ sufficiently large. But this contradicts that $c^{\ell} d \in m \subseteq V$ for every $\ell$. Hence $v(c) \ge 0$ and $R \subseteq V$.
For a Noetherian ring R, let Min R and Max R denote the set of minimal and maximal
primes of R, respectively. Let T(R) = (Spec R \ Min R) ∪ Max R.
Lemma 2.6. Let $R$ be a Noetherian ring and $p \in \mathrm{T}(R)$. If $R_p$ is Henselian then the natural map $\varphi : R \to R_p$ is surjective; i.e., $R/\ker \varphi \cong R_p$.
Proof. By replacing R with R/ ker ϕ, we may assume ϕ is injective. Then p contains every
minimal prime of R. Let u ∈ R, u 6∈ p. It suffices to prove that the image of u in R/q is
a unit for every minimal prime q of R. Hence, we may assume that R is a domain. (Note
that (R/q)p = Rp /qRp is still Henselian.) If Rp is a field, then, as p ∈ T(R), we must have
R is a field (as p must be both minimal and maximal in a domain). So certainly u 6∈ p = (0)
is a unit in R. Thus, we may assume Rp is not a field. Suppose u is not a unit in R. Then
u ∈ n for some maximal ideal n of R. Now, there exists a discrete valuation ring V with
same field of fractions as R such that mV ∩ R = n ([5, Theorem 6.3.3]). As Rp is Henselian,
$R_p \subseteq V$ by Proposition 2.5. But as $u \notin p$, $u$ is a unit in $R_p$, hence in $V$, contradicting
u ∈ n ⊆ mV . Thus, u is a unit in R and R = Rp .
Proposition 2.7. Let R be a Noetherian ring and S = R \ (p1 ∪ · · · ∪ pr ) where p1 , . . . , pr ∈
T(R). Suppose RS is complete with respect to its Jacobson radical. Then the natural map
ϕ : R−→RS is surjective.
Proof. First, we may assume that $p_j \not\subseteq \bigcup_{i \ne j} p_i$ for all $j$. Also, by passing to the ring $R/\ker \varphi$, we may assume $\varphi$ is injective. (We note that if $p_{i_1}, \ldots, p_{i_t}$ are the ideals in the set $\{p_1, \ldots, p_r\}$ containing $\ker \varphi$, it is easily seen that $(R/\ker \varphi)_S = (R/\ker \varphi)_T$ where $T = R \setminus (p_{i_1} \cup \cdots \cup p_{i_t})$. Hence, we may assume each $p_i$ contains $\ker \varphi$.) As $R_S$ is semilocal and complete, the map $\psi : R_S \to R_{p_1} \times \cdots \times R_{p_r}$ given by $\psi(u) = (\tfrac{u}{1}, \ldots, \tfrac{u}{1})$ is an isomorphism.
For each i, let ρi : R−→Rpi be the natural map. Since R−→RS is an injection, ∩i ker ρi =
(0). It suffices to prove that u is a unit in R for every u ∈ S. As Rpi is complete, hence
Henselian, we have that ρi is surjective for each i by Lemma 2.6. Thus, u is a unit in
R/ ker ρi for every i; i.e., (u) + ker ρi = R for i = 1, . . . , r. Then (u) = (u) ∩ (∩i ker ρi ) = R.
Hence, u is a unit in R.
We now prove part (b) of Theorem 1.1:
Theorem 2.8. Let R be a Noetherian ring and M a reflexive RS -module, where S is the
complement in R of the union of finitely many elements of T(R). Then M is reflexive as
an R-module.
Proof. We may assume M ≠ 0. Let S = R \ (p1 ∪ · · · ∪ pr ), where p1 , . . . , pr ∈ T(R). Let I =
AnnR M , whence IS = AnnRS M . As in the proof of Proposition 2.7, we may assume each
pi contains I. Then by Lemma 2.2, we may reduce to the case AnnR M = AnnRS M = 0.
Note that this implies the natural map R−→RS is injective. As M is RS -reflexive, RS is
complete with respect to its Jacobson radical by Theorem 2.1. By Proposition 2.7, we have
that $R \cong R_S$ and hence $M$ is $R$-reflexive.
3. Examples
The following examples show that HomR (RS , ER ) need not be the minimal injective
cogenerator for the category of RS -modules, contrary to what is stated in the proof of [1,
Lemma 2]:
Example 3.1. Let (R, m) be a local ring of dimension at least two and p any prime which
is not maximal or minimal. By [3, Lemma 4.1], every element of Spec Rp is an associated
prime of the $R_p$-module $\mathrm{Hom}_R(R_p, E_R)$. In particular, $\mathrm{Hom}_R(R_p, E_R) \not\cong E_{R_p}$.
Example 3.2. ([3, p. 127]) Let R be a local domain such that the completion of R has
a nonminimal prime contracting to (0) in R. Let Q be the field of fractions of R. Then
HomR (Q, ER ) is not Artinian.
Example 3.3. Let $R$ be a Noetherian domain which is not local. Let $m \ne n$ be maximal
ideals of R. By a slight modification of the proof of [3, Lemma 4.1], one obtains that (0) is
an associated prime of HomR (Rm , ER (R/n)), which is a direct summand of HomR (Rm , ER ).
Hence, $\mathrm{Hom}_R(R_m, E_R) \not\cong E_{R_m}$.
We now show that the converse to part (a) of Theorem 1.1 does not hold in general. Let
R be a domain and Q its field of fractions. Of course, Q is reflexive as a Q = R(0) -module.
But as the following theorem shows, Q is rarely a reflexive R-module.
Proposition 3.4. Let R be a Noetherian domain and Q the field of fractions of R. Then
Q is a reflexive R-module if and only if R is a complete local domain of dimension at most
one.
Proof. We first suppose $R$ is a one-dimensional complete local domain with maximal ideal $m$. Let $E = E_R(R/m)$. By [4, Theorem 2.5], $\mathrm{Hom}_R(Q, E) \cong Q$. Since the evaluation map of the Matlis double dual is always injective, we obtain that $Q \to \mathrm{Hom}_R(\mathrm{Hom}_R(Q, E), E)$ is an isomorphism.
Conversely, suppose $Q$ is a reflexive $R$-module. By Theorem 2.1, $R$ is a complete semilocal domain, hence local. It suffices to prove that $\dim R \le 1$. Again by Theorem 2.1, there exists a finitely generated $R$-submodule $N$ of $Q$ such that $Q/N$ is Artinian. Since $\mathrm{Ann}_R N = 0$, $\dim R = \dim N$. Thus, it suffices to prove that $H^i_m(N) = 0$ for $i \ge 2$. But this follows readily from the facts that $H^i_m(Q) = 0$ for all $i$ and $H^i_m(Q/N) = 0$ for $i \ge 1$ (as $Q/N$ is Artinian).
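For example, $\mathbb{Q}$ is not reflexive as a $\mathbb{Z}$-module, since $\mathbb{Z}$ is not complete local, whereas the field of fractions of the ring $\mathbb{Z}_p$ of $p$-adic integers is a reflexive $\mathbb{Z}_p$-module, as $\mathbb{Z}_p$ is a complete local domain of dimension one.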
Acknowledgments: The authors would like to thank Peder Thompson for many helpful
discussions on this topic. They are also very grateful to Bill Heinzer for pointing out the
existence of Proposition 2.5.
References
1. R. Belshoff, E. Enochs, and J. Garcı́a-Rozas, Generalized Matlis duality. Proc. Amer. Math. Soc.
128 (1999), no. 5, 1307-1312.
2. R. Berger, R. Kiehl, E. Kunz, and H.-J. Nastold, Differentialrechnung in der analytischen Geometrie.
Lecture Notes in Mathematics 38, Springer-Verlag, Berlin-New York, 1967.
3. L. Melkersson and P. Schenzel, The co-localization of an Artinian module. Proc. Edinburgh Math.
Soc. 38 (1995), 121–131.
4. P. Schenzel, A note on the Matlis dual of a certain injective hull. J. Pure Appl. Algebra 219 (2015),
no. 3, 666-671.
5. I. Swanson and C. Huneke, Integral closures of ideals, rings and modules. London Mathematical
Society Lecture Note Series 336, Cambridge University Press, Cambridge, 2006.
Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588-0130
E-mail address:
[email protected]
Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588-0130
E-mail address:
[email protected]
| 0 |
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
CUDAMPF++: A Proactive Resource Exhaustion
Scheme for Accelerating Homologous Sequence
Search on CUDA-enabled GPU
arXiv:1707.09683v1 [cs.CE] 30 Jul 2017
Hanyu Jiang, Student Member, IEEE, Narayan Ganesan, Senior Member, IEEE,
and Yu-Dong Yao, Fellow, IEEE
Abstract—Genomic sequence alignment is an important research topic in bioinformatics and continues to attract significant efforts. As
genomic data grow exponentially, however, most of alignment methods face challenges due to their huge computational costs.
HMMER, a suite of bioinformatics tools, is widely used for the analysis of homologous protein and nucleotide sequences with high
sensitivity, based on profile hidden Markov models (HMMs). Its latest version, HMMER3, introduces a heuristic pipeline to accelerate the
alignment process, which is carried out on central processing units (CPUs) with the support of streaming SIMD extensions (SSE)
instructions. Few acceleration results have since been reported based on HMMER3. In this paper, we propose a five-tiered parallel
framework, CUDAMPF++, to accelerate the most computationally intensive stages of HMMER3’s pipeline, multiple/single segment
Viterbi (MSV/SSV), on a single graphics processing unit (GPU). As an architecture-aware design, the proposed framework aims to fully
utilize hardware resources via exploiting finer-grained parallelism (multi-sequence alignment) compared with its predecessor
(CUDAMPF). In addition, we propose a novel method that proactively sacrifices L1 Cache Hit Ratio (CHR) to get improved
performance and scalability in return. A comprehensive evaluation shows that the proposed framework outperforms all existing work and
exhibits good consistency in performance regardless of the variation of query models or protein sequence datasets. For MSV (SSV)
kernels, the peak performance of the CUDAMPF++ is 283.9 (471.7) GCUPS on a single K40 GPU, and impressive speedups ranging
from 1.x (1.7x) to 168.3x (160.7x) are achieved over the CPU-based implementation (16 cores, 32 threads).
Index Terms—GPU, CUDA, SIMD, L1 cache, hidden Markov model, HMMER, MSV, SSV, Viterbi algorithm.
1 INTRODUCTION
Typical algorithms and applications in bioinformatics,
computational biology and system biology share a common trait that they are computationally challenging and
demand more computing power due to the rapid growth
of genomic data and the need for high fidelity simulations.
As one of the most important branches, the genomic sequence analysis with various alignment methods scales the
abstraction level from atoms to RNA/DNA molecules and
even whole genomes, which aims to interpret the similarity
and detect homologous domains amongst sequences [1].
For example, the protein motif detection is key to identify
conserved protein domains within a known family of proteins. This paper addresses HMMER [2], [3], a widely used
toolset designed for the analysis of homologous protein and
nucleotide sequences with high sensitivity, which is carried
out on central processing units (CPUs) originally.
HMMER is built on the basis of probabilistic inference
methods with profile hidden Markov models (HMMs) [3].
Particularly, the profile HMM used in HMMER is Plan7 architecture that consists of five main states (Match(M),
Insert(I), Delete(D), Begin(B) and End(E)) as well as five
special states (N, C, J, S and T). The M, I and D states which
are in the same position form a node, and the number of
•
H. Jiang, N. Ganesan, Y. Yao are with the Department of Electrical and
Computer Engineering, Stevens Institute of Technology, Hoboken, NJ
07030.
E-mail: {hjiang5, nganesan, Yu-Dong.Yao}@stevens.edu.
Manuscript received xxxx xx, xxxx; revised xxxx xx, xxxx.
nodes included in a profile HMM indicates its length. The
digital number “7” in Plan-7 refers to the total of seven
transitions per node, which exist in the architecture and each
has a transition probability. In addition, some states also have emission probabilities. This architecture is slightly different
from the original one proposed by Krogh et al. [4] which
contains extra I -D and D-I transitions.
The profile HMMs employ position-specific Insert or
Delete probabilities rather than gap penalties, which enables
HMMER to outperform BLAST [5] on sensitivity [3]. However, the previous version of HMMER, HMMER2, suffers from high computational expense and sees less use than
BLAST. Due to well-designed heuristics, BLAST is in the
order of 100x to 1000x faster than HMMER2 [3]. Therefore, numerous acceleration efforts have been made for
HMMER2, such as [6], [7], [8], [9], [10]. Most of them employ application accelerators and co-processors, like fieldprogrammable gate array (FPGA), graphics processing unit
(GPU) and other parallel infrastructures, which provide
good performance improvements. To popularize HMMER
for standard commodity processors, Eddy et al. propose new
versions of HMMER, HMMER3 (v3.0) and its subsequent
version (v3.1), which achieve performance comparable to that of BLAST [3]. As the main contribution, HMMER3 implements a heuristic pipeline in hmmsearch which aligns a query
model with the whole sequence dataset to find out significantly similar sequence matches. The heuristic acceleration
pipeline is highly optimized on CPU-based systems with the
support of streaming SIMD extensions (SSE) instructions,
and hence only few acceleration attempts, including [11],
[12], [13], [14], [15], [16] and [17], report further speedups.
Our previous work [18], CUDAMPF, proposes a multitiered parallel framework to accelerate HMMER3 pipeline
on a single GPU, which was shown to exceed the current state-of-the-art. However, the performance evaluation
shows that the throughput of the computational kernel depends on the model length, especially for small models,
which implies underutilization of the GPU. This inspires
us to exploit finer-grained parallelism, compared with the
framework presented in [18]. In this paper, we describe
another tier of parallelization that aims to fully take advantage of the hardware resources provided by single GPU.
A novel optimization strategy that proactively utilizes the on-chip cache system is proposed to further boost kernel
throughput and improves scalability of the framework. A
comprehensive evaluation indicates that our method exhibits good consistency of performance regardless of query
models and protein sequence datasets. The generalization
of the proposed framework as well as performance-oriented
suggestions are also discussed.
The rest of the paper is organized as follows. Section 2
presents background of HMMER3 pipeline, GPU architecture and highlighted CUDA features, followed by a review
of CUDAMPF implementation. In Section 3, the in-depth
description of our proposed framework is presented. Then,
we give comprehensive evaluations and analyses in Section
4. Related works and discussions are presented in Section 5
and 6, respectively. Finally, we present the conclusion of this
paper.
2 BACKGROUND
In this section, we go through the new heuristic pipeline of HMMER3 and highlight its computationally intensive stages. An overview of the GPU architecture and the CUDA programming model is also presented. For better understanding of subsequent ideas, we briefly review our previous work, CUDAMPF, at the end of this section.
2.1 Heuristic Pipeline in HMMER3
The main contribution that accelerates HMMER3 is a new algorithm, multiple segment Viterbi (MSV) [3], which is derived from the standard Viterbi algorithm. The MSV
model is a kind of ungapped local alignment model with
multiple hits, as shown in Fig. 1, and it is achieved by
pruning Delete and Insert states as well as their transitions
in the original profile HMMs. The M -M transitions are also
treated as constants of 1. In addition to the MSV algorithm,
another simpler algorithm, single segment Viterbi (SSV), is
also introduced to boost the overall performance further.
Given that the J state is the bridge between two matched
alignments, the SSV model assumes that there is rarely a
matched alignment with a score that is higher than the cost
of going through the J state, and hence it speculatively
removes the J state to gain a significant speedup [19].
However, in order to avoid false negatives, the SSV model
is followed by regular MSV processing to re-calculate suspected sequences. Fig. 1 illustrates profiles of P7Viterbi,
MSV and SSV models with an example of 4 nodes. The solid
arrows indicate transitions between different types of states
whereas dashed arrows represent the self-increase of a state.
[Figure 1 here: state diagrams of the P7Viterbi (Plan-7) model, the MSV model (multiple ungapped local alignment segments), and the SSV model (single ungapped local alignment segment).]
Fig. 1. Profiles of P7Viterbi, MSV and SSV models.
In the pipeline, SSV and MSV models work as heuristic
filters (stages) that filter out nonhomologous sequences. All
sequences are scored during SSV and MSV stages, and
only about 2.2% of sequences are passed to the next stage,
given a threshold. The second stage consists of the P7Viterbi
model which only allows roughly 0.1% of sequences pass,
and resulting sequences are then scored with the full
Forward algorithm [3]. These four stages mentioned above
form the main part of HMMER3’s pipeline. However, SSV
and MSV stages consume more than 70% of the overall
execution time [17], [18], and hence they are prime targets
of optimization.
Fig. 2 illustrates the dynamic programming (DP) matrix
of the P7Viterbi stage, corresponding to Fig. 1. A lattice of
the middle region contains three scores for Match, Insert
and Delete states, respectively, whereas flanking lattices only
have one. To complete the alignment of a sequence, we
need to calculate every lattice of the DP matrix starting
from the left-top corner to the right-bottom corner (green)
in a row-by-row order, which is a computationally intensive
process. In the middle region of the DP matrix, each lattice
(red) depends on four cells (blue) directly, denoted by solid
arrows, which can be formulated as:
[Figure 2 here: the dynamic programming matrix, with the query model states (N, B, 1 ... j ... M, E, J, C) along one axis and the sequence residues (0 ... i ... L) along the other.]
Fig. 2. Dynamic programming matrix of the P7Viterbi stage.
$$V_M[i,j] = \epsilon_M + \max \begin{cases} V_M[i-1, j-1] + T(M_{j-1}, M_j), \\ V_I[i-1, j-1] + T(I_{j-1}, M_j), \\ V_D[i-1, j-1] + T(D_{j-1}, M_j), \\ B[i-1] + T(B, M_j) \end{cases}$$
$$V_I[i,j] = \epsilon_I + \max \begin{cases} V_M[i-1, j] + T(M_j, I_j), \\ V_I[i-1, j] + T(I_j, I_j) \end{cases}$$
$$V_D[i,j] = \max \begin{cases} V_M[i, j-1] + T(M_{j-1}, D_j), \\ V_D[i, j-1] + T(D_{j-1}, D_j) \end{cases} \qquad (1)$$
where ϵ denotes emission scores. V and T represent scores
of M /I /D states and transitions, respectively. As for MSV
and SSV stages, the mathematical formula can be simplified by removing V_I and V_D, which results in more moderate dependencies and a smaller amount of computation than the
P7Viterbi stage. However, in order to exceed the performance of the highly optimized CPU-based implementation
with the SIMD vector parallelization, it is imperative to go
beyond general methods and exploit more parallelism on
other multi/many-core processor architectures.
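For reference, a sketch of the simplified recurrence for the MSV stage, in the same notation as Eq. (1) and omitting the constant scaling used by HMMER3, is
$$V_M[i,j] = \epsilon_M + \max \bigl\{\, V_M[i-1, j-1],\; B[i-1] + T(B, M_j) \,\bigr\},$$
since the Insert and Delete states are pruned and the M-M transitions are treated as constants.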
2.2 GPU Architecture and CUDA Programming Model
and the maximum amount of available registers for each
thread is increased to 255 instead of prior 63 per thread.
Moreover, a set of Shuffle instructions that enables a warp of
threads to share data without going through shared memory
are also introduced in Kepler architecture. This new feature
is heavily used in our proposed framework.
The CUDA programming model is designed for NVIDIA
GPUs, and it provides users with a development environment to easily leverage horsepower of GPUs. In CUDA, a
kernel is usually defined as a function that is executed by all
CUDA threads concurrently. Both grid and block are vitural
units that form a thread hierarchy with some restrictions. Although CUDA allows users to launch thousands of threads,
only a warp of threads (32 threads, currently) guarantee that
they advance exectuions in lockstep, which is scheduled
by a warp scheduler. Hence, the full efficiency is achieved
only if all threads within a warp have the same execution
path. Traditionally, in order to make sure that threads keep
the same pace, a barrier synchronization has to be called
explicitly, which imposes additional overhead.
GPU Architecture and CUDA Programming Model
As parallel computing engines, CUDA-enabled GPUs are
built around a scalable array of multi-threaded streaming
multiprocessors (SMs) for large-scale data and task parallelism, which are capable of executing thousands of threads
in the single-instruction multiple-thread (SIMT) pattern [20].
Each generation of GPU introduces more hardware resources and new features, which aims to deal with the everincreasing demand for computing power in both industry
and academia. In this paper, we implement our design on
Tesla K40 GPU of Kepler GK110 architecture which equips
with 15 powerful streaming multiprocessors, also known
as SMXs. Each SMX consists of 192 single-precision CUDA
cores, 64 double-precision units, 32 special function units
and load/store units [21]. The architecture offers another
48KB on-chip read-only texture cache with an independent
2.3 CUDAMPF
In [18], we proposed a four-tiered parallel framework, CUDAMPF, implemented on single GPU to accelerate SSV, MSV
and P7Viterbi stages of hmmsearch pipeline. The framework
describes a hierarchical method that parallelizes algorithms
and distributes the computational workload considering
available hardware resources. CUDAMPF is completely
warp-based that regards each resident warp as a compute
unit to handle the exclusive workload, and hence the explict
thread-synchronization is eliminated. Instead, the built-in
warp-synchronism is fully utilized. A warp of threads make
the alignment of one protein sequence one time and then
pick up next scheduled sequence. Given that 8-bit or 16bit values are sufficient to the precision of algorithms,
we couple SIMT execution mechanism with SIMD video
instructions to achieve 64 and 128-fold parallelism within
each warp. In addition, the runtime compilation (NVRTC),
first appeared in CUDA v7.0, was also incorporated into the
framework, which enabled swichable kernels and innermost
loop unrolling to boost the performance further. CUDAMPF
yields upto 440, 277 and 14.3 GCUPS (giga cells updates per
second) with strong scalability for SSV, MSV and P7Viterbi
kernels, respectively, and it has been proved to exceed all
existing work.
3
P ROPOSED F RAMEWORK : CUDAMPF++
This section presents detailed implementations of the proposed framework, CUDAMPF++, that is designed to gain
more parallelism based on CUDAMPF. We first introduce
a new tier of parallelism followed by a data reformatting
scheme for protein sequence data, and then in-depth explanations of kernel design are presented. Finally, we discuss
the optimizations of the proposed framework.
3.1
Five-tiered Parallelism
In CUDAMPF, the four-tiered parallel framework is proposed to implement MSV, SSV and P7Viterbi kernels. Although the performance improvement is observed on all
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
accelerated kernels, the speedup on P7Viterbi kernel is very
limited whereas MSV/SSV kernel yields significant improvement. Given the profiling information [18], we are able
to gain additional insights into the behaviors: (a) L1 Cache
Hit Ratio (CHR) of the P7Viterbi kernel degrades rapidly
as model size increases, and (b) its register usage always
exceed the maximum pre-allocation for each thread, which
indicates the exhaustion of on-chip memory resources and
serious register spill. As for MSV/SSV kernels, however, the
on-chip memory resources are sufficient. A large amount of
low-latency registers, especially when aligning with small
models, are underutilized. This can also be proved by performance curves in [18] in which only upward slopes are
observed without any flat or downward trends as model
size increases from 100 to 2405. The underutilization leaves
an opportunity to exploit further parallelism that can fully
take advantage of hardware resources on GPUs.
In addition to original four-tiered structure, another
tier of parallelism, inserted between 3rd and 4th tiers, is
proposed to enable each warp handle multiple alignments
with different protein sequences in parallel while the design of the CUDAMPF only allow single-sequence aligment
per warp. This scheme aims to exhaust on-chip memory
resources, regardless of the model length, to maximize the
throughput of MSV/SSV kernels. Fig. 3 illustrates the fivetiered parallel framework.
The first tier is based on multiple SMXs that possess
plenty of computing and memory resources individually.
Given the inexistence of data and execution dependency
between each protein sequence, it is straightforward to
partition whole sequence database into several chunks and
distribute them to SMXs, as the most basic data parallelism. Tier 2 describes the parallelism between multiple
warps that reside on each SMX. Since our implementation
still applies the warp-synchronous execution that all warps
are assigned to process different sequences without interwarp interactions, explicit synchronizations are eliminated
completely. Unlike the CUDAMPF in which warps move
to their next scheduled task once the sequence at hand is
done, the current design allocates a sequence data block
to each warp in advance. A data block may contain thousands of sequences, less or more, depending on the total
size of protein sequence dataset and available warps. For
multi-sequence alignment, all sequences within each data
block need to be reformatted as the striped layout, which
enables coalesced access to the global memory. Details of
data reformatting will be discussed in Sec. 3.2. The number
of resident warps per SMX is still fixed to 32 due to the
complexity of MSV and SSV algorithms, which ensures that
every thread obtains enough registers to handle complex
execution dependencies and avoids excessive register spill.
Besides, each SMX contains 4 warp schedulers, each with
dual instruction dispath units [21], and hence it is able
to dispatch upto 8 independent instructions each cycle.
Those hardware resources have the critical influence on the
parallelism of tier 2 and the performance of warp-based
CUDA kernels.
Tier 3 is built on the basis of warps. A warp of threads
update different model states simultaneously and iterate
over remaining model states in batches. The number of iterations depends on the query model size. Once the alignment
4
[Figure 3 here: the five-tiered parallel framework on the Kepler architecture. Tier 1: multiple SMXs, each with its own shared memory, L1 cache and read-only cache, above the shared L2 cache and global memory holding the sequence data blocks. Tier 2: resident warps per SMX served by warp schedulers with dual instruction dispatch units. Tier 3: a warp of 32 threads iterating over the striped query model states. Tier 4: the warp partitioned into groups, each aligning a different sequence. Tier 5: 4-lane 8-bit packing (MSV/SSV) or 2-lane 16-bit packing (P7Viterbi) within each 32-bit register.]
Fig. 3. Five-tiered parallel framework based on the NVIDIA Kepler architecture.
of an amino-acid residue is done, such as the ’A’ and its
alignment scores (marked as a row of blue lattices) shown
in Fig. 3 (Tier 3), the warp moves to next residue ’R’ and
start over the alignment until the end of current sequence.
On the basis of tier 3, tier 4 illustrates the multi-sequence
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015
alignment that a warp of threads are evenly partitioned into
several groups, each has L threads, to make alignments
with S different sequences in parallel. For example, the
first residues of S sequences, like ’A’ of Seq. 1, ’V’ of Seq.
2 and ’V’ of Seq. S, are extracted together for the firstround alignment. Each group of threads update W scores
per iteration, and H iterations are required to finish one
alignment. The model states are formatted as a rectangle
with W ×H lattices. Considering two models, a large model
Ml and a small model Ms where the size of Ml is S times
larger than the size of Ms , we are able to get Ws = S1 Wl
given Hs = Hl , which provides S -lane parallelism and
roughly keeps register utilization of Ms as same as Ml . The
tier 5 remains unchanged as the fine-grained data parallelism: every thread can operate on four 8-bit values and
two 16-bit values simutaneously, using single SIMD video
instruction [22], for MSV/SSV and P7Viterbi algorithms,
respectively. With the support of tier 5, the parallelism of
tier 4 is further extended because each thread takes charge
of four different sequences at most. The value of S , as the
number of sequences processed in parallel by a warp, is
defined as below:
$$\bigl\{\, S \;\big|\; S = 2^i,\ i \in \mathbb{Z} \cap [\,1,\ \log_2 \tfrac{\hat{s}\hat{w}_r}{\hat{w}_v}\,] \,\bigr\}, \qquad (2)$$
where ŝ is the warp size, ŵr and ŵv represent the width of
registers and participant values, respectively. With Eq. 2, the
rest of the values can also be formulated as:
$$W = \frac{\hat{s}\hat{w}_r}{\hat{w}_v S}, \qquad L = \Bigl\lceil \frac{\hat{s}}{S} \Bigr\rceil, \qquad H = \max\Bigl\{ 2,\ \Bigl\lceil \frac{\hat{m}}{W} \Bigr\rceil \Bigr\}, \qquad (3)$$
where m̂ represents the size of query model. W , L and H
can be regarded as functions of S .
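As a small sketch of the tier-5 idea (our own illustration with a hypothetical helper name, not kernel code from CUDAMPF++), one 32-bit register can hold four 8-bit scores belonging to four different sequences, and a single SIMD video intrinsic updates all four lanes at once:

```cuda
// Four 8-bit MSV-style cell updates packed in one 32-bit word:
// take the better of the diagonal path and the Begin-state path per byte,
// then add the per-byte match emission scores with unsigned saturation.
__device__ __forceinline__ unsigned int
msv_cell4(unsigned int prev_diag, unsigned int from_begin, unsigned int emission)
{
    unsigned int best = __vmaxu4(prev_diag, from_begin);  // per-byte unsigned max
    return __vaddus4(best, emission);                     // per-byte saturated add
}
```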
3.2
Warp-based Sequence Data Blocks
Due to the introduction of the multi-sequence alignment,
loading sequence data in the sequential layout is highly
inefficient. Originally, in [18], residues of each sequence
are stored in contiguous memory space, and warps always
read 128 residues of one sequence by one coalesced global
memory transaction. As for current design, however, the
sequential data layout may lead to 128 transactions per
memory request while extracting residues from 128 different sequences. Hence, a striped data layout is the most
straightforward solution. Given that warps have exclusive
tasks, we propose a data reformatting method that arranges
sequences in a striped layout to (a) achieve fully coalesced
memory access and (b) balance workload amongst warps.
The proposed method partitions large sequence dataset into
blocks based on the number of resident warps.
As shown in Fig. 4, all protein sequences are divided
into N blocks, each consists of Mi × 128 residues, where
N is the total number of resident warps, and Mi represents
the height of each block with i = [1, 2, 3...N ]. The number
128, as the width of each block, is pre-fixed for two reasons:
(a) MSV/SSV kernels only need participant values with the
width of ŵv = 8 bits, and hence a warp can handle up
[Figure 4 here: warp-based sequence data blocks. Each warp owns a block of 128 columns (128 bytes per row, addressed 0..127); sequences are concatenated down the columns, with '@' marking the end of a sequence and '#' used as padding, and block i has Mi rows.]
Fig. 4. Warp-based sequence data blocks.
to S = 128 sequences simultaneously. (b) 128 residues,
each occupies 1 byte, achieves aligned memory address
for coalescing access. Marked as different colors, residues
consist of three types: regular (green), padding (blue) and
ending (red). In each block, sequences are concatenated and
filled up 128 columns. A ending residue ’@’ is inserted at the
end of each sequence, which is used to trigger an intra-wrap
branch to calculate and record the score of current sequence
in the kernel. The value of Mi is equal to the length of
longest column within each block, such as second column of
the data block for warp 1. As for the rest of columns whose
length is less than Mi , padding residue ’#’s are attached to
make them all aligned. Residues that belong to the same
warp are stored together in row-major order.
Algorithm 1 shows the pseudo-code for reformatting
protein sequence data. N is determined by the number
of SMXs and the number of resident warps per SMX,
denoted by Nsmx and Nwarp , respectively (line 1). Given
S , N ∗ S containers Cx are created to hold and shape
sequences as we need. One container corresponds to one
column as shown in Fig. 4. We load the first N ∗ S sequences
as the basis (line 4), and the for loop (line 7 to 19) iterates
over the rest to distribute them into Cx . To avoid serious
imbalance, the maxLen (line 6) is employed to monitor the
longest container as the upper limit. Every new sequence
searches for a suitable position based on the accumulated
length of each container and maxLen (line 13). We force
the sequence to be attached to the last container (line 12)
if no position is found. maxLen is always checked and updated by the checkMax() function once a new sequence is attached successfully (line 17). Cx are evenly divided into
N blocks when all sequences are attached, and the length
of the longest container within each block, Mi, is recorded by the getMax(S) function (line 20). The padding process, as the
last step (line 21), shapes warp-based sequence data blocks
Bi into rectangles. Implementation of this method and the
evaluation of workload balancing are presented in Sec. 4.1.
Algorithm 1 Protein sequence data reformatting
Input: all sequences sorted by length in descending order Seq, and some parameters, such as S.
Output: warp-based sequence data blocks Bi.
 1: N ← Nsmx ∗ Nwarp
 2: lines ← N ∗ S
 3: create lines containers Cx where x ∈ Z ∩ [0, lines)
 4: Cx ← Seq[x]                          ▷ load first lines sequences
 5: ptr ← lines                          ▷ start from last container
 6: maxLen ← max{Cx.len()}
 7: for all Seq[y] where y ≥ lines do
 8:   repeat
 9:     ptr ← ptr − 1
10:     if ptr < 0 then                  ▷ position is not found
11:       ptr ← lines                    ▷ start over
12:       C(ptr−1).attach(Seq[y])
13:     else if Seq[y].len() + Cptr.len() ≤ maxLen then
14:       Cptr.attach(Seq[y])
15:     end if
16:   until Seq[y] is attached
17:   maxLen.checkMax()
18:   ptr ← lines if ptr < 0             ▷ refresh pointer normally
19: end for                              ▷ all sequences are attached
20: Mi ← Cx.divide(N).getMax(S) where i = [1, ..., N]
21: Bi ← Cx.padding(Mi)
22: return Bi
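The host-side sketch below mirrors the container-filling logic of Algorithm 1; the types and helpers are our own simplification (for instance, the wrap-around pointer of lines 9-12 is replaced by a plain scan), so it should be read as an illustration rather than the CUDAMPF++ implementation.

// Sketch of Algorithm 1 (assumption: simplified re-implementation, not the CUDAMPF++ source).
// 'seqs' must be sorted by length in descending order; each container becomes one column,
// and every attached sequence is terminated by the ending residue '@'.
#include <algorithm>
#include <string>
#include <vector>

std::vector<std::vector<std::string>> reformat_sequences(
        const std::vector<std::string>& seqs, int n_warps, int S) {
    const int lines = n_warps * S;                            // one container per column
    std::vector<std::string> cont(lines);
    size_t max_len = 0;
    for (int x = 0; x < lines && x < (int)seqs.size(); ++x) { // load the basis
        cont[x] = seqs[x] + '@';
        max_len = std::max(max_len, cont[x].size());
    }
    for (size_t y = (size_t)lines; y < seqs.size(); ++y) {    // distribute the rest
        bool attached = false;
        for (int ptr = lines - 1; ptr >= 0; --ptr) {          // search for a fitting column
            if (cont[ptr].size() + seqs[y].size() + 1 <= max_len) {
                cont[ptr] += seqs[y] + '@';
                attached = true;
                break;
            }
        }
        if (!attached) cont[lines - 1] += seqs[y] + '@';      // fall back to the last container
        for (const auto& c : cont) max_len = std::max(max_len, c.size());
    }
    // Split containers into N blocks of S columns and pad each block to its tallest column (Mi).
    std::vector<std::vector<std::string>> blocks(n_warps);
    for (int i = 0; i < n_warps; ++i) {
        size_t Mi = 0;
        for (int j = 0; j < S; ++j) Mi = std::max(Mi, cont[i * S + j].size());
        for (int j = 0; j < S; ++j) {
            std::string col = cont[i * S + j];
            col.append(Mi - col.size(), '#');                 // padding residues
            blocks[i].push_back(col);
        }
    }
    return blocks;
}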
3.3   Kernel Design
Since the number of sequences that are processed in parallel
is within the range of {2, 4, 8, 16, 32, 64, 128}, the proposed
algorithms are designed to cover all these cases. Therefore,
14 different types of kernels are generated, named as S -lane
MSV/SSV kernels, and their implementations slightly vary
with value S .
Algorithm 2 outlines the S-lane MSV kernels, which are more complex than the implementation of single-sequence
alignment in [18]. Some features are inherited, like (a) using
local memory to hold intermediate values as well as register
spill (line 1), (b) loading scores through read-only cache
instead of shared memory to avoid weak scalability with
low occupancy (line 16) and (c) fully unrolling the innermost loop to maximize register usage and keep high-frequency values resident in on-chip memory (line 14). In order to assign
different threads of a warp to work on different sequences
without mutual interference, we label group ID gid and the
offset in group oig on each thread (line 3 and 4). Threads that
work on the same sequence are grouped with a unique gid,
and they are assigned to different in-group tasks based on
the oig . Inter-thread collaborations are only allowed within
each thread group.
The outer loop (line 5) iterates over columns of the warp-based sequence data block B while the middle loop (line 11)
takes charge of each row. The cycle times of outer loop is
directly affected by the query model size: the larger model
results in the more cycles. This is because on-chip memory
resources are limited when making alignment with large
models, and it further leads to the kernel selection with
small S . For example, a model with length of 45 can be
handled by 128-lane kernels whereas a model of 1000-length
Algorithm 2 MSV kernels with multi-sequence alignment
Input: emission score E, sequence data block B, height of data block M, sequence length Len, offset of sequence Oseq, offset of sequence length Olen, and other parameters such as L, W, H, S and dbias, etc.
Output: P-values of all sequences Pi.
 1: local memory Γ[H]
 2: wid ← blockIdx.y ∗ blockDim.y + threadIdx.y
 3: gid ← ⌊threadIdx.y/L⌋                    ▷ group id
 4: oig ← threadIdx.x % L                    ▷ offset in group
 5: for k ← 0 to W − 1 do
 6:   R ← M[wid]
 7:   count ← 0
 8:   mask, scE ← 0x00000000
 9:   Iseq ← Oseq[wid] + gid ∗ L + ⌊k/4⌋
10:   scB ← initialize it based on Len, Olen[wid] and k
11:   while count < R do
12:     r ← extract_res(B, Iseq, k, S)
13:     γ ← inter_or_intra_reorder(L, S, v, oig)
14:     #pragma unroll H
15:     for h ← 0 to H − 1 do
16:       σ ← load_emission_score(E, S, L, oig, h, r)
17:       v ← vmaxu4(γ, scB)
18:       v ← vaddus4(v, dbias)
19:       v ← vsubus4(v, σ)
20:       scE ← vmaxu4(scE, v)
21:       γ ← Γ[h] & mask                    ▷ load old scores
22:       Γ[h] ← v                           ▷ store new scores
23:     end for
24:     scE ← max_reduction(L, S, scE)
25:     scJ, scB ← update special states, given scE
26:     mask ← 0xffffffff
27:     if r contains ending residue @ then  ▷ branch
28:       Pi ← calculate P-value and record it
29:       mask ← set affected bits to 0x00
30:       scJ, scE, scB ← update or reset special states
31:     end if
32:     count, Iseq ← step up
33:   end while
34: end for
may only select 4-lane kernels. Given that, in one warp,
k is used to index S columns of residues simultaneously
during each iteration, and the Iseq always points to residues
that are being extracted from global memory. The details
of residue extraction are shown in Algorithm 3. For S -lane
kernels with S ≤ 32, only one 8-bit value (one residue) per
thread group is extracted and ready for alignment, though a warp always has fully coalesced access to 128 residues assembled in a 128-byte memory space. In contrast, 64-lane kernels extract two 8-bit values, and 128-lane kernels
are able to handle all of them. These residues are then
used in the function load_emission_score (line 16) to load
corresponding emission scores of “Match” states (line 3, 5
and 11 in Algorithm 4). The total number of amino acids is
extended to 32, and the extra states are filled with invalid
scores, which aims to cover the newly introduced residues
(ending and padding). 64- and 128-lane kernels are treated in a special way, as shown in Algorithm 4 (lines 3-9 and 12-15), due to the demand of score assembly. In this case, each
thread assembles two or four scores of different residues
into a 32-bit register to be ready for subsequent SIMD instructions. All emission scores are loaded through read-only
cache to keep shared/L1 cache path from overuse, and the
score sharing is done via inter-thread shuffle instructions.
Algorithm 3 Extract residues from data block - extract_res
Input: B, Iseq, k and S.
Output: a 32-bit value r that contains one, two or four residues, given S.
 1: if S ∈ {2, 4, 8, 16, 32} then
 2:   r ← (B[Iseq] >> 8 ∗ (k % 4)) & 0x000000ff
 3: else if S = 64 then
 4:   r ← (B[Iseq] >> 16 ∗ k) & 0x0000ffff
 5: else if S = 128 then
 6:   r ← B[Iseq]
 7: end if
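A CUDA rendering of Algorithm 3 could look as follows; it keeps the selection structure of the listing but is our own sketch (it assumes B is viewed as an array of 32-bit words, and the 16-bit case uses k % 2 so the shift stays within the word).

// Sketch of extract_res (Algorithm 3); not the shipped CUDAMPF++ code.
__device__ __forceinline__ unsigned int extract_res(const unsigned int* B,
                                                    int i_seq, int k, int S) {
    unsigned int word = B[i_seq];
    if (S <= 32)       return (word >> (8 * (k % 4))) & 0x000000ffu;   // one residue
    else if (S == 64)  return (word >> (16 * (k % 2))) & 0x0000ffffu;  // two residues
    else /* S == 128 */ return word;                                   // four residues
}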
Algorithm 4 Get “Match” scores - load_emission_score
Input: E, S, L, oig, h and r.
Output: a 32-bit value σ that contains four emission scores, each 8-bit, in striped layout.
 1: Na ← 32                               ▷ amino acids
 2: if S ∈ {2, 4, 8, 16, 32} then
 3:   σ ← E[h ∗ Na ∗ L + r ∗ L + oig]     ▷ __ldg
 4: else if S = 64 then
 5:   sc ← E[h ∗ Na + threadIdx.x] & 0x0000ffff
 6:   res ← r & 0x000000ff                ▷ assembly
 7:   σ ← σ | (__shfl(sc, res))
 8:   res ← (r >> 8) & 0x000000ff
 9:   σ ← σ | (__shfl(sc, res) << 16)     ▷ assembly
10: else if S = 128 then
11:   sc ← E[h ∗ Na + threadIdx.x] & 0x000000ff
12:   for bits ∈ {0, 8, 16, 24} do
13:     res ← (r >> bits) & 0x000000ff    ▷ assembly
14:     σ ← σ | (__shfl(sc, res) << bits)
15:   end for
16: end if
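As a sketch of the score assembly in Algorithm 4 for the 128-lane case, each thread first caches one byte of the emission row in a register and then gathers the four bytes selected by its residues through warp shuffles (CUDA 8-era __shfl); the function name and layout assumptions are ours.

// Sketch of load_emission_score for S = 128 (Algorithm 4, lines 10-15); illustrative only.
// Assumes lane t of the warp caches byte t of the 32-entry emission row for state h.
__device__ __forceinline__ unsigned int load_emission_score_128(const unsigned int* E,
                                                                int h, unsigned int r) {
    const int Na = 32;                                           // extended amino-acid alphabet
    unsigned int sc = __ldg(&E[h * Na + threadIdx.x]) & 0x000000ffu;  // this lane's byte of the row
    unsigned int sigma = 0;
    for (int bits = 0; bits < 32; bits += 8) {
        unsigned int res = (r >> bits) & 0x000000ffu;            // residue index held by this lane
        sigma |= ((unsigned int)__shfl((int)sc, (int)res) & 0xffu) << bits;  // gather its score
    }
    return sigma;
}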
Algorithms 5 and 6 detail two crucial steps of the MSV/SSV kernels, inter_or_intra_reorder and max_reduction (lines 13 and 24 in Algorithm 2), via PTX assembly to expose
internal mechanisms of massive bitwise operations for the
multi-sequence alignment. They aim to reorder 8-bit values
and get the maximum value amongst each thread group
in parallel, and meanwhile, noninterference between thread
groups is guaranteed. Our design still avoids using shared memory since available L1 cache is the key performance factor when unrolling the innermost loop. Therefore, all
intermediate or temporary values are held by private memory space of each thread, such as registers or local memory.
The shuffle instruction, shfl, is employed again to achieve
the inter-thread communication but the difference is that a
mask is specified to split a warp into sub-segments (line
8 in Algorithm 5). Each sub-segment represents a thread
group. In Algorithm 6, two reduction phases are required
for S-lane kernels with S ≤ 16. Lines 5 to 12 present inter-thread reductions using shfl and vmax to get the top four 8-bit values within each thread group. The following
lines are intra-thread reductions which only happen inside
each thread and eventually work out the maximum value.
As an example, Fig. 5 illustrates the reordering and max-reduction for 16-lane kernels. W = 8 is the width of striped
model states, and it can also be regarded as the scope of
each thread group. For S = 16, a warp is partitioned
into 16 thread groups, and each handles eight values of
8-bit. Yellow lattices in Fig. 5(a) are values that need to
be exchanged between two threads. Arrows indicate the
reordering direction. In Fig. 5(b), assuming digital numbers
(0 to 127) labeled inside lattices represent the values held
by threads, the yellow lattices always track the maximum
value of each thread group. Three pairs of shuffle and SIMD
instructions are used to calculate the maximum value and
broadcast it to all members of the thread group.
Algorithm 5 Reorder 8-bit values inter-thread or intra-thread - inter_or_intra_reorder
Input: L, S, v and oig.
Output: a 32-bit value γ that contains four 8-bit values.
 1: if S ∈ {2, 4, 8, 16} then              ▷ inter-thread
 2:   lane ← (oig + L − 1) % L             ▷ source lane
 3:   asm{shr.u32 x, v, 24}
 4:   asm{mov.b32 mask, 0x1f}
 5:   asm{sub.b32 mask, mask, L(S)-1}
 6:   asm{shl.b32 mask, mask, 16}
 7:   asm{or.b32 mask, mask, 0x1f}
 8:   asm{shfl.idx.b32 x, x, lane, mask}
 9:   asm{shl.b32 y, v, 8}
10:   γ ← x | y
11: else if S = 32 then                    ▷ intra-thread
12:   asm{shr.u32 x, v, 24}
13:   asm{shl.b32 y, v, 8}
14:   γ ← x | y
15: else if S = 64 then                    ▷ intra-thread
16:   asm{shr.u32 x, v, 8}
17:   asm{and.b32 x, x, 0x00ff00ff}
18:   asm{shl.b32 y, v, 8}
19:   asm{and.b32 y, y, 0xff00ff00}
20:   γ ← x | y
21: else if S = 128 then
22:   γ ← 0x00000000 or 0x80808080         ▷ no reorder
23: end if
The potential divergent execution only happens in the
branch for recording P-value of ended sequences, as shown
in Algorithm 2, line 27-31. Threads which find the existence
of ending residues keep active and move into the branch
whereas others are inactive during this period. A mask is
introduced to mark the position of ended sequences within
32-bit memory space and set those affected bits to 0. This
is particularly helpful to 64 and 128-lane kernels because it
only cleans up corresponding lanes for new sequence while
keeping data of other lanes unchanged. Moreover, bitwise
operations with mask minimize the number of instructions
needed inside the innermost loop, which is also beneficial
to the overall performance. As for the SSV kernel shown in
Algorithm 7, it shares the same framework with the MSV
kernel but has less computational workload. Besides, one
more mask value is added to reset affected bits since the
-inf of SSV kernel is 0x80 rather than 0x00. mask1 cleans
up outdated scores as the first step, followed by a bitwise disjunction with mask2 to reset local memory Γ (line 18 in Algorithm 7).
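In CUDA terms, the per-lane reset can be sketched as follows (our own illustration of the masking idea, not code from [18]): when a thread detects an ending residue in one of its 8-bit lanes, it zeroes the corresponding byte of the mask so that the next load of Γ[h] clears only that lane, and for SSV a second mask re-injects the 0x80 "minus infinity".

// Illustration of the lane-reset masks (assumption: lane index 0..3 selects one byte).
__device__ __forceinline__ void reset_lane_masks(int lane, unsigned int* mask1,
                                                 unsigned int* mask2 /* SSV only */) {
    unsigned int byte = 0xffu << (8 * lane);
    *mask1 &= ~byte;                              // next (Gamma[h] & mask1) zeroes this lane only
    if (mask2) *mask2 |= 0x80u << (8 * lane);     // SSV: re-seed the cleared lane with -inf (0x80)
}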
Algorithm 6 Get and broadcast the maximum value through reduction operations - max_reduction
Input: L, S and scE.
Output: a 32-bit value scE that contains four 8-bit or two 16-bit values.
 1: if S = 128 then
 2:   do nothing but return scE
 3: else
 4:   x ← scE, y ← 0, z ← 0
 5:   if S ∈ {2, 4, 8, 16} then              ▷ inter-thread reduction
 6:     i ← log2 L − 1
 7:     for lm ← 2^0 to 2^i do
 8:       asm{shfl.bfly.b32 y, x, lm, 0x1f}
 9:       asm{and.b32 m, m, 0x00000000}
10:       asm{vmax4.u32.u32.u32 x, x, y, m}
11:     end for
12:   end if
13:   asm{shr.u32 y, x, 8}
14:   asm{and.b32 y, y, 0x00ff00ff}
15:   asm{shl.b32 z, x, 8}
16:   asm{and.b32 z, z, 0xff00ff00}
17:   asm{or.b32 y, y, z}
18:   asm{and.b32 m, m, 0x00000000}
19:   asm{vmax4.u32.u32.u32 x, x, y, m}
20:   return scE ← x if S = 64
21:   asm{shr.u32 y, x, 16}
22:   asm{shl.b32 z, x, 16}
23:   asm{or.b32 y, y, z}
24:   asm{and.b32 m, m, 0x00000000}
25:   asm{vmax4.u32.u32.u32 x, x, y, m}
26:   return scE ← x if S ∈ {1, 2, 4, 8, 16, 32}
27: end if
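For readers more comfortable with CUDA intrinsics than PTX, the inter-thread phase of Algorithm 6 corresponds to a butterfly reduction over each thread group; the sketch below uses __shfl_xor and the __vmaxu4 SIMD-video intrinsic (CUDA 8-era names) and is our paraphrase of that phase, not the shipped kernel code.

// Sketch of max_reduction's inter-thread phase for a group of L threads (S <= 16).
__device__ __forceinline__ unsigned int group_max_u8x4(unsigned int x, int L) {
    for (int lm = 1; lm < L; lm <<= 1) {                       // butterfly over the thread group
        unsigned int y = (unsigned int)__shfl_xor((int)x, lm); // partner's packed 8-bit maxima
        x = __vmaxu4(x, y);                                    // per-byte unsigned max
    }
    return x;   // every group member now holds the group maximum; intra-thread phases follow
}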
Fig. 5. An example of reordering and max-reduction for kernels that handle 16 sequences simultaneously. (a) Reordering for MSV/SSV kernels with S = 16: within each thread group of width W = 8, the top 8-bit value of each 32-bit register is exchanged with the neighboring thread while the remaining bytes are shifted. (b) Max-reduction for MSV/SSV kernels with S = 16: three pairs of shuffle and SIMD-video max instructions (one inter-thread, two intra-thread) compute the maximum of each thread group and broadcast it to all group members.
Algorithm 7 SSV kernel with multi-sequence alignment
Input: emission score E, sequence data block B, height of data block M, sequence length Len, offset of sequence Oseq, offset of sequence length Olen, and other parameters such as L, W, H, S and dbias, etc.
Output: P-values of all sequences Pi.
 1: local memory Γ[H]
 2: wid ← blockIdx.y ∗ blockDim.y + threadIdx.y
 3: gid ← ⌊threadIdx.y/L⌋                    ▷ group id
 4: oig ← threadIdx.x % L                    ▷ offset in group
 5: for k ← 0 to W − 1 do
 6:   T ← M[wid]
 7:   count ← 0
 8:   mask1, mask2, scE ← 0x80808080
 9:   Iseq ← Oseq[wid] + gid ∗ L + ⌊k/4⌋
10:   while count < T do
11:     r ← extract_res(B, Iseq, k, S)
12:     γ ← inter_or_intra_reorder(L, S, v, oig)
13:     #pragma unroll H
14:     for h ← 0 to H − 1 do
15:       σ ← load_emission_score(E, S, L, oig, h, r)
16:       v ← vsubus4(v, σ)
17:       scE ← vmaxu4(scE, v)
18:       γ ← Γ[h] & mask1 | mask2           ▷ load old scores
19:       Γ[h] ← v                           ▷ store new scores
20:     end for
21:     scE ← max_reduction(L, S, scE)
22:     mask1 ← 0xffffffff, mask2 ← 0x00000000
23:     if r contains ending residue @ then  ▷ branch
24:       Pi ← calculate P-value and record it
25:       mask1 ← set affected bits to 0x00
26:       mask2 ← set affected bits to 0x80
27:       scE ← update or reset special states
28:     end if
29:     count, Iseq ← step up
30:   end while
31: end for
3.4   Kernel Optimization
The kernel performance of CUDAMPF shows that H = 19
is able to cover the longest query model whose length is
2405, for MSV and SSV algorithms [18]. In current design,
however, 2-lane kernels can only handle models with the
length of 1216 at most, given the same H . In addition, we
recall that L1 Cache-Hit-Ratio (CHR) is employed as a metric to evaluate register spill in CUDAMPF, and MSV/SSV
kernels with maximum 64 registers per thread have no spill
to local memory. This enables us to push up H to hold larger
models for the multi-sequence alignment. Two optimization
schemes are proposed to improve overall performance and
address the concern about scalability.
3.4.1 CHR Sacrificed Kernel
It is well-known that L1 cache shares the same block of
on-chip memory with shared memory physically, and it is
used to cache accesses to local memory as well as register
spill. The L1 cache has low latency on data retrieval and
storage with a cache hit, which can be further utilized to
increase the throughput of kernels based on the proposed
framework. We treat L1 cache as secondary registers, and the
usage is measured by CHR for local loads and stores. By
increasing H, more model states can reside in registers
and cached local memory. The moderate loss of performance
due to uncached register spills is acceptable, which is
attributed to highly optimized task and data parallelism in
the current framework. However, it is impossible to increase
H unboundedly due to the limited capacity of L1 cache.
Overly large H leads to the severe register spill that causes
low CHR and stalls warps significantly. Hence, there always
is a trade-off between CHR and H , and the goal is to
find a reasonable point where kernel performance (GCUPS)
begins to fall off. The decline in performance indicates that
the latency, caused by excessive communications between
on and off-chip memory, starts to overwhelm the benefits
of parallelism. The corresponding H at the turning point is
considered to be the maximum one, denoted by Hmax .
TABLE 1
Benchmarks of the maximum H via innermost loop unrolling***

MSV kernels
H  | reg. per thread | stack frame | spill stores | spill loads | GCUPS | L1 CHR (%)
20 | 63 | 8   | 0   | 0   | 258.7 | 99.97
25 | 63 | 8   | 0   | 0   | 261.5 | 99.97
30 | 64 | 8   | 0   | 0   | 272.1 | 99.97
35 | 64 | 40  | 40  | 44  | 279.3 | 75.65
40 | 64 | 48  | 52  | 56  | 283.9 | 75.41
45 | 64 | 48  | 56  | 52  | 280.6 | 75.49
50 | 64 | 96  | 152 | 96  | 263.9 | 25.61
55 | 64 | 128 | 208 | 140 | 240.8 | 14.95

SSV kernels
H  | reg. per thread | stack frame | spill stores | spill loads | GCUPS | L1 CHR (%)
20 | 62 | 8   | 0   | 0   | 269.5 | 99.96
25 | 62 | 8   | 0   | 0   | 314.0 | 99.96
30 | 62 | 8   | 0   | 0   | 329.0 | 99.96
35 | 61 | 8   | 0   | 0   | 347.9 | 99.96
40 | 64 | 16  | 4   | 4   | 361.5 | 99.94
45 | 64 | 48  | 44  | 40  | 375.9 | 80.44
50 | 64 | 56  | 64  | 44  | 375.9 | 60.30
55 | 64 | 80  | 116 | 72  | 340.9 | 17.81

*** Data collected on 32-lane kernels compiled with nvcc 8.0, using env nr [23] as the sequence dataset.

TABLE 1 lists a benchmark result that shows the relationship between H, kernel performance and CHR. Starting from H = 20 with a step of 5, intuitively, CHR is consumed after on-chip registers are exhausted, and the kernel performance increases first and falls back eventually, as expected. We choose 45 and 50 as the Hmax for the MSV and SSV kernels, respectively. Larger H results in rapid degradation of both performance and L1 CHR. Besides, the difference in Hmax indicates that MSV kernels have more instructions than SSV kernels within the innermost loop, and hence more registers or local memory are used while unrolling the loop. The Hmax is therefore algorithm-dependent. Given Eq. (2), (3) and Hmax, we formulate the selection of S as below:

argmax_S f = {S | f = W_S · Hmax, ∀ m̂ ≤ W_2 · Hmax : f ≥ m̂},   (4)

where f is the function of S, and W_2 · Hmax indicates the maximum length of query models that 2-lane kernels can handle. The CUDAMPF implementation will be used instead if any larger model is applicable. Eq. (4) describes a rule of kernel selection that always prefers kernels with more lanes if they are able to cover the model length. Once the kernel type (S-lane) is determined, the H of every query model located in the coverage area can be obtained via:

argmin_H Φ = {H | ∀ H ∈ [⌈m̂/W_S⌉, Hmax] ∩ Z : Φ = W_S · H, Φ ≥ m̂},   (5)

where Φ represents the function of H. Eq. (5) minimizes H to fit query models perfectly, which thereby avoids redundant computation and memory allocation.
In summary, this optimization scheme aims to fully leverage the speedy on-chip memory, including L1 cache via sacrificing CHR proactively, to further boost kernel throughput, and in the meanwhile, it extends the coverage of the proposed framework to larger query models.
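For reference, the selection rules of Eq. (4) and (5) can be restated compactly as in the sketch below; the helper is our own (using the Kepler values ŝ = 32, ŵr = 32, ŵv = 8) and ignores the m̂max/m̂min thresholds discussed in the next subsection.

// Hypothetical helper implementing Eq. (4)-(5); not part of the CUDAMPF++ code base.
#include <utility>

std::pair<int, int> select_kernel(int model_len, int h_max) {
    const int warp_bits = 32 * 32;                    // s_hat * w_r (register bits per warp row)
    for (int S = 128; S >= 2; S >>= 1) {              // prefer kernels with more lanes, Eq. (4)
        int W = warp_bits / (8 * S);                  // striped width for MSV/SSV (w_v = 8)
        if (W * h_max >= model_len) {
            int H = (model_len + W - 1) / W;          // smallest covering H, Eq. (5)
            if (H < 2) H = 2;
            return {S, H};
        }
    }
    return {1, 0};                                    // fall back to single-sequence CUDAMPF
}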
3.4.2   Performance-oriented Kernel
Although the proposed framework achieves a significant
improvement in performance, it certainly introduces overhead due to the implementation of multi-sequence alignment, compared with CUDAMPF. This downside becomes
more apparent as the model length increases (S decreases).
Therefore, it is expected that CUDAMPF with single-sequence alignment may exceed CUDAMPF++ for large enough query models. In order to pursue optimal performance consistently, we also merge the CUDAMPF implementation into the proposed framework as a special case
with S = 1. The maximum model length m̂max on which
CUDAMPF++ still outperforms is defined as the threshold
of kernel switch.
Similar to m̂max , another threshold m̂min can also be
employed to optimize 128 and 64-lane kernels for small
models. We recall that 128 and 64-lane kernels need extra
operations to load emission scores in Algorithm 4. Thus,
they have more overhead than other kernels within the
innermost loop, which may counteract their advantages on
the number of parallel lanes. We extend the coverage of 32-lane kernels to handle small models previously owned by 128- and 64-lane kernels, and the evaluation is presented in
Sec. 4.3.
4   EXPERIMENTAL RESULTS
In this section, we present several performance evaluations on the proposed CUDAMPF++, such as workload
balancing, kernel throughput and scalability. The comparison targets consist of CUDAMPF, CPU-based hmmsearch
of latest HMMER v3.1b2 and other related acceleration
attempts. Both CUDAMPF++ and CUDAMPF are evaluated on an NVIDIA Tesla K40 GPU and compiled with the CUDA v8.0 compiler. The Tesla K40 is built with the Kepler GK110 architecture that contains 15 SMXs (2880 CUDA cores) and 12 GB of off-chip memory [24]. One of NVIDIA's profiling tools, nvprof [25], is also used to track metrics like L1/tex CHR, register usage and spill. For hmmsearch, two types
CHR, register usage and spill. For hmmsearch, two types
of CPUs are employed to collect performance results: Intel
Xeon E5620 (4 physical cores with maximum 8 threads) and
dual Intel Xeon E5-2650 (16 physical cores and 32 threads
in total). All programs are executed in the 64-bit Linux
operating system.
Unlike the CUDAMPF implementation, NVRTC is abandoned in the current design due to its instability in compiling kernels with high usage of on-chip registers. Even with the latest compiler, nvcc v8.0, runtime compilation with the NVRTC library still generates unexpected binary files or reports resource-exhaustion errors, especially when unrolling large loops and register spill happens. Thus, we choose the just-in-time (JIT) compilation instead. All kernels
are pre-compiled in offline mode and stored as .ptx files,
each with an unique kernel name inside. Given different
query models, the corresponding .ptx file is loaded and
further compiled to binary code at runtime. The load time
and overhead of compilation are negligible.
Two protein sequence datasets [23] are chosen for experiments: (a) env nr (1.9 GB) and (b) est human (5.6 GB). As
for query models, we still use Pfam 27.0 [26] that contains 34
thousand HMMs with different sizes ranging from 7 to 2405.
The overall performance is measured in kernel throughput
(GCUPS) which is directly calculated by the total number
of residues contained in each database, model length and
kernel execution time.
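That is, GCUPS = (total residues in the database × model length) / (kernel execution time in seconds × 10^9); this is the standard cell-updates-per-second measure, stated here for clarity rather than quoted from [18].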
4.1   Evaluation of Workload Balancing
To avoid time overhead of data reformatting introduced in
Sec. 3.2, we incorporate Redis [27], a high-performance in-memory database, into the proposed framework. Redis is written in ANSI C and able to work with CUDA seamlessly. It currently works as an auxiliary component to hold warp-based sequence data blocks and query models in memory,
which offers blazing fast speed for data retrieval. Given
the single K40 GPU, each protein sequence dataset is partitioned into 61,440 blocks which are then ingested into Redis
database separately as key-value pairs. The quantity of data
blocks residing in the Redis database should be an integral multiple of the number of available warps.
Table 2 summarizes the evaluation result of workload
balancing for both protein sequence datasets. The “avg.”
and “sd.” represent average value and standard deviation
across all blocks, respectively. We recall that M is the height
of data block which serves as the metrics of computational
workload for each warp, and the number of ending residues
is another impact factor of performance because the ending
residue may lead to thread idling. It is clear to see that
both sd. M and sd. ending residues are trivial, and the last
two columns show that average multiprocessor efficiency
approaches 100%, which is strong evidence of balanced workload over all warps on the GPU. Besides, the Padding-to-Real Ratio (PRR), which compares the amount of invalid computation to the amount of desired computation, is investigated to
assess the negative effect of padding residues, and it is also
proved to be negligible.
4.2   Performance Evaluation and Analysis
In order to demonstrate the outstanding performance of the proposed method and its correlation with the utilization
of memory resources, we make an in-depth comparison
between CUDAMPF++ and CUDAMPF via profiling both
MSV and SSV kernels, reported in Tables 3 and 4, respectively. A total of 27 query models are selected to investigate
the impact of H on the performance of S -lane kernels. Each
kernel type is evaluated with two models that correspond to H = Hmax and H = ⌈(Hmax − 1)/2 + 1⌉, except for the 2-lane kernel, since 2405 is the largest model length in [26] with corresponding H = 38.
For the MSV kernels, the maximum speedup listed in
Table 3 is 17.0x when m̂ = 23, and the trend of speedup
is descending as model length increases. This is because
memory resources, like on-chip registers, L1 cache and
even local memory, are significantly underutilized in CUDAMPF when making alignment with small models whereas
CUDAMPF++ always intends to fully take advantage of
them. Given 64 as the maximum number of registers per
thread, only about half the amount of registers are occupied
in CUDAMPF till m̂ = 735, and other resources are not
utilized at all. In contrast, the CUDAMPF++ not only keeps
high usage of registers but also utilizes L1 cache and local
memory to hold more data, which results in a near constant
performance regardless of the model length. The texture
CHR is dominated by model length since we only use
texture cache for loading emission scores. Larger model
leads to lower texture CHR. Comparing the performance
of S -lane kernels in CUDAMPF++, the cases of H = 45
outperform the cases of H = 23 though more local memory
is allocated with register spill. One exception is the 128-lane kernel due to the higher complexity of its innermost loop,
which can be optimized via using the 32-lane kernel instead.
As shown in Table 4, SSV kernels have similar evaluation
results with MSV kernels but higher throughput. Starting
from m̂ = 832, nevertheless, CUDAMPF outperforms and
eventually yields up to 468.9 GCUPS, which is 1.5x faster
than CUDAMPF++. The case that peak performance of two
frameworks are not comparable is due to the overhead of
extra instructions introduced for the multi-sequence alignment in CUDAMPF++. The kernel profiling indicates that
both MSV and SSV kernels are bounded by computation
and memory bandwidth (texture). However, unlike MSV
kernels, SSV kernels have fewer operations within the innermost loop, which makes them more “sensitive”. In other
words, newly added operations (i.e., bitwise operations
for mask) within the innermost loop, compared with CUDAMPF, have more negative effect on SSV kernels than
MSV kernels. Therefore, an upto 50% performance gap is
observed only in SSV kernels.
4.3   Scalability Evaluation
In order to demonstrate the scalability of the proposed
framework, a total of 57 query models with different sizes
ranging from 10 to 2450 are investigated. The interval
of model length is fixed to 50. Fig. 6 and 7 show the
performance comparison between CUDAMPF++ and CUDAMPF for MSV and SSV kernels, respectively. The coverage
area of model length for each kernel type is highlighted.
Right subfigure depicts the performance of 128 and 64-lane
kernels while others are shown in the left one. Overall,
CUDAMPF++ achieves near-constant performance and significantly outperforms CUDAMPF with small models. The
S -lane MSV (SSV) kernel yields the maximum speedup of
30x (23x) with respect to CUDAMPF. It is worth mentioning
that, in CUDAMPF++, SSV kernels have larger fluctuation
margin of performance than MSV kernels. This is caused
TABLE 2
Evaluation of workload balancing for the warp-based sequence data blocks

DB name   | DB size (GB) | total seq. | total residues | avg. M | sd. M | avg. ending residues | sd. ending residues | avg. PRR | SMX eff. MSV (%)* | SMX eff. SSV (%)*
env nr    | 1.9 | 6,549,721 | 1,290,247,663 | 21,109 | 85  | 13,645 | 71 | 1.14E-4 | 97.13 | 96.7
est human | 5.6 | 8,704,954 | 4,449,477,543 | 72,563 | 174 | 18,135 | 60 | 3.22E-5 | 97.97 | 97.76

* Data collected with 32-lane kernels.
TABLE 3
Performance comparison of the MSV kernel between the proposed CUDAMPF++ and CUDAMPF (each entry reads CUDAMPF++ / CUDAMPF)***

S-lane | model length m̂ | acc. ID | H | reg. per thread | stack frame | spill stores | spill loads | L1 CHR (%) | Tex. CHR (%) | GCUPS | speedup
128 | 23   | PF13823.1  | 23 / 2  | 63 / 29 | 24 / 0 | 0 / 0  | 0 / 0  | 99.94 / unused | 100 / 100     | 168.6 / 9.9   | 17.0x
128 | 45   | PF05931.6  | 45 / 2  | 64 / 29 | 48 / 0 | 48 / 0 | 24 / 0 | 51.93 / unused | 100 / 100     | 144.4 / 19.4  | 7.5x
64  | 46   | PF09501.5  | 23 / 2  | 64 / 29 | 16 / 0 | 0 / 0  | 0 / 0  | 99.96 / unused | 100 / 100     | 225.7 / 19.8  | 11.4x
64  | 90   | PF05777.7  | 45 / 2  | 64 / 29 | 48 / 0 | 64 / 0 | 32 / 0 | 50.88 / unused | 100 / 100     | 231.3 / 38.7  | 6.0x
32  | 92   | PF00207.17 | 23 / 2  | 64 / 29 | 8 / 0  | 0 / 0  | 0 / 0  | 99.97 / unused | 100 / 100     | 274.1 / 39.6  | 7.0x
32  | 180  | PF02737.13 | 45 / 2  | 64 / 29 | 48 / 0 | 56 / 0 | 52 / 0 | 75.49 / unused | 100 / 100     | 278.8 / 77.4  | 3.6x
16  | 184  | PF00596.16 | 23 / 2  | 63 / 29 | 8 / 0  | 0 / 0  | 0 / 0  | 99.99 / unused | 100 / 100     | 266.7 / 78.9  | 3.4x
16  | 360  | PF01117.15 | 45 / 3  | 64 / 30 | 56 / 0 | 60 / 0 | 60 / 0 | 80.11 / unused | 100 / 100     | 277.0 / 130.9 | 2.1x
8   | 368  | PF05208.8  | 23 / 3  | 63 / 30 | 8 / 0  | 0 / 0  | 0 / 0  | 99.99 / unused | 100 / 100     | 266.7 / 133.6 | 2.0x
8   | 720  | PB000053   | 45 / 6  | 64 / 34 | 56 / 0 | 60 / 0 | 60 / 0 | 80.10 / unused | 78.48 / 92.37 | 271.6 / 183.8 | 1.5x
4   | 735  | PF03971.9  | 23 / 6  | 63 / 34 | 8 / 0  | 0 / 0  | 0 / 0  | 100 / unused   | 75.3 / 92.37  | 262.4 / 187.4 | 1.4x
4   | 1439 | PF12252.3  | 45 / 12 | 64 / 51 | 56 / 0 | 60 / 0 | 60 / 0 | 80.08 / unused | 67.09 / 72.66 | 271.5 / 231.6 | 1.2x
2   | 1471 | PB006678   | 23 / 12 | 63 / 51 | 8 / 0  | 0 / 0  | 0 / 0  | 100 / unused   | 63.38 / 72.6  | 259.7 / 237.3 | 1.1x
2   | 2405 | PB003055   | 38 / 19 | 64 / 64 | 40 / 0 | 32 / 0 | 44 / 0 | 100 / unused   | 60.15 / 64.06 | 264.0 / 271.2 | -1.0x

*** The env nr [23] is used in data collection.
TABLE 4
Performance comparison of the SSV kernel between the proposed CUDAMPF++ and CUDAMPF (each entry reads CUDAMPF++ / CUDAMPF)***

S-lane | model length | acc. ID | H | reg. per thread | stack frame | spill stores | spill loads | L1 CHR (%) | Tex. CHR (%) | GCUPS | speedup
128 | 26   | PF02822.9  | 26 / 2  | 62 / 33 | 24 / 0 | 0 / 0   | 0 / 0  | 99.93 / unused | 100 / 100     | 178.4 / 15.9  | 11.2x
128 | 50   | PF03869.9  | 50 / 2  | 64 / 33 | 56 / 0 | 60 / 0  | 32 / 0 | 54.29 / unused | 100 / 100     | 144.6 / 30.7  | 4.7x
64  | 52   | PF02770.14 | 26 / 2  | 63 / 33 | 16 / 0 | 0 / 0   | 0 / 0  | 99.96 / unused | 100 / 100     | 276.1 / 31.8  | 8.7x
64  | 100  | PB000229   | 50 / 2  | 64 / 33 | 72 / 0 | 100 / 0 | 52 / 0 | 30.65 / unused | 100 / 100     | 239.8 / 61.4  | 3.9x
32  | 104  | PF14807.1  | 26 / 2  | 61 / 33 | 8 / 0  | 0 / 0   | 0 / 0  | 99.96 / unused | 100 / 100     | 307.8 / 63.6  | 4.8x
32  | 200  | PF13087.1  | 50 / 2  | 64 / 33 | 56 / 0 | 64 / 0  | 44 / 0 | 60.3 / unused  | 100 / 100     | 365.0 / 122.4 | 3.0x
16  | 208  | PF15420.1  | 26 / 2  | 62 / 33 | 8 / 0  | 0 / 0   | 0 / 0  | 99.98 / unused | 100 / 100     | 305.0 / 127.2 | 2.4x
16  | 400  | PF13372.1  | 50 / 4  | 64 / 37 | 56 / 0 | 72 / 0  | 52 / 0 | 58.57 / unused | 95.66 / 100   | 347.8 / 199.3 | 1.7x
8   | 416  | PF06808.7  | 26 / 4  | 62 / 37 | 8 / 0  | 0 / 0   | 0 / 0  | 99.99 / unused | 98.92 / 100   | 302.6 / 209.1 | 1.4x
8   | 800  | PF02460.13 | 50 / 7  | 64 / 42 | 56 / 0 | 72 / 0  | 52 / 0 | 58.53 / unused | 75.79 / 87.24 | 344.8 / 306.9 | 1.1x
4   | 832  | PB001474   | 26 / 7  | 62 / 42 | 8 / 0  | 0 / 0   | 0 / 0  | 99.99 / unused | 76.90 / 87.24 | 302.1 / 319.1 | -1.1x
4   | 1600 | PB000744   | 50 / 13 | 64 / 56 | 56 / 0 | 72 / 0  | 52 / 0 | 58.49 / unused | 65.68 / 70.80 | 349.3 / 403.5 | -1.2x
2   | 1630 | PB000663   | 26 / 13 | 62 / 56 | 8 / 0  | 0 / 0   | 0 / 0  | 100 / unused   | 66.57 / 70.81 | 293.1 / 411.1 | -1.4x
2   | 2405 | PB003055   | 38 / 19 | 64 / 63 | 16 / 0 | 0 / 0   | 0 / 0  | 100 / unused   | 59.69 / 64.05 | 318.8 / 468.9 | -1.5x

*** The env nr [23] is used in data collection.
by the overhead of using read-only cache (texture cache) to
load emission scores. Although both MSV and SSV kernels
have only one __ldg() call inside the innermost loop, the texture cache read in SSV kernels, as one of the kernel limiters,
has a higher proportion of negative effect on performance
than that in MSV kernels, which results in such obvious
fluctuation. A simple evidence is that the performance curve
will be smooth and regular if replacing texture cache reads
with a constant value.
Besides, 32-lane kernels are also tested to compare with
128 and 64-lane kernels. By decreasing H , the 32-lane kernel
is able to cover smaller query models, and it outperforms
128 and 64-lane kernels until the model length is slightly
larger than 20. We simply set m̂min = 20 for both MSV and
SSV kernels in terms of evaluation results. As for m̂max , the
model lengths of 2450 and 1000 are selected for MSV and
SSV kernels, respectively.
[Two panels: GCUPS versus model length, with the coverage area of each S-lane kernel highlighted; the left panel covers the 32-, 16-, 8-, 4- and 2-lane kernels, the right panel the 128- and 64-lane kernels against the 32-lane kernel.]
Fig. 6. Performance comparison between CUDAMPF++ and CUDAMPF for the MSV kernel.
[Two panels: GCUPS versus model length, with the coverage area of each S-lane kernel highlighted; the left panel covers the 32-, 16-, 8-, 4- and 2-lane kernels, the right panel the 128- and 64-lane kernels against the 32-lane kernel.]
Fig. 7. Performance comparison between CUDAMPF++ and CUDAMPF for the SSV kernel.
4.4   Performance Comparison: CUDAMPF++ vs. Others
A comprehensive performance comparison is also made
between the optimized CUDAMPF++ and other implementations. Figs. 8 and 9 present results of the comparison between
CUDAMPF++ and CPU-based MSV/SSV stages with two
datasets. The CUDAMPF++ achieves upto 282.6 (283.9) and
465.7 (471.7) GCUPS for MSV and SSV kernels, respectively,
given the env nr (est human) dataset. Compared with the
best performance achieved by dual Xeon E5-2650 CPUs,
a maximum speedup of 168.3x (160.7x) and a minimum
speedup of 1.8x (1.7x) are observed for the MSV (SSV)
kernel of CUDAMPF++.
In the original HMMER3 paper [3], Eddy reports 12
GCUPS for MSV stage, achieved by a single CPU core.
Several acceleration efforts exist and report higher performance: (a) an FPGA-based implementation [11] yields up to
81 GCUPS for MSV stage; (b) Lin [13] inherits and modifies
a GPU-based implementation of HMMER2 [6] to accelerate
the MSV stage of HMMER3, which achieves up to 32.8 GCUPS
on a Quadro K4000 GPU; (c) [16] claims the first acceleration
work on SSV stage of latest HMMER v3.1b2 and reports
the maximum performance of 372.1 GCUPS on a GTX570
GPU. To sum up, as shown in Fig. 8 and 9, the proposed
framework, CUDAMPF++, exceeds all existing work and exhibits strong consistency in performance regardless of either the model length or the amount of protein sequences.

[Each figure has two panels: GCUPS versus model length for CUDAMPF++ on the K40 against HMMER3's hmmsearch on a Xeon E5620 (8 threads) and dual Xeon E5-2650 (32 threads), for the env nr and est human datasets.]
Fig. 8. Performance comparison between CUDAMPF++ and HMMER3's CPU-based implementation for the MSV kernel (stage).
Fig. 9. Performance comparison between CUDAMPF++ and HMMER3's CPU-based implementation for the SSV kernel (stage).
5   RELATED WORK
As one of the most popular tools for the analysis of homologous protein and nucleotide sequences, HMMER attracts many acceleration attempts. The previous version, HMMER2, is based on the Viterbi algorithm, which has proved to be the computational bottleneck. The initial effort of GPU-based implementation for HMMER2 is ClawHMMER [28]
which introduces a streaming version of Viterbi algorithm
for GPUs. They also demonstrate the implementation running on a 16-node GPU cluster, each equipped with a
Radeon 9800 Pro GPU. Another early GPU-based implementation is proposed by Walters et al. [6] who properly
fit the Viterbi algorithm into the CUDA-enabled GPU with
several optimizations, like memory coalescing, proper kernel occupancy and shared/constant memory usage, which
outperforms the ClawHMMER substantially. Yao et al. [29]
present a CPU-GPU cooperative pattern to accelerate HMMER2. Ganesan et al. [7] re-design the alignment process of
a single sequence across multiple threads to partially break
the sequential dependency in computation of Viterbi scores.
This helps build a hybrid task- and data-level parallelism
that eliminates the overhead due to unbalanced sequence
lengths.
However, with the heuristic pipeline, HMMER3 achieves
about 100x to 1000x speedups over its predecessor [3],
which hence renders any acceleration effort of HMMER2
obsolete. There are only a few existing works that aim to accelerate the SSV, MSV and P7Viterbi stages of the hmmsearch pipeline in HMMER3. Abbas et al. [11] re-write mathematical formulations of the MSV and Viterbi algorithms to expose reduction and
prefix scan computation patterns which are fitted into the
FPGA architecture. In [12], a speculative method is proposed
to reduce the number of global memory access on the GPU,
which aims to accelerate the MSV stage. Lin et al. [13],
[14] also focus on MSV stage but incorporate SIMD video
instructions provided by the CUDA-enabled GPU into their
method. Like the strategy of [6], they assign each thread to
handle a whole sequence. A CPU-based implementation of
P7Viterbi stage is done by Ferreira et al. [15] who propose a
cache-oblivious parallel SIMD Viterbi algorithm that offsets
cache miss penalties of original HMMER3 work. Neto et
al. [16] accelerate the SSV stage via a set of optimizations
on the GPU, such as model tiling, outer loop unrolling,
coalesced and vectorized memory access.
6   DISCUSSION
While we have shown that the proposed framework with
hierarchical parallelism achieves impressive performance
based on Kepler architecture, we believe that more advanced GPU architectures, like Maxwell, Pascal and Volta,
could also benefit from it because of its hardware-based
design. It is easy to port the framework to run on advanced
GPUs and gain better performance given more available
hardware resources, such as on-chip registers, cache capacity, memory bandwidth and SMs. Also, the framework
naturally has linear scalability when distributing protein
sequences to multiple GPUs. To handle large models that
exceed the carrying capability of a single GPU, however, one potential solution is model partitioning that distributes different segments of the model to different GPUs while introducing inter-device communication (i.e., max-reduction, reordering). The multi-GPU implementation of the proposed
framework is being investigated.
As for general applicability, the framework is suitable not only for accelerating analogous algorithms of genomic sequence analysis; other domain-specific applications with certain features may also benefit from it. Such features include, for example, data irregularity, a large-scale working set and relatively complex logic with execution dependencies. On the contrary, for some agent-based
problems that usually investigate the behavior of millions
of individuals, such as molecular dynamics or simulation
of spatio-temporal dynamics, our framework may not be
the preferred choice. Actually, the key performance factor is
the innermost loop, corresponding to 3rd, 4th and 5th tiers
of the proposed framework, in which we should only put
necessary operations. In general, assuming that kernels are
bound by the innermost loop, there are several suggestions
related to minimizing the cost inside the innermost loop:
(a) try to hold repeatedly used values in registers to avoid
high-frequency communications between on and off-chip
memory; (b) pre-load data needed by the innermost loop
in outer loops; (c) use either L1 or texture cache to reduce
the overhead of load/store operations; (d) try to use high-throughput arithmetic instructions; and (e) use shuffle instructions rather than shared memory, if applicable.
Ultimately, this work sheds light on a strategy to amplify the horsepower of an individual GPU in an architecture-aware way, while other acceleration efforts usually aim to exploit performance scaling with multiple GPUs.
7   CONCLUSION
In this paper, we propose a five-tiered parallel framework,
CUDAMPF++, to accelerate computationally intensive tasks
of the homologous protein sequence search with profile
HMMs. This framework is based on CUDA-enabled GPUs,
and it aims to fully utilize hardware resources of the GPU
via exploiting finer-grained parallelism (multi-sequence
alignment) compared with its predecessor. In addition, we
introduce a novel idea that improves the performance and
scalability of the proposed framework by sacrificing L1
CHR proactively. As shown by experimental results, the
optimized framework outperforms all existing work, and
it exhibits good consistency in performance regardless of
the variation of query models or protein sequence datasets.
For MSV (SSV) kernels, the peak performance of the CUDAMPF++ is 283.9 (471.7) GCUPS on single K40 GPU,
and impressive speedups ranging from 1.8x (1.7x) to 168.3x
(160.7x) are achieved over the CPU-based implementation
(16 cores, 32 threads). Moreover, further generalization of
the proposed framework is also discussed.
ACKNOWLEDGMENTS
The authors would like to thank the NVIDIA-Professor
partnership for generous donations in carrying out this
research.
REFERENCES
[1] N. Marco S., C. Paolo, T. Andrea, and B. Daniela, "Graphics processing units in bioinformatics, computational biology and systems biology," Briefings in Bioinformatics, p. 116, 2016.
[2] S. Eddy, "Profile hidden Markov models," Bioinformatics, vol. 14, pp. 755-763, 1998.
[3] S. Eddy, "Accelerated profile HMM searches," PLoS Comput Biol, vol. 7, no. 10, 2011, doi:10.1371/journal.pcbi.1002195.
[4] K. Anders, B. Michael, M. I. Saira, S. Kiminen, and H. David, "Hidden Markov models in computational biology: Applications to protein modeling," Journal of Molecular Biology, vol. 235, pp. 1501-1531, 1994.
[5] S. Altschul, W. Gish, W. Miller, E. Myers, and D. Lipman, "Basic local alignment search tool," Journal of Molecular Biology, vol. 215, no. 3, pp. 403-410, 1990.
[6] J. P. Walters, V. Balu, S. Kompalli, and V. Chaudhary, "Evaluating the use of GPUs in liver image segmentation and HMMER database searches," in International Symposium on Parallel & Distributed Processing (IPDPS); Rome. IEEE, 2009, pp. 1-12.
[7] N. Ganesan, R. D. Chamberlain, J. Buhler, and M. Taufer, "Accelerating HMMER on GPUs by implementing hybrid data and task parallelism," in Proceedings of the First ACM Int. Conf. on Bioinformatics and Computational Biology (ACM-BCB); Buffalo. ACM, 2010, pp. 418-421.
[8] R. Maddimsetty, J. Buhler, R. Chamberlain, M. Franklin, and B. Harris, "Accelerator design for protein sequence HMM search," in Proc. 20th ACM International Conference on Supercomputing, 2006.
[9] T. Oliver, L. Y. Yeow, and B. Schmidt, "Integrating FPGA acceleration into HMMer," Parallel Computing, vol. 34, no. 11, pp. 681-691, 2008.
[10] T. Takagi and T. Maruyama, "Accelerating HMMER search using FPGA," in International Conference on Field Programmable Logic and Applications (FPL); Prague. IEEE, 2009, pp. 332-337.
[11] N. Abbas, S. Derrien, S. Rajopadye, and P. Quinton, "Accelerating HMMER on FPGA using parallel prefixes and reductions," in International Conference on Field-Programmable Technology (FPT): Dec. 2010; Beijing. IEEE, 2010, pp. 37-44.
[12] X. Li, W. Han, G. Liu, H. An, M. Xu, W. Zhou, and Q. Li, "A speculative HMMER search implementation on GPU," in 26th IPDPS Workshops and PhD Forum; Shanghai. IEEE, 2012, pp. 73-74.
[13] L. Cheng and G. Butler, "Implementing and accelerating HMMER3 protein sequence search on CUDA-enabled GPU," Ph.D. dissertation, Concordia University, Department of Computer Science and Software Engineering, 2014.
[14] L. Cheng and G. Butler, "Accelerating search of protein sequence databases using CUDA-enabled GPU," in 20th International Conference on Database Systems for Advanced Applications (DASFAA): April 20-23, 2015; Hanoi. IEEE, 2015, pp. 279-298.
[15] M. Ferreira, N. Roma, and L. Russo, "Cache-oblivious parallel SIMD Viterbi decoding for sequence search in HMMER," BMC Bioinformatics, vol. 15, no. 165, 2014.
[16] A. C. de Arajo Neto and N. Moreano, "Acceleration of single- and multiple-segment Viterbi algorithms for biological sequence-profile comparison on GPU," in 21st International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA): July 27-30, 2015; Las Vegas. WORLDCOMP, 2015, pp. 65-71.
[17] H. Jiang and G. Narayan, "Fine-grained acceleration of HMMER 3.0 via architecture-aware optimization on massively parallel processors," in 14th IEEE International Workshop on High Performance Computational Biology (HiCOMB), IPDPSW: May 25-29, 2015; Hyderabad. IEEE, 2015.
[18] H. Jiang and N. Ganesan, "CUDAMPF: a multi-tiered parallel framework for accelerating protein sequence search in HMMER on CUDA-enabled GPU," BMC Bioinformatics, vol. 17, no. 1, p. 106, 2016.
[19] K. Bjarne, "The SSV filter implementation," Mar. 2015.
[20] NVIDIA, "CUDA C programming guide," June 2017. [Online]. Available: http://docs.nvidia.com/pdf/CUDA_C_Programming_Guide.pdf
[21] NVIDIA, "NVIDIA's next generation CUDA compute architecture: Kepler GK110/210," NVIDIA Corporation whitepaper, 2014. [Online]. Available: http://international.download.nvidia.com/pdf/kepler/NVIDIA-Kepler-GK110-GK210-Architecture-Whitepaper.pdf
[22] NVIDIA, "Parallel thread execution ISA," June 2017. [Online]. Available: http://docs.nvidia.com/pdf/ptx_isa_5.0.pdf
[23] FASTA sequence databases, Sep. 2014. [Online]. Available: ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/
[24] NVIDIA, "NVIDIA Tesla GPU accelerators," 2013. [Online]. Available: http://www.nvidia.com/content/tesla/pdf/NVIDIA-Tesla-Kepler-Family-Datasheet.pdf
[25] NVIDIA, "Profiler user's guide," June 2017. [Online]. Available: http://docs.nvidia.com/pdf/CUDA_Profiler_Users_Guide.pdf
[26] Pfam: Protein family database, May 2013. [Online]. Available: ftp://ftp.ebi.ac.uk/pub/databases/Pfam/releases/Pfam27.0/
[27] S. Karl and N. Perry, "The little Redis book," 2012. [Online]. Available: http://openmymind.net/redis.pdf
[28] D. Horn, M. Houston, and P. Hanrahan, "ClawHMMER: A streaming HMMer-search implementation," in Proceedings of the ACM/IEEE Supercomputing Conference. IEEE, 2005.
[29] Y. Ping, A. Hong, X. Mu, L. Gu, L. Xiaoqiang, W. Yaobin, and H. Wenting, "CuHMMer: A load-balanced CPU-GPU cooperative bioinformatics application," in International Conference on High Performance Computing and Simulation (HPCS): June 2010; Caen, France. IEEE, 2010, pp. 24-30.
Hanyu Jiang. Biography text here.
Narayan Ganesan. Biography text here.
Yu-Dong Yao. Biography text here.
arXiv:1506.02361v1 [cs.NE] 8 Jun 2015
Microscopic approach of a time elapsed neural model
Julien Chevallier
Laboratoire J. A. Dieudonné, UMR CNRS 6621, Université de Nice Sophia-Antipolis, Parc
Valrose
06108 Nice Cedex 2, France
[email protected]
Marı́a José Cáceres
Departamento de Matemática Aplicada , Universidad de Granada, Campus de Fuentenueva
E-18071 Granada, Spain
[email protected]
Marie Doumic
UPMC University of Paris 6, JL Lions Lab., 4 place Jussieu
75005 Paris, France
Patricia Reynaud-Bouret
Laboratoire J. A. Dieudonné, UMR CNRS 6621, Université de Nice Sophia-Antipolis, Parc
Valrose
06108 Nice Cedex 2, France
[email protected]
Spike trains are the main components of the information processing in the brain.
To model spike trains, several point processes have been investigated in the literature, and
more macroscopic approaches have also been studied, using partial differential equation
models. The main aim of the present article is to build a bridge between several point
processes models (Poisson, Wold, Hawkes) that have been proved to statistically fit
real spike trains data and age-structured partial differential equations as introduced by
Pakdaman, Perthame and Salort.
Keywords: Hawkes process; Wold process; renewal equation; neural network
AMS Subject Classification:35F15, 35B10, 92B20, 60G57, 60K15
Introduction
In Neuroscience, the action potentials (spikes) are the main components of the real-time information processing in the brain. Indeed, thanks to the synaptic integration,
the membrane voltage of a neuron depends on the action potentials emitted by some
others, whereas if this membrane potential is sufficiently high, there is production
of action potentials.
To access those phenomena, schematically, one can proceed in two ways: extracellularly record in vivo several neurons at the same time, and have access to
simultaneous spike trains (only the list of events corresponding to action potentials) or intracellularly record the whole membrane voltage of only one neuron at a
time, being blind to the nearby neurons.
Many people focus on spike trains. Those data are fundamentally random and
can be modelled easily by time point processes, i.e. random countable sets of points
on R+ . Several point processes models have been investigated in the literature, each
of them reproducing different features of the neuronal reality. The easiest model is
the homogeneous Poisson process, which can only reproduce a constant firing rate
for the neuron, but which, in particular, fails to reproduce refractory periods (biologically, a neuron cannot produce two spikes too closely in time). It
is commonly admitted that this model is too poor to be realistic. Indeed, in such a
model, two points or spikes can be arbitrarily close as soon as their overall frequency
is respected in average. Another more realistic model is the renewal process 37 ,
where the occurrence of a point or spike depends on the previous occurrence. More
precisely, the distribution of delays between spikes (also called inter-spike intervals,
ISI) is given and a distribution, which provides small weights to small delays, is
able to mimic refractory periods. A deeper statistical analysis has shown that Wold
processes show good results with respect to goodness-of-fit tests on real data
sets 38 . Wold processes are point processes for which the next occurrence of a spike
depends on the previous occurrence but also on the previous ISI. From another
point of view, the fact that spike trains are usually non-stationary can be easily modelled by inhomogeneous Poisson processes 43. None of those models reflects one of the main features of spike trains, namely synaptic integration, and there have been various attempts to capture this phenomenon. One of the main models is the
Hawkes model, which has been introduced in 13 and which has been recently shown
to fit several stationary data 40 . Several studies have been done in similar directions
(see for instance 5 ). More recently a vast interest has been shown to generalized
linear models 36 , with which one can infer functional connectivity and which are
just an exponential variant of Hawkes models.
There have also been several models of the full membrane voltage, such as
Hodgkin-Huxley models. It is possible to fit some of those probabilistic stochastic differential equations (SDE) on real voltage data 22 and to use them to estimate
meaningful physiological parameters 18 . However, the lack of simultaneous data
(voltages of different neurons at the same time) prevents these models from being used as
statistical models that can be fitted on network data, to estimate network parameters. A simple SDE model taking synaptic integration into account is the well-known
Integrate-and-Fire (IF) model. Several variations have been proposed to describe
several features of real neural networks such as oscillations 7,8 . In particular, there
exist hybrid IF models including inhomogeneous voltage-driven Poisson processes 21 that are able to mimic real membrane potential data. However, up to our knowledge
and unlike point process models, no statistical tests have been applied to show
that any of the previous variations of the IF model fit real network data.
Both approaches, SDE and point processes, are microscopic descriptions, where
random noise explains the intrinsic variability. Many authors have argued that there
must be some more macroscopic approach describing huge neural networks as a
whole, using PDE formalism 15,42. Some authors have already been able to establish a link between PDE approaches as the macroscopic system and the SDE approach (in particular IF models) as the microscopic model 39,30,26. Another macroscopic point
of view on spike trains is proposed by Pakdaman, Perthame and Salort in a series
of articles 31,32,33 . It uses a nonlinear age-structured equation to describe the spikes
density. Adopting a population view, they aim at studying relaxation to equilibrium or spontaneous periodic oscillations. Their model is justified by a qualitative,
heuristic approach. As many other models, their model shows several qualitative
features such as oscillations that make it quite plausible for real networks, but once
again there is no statistical proof of it, up to our knowledge.
In this context, the main purpose of the present article is to build a bridge between several point processes models that have been proved to statistically fit real
spike trains data and age structured PDE of the type of Pakdaman, Perthame and
Salort. The point processes are the microscopic models, the PDE being their mesomacroscopic counterpart. In this sense, it extends PDE approaches for IF models to
models that statistically fit true spike trains data. In the first section, we introduce
Pakdaman, Perthame and Salort PDE (PPS) via its heuristic informal and microscopic description, which is based on IF models. Then, in Section 2, we develop
the different point process models, quite informally, to draw the main heuristic
correspondences between both approaches. In particular, we introduce the conditional intensity of a point process and a fundamental construction, called Ogata’s
thinning 29 , which allows a microscopic understanding of the dynamics of a point
process. Thanks to Ogata’s thinning, in Section 3, we have been able to rigorously
derive a microscopic random weak version of (PPS) and to propose its expectation deterministic counterpart. An independent and identically distributed (i.i.d)
population version is also available. Several examples of applications are discussed
in Section 4. To facilitate reading, technical results and proofs are included in two
appendices. The present work is clearly just a first step to link point processes and PDEs:
there are much more open questions than answered ones and this is discussed in the
final conclusion. However, we think that this can be fundamental to acquire a deeper
understanding of spike train models, their advantages as well as their limitations.
1. Synaptic integration and (PPS) equation
Based on the intuition that every neuron in the network should behave in the same
way, Pakdaman, Perthame and Salort proposed in 31 a deterministic PDE denoted
(PPS) in the sequel. The origin of this PDE is the classical (IF) model. In this section
we describe the link between the (IF) microscopic model and the mesoscopic (PPS)
model, the main aim being to show thereafter the relation between (PPS) model
and other natural microscopic models for spike trains: point processes.
1.1. Integrate-and-fire
Integrate-and-fire models describe the time evolution of the membrane potential, V(t), by means of ordinary differential equations as follows

C_m \frac{dV}{dt} = -g_L (V - V_L) + I(t),    (1.1)

where C_m is the capacitance of the membrane, g_L is the leak conductance and V_L is the leak reversal potential. If V(t) exceeds a certain threshold θ, the neuron fires / emits an action potential (spike) and V(t) is reset to V_L. The synaptic current I(t) takes into account the fact that other presynaptic neurons fire and excite the neuron of interest, whose potential is given by V(t).
As stated in 31, the origin of the (PPS) equation comes from 35, where the explicit solution of a classical IF model such as (1.1) has been discussed. To be more precise, the membrane voltage of one neuron at time t is described by:

V(t) = V_r + (V_L - V_r)\, e^{-(t-T)/\tau_m} + \int_T^t h(t-u)\, N_{input}(du),    (1.2)

where V_r is the resting potential satisfying V_L < V_r < θ, T is the last spike emitted by the considered neuron, τ_m is the time constant of the system (normally τ_m = C_m/g_L), h is the excitatory post synaptic potential (EPSP) and N_{input} is the sum of Dirac masses at each spike of the presynaptic neurons. Since after firing, V(t) is reset to V_L < V_r, there is a refractory period when the neuron is less excitable than at rest. The time constant τ_m indicates whether the next spike can occur more or less rapidly. The other main quantity, \int_T^t h(t-u)\, N_{input}(du), is the synaptic integration term.
In 35, they consider a whole random network of such IF neurons and look at the behavior of this model, where the only randomness is in the network. In many other studies 7,8,9,11,26,42,30, IF models such as (1.1) are considered to finally obtain other systems of partial differential equations (different from (PPS)) describing neural network behavior. In these studies, each presynaptic neuron is assumed to fire as an independent Poisson process and, via a diffusion approximation, the synaptic current is then approximated by a continuous-time Ornstein-Uhlenbeck stochastic process.
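To make the explicit solution (1.2) concrete, here is a minimal Python sketch (an editorial illustration, not part of the original article) that simulates the voltage of a single IF neuron driven by presynaptic Poisson spikes; the numerical values and the exponential EPSP kernel h are illustrative assumptions only.

```python
import numpy as np

# Illustrative parameters (assumptions, not values taken from the article)
V_L, V_r, theta = -70.0, -65.0, -50.0     # leak, resting and threshold potentials (mV), V_L < V_r < theta
tau_m = 10.0                              # membrane time constant (ms)
h = lambda x: 2.5 * np.exp(-x / 5.0)      # assumed exponential EPSP kernel (mV)

rng = np.random.default_rng(0)
T_end, input_rate = 200.0, 2.0            # duration (ms) and presynaptic Poisson rate (1/ms)
inputs = np.sort(rng.uniform(0.0, T_end, rng.poisson(input_rate * T_end)))  # N_input spike times

def voltage(t, T_last):
    """Explicit solution (1.2): relaxation from the last reset plus the synaptic integration term."""
    mask = (inputs > T_last) & (inputs <= t)
    return V_r + (V_L - V_r) * np.exp(-(t - T_last) / tau_m) + h(t - inputs[mask]).sum()

spikes, T_last = [], 0.0
for t in np.arange(0.0, T_end, 0.1):      # crude time stepping, for illustration only
    if voltage(t, T_last) >= theta:
        spikes.append(t)                  # the neuron fires and its memory is reset
        T_last = t

print("number of spikes:", len(spikes))
```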
1.2. The (PPS) equation
The deterministic PDE proposed by Pakdaman, Perthame and Salort, whose origin
is also the microscopic IF model (1.2), is the following:

\begin{cases}
\dfrac{\partial n(s,t)}{\partial t} + \dfrac{\partial n(s,t)}{\partial s} + p(s, X(t))\, n(s,t) = 0, \\
m(t) := n(0,t) = \displaystyle\int_0^{+\infty} p(s, X(t))\, n(s,t)\, ds.
\end{cases}    (PPS)
In this equation, n(s, t) represents a probability density of neurons at time t having
discharged at time t − s. Therefore, s represents the time elapsed since the last
June 9, 2015
0:43
WSPC/INSTRUCTION FILE
PDE˙Hawkes˙Marie11
Microscopic approach of a time elapsed neural model
5
discharge. The fact that the equation is an elapsed time structured equation is
natural, because the IF model (1.2) clearly only depends on the time since the last
spike. More informally, the variable s represents the "age" of the neuron.
The first equation of the system (PPS) represents a pure transport process and
means that as time goes by, neurons of age s and past given by X(t) are either
aging linearly or reset to age 0 with rate p (s, X (t)).
The second equation of (PPS) describes the fact that when neurons spike, the
age (the elapsed time) returns to 0. Therefore, n(0, t) depicts the density of neurons
undergoing a discharge at time t and it is denoted by m(t). As a consequence of
this boundary condition, for n at s = 0, the following conservation law is obtained:
\int_0^{\infty} n(s,t)\, ds = \int_0^{\infty} n(s,0)\, ds.

This means that if n(·, 0) is a probability density, then n(·, t) can be interpreted as a density at each time t. Denoting by dt the Lebesgue measure, and since m(t) is the density of firing neurons at time t in (PPS), m(t)dt can also be interpreted as the limit of N_{input}(dt) in (1.2) when the population of neurons becomes continuous.
The system (PPS) is nonlinear since the rate p (s, X(t)) depends on n(0, t) by
means of the quantity X(t):
X(t) = \int_0^t h(u)\, m(t-u)\, du = \int_0^t h(u)\, n(0, t-u)\, du.    (1.3)
The quantity X(t) represents the interactions between neurons. It "takes into account the averaged propagation time for the ionic pulse in this network" 31. More precisely, with respect to the IF models (1.2), this is the synaptic integration term, once the population becomes continuous. The only difference is that in (1.2) the memory is cancelled once the last spike has occurred, and this is not the case here. However, informally, both quantities have the same interpretation. Note nevertheless that in 31 the function h can be much more general than the h of the IF models, which clearly corresponds to the EPSP. From now on and in the rest of the paper, h is just a general non-negative function without forcing the connection with the EPSP.
The larger p(s, X(t)), the more likely neurons of age s and past X(t) are to fire. Most of the time (but it is not a requisite), p is assumed to be less than 1 and is interpreted as the probability that neurons of age s fire. However, as shown in Section 3 and as interpreted in many population structured equations 14,19,34, p(s, X(t)) is closer to a hazard rate, i.e. a positive quantity such that p(s, X(t))dt is informally the probability to fire given that the neuron has not fired yet. In particular, it need not be bounded by 1 and does not need to integrate to 1. A toy example is obtained if p(s, X(t)) = λ > 0, where a steady state solution is n(s,t) = λ e^{-λ s} \mathbf{1}_{s \ge 0}: this is the density of an exponential variable with parameter λ.
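This toy example can be checked numerically. The following sketch (an editorial illustration with an arbitrary initial profile) discretizes (PPS) by a first-order upwind scheme in the case of a constant rate p(s, X(t)) = λ, and measures the relaxation towards the exponential steady state λ e^{-λs}.

```python
import numpy as np

lam = 1.0                        # constant rate p(s, X(t)) = lambda (toy example)
ds = dt = 0.01                   # grid steps; taking dt = ds makes the transport step exact
s = np.arange(0.0, 20.0, ds)     # age grid
n = np.exp(-((s - 5.0) ** 2))    # arbitrary initial age profile
n /= n.sum() * ds                # normalise to a probability density in s

for _ in range(int(10.0 / dt)):          # integrate (PPS) up to t = 10
    m = lam * n.sum() * ds               # boundary term m(t) = int_0^inf p(s, X(t)) n(s, t) ds
    n[1:] = n[:-1] * (1.0 - lam * dt)    # transport in s with loss at rate lambda
    n[0] = m                             # reset to age 0: n(0, t) = m(t)

steady = lam * np.exp(-lam * s)
print("L1 distance to the steady state:", np.abs(n - steady).sum() * ds)
```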
However, based on the interpretation of p(s, X(t)) as a probability bounded by 1, one of the main models that Pakdaman, Perthame and Salort consider is p(s, X(t)) = \mathbf{1}_{s \ge \sigma(X(t))}. This again can be easily interpreted by looking at (1.2).
Indeed, since in the IF models the spike happens when the threshold θ is reached, one can consider that p(s, X(t)) should be equal to 1 whenever

V(t) = V_r + (V_L - V_r)\, e^{-(t-T)/\tau_m} + X(t) \ge \theta,

and 0 otherwise. Since V_L − V_r < 0, p(s, X(t)) = 1 is indeed equivalent to s = t − T being larger than some decreasing function of X(t). This has the double advantage of giving a formula for the refractory period (σ(X(t))) and of modelling excitatory systems: the refractory period decreases when the whole firing rate increases via X(t), and this makes the neurons fire even more. It is for this particular case that Pakdaman, Perthame and Salort have shown the existence of oscillatory behavior 32.
Another important parameter in the (PPS) model, introduced in 31, is J, which can be seen with our formalism as \int h and which describes the network connectivity or the strength of the interaction. In 31 it has been proved that, for highly or weakly connected networks, the (PPS) model exhibits relaxation to steady state, and periodic solutions have also been numerically observed for moderately connected networks. The authors in 32 have quantified the regime where relaxation to a stationary solution occurs in terms of J and described periodic solutions for intermediate values of J.
Recently, in 33, the (PPS) model has been extended by including a fragmentation term, which describes the adaptation and fatigue of the neurons. In this sense, this new term incorporates the past activity of the neurons. For this new model, in the linear case there is exponential convergence to the steady states, while in the weakly nonlinear case a total desynchronization in the network is proved. Moreover, for greater nonlinearities, synchronization can again be numerically observed.
2. Point processes and conditional intensities as models for spike
trains
We first start by quickly reviewing the main basic concepts and notations of point
processes, in particular, conditional intensities and Ogata’s thinning 29 . We refer
the interested reader to 3 for exhaustiveness and to 6 for a much more condensed
version, with the main useful notions.
2.1. Counting processes and conditional intensities
We focus on locally finite point processes on R, equipped with the Borel σ-algebra B(R).
Definition 2.1 (Locally finite point process). A locally finite point process N
on R is a random set of points such that it has almost surely (a.s.) a finite number
of points in finite intervals. Therefore, associated to N there is an ordered sequence
of extended real-valued random times (T_z)_{z \in \mathbb{Z}}: \cdots \le T_{-1} \le T_0 \le 0 < T_1 \le \cdots.
For a measurable set A, N_A denotes the number of points of N in A. This is a
random variable with values in N ∪ {∞}.
Definition 2.2 (Counting process associated to a point process). The process on R_+ defined by t \mapsto N_t := N_{(0,t]} is called the counting process associated to the point process N.
The natural and the predictable filtrations are fundamental for the present work.

Definition 2.3 (Natural filtration of a point process). The natural filtration of N is the family (F_t^N)_{t \in \mathbb{R}} of σ-algebras defined by F_t^N = \sigma(N \cap (-\infty, t]).

Definition 2.4 (Predictable filtration of a point process). The predictable filtration of N is the family of σ-algebras (F_{t-}^N)_{t \in \mathbb{R}} defined by F_{t-}^N = \sigma(N \cap (-\infty, t)).
The intuition behind this concept is that F_t^N contains all the information given by the point process at time t. In particular, it contains the information whether t is a point of the process or not, while F_{t-}^N only contains the information given by the point process strictly before t. Therefore, it does not contain (in general) the information whether t is a point or not. In this sense, F_{t-}^N represents (the information contained in) the past.
Under some rather classical conditions 3, which are always assumed to be satisfied here, one can associate to (N_t)_{t \ge 0} a stochastic intensity λ(t, F_{t-}^N), which is a non-negative random quantity. The notation λ(t, F_{t-}^N) for the intensity refers to the predictable version of the intensity associated to the natural filtration, and (N_t - \int_0^t λ(u, F_{u-}^N)\, du)_{t \ge 0} forms a local martingale 3. Informally, λ(t, F_{t-}^N)dt represents the probability to have a new point in the interval [t, t+dt) given the past. Note that λ(t, F_{t-}^N) should not be understood as a function, in the same way as a density is for random variables. It is a "recipe" explaining how the probability to find a new point at time t depends on the past configuration: since the past configuration depends on its own past, this is closer to a recursive formula. In this respect, the intensity should obviously depend on N \cap (-\infty, t) and not on N \cap (-\infty, t] to predict the occurrence at time t, since we cannot know whether t is already a point or not.
The distribution of the point process N on R is completely characterized by the knowledge of the intensity λ(t, F_{t-}^N) on R_+ and the distribution of N_- = N \cap R_-, which is denoted by P_0 in the sequel. The information about P_0 is necessary since each point of N may depend on the occurrence of all the previous points: if for all t > 0 one knows the "recipe" λ(t, F_{t-}^N) that gives the probability of a new point at time t given the past configuration, one still needs to know the distribution of N_- to obtain the whole process.
Two main assumptions are used, depending on the type of results we seek:

(A^{L^1}_{λ,loc,a.s.}) for any T ≥ 0, \int_0^T λ(t, F_{t-}^N)\, dt is finite a.s.;

(A^{L^1}_{λ,loc,exp}) for any T ≥ 0, E[\int_0^T λ(t, F_{t-}^N)\, dt] is finite.

Clearly (A^{L^1}_{λ,loc,exp}) implies (A^{L^1}_{λ,loc,a.s.}). Note that (A^{L^1}_{λ,loc,a.s.}) implies non-explosion in finite time for the counting process (N_t).
Definition 2.5 (Point measure associated to a point process). The point measure associated to N is denoted by N(dt) and defined by N(dt) = \sum_{i \in \mathbb{Z}} \delta_{T_i}(dt), where \delta_u is the Dirac mass at u.
By analogy with (PPS), and since points of point processes correspond to spikes
(or times of discharge) for the considered neuron in spike train analysis, N(dt) is the microscopic equivalent of the distribution of discharging neurons m(t)dt. Following this analogy, and since T_{N_t} is the last point less than or equal to t for every t ≥ 0, the age S_t at time t is defined by S_t = t - T_{N_t}. In particular, if t is a point of N, then S_t = 0. Note that S_t is F_t^N-measurable for every t ≥ 0 and therefore S_0 = -T_0 is F_0^N-measurable. To define an age at time t = 0, one assumes that

(A_{T_0}) there exists a first point before 0 for the process N_-, i.e. -∞ < T_0.

As we have remarked before, the conditional intensity should depend on N \cap (-\infty, t). Therefore, it cannot be a function of S_t, since S_t informs us whether t is a point or not. That is the main reason for considering the F_{t-}^N-measurable variable

S_{t-} = t - T_{N_{t-}},    (2.1)

where T_{N_{t-}} is the last point strictly before t (see Figure 1). Note also that knowing (S_{t-})_{t \ge 0} or (N_t)_{t \ge 0} is completely equivalent given F_0^N.
The last and most crucial equivalence between (PPS) and the present point process set-up consists in noting that the quantities p(s, X(t)) and λ(t, F_{t-}^N) have informally the same meaning: they both represent a firing rate, i.e. both give the rate of discharge as a function of the past. This dependence is made more explicit in p(s, X(t)) than in λ(t, F_{t-}^N).
2.2. Examples
Let us review the basic point process models of spike trains and see what kind of analogy is likely to exist between both models ((PPS) equation and point processes). These informal analogies are possibly exact mathematical results (see Section 4).

Homogeneous Poisson process This is the simplest case, where λ(t, F_{t-}^N) = λ, with λ a fixed positive constant representing the firing rate. There is no dependence on time t (it is homogeneous) and no dependence with respect to the past. This case should be equivalent to p(s, X(t)) = λ in (PPS). This can be made even more explicit. Indeed, in the case where the Poisson process exists on the whole real line (stationary case), it is easy to see that

P(S_{t-} > s) = P(N_{[t-s,t)} = 0) = \exp(-λ s),

meaning that the age S_{t-} obeys an exponential distribution with parameter λ, i.e. the steady state of the toy example developed for (PPS) when p(s, X(t)) = λ.
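As a quick sanity check of this correspondence (again an editorial sketch with arbitrary parameters), one can simulate a stationary Poisson process and verify that the age S_{t-} at a fixed time is exponentially distributed:

```python
import numpy as np

lam, t, n_runs = 2.0, 50.0, 20000
rng = np.random.default_rng(1)
ages = np.empty(n_runs)
for i in range(n_runs):
    # Poisson process on (-100, t): the window is long enough for a point before t to exist
    pts = rng.uniform(-100.0, t, rng.poisson(lam * (t + 100.0)))
    ages[i] = t - pts.max()              # age S_{t-} at time t
print("empirical P(S > 1):", (ages > 1.0).mean(), " exp(-lam):", np.exp(-lam))
```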
Inhomogeneous Poisson process To model non-stationarity, one can use λ(t, F_{t-}^N) = λ(t), which only depends on time. This case should be equivalent to the replacement of p(s, X(t)) in (PPS) by λ(t).
Renewal process This model is very useful to take the refractory period into account. It corresponds to the case where the ISIs (delays between spikes) are independent and identically distributed (i.i.d.) with a certain given density ν on R_+.
The associated hazard rate is

f(s) = \frac{\nu(s)}{\int_s^{+\infty} \nu(x)\, dx},

when \int_s^{+\infty} \nu(x)\, dx > 0. Roughly speaking, f(s)ds is the probability that a neuron spikes with age s given that its age is larger than s. In this case, considering the set of spikes as the point process N, it is easy to show (see Appendix B.1) that its corresponding intensity is λ(t, F_{t-}^N) = f(S_{t-}), which only depends on the age. One can also show quite easily that the process (S_{t-})_{t>0}, which is equal to (S_t)_{t>0} almost everywhere (a.e.), is a Markovian process in time. This renewal setting should be equivalent in the (PPS) framework to p(s, X(t)) = f(s).
Note that many people consider IF models (1.2) with Poissonian inputs, with or without additive white noise. In both cases, the system erases all memory after each spike and therefore the ISIs are i.i.d. Therefore, as long as we are only interested in the spike trains and their point process models, those IF models are just a particular case of renewal processes 8,10,17,35.
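A minimal illustration of this renewal correspondence (an editorial sketch; the Gamma ISI density is an arbitrary example with a soft refractory period): compute the hazard rate f(s) = ν(s)/∫_s^∞ ν(x) dx and simulate the spike train directly from the i.i.d. ISIs.

```python
import numpy as np
from scipy import stats

nu = stats.gamma(a=3.0, scale=1.0)       # example ISI density nu with a soft refractory period
s = np.linspace(0.0, 10.0, 400)
hazard = nu.pdf(s) / nu.sf(s)            # f(s) = nu(s) / int_s^infty nu(x) dx

rng = np.random.default_rng(2)
spikes = np.cumsum(nu.rvs(size=10000, random_state=rng))   # renewal spike train
t = 0.9 * spikes[-1]
age = t - spikes[spikes < t].max()       # age S_{t-} of this realization at time t
print("hazard near age 0:", hazard[1], " hazard at age ~5:", hazard[200], " age at time t:", age)
```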
Wold process and more general structures Let A_t^1 be the delay (ISI) between the last point and the occurrence just before (see also Figure 1), A_t^1 = T_{N_{t-}} - T_{N_{t-}-1}. A Wold process 24,16 is then characterized by λ(t, F_{t-}^N) = f(S_{t-}, A_t^1). This model has been matched to several real data sets thanks to goodness-of-fit tests 38 and is therefore one of our main examples, together with the Hawkes process case discussed next. One can show in this case that the successive ISIs form a Markov chain of order 1 and that the continuous-time process (S_{t-}, A_t^1) is also Markovian.
This case should be equivalent to the replacement of p(s, X(t)) in (PPS) by f(s, a_1), with a_1 denoting the delay between the two previous spikes. Naturally, in this case one should expect a PDE of higher dimension with third variable a_1.
More generally, one could define

A_t^k = T_{N_{t-}-(k-1)} - T_{N_{t-}-k},    (2.2)

and point processes with intensity λ(t, F_{t-}^N) = f(S_{t-}, A_t^1, ..., A_t^k). Those processes satisfy more generally that their ISIs form a Markov chain of order k and that the continuous-time process (S_{t-}, A_t^1, ..., A_t^k) is also Markovian (see Appendix B.2).
Remark 2.1. The dynamics of the successive ages is pretty simple. On the one hand, the dynamics of the vector of the successive ages (S_{t-}, A_t^1, ..., A_t^k)_{t>0} is deterministic between two jumping times: the first coordinate increases with rate 1. On the other hand, the dynamics at any jumping time T is given by the following shift:

the age process goes to 0, i.e. S_T = 0,
the first delay becomes the age, i.e. A^1_{T+} = S_{T-},    (2.3)
the other delays are shifted, i.e. A^i_{T+} = A^{i-1}_T for all i ≤ k.
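The shift (2.3) can be read directly off a spike train; the small helper below (purely illustrative, not from the article) computes the vector (S_{t-}, A_t^1, ..., A_t^k) before and just after a jump.

```python
def age_vector(t, spikes, k):
    """Return [S_{t-}, A_t^1, ..., A_t^k] for a sorted list of spike times with at least k+1 points before t."""
    past = [T for T in spikes if T < t]                            # points strictly before t
    s = t - past[-1]                                               # age S_{t-}
    delays = [past[-i] - past[-i - 1] for i in range(1, k + 1)]    # A_t^1, ..., A_t^k
    return [s] + delays

spikes = [0.5, 1.3, 2.0, 3.4]
print(age_vector(3.3, spikes, k=2))   # before the jump at T = 3.4
print(age_vector(3.5, spikes, k=2))   # just after: the old age became A^1 and the delays shifted
```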
Hawkes processes The most classical setting is the linear (univariate) Hawkes process, which corresponds to

λ(t, F_{t-}^N) = \mu + \int_{-\infty}^{t-} h(t-u)\, N(du),

where the positive parameter \mu is called the spontaneous rate and the non-negative function h, with support in R_+, is called the interaction function, which is generally assumed to satisfy \int_{R_+} h < 1 to guarantee the existence of a stationary version 16. This model has also been matched to several real neuronal data sets thanks to goodness-of-fit tests 40. Since it can mimic synaptic integration, as explained below, it represents the main example of the present work.
In the case where T_0 tends to -∞, this is equivalent to saying that there is no point on the negative half-line, and in this case one can rewrite

λ(t, F_{t-}^N) = \mu + \int_0^{t-} h(t-u)\, N(du).

By analogy between N(dt) and m(t)dt, one sees that \int_0^{t-} h(t-u)\, N(du) is indeed the analogue of X(t), the synaptic integration in (1.3). So one could expect that the PDE analogue is given by p(s, X(t)) = \mu + X(t). In Section 4, we show that this does not hold stricto sensu, whereas the other analogues work well.
Note that this model also shares some links with IF models. Indeed, the formula for the intensity is close to the formula for the voltage (1.2), with the same flavor for the synaptic integration term. The main difference comes from the fact that when the voltage reaches a certain threshold, the IF model fires deterministically, whereas the higher the intensity, the more likely is the spike for the Hawkes model, but without certainty. In this sense Hawkes models seem closer to (PPS) since, as we discussed before, the term p(s, X(t)) is closer to a hazard rate and never imposes deterministically the presence of a spike.
To model inhibition (see 41 for instance), one can use functions h that may take negative values, and in this case λ(t, F_{t-}^N) = \left( \mu + \int_{-\infty}^{t-} h(t-u)\, N(du) \right)_+, which should correspond to p(s, X(t)) = (\mu + X(t))_+. Another possibility is λ(t, F_{t-}^N) = \exp\left( \mu + \int_{-\infty}^{t-} h(t-u)\, N(du) \right), which is inspired by the generalized linear model as used by 36 and which should correspond to p(s, X(t)) = \exp(\mu + X(t)).
Note finally that Hawkes models in Neuroscience (and their variants) are usually multivariate, meaning that they model interactions between spike trains thanks to interaction functions between point processes, each process representing a neuron. To keep the present analogy as simple as possible, we do not deal with those multivariate models in the present article. Some open questions in this direction are presented in the conclusion.
2.3. Ogata’s thinning algorithm
To turn the analogy between p(s, X(t)) and λ(t, F_{t-}^N) into a rigorous result at the PDE level, we need to understand the intrinsic dynamics of the point process. This dynamics is often not explicitly described in the literature (see e.g. the reference book by Brémaud 3) because martingale theory provides a nice mathematical setting in which one can perform all the computations. However, when one wants to simulate point processes based on the knowledge of their intensity, there is indeed a dynamics that is required to obtain a practical algorithm. This method has been described at first by Lewis in the Poisson setting 25 and generalized by Ogata in 29. Although there is a sketch of proof in 29, we have been unable to find any complete mathematical proof of this construction in the literature, and we propose a full and mathematically complete version of this proof with minimal assumptions in Appendix B.4. Let us just informally describe here how this construction works.
The principle consists in assuming that an external homogeneous Poisson process Π of intensity 1 on R_+^2 is given, with associated point measure Π(dt, dx) = \sum_{(T,V) \in \Pi} \delta_{(T,V)}(dt, dx). This means in particular that

E[\Pi(dt, dx)] = dt\, dx.    (2.4)
Once a realisation of N_- is fixed, which implies that F_0^N is known and which can be seen as an initial condition for the dynamics, the construction of the process N on R_+ only depends on Π.
More precisely, if we know the intensity λ(t, F_{t-}^N) in the sense of the "recipe" that explicitly depends on t and N \cap (-\infty, t), then once a realisation of Π and of N_- is fixed, the dynamics to build a point process N with intensity λ(t, F_{t-}^N) for t ∈ R_+ is purely deterministic. It consists (see also Figure 1) in successively projecting on the abscissa axis the points that are below the graph of λ(t, F_{t-}^N). Note that a point projection may change the shape of λ(t, F_{t-}^N) just after the projection. Therefore the graph of λ(t, F_{t-}^N) evolves thanks to the realization of Π. For a more mathematical description, see Theorem B.11 in Appendix B.4. Note in particular that the construction ends on any finite interval [0, T] a.s. if (A^{L^1}_{λ,loc,a.s.}) holds.
Then the point process N, result of Ogata's thinning, is given by the union of N_- on R_- and the projected points on R_+. It admits the desired intensity λ(t, F_{t-}^N) on R_+. Moreover, the point measure can be represented by

\mathbf{1}_{t>0}\, N(dt) = \sum_{(T,X) \in \Pi,\ X \le λ(T, F_{T-}^N)} \delta_T(dt) = \left( \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx) \right).    (2.5)

NB: The last equality comes from the following convention. If \delta_{(c,d)} is a Dirac mass at (c,d) ∈ R_+^2, then \int_{x=a}^{b} \delta_{(c,d)}(dt, dx), as a distribution in t, is \delta_c(dt) if d ∈ [a,b] and 0 otherwise.
Fig. 1. Example of Ogata's thinning algorithm on a linear Hawkes process with interaction function h(u) = e^{-u} and no point before 0 (i.e. N_- = ∅). The crosses represent a realization of Π, a Poisson process of intensity 1 on R_+^2. The blue piecewise continuous line represents the intensity λ(t, F_{t-}^N), which starts at 0 with value µ and then jumps each time a point of Π is present underneath it. The resulting Hawkes process (with intensity λ(t, F_{t-}^N)) is given by the blue circles. The age S_{t-} at time t and the quantity A_t^1 are also represented.
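The construction of Figure 1 can be reproduced with a few lines of Python. The sketch below (an editorial illustration) simulates a linear Hawkes process by thinning; the exponential kernel is rescaled so that ∫h = 0.8 < 1, and the fact that the intensity decreases between accepted points is what makes the current intensity a valid local upper bound.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, T_end = 1.0, 200.0
h = lambda u: 0.8 * np.exp(-u)                   # decreasing kernel with ||h||_1 = 0.8 < 1

def intensity(t, points):
    """lambda(t, F_{t-}^N) for the linear Hawkes process with no point before 0."""
    past = points[points < t]
    return mu + h(t - past).sum()

points, t = np.array([]), 0.0
while t < T_end:
    lam_bar = intensity(t, points)               # valid upper bound until the next accepted point (h decreases)
    t += rng.exponential(1.0 / lam_bar)          # candidate abscissa of a point of Pi placed under the bound
    if t < T_end and rng.uniform(0.0, lam_bar) <= intensity(t, points):
        points = np.append(points, t)            # the point of Pi falls below the graph: it is projected

print(len(points), "points; empirical rate", round(len(points) / T_end, 2),
      "vs stationary rate mu / (1 - 0.8) =", mu / 0.2)
```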
3. From point processes to PDE
Let us now present our main results. Informally, we want to describe the evolution of the distribution in s of the age S_t according to the time t. Note that at fixed time t, S_{t-} = S_t a.s., and therefore it is the same as the distribution of S_{t-}. We prefer to study S_{t-} since its predictability, i.e. its dependence on N \cap (-\infty, t), makes all definitions proper from a microscopic/random point of view. Microscopically, the interest lies in the evolution of \delta_{S_{t-}}(ds) as a random measure. But it should also be seen as a distribution in time, for equations like (PPS) to make sense. Therefore, we need to go from a distribution only in s to a distribution in both s and t. Then one can either focus on the microscopic level, where the realisation of Π in Ogata's thinning construction is fixed, or focus on the expectation of such a distribution.
3.1. A clean setting for bivariate distributions in age and time
In order to obtain, from a point process, the (PPS) system, we need to define bivariate distributions in s and t and marginals (at least in s), in such a way that weak solutions of (PPS) are correctly defined. Since we want to possibly consider more than two variables for generalized Wold processes, we consider the following definitions. In the following, \langle \varphi, \nu \rangle denotes the integral of the integrable function \varphi with respect to the measure \nu.
Let k ∈ N. For every bounded measurable function \varphi of (t, s, a_1, ..., a_k) ∈ R_+^{k+2}, one can define

\varphi^{(1)}_t(s, a_1, ..., a_k) = \varphi(t, s, a_1, ..., a_k)  and  \varphi^{(2)}_s(t, a_1, ..., a_k) = \varphi(t, s, a_1, ..., a_k).

Let us now define two sets of regularities for \varphi.
M_{c,b}(R_+^{k+2}): The function \varphi belongs to M_{c,b}(R_+^{k+2}) if and only if
• \varphi is a measurable bounded function,
• there exists T > 0 such that for all t > T, \varphi^{(1)}_t = 0.

C^{\infty}_{c,b}(R_+^{k+2}): The function \varphi belongs to C^{\infty}_{c,b}(R_+^{k+2}) if and only if
• \varphi is continuous, uniformly bounded,
• \varphi has uniformly bounded derivatives of every order,
• there exists T > 0 such that for all t > T, \varphi^{(1)}_t = 0.
Let (\nu_1^t)_{t \ge 0} be a (measurable w.r.t. t) family of positive measures on R_+^{k+1}, and (\nu_2^s)_{s \ge 0} be a (measurable w.r.t. s) family of positive measures on R_+^{k+1}. Those families satisfy the Fubini property if

(P_{Fubini}) for any \varphi ∈ M_{c,b}(R_+^{k+2}), \int \langle \varphi^{(1)}_t, \nu_1^t \rangle\, dt = \int \langle \varphi^{(2)}_s, \nu_2^s \rangle\, ds.

In this case, one can define \nu, measure on R_+^{k+2}, as the unique measure on R_+^{k+2} such that for any test function \varphi in M_{c,b}(R_+^{k+2}),

\langle \varphi, \nu \rangle = \int \langle \varphi^{(1)}_t, \nu_1^t \rangle\, dt = \int \langle \varphi^{(2)}_s, \nu_2^s \rangle\, ds.

To simplify notations, for any such measure \nu(dt, ds, da_1, ..., da_k), we define

\nu(t, ds, da_1, ..., da_k) = \nu_1^t(ds, da_1, ..., da_k),   \nu(dt, s, da_1, ..., da_k) = \nu_2^s(dt, da_1, ..., da_k).
In the sequel, we need in particular a measure \eta_x on R_+^2, defined for any real x by its marginals, which satisfy (P_{Fubini}), as follows:

∀ t, s ≥ 0,   \eta_x(t, ds) = \delta_{t-x}(ds)\, \mathbf{1}_{t-x>0}  and  \eta_x(dt, s) = \delta_{s+x}(dt)\, \mathbf{1}_{s \ge 0}.    (3.1)

It represents a Dirac mass "travelling" on the positive diagonal originating from (x, 0).
3.2. The microscopic construction of a random PDE
For a fixed realization of Π, we therefore want to define a random distribution U(dt, ds) in terms of its marginals, thanks to (P_{Fubini}), such that U(t, ds) represents the distribution at time t > 0 of the age S_{t-}, i.e.

∀ t > 0,   U(t, ds) = \delta_{S_{t-}}(ds),    (3.2)

and satisfies equations similar to (PPS). This is done in the following proposition.

Proposition 3.1. Let Π, F_0^N and an intensity (λ(t, F_{t-}^N))_{t>0} be given as in Section 2.3 and satisfying (A_{T_0}) and (A^{L^1}_{λ,loc,a.s.}). On the event Ω of probability 1 where Ogata's thinning is well defined, let N be the point process on R that is constructed thanks to Ogata's thinning, with associated predictable age process (S_{t-})_{t>0} and whose points are denoted (T_i)_{i \in \mathbb{Z}}. Let the (random) measure U and its corresponding marginals be defined by

U(dt, ds) = \sum_{i=0}^{+\infty} \eta_{T_i}(dt, ds)\, \mathbf{1}_{0 \le t \le T_{i+1}}.    (3.3)
Then, on Ω, U satisfies (P_{Fubini}) and U(t, ds) = \delta_{S_{t-}}(ds). Moreover, on Ω, U is a solution in the weak sense of the following system:

\frac{\partial}{\partial t} U(dt, ds) + \frac{\partial}{\partial s} U(dt, ds) + \left( \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx) \right) U(t, ds) = 0,    (3.4)

U(dt, 0) = \int_{s \in R_+} \left( \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx) \right) U(t, ds) + \delta_0(dt)\, \mathbf{1}_{T_0 = 0},    (3.5)

U(0, ds) = \delta_{-T_0}(ds)\, \mathbf{1}_{T_0 < 0} = U^{in}(ds)\, \mathbf{1}_{s > 0},    (3.6)

where U^{in}(ds) = \delta_{-T_0}(ds). The weak sense means that for any \varphi ∈ C^{\infty}_{c,b}(R_+^2),

\int_{R_+ \times R_+} \left( \frac{\partial}{\partial t} \varphi(t,s) + \frac{\partial}{\partial s} \varphi(t,s) \right) U(dt, ds)
+ \int_{R_+ \times R_+} [\varphi(t,0) - \varphi(t,s)] \left( \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx) \right) U(t, ds) + \varphi(0, -T_0) = 0.    (3.7)
The proof of Proposition 3.1 is included in Appendix A.1. Note also that, thanks to the Fubini property, the boundary condition (3.5) is also satisfied in a strong sense.
System (3.4)–(3.6) is a random microscopic version of (PPS) if T_0 < 0, where n(s,t), the density of the age at time t, is replaced by U(t, ·) = \delta_{S_{t-}}, the Dirac mass at the age at time t. The assumption T_0 < 0 is satisfied a.s. if T_0 has a density, but this may not be the case, for instance, if the experimental device gives an impulse at time zero (e.g. 38 studied peristimulus time histograms (PSTH), where the spike trains are locked on a stimulus given at time 0).
This result may seem rather poor from a PDE point of view. However, since this equation is satisfied at a microscopic level, we are able to define correctly all the important quantities at a macroscopic level. Indeed, the analogy between p(s, X(t)) and λ(t, F_{t-}^N) is actually, on the random microscopic scale, a replacement of p(s, X(t)) by \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx), whose expectation given the past is, heuristically speaking, equal to λ(t, F_{t-}^N) dt because the mean behaviour of Π is given by the Lebesgue measure (see (2.4)). Thus, the main question at this stage is: can we make this argument valid by taking the expectation of U? This is addressed in the next section.
The property (P_{Fubini}) and the quantities \eta_{T_i} mainly allow to define U(dt, 0) as well as U(t, ds). As expected, with this definition, (3.2) holds as well as

U(dt, 0) = \mathbf{1}_{t \ge 0}\, N(dt),    (3.8)

i.e. the spiking measure (the measure in time with age 0) is the point measure.
Note also that the initial condition is given by F_0^N, since F_0^N fixes in particular the value of T_0, and (A_{T_0}) is required to give sense to the age at time 0. To understand the initial condition, remark that if T_0 = 0, then U(0, ·) = 0 ≠ lim_{t→0^+} U(t, ·) = \delta_0 by definition of the \eta_{T_i}, but that if T_0 < 0, U(0, ·) = lim_{t→0^+} U(t, ·) = \delta_{-T_0}.
The conservativeness (i.e. for all t ≥ 0, \int_0^{\infty} U(t, ds) = 1) is obtained by using (a sequence of test functions converging to) \varphi = \mathbf{1}_{t \le T}.
Proposition 3.1 shows that the (random) measure U, defined by (3.3) in terms of a given point process N, is a weak solution of System (3.4)–(3.6). The study of the well-posedness of this system could be addressed following, for instance, the ideas given in 12. In this case U should be the unique solution of System (3.4)–(3.6).
As a last comment about Proposition 3.1, we analyse the particular case of the linear Hawkes process in the following remark.

Remark 3.1. In the linear Hawkes process, λ(t, F_{t-}^N) = \mu + \int_{-\infty}^{t-} h(t-z)\, N(dz). Thanks to (3.8), one decomposes the intensity into a term given by the initial condition plus a term given by the measure U, namely λ(t, F_{t-}^N) = \mu + F_0(t) + \int_0^{t-} h(t-z)\, U(dz, 0), where F_0(t) = \int_{-\infty}^0 h(t-z)\, N_-(dz) is (F_0^N)-measurable and considered as an initial condition. Hence, (3.4)–(3.6) becomes a closed system in the sense that λ(t, F_{t-}^N) is now an explicit function of the solution of the system. This is not true in general.
3.3. The PDE satisfied in expectation
In this section, we want to find the system satisfied by the expectation of the
random measure U . First, we need to give a proper definition of such an object. The
construction is based on the construction of U and is summarized in the following
proposition. (The proofs of all the results of this subsection are in Appendix A.1).
Proposition 3.2. Let Π, F_0^N and an intensity (λ(t, F_{t-}^N))_{t>0} be given as in Section 2.3 and satisfying (A_{T_0}) and (A^{L^1}_{λ,loc,exp}). Let N be the process resulting from Ogata's thinning and let U be the random measure defined by (3.3). Let E denote the expectation with respect to Π and F_0^N.
Then for any test function \varphi in M_{c,b}(R_+^2), E[\int \varphi(t,s)\, U(t, ds)] and E[\int \varphi(t,s)\, U(dt, s)] are finite and one can define u(t, ds) and u(dt, s) by

∀ t ≥ 0,   \int \varphi(t,s)\, u(t, ds) = E\left[ \int \varphi(t,s)\, U(t, ds) \right],
∀ s ≥ 0,   \int \varphi(t,s)\, u(dt, s) = E\left[ \int \varphi(t,s)\, U(dt, s) \right].

Moreover, u(t, ds) and u(dt, s) satisfy (P_{Fubini}) and one can define u(dt, ds) = u(t, ds)dt = u(dt, s)ds on R_+^2, such that for any test function \varphi in M_{c,b}(R_+^2),

\int \varphi(t,s)\, u(dt, ds) = E\left[ \int \varphi(t,s)\, U(dt, ds) \right],

a quantity which is finite.
In particular, since \int \varphi(t,s)\, u(t, ds) = E[\int \varphi(t,s)\, U(t, ds)] = E[\varphi(t, S_{t-})], u(t, ·) is therefore the distribution of S_{t-}, the (predictable version of the) age at time t.
Now let us show that, as expected, u satisfies a system similar to (PPS).
Theorem 3.3. Let Π, F_0^N and an intensity (λ(t, F_{t-}^N))_{t>0} be given as in Section 2.3 and satisfying (A_{T_0}) and (A^{L^1}_{λ,loc,exp}). If N is the process resulting from Ogata's thinning, (S_{t-})_{t>0} its associated predictable age process, U its associated random measure, defined by (3.3), and u its associated mean measure, defined in Proposition 3.2, then there exists a bivariate measurable function \rho_{λ,P_0} satisfying

∀ T ≥ 0,   \int_0^T \int_s \rho_{λ,P_0}(t,s)\, u(dt, ds) < \infty,    (3.9)

\rho_{λ,P_0}(t,s) = E\left[ λ(t, F_{t-}^N) \mid S_{t-} = s \right],   u(dt, ds)-a.e.,

and such that u is a solution in the weak sense of the following system:

\frac{\partial}{\partial t} u(dt, ds) + \frac{\partial}{\partial s} u(dt, ds) + \rho_{λ,P_0}(t,s)\, u(dt, ds) = 0,    (3.10)

u(dt, 0) = \left( \int_{s \in R_+} \rho_{λ,P_0}(t,s)\, u(t, ds) \right) dt + \delta_0(dt)\, u^{in}(\{0\}),    (3.11)

u(0, ds) = u^{in}(ds)\, \mathbf{1}_{s>0},    (3.12)

where u^{in} is the law of -T_0. The weak sense means here that for any \varphi ∈ C^{\infty}_{c,b}(R_+^2),

\int_{R_+ \times R_+} \left( \frac{\partial}{\partial t} + \frac{\partial}{\partial s} \right) \varphi(t,s)\, u(dt, ds)
+ \int_{R_+ \times R_+} [\varphi(t,0) - \varphi(t,s)]\, \rho_{λ,P_0}(t,s)\, u(dt, ds) + \int_{R_+} \varphi(0,s)\, u^{in}(ds) = 0.    (3.13)
Comparing this system to (PPS), one first sees that n(·, t), the density of the age at time t, is replaced by the mean measure u(t, ·). If u^{in} ∈ L^1(R_+), we have u^{in}(\{0\}) = 0, so we get an equation which is exactly of renewal type, as (PPS). In the general case where u^{in} is only a probability measure, the difference with (PPS) lies in the term \delta_0(dt)\, u^{in}(\{0\}) in the boundary condition for s = 0 and in the term \mathbf{1}_{s>0} in the initial condition for t = 0. Both these extra terms are linked to the possibility for the initial measure u^{in} to charge zero. This possibility is not considered in 31; otherwise, a similar extra term would be needed in the setting of 31 as well. As said above in the comment on Proposition 3.1, we want to keep this term here since it models the case where there is a specific stimulus at time zero 38.
In general, and without more assumptions on λ, it is not clear that u is not only a measure satisfying (P_{Fubini}) but also absolutely continuous with respect to dt ds, and that the equations can be satisfied in a strong sense.
Concerning p(s, X(t)), which has always been thought of as the equivalent of λ(t, F_{t-}^N), it is not replaced by λ(t, F_{t-}^N), which would have no meaning in general since this is a random quantity, nor by E[λ(t, F_{t-}^N)], which would have been a first possible guess; it is replaced by E[λ(t, F_{t-}^N) | S_{t-} = s]. Indeed, intuitively, since

E\left[ \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx) \,\Big|\, F_{t-}^N \right] = λ(t, F_{t-}^N)\, dt,

the corresponding weak term can be interpreted as, for any test function \varphi,

E\left[ \int \varphi(t,s) \left( \int_{x=0}^{λ(t, F_{t-}^N)} \Pi(dt, dx) \right) U(t, ds) \right]
= E\left[ \int \varphi(t,s)\, λ(t, F_{t-}^N)\, \delta_{S_{t-}}(ds)\, dt \right]
= \int_t E\left[ \varphi(t, S_{t-})\, λ(t, F_{t-}^N) \right] dt
= \int_t E\left[ \varphi(t, S_{t-})\, E[λ(t, F_{t-}^N) | S_{t-}] \right] dt,

which is exactly \int \varphi(t,s)\, \rho_{λ,P_0}(t,s)\, u(dt, ds).
This conditional expectation makes dependencies particularly complex, but it also makes it possible to derive equations even in non-Markovian settings (as for Hawkes processes, for instance; see Section 4). More explicitly, \rho_{λ,P_0}(t,s) is a function of the time t and of the age s, but it also depends on λ, the shape of the intensity of the underlying process, and on the distribution of the initial condition N_-, that is, P_0. As explained in Section 2, it is both the knowledge of P_0 and λ that characterizes the distribution of the process, and in general the conditional expectation cannot be reduced to something depending on less than that. In Section 4, we discuss several examples of point processes where one can (or cannot) reduce the dependence.
Note that here again, we can prove that the equation is conservative by taking (a sequence of functions converging to) \varphi = \mathbf{1}_{t \le T} as a test function.
A direct corollary of Theorem 3.3 can be deduced thanks to the law of large numbers. This can be seen as the interpretation of the (PPS) equation at a macroscopic level, when the population of neurons is i.i.d.

Corollary 3.4. Let (N^i)_{i=1}^{\infty} be some i.i.d. point processes with intensity given by λ(t, F_{t-}^{N^i}) on (0, +∞) satisfying (A^{L^1}_{λ,loc,exp}) and associated predictable age processes (S^i_{t-})_{t>0}. Suppose furthermore that the distribution of N^1 on (-∞, 0] is given by P_0, which is such that P_0(N^1_- = ∅) = 0.
Then there exists a measure u satisfying (P_{Fubini}), weak solution of Equations (3.10) and (3.11), with \rho_{λ,P_0} defined by

\rho_{λ,P_0}(t,s) = E\left[ λ(t, F_{t-}^{N^1}) \mid S^1_{t-} = s \right],   u(dt, ds)-a.e.,

and with u^{in} the distribution of the age at time 0, such that for any \varphi ∈ C^{\infty}_{c,b}(R_+^2),

∀ t > 0,   \int \varphi(t,s) \left( \frac{1}{n} \sum_{i=1}^{n} \delta_{S_t^i}(ds) \right) \xrightarrow[n \to \infty]{a.s.} \int \varphi(t,s)\, u(t, ds).    (3.14)

In particular, informally, the fraction of neurons at time t with age in [s, s+ds) in this i.i.d. population of neurons indeed tends to u(t, ds).
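To illustrate the corollary in the simplest i.i.d. setting (Poisson neurons of rate λ; an editorial sketch with arbitrary values), one can compare the empirical fraction of neurons with age in [s, s+ds) at a fixed time with the exponential limit:

```python
import numpy as np

lam, t, n_neurons = 1.5, 30.0, 50000
rng = np.random.default_rng(4)
ages = np.empty(n_neurons)
for i in range(n_neurons):
    pts = rng.uniform(-50.0, t, rng.poisson(lam * (t + 50.0)))   # one i.i.d. Poisson spike train
    ages[i] = t - pts.max()                                      # age S_t^i of neuron i at time t

ds = 0.5
for s in np.arange(0.0, 2.0, ds):
    emp = ((ages >= s) & (ages < s + ds)).mean()                 # empirical fraction, cf. (3.14)
    theo = np.exp(-lam * s) - np.exp(-lam * (s + ds))            # exponential steady state u(t, ds)
    print(f"s in [{s:.1f},{s+ds:.1f}): empirical {emp:.3f}  limit {theo:.3f}")
```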
4. Application to the various examples
Let us now apply these results to the examples presented in Section 2.2.
4.1. When the intensity only depends on time and age
If λ(t, F_{t-}^N) = f(t, S_{t-}) (homogeneous and inhomogeneous Poisson processes and renewal processes are particular examples), then the intuition that p(s, X(t)) is analogous to λ(t, F_{t-}^N) works. Let us assume that f(t,s) ∈ L^{\infty}(R_+^2). We have E[λ(t, F_{t-}^N) | S_{t-} = s] = f(t,s). Under this assumption, we may apply Theorem 3.3, so that we know that the mean measure u associated to the random process is a solution of System (3.10)–(3.12). Therefore the mean measure u satisfies a completely explicit PDE of the type (PPS), with \rho_{λ,P_0}(t,s) = f(t,s) replacing p(s, X(t)). In particular, in this case \rho_{λ,P_0}(t,s) does not depend on the initial condition. As already underlined, in general, the distribution of the process is characterized by λ(t, F_{t-}^N) = f(t, S_{t-}) and by the distribution of N_-. Therefore, in this special case, this dependence is actually reduced to the function f and the distribution of -T_0. Since f(·, ·) ∈ L^{\infty}([0, T] × R_+), assuming also u^{in} ∈ L^1(R_+), it is well known that there exists a unique solution u such that (t ↦ u(t, ·)) ∈ C([0, T], L^1(R_+)); see for instance 34, Section 3.3, p. 60. Note that, following 12, uniqueness for measure solutions may also be established; hence the mean measure u associated to the random process is the unique solution of System (3.10)–(3.12), and it is in C([0, T], L^1(R_+)): the PDE formulation, together with existence and uniqueness, has provided a regularity result on u which is obtained under weaker assumptions than through Fokker-Planck / Kolmogorov equations. This is another possible application field of our results: using the PDE formulation to gain regularity. Let us now develop the Fokker-Planck / Kolmogorov approach for renewal processes.
Renewal processes The renewal process, i.e. the case where λ(t, F_{t-}^N) = f(S_{t-}) with f a continuous function on R_+, has particular properties. As noted in Section 2.2, the renewal age process (S_{t-})_{t>0} is a homogeneous Markovian process. It has been known for a long time that it is easy to derive a PDE on the corresponding density through Fokker-Planck / Kolmogorov equations, once the variable of interest (here the age) is Markovian (see for instance 1). Here we briefly follow this line to see what kind of PDE can be derived through the Markovian properties and to compare the equation with the (PPS)-type system derived in Theorem 3.3.
Since f is continuous, the infinitesimal generator^b of (S_t)_{t>0} is given by

(G\varphi)(x) = \varphi'(x) + f(x)\, (\varphi(0) - \varphi(x)),    (4.1)

for all \varphi ∈ C^1(R_+) (see 2). Note that, since for every t > 0 S_{t-} = S_t a.s., the process (S_{t-})_{t>0} is also Markovian with the same infinitesimal generator.

^b The infinitesimal generator of a homogeneous Markov process (Z_t)_{t \ge 0} is the operator G defined to act on every function \varphi : R^n → R in a suitable space D by G\varphi(x) = \lim_{t \to 0^+} \frac{E[\varphi(Z_t) \mid Z_0 = x] - \varphi(x)}{t}.
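Formula (4.1) can be checked by brute force simulation (an editorial sketch; the hazard rate, the test function and the discretization are arbitrary choices): compare (E[φ(S_t) | S_0 = x] − φ(x))/t with φ'(x) + f(x)(φ(0) − φ(x)) for a small t.

```python
import numpy as np

f = lambda s: s / (1.0 + s)              # example bounded continuous hazard rate
phi = lambda s: np.exp(-s)               # smooth test function
dphi = lambda s: -np.exp(-s)             # its derivative

def age_after(s0, t, dt, rng):
    """Crude simulation of the renewal age process: reset with probability f(s)dt, otherwise age by dt."""
    s = s0
    for _ in range(int(t / dt)):
        s = 0.0 if rng.random() < f(s) * dt else s + dt
    return s

rng = np.random.default_rng(5)
x, t, dt, n_mc = 2.0, 0.05, 1e-3, 100000
mc = np.mean([phi(age_after(x, t, dt, rng)) for _ in range(n_mc)])
print("Monte Carlo (E[phi(S_t)|S_0=x] - phi(x)) / t :", (mc - phi(x)) / t)
print("generator  phi'(x) + f(x)(phi(0) - phi(x))  :", dphi(x) + f(x) * (phi(0.0) - phi(x)))
```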
Let us now define, for all t > 0 and all \varphi ∈ C^1(R_+),

P_t \varphi(x) = E[\varphi(S_{t-}) \mid S_0 = x] = \int \varphi(s)\, u_x(t, ds),

where x ∈ R_+ and u_x(t, ·) is the distribution of S_{t-} given that S_0 = x. Note that u_x(t, ds) corresponds to the marginal, in the sense of (P_{Fubini}), of the u_x given by Theorem 3.3 with \rho_{λ,P_0}(t,s) = f(s) and initial condition \delta_x, i.e. T_0 = -x a.s.
In this homogeneous Markovian case, the forward Kolmogorov equation gives

\frac{\partial}{\partial t} P_t = P_t G.

Let \varphi ∈ C^{\infty}_{c,b}(R_+^2) and let t > 0. This implies that

\frac{\partial}{\partial t} (P_t \varphi(t,s)) = P_t G\varphi(t,s) + P_t \frac{\partial}{\partial t} \varphi(t,s)
= P_t \left[ \frac{\partial}{\partial s} \varphi(t,s) + f(s)\, (\varphi(t,0) - \varphi(t,s)) + \frac{\partial}{\partial t} \varphi(t,s) \right].

Since \varphi is compactly supported in time, an integration with respect to t yields

-P_0 \varphi(0, s) = \int P_t \left[ \left( \frac{\partial}{\partial t} + \frac{\partial}{\partial s} \right) \varphi(t,s) \right] dt + \int P_t \left[ f(s)\, (\varphi(t,0) - \varphi(t,s)) \right] dt,

or equivalently, in terms of u_x,

-\varphi(0, x) = \int \left( \frac{\partial}{\partial t} + \frac{\partial}{\partial s} \right) \varphi(t,s)\, u_x(t, ds)\, dt - \int (\varphi(t,s) - \varphi(t,0))\, f(s)\, u_x(t, ds)\, dt.    (4.2)

This is exactly Equation (3.13) with u^{in} = \delta_x.
The result of Theorem 3.3 is stronger than the application of the forward Kolmogorov equation on homogeneous Markovian systems, since the result of Theorem 3.3 never used the Markov assumption and can be applied to non-Markovian processes (see Section 4.3). So the present work is a general set-up where one can deduce PDEs even from non-Markovian microscopic random dynamics. Note also that only boundedness assumptions, and not continuity ones, are necessary to directly obtain (4.2) via Theorem 3.3: to obtain the classical Kolmogorov theorem, one would have assumed f ∈ C^0(R_+^2) rather than f ∈ L^{\infty}(R_+^2).
4.2. Generalized Wold process
In the case where λ(t, F_{t-}^N) = f(S_{t-}, A_t^1, ..., A_t^k), with f a non-negative function, one can define in a similar way u_k(t, s, a_1, ..., a_k), which is informally the distribution at time t of the processes with age s and past given by a_1, ..., a_k for the last k ISIs. We want to investigate this case not for its Markovian properties, which are nevertheless presented in Proposition B.2 in the appendix for the sake of completeness, but because this is the first basic example where the initial condition indeed impacts \rho_{λ,P_0} in Theorem 3.3.
To do so, the whole machinery applied to u(dt, ds) is first extended in the next result to u_k(dt, ds, da_1, ..., da_k), which represents the dynamics of the age and the last k ISIs. This could have been done in a very general way by an easy generalisation of Theorem 3.3. However, to avoid too cumbersome equations, we express it only for generalized Wold processes, to provide a clean setting to illustrate the impact of the initial conditions on \rho_{λ,P_0}. Hence, we similarly define a random distribution U_k(dt, ds, da_1, ..., da_k) such that its evaluation at any given time t exists and is

U_k(t, ds, da_1, ..., da_k) = \delta_{(S_{t-}, A_t^1, ..., A_t^k)}(ds, da_1, ..., da_k).    (4.3)

The following result states the PDE satisfied by u_k = E[U_k].
Proposition 4.1. Let k be a positive integer and f be some non-negative function on R_+^{k+1}. Let N be a generalized Wold process with predictable age process (S_{t-})_{t>0}, associated points (T_i)_{i \in \mathbb{Z}} and intensity λ(t, F_{t-}^N) = f(S_{t-}, A_t^1, ..., A_t^k) satisfying (A^{L^1}_{λ,loc,exp}), where A_t^1, ..., A_t^k are the successive ages defined by (2.2). Suppose that P_0 is such that P_0(T_{-k} > -∞) = 1. Let U_k be defined by

U_k(dt, ds, da_1, ..., da_k) = \sum_{i=0}^{+\infty} \eta_{T_i}(dt, ds) \prod_{j=1}^{k} \delta_{A^j_{T_i}}(da_j)\, \mathbf{1}_{0 \le t \le T_{i+1}}.    (4.4)

If N is the result of Ogata's thinning on the Poisson process Π, then U_k satisfies (4.3) and (P_{Fubini}) a.s. in Π and F_0^N. Assume that the initial condition u_k^{in}, defined as the distribution of (-T_0, A_0^1, ..., A_0^k), which is a random vector in R_+^{k+1}, is such that u_k^{in}(\{0\} × R_+^k) = 0. Then U_k admits a mean measure u_k which also satisfies (P_{Fubini}) and the following system in the weak sense on R_+ × R_+^{k+1}:

\left( \frac{\partial}{\partial t} + \frac{\partial}{\partial s} \right) u_k(dt, ds, da_1, ..., da_k) + f(s, a_1, ..., a_k)\, u_k(dt, ds, da_1, ..., da_k) = 0,    (4.5)

u_k(dt, 0, ds, da_1, ..., da_{k-1}) = \int_{a_k=0}^{\infty} f(s, a_1, ..., a_k)\, u_k(t, ds, da_1, ..., da_k)\, dt,    (4.6)

u_k(0, ds, da_1, ..., da_k) = u_k^{in}(ds, da_1, ..., da_k).    (4.7)
We have assumed u_k^{in}(\{0\} × R_+^k) = 0 (i.e. T_0 ≠ 0 a.s.) for the sake of simplicity, but this assumption may of course be relaxed, and Dirac masses at 0 should then be added in a similar way as in Theorem 3.3.
If f ∈ L^{\infty}(R_+^{k+1}), we may apply Proposition 4.1, so that the mean measure u_k satisfies System (4.5)–(4.7). Assuming an initial condition u_k^{in} ∈ L^1(R_+^{k+1}), we can prove, exactly as for the renewal equation (with a Banach fixed point argument for instance), that there exists a unique solution u_k such that (t ↦ u_k(t, ·)) ∈ C(R_+, L^1(R_+^{k+1})), extending 34 to the generalized Wold case, the boundedness assumption on the kth penultimate point before time 0 being necessary to give sense to the successive ages at time 0. By uniqueness, this proves that the mean measure u_k is this solution, so that it belongs to C(R_+, L^1(R_+^{k+1})): Proposition 4.1 leads to a regularity result on the mean measure.
Now that we have clarified the dynamics of the successive ages, one can look at this system from the point of view of Theorem 3.3, that is, when only the two variables s and t are considered. In this respect, let us note that U defined by (3.3) is such that U(dt, ds) = \int_{a_1, ..., a_k} U_k(dt, ds, da_1, ..., da_k). Since the integrals and the expectations are exchangeable in the weak sense, the mean measure u defined in Proposition 3.2 is such that u(dt, ds) = \int_{a_1, ..., a_k} u_k(dt, ds, da_1, ..., da_k). But (4.5) in the weak sense means, for all \varphi ∈ C^{\infty}_{c,b}(R^{k+2}),

\int \left( \frac{\partial}{\partial t} + \frac{\partial}{\partial s} \right) \varphi(t, s, a_1, ..., a_k)\, u_k(dt, ds, da_1, ..., da_k)
+ \int [\varphi(t, 0, a_1, ..., a_k) - \varphi(t, s, a_1, ..., a_k)]\, f(s, a_1, ..., a_k)\, u_k(dt, ds, da_1, ..., da_k)
+ \int \varphi(0, s, a_1, ..., a_k)\, u_k^{in}(ds, da_1, ..., da_k) = 0.    (4.8)

Letting \psi ∈ C^{\infty}_{c,b}(R^2) and \varphi ∈ C^{\infty}_{c,b}(R^{k+2}) be such that

∀ t, s, a_1, ..., a_k,   \varphi(t, s, a_1, ..., a_k) = \psi(t, s),

we end up proving that the function \rho_{λ,P_0} defined in Theorem 3.3 satisfies

\rho_{λ,P_0}(t,s)\, u(dt, ds) = \int_{a_1, ..., a_k} f(s, a_1, ..., a_k)\, u_k(dt, ds, da_1, ..., da_k),    (4.9)

u(dt, ds)-almost everywhere (a.e.). Equation (4.9) means exactly, from a probabilistic point of view, that

\rho_{λ,P_0}(t,s) = E\left[ f(S_{t-}, A_t^1, ..., A_t^k) \mid S_{t-} = s \right],   u(dt, ds)-a.e.

Therefore, in the particular case of generalized Wold processes, the quantity \rho_{λ,P_0} depends on the shape of the intensity (here the function f) and also on u_k. But, by Proposition 4.1, u_k depends on its initial condition given by the distribution of (-T_0, A_0^1, ..., A_0^k), and not only -T_0 as in the initial condition for u. That is, as announced in the remarks following Theorem 3.3, \rho_{λ,P_0} depends in particular on the whole distribution of the underlying process before time 0, namely P_0, and not only on the initial condition for u. Here, for generalized Wold processes, it only depends on the last k points before time 0. For more general non-Markovian settings, the integration cannot be simply described by a measure u_k in dimension (k+2) being integrated with respect to da_1 ... da_k. In general, the integration has to be done on all the "randomness" hidden behind the dependence of λ(t, F_{t-}^N) with respect to the past once S_{t-} is fixed, and in this sense it depends on the whole distribution P_0 of N_-. This is made even clearer by the following non-Markovian example: the Hawkes process.
4.3. Hawkes process
As we have seen in Section 2.2, there are many different examples of Hawkes processes that can all be expressed as λ(t, F_{t-}^N) = \phi\left( \int_{-\infty}^{t-} h(t-x)\, N(dx) \right), where the main case is \phi(\theta) = \mu + \theta, for \mu some positive constant, which is the linear case.
When there is no point before 0, λ(t, F_{t-}^N) = \phi\left( \int_0^{t-} h(t-x)\, N(dx) \right). In this case, the interpretation is so close to (PPS) that the first guess, which is wrong, would be that the analogue in (PPS) is

p(s, X(t)) = \phi(X(t)),    (4.10)

where X(t) = E\left[ \int_0^{t-} h(t-x)\, N(dx) \right] = \int_0^t h(t-x)\, u(dx, 0). This is wrong, even in the linear case, since λ(t, F_{t-}^N) depends on all the previous points. Therefore, \rho_{λ,P_0} defined by (3.9) corresponds to a conditioning given only the last point.
By looking at this problem through the generalized Wold approach, one can hope that, for h decreasing fast enough,

λ(t, F_{t-}^N) ≃ \phi\left( h(S_{t-}) + h(S_{t-} + A_t^1) + ... + h(S_{t-} + A_t^1 + ... + A_t^k) \right).

In this sense, and with respect to the generalized Wold processes described in the previous section, we are informally integrating over "all the previous points" except the last one, and not integrating over all the previous points. This is informally why (4.10) is wrong even in the linear case. Actually, \rho_{λ,P_0} is computable for linear Hawkes processes: we show in the next section that \rho_{λ,P_0}(t,s) ≠ \phi\left( \int_{-\infty}^{t} h(t-x)\, u(dx, 0) \right) = \mu + \int_0^{\infty} h(t-x)\, u(dx, 0) and that \rho_{λ,P_0} explicitly depends on P_0.
4.3.1. Linear Hawkes process
We are interested in Hawkes processes with a past before time 0 given by F_0^N, which is not necessarily the past given by a stationary Hawkes process. To illustrate the fact that the past impacts the value of \rho_{λ,P_0}, we focus on two particular cases:

(A^1_{N_-}) N_- = \{T_0\} a.s. and T_0 admits a bounded density f_0 on R_-;
(A^2_{N_-}) N_- is a homogeneous Poisson process with intensity \alpha on R_-.

Before stating the main result, we need some technical definitions. Indeed, the proof is based on the underlying branching structure of the linear Hawkes process described in Section B.3.1 of the appendix, and the following functions (L_s, G_s) are naturally linked to this branching decomposition (see Lemma B.7).

Lemma 4.2. Let h ∈ L^1(R_+) be such that \|h\|_{L^1} < 1. For all s ≥ 0, there exists a unique solution (L_s, G_s) ∈ L^1(R_+) × L^{\infty}(R_+) of the following system:

\log(G_s(x)) = \int_0^{(x-s) \vee 0} G_s(x-w)\, h(w)\, dw - \int_0^x h(w)\, dw,    (4.11)

L_s(x) = \int_{s \wedge x}^{x} (h(w) + L_s(w))\, G_s(w)\, h(x-w)\, dw,    (4.12)

where a ∨ b (resp. a ∧ b) denotes the maximum (resp. minimum) of a and b. Moreover, L_s(x) ≡ 0 for x ≤ s, G_s : R_+ → [0, 1], and L_s is uniformly bounded in L^1.
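Although (4.11)–(4.12) has no closed form in general, the pair (L_s, G_s) can be approximated by a straightforward Picard (fixed-point) iteration on a grid. The sketch below is an editorial illustration only: it uses crude left Riemann sums and an exponential kernel rescaled so that ‖h‖_{L¹} = 0.5 < 1, and then checks the qualitative properties stated in the lemma.

```python
import numpy as np

dx, x_max, s = 0.01, 10.0, 1.0
x = np.arange(0.0, x_max, dx)
h = 0.5 * np.exp(-x)                                 # kernel values; ||h||_{L1} = 0.5 < 1

G, L = np.ones_like(x), np.zeros_like(x)             # initial guesses G_s = 1, L_s = 0
for _ in range(50):                                  # Picard iteration of (4.11)-(4.12)
    logG, L_new = np.zeros_like(x), np.zeros_like(x)
    for j, xj in enumerate(x):
        if j == 0:
            continue                                 # G_s(0) = 1 and L_s(0) = 0 by (4.11)-(4.12)
        w, hw = x[:j], h[:j]                         # left Riemann nodes w in [0, x_j)
        idx = j - np.arange(j)                       # grid indices of x_j - w
        cut = w < max(xj - s, 0.0)                   # w in [0, (x_j - s) v 0)
        logG[j] = (G[idx][cut] * hw[cut]).sum() * dx - hw.sum() * dx
        inner = w >= min(s, xj)                      # w in [s ^ x_j, x_j)
        L_new[j] = ((hw[inner] + L[:j][inner]) * G[:j][inner] * h[idx][inner]).sum() * dx
    G, L = np.exp(logG), L_new

print("G_s(0) =", G[0],
      " G_s stays in [0,1]:", bool((G >= 0).all() and (G <= 1 + 1e-12).all()),
      " L_s vanishes on [0,s]:", bool(np.allclose(L[x <= s], 0.0)))
```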
This result allows us to define two other important quantities, K_s and q, by, for all s, t ≥ 0 and z ∈ R,

K_s(t, z) := \int_0^{(t-s) \vee 0} [h(t-x) + L_s(t-x)]\, G_s(t-x)\, h(x-z)\, dx,

\log(q(t, s, z)) := -\int_{(t-s) \vee 0}^{t} h(x-z)\, dx - \int_0^{(t-s) \vee 0} [1 - G_s(t-x)]\, h(x-z)\, dx.    (4.13)

Finally, the following result is just an obvious remark that helps to understand the resulting system.
Remark 4.1. For a non-negative Φ ∈ L^{\infty}(R_+^2) and v^{in} ∈ L^{\infty}(R_+), there exists a unique solution v ∈ L^{\infty}(R_+^2), in the weak sense, of the following system:

\frac{\partial}{\partial t} v(t,s) + \frac{\partial}{\partial s} v(t,s) + \Phi(t,s)\, v(t,s) = 0,    (4.14)

v(t, 0) = 1,   v(t=0, s) = v^{in}(s).    (4.15)

Moreover, t ↦ v(t, ·) is in C(R_+, L^1_{loc}(R_+)).
If v^{in} is a survival function (i.e. non-increasing, with values in [0, 1]), then v(t, ·) is a survival function and -\partial_s v is a probability measure for all t > 0.
Proposition 4.3. Using the notations of Theorem 3.3, let N be a Hawkes process with past before 0 given by N_- satisfying either (A^1_{N_-}) or (A^2_{N_-}) and with intensity on R_+ given by

λ(t, F_{t-}^N) = \mu + \int_{-\infty}^{t-} h(t-x)\, N(dx),

where \mu is a positive real number and h ∈ L^{\infty}(R_+) is a non-negative function with support in R_+ such that \int h < 1.
Then the mean measure u defined in Proposition 3.2 satisfies Theorem 3.3 and, moreover, its integral v(t,s) := \int_s^{\infty} u(t, d\sigma) is the unique solution of the system (4.14)–(4.15), where v^{in} is the survival function of -T_0 and where Φ = Φ^{\mu,h}_{P_0} ∈ L^{\infty}(R_+^2) is defined by

Φ^{\mu,h}_{P_0} = Φ^{\mu,h}_{+} + Φ^{h}_{-,P_0},    (4.16)

where, for all non-negative s, t,

Φ^{\mu,h}_{+}(t,s) = \mu \left( 1 + \int_{s \wedge t}^{t} (h(x) + L_s(x))\, G_s(x)\, dx \right),    (4.17)

and where, under Assumption (A^1_{N_-}),

Φ^{h}_{-,P_0}(t,s) = \frac{\int_{-\infty}^{0 \wedge (t-s)} (h(t-t_0) + K_s(t, t_0))\, q(t, s, t_0)\, f_0(t_0)\, dt_0}{\int_{-\infty}^{0 \wedge (t-s)} q(t, s, t_0)\, f_0(t_0)\, dt_0},    (4.18)

or, under Assumption (A^2_{N_-}),

Φ^{h}_{-,P_0}(t,s) = \alpha \int_{-\infty}^{0 \wedge (t-s)} (h(t-z) + K_s(t, z))\, q(t, s, z)\, dz.    (4.19)

In these formulae, L_s, G_s, K_s and q are given by Lemma 4.2 and (4.13). Moreover,

∀ s ≥ 0,   \int_s^{+\infty} \rho_{λ,P_0}(t, x)\, u(t, dx) = Φ^{\mu,h}_{P_0}(t,s) \int_s^{+\infty} u(t, dx).    (4.20)
The proof is included in Appendix B.3. Proposition 4.3 gives a purely analytical definition for v, and thus for u, in two specific cases, namely (A^1_{N_-}) and (A^2_{N_-}). In the general case, treated in Appendix B (Proposition B.5), there remains a dependence with respect to the initial condition P_0, via the function Φ^{h}_{-,P_0}.

Remark 4.2. Contrarily to the general result in Theorem 3.3, Proposition 4.3 focuses on the equation satisfied by v(dt, s) = \int_s^{+\infty} u(dt, dx), because in Equation (4.14) the function parameter Φ = Φ^{\mu,h}_{P_0} may be defined independently of the definitions of v or u, which is not the case for the rate \rho_{λ,P_0} appearing in Equation (3.10). Thus, it is possible to depart from the system of equations defining v, study it, prove existence, uniqueness and regularity for v under some assumptions on the initial distribution u^{in} as well as on the birth function h, and then deduce regularity or asymptotic properties for u without any previous knowledge of the underlying process.
In Sections 4.1 and 4.2, we were able to use the PDE formulation to prove that the distribution of the ages u has a density. Here, since we only obtain a closed formula for v and not for u, we would need to differentiate Equation (4.14) in s to obtain a similar result, so that we need to prove more regularity on Φ^{\mu,h}_{P_0}. Such regularity for Φ^{\mu,h}_{P_0} is not obvious since it depends strongly on the assumptions on N_-. This paves the way for future research, where the PDE formulation would provide regularity on the distribution of the ages, as done above for renewal and Wold processes.
Remark 4.3. These two cases (A^1_{N_-}) and (A^2_{N_-}) highlight the dependence with respect to all the past before time 0 (i.e. P_0) and not only the initial condition (i.e. the age at time 0). In fact, they can give the same initial condition u^{in}: for instance, (A^1_{N_-}) with -T_0 exponentially distributed with parameter \alpha > 0 gives the same law for -T_0 as (A^2_{N_-}) with parameter \alpha. However, if we fix some non-negative real number s, one can show that Φ^{h}_{-,P_0}(0, s) is different in those two cases. It is clear from the definitions that for every real number z, q(0, s, z) = 1 and K_s(0, z) = 0. Thus, in the first case,

Φ^{h}_{-,P_0}(0, s) = \frac{\int_{-\infty}^{-s} h(-t_0)\, \alpha e^{\alpha t_0}\, dt_0}{\int_{-\infty}^{-s} \alpha e^{\alpha t_0}\, dt_0} = \frac{\int_s^{\infty} h(z)\, \alpha e^{-\alpha z}\, dz}{\int_s^{\infty} \alpha e^{-\alpha z}\, dz},

while in the second case, Φ^{h}_{-,P_0}(0, s) = \alpha \int_{-\infty}^{-s} h(-z)\, dz = \alpha \int_s^{\infty} h(w)\, dw. Therefore Φ^{h}_{-,P_0} clearly depends on P_0 and not just on the distribution of the last point before 0, and so does \rho_{λ,P_0}.
Remark 4.4. If we follow our first guess, \rho_{λ,P_0} would be either \mu + \int_0^t h(t-x)\, u(dx, 0) or \mu + \int_{-\infty}^t h(t-x)\, u(dx, 0). In particular, it would not depend on the age s. Therefore, by (4.20), neither would Φ^{\mu,h}_{P_0}. But, for instance, at time t = 0, when N_- is a homogeneous Poisson process of parameter \alpha, Φ^{\mu,h}_{P_0}(0, s) = \mu + \alpha \int_s^{+\infty} h(w)\, dw, which obviously depends on s. Therefore the intuition linking Hawkes processes and (PPS) does not apply.
4.3.2. Linear Hawkes process with no past before time 0
A classical framework in point process theory is the case in (A^1_{N_-}) where T_0 → −∞, or equivalently, when N has intensity λ(t, F^N_{t−}) = μ + ∫_0^{t−} h(t − x) N(dx). The problem in this case is that the age at time 0 is not finite. The age is only finite for times greater than the first spiking time T_1.
Here again, the quantity v(t, s) turns out to be more informative and easier to use: having the distribution of T_0 going to −∞ means that Supp(u^{in}) goes to +∞, so that the initial condition for v tends to the value 1 uniformly for any 0 ≤ s < +∞. If we can prove that the contribution of Φ^h_{-,P_0} vanishes, the following system is a good candidate to be the limit system:
    ∂/∂t v^∞(t, s) + ∂/∂s v^∞(t, s) + Φ^{μ,h}_+(t, s) v^∞(t, s) = 0,                   (4.21)
    v^∞(t, 0) = 1,    v^∞(0, s) = 1,                                                   (4.22)
where Φ^{μ,h}_+ is defined in Proposition 4.3. This leads us to the following proposition.
Proposition 4.4. Under the assumptions and notations of Proposition 4.3, consider for all M ≥ 0, v_M the unique solution of System (4.14)–(4.15) with Φ given by Proposition 4.3, case (A^1_{N_-}), with T_0 uniformly distributed in [−M − 1, −M]. Then, as M goes to infinity, v_M converges uniformly on any set of the type (0, T) × (0, S) towards the unique solution v^∞ of System (4.21)–(4.22).
Conclusion
We present in this article a bridge between univariate point processes, which can model the behavior of one neuron through its spike train, and a deterministic age-structured PDE introduced by Pakdaman, Perthame and Salort, named (PPS). More precisely, Theorem 3.3 presents a PDE that is satisfied by the distribution u of the age s at time t, where the age represents the delay between time t and the last spike before t. This is done in a very weak sense and some technical structure, namely (P_{Fubini}), is required.
The main point is that the "firing rate", which is a deterministic quantity written as p(s, X(t)) in (PPS), becomes the conditional expectation of the intensity given the age at time t in Theorem 3.3. This first makes clear that p(s, X(t)) should be interpreted as a hazard rate, which gives the probability that a neuron fires given that it has not fired yet. Next, it makes rigorous several "easy guess" bridges between both set-ups when the intensity only depends on the age. But it also explains why, when the intensity has a more complex shape (Wold, Hawkes), this term can in particular keep the memory of all that has happened before time 0.
One of the main points of the present study is the Hawkes process, for which what was clearly expected was a legitimation of the term X(t) in the firing rate p(s, X(t)) of (PPS), which models the synaptic integration. This is not the case, and the interlinked equations that have been found for the cumulative distribution function v(t, ·) do not have a simple or direct deterministic interpretation. However, one should keep in mind that the present bridge, in particular in the population-wide approach, has been established for independent neurons. This has been done to keep the complexity of the present work reasonable as a first step. But it is also quite obvious that interacting neurons cannot be independent. So one of the main questions is: can we recover (PPS) as a limit, with precisely a term of the form X(t), if we consider multivariate Hawkes processes that really model interacting neurons?
Acknowledgment
This research was partly supported by the European Research Council (ERC Starting Grant SKIPPERAD number 306321), by the French Agence Nationale de la Recherche (ANR 2011 BS01 010 01 projet Calibration) and by the interdisciplinary axis MTC-NSC of the University of Nice Sophia-Antipolis. MJC acknowledges support from the projects MTM2014-52056-P (Spain) and P08-FQM-04267 from Junta de Andalucía (Spain). We warmly thank François Delarue for very fruitful discussions and ideas.
A. Proofs linked with the PDE
A.1. Proof of Proposition 3.1
First, let us verify that U satisfies Equation (3.2). For any t > 0,
    U(t, ds) = Σ_{i≥0} η_{T_i}(t, ds) 1_{0≤t≤T_{i+1}},
by definition of U. Yet, η_{T_i}(t, ds) = δ_{t−T_i}(ds) 1_{t>T_i}, and the only i ∈ N such that T_i < t ≤ T_{i+1} is i = N_{t−}. So, for all t > 0, U(t, ds) = δ_{t−T_{N_{t−}}}(ds) = δ_{S_{t−}}(ds).
Secondly, let us verify that U satisfies (P_{Fubini}). Let ϕ ∈ M_{c,b}(R_+^2), and let T be
such that for all t > T, ϕ_t^{(1)} = 0. Then, since U(t, ds) = Σ_{i=0}^{+∞} η_{T_i}(t, ds) 1_{0≤t≤T_{i+1}},
    ∫_{R_+} ( ∫ |ϕ(t, s)| U(t, ds) ) dt ≤ ∫_{R_+} ∫ |ϕ(t, s)| ( Σ_{i≥0} η_{T_i}(t, ds) 1_{0≤t≤T_{i+1}} ) dt
        = Σ_{i≥0} ∫_{R_+} |ϕ(t, t − T_i)| 1_{t>T_i} 1_{0≤t≤T_{i+1}} dt = Σ_{i≥0} ∫_{max(0,T_i)}^{T_{i+1}} |ϕ(t, t − T_i)| dt
        = ∫_0^{T_1} |ϕ(t, t − T_0)| dt + Σ_{i / 0<T_i<T} ∫_{T_i}^{T_{i+1}} |ϕ(t, t − T_i)| dt.
Since there is a finite number of points of N between 0 and T on Ω, this quantity is finite and one can exchange Σ_{i≥0} and ∫_{t=0}^{+∞} ∫_{s=0}^{+∞}. Therefore, since all the η_{T_i} satisfy (P_{Fubini}) and ϕ(t, s) 1_{0≤t≤T_{i+1}} is in M_{c,b}(R_+^2), so does U.
For the dynamics of U, similar computations lead, for every ϕ ∈ C^∞_{c,b}(R_+^2), to
    ∫ ϕ(t, s) U(dt, ds) = Σ_{i≥0} ∫_{max(0,−T_i)}^{T_{i+1}−T_i} ϕ(s + T_i, s) ds.
We also have
    ∫ (∂/∂t + ∂/∂s) ϕ(t, s) U(dt, ds) = Σ_{i≥0} ∫_{max(0,−T_i)}^{T_{i+1}−T_i} (∂/∂t + ∂/∂s) ϕ(s + T_i, s) ds
        = Σ_{i≥1} [ϕ(T_{i+1}, T_{i+1} − T_i) − ϕ(T_i, 0)] + ϕ(T_1, T_1 − T_0) − ϕ(0, −T_0).        (A.1)
It remains to express the term with ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) = Σ_{i≥0} δ_{T_{i+1}}(dt), that is
    ∫ ( ∫ ϕ(t, s) U(t, ds) ) Σ_{i≥0} δ_{T_{i+1}}(dt) = Σ_{i≥0} ∫ ( ∫ ϕ(t, s) U(t, ds) ) δ_{T_{i+1}}(dt)
        = ∫ ϕ(t, S_{t−}) Σ_{i≥0} δ_{T_{i+1}}(dt) = Σ_{i≥0} ϕ(T_{i+1}, T_{i+1} − T_i),              (A.2)
and, since ∫ U(t, ds) = 1 for all t > 0,
    ∫ ( ∫ ϕ(t, 0) U(t, ds) ) Σ_{i≥0} δ_{T_{i+1}}(dt) = Σ_{i≥0} ϕ(T_{i+1}, 0).                      (A.3)
Identifying all the terms in the right-hand side of Equation (A.1), this leads to Equation (3.7), which is the weak formulation of System (3.4)–(3.6).
A.2. Proof of Proposition 3.2
Let ϕ ∈ M_{c,b}(R_+^2), and let T be such that for all t > T, ϕ_t^{(1)} = 0. Then,
    ∫ |ϕ(t, s)| U(t, ds) ≤ ||ϕ||_{L∞} 1_{0≤t≤T},                                        (A.4)
since at any fixed time t > 0, ∫ U(t, ds) = 1. Therefore, the expectation E[∫ ϕ(t, s) U(t, ds)] is well-defined and finite and so u(t, ·) is well-defined.
On the other hand, at any fixed age s,
    ∫ |ϕ(t, s)| U(dt, s) = Σ_{i=0}^{∞} |ϕ(s + T_i, s)| 1_{0≤s≤T_{i+1}−T_i} = Σ_{i≥0} |ϕ(s + T_i, s)| 1_{0≤s+T_i≤T} 1_{0≤s≤T_{i+1}−T_i},
because for all t > T, ϕ_t^{(1)} = 0. Then, one can deduce the following bound
    ∫ |ϕ(t, s)| U(dt, s) ≤ |ϕ(s + T_0, s)| 1_{−T_0≤s≤T−T_0} 1_{0≤s≤T_1−T_0} + Σ_{i≥1} |ϕ(s + T_i, s)| 1_{0≤s≤T} 1_{T_i≤T}
        ≤ ||ϕ||_{L∞} (1_{−T_0≤s≤T−T_0} + N_T 1_{0≤s≤T}).
Since the intensity is L^1_{loc} in expectation, E[N_T] = E[∫_0^T λ(t, F^N_{t−}) dt] < ∞ and
    E[ ∫ |ϕ(t, s)| U(dt, s) ] ≤ ||ϕ||_{L∞} ( E[1_{−T_0≤s≤T−T_0}] + E[N_T] 1_{0≤s≤T} ),  (A.5)
so the expectation is well-defined and finite and so u(·, s) is well-defined.
Now, let us show (P_{Fubini}). First, Equation (A.4) implies
    E[ ∫ ( ∫ |ϕ(t, s)| U(t, ds) ) dt ] ≤ T ||ϕ||_{L∞},
and Fubini's theorem implies that the following integrals are well-defined and that the following equality holds,
    ∫ E[ ∫ ϕ(t, s) U(t, ds) ] dt = E[ ∫∫ ϕ(t, s) U(t, ds) dt ].                          (A.6)
Secondly, Equation (A.5) implies
    E[ ∫ ( ∫ |ϕ(t, s)| U(dt, s) ) ds ] ≤ ||ϕ||_{L∞} ( T + T E[N_T] ),
by exchanging the integral with the expectation, and Fubini's theorem implies that the following integrals are well-defined and that the following equality holds,
    ∫ E[ ∫ ϕ(t, s) U(dt, s) ] ds = E[ ∫∫ ϕ(t, s) U(dt, s) ds ].                          (A.7)
Now, it only remains to use (P_{Fubini}) for U to deduce that the right members of Equations (A.6) and (A.7) are equal. Moreover, (P_{Fubini}) for U tells that these two quantities are equal to E[ ∫ ϕ(t, s) U(dt, ds) ]. This concludes the proof.
A.3. Proof of Theorem 3.3
Let ρ_{λ,P_0}(t, s) := liminf_{ε↓0} E[λ(t, F^N_{t−}) 1_{|S_{t−}−s|≤ε}] / P(|S_{t−} − s| ≤ ε), for every t > 0 and s ≥ 0. Since (λ(t, F^N_{t−}))_{t>0} and (S_{t−})_{t>0} are predictable processes, and a fortiori progressive processes (see page 9 in 3), ρ_{λ,P_0} is a measurable function of (t, s).
For every t > 0, let µ_t be the measure defined by µ_t(A) = E[λ(t, F^N_{t−}) 1_A(S_{t−})] for all measurable sets A. Since Assumption (A^{L1,exp}_{λ,loc}) implies that dt-a.e. E[λ(t, F^N_{t−})] < ∞ and since u(t, ds) is the distribution of S_{t−}, µ_t is absolutely continuous with respect to u(t, ds) for dt-almost every t.
Let f_t denote the Radon–Nikodym derivative of µ_t with respect to u(t, ds). For u(t, ds)-a.e. s, f_t(s) = E[λ(t, F^N_{t−}) | S_{t−} = s] by definition of the conditional expectation. Moreover, a theorem of Besicovitch 27 claims that for u(t, ds)-a.e. s, f_t(s) = ρ_{λ,P_0}(t, s). Hence, the equality ρ_{λ,P_0}(t, s) = E[λ(t, F^N_{t−}) | S_{t−} = s] holds u(t, ds)dt = u(dt, ds)-almost everywhere.
Next, in order to use (P_{Fubini}), let us note that for any T, K > 0,
    ρ^{K,T}_{λ,P_0} : (t, s) ↦ (ρ_{λ,P_0}(t, s) ∧ K) 1_{0≤t≤T} ∈ M_{c,b}(R_+^2).        (A.8)
Hence, ∫∫ ρ^{K,T}_{λ,P_0}(t, s) u(dt, ds) = ∫ ( ∫ ρ^{K,T}_{λ,P_0}(t, s) u(t, ds) ) dt, which is always upper bounded by ∫_0^T ( ∫ ρ_{λ,P_0}(t, s) u(t, ds) ) dt = ∫_0^T µ_t(R_+) dt = ∫_0^T E[λ(t, F^N_{t−})] dt < ∞. Letting K → ∞, one has that ∫_0^T ∫ ρ_{λ,P_0}(t, s) u(dt, ds) is finite for all T > 0.
Once ρ_{λ,P_0} is correctly defined, the proof of Theorem 3.3 is a direct consequence of Proposition 3.1.
More precisely, let us show that (3.7) implies (3.13). Taking the expectation of (3.7) gives that for all ϕ ∈ C^∞_{c,b}(R_+^2),
    E[ ∫ [ϕ(t, s) − ϕ(t, 0)] ( ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) ) U(t, ds) ] − ∫ ϕ(0, s) u^{in}(ds)
        − ∫ (∂_t + ∂_s) ϕ(t, s) u(dt, ds) = 0.                                          (A.9)
Let us denote ψ(t, s) := ϕ(t, s) − ϕ(t, 0). Due to Ogata's thinning construction, ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) = N(dt) 1_{t>0}, where N is the point process constructed by thinning, and so,
    E[ ∫ ψ(t, s) ( ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) ) U(t, ds) ] = E[ ∫_{t>0} ψ(t, S_{t−}) N(dt) ].      (A.10)
But ψ(t, S_{t−}) is a (F^N_t)-predictable process and
    E[ ∫_{t>0} |ψ(t, S_{t−})| λ(t, F^N_{t−}) dt ] ≤ ||ψ||_{L∞} E[ ∫_0^T λ(t, F^N_{t−}) dt ] < ∞,
hence, using the martingale property of the predictable intensity,
    E[ ∫_{t>0} ψ(t, S_{t−}) N(dt) ] = E[ ∫_{t>0} ψ(t, S_{t−}) λ(t, F^N_{t−}) dt ].      (A.11)
Moreover, thanks to Fubini's Theorem, the right-hand term is finite and equal to ∫ E[ψ(t, S_{t−}) λ(t, F^N_{t−})] dt, which can also be seen as
    ∫ E[ ψ(t, S_{t−}) ρ_{λ,P_0}(t, S_{t−}) ] dt = ∫∫ ψ(t, s) ρ_{λ,P_0}(t, s) u(t, ds) dt.      (A.12)
For all K > 0, ((t, s) ↦ ψ(t, s)(ρ_{λ,P_0}(t, s) ∧ K)) ∈ M_{c,b}(R_+^2) and, from (P_{Fubini}), it is clear that ∫∫ ψ(t, s)(ρ_{λ,P_0}(t, s) ∧ K) u(t, ds) dt = ∫∫ ψ(t, s)(ρ_{λ,P_0}(t, s) ∧ K) u(dt, ds). Since one can always upper-bound this quantity in absolute value by ||ψ||_{L∞} ∫_0^T ∫ ρ_{λ,P_0}(t, s) u(dt, ds), this is finite. Letting K → ∞, one can show that
    ∫∫ ψ(t, s) ρ_{λ,P_0}(t, s) u(t, ds) dt = ∫∫ ψ(t, s) ρ_{λ,P_0}(t, s) u(dt, ds).      (A.13)
Gathering (A.10)–(A.13) with (A.9) gives (3.13).
A.4. Proof of Corollary 3.4
For all i ∈ N^*, let us denote N^i_+ = N^i ∩ (0, +∞) and N^i_- = N^i ∩ R_-. Thanks to Proposition B.12, the processes N^i_+ can be seen as constructed via thinning of independent Poisson processes on R_+^2. Let (Π^i)_{i∈N} be the sequence of point measures associated to independent Poisson processes of intensity 1 on R_+^2 given by Proposition B.12. Let T^i_0 denote the closest point to 0 in N^i_-. In particular, (T^i_0)_{i∈N^*} is a sequence of i.i.d. random variables.
For each i, let U^i denote the solution of the microscopic equation corresponding to Π^i and T^i_0 as defined in Proposition 3.1 by (3.3). Using (3.2), it is clear that Σ_{i=1}^n δ_{S^i_{t−}}(ds) = Σ_{i=1}^n U^i(t, ds) for all t > 0. Then, for every ϕ ∈ C^∞_{c,b}(R_+^2),
    ∫ ϕ(t, s) ( (1/n) Σ_{i=1}^n δ_{S^i_{t−}}(ds) ) = (1/n) Σ_{i=1}^n ∫ ϕ(t, s) U^i(t, ds).
The right-hand side is a sum of n i.i.d. random variables with mean ∫ ϕ(t, s) u(t, ds), so (3.14) clearly follows from the law of large numbers.
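As an illustration of this law-of-large-numbers step, the following sketch (ours, with a deliberately simple choice of process) estimates the survival function of the age at a fixed time t by averaging over n i.i.d. copies; each copy is a homogeneous Poisson process with a spike at 0, for which the limit is known in closed form.

```python
# A small Monte Carlo illustration (not from the paper) of the argument of Corollary 3.4:
# the empirical measure of the ages of n i.i.d. processes approximates u(t, .).  Each
# process here is a homogeneous Poisson process of rate lam with a spike at time 0, so
# that the age S_{t-} has the explicit survival function exp(-lam * s) on [0, t].
import numpy as np

rng = np.random.default_rng(0)
lam, t, n = 1.5, 4.0, 20000

def age_at(t, lam, rng):
    # simulate successive spikes after the initial spike at 0; return t minus the last spike < t
    last, cur = 0.0, 0.0
    while True:
        cur += rng.exponential(1.0 / lam)
        if cur >= t:
            return t - last
        last = cur

ages = np.array([age_at(t, lam, rng) for _ in range(n)])
for s in [0.0, 0.5, 1.0, 2.0, 3.0]:
    emp = (ages >= s).mean()                  # empirical P(S_{t-} >= s)
    print(f"s={s:3.1f}  empirical {emp:.3f}   exact {np.exp(-lam*s):.3f}")
```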
B. Proofs linked with the various examples
B.1. Renewal process
Proposition B.1. With the notations of Section 2, let N be a point process on R,
with predictable age process (St− )t>0 , such that T0 = 0 a.s. The following statements
are equivalent:
(i) N+ = N ∩ (0, +∞) is a renewal process with ISI’s distribution given by some
density ν : R+ → R+ .
(ii) N admits λ(t, F^N_{t−}) = f(S_{t−}) as an intensity on (0, +∞) and (λ(t, F^N_{t−}))_{t>0} satisfies (A^{L1,a.s.}_{λ,loc}), for some f : R_+ → R_+.
In such a case, for all x ≥ 0, f and ν satisfy
    • ν(x) = f(x) exp(− ∫_0^x f(y) dy), with the convention exp(−∞) = 0,               (B.1)
    • f(x) = ν(x) / ∫_x^∞ ν(y) dy if ∫_x^∞ ν(y) dy ≠ 0, else f(x) = 0.                  (B.2)
Proof. For (ii) ⇒ (i). Since T_0 = 0 a.s., Point (2) of Proposition B.2, given later on for the general Wold case, implies that the ISIs of N form a Markov chain of order 0, i.e. they are i.i.d. with density given by (B.1).
For (i) ⇒ (ii). Let x_0 = inf{x ≥ 0, ∫_x^∞ ν(y) dy = 0}. It may be infinite. Let us define f by (B.2) for every 0 ≤ x < x_0 and let Ñ be a point process on R such that Ñ_- = N_- and Ñ admits λ(t, F^{Ñ}_{t−}) = f(S^{Ñ}_{t−}) as an intensity on (0, +∞), where (S^{Ñ}_{t−})_{t>0} is the predictable age process associated to Ñ. Applying (ii) ⇒ (i) to Ñ gives that the ISIs of Ñ are i.i.d. with density given by
    ν̃(x) = [ ν(x) / ∫_x^∞ ν(y) dy ] exp( − ∫_0^x [ ν(y) / ∫_y^∞ ν(z) dz ] dy ),
for every 0 ≤ x < x_0, and ν̃(x) = 0 for x ≥ x_0. It is clear that ν = ν̃ since the function x ↦ [ 1 / ∫_x^∞ ν(y) dy ] exp( − ∫_0^x [ ν(y) / ∫_y^∞ ν(z) dz ] dy ) is differentiable with derivative equal to 0.
Since N and Ñ are renewal processes with the same density ν and the same first point T_0 = 0, they have the same distribution. Since the intensity characterizes a point process, N also admits λ(t, F^N_{t−}) = f(S^N_{t−}) as an intensity on (0, +∞). Moreover, since N is a renewal process, it is non-explosive in finite time and so (λ(t, F^N_{t−}))_{t>0} satisfies (A^{L1,a.s.}_{λ,loc}).
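The two relations (B.1)–(B.2) can be checked numerically. The sketch below (ours, not part of the paper; the hazard rate f is an arbitrary Weibull-type choice) builds ν from f via (B.1) on a grid and recovers f back via (B.2).

```python
# A minimal numerical illustration (not from the paper) of (B.1) and (B.2):
# nu(x) = f(x) exp(-int_0^x f) and f(x) = nu(x) / int_x^inf nu(y) dy.
# The hazard rate `f` below is an arbitrary illustrative choice.
import numpy as np

x = np.linspace(0.0, 10.0, 4001)
dx = x[1] - x[0]
f = 1.5 * np.sqrt(x)                                                    # an increasing hazard rate

F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))     # int_0^x f
nu = f * np.exp(-F)                                                     # ISI density from (B.1)

tail = 1.0 - np.concatenate(([0.0], np.cumsum(0.5 * (nu[1:] + nu[:-1]) * dx)))   # int_x^inf nu
f_back = np.where(tail > 1e-12, nu / tail, 0.0)                         # hazard recovered via (B.2)

print("mass of nu   :", (0.5 * (nu[1:] + nu[:-1])).sum() * dx)          # ~1 since the hazard integrates to infinity
print("max |f-f_bk| :", np.max(np.abs(f - f_back)[x < 3.0]))            # small on [0, 3]
```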
B.2. Generalized Wold processes
In this Section, we suppose that there exists k ≥ 0 such that the underlying point process N has intensity
    λ(t, F^N_{t−}) = f(S_{t−}, A^1_t, ..., A^k_t),                                       (B.3)
where f is a function and the A^i's are defined by Equation (2.2).
B.2.1. Markovian property and the resulting PDE
Let N be a point process of intensity given by (B.3). If T_{−k} > −∞, its associated age process (S_t)_t can be defined for t > T_{−k}. Then let, for any integer i ≥ −k,
    A_i = T_{i+1} − T_i = S_{T_{i+1}−},                                                  (B.4)
and denote (F^A_i)_{i≥−k} the natural filtration associated to (A_i)_{i≥−k}.
For any t ≥ 0 and any point process Π on R_+^2, let us denote Π_{≥t} (resp. Π_{>t}) the restriction to R_+^2 (resp. (0, +∞) × R_+) of the point process Π shifted t time units to the left on the first coordinate. That is, Π_{≥t}(C × D) = Π((t + C) × D) for all C ∈ B(R_+), D ∈ B(R_+) (resp. C ∈ B((0, +∞))).
Proposition B.2. Let us consider k a non-negative integer, f some non-negative function on R_+^{k+1}, and N a generalized Wold process of intensity given by (B.3). Suppose that P_0 is such that P_0(T_{−k} > −∞) = 1 and that (λ(t, F^N_{t−}))_{t>0} satisfies (A^{L1,a.s.}_{λ,loc}). Then,
(1) If (X_t)_{t≥0} = (S_{t−}, A^1_t, ..., A^k_t)_{t≥0}, then for any finite non-negative stopping time τ, (X^τ_t)_{t≥0} = (X_{t+τ})_{t≥0} is independent of F^N_{τ−} given X_τ.
(2) The process (A_i)_{i≥1} given by (B.4) forms a Markov chain of order k with transition measure given by
    ν(dx, y_1, ..., y_k) = f(x, y_1, ..., y_k) exp( − ∫_0^x f(z, y_1, ..., y_k) dz ) dx.      (B.5)
If T_0 = 0 a.s., this holds for (A_i)_{i≥0}.
If f is continuous then G, the infinitesimal generator of (X_t)_{t≥0}, is given by: for all φ ∈ C^1(R_+^{k+1}),
    (Gφ)(s, a_1, ..., a_k) = ∂/∂s φ(s, a_1, ..., a_k) + f(s, a_1, ..., a_k) ( φ(0, s, a_1, ..., a_{k−1}) − φ(s, a_1, ..., a_k) ).      (B.6)
Proof. First, let us show the first point of the Proposition. Let Π be such that N is the process resulting from Ogata's thinning with Poisson measure Π. The existence of such a measure is ensured by Proposition B.12. We show that for any finite stopping time τ, the process (X^τ_t)_{t≥0} can be expressed as a function of X_τ and Π_{≥τ}, which is the restriction to R_+^2 of the Poisson process Π shifted τ time units to the left on the first coordinate. Let e_1 = (1, 0, . . . , 0) ∈ R^{k+1}. For all t ≥ 0, let Y_t = X_τ + t e_1 and define
    R_0 = inf{ t ≥ 0, ∫_{[0,t]} ∫_{x=0}^{f(Y_w)} Π_{≥τ}(dw, dx) = 1 }.
Note that R_0 may be null, in particular when τ is a jumping time of the underlying point process N. It is easy to check that R_0 can be expressed as a measurable function of X_τ and Π_{≥τ}. Moreover, it is clear that X^τ_{t∧R_0} = Y_{t∧R_0} for all t ≥ 0. So, R_0 can be seen as the delay until the first point of the underlying process N after time τ. Suppose that R_p, the delay until the (p + 1)th point, is constructed for some p ≥ 0 and let us show how R_{p+1} can be constructed. For t ≥ R_p, let Z_t = θ(X^τ_{R_p}) + t e_1, where θ : (x_1, . . . , x_{k+1}) ↦ (0, x_1, . . . , x_k) is a right shift operator
modelling the dynamics described by (2.3). Let us define
    R_{p+1} = inf{ t > R_p, ∫_{(R_p, R_p+t]} ∫_{x=0}^{f(Z_w)} Π_{≥τ}(dw, dx) = 1 }.      (B.7)
Note that for any p ≥ 0, R_{p+1} cannot be null. It is coherent with the fact that the counting process (N_t)_{t>0} only admits jumps with height 1. It is easy to check that R_{p+1} can be expressed as a measurable function of θ(X^τ_{R_p}) and Π_{>τ+R_p}. It is also clear that X^τ_{t∧R_{p+1}} = Z_{t∧R_{p+1}} for all t ≥ R_p. So, R_{p+1} can be seen as the delay until the (p + 2)th point of the process N after time τ. By induction, X^τ_{R_p} can be expressed as a function of X_τ and Π_{≥τ}, and this holds for R_{p+1} and X^τ_{R_{p+1}} too.
To conclude, remark that the process (X^τ_t)_{t≥0} is a measurable function of X_τ and all the R_p's for p ≥ 0. Thanks to the independence of the Poisson measure Π, F^N_{τ−} is independent of Π_{≥τ}. Then, since (X^τ_t)_{t≥0} is a function of X_τ and Π_{≥τ}, (X^τ_t)_{t≥0} is independent of F^N_{τ−} given X_τ, which concludes the first point.
For Point (2), fix i ≥ 1 and apply Point (1) with τ = T_i. It appears that in this case, R_0 = 0 and R_1 = A_i. Moreover, R_1 = A_i can be expressed as a function of θ(X_τ) and Π_{>τ}. However, θ(X_τ) = (0, A_{i−1}, . . . , A_{i−k}) and F^A_{i−1} ⊂ F^N_{T_i}. Since τ = T_i, Π_{>τ} is independent of F^N_{T_i} and so A_i is independent of F^A_{i−1} given (A_{i−1}, . . . , A_{i−k}). That is, (A_i)_{i≥1} forms a Markov chain of order k.
Note that if T_0 = 0 a.s. (in particular it is non-negative), then one can use the previous argumentation with τ = 0 and conclude that the Markov chain starts one time step earlier, i.e. (A_i)_{i≥0} forms a Markov chain of order k.
For (B.5), R_1 = A_i, defined by (B.7), has the same distribution as the first point of a Poisson process with intensity λ(t) = f(t, A_{i−1}, . . . , A_{i−k}), thanks to the thinning Theorem. Hence, the transition measure of (A_i)_{i≥1} is given by (B.5).
Now that (X_t)_{t≥0} is Markovian, one can compute its infinitesimal generator. Suppose that f is continuous and let φ ∈ C^1_b(R_+^{k+1}). The generator of (X_t)_{t≥0} is defined by Gφ(s, a_1, . . . , a_k) = lim_{h→0^+} (P_h − Id) φ / h, where
    P_h φ(s, a_1, . . . , a_k) = E[ φ(X_h) | X_0 = (s, a_1, . . . , a_k) ]
        = E[ φ(X_h) 1_{N([0,h])=0} | X_0 = (s, a_1, . . . , a_k) ] + E[ φ(X_h) 1_{N([0,h])>0} | X_0 = (s, a_1, . . . , a_k) ]
        = E_0 + E_{>0}.
The case with no jump is easy to compute,
    E_0 = φ(s + h, a_1, . . . , a_k) ( 1 − f(s, a_1, . . . , a_k) h ) + o(h),            (B.8)
thanks to the continuity of f. When h is small, the probability of having more than two jumps in [0, h] is o(h), so the second case can be reduced to the case with
exactly one random jump (namely T),
    E_{>0} = E[ φ(X_h) 1_{N([0,h])=1} | X_0 = (s, a_1, . . . , a_k) ] + o(h)
        = E[ φ( θ(X_0 + T) + (h − T) e_1 ) 1_{N∩[0,h]={T}} | X_0 = (s, a_1, . . . , a_k) ] + o(h)
        = E[ ( φ(0, s, a_1, . . . , a_{k−1}) + o(1) ) 1_{N∩[0,h]={T}} | X_0 = (s, a_1, . . . , a_k) ] + o(h)
        = φ(0, s, a_1, . . . , a_{k−1}) f(s, a_1, . . . , a_k) h + o(h),                  (B.9)
thanks to the continuity of φ and f. Gathering (B.8) and (B.9) with the definition of the generator gives (B.6).
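Point (2) suggests a direct way to simulate the ISI chain of a generalized Wold process: each new ISI is the first point of an inhomogeneous Poisson process whose rate is f evaluated along the current age and the k previous ISIs. The following sketch (ours, with k = 1 and an arbitrary choice of f, not the paper's algorithm) samples the chain by crude inverse-hazard integration.

```python
# A small simulation sketch of Point (2) of Proposition B.2: the ISIs (A_i) of a
# generalized Wold process form a Markov chain whose transition (B.5) is that of the
# first point of a Poisson process with rate z -> f(z, y_1, ..., y_k).  The intensity
# function `f` below (k = 1) is an arbitrary illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)

def f(s, a1):
    # rate depending on the current age s and the previous ISI a1
    return 0.5 + 1.0 / (1.0 + a1) + 0.2 * s

def next_isi(a1, rng, dz=1e-3, zmax=100.0):
    # inverse-hazard sampling: find A with int_0^A f(z, a1) dz = E, where E ~ Exp(1)
    target = rng.exponential(1.0)
    z, acc = 0.0, 0.0
    while acc < target and z < zmax:
        acc += f(z, a1) * dz
        z += dz
    return z

isis = [1.0]                                   # A_0, an arbitrary initial ISI
for _ in range(2000):
    isis.append(next_isi(isis[-1], rng))
isis = np.array(isis)
print("mean ISI:", isis.mean(), " lag-1 correlation:", np.corrcoef(isis[:-1], isis[1:])[0, 1])
```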
B.2.2. Sketch of proof of Proposition 4.1
Let N be the point process constructed by Ogata's thinning of the Poisson process Π and let U_k be as defined in Proposition 4.1. By an easy generalisation of Proposition 3.1, one can prove that on the event Ω of probability 1 where Ogata's thinning is well defined and where T_0 < 0, U_k satisfies (P_{Fubini}), (4.3) and, on R_+ × R_+^{k+1}, the following system in the weak sense:
    (∂/∂t + ∂/∂s) U_k(dt, ds, da) + ( ∫_{x=0}^{f(s,a_1,...,a_k)} Π(dt, dx) ) U_k(t, ds, da) = 0,
    ∫_{a_k∈R} U_k(dt, 0, ds, da_1, ..., da_{k−1}) = ( ∫_{x=0}^{f(s,a_1,...,a_k)} Π(dt, dx) ) U_k(t, ds, da),
with da = da_1 × ... × da_k and initial condition U^{in} = δ_{(−T_0, A^1_0, ..., A^k_0)}.
Similarly to Proposition 3.2, one can also prove that for any test function ϕ in M_{c,b}(R_+^{k+2}), E[∫ ϕ(t, s, a) U_k(t, ds, da)] and E[∫ ϕ(t, s, a) U_k(dt, s, da)] are finite and one can define u_k(t, ds, da) and u_k(dt, s, da) by, for all ϕ in M_{c,b}(R_+^{k+2}),
    ∫ ϕ(t, s, a) u_k(t, ds, da) = E[ ∫ ϕ(t, s, a) U_k(t, ds, da) ],
for all t ≥ 0, and
    ∫ ϕ(t, s, a) u_k(dt, s, da) = E[ ∫ ϕ(t, s, a) U_k(dt, s, da) ],
for all s ≥ 0. Moreover, u_k(t, ds, da) and u_k(dt, s, da) satisfy (P_{Fubini}) and one can define u_k(dt, ds, da) = u_k(t, ds, da) dt = u_k(dt, s, da) ds on R_+^2, such that for any test function ϕ in M_{c,b}(R_+^{k+2}),
    ∫ ϕ(t, s, a) u_k(dt, ds, da) = E[ ∫ ϕ(t, s, a) U_k(dt, ds, da) ],
a quantity which is finite. The end of the proof is completely analogous to the one of Theorem 3.3.
B.3. Linear Hawkes processes
B.3.1. Cluster decomposition
Proposition B.3. Let g be a non-negative L^1_{loc}(R_+) function and h a non-negative L^1(R_+) function such that ||h||_1 < 1. Then the branching point process N is defined as ∪_{k=0}^∞ N_k, the set of all the points in all generations constructed as follows:
    • Ancestral points are N_{anc}, distributed as a Poisson process of intensity g; N_0 := N_{anc} can be seen as the points of generation 0.
    • Conditionally on N_{anc}, each ancestor a ∈ N_{anc} gives birth, independently of anything else, to children points N_{1,a} according to a Poisson process of intensity h(· − a); N_1 = ∪_{a∈N_{anc}} N_{1,a} forms the first generation points.
Then the construction is recursive in k, the number of generations:
    • Denoting N_k the set of points in generation k, then conditionally on N_k, each point x ∈ N_k gives birth, independently of anything else, to children points N_{k+1,x} according to a Poisson process of intensity h(· − x); N_{k+1} = ∪_{x∈N_k} N_{k+1,x} forms the points of the (k + 1)th generation.
This construction ends almost surely in every finite interval. Moreover, the intensity of N exists and is given by
    λ(t, F^N_{t−}) = g(t) + ∫_0^{t−} h(t − x) N(dx).
This is the cluster representation of the Hawkes process. When g ≡ ν, this has been proved in 20. However, to our knowledge, this has not been written for a general function g.
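The branching construction above translates directly into a simulation algorithm. The sketch below (an illustration under simplifying assumptions: constant immigration rate g and an exponential reproduction kernel h with mass below 1; it is not the paper's code) generates the generations recursively and discards descendants born after the horizon A, exactly as in the proof that follows.

```python
# A simulation sketch of the branching (cluster) construction of Proposition B.3 for a
# linear Hawkes process on [0, A].  The constant rate g and the exponential kernel h
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
A = 50.0
g = lambda t: 1.0                              # ancestral (immigration) intensity
h_mass, h_rate = 0.6, 2.0                      # h(x) = h_mass * h_rate * exp(-h_rate * x), ||h||_1 = 0.6 < 1

def poisson_children(parent, rng):
    # children of one point: a Poisson process of intensity h(. - parent)
    n = rng.poisson(h_mass)                    # the offspring number has mean ||h||_1
    return parent + rng.exponential(1.0 / h_rate, size=n)

# generation 0: ancestors, a Poisson process of (constant) intensity g on [0, A]
n_anc = rng.poisson(g(0.0) * A)
generation = np.sort(rng.uniform(0.0, A, size=n_anc))
points = [generation]
while generation.size > 0:                     # sub-critical Galton-Watson: ends a.s.
    children = np.concatenate([poisson_children(p, rng) for p in generation])
    generation = np.sort(children[children <= A])   # descendants born after A are discarded
    points.append(generation)

N = np.sort(np.concatenate(points))
print("number of points on [0, A]:", N.size, " expected roughly", A / (1 - h_mass))
```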
Proof. First, let us fix some A > 0. The construction ends almost surely in [0, A] because there is a.s. a finite number of ancestors in [0, A]: if we consider the family of points attached to one particular ancestor, the numbers of points in the successive generations form a sub-critical Galton–Watson process with reproduction distribution a Poisson variable with mean ∫ h < 1, whose extinction is consequently almost sure.
Next, to prove that N has intensity
    H(t) = g(t) + ∫_0^{t−} h(t − x) N(dx),
we exhibit a particular thinning construction in which, on the one hand, N is indeed a branching process as defined by the proposition and which, on the other hand, guarantees that Ogata's thinning projects exactly the points below H(t). We can always assume that h(0) = 0, since changing the intensity of a Poisson process in the branching structure at one particular point has no impact. Hence H(t) = g(t) + ∫_0^t h(t − x) N(dx).
The construction is recursive in the same way. Fix some realisation Π of a Poisson process on R_+^2.
For N_{anc}, project the points below the curve t ↦ g(t) on [0, A]. By construction, N_{anc} is a Poisson process of intensity g(t) on [0, A]. Note that for the identification (see Theorem B.11) we just need to do it on finite intervals and that the ancestors that may be born after time A do not have any descendants in [0, A], so we can discard them, since they do not appear in H(t) for t ≤ A.
Enumerate the points in N_{anc} ∩ [0, A] from T_1 to T_{N_{0,∞}}.
    • The children of T_1, N_{1,T_1}, are given by the projection of the points of Π whose ordinates are in the strip t ↦ (g(t), g(t) + h(t − T_1)]. As before, by the property of spatial independence of Π, this is a Poisson process of intensity h(· − T_1) conditionally on N_{anc}.
    • Repeat until T_{N_{0,∞}}, where N_{1,T_{N_{0,∞}}} are given by the projection of the points of Π whose ordinates are in the strip t ↦ (g(t) + Σ_{i=1}^{N_{0,∞}−1} h(t − T_i), g(t) + Σ_{i=1}^{N_{0,∞}} h(t − T_i)]. As before, by the property of independence of Π, this is a Poisson process of intensity h(· − T_{N_{0,∞}}) conditionally on N_{anc}, and because the consecutive strips do not overlap, this process is completely independent of the previously constructed processes (N_{1,T_i})'s.
Note that at the end of this first generation, N_1 = ∪_{T∈N_{anc}} N_{1,T} consists of the projection of the points of Π in the strip t ↦ (g(t), g(t) + Σ_{i=1}^{N_{0,∞}} h(t − T_i)]. They therefore form a Poisson process of intensity Σ_{i=1}^{N_{0,∞}} h(t − T_i) = ∫ h(t − u) N_{anc}(du), conditionally on N_{anc}.
For generation k + 1, replace in the previous construction N_{anc} by N_k and g(t) by g(t) + Σ_{j=0}^{k−1} ∫ h(t − u) dN_j(u). Once again we end up, for each point x in N_k, with a process of children N_{k+1,x} which is a Poisson process of intensity h(t − x) conditionally on N_k and which is totally independent of the other N_{k+1,y}'s. Note also that, as before, N_{k+1} = ∪_{x∈N_k} N_{k+1,x} is a Poisson process of intensity ∫ h(t − u) N_k(du), conditionally on N_0, ..., N_k.
Hence we are indeed constructing a branching process as defined by the proposition. Because the underlying Galton–Watson process ends almost surely, as shown before, there exists a.s. one generation N_{k^*} which is completely empty and our recursive construction ends as well.
The main point is to realize that at the end the points in N = ∪_{k=0}^∞ N_k are exactly the projection of the points in Π that are below
    t ↦ g(t) + Σ_{k=0}^∞ ∫ h(t − u) N_k(du) = g(t) + Σ_{k=0}^∞ ∫_0^t h(t − u) N_k(du),
hence below
    t ↦ g(t) + ∫_0^t h(t − u) N(du) = H(t).
Moreover, H(t) is F^N_t-predictable. Therefore, by Theorem B.11, N has intensity H(t), which concludes the proof.
A cluster process N_c is a branching process, as defined before, which admits intensity λ(t, F^{N_c}_{t−}) = h(t) + ∫_0^{t−} h(t − z) N_c(dz). Its distribution only depends on the function h. It corresponds to the family generated by one ancestor at time 0 in Proposition B.3. Therefore, by Proposition B.3, a Hawkes process with empty past (N_- = ∅) of intensity λ(t, F^N_{t−}) = g(t) + ∫_0^{t−} h(t − z) N(dz) can always be seen as the union of N_{anc} and of all the a + N_c^a for a ∈ N_{anc}, where the N_c^a are i.i.d. cluster processes.
For a Hawkes process N with non-empty past N_-, this is more technical. Let N_{anc} be a Poisson process of intensity g on R_+ and (N_c^V)_{V∈N_{anc}} be a sequence of i.i.d. cluster processes associated to h. Let also
    N_{>0} = N_{anc} ∪ ( ∪_{V∈N_{anc}} (V + N_c^V) ).                                    (B.10)
As we prove below, this represents the points in N that do not depend on N_-. The points that depend on N_- are constructed as follows, independently of N_{>0}. Given N_-, let (N_1^T)_{T∈N_-} denote a sequence of independent Poisson processes with respective intensities λ^T(v) = h(v − T) 1_{(0,∞)}(v). Then, given N_- and (N_1^T)_{T∈N_-}, let (N_c^{T,V})_{V∈N_1^T, T∈N_-} be a sequence of i.i.d. cluster processes associated to h. The points depending on the past N_- are given by the following formula, as proved in the next Proposition:
    N_{≤0} = N_- ∪ ( ∪_{T∈N_-} ( N_1^T ∪ ( ∪_{V∈N_1^T} (V + N_c^{T,V}) ) ) ).            (B.11)
Proposition B.4. Let N = N_{≤0} ∪ N_{>0}, where N_{>0} and N_{≤0} are given by (B.10) and (B.11). Then N is a linear Hawkes process with past given by N_- and intensity on (0, ∞) given by λ(t, F^N_{t−}) = g(t) + ∫_{-∞}^{t−} h(t − x) N(dx), where g and h are as in Proposition B.3.
Proof. Proposition B.3 yields that N_{>0} has intensity
    λ_{N_{>0}}(t, F^{N_{>0}}_{t−}) = g(t) + ∫_0^{t−} h(t − x) N_{>0}(dx),                (B.12)
and that, given N_-, for any T ∈ N_-, N_H^T = N_1^T ∪ ( ∪_{V∈N_1^T} (V + N_c^{T,V}) ) has intensity
    λ_{N_H^T}(t, F^{N_H^T}_{t−}) = h(t − T) + ∫_0^{t−} h(t − x) N_H^T(dx).               (B.13)
Moreover, all these processes are independent given N_-. For any t ≥ 0, one can note that F^{N_{≤0}}_t ⊂ G_t := F^{N_-}_0 ∨ ( ∨_{T∈N_-} F^{N_H^T}_t ), and so N_{≤0} has intensity
    λ_{N_{≤0}}(t, G_{t−}) = Σ_{T∈N_-} λ_{N_H^T}(t, F^{N_H^T}_{t−}) = ∫_{-∞}^{t−} h(t − x) N_{≤0}(dx)      (B.14)
on (0, +∞). Since this last expression is F^{N_{≤0}}_t-predictable, by page 27 in 3, this is also λ_{N_{≤0}}(t, F^{N_{≤0}}_{t−}). Moreover, N_{≤0} and N_{>0} are independent by construction and, for any t ≥ 0, F^N_t ⊂ F^{N_{≤0}}_t ∨ F^{N_{>0}}_t. Hence, as before, N has intensity on (0, +∞) given by
    λ(t, F^N_{t−}) = λ(t, F^{N_{≤0}}_{t−}) + λ(t, F^{N_{>0}}_{t−}) = g(t) + ∫_{-∞}^{t−} h(t − x) N(dx).
B.3.2. A general result for linear Hawkes processes
The following proposition is a consequence of Theorem 3.3 applied to Hawkes processes with general past N_-.
Proposition B.5. Using the notations of Theorem 3.3, let N be a Hawkes process with past before 0 given by N_- of distribution P_0 and with intensity on R_+ given by
    λ(t, F^N_{t−}) = μ + ∫_{-∞}^{t−} h(t − x) N(dx),
where μ is a positive real number and h is a non-negative function with support in R_+ such that ∫ h < 1. Suppose that P_0 is such that
    sup_{t≥0} E[ ∫_{-∞}^0 h(t − x) N_-(dx) ] < ∞.                                        (B.15)
Then the mean measure u defined in Proposition 3.2 satisfies Theorem 3.3 and moreover its integral v(t, s) := ∫_s^∞ u(t, dσ) is a solution of the system (4.14)–(4.15), where v^{in} is the survival function of −T_0, and where Φ = Φ^{μ,h}_{P_0} is given by Φ^{μ,h}_{P_0} = Φ^{μ,h}_+ + Φ^h_{-,P_0}, with Φ^{μ,h}_+ given by (4.17) and Φ^h_{-,P_0} given by,
    ∀ s, t ≥ 0,   Φ^h_{-,P_0}(t, s) = E[ ∫_{-∞}^{t−} h(t − z) N_{≤0}(dz) | N_{≤0}([t − s, t)) = 0 ].      (B.16)
Moreover, (4.20) holds.
B.3.3. Proof of the general result of Proposition B.5
Before proving Proposition B.5, we need some technical preliminaries.
Events of the type {S_{t−} ≥ s} are equivalent to the fact that the underlying process has no point between t − s and t. Therefore, for any point process N and any real numbers t, s ≥ 0, let
    E_{t,s}(N) = {N ∩ [t − s, t) = ∅}.                                                   (B.17)
Various sets E_{t,s}(N) are used in the sequel, and the following lemma, whose proof is obvious and therefore omitted, is applied several times to those sets.
Lemma B.6. Let Y be some random variable and I(Y) some countable set of indices depending on Y. Suppose that (X_i)_{i∈I(Y)} is a sequence of random variables which are independent conditionally on Y. Let A(Y) be some event depending on Y and, for all j ∈ I(Y), let B_j = B_j(Y, X_j) be some event depending on Y and X_j. Then, for any i ∈ I(Y) and for all sequences of measurable functions (f_i)_{i∈I(Y)} such that the following quantities exist,
    E[ Σ_{i∈I(Y)} f_i(Y, X_i) | A_{#B} ] = E[ Σ_{i∈I(Y)} E[ f_i(Y, X_i) | Y, B_i ] | A_{#B} ],
where E[ f_i(Y, X_i) | Y, B_i ] = E[ f_i(Y, X_i) 1_{B_i} | Y ] / P(B_i | Y) and A_{#B} = A(Y) ∩ ( ∩_{j∈I(Y)} B_j ).
The following lemma is linked to Lemma 4.2.
Lemma B.7. Let N be a linear Hawkes process with no past before time 0 (i.e. N_- = ∅) and intensity on (0, ∞) given by λ(t, F^N_{t−}) = g(t) + ∫_0^{t−} h(t − x) N(dx), where g and h are as in Proposition B.3, and let, for any x, s ≥ 0,
    L^{g,h}_s(x) = E[ ∫_0^x h(x − z) N(dz) | E_{x,s}(N) ],
    G^{g,h}_s(x) = P( E_{x,s}(N) ).
Then, for any x, s ≥ 0,
    L^{g,h}_s(x) = ∫_{s∧x}^x ( h(z) + L^{h,h}_s(z) ) G^{h,h}_s(z) g(x − z) dz,           (B.18)
and
    log( G^{g,h}_s(x) ) = ∫_0^{(x−s)∨0} G^{h,h}_s(x − z) g(z) dz − ∫_0^x g(z) dz.        (B.19)
In particular, (L^{h,h}_s, G^{h,h}_s) is in L^1 × L^∞ and is a solution of (4.11)-(4.12).
Proof. The statement only depends on the distribution of N. Hence, thanks to Proposition B.4, it is sufficient to consider N = N_{anc} ∪ ( ∪_{V∈N_{anc}} (V + N_c^V) ).
Let us show (B.18). First, let us write L^{g,h}_s(x) = E[ Σ_{X∈N} h(x − X) | E_{x,s}(N) ] and note that L^{g,h}_s(x) = 0 if x ≤ s. The following decomposition holds:
    L^{g,h}_s(x) = E[ Σ_{V∈N_{anc}} ( h(x − V) + Σ_{W∈N_c^V} h(x − V − W) ) | E_{x,s}(N) ].
According to Lemma B.6 and the following decomposition,
    E_{x,s}(N) = E_{x,s}(N_{anc}) ∩ ( ∩_{V∈N_{anc}} E_{x−V,s}(N_c^V) ),                  (B.20)
let us denote Y = N_{anc}, X_V = N_c^V and B_V = E_{x−V,s}(N_c^V) for all V ∈ N_{anc}. Let us fix V ∈ N_{anc} and compute the conditional expectation of the inner sum with respect to the filtration of N_{anc}, which is
    E[ Σ_{W∈N_c^V} h(x − V − W) | Y, B_V ] = E[ Σ_{W∈N_c} h((x − V) − W) | E_{x−V,s}(N_c) ] = L^{h,h}_s(x − V),      (B.21)
since, conditionally on N_{anc}, N_c^V has the same distribution as N_c, which is a linear Hawkes process with conditional intensity λ(t, F^{N_c}_{t−}) = h(t) + ∫_0^{t−} h(t − z) N_c(dz). Using the conditional independence of the cluster processes with respect to N_{anc}, one can apply Lemma B.6 and deduce that
    L^{g,h}_s(x) = E[ Σ_{V∈N_{anc}} ( h(x − V) + L^{h,h}_s(x − V) ) | E_{x,s}(N) ].
The following argument is inspired by Møller 28. For every V ∈ N_{anc}, we say that V has mark 0 if V has no descendant, nor itself, in [x − s, x), and mark 1 otherwise. Let us denote N^0_{anc} the set of points with mark 0 and N^1_{anc} = N_{anc} \ N^0_{anc}. For any V ∈ N_{anc}, we have P(V ∈ N^0_{anc} | N_{anc}) = G^{h,h}_s(x − V) 1_{[x−s,x)^c}(V), and all the marks are chosen independently given N_{anc}. Hence, N^0_{anc} and N^1_{anc} are independent Poisson processes and the intensity of N^0_{anc} is given by λ(v) = g(v) G^{h,h}_s(x − v) 1_{[x−s,x)^c}(v). Moreover, the event {N^1_{anc} = ∅} can be identified with E_{x,s}(N), and
    L^{g,h}_s(x) = E[ Σ_{V∈N^0_{anc}} ( h(x − V) + L^{h,h}_s(x − V) ) | N^1_{anc} = ∅ ]
        = ∫_{-∞}^{x−} ( h(x − w) + L^{h,h}_s(x − w) ) g(w) G^{h,h}_s(x − w) 1_{[x−s,x)^c}(w) dw
        = ∫_0^{(x−s)∨0} ( h(x − w) + L^{h,h}_s(x − w) ) G^{h,h}_s(x − w) g(w) dw,
where we used the independence between the two Poisson processes. It suffices to substitute w by z = x − w in the integral to get the desired formula. Since G^{h,h}_s is bounded, it is obvious that L^{h,h}_s is L^1.
Then, let us show (B.19). First note that if x < 0, G^{g,h}_s(x) = 1. Next, following (B.20), one has G^{g,h}_s(x) = E[ 1_{E_{x,s}(N_{anc})} Π_{X∈N_{anc}} 1_{E_{x−X,s}(N_c^X)} ]. This is also
    G^{g,h}_s(x) = E[ 1_{N_{anc}∩[x−s,x)=∅} Π_{V∈N_{anc}∩[x−s,x)^c} 1_{E_{x−V,s}(N_c^V)} ]
        = E[ 1_{N_{anc}∩[x−s,x)=∅} Π_{V∈N_{anc}∩[x−s,x)^c} G^{h,h}_s(x − V) ],
by conditioning with respect to N_{anc}. Since N_{anc} ∩ [x − s, x) is independent of N_{anc} ∩ [x − s, x)^c, this gives
    G^{g,h}_s(x) = exp( − ∫_{x−s}^x g(z) dz ) E[ exp( ∫_{[x−s,x)^c} log(G^{h,h}_s(x − z)) N_{anc}(dz) ) ].
This leads to log(G^{g,h}_s(x)) = − ∫_{x−s}^x g(z) dz + ∫_{[x−s,x)^c} ( G^{h,h}_s(x − z) − 1 ) g(z) dz, thanks to Campbell's Theorem 23. Then, (B.19) clearly follows from the facts that G^{h,h}_s(x − z) = 1 if z > x > 0 and that g(z) = 0 as soon as z < 0.
Proof of Lemma 4.2. We use a Banach fixed-point argument to prove that for all s ≥ 0 there exists a unique couple (L_s, G_s) ∈ L^1(R_+) × L^∞(R_+) solution to these equations. To do so, let us first study Equation (4.11) and define T_{G,s} : L^∞(R_+) → L^∞(R_+) by
    T_{G,s}(f)(x) := exp( ∫_0^{(x−s)∨0} f(x − z) h(z) dz − ∫_0^x h(z) dz ).
The right-hand side is well-defined since h ∈ L^1 and f ∈ L^∞. Moreover we have
    T_{G,s}(f)(x) ≤ exp( ||f||_{L∞} ∫_0^{(x−s)∨0} h(z) dz − ∫_0^x h(z) dz ) ≤ exp( (||f||_{L∞} − 1) ∫_0^{(x−s)∨0} h(z) dz ).
This shows that T_{G,s} maps the ball of radius 1 of L^∞ into itself, and more precisely into the intersection of the positive cone and the ball. We distinguish two cases:
    − If x < s, then T_{G,s}(f)(x) = exp(− ∫_0^x h(z) dz) for any f; thus the unique fixed point is given by G_s : x ↦ exp(− ∫_0^x h(z) dz), which does not depend on s > x.
    − If x > s, the functional T_{G,s} is a k-contraction in {f ∈ L^∞(R_+), ||f||_{L∞} ≤ 1}, with k ≤ ∫_0^∞ h(z) dz < 1, by convexity of the exponential. More precisely, using that for all x, y, |e^x − e^y| ≤ e^{max(x,y)} |x − y|, we end up with, for ||f||_{L∞}, ||g||_{L∞} ≤ 1,
    |T_{G,s}(f)(x) − T_{G,s}(g)(x)| ≤ e^{− ∫_0^x h(z) dz} e^{∫_0^{x−s} h(z) dz} ||f − g||_{L∞} ∫_0^{x−s} h(z) dz ≤ ||f − g||_{L∞} ∫_{R_+} h(z) dz.
Hence there exists only one fixed point G_s, which we can identify with the G^{h,h}_s given in Lemma B.7; G^{h,h}_s being a probability, G_s takes values in [0, 1].
Analogously, we define the functional T_{L,s} : L^1(R_+) → L^1(R_+) by
    T_{L,s}(f)(x) := ∫_{s∧x}^x ( h(z) + f(z) ) G_s(z) h(x − z) dz,
and it is easy to check that T_{L,s} is well-defined as well. We similarly distinguish the two cases:
    − If x < s, then the unique fixed point is given by L_s(x) = 0.
    − If x > s, then T_{L,s} is a k-contraction with k ≤ ∫_0^∞ h(y) dy < 1 in L^1((s, ∞)), since ||G_s||_{L∞} ≤ 1:
    ||T_{L,s}(f) − T_{L,s}(g)||_{L^1} = ∫_s^∞ | ∫_s^x ( f(z) − g(z) ) G_s(z) h(x − z) dz | dx
        ≤ ||G_s||_{L∞} ∫_s^∞ ∫_z^∞ |f(z) − g(z)| h(x − z) dx dz = ||G_s||_{L∞} ||f − g||_{L^1((s,∞))} ∫_0^∞ h(y) dy.
In the same way, there exists only one fixed point L_s = L^{h,h}_s given by Lemma B.7. In particular, L_s(x) ≡ 0 for x ≤ s.
Finally, as a consequence of Equation (4.12), we find that if L_s is the unique fixed point of T_{L,s}, then ||L_s||_{L^1(R_+)} ≤ ( ∫_0^∞ h(y) dy )^2 / ( 1 − ∫_0^∞ h(y) dy ), and therefore L_s is uniformly bounded in L^1 with respect to s.
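The two fixed-point maps T_{G,s} and T_{L,s} used in this proof are contractions with ratio at most ||h||_1, so they can be iterated numerically. The sketch below (ours; the exponential kernel h, the value of s and the grid are illustrative assumptions) computes G_s and L_s on a grid and checks the L^1 bound on L_s obtained at the end of the proof.

```python
# A numerical sketch of the Banach fixed-point iterations of Lemma 4.2:
# G_s = T_{G,s}(G_s) and L_s = T_{L,s}(L_s), discretized on a grid.
# The kernel h, the age s and the grid are illustrative assumptions.
import numpy as np

xmax, nx = 20.0, 801
x = np.linspace(0.0, xmax, nx); dx = x[1] - x[0]
h = 0.6 * 2.0 * np.exp(-2.0 * x)                # h(x) = 0.6 * 2 * exp(-2x), ||h||_1 = 0.6 < 1
s = 1.0

def conv_upto(f, upper):                        # int_0^{min(upper(x), x)} f(x - z) h(z) dz, Riemann sum
    out = np.zeros_like(x)
    for i, xi in enumerate(x):
        m = int(min(max(upper[i], 0.0), xi) / dx)
        if m > 0:
            out[i] = np.sum(f[i - m + 1:i + 1][::-1] * h[:m]) * dx
    return out

H = np.concatenate(([0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * dx)))   # int_0^x h

G = np.ones_like(x)
for _ in range(200):                            # contraction with ratio <= ||h||_1
    G_new = np.exp(conv_upto(G, x - s) - H)
    if np.max(np.abs(G_new - G)) < 1e-10:
        break
    G = G_new

L = np.zeros_like(x)
for _ in range(200):
    integrand = (h + L) * G
    L_new = np.zeros_like(x)
    for i in range(nx):
        lo = int(min(s, x[i]) / dx)
        if i > lo:
            L_new[i] = np.sum(integrand[lo:i + 1] * h[:i + 1 - lo][::-1]) * dx
    if np.max(np.abs(L_new - L)) < 1e-8:
        break
    L = L_new

# the proof gives ||L_s||_1 <= ||h||_1^2 / (1 - ||h||_1) = 0.9 for this kernel
print("G_s(0.5), G_s(5.0):", G[int(0.5/dx)], G[int(5.0/dx)], "   ||L_s||_1 ~", L.sum() * dx)
```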
Lemma B.8. Let N be a linear Hawkes process with past before time 0 given by N_- and intensity on (0, ∞) given by λ(t, F^N_{t−}) = μ + ∫_{-∞}^{t−} h(t − x) N(dx), where μ is a positive real number and h is a non-negative function with support in R_+, such that ||h||_{L^1} < 1. If the distribution of N_- satisfies (B.15), then (A^{L1,exp}_{λ,loc}) is satisfied.
Proof. For all t > 0, let λ(t) = E[λ(t, F^N_{t−})]. By Proposition B.4, λ(t) = E[μ + ∫_0^{t−} h(t − x) N_{>0}(dx)] + E[∫_{-∞}^{t−} h(t − x) N_{≤0}(dx)], which is possibly infinite.
Let us apply Lemma B.7 with g ≡ μ and s = 0, the choice s = 0 implying that E_{t,0}(N_{>0}) is of probability 1. Therefore
    E[ μ + ∫_0^{t−} h(t − x) N_{>0}(dx) ] = μ ( 1 + ∫_0^t ( h(x) + L_0(x) ) dx ),
where (L_0, G_0 = 1) is the solution of Lemma 4.2 for s = 0, by identification with Lemma B.7. Hence E[μ + ∫_0^{t−} h(t − x) N_{>0}(dx)] ≤ μ(1 + ||h||_{L^1} + ||L_0||_{L^1}).
On the other hand, thanks to Lemma B.9, we have
    E[ ∫_{-∞}^{t−} h(t − x) N_{≤0}(dx) ] = E[ Σ_{T∈N_-} ( h(t − T) + ∫_0^t ( h(t − x) + L_0(t − x) ) h(x − T) dx ) ].
Since all the quantities are non-negative, one can exchange all the integrals and deduce that
    E[ ∫_{-∞}^{t−} h(t − x) N_{≤0}(dx) ] ≤ M ( 1 + ||h||_{L^1} + ||L_0||_{L^1} ),
with M = sup_{t≥0} E[ ∫_{-∞}^0 h(t − x) N_-(dx) ], which is finite by assumption. Hence λ(t) ≤ (μ + M)(1 + ||h||_{L^1} + ||L_0||_{L^1}), and therefore (A^{L1,exp}_{λ,loc}) is satisfied.
Proof of Proposition B.5. First, by Proposition B.4,
    E[ λ(t, F^N_{t−}) | S_{t−} ≥ s ] = μ + E[ ∫_0^{t−} h(t − z) N_{>0}(dz) | E_{t,s}(N) ] + E[ ∫_{-∞}^{t−} h(t − z) N_{≤0}(dz) | E_{t,s}(N) ]
        = μ + E[ ∫_0^{t−} h(t − z) N_{>0}(dz) | E_{t,s}(N_{>0}) ] + E[ ∫_{-∞}^{t−} h(t − z) N_{≤0}(dz) | E_{t,s}(N_{≤0}) ].
By Lemma B.7, we obtain E[λ(t, F^N_{t−}) | S_{t−} ≥ s] = μ + L^{μ,h}_s(t) + Φ^h_{-,P_0}(t, s). Identifying, by Lemma 4.2, L_s = L^{h,h}_s and G_s = G^{h,h}_s, we obtain
    E[ λ(t, F^N_{t−}) | S_{t−} ≥ s ] = Φ^{μ,h}_+(t, s) + Φ^h_{-,P_0}(t, s).
Hence Φ^{μ,h}_{P_0}(t, s) = E[λ(t, F^N_{t−}) | S_{t−} ≥ s].
Lemma B.8 ensures that the assumptions of Theorem 3.3 are fulfilled. Let u and ρ^{μ,h}_{P_0} = ρ_{λ,P_0} be defined accordingly as in Theorem 3.3. With respect to the PDE system, there are two possibilities to express E[λ(t, F^N_{t−}) 1_{S_{t−}≥s}]. The first one involves ρ_{λ,P_0} and is E[ρ^{μ,h}_{P_0}(t, S_{t−}) 1_{S_{t−}≥s}], whereas the second one involves Φ^{μ,h}_{P_0} and is Φ^{μ,h}_{P_0}(t, s) P(S_{t−} ≥ s).
This leads to ∫_s^{+∞} ρ^{μ,h}_{P_0}(t, x) u(t, dx) = Φ^{μ,h}_{P_0}(t, s) ∫_s^{+∞} u(t, dx), since u(t, ds) is the distribution of S_{t−}. Let us denote v(t, s) = ∫_s^{+∞} u(t, dx): this relation, together with Equation (3.10) for u, immediately gives us that v satisfies Equation (4.14) with Φ = Φ^{μ,h}_{P_0}. Moreover, ∫_0^{+∞} u(t, dx) = 1, which gives us the boundary condition in (4.15).
B.3.4. Study of the general case for Φ^h_{-,P_0} in Proposition B.5
Lemma B.9. Let us consider h a non-negative function with support in R_+ such that ∫ h < 1, N_- a point process on R_- with distribution P_0 and N_{≤0} defined by (B.11). If Φ^h_{-,P_0}(t, s) := E[ ∫_{-∞}^{t−} h(t − z) N_{≤0}(dz) | E_{t,s}(N_{≤0}) ] for all s, t ≥ 0, then
    Φ^h_{-,P_0}(t, s) = E[ Σ_{T∈N_-} ( h(t − T) + K_s(t, T) ) | E_{t,s}(N_{≤0}) ],        (B.22)
where K_s(t, u) is given by (4.13).
Proof. Following the decomposition given in Proposition B.4, one has
    Φ^h_{-,P_0}(t, s) = E[ Σ_{T∈N_-} ( h(t − T) + Σ_{V∈N_1^T} ( h(t − V) + Σ_{W∈N_c^{T,V}} h(t − V − W) ) ) | E_{t,s}(N_{≤0}) ],
where E_{t,s}(N_{≤0}) = E_{t,s}(N_-) ∩ ( ∩_{T'∈N_-} E_{t,s}(N_1^{T'}) ) ∩ ( ∩_{T'∈N_-} ∩_{V'∈N_1^{T'}} E_{t−V',s}(N_c^{T',V'}) ). Let us fix T ∈ N_-, V ∈ N_1^T and compute the conditional expectation of the inner sum with respect to N_- and N_1^T. In the same way as for (B.21), we end up with
    E[ Σ_{W∈N_c^{T,V}} h(t − V − W) | N_-, N_1^T, E_{t−V,s}(N_c^{T,V}) ] = L^{h,h}_s(t − V),
since, conditionally on N_- and N_1^T, N_c^{T,V} has the same distribution as N_c. Using the conditional independence of the cluster processes (N_c^{T,V})_{V∈N_1^T} with respect to N_-, (N_1^T)_{T∈N_-}, one can apply Lemma B.6 with Y = (N_-, (N_1^T)_{T∈N_-}) and X_{(T,V)} = N_c^{T,V} and deduce that
    Φ^h_{-,P_0}(t, s) = E[ Σ_{T∈N_-} ( h(t − T) + Σ_{V∈N_1^T} ( h(t − V) + L^{h,h}_s(t − V) ) ) | E_{t,s}(N_{≤0}) ].
Let us fix T ∈ N_- and compute the conditional expectation of the inner sum with respect to N_-, which is
    Γ := E[ Σ_{V∈N_1^T} ( h(t − V) + L^{h,h}_s(t − V) ) | N_-, A^T_{t,s} ],               (B.23)
where A^T_{t,s} = E_{t,s}(N_1^T) ∩ ( ∩_{V'∈N_1^T} E_{t−V',s}(N_c^{T,V'}) ). For every V ∈ N_1^T, we say that V has mark 0 if V has no descendant, nor itself, in [t − s, t), and mark 1 otherwise. Let us denote N_1^{T,0} the set of points with mark 0 and N_1^{T,1} = N_1^T \ N_1^{T,0}.
For any V ∈ N_1^T, P(V ∈ N_1^{T,0} | N_1^T) = G^{h,h}_s(t − V) 1_{[t−s,t)^c}(V), and all the marks are chosen independently given N_1^T. Hence, N_1^{T,0} and N_1^{T,1} are independent Poisson processes and the intensity of N_1^{T,0} is given by λ(v) = h(v − T) 1_{[0,∞)}(v) G^{h,h}_s(t − v) 1_{[t−s,t)^c}(v). Moreover, A^T_{t,s} is the event {N_1^{T,1} = ∅}, so
    Γ = E[ Σ_{V∈N_1^{T,0}} ( h(t − V) + L^{h,h}_s(t − V) ) | N_-, N_1^{T,1} = ∅ ]
      = ∫_{-∞}^{t−} ( h(t − v) + L^{h,h}_s(t − v) ) h(v − T) 1_{[0,∞)}(v) G^{h,h}_s(t − v) 1_{[t−s,t)^c}(v) dv
      = K_s(t, T).
Using the independence of the cluster processes, one can apply Lemma B.6 with Y = N_- and X_T = (N_1^T, (N_c^{T,V})_{V∈N_1^T}), and (B.22) clearly follows.
Lemma B.10. Under the assumptions and notations of Proposition B.5 and Lemma 4.2, the function Φ^h_{-,P_0} of Proposition B.5 can be identified with (4.18) under (A^1_{N_-}) and with (4.19) under (A^2_{N_-}), and (B.15) is satisfied in those two cases.
Proof. Using Lemma B.9, we have Φ^h_{-,P_0}(t, s) = E[ Σ_{T∈N_-} ( h(t − T) + K_s(t, T) ) | E_{t,s}(N_{≤0}) ].
Under (A^1_{N_-}). On the one hand, for every t ≥ 0,
    E[ ∫_{-∞}^0 h(t − x) N_-(dx) ] = E[ h(t − T_0) ] = ∫_{-∞}^0 h(t − t_0) f_0(t_0) dt_0 ≤ ||f_0||_{L∞} ∫_0^∞ h(y) dy,
hence P_0 satisfies (B.15). On the other hand, since N_- is reduced to one point T_0,
    Φ^h_{-,P_0}(t, s) = ( 1 / P(E_{t,s}(N_{≤0})) ) E[ ( h(t − T_0) + K_s(t, T_0) ) 1_{E_{t,s}(N_{≤0})} ],
using the definition of the conditional expectation. First, we compute P(E_{t,s}(N_{≤0}) | T_0). To do so, we use the decomposition E_{t,s}(N_{≤0}) = {T_0 < t − s} ∩ E_{t,s}(N_1^{T_0}) ∩ ( ∩_{V∈N_1^{T_0}} E_{t−V,s}(N_c^{T_0,V}) ) and the fact that, conditionally on N_1^{T_0}, for all V ∈ N_1^{T_0}, N_c^{T_0,V} has the same distribution as N_c, to deduce that
    E[ 1_{E_{t,s}(N_{≤0})} | T_0 ] = 1_{T_0<t−s} E[ 1_{E_{t,s}(N_1^{T_0})} | T_0 ] E[ Π_{V∈N_1^{T_0}∩[t−s,t)^c} G_s(t − V) | T_0 ],
because the event E_{t,s}(N_1^{T_0}) involves N_1^{T_0} ∩ [t − s, t) whereas the product involves N_1^{T_0} ∩ [t − s, t)^c, both of these processes being two independent Poisson processes. Their respective intensities are λ(x) = h(x − T_0) 1_{[(t−s)∨0,t)}(x) and λ(x) = h(x − T_0) 1_{[0,(t−s)∨0)}(x), so we end up with
    E[ 1_{E_{t,s}(N_1^{T_0})} | T_0 ] = exp( − ∫_{t−s}^t h(x − T_0) 1_{[0,∞)}(x) dx ),
    E[ Π_{V∈N_1^{T_0}∩[t−s,t)^c} G_s(t − V) | T_0 ] = exp( − ∫_0^{(t−s)∨0} [ 1 − G_s(t − x) ] h(x − T_0) dx ).
The product of these two last quantities is exactly q(t, s, T_0) given by (4.13). Note that q(t, s, T_0) is exactly the probability that T_0 has no descendant in [t − s, t) given T_0. Hence, P(E_{t,s}(N_{≤0})) = ∫_{-∞}^{0∧(t−s)} q(t, s, t_0) f_0(t_0) dt_0 and (4.18) clearly follows.
Under (A^2_{N_-}). On the one hand, for any t ≥ 0,
    E[ ∫_{-∞}^0 h(t − x) N_-(dx) ] = E[ ∫_{-∞}^0 h(t − x) α dx ] ≤ α ∫_0^∞ h(y) dy,
hence P_0 satisfies (B.15). On the other hand, since we are dealing with a Poisson process, we can use the same argumentation with marked Poisson processes as in the proof of Lemma B.7. For every T ∈ N_-, we say that T has mark 0 if T has no descendant, nor itself, in [t − s, t), and mark 1 otherwise. Let us denote N_-^0 the set of points with mark 0 and N_-^1 = N_- \ N_-^0. For any T ∈ N_-, we have
    P( T ∈ N_-^0 | N_- ) = q(t, s, T) 1_{[t−s,t)^c}(T),
and all the marks are chosen independently given N_-. Hence, N_-^0 and N_-^1 are independent Poisson processes and the intensity of N_-^0 is given by
    λ(z) = α 1_{z≤0} q(t, s, z) 1_{[t−s,t)^c}(z).
Moreover, E_{t,s}(N_{≤0}) = {N_-^1 = ∅}. Hence,
    Φ^h_{-,P_0}(t, s) = E[ Σ_{T∈N_-^0} ( h(t − T) + K_s(t, T) ) | N_-^1 = ∅ ],
which gives (4.19) thanks to the independence of N_-^0 and N_-^1.
B.3.5. Proof of Propositions 4.3 and 4.4
Since we already proved Proposition B.5 and Lemma B.10, to obtain Proposition 4.3 it only remains to prove that Φ^{μ,h}_{P_0} ∈ L^∞(R_+^2), to ensure uniqueness of the solution by Remark 4.1. To do so, it is easy to see that the assumption h ∈ L^∞(R_+), combined with Lemma 4.2 giving that G_s ∈ [0, 1] and L_s ∈ L^1(R_+), ensures that Φ^{μ,h}_+, q and K_s are in L^∞(R_+). In turn, this implies that Φ^h_{-,P_0} in both (4.18) and (4.19) is in L^∞(R_+), which concludes the proof of Proposition 4.3.
Proof of Proposition 4.4. The method of characteristics leads us to rewrite the solution v of (4.14)–(4.15) by defining f^{in} ≡ v^{in} on R_+ and f^{in} ≡ 1 on R_-, such that
    v(t, s) = f^{in}(s − t) exp( − ∫_{(t−s)∨0}^{t} Φ(y, s − t + y) dy ),   when s ≥ t,
    v(t, s) = f^{in}(s − t) exp( − ∫_{(s−t)∨0}^{s} Φ(y + t − s, y) dy ),   when t ≥ s.    (B.24)
Let P_0^M be the distribution of the past given by (A^1_{N_-}) with T_0 ∼ U([−M − 1, −M]). By Proposition 4.3, let v_M be the solution of System (4.14)–(4.15) with Φ = Φ^{μ,h}_{P_0^M} and v^{in} = v_M^{in} (i.e. the survival function of a uniform variable on [−M − 1, −M]). Let also v_M^∞ be the solution of System (4.14)–(4.15) with Φ = Φ^{μ,h}_{P_0^M} and v^{in} ≡ 1, and v^∞ the solution of (4.21)-(4.22). Then
    ||v_M − v^∞||_{L∞((0,T)×(0,S))} ≤ ||v_M − v_M^∞||_{L∞((0,T)×(0,S))} + ||v_M^∞ − v^∞||_{L∞((0,T)×(0,S))}.
By definition of v_M^{in}, it is clear that v_M^{in}(s) = 1 for s ≤ M, so that Formula (B.24) implies that v_M(t, s) = v_M^∞(t, s) as soon as s − t ≤ M, and so ||v_M − v_M^∞||_{L∞((0,T)×(0,S))} = 0 as soon as M ≥ S.
To evaluate the distance ||v_M^∞ − v^∞||_{L∞((0,T)×(0,S))}, it remains to prove that exp( − ∫_0^t Φ^h_{-,P_0^M}(y, s − t + y) dy ) → 1 uniformly on (0, T) × (0, S) for any T > 0, S > 0. For this, it suffices to prove that Φ^h_{-,P_0^M}(t, s) → 0 uniformly on (0, T) × (0, S). Since q given by (4.13) takes values in [exp(−2||h||_{L^1}), 1], (4.18) implies
    Φ^h_{-,P_0^M}(t, s) ≤ [ ∫_{-∞}^{0∧(t−s)} ( h(t − t_0) + K_s(t, t_0) ) 1_{[−M−1,−M]}(t_0) dt_0 ] / [ ∫_{-∞}^{0∧(t−s)} exp(−2||h||_{L^1}) 1_{[−M−1,−M]}(t_0) dt_0 ].
Since ||G_s||_{L∞} ≤ 1 and L_s and h are non-negative, it is clear that
    K_s(t, t_0) ≤ ∫_0^{+∞} [ h(t − x) + L_s(t − x) ] h(x − t_0) dx,
and so
    ∫_{−M−1}^{−M} K_s(t, t_0) dt_0 ≤ ∫_0^{+∞} [ h(t − x) + L_s(t − x) ] ( ∫_{−M−1}^{−M} h(x − t_0) dt_0 ) dx
        ≤ ( ∫_M^{∞} h(y) dy ) ∫_0^{+∞} [ h(t − x) + L_s(t − x) ] dx ≤ ( ∫_M^{∞} h(y) dy ) [ ||h||_{L^1} + ||L_s||_{L^1} ].
Hence, for M large enough, Φ^h_{-,P_0^M}(t, s) ≤ ( ∫_M^{∞} h(y) dy ) [ ||h||_{L^1} + ||L_s||_{L^1} ] / exp(−2||h||_{L^1}) → 0, uniformly in (t, s), since L_s is uniformly bounded in L^1, which concludes the proof.
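Formula (B.24) is straightforward to evaluate numerically once Φ is known. The following sketch (ours; the rate Φ and the initial datum are arbitrary smooth stand-ins, not the Hawkes quantities of the paper) integrates Φ along the characteristic through (t, s) with a standard quadrature routine.

```python
# A small sketch of the characteristics formula (B.24): along the lines s - t = const
# the transport equation (4.14) reduces to an ODE, so v is the initial/boundary datum
# damped by the exponential of the integrated rate.  Phi and v_in are illustrative
# assumptions, not the quantities computed in the paper.
import numpy as np
from scipy.integrate import quad

Phi = lambda t, s: 1.0 + 0.5 * np.exp(-s)      # an arbitrary bounded rate
v_in = lambda s: np.exp(-s)                    # an arbitrary initial survival function

def v(t, s):
    if s >= t:   # the characteristic comes from the initial condition at time 0
        integral = quad(lambda y: Phi(y, s - t + y), 0.0, t)[0]
        return v_in(s - t) * np.exp(-integral)
    else:        # the characteristic comes from the boundary condition v(., 0) = 1
        integral = quad(lambda y: Phi(y + t - s, y), 0.0, s)[0]
        return np.exp(-integral)

print(v(0.5, 2.0), v(2.0, 0.5))
```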
B.4. Thinning
The proof of Ogata's thinning algorithm uses a generalization of point processes, namely marked point processes. However, only the basic properties of simple and marked point processes are needed (see 3 for a good overview of point process theory). Here (F_t)_{t>0} denotes a general filtration such that F^N_t ⊂ F_t for all t > 0, not necessarily the natural one, i.e. (F^N_t)_{t>0}.
Theorem B.11. Let Π be a (F_t)-Poisson process with intensity 1 on R_+^2. Let λ(t, F_{t−}) be a non-negative (F_t)-predictable process which is L^1_{loc} a.s. and define the point process N by N(C) = ∫_{C×R_+} 1_{[0,λ(t,F_{t−})]}(z) Π(dt × dz), for all C ∈ B(R_+). Then N admits λ(t, F_{t−}) as a (F_t)-predictable intensity. Moreover, if λ is in fact F^N_t-predictable, i.e. λ(t, F_{t−}) = λ(t, F^N_{t−}), then N admits λ(t, F^N_{t−}) as a F^N_t-predictable intensity.
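Theorem B.11 is also the basis of Ogata's simulation algorithm: points of a planar unit-rate Poisson process are kept exactly when they fall below the (predictable) intensity curve. The sketch below (ours, assuming the intensity stays below a bound lam_max; the linear Hawkes intensity used is an illustrative choice) simulates a point process on [0, T] in this way.

```python
# A simulation sketch of the thinning construction of Theorem B.11: points of a
# unit-rate Poisson measure on R_+^2 are kept when their ordinate falls below
# lambda(t, past), and the projected abscissae form the process N.  The intensity,
# its bound lam_max and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
T, lam_max = 50.0, 20.0
mu, h_mass, h_rate = 1.0, 0.6, 2.0             # linear Hawkes intensity, ||h||_1 = 0.6 < 1

def intensity(t, past):
    past = np.asarray(past)
    return mu + np.sum(h_mass * h_rate * np.exp(-h_rate * (t - past[past < t])))

# planar Poisson process of intensity 1 on [0, T] x [0, lam_max]
n_pts = rng.poisson(T * lam_max)
abscissae = np.sort(rng.uniform(0.0, T, n_pts))
ordinates = rng.uniform(0.0, lam_max, n_pts)

N = []
for t, z in zip(abscissae, ordinates):
    lam = intensity(t, N)
    if lam > lam_max:
        raise RuntimeError("intensity exceeded the assumed bound lam_max")
    if z <= lam:                               # keep the point: it lies below the curve
        N.append(t)

print("simulated", len(N), "points; expected mean about", mu * T / (1 - h_mass))
```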
Proof. The goal is to apply the martingale characterization of the intensity (Chapter II, Theorem 9 in 3). We cannot consider Π as a point process on R_+ marked in R_+ (in particular, the point with the smallest abscissa cannot be defined). However, for every k ∈ N, we can define Π^{(k)}, the restriction of Π to the points with ordinate smaller than k, by Π^{(k)}(C) = ∫_C Π(dt × dz) for all C ∈ B(R_+ × [0, k]). Then Π^{(k)} can be seen as a point process on R_+ marked in E_k := [0, k] with intensity kernel 1·dz with respect to (F_t). In the same way, we define N^{(k)} by
    N^{(k)}(C) = ∫_{C×R_+} 1_{z∈[0,λ(t,F_{t−})]} Π^{(k)}(dt × dz)   for all C ∈ B(R_+).
Let P(F_t) be the predictable σ-algebra (see page 8 of 3). Let us denote E_k = B([0, k]) and P̃_k(F_t) = P(F_t) ⊗ E_k the associated marked predictable σ-algebra.
For any fixed z in E, {(u, ω) ∈ R_+ × Ω such that λ(u, F_{u−})(ω) ≥ z} ∈ P(F_t) since λ is predictable. If Γ_k = {(u, ω, z) ∈ R_+ × Ω × E_k, λ(u, F_{u−})(ω) ≥ z}, then
    Γ_k = ∩_{n∈N^*} ∪_{q∈Q_+} {(u, ω) ∈ R_+ × Ω, λ(u, F_{u−})(ω) ≥ q} × ( [0, q + 1/n] ∩ E_k ).
So, Γ_k ∈ P̃_k(F_t) and 1_{z∈[0,λ(u,F_{u−})]∩E_k} is P̃_k(F_t)-measurable. Hence, one can apply the Integration Theorem (Chapter VIII, Corollary 4 in 3). So,
    (X_t)_{t≥0} := ( ∫_0^t ∫_{E_k} 1_{z∈[0,λ(u,F_{u−})]} M̄^{(k)}(du × dz) )_{t≥0}   is a (F_t)-local martingale,
where M̄^{(k)}(du × dz) = Π^{(k)}(du × dz) − dz du. In fact,
    X_t = N_t^{(k)} − ∫_0^t min(λ(u, F_{u−}), k) du.
Yet, N_t^{(k)} (respectively ∫_0^t min(λ(u, F_{u−}), k) du) is non-decreasingly converging towards N_t (resp. ∫_0^t λ(u, F_{u−}) du). Both of the limits are finite a.s. thanks to the local integrability of the intensity (see page 27 of 3). Thanks to monotone convergence we deduce that ( N_t − ∫_0^t λ(u, F_{u−}) du )_{t≥0} is a (F_t)-local martingale. Then, thanks to the martingale characterization of the intensity, N admits λ(t, F_{t−}) as an (F_t)-intensity. The last point of the Theorem is a reduction of the filtration. Since λ(t, F_{t−}) = λ(t, F^N_{t−}), it is a fortiori F^N_t-progressive and the desired result follows (see page 27 in 3).
This final result can be found in 4.
Proposition B.12 (Inversion Theorem). Let N = {T_n}_{n>0} be a non-explosive point process on R_+ with F^N_t-predictable intensity λ_t = λ(t, F^N_{t−}). Let {U_n}_{n>0} be a sequence of i.i.d. random variables with uniform distribution on [0, 1]. Moreover, suppose that they are independent of F^N_∞. Denote G_t = σ(U_n, T_n ≤ t). Let N̂ be a homogeneous Poisson process with intensity 1 on R_+^2 independent of F_∞ ∨ G_∞. Define a point process N̄ on R_+^2 by
    N̄((a, b] × A) = Σ_{n>0} 1_{(a,b]}(T_n) 1_A( U_n λ(T_n, F^N_{T_n−}) ) + ∫_{(a,b]} ∫_{A − [0, λ(t, F^N_{t−})]} N̂(dt × dz)
for every 0 ≤ a < b and A ⊂ R_+. Then N̄ is a homogeneous Poisson process on R_+^2 with intensity 1 with respect to the filtration (H_t)_{t≥0} = ( F_t ∨ G_t ∨ F^{N̂}_t )_{t≥0}.
References
1. M. Bossy, N. Champagnat, et al. Markov processes and parabolic partial differential
equations. Encyclopedia of Quantitative Finance, pages 1142–1159, 2010.
2. O. Boxma, D. Perry, W. Stadje, and S. Zacks. A markovian growth-collapse model.
Advances in applied probability, pages 221–243, 2006.
3. P. Brémaud. Point processes and queues. Springer-Verlag, New York, 1981. Martingale
dynamics, Springer Series in Statistics.
4. P. Brémaud and L. Massoulié. Stability of nonlinear Hawkes processes. The Annals of
Probability, 24(3):1563–1588, 1996.
5. D.R. Brillinger, H.L. Bryant, and J.P. Segundo. Identification of synaptic interactions.
Biol. Cybernetics, 22:213–228, 1976.
6. E.N. Brown, R. Barbieri, V. Ventura, R.E. Kass, and L.M. Frank. The time rescaling theorem and its application to neural spike train analysis. Neural Computation,
14(2):325–346, 2002.
7. N. Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory
spiking neurons. Journal of Computational Neuroscience, 8:183–208, 2000.
8. N. Brunel and V. Hakim. Fast global oscillations in networks of integrate-and-fire
neurons with low firing rates. Neural Computation, 11:1621–1671, 1999.
9. M. J. Cáceres, J. A. Carrillo, and B. Perthame. Analysis of nonlinear noisy integrate-and-fire neuron models: blow-up and steady states. The Journal of Mathematical Neuroscience, 1(1):1–33, 2011.
10. M. J. Cáceres, J. A. Carrillo, and L. Tao. A numerical solver for a nonlinear
fokker-planck equation representation of neuronal network dynamics. J. Comp. Phys.,
230:1084–1099, 2011.
11. M. J. Cáceres and B. Perthame. Beyond blow-up in excitatory integrate-and-fire neuronal networks: refractory period and spontaneous activity. Journal of Theoretical Biology, 350:81–89, 2014.
12. José A. Cañizo, José A. Carrillo, and Sı́lvia Cuadrado. Measure solutions for some
models in population dynamics. Acta Appl. Math., 123:141–156, 2013.
13. E.S. Chornoboy, L.P. Schramm, and A.F. Karr. Maximum likelihood identification of
neural point process systems. Biol. Cybernetics, 59:265–275, 1988.
14. B. Cloez. Limit theorems for some branching measure-valued processes.
arXiv:1106.0660v2, 2012.
15. A. Compte, N. Brunel, P. S. Goldman-Rakic, and X.-J. Wang. Synaptic mechanisms
and network dynamics underlying spatial working memory in a cortical network model.
Cerebral Cortex 10, 10:910–923, 2000.
16. D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes, volume 2. Springer, 1988.
17. F. Delarue, J. Inglis, S. Rubenthaler, and E. Tanré. Global solvability of a networked integrate-and-fire model of McKean–Vlasov type. Annals of Applied Probability, to appear, 2012.
18. S. Ditlevsen and A. Samson. Estimation in the partially observed stochastic Morris–Lecar neuronal model with particle filter and stochastic approximation methods. Annals of Applied Statistics, to appear.
19. M. Doumic, M. Hoffmann, N. Krell, and L. Robert. Statistical estimation of a growthfragmentation model observed on a genealogical tree. Bernoulli, in press, 2014.
20. A. G. Hawkes and D. Oakes. A cluster process representation of a self-exciting process.
Journal of Applied Probability, 11(3):493–503, 1974.
21. P. Jahn, R. W. Berg, J. Hounsgaard, and S. Ditlevsen. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. J. Comput. Neurosci.,
31:563–579, 2011.
22. P. Jahn, R.W. Berg, J. Hounsgaard, and S. Ditlevsen. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. Journal of Computational
Neuroscience, 31:563–579, 2011.
23. J.F.C. Kingman. Poisson processes. Oxford Science Publications, 1993.
24. C.D. Lai. An example of Wold’s Point Processes with Markov-Dependent Intervals.
Journal of Applied Probability, 15(4):748–758, 1978.
25. P. A. W. Lewis and G. S. Shedler. Simulation of nonhomogeneous Poisson processes
by thinning. Naval Research Logistics Quarterly, 26(3):403–413, 1979.
26. M. Mattia and P. Del Giudice. Population dynamics of interacting spiking neurons.
Phys. Rev. E, 66:051917, 2002.
27. P. Mattila. Geometry of sets and measures in Euclidean spaces: fractals and rectifiability. Number 44 in Cambridge studies in advanced mathematics. Cambridge University
Press, 1999.
28. J. Møller and J. G. Rasmussen. Perfect simulation of Hawkes processes. Advances in
Applied Probability, pages 629–646, 2005.
29. Y. Ogata. On Lewis’ simulation method for point processes. IEEE Transactions on
Information Theory, 27(1):23–31, 1981.
30. A Omurtag, Knight B. W., and L. Sirovich. On the simulation of large populations of
neurons. J. Comp. Neurosci., 8:51–63, 2000.
31. K. Pakdaman, B. Perthame, and D. Salort. Dynamics of a structured neuron population. Nonlinearity, 23(1):55–75, 2010.
32. K. Pakdaman, B. Perthame, and D. Salort. Relaxation and self-sustained oscillations
in the time elapsed neuron network model. SIAM Journal on Applied Mathematics,
73(3):1260–1279, 2013.
33. K. Pakdaman, B. Perthame, and D. Salort. Adaptation and fatigue model for neuron networks and large time asymptotics in a nonlinear fragmentation equation. The
Journal of Mathematical Neuroscience, 4(14):1–26, 2014.
34. B. Perthame. Transport equations in biology. Frontiers in Mathematics. Birkhäuser
Verlag, Basel, 2007.
35. J. Pham, K. Pakdaman, J. Champagnat, and J.-F. Vibert. Activity in sparsely connected excitatory neural networks: effect of connectivity. Neural Networks, 11(3):415–
434, 1998.
36. J.W. Pillow, J. Shlens, L. Paninski, A. Sher, A.M. Litke, E.J. Chichilnisky, and E.P.
Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal
population. Nature, 454:995–999, 2008.
37. G. Pipa, S. Grün, and C. van Vreeswijk. Impact of spike train autostructure on probability distribution of joint spike events. Neural Computation, 25:1123–1163, 2013.
38. C. Pouzat and A. Chaffiol. Automatic spike train analysis and report generation. an
implementation with R, R2HTML and STAR. Journal of Neuroscience Methods, pages
119–144, 2009.
39. A. Renart, N. Brunel, and X.-J. Wang. Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In Jianfeng
Feng, editor, Computational Neuroscience: A comprehensive approach. Chapman &
Hall/CRC Mathematical Biology and Medicine Series, 2004.
40. P. Reynaud-Bouret, V. Rivoirard, F. Grammont, and C. Tuleau-Malot. Goodness-offit tests and nonparametric adaptive estimation for spike train analysis. The Journal
of Mathematical Neuroscience, 4(3), 2014.
41. P. Reynaud-Bouret, V. Rivoirard, and C. Tuleau-Malot. Inference of functional connectivity in neurosciences via Hawkes processes. In 1st IEEE Global Conference on
Signal and Information Processing, 2013, Austin Texas.
42. L. Sirovich, A Omurtag, and K. Lubliner. Dynamics of neural populations: Stability
and synchrony. Network: Computation in Neural Systems, 17:3–29, 2006.
43. V. Ventura, R. Carta, R.E. Kass, S.N. Gettner, and C.R. Olson. Statistical analysis
of temporal evolution in single-neuron firing rates. Biostatistics, 3(1):1–20, 2002.
| 9 |
F -PURE THRESHOLDS OF HOMOGENEOUS POLYNOMIALS
arXiv:1404.3772v1 [math.AC] 14 Apr 2014
DANIEL J. HERNÁNDEZ, LUIS NÚÑEZ-BETANCOURT, EMILY E. WITT, AND WENLIANG ZHANG
A BSTRACT. In this article, we investigate F -pure thresholds of polynomials that are homogeneous
under some N-grading, and have an isolated singularity at the origin. We characterize these invariants in terms of the base p expansion of the corresponding log canonical threshold. As an application,
we are able to make precise some bounds on the difference between F -pure and log canonical thresholds established by Mustaţă and the fourth author. We also examine the set of primes for which the
F -pure and log canonical threshold of a polynomial must differ. Moreover, we obtain results in special cases on the ACC conjecture for F -pure thresholds, and on the upper semi-continuity property
for the F -pure threshold function.
1. INTRODUCTION
The goal of this article is to investigate F -pure thresholds, and further study their relation with
log canonical thresholds. The F -pure threshold, first defined in [TW04], is an invariant of singularities in positive characteristic defined via splitting conditions and the Frobenius (or p-th power)
endomorphism. Though F -pure thresholds may be defined more generally, we will only consider
F -pure thresholds of polynomials over fields of prime characteristic, and thus follow the treatment
given in [MTW05]. Given such a polynomial f , the F -pure threshold of f , denoted by fpt (f ), is
always a rational number in (0, 1], with smaller values corresponding to “worse” singularities
[BMS08, BMS09, BSTZ09].
The log canonical threshold of a polynomial fQ over Q, denoted lct (fQ ), is an important invariant of singularities of fQ , and can be defined via integrability conditions, or more generally, via
resolution of singularities. Like the F -pure threshold, lct (fQ ) is also a rational number contained
in (0, 1]; see [BL04] for more on this (and related) invariants. In fact, the connections between
F -pure and log canonical thresholds run far deeper: As any a/b ∈ Q determines a well-defined element of Fp whenever p ∤ b, we may reduce the coefficients of fQ modulo p ≫ 0 to obtain polynomials fp over Fp . Amazingly, the F -pure thresholds of these so-called characteristic p models of fQ are related to the log canonical threshold of fQ as follows [MTW05, Theorems 3.3 and 3.4]:
(1.0.1)   fpt (fp ) ≤ lct (fQ ) for all p ≫ 0, and lim_{p→∞} fpt (fp ) = lct (fQ ).
In this article, we will not need to refer to the definition of lct (fQ ) via resolutions of singularities, and instead take the limit appearing in (1.0.1) as our definition of lct (fQ ). Via reduction to
characteristic p > 0, one may reduce polynomials (and more generally, ideals of finite type algebras) over any field of characteristic zero to characteristic p ≫ 0 (e.g., see [Smi97b]). Moreover,
the relations in (1.0.1) are just two of many deep connections between invariants of characteristic
p models defined via the Frobenius endomorphism, and invariants of the original characteristic
zero object that are often defined via resolution of singularities. For more in this direction, see, for
example, [MTW05, BMS06, Smi00, Smi97a, Har98, HW02, HY03, Tak04, Sch07, BST, STZ12].
Motivated by the behavior exhibited when fQ defines an elliptic curve over Q, it is conjectured
that for any polynomial fQ over Q, there exist infinitely many primes for which fpt (fp ) equals
lct (fQ ) [MTW05, Conjecture 3.6]. This conjecture, along with other characteristic zero considerations, has fueled interest in understanding various properties of fpt (fp ). In particular, arithmetic properties of the denominator of fpt (fp ) have recently been investigated, most notably by Schwede (e.g., see [Sch08]). Assuming fpt (fp ) ≠ lct (fQ ), Schwede has asked when p must divide the denominator of fpt (fp ), and the first author has asked when the denominator of fpt (fp ) must be a power of p, and more specifically, when fpt (fp ) must be a truncation of lct (fQ ).1 Recall that, given the unique non-terminating (base p) expansion lct (fQ ) = Σ_{e≥1} λ(e) · p^{−e} ∈ (0, 1], we call fpt (fp ) a truncation of lct (fQ ) (base p) if fpt (fp ) = Σ_{e=1}^{L} λ(e) · p^{−e} for some L ≥ 1.
In this paper, we study F -pure thresholds associated to polynomials that are homogeneous under some (possibly, non-standard) N-grading, and that have an isolated singularity. The F -purity of such polynomials was originally investigated by Fedder (e.g., see [Fed83, Lemma 2.3 and Theorem 2.5]), and more recently, by Bhatt and Singh, who showed the following: Given a (standard-graded) homogeneous polynomial f over Fp of degree n in n variables with an isolated singularity at the origin, if p ≫ 0, then fpt (fp ) = 1 − A/p for some integer 0 ≤ A ≤ n − 2. Bhatt and
Singh also show that, if f is (standard-graded) homogeneous of arbitrary degree with an isolated
singularity at the origin, and if fpt (fp ) 6= lct (fQ ), then the denominator of fpt (fp ) is a power of
p whenever p ≫ 0 [BS, Theorem 1.1 and Proposition 5.4].
We combine a generalization of the methods in [BS] with a careful study of base p expansions
to obtain our main result, Theorem 3.5, which characterizes F -pure thresholds of polynomials
with an isolated singularity at the origin that are homogeneous under some N-grading. Our result
states that such F -pure thresholds must have a certain (restrictive) form; in particular, it confirms
that the denominator of fpt (fp ) is a power of p whenever fpt (fp ) 6= lct (fQ ) for this larger class
of polynomials. Notably, the result also gives a bound for the power of p appearing in the denominator of fpt (fp ) for p ≫ 0. To minimize technicalities, we omit the statement of Theorem 3.5, and
instead discuss the two variable case, where our main result takes the following concrete form;
note that in what follows, we use Jac (f ) to denote the ideal generated by the partial derivatives
of a polynomial f , and ord(p, b) to denote the least positive integer k such that p^k ≡ 1 mod b.
Theorem A (cf. Theorem 4.4). Fix an N-grading on Fp [x, y], and consider a homogeneous polynomial f with √(Jac (f )) = (x, y) such that deg f ≥ deg xy. If p ∤ deg f and fpt (f ) ≠ deg xy / deg f , then
fpt (f ) = deg xy / deg f − J p^L deg xy % deg f K / ( p^L · deg f )   for some integer 1 ≤ L ≤ ord(p, deg f ),
where J a p^L % b K denotes the least positive residue of a p^L modulo b.
In fact, we are able to give a slightly more refined description of the F -pure threshold, even in
the two variable case; we refer the reader to Theorem 4.4 for the detailed statement. Moreover, we
may recast Theorem A as a theorem relating F -pure and log canonical thresholds: If fQ ∈ Q[x, y] is homogeneous and satisfies the conditions appearing in Theorem A (i.e., deg fQ ≥ deg xy and (x, y) = √(Jac (fQ ))), then it is well-known (e.g., see Theorem 6.2) that lct (fQ ) = deg xy / deg fQ . Substituting this identity into Theorem A leads to a description of fpt (fp ) in terms of lct (fQ ), and in fact is enough to show that fpt (fp ) is a truncation of lct (fQ ) (e.g., see Lemma 2.5).
Though the situation is more subtle, many of the properties highlighted by Theorem A and the
subsequent discussion hold in general (after some slight modifications); we refer the reader to
Theorem 3.5 for a detailed description of F -pure thresholds in higher dimensions. Moreover, motivated by (the bounds for L appearing in) Theorem A, one may ask whether there always exists a
(small) finite list of possible values for F -pure thresholds, say, as a function of the class of p modulo deg f . This question turns out to have a positive answer for homogeneous polynomials with
1https://sites.google.com/site/computingfinvariantsworkshop/open-questions
isolated singularities. Furthermore, these lists can be minimal, and strikingly, can even precisely
determine fpt (fp ). For examples of such lists, see Examples 4.6, 4.7, and 4.9.
The remaining results in this article are all applications of our description of F -pure thresholds.
The first such application concerns uniform bounds for the difference between log canonical and
F -pure thresholds. We recall the following result, due to Mustaţă and the fourth author: Given a polynomial fQ over Q, there exist constants C ∈ R>0 and N ∈ N (depending only on fQ ) such that
1/p^N ≤ lct (fQ ) − fpt (fp ) ≤ C/p
whenever fpt (fp ) ≠ lct (fQ ) and p ≫ 0 [MZ, Corollaries 3.5 and 4.5]. We stress that the preceding
result applies to an arbitrary polynomial, and that the constants C and N are not explicitly stated
as functions of fQ . In the special case of a homogeneous polynomial with an isolated singularity at
the origin, we give a new proof of this result that makes explicit one optimal choice of constants.
Theorem B (cf. Theorem 6.2). Suppose fQ ∈ Q[x1 , · · · , xn ] is homogeneous under some N-grading with an isolated singularity at the origin, and write the rational number lct (fQ ) = a/b in lowest terms. If p ≫ 0, then either fpt (fp ) = lct (fQ ), or
1/( b · p^{ord(p,b)} ) ≤ lct (fQ ) − fpt (fp ) ≤ ( n − 1 − 1/b )/p .
Moreover, these bounds are sharp (see Remark 6.4).
Much of the focus of this article is on studying the form of the F -pure threshold when it differs
from the log canonical threshold. In Section 6.2, we give a simple criterion that, when satisfied,
guarantees that the F -pure and log canonical threshold must differ. The main result of this section,
Proposition 6.7, holds quite generally; that is, it can be applied to polynomials that are neither
homogeneous, nor have an isolated singularity. Moreover, the proof of this result is elementary,
and is based upon the fact that the base p expansion of an F -pure threshold must satisfy certain
rigid conditions, as was observed in [BMS09, Her12].
Theorem C (cf. Proposition 6.7). Let fQ denote any polynomial over Q, and write lct (fQ ) = a/b in lowest terms. If a ≠ 1, then the set of primes for which lct (fQ ) is not an F -pure threshold (of any polynomial) is infinite, and contains all primes p such that p^e · a ≡ 1 mod b for some e ≥ 1. In particular, the density of the set of primes {p : fpt (fp ) ≠ lct (fQ )} is greater than or equal to 1/φ(b), where φ denotes Euler’s phi function.
As a further application of our main theorem, we are also able to construct a large class of
polynomials fQ over Q for which the density of the set {p : fpt (fp ) 6= lct (fQ )} is larger than any
prescribed bound between zero and one.
Theorem D (cf. Example 6.8). For every ε > 0, there exists an integer n with the following property: If fQ ∈ Q[x1 , . . . , xn−1 ] is homogeneous (under the standard grading) of degree n with an
isolated singularity at the origin, then the density of the set of primes {p : fpt (fp ) 6= lct (fQ )} is
greater than 1 − ε.
The remaining applications deal with another connection between F -pure and log canonical
thresholds: Motivated by results in characteristic zero, it was conjectured in [BMS09, Conjecture
4.4] that the set of all F -pure thresholds of polynomials in a (fixed) polynomial ring over a field
of characteristic p > 0 satisfies the ascending chain condition (ACC), i.e., contains no strictly increasing sequences. In Proposition 7.3, we prove that a restricted set of F -pure thresholds satisfies
ACC. Though the characteristic zero analog of Proposition 7.3 (that is, the statement obtained by
replacing “Fp ” with “Q” and “F -pure threshold” with “log canonical threshold,” as appropriate)
is obvious, our result relies strongly on the description of F -pure thresholds given in Theorem 3.5.
Finally, as detailed in [BMS09, Remark 4.5], the ACC conjecture for F -pure thresholds predicts
that for any polynomial f ∈ Fp [x1 , · · · , xn ], there exists an integer N (which may depend on f )
such that fpt (f ) ≥ fpt (f + g) for all g ∈ (x1 , · · · , xn )N . In our final application, we are able to
confirm this prediction in the following special case.
Theorem E (cf. Proposition 7.10). Suppose that f ∈ Fp [x1 , · · · , xn ] is homogeneous under some N-grading such that √(Jac (f )) = (x1 , . . . , xn ) and deg f ≥ Σ deg xi . Then fpt (f ) = fpt (f + g) for each g ∈ (x1 , · · · , xn )^{n·deg f − Σ deg xi + 1} .
Notation. Throughout this article, p denotes a prime number and Fp denotes the field with p elements. For every ideal I of a ring of characteristic p > 0, and every e ≥ 1, I^[p^e] denotes the eth Frobenius power of I, the ideal generated by the set { g^{p^e} : g ∈ I }. For a real number a, ⌈a⌉ (respectively, ⌊a⌋) denotes the least integer that is greater than or equal to (respectively, the greatest integer less than or equal to) a.
Acknowledgements. The authors are indebted to Bhargav Bhatt and Anurag Singh; the latter
shared ideas on their joint work during the Midwest Commutative Algebra and Geometry Conference at Purdue University in 2011 that would eventually form the foundation of our approach.
We would also like to thank Benjamin Weiss and Karen Smith for their comments on an earlier
draft. The first author gratefully acknowledges support from the Ford Foundation (FF) through
a FF Postdoctoral Fellowship. The second author thanks the National Council of Science and
Technology (CONACyT) of Mexico for support through Grant 210916. The fourth author was
partially supported by NSF grants DMS #1247354 and DMS #1068946, and a Nebraska EPSCoR
First Award. This collaboration began during visits supported by a travel grant from the AMS
Mathematical Research Communities 2010 Commutative Algebra program. Finally, much of the
authors’ collaborations took place at the University of Michigan, the University of Minnesota, and
the Mathematical Sciences Research Institute; we thank these institutions for their hospitality.
2. BASICS OF BASE p EXPANSIONS
Definition 2.1. Given α ∈ (0, 1], there exist unique integers α(e) for every e ≥ 1 such that 0 ≤ α(e) ≤ p − 1, α = Σ_{e≥1} α(e) · p^{−e} , and such that the integers α(e) are not eventually all zero. We call α(e) the eth digit of α (base p), and we call the expression α = Σ_{e≥1} α(e) · p^{−e} the non-terminating expansion of α (base p).
Definition 2.2. Let α ∈ (0, 1], and fix e ≥ 1. We call ⟨α⟩_e := α(1) · p^{−1} + · · · + α(e) · p^{−e} the eth truncation of α (base p). We adopt the convention that ⟨α⟩_0 = 0 and ⟨α⟩_∞ = α.
Notation 2.3. We adopt notation analogous to the standard decimal notation, using “ : ” to distinguish between consecutive digits. For example, we often write ⟨α⟩_e = . α(1) : α(2) : · · · : α(e) (base p).
Convention 2.4. Given a natural number b > 0 and an integer m, Jm % bK denotes the least positive residue of m modulo b. In particular, we have that 1 ≤ Jm % bK ≤ b for all m ∈ Z. Moreover, if p and b are relatively prime, ord(p, b) = min { k ≥ 1 : Jp^k % bK = 1 }, which we call the order of p modulo b. In particular, note that ord(p, 1) = 1.
Lemma 2.5. Fix λ ∈ (0, 1] ∩ Q. If we write λ = a/b, not necessarily in lowest terms, then
λ(e) = ( Jap^{e−1} % bK · p − Jap^e % bK ) / b   and   ⟨λ⟩_e = λ − Jap^e % bK / ( b p^e ) .
Note that it is important to keep in mind Convention 2.4 when interpreting these identities.
Proof. Since λ(e) = p^e ( ⟨λ⟩_e − ⟨λ⟩_{e−1} ), the first identity follows from the second. Setting δ = λ − ⟨λ⟩_e and multiplying both sides of the equality a/b = λ = ⟨λ⟩_e + δ by b p^e shows that
a p^e = b p^e ⟨λ⟩_e + b p^e δ.
As 0 < δ ≤ p^{−e} and p^e ⟨λ⟩_e ∈ N, it follows that b p^e δ is the least positive residue of a p^e modulo b. Finally, substituting δ = λ − ⟨λ⟩_e into b p^e δ = Jap^e % bK establishes the second identity.
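For readers who wish to experiment with these identities, the following is a minimal Python sketch (the function names are ours, not part of any library) that computes Jm % bK, ord(p, b), the digits λ(e), and the truncations ⟨λ⟩_e of λ = a/b directly from Lemma 2.5.

```python
from fractions import Fraction

def lpr(m, b):
    """Least positive residue of m modulo b, i.e. Jm % bK (Convention 2.4)."""
    r = m % b
    return r if r != 0 else b

def ord_mod(p, b):
    """Order of p modulo b, assuming gcd(p, b) = 1; ord(p, 1) = 1 by convention."""
    if b == 1:
        return 1
    k, x = 1, p % b
    while x != 1:
        k, x = k + 1, (x * p) % b
    return k

def digit(a, b, p, e):
    """The e-th digit of a/b base p, via the first identity in Lemma 2.5."""
    return (lpr(a * p**(e - 1), b) * p - lpr(a * p**e, b)) // b

def truncation(a, b, p, e):
    """The e-th truncation <a/b>_e base p, via the second identity in Lemma 2.5."""
    return Fraction(a, b) - Fraction(lpr(a * p**e, b), b * p**e)

# Sanity check: the sum of the first four digits times p^{-i} equals the fourth truncation.
a, b, p = 2, 7, 11
assert truncation(a, b, p, 4) == sum(Fraction(digit(a, b, p, i), p**i) for i in range(1, 5))
```

The check at the end is exactly the statement that ⟨λ⟩_e is the sum of the first e terms of the non-terminating expansion.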
We gather some of the important basic properties of base p expansions below.
Lemma 2.6. Fix α and β in [0, 1].
(1) α ≤ β if and only if ⟨α⟩_e ≤ ⟨β⟩_e for all e ≥ 1; if α < β, then these inequalities are strict for e ≫ 0.
(2) If (p^s − 1) · α ∈ N, then the base p expansion of α is periodic, with period dividing s. In particular, if λ = a/b with p ∤ b, then the base p expansion of λ is periodic with period equal to ord(p, b).
(3) Suppose λ = a/b with p ∤ b. If s = ord(p, b), then for all k ≥ 1, p^{ks} · ⟨λ⟩_{ks} = (p^{ks} − 1) · λ.
Proof. (1) follows by definition; (2) follows immediately from Lemma 2.5; (3) follows from (2).
Lemma 2.7. Consider α < β in (0, 1], and set ∆e := p^e ⟨β⟩_e − p^e ⟨α⟩_e . Note that, by Lemma 2.6, the integer ℓ = min {e : ∆e ≥ 1} is well-defined. Moreover, the following hold:
(1) The sequence {∆e }_{e≥1} is non-negative, non-decreasing, and unbounded.
(2) Suppose β = a/b with p ∤ b. If s = ord(p, b), then ∆_{ℓ+s+k} ≥ p^k + 1 for every k ≥ 0.
Proof. We first observe that the following recursion holds:
(2.0.2)   ∆_{e+1} = p · ∆e + β(e+1) − α(e+1) for every e ≥ 0.
Setting e = ℓ in (2.0.2) and noting that ∆ℓ ≥ 1 shows that
∆_{ℓ+1} = p · ∆ℓ + β(ℓ+1) − α(ℓ+1) = (p − 1) · ∆ℓ + ∆ℓ + β(ℓ+1) − α(ℓ+1) ≥ (p − 1) · 1 + ∆ℓ + β(ℓ+1) − α(ℓ+1) ≥ ∆ℓ + β(ℓ+1) .
Furthermore, an induction on e ≥ ℓ shows that
(2.0.3)   ∆_{e+1} ≥ ∆e + β(e+1) for every e ≥ ℓ.
Thus, {∆e }_{e≥1} is non-decreasing, and as we consider non-terminating expansions, β(e) ≠ 0 for infinitely many e, so that (2.0.3) also shows that ∆_{e+1} > ∆e for infinitely many e. We conclude that {∆e }_{e≥1} is unbounded, and it remains to establish (2).
By definition, β(ℓ) − α(ℓ) = ∆ℓ ≥ 1, and hence β(ℓ) ≥ 1. In fact, setting s = ord(p, b), Lemma 2.6 states that β(ℓ+s) = β(ℓ) ≥ 1, and applying (2.0.3) with e = ℓ + s − 1 then shows that
∆_{ℓ+s} ≥ ∆_{ℓ+s−1} + β(ℓ+s) ≥ 2.
Hence, (2) holds for k = 0. Utilizing (2.0.2), an induction on k completes the proof.
3. F -PURE THRESHOLDS OF HOMOGENEOUS POLYNOMIALS: A DISCUSSION
We adopt the following convention from this point onward.
Convention 3.1. Throughout this article, L will denote a field of characteristic p > 0, and m will
denote the ideal generated by the variables in R = L[x1 , · · · , xn ].
Definition 3.2. Consider a polynomial f ∈ m, and for every e ≥ 1, set
νf (p^e ) = max { N : f^N ∉ m^[p^e] } .
An important property of these integers is that {p^{−e} · νf (p^e )}_{e≥1} is a non-decreasing sequence contained in the open unit interval [MTW05]. Consequently, the limit
fpt (f ) := lim_{e→∞} νf (p^e ) / p^e ∈ (0, 1]
exists, and is called the F -pure threshold of f .
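The integers νf (p^e ) can be computed directly from the definition in small cases. As an illustration, the sketch below (our own code, not taken from any existing package) computes νf (p^e ) for a diagonal polynomial f = x^d + y^d over Fp , using the fact that f^N = Σ_i C(N, i) x^{di} y^{d(N−i)}, so that f^N ∉ m^[p^e] exactly when some binomial coefficient C(N, i) is nonzero mod p with both d·i and d·(N − i) at most p^e − 1.

```python
from math import comb
from fractions import Fraction

def nu_diagonal(d, p, e):
    """nu_f(p^e) for f = x^d + y^d over F_p, computed from Definition 3.2."""
    q = p**e
    best = 0
    for N in range(1, 2 * q // d + 2):      # nu_f(p^e) <= 2*(p^e - 1)/d by Lemma 5.1 below
        survives = any(
            comb(N, i) % p != 0 and d * i <= q - 1 and d * (N - i) <= q - 1
            for i in range(N + 1)
        )
        if survives:
            best = N
    return best

# The quotients nu_f(p^e)/p^e increase with e and approach fpt(f) from below.
d, p = 5, 7
for e in (1, 2, 3):
    print(e, Fraction(nu_diagonal(d, p, e), p**e))
```

This brute-force check is only practical for small p^e , but it is a convenient way to test the closed-form descriptions obtained later.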
The following illustrates important properties of F -pure thresholds; we refer the reader to
[MTW05, Proposition 1.9] or [Her12, Key Lemma 3.1] for a proof of the first, and [Her12, Corollary
4.1] for a proof of the second.
Proposition 3.3. Consider a polynomial f contained in m.
(1) The base p expansion of the F -pure threshold determines {νf (p^e )}_{e≥1} ; more precisely, νf (p^e ) = p^e · ⟨fpt (f )⟩_e for every e ≥ 1.
(2) The F -pure threshold is bounded above by the rational numbers determined by its trailing digits (base p); more precisely, fpt (f ) is less than or equal to
. fpt (f )(s) : fpt (f )(s+1) : · · · : fpt (f )(s+k) : · · · (base p) for every s ≥ 1.
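Point (2) gives a practical way to disqualify a rational number from being an F -pure threshold in a given characteristic: if shifting the base p expansion to the left ever produces a strictly smaller number, the original number cannot be an F -pure threshold. The following sketch (again our own, with hypothetical function names) tests this for a periodic expansion λ = a/b with p ∤ b; the weaker “first digit is smallest” consequence of this test is what is used repeatedly in Section 4 and in the proof of Proposition 6.7.

```python
from fractions import Fraction

def lpr(m, b):
    r = m % b
    return r if r != 0 else b

def digits_one_period(a, b, p):
    """Digits lambda(1), ..., lambda(s) of a/b base p, where s = ord(p, b) (Lemmas 2.5 and 2.6)."""
    digs, e = [], 1
    while True:
        digs.append((lpr(a * p**(e - 1), b) * p - lpr(a * p**e, b)) // b)
        if lpr(p**e, b) == 1:        # one full period has been computed
            return digs
        e += 1

def passes_tail_test(a, b, p):
    """Necessary condition from Proposition 3.3(2): a/b is at most every digit shift of itself."""
    digs = digits_one_period(a, b, p)
    lam = Fraction(a, b)
    for s in range(1, len(digs) + 1):
        shifted = digs[s - 1:] + digs[:s - 1]                  # expansion starting at digit s
        head = sum(Fraction(d, p**i) for i, d in enumerate(shifted, 1))
        value = head / (1 - Fraction(1, p**len(digs)))          # value of the repeating tail
        if lam > value:
            return False
    return True

# 2/7 cannot be an F-pure threshold when p = 17 (p = 3 mod 7); cf. Example 4.7.
print(passes_tail_test(2, 7, 17), passes_tail_test(2, 7, 29))
```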
3.1. A discussion of the main results. In this subsection, we gather the main results of this article.
Note that the proofs of these results appear in Section 5.
Convention 3.4. Given a polynomial f , we use Jac (f ) to denote the ideal of R generated by the partial derivatives of f . If f is homogeneous under some N-grading on R, each partial derivative ∂i (f ) of f is also homogeneous, and if ∂i (f ) ≠ 0, then deg ∂i (f ) = deg f − deg xi . Furthermore, if p ∤ deg f , then Euler’s relation
deg f · f = Σ_i deg xi · xi · ∂i (f )
shows that f ∈ Jac (f ). Thus, if p ∤ deg(f ) and L is perfect, the Jacobian criterion states that √(Jac (f )) = m if and only if f has an isolated singularity at the origin.
Theorem 3.5. Fix an N-grading on L[x1 , · · · , xn ]. Consider a homogeneous polynomial f with √(Jac (f )) = m, and write λ := min { Σ deg xi / deg f , 1 } = a/b in lowest terms.
(1) If fpt (f ) ≠ λ, then
fpt (f ) = λ − ( Jap^L % bK + bE ) / ( b p^L ) = ⟨λ⟩_L − E/p^L
for some pair (L, E) ∈ N^2 with L ≥ 1 and 0 ≤ E ≤ n − 1 − ⌈( Jap^L % bK + a )/b⌉.
(2) If p > (n − 2) · b and p ∤ b, then 1 ≤ L ≤ ord(p, b); note that ord(p, 1) = 1.
(3) If p > (n − 2) · b and p > b, then a < Jap^e % bK for all 1 ≤ e ≤ L − 1.
(4) If p > (n − 1) · b, then there exists a unique pair (L, E) satisfying the conclusions of (1).
We postpone the proof of Theorem 3.5 to Subsection 5.2. The remainder of this subsection is
focused on parsing the statement of Theorem 3.5, and presenting some related results. The reader
interested in seeing examples should consult Section 4.
Remark 3.6 (Two points of view). Each of the two descriptions of fpt (f ) in Theorem 3.5, which
are equivalent by Lemma 2.5, are useful in their own right. For example, the first description
plays a key role in Section 4. On the other hand, the second description makes it clear that either
fpt (f ) = λ, or fpt (f ) is a rational number whose denominator is a power of p, and further,
describes how “far” fpt (f ) is from being a truncation of λ; these observations allow us to address
the questions of Schwede and of the first author noted in the introduction.
The second point of Theorem 3.5 also immediately gives a bound on the power of p appearing
in the denominator of fpt (f ) whenever fpt (f ) 6= λ and p ≫ 0. For emphasis, we record this
bound in the following corollary.
Corollary 3.7. In the context of Theorem 3.5, if fpt (f ) ≠ λ, and both p > (n − 2) · b and p ∤ b, then p^{ord(p,b)} · fpt (f ) ∈ N. In particular, for all such primes, p^{φ(b)} · fpt (f ) ∈ N, where φ denotes Euler’s phi function.
Using the techniques of the proof of Theorem 3.5, we can analogously find a bound for the
power of p appearing in the denominator of fpt (f ) whenever fpt (f ) 6= λ and p is not large,
which we record here.
Corollary 3.8. In the setting of Theorem 3.5, if fpt (f ) ≠ λ and p ∤ b, then p^M · fpt (f ) ∈ N, where M := 2 · φ(b) + ⌈log_2 (n − 1)⌉ , and φ denotes Euler’s phi function.
Remark 3.9. We emphasize that the constant M in Corollary 3.8 depends only on the number of variables n and the quotient Σ deg(xi ) / deg f = a/b, but not on the particular values of deg xi and deg f ; this subtle point will play a key role in Proposition 7.3.
Remark 3.10 (Towards minimal lists). For p ≫ 0, the bounds for L and E appearing in Theorem 3.5 allow one to produce a finite list of possible values of fpt (f ) for each class of p modulo deg f . We refer the reader to Section 4 for details and examples.
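To make Remark 3.10 concrete, the following sketch (ours; not part of any existing package) enumerates, for p ∤ b with p large in the sense of Theorem 3.5(2), the values permitted by points (1) and (2): λ itself together with λ − (Jap^L % bK + bE)/(b p^L ) for 1 ≤ L ≤ ord(p, b) and 0 ≤ E ≤ n − 1 − ⌈(Jap^L % bK + a)/b⌉. The finer conditions (3) and (4), and the test from Proposition 3.3, can be layered on top to shrink the list further, as is done in Section 4.

```python
from fractions import Fraction

def lpr(m, b):
    r = m % b
    return r if r != 0 else b

def ord_mod(p, b):
    if b == 1:
        return 1
    k, x = 1, p % b
    while x != 1:
        k, x = k + 1, (x * p) % b
    return k

def theorem_3_5_candidates(n, a, b, p):
    """Values of fpt(f) permitted by Theorem 3.5(1)-(2) for lambda = a/b in lowest terms,
    assuming p > (n - 2)*b and p does not divide b."""
    lam = Fraction(a, b)
    values = {lam}
    for L in range(1, ord_mod(p, b) + 1):
        r = lpr(a * p**L, b)
        e_bound = n - 1 - (r + a + b - 1) // b      # n - 1 - ceil((r + a)/b), in integer arithmetic
        for E in range(0, e_bound + 1):
            values.add(lam - Fraction(r + b * E, b * p**L))
    return sorted(values)

# The grading of Example 3.11 (n = 3, lambda = 3/7), revisited in a large characteristic.
print(theorem_3_5_candidates(3, 3, 7, 11))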
The uniqueness statement in point (4) of Theorem 3.5 need not hold in general.
Example 3.11 (Non-uniqueness in low characteristic). If p = 2 and f ∈ L[x1 , x2 , x3 ] is any L∗ -linear combination of x1^7 , x2^7 , x3^7 , then f satisfies the hypotheses of Theorem 3.5, under the standard grading. Using [Hera], one can directly compute that fpt (f ) = 1/4. On the other hand, the identities
1/4 = 3/7 − ( J3 · 2^2 % 7K + 7 · 0 ) / ( 7 · 2^2 ) = ⟨3/7⟩_2 − 0/2^2
and
1/4 = 3/7 − ( J3 · 2^3 % 7K + 7 · 1 ) / ( 7 · 2^3 ) = ⟨3/7⟩_3 − 1/2^3
show that the pairs (L, E) = (2, 0) and (L, E) = (3, 1) both satisfy the conclusions in Theorem 3.5. We point out that the proof of Theorem 3.5, being somewhat constructive, predicts the choice of (L, E) = (2, 0), but does not “detect” the choice of (L, E) = (3, 1).
Before concluding this section, we present the following related result; like Theorem 3.5 and Corollary 3.8, its proof relies heavily on Proposition 5.6. However, in contrast to these results, its focus is on showing that fpt (f ) = min {( Σ deg xi ) / deg f, 1} for p ≫ 0 in a very specific setting, as opposed to describing fpt (f ) when it differs from this value.
Theorem 3.12. In the context of Theorem 3.5, suppose that Σ deg xi > deg f , so that ρ := Σ deg xi / deg f is greater than 1. If p > (n − 3)/(ρ − 1), then fpt (f ) = 1.
As we see below, Theorem 3.12 need not hold in low characteristic.
Example 3.13 (Illustrating the necessity of p ≫ 0 in Theorem 3.12). Set f = x1^d + · · · + xn^d . If n > d > p, then f ∈ m^[p] , and hence f^{p^{e−1}} ∈ m^[p^e] for all e ≥ 1. Consequently, νf (p^e ) ≤ p^{e−1} − 1, and therefore fpt (f ) = lim_{e→∞} p^{−e} · νf (p^e ) ≤ p^{−1} .
4. F -PURE THRESHOLDS OF HOMOGENEOUS POLYNOMIALS: EXAMPLES
In this section, we illustrate, via examples, how Theorem 3.5 may be used to produce “short,” or even minimal, lists of possible values for F -pure thresholds. We begin with the most transparent case: If deg f = Σ deg xi , then the statements in Theorem 3.5 become less technical. Indeed, in this case, a = b = 1, and hence ord(p, b) = 1 = Jm % bK for every m ∈ N. In this context, substituting these values into Theorem 3.5 recovers the following identity, originally discovered by Bhatt and Singh under the standard grading.
Example 4.1. [BS, Theorem 1.1] Fix an N-grading on L[x1 , · · · , xn ]. Consider a homogeneous polynomial f with d := deg f = Σ deg xi and √(Jac (f )) = m. If p > n − 2 and fpt (f ) ≠ 1, then fpt (f ) = 1 − A · p^{−1} for some integer 1 ≤ A ≤ d − 2.
Next, we consider the situation when deg f = Σ deg xi + 1; already, we see that this minor modification leads to a more complex statement.
Corollary 4.2. Fix an N-grading on L[x1 , · · · , xn ]. Consider a homogeneous polynomial f with d := deg f = Σ deg xi + 1 and √(Jac (f )) = m, and suppose that p > (n − 2) · d.
(1) If fpt (f ) = 1 − 1/d, then p ≡ 1 mod d.
(2) If fpt (f ) ≠ 1 − 1/d, then fpt (f ) = 1 − 1/d − ( A − Jp % dK/d ) · p^{−1} for some integer A satisfying
(a) 1 ≤ A ≤ d − 2 if p ≡ −1 mod d, and
(b) 1 ≤ A ≤ d − 3 otherwise.
Proof. We begin with (1): Lemma 2.5 implies that (d^{−1})^(1) ≤ (d^{−1})^(s) for s ≥ 1, and hence that
(4.0.1)   (1 − d^{−1})^(1) = p − 1 − (d^{−1})^(1) ≥ p − 1 − (d^{−1})^(s) = (1 − d^{−1})^(s)
for every s ≥ 1. However, if fpt (f ) = 1 − d^{−1} , Proposition 3.3 implies that (1 − d^{−1})^(1) ≤ (1 − d^{−1})^(s) for every s ≥ 1. Consequently, equality holds throughout (4.0.1), and hence (d^{−1})^(1) = (d^{−1})^(s) for every s ≥ 1, which by Lemma 2.5 occurs if and only if p ≡ 1 mod d.
We now address the second point: In this setting, Theorem 3.5 states that fpt (f ) ∈ p^{−L} · N for some integer L ≥ 1. We will now show that L must equal one: Indeed, otherwise L ≥ 2, which allows us to set e = 1 in the third point of Theorem 3.5 to deduce that
1 ≤ d − 1 < Jp(d − 1) % dK = d − Jp % dK ,
and hence that Jp % dK < 1, which is impossible, as Jp % dK is always a positive integer. We conclude that L = 1, and the reader may verify that substituting L = 1, Jp(d − 1) % dK = d − Jp % dK, and A := E + 1 into Theorem 3.5 produces the desired description of fpt (f ).
4.1. The two variable case. We now shift our focus to the two variable case of Theorem 3.5, motivated by the following example.
Example 4.3. In [Har06, Corollary 3.9], Hara and Monsky independently described the possible
values of fpt (f ) whenever f is homogeneous in two variables (under the standard grading) of
degree 5 with an isolated singularity at the origin over an algebraically closed field (and hence, a
product of five distinct linear forms), and p 6= 5; we recall their computation below (the description
in terms of truncations is our own).
• If p ≡ 1 mod 5, then fpt (f ) = 2/5 or (2p − 2)/(5p) = ⟨2/5⟩_1 .
• If p ≡ 2 mod 5, then fpt (f ) = (2p^2 − 3)/(5p^2 ) = ⟨2/5⟩_2 or (2p^3 − 1)/(5p^3 ) = ⟨2/5⟩_3 .
• If p ≡ 3 mod 5, then fpt (f ) = (2p − 1)/(5p) = ⟨2/5⟩_1 .
• If p ≡ 4 mod 5, then fpt (f ) = 2/5 or (2p − 3)/(5p) = ⟨2/5⟩_1 or (2p^2 − 2)/(5p^2 ) = ⟨2/5⟩_2 .
The methods used in [Har06] rely on so-called “syzygy gap” techniques and the geometry of P^1 , and hence differ greatly from ours. In this example, we observe the following: First, the F -pure threshold is always λ = 2/5, or a truncation of 2/5. Secondly, there seem to be fewer choices for the truncation point L than one might expect, given Theorem 3.5.
In this subsection, we show that the two observations from Example 4.3 hold in general in the
two variable setting. We now work in the context of Theorem 3.5 with n = 2, and relabel the
variables so that f ∈ L[x, y]. Note that if deg f < deg xy, then fpt (f ) = 1, by Theorem 3.12 (an alternate justification: this inequality is satisfied if and only if, after possibly re-ordering the variables, f = x + y^m for some m ≥ 1, in which case one can directly compute that νf (p^e ) = p^e − 1, and hence that fpt (f ) = 1). Thus, the interesting case here is when deg f ≥ deg xy. In this case,
one obtains the following result.
Theorem 4.4 (cf. Theorem 3.5). Fix an N-grading on L[x, y]. Consider a homogeneous polynomial f with √(Jac (f )) = m and deg f ≥ deg xy. If fpt (f ) ≠ deg xy / deg f = a/b, written in lowest terms, then
fpt (f ) = ⟨ deg xy / deg f ⟩_L = deg xy / deg f − Jap^L % bK / ( b · p^L )
for some integer L satisfying the following properties:
(1) If p ∤ b, then 1 ≤ L ≤ ord(p, b).
(2) If p > b, then a < Jap^e % bK for all 1 ≤ e ≤ L − 1.
(3) 1 ≤ Jap^L % bK ≤ b − a for all possible values of p.
Proof. Assuming fpt (f ) ≠ deg xy / deg f , the bounds for E in Theorem 3.5 become
0 ≤ E ≤ 1 − ⌈ ( Jap^L % bK + a ) / b ⌉ .
As the rounded term above is always either one or two, the inequality forces it to equal one, so that E = 0, which shows that fpt (f ) is a truncation of deg xy / deg f . Moreover, the fact that the rounded term above equals one also implies that Jap^L % bK + a ≤ b.
Remark 4.5. Though the first two points in Theorem 4.4 appear in Theorem 3.5, the third condition
is special to the setting of two variables. Indeed, this extra condition will be key in eliminating
potential candidate F -pure thresholds. For example, this extra condition allows us to recover the
data in Example 4.3. Rather than justify this claim, we present two new examples.
Example 4.6. Let f ∈ L[x, y] be as in Theorem 4.4, with deg(xy)/ deg f = 1/3. For p ≥ 5, the following hold:
• If p ≡ 1 mod 3, then fpt (f ) = 1/3 or ⟨1/3⟩_1 = 1/3 − 1/(3p).
• If p ≡ 2 mod 3, then fpt (f ) = 1/3 or ⟨1/3⟩_1 = 1/3 − 2/(3p) or ⟨1/3⟩_2 = 1/3 − 1/(3p^2 ).
In Example 4.6, the second and third points of Theorem 4.4 were uninteresting, as they did not
“whittle away” any of the candidate F -pure thresholds identified by the first point of Theorem
4.4. The following example is more interesting, as we will see that both of the second and third
points of Theorem 4.4, along with Proposition 3.3, will be used to eliminate potential candidates.
Example 4.7. Let f ∈ L[x, y] be as in Theorem 4.4, with deg(xy)/ deg f = 2/7. For p ≥ 11, the following hold:
• If p ≡ 1 mod 7, then fpt (f ) = 2/7 or ⟨2/7⟩_1 = 2/7 − 2/(7p).
• If p ≡ 2 mod 7, then fpt (f ) = ⟨2/7⟩_1 = 2/7 − 4/(7p) or ⟨2/7⟩_2 = 2/7 − 1/(7p^2 ).
• If p ≡ 3 mod 7, then fpt (f ) = ⟨2/7⟩_2 = 2/7 − 4/(7p^2 ) or ⟨2/7⟩_3 = 2/7 − 5/(7p^3 ) or ⟨2/7⟩_4 = 2/7 − 1/(7p^4 ).
• If p ≡ 4 mod 7, then fpt (f ) = ⟨2/7⟩_1 = 2/7 − 1/(7p).
• If p ≡ 5 mod 7, then fpt (f ) = ⟨2/7⟩_1 = 2/7 − 3/(7p) or ⟨2/7⟩_2 = 2/7 − 1/(7p^2 ).
• If p ≡ 6 mod 7, then fpt (f ) = 2/7 or ⟨2/7⟩_1 = 2/7 − 5/(7p) or ⟨2/7⟩_2 = 2/7 − 2/(7p^2 ).
For the sake of brevity, we only indicate how to deduce the lists for p ≡ 3 mod 7 and p ≡ 4 mod 7. Similar methods can be used for the remaining cases.
(p ≡ 3 mod 7). In this case, it follows from Lemma 2.5 that (2/7)^(1) = (2p − 6)/7 and (2/7)^(5) = (p − 3)/7. In light of this, the second point of Proposition 3.3, which shows that the first digit of fpt (f ) must be the smallest digit, implies that fpt (f ) ≠ 2/7. Thus, the first point of Theorem 4.4 states that
fpt (f ) = ⟨2/7⟩_L for some 1 ≤ L ≤ ord(p, 7) = 6, as p ≡ 3 mod 7.
However, as 2 ≰ J2p^4 % 7K = 1, the second point of Theorem 4.4 eliminates the possibilities that L = 5 or 6. Moreover, as J2p % 7K = 6 ≰ 7 − 2 = 5, the third point of Theorem 4.4 eliminates the possibility that L = 1. Thus, the only remaining possibilities are fpt (f ) = ⟨2/7⟩_2 , ⟨2/7⟩_3 , and ⟨2/7⟩_4 .
(p ≡ 4 mod 7). As before, we compute that (2/7)^(1) = (2p − 1)/7 is greater than (2/7)^(2) = (p − 4)/7, and hence it again follows from the second point of Proposition 3.3 that fpt (f ) ≠ 2/7. Consequently, the first point of Theorem 4.4 states that
fpt (f ) = ⟨2/7⟩_L for some 1 ≤ L ≤ ord(p, 7) = 3, as p ≡ 4 mod 7.
However, we observe that 2 ≰ J2p^2 % 7K = 1, and hence the second point of Theorem 4.4 eliminates the possibility that L = 2 or 3. Thus, the only remaining option is that fpt (f ) = ⟨2/7⟩_1 .
However, we observe that 2 6≤ 2p2 % 7 = 1, and hence the second point of Theorem 4.4 eliminates the possibility that L = 2 or 3. Thus, the only remaining option is that fpt (f ) = 27 1 .
Remark 4.8 (Minimal lists). In many cases, we are able to verify that the “whittled down” list
obtained through the application of Theorems 4.4 and 3.5 and Proposition 3.3 is, in fact, minimal.
For example, every candidate listed in Example 4.3 is of the form fpt (f ), where f varies among
the polynomials x5 + y 5 , x5 + xy 4 , and x5 + xy 4 + 7x2 y 3 , and p various among the primes less than
or equal to 29.
An extreme example of the “minimality” of the lists of candidate thresholds appears below.
Note that, in this example, the list of candidate thresholds is so small that it actually determines
the precise value of fpt (f ) for p ≫ 0.
Example 4.9 (F -pure thresholds are precisely determined). Let f ∈ L[x, y] be as in Theorem 4.4,
= 53 ; for example, we may take f = x5 + x3 y + xy 2 , under the grading given by
with deg(xy)
deg f
(deg x, deg y) = (1, 2). Using Theorem 4.4 and Proposition 3.3 in a manner analogous to that used
in Example 4.7, we obtain the following complete description of fpt (f ) for p ≥ 7.
• If p ≡ 1 mod 5, then fpt (f ) = 35 .
• If p ≡ 2 mod 5, then fpt (f ) = 35
• If p ≡ 3 mod 5, then fpt (f ) = 35
• If p ≡ 4 mod 5, then fpt (f ) =
1
2
3
5 1
=
=
=
3
5
3
5
3
5
−
−
−
1
5p .
2
5p2 .
2p
5 .
10
We conclude this section with one final example illustrating “minimality”. In this instance,
however, we focus on the higher dimensional case. Although the candidate list for F -pure thresholds produced by Theorem 3.5 is more complicated (due to the possibility of having a non-zero
“E” term when n > 2), the following example shows that we can nonetheless obtain minimal lists
in these cases using methods analogous to those used in this section’s previous examples.
Example 4.10 (Minimal lists for n ≥ 3). Let f ∈ L[x, y, z] satisfy the hypotheses of Theorem 3.5,
xyz
2
with deg
deg f = 3 . Using the bounds for E and L therein, we obtain the following for p ≥ 5:
• If p ≡ 1 mod 3, then fpt (f ) =
• If p ≡ 2 mod 3, then fpt (f ) =
2
3
2
2
2
3 1 = 3 − 3p .
2
2
1
2
1
2
4
3 1 = 3 − 3p or 3 1 − p = 3 − 3p
fact, if f = x9 + xy 4 + z 3 , homogeneous
or
We claim that this list is minimal. In
under the grading
determined by (deg x, deg y, deg z) = (1, 2, 3), we obtain each of these possibilities as p varies.
5. F - PURE
THRESHOLDS OF HOMOGENEOUS POLYNOMIALS :
D ETAILS
Here, we prove the statements referred to in Section 3; we begin with some preliminary results.
5.1. Bounding the defining terms of the F -pure threshold. This subsection is dedicated to deriving bounds for νf (pe ). Our methods for deriving lower bounds are an extension of those employed
by Bhatt and Singh in [BS].
Lemma 5.1.
L[x1 , · ·k· , xn ] is homogeneous undernsome
N-grading,
then for every e ≥ 1,
j If f ∈ P
o
P
deg
x
deg
x
i
νf (pe ) ≤ (pe − 1) · deg f i . In particular, fpt (f ) ≤ min
deg f , 1 .
e
/
Proof. By Definition 3.2, it suffices to establish the upper bound on νf (pe ). However, as f νf (p ) ∈
e
e
e
m[p ] , there is a supporting monomial µ = xa11 · · · xann of f νf (p ) not in m[p ] , and comparing degrees
shows that
X
X
νf (pe ) · deg f = deg µ =
ai · deg xi ≤ (pe − 1) ·
deg xi .
Corollary 5.2. n
Let
f ∈ L[x
o1 , · · · , xn ] be a homogeneous polynomial under some N-grading, and
P
deg xi
a
e
e
write λ = min
deg f , 1 = b in lowest terms. If fpt (f ) 6= λ, then ∆e := p hλie − p hfpt (f )ie
defines a non-negative, non-decreasing, unbounded sequence. Moreover, if p ∤ b, then
1 ≤ min {e : ∆e 6= 0} ≤ ord(p, b).
Proof. By Lemma 5.1, the assumption that fpt (f ) 6= λ implies that fpt (f ) < λ, the so the asserted
properties of {∆e }e follow from Lemma 2.7. Setting s := ord(p, b), it follows from Lemma 2.5 that
λ := . λ(1) : · · · : λ(s) (base p).
By means of contradiction, suppose ∆s = 0, so that hλis = hfpt (f )is , i.e., so that
(5.1.1)
fpt (f ) = . λ(1) : · · · : λ(s) : fpt (f )(s+1) : fpt (f )(s+2) : · · · (base p).
As fpt (f ) ≤ λ, comparing the tails of the expansions of fpt (f ) and λ shows that
. fpt (f )(s+1) : · · · : fpt (f )(2s) (base p) ≤ . λ(1) : · · · : λ(s) (base p).
On the other hand, comparing the first s digits appearing in the second point of Proposition 3.3,
recalling the expansion (5.1.1), shows that
. λ(1) : · · · : λ(s) (base p) ≤ . fpt (f )(s+1) : · · · fpt (f )(2s) (base p),
11
and thus we conclude that fpt (f )(s+e) = λ(s+e) for every 1 ≤ e ≤ s, i.e., that ∆2s = 0. Finally, a
repeated application of this argument will show that ∆ms = 0 for every m ≥ 1, which implies that
fpt (f ) = λ, a contradiction.
Notation 5.3. If R is any N-graded ring, and M is a graded R-module, [M ]d will denote the degree
d component of M , and [M ]≤d and [M ]≥d the obvious [R]0 submodules of M . Furthermore, we
P
use HM (t) := d≥0 dim [M ]d · td to denote the Hilbert series of M .
For the remainder of this subsection, we work in the following context.
Setupp5.4. Fix an N-grading on R = L[x1 , · · · , xn ], and consider a homogeneous polynomial f ∈ m
with Jac (f ) = m. In this context, ∂1 (f ), · · · , ∂n (f ) form a homogeneous system of parameters
for R, and hence a regular sequence. Consequently, if we set Jk = (∂1 (f ), · · · , ∂k (f )), the sequences
∂k (f )
0 → (R/Jk−1 ) (− deg f + deg xk ) −→ R/Jk−1 → R/Jk → 0
are exact for every 1 ≤ k ≤ n. Furthermore, using the fact that
series is additive across
Qn the Hilbert
1
short exact sequences, the well-known identities HR (t) = i=1 1−tdeg xi and HM (−s) (t) = ts HM (t)
imply that
HR/ Jac(f ) (t) =
(5.1.2)
n
Y
1 − tdeg f −deg xi
i=1
1 − tdeg xi
,
an identity that will play a key role in what follows.
e
e
Lemma 5.5. Under Setup 5.4, we have that m[p ] : Jac (f ) \ m[p ] ⊆ [R]≥(pe +1)·P deg xi −n·deg f .
Proof. To simplify notation, set J = Jac (f ). By (5.1.2), the degree of HR/J (t) (a polynomial, as
√
P
J = m) is N := n deg f − 2 deg xi , and so [R/J]d = 0 whenever d ≥ N + 1. It follows that
[R]≥N +1 ⊆ J, and to establish the claim, it suffices to show that
e
e
(5.1.3)
m[p ] : [R]≥N +1 \ m[p ] ⊆ [R]≥(pe −1)·P deg xi −N = [R]≥(pe +1)·P deg xi −n·deg f .
By means of contradiction, suppose (5.1.3) is false. Consequently, there exists a monomial
e
e
e
µ = xp1 −1−s1 · · · xpn −1−sn ∈ m[p ] : [R]≥N +1
such that deg µ ≤ (pe − 1) · deg xi − (N + 1). This condition
implies that the monomial µ◦ :=
e]
s1
[p
s
x1 · · · xnn is in [R]≥N +1 , and as µ ∈ m : [R]≥N +1 , it follows that µµ◦ (which is apparently
equal to (x1 · · · xn )p
e −1
e
) is in m[p ] , a contradiction.
l
Proposition 5.6. In the setting of Setup 5.4, if p ∤ (νf (pe ) + 1), then νf (pe ) ≥ (pe + 1) ·
P
deg xi
deg f
m
−n .
e
e
e
e
Proof. The Leibniz rule shows that ∂i m[p ] ⊆ m[p ] , and so differentiating f νf (p )+1 ∈ m[p ] shows
e
e
that (νf (pe ) + 1) · f νf (p ) · ∂i (f ) ∈ m[p ] for all i. Our assumption that p ∤ νf (pe ) + 1 then implies that
e
e
e
f νf (p ) ∈ m[p ] : J \ m[p ] ⊆ [R]≥(pe +1)·P deg xi −n·deg f , where the exclusion follows by definition,
and the final containment by Lemma 5.5. Therefore,
X
deg f · νf (pe ) ≥ (pe + 1) ·
deg xi − n · deg f,
and the claim follows.
12
Corollary 5.7. In the setting of Setup 5.4, write λ = min
(e)
fpt (f )
, the
eth
nP
o
deg xi
deg f , 1
=
a
b,
in lowest terms. If
digit of fpt (f ), is not equal to p − 1, then
Jape % bK + a
e
e
p hλie − p hfpt (f )ie ≤ n −
.
b
Proof. By Proposition 3.3, νf (pe ) = pe hfpt (f )ie ≡ fpt (f )(e) mod p, and so the condition that
fpt (f )(e) 6= p − 1 is equivalent to the condition that p ∤ P
(νf (pe ) + 1). In light of this, we are
free to apply Proposition 5.6. In what follows, we set δ := ( deg xi ) · (deg f )−1 .
First, suppose that min {δ, 1} = 1, so that a = b = 1. Then (Jape % bK + a) · b−1 = 2, and so it
suffices to show that pe h1ie − pe hfpt (f )ie ≤ n − 2. However, the assumption that min {δ, 1} = 1
implies that δ ≥ 1, and Proposition 5.6 then shows that
pe · hfpt (f )ie = νf (pe ) ≥ ⌈(pe + 1) · δ − n⌉ ≥ ⌈pe + 1 − n⌉ = pe − 1 + 2 − n
= pe · h1ie + 2 − n.
If instead min {δ, 1} = δ, Proposition 5.6 once again shows that
pe hfpt (f )ie = νf (pe ) ≥ ⌈(pe + 1) · δ − n⌉ = ⌈pe · δ + δ − n⌉
Jape % bK
+
δ
−
n
= pe · hδie +
b · pe
e
Jap % bK
e
+ δ − n,
= p · hδie +
b
the second to last equality following from Lemma 2.5.
Example 5.8 (Illustrating that Corollary 5.7 is not an equivalence). If p = 2 and f is any L∗ -linear
(e)
15
combination of x15
6= 1, then ∆e := 2e 3−1 e −
1 , · · · , x5 , Corollary 5.7 states that if fpt (f )
e
2 hfpt (f )ie ≤ 4. We claim that the converse fails when e = 4. Indeed, a direct computation, made
possible by [Hera], shows that fpt (f ) = 18 , and comparing the base 2 expansions of fpt (f ) = 18
and λ = 13 shows that ∆4 = 4, even though fpt (f )(4) = 1 = p − 1.
5.2. Proofs of the main results. In this subsection, we return to the statements in Section 3 whose
proofs were postponed. For the benefit of the reader, we restate these results here.
p
Theorem 3.5. Fix annN-gradingoon R. Consider a homogeneous polynomial f with Jac (f ) = m,
P
deg xi
a
and write λ := min
deg f , 1 = b in lowest terms.
(1) If fpt (f ) 6= λ, then
q
!
y
apL % b + b · E
E
fpt (f ) = λ −
= hλiL − L
L
b·p
p
J apL % b K+a
2
.
for some (L, E) ∈ N with L ≥ 1 and 0 ≤ E ≤ n − 1 −
b
(2) If p > (n − 2) · b and p ∤ b, then 1 ≤ L ≤ ord(p, b); note that ord(p, 1) = 1.
(3) If p > (n − 2) · b and p > b, then a < Jape % bK for all 1 ≤ e ≤ L − 1.
(4) If p > (n − 1) · b, then there exists a unique pair (L, E) satisfying the conclusions of (1).
Proof. We begin by establishing (1): The two descriptions of fpt (f ) are equivalent by Lemma
2.5, and so it suffices to establish the identity in terms of truncations. Setting ∆e := pe hλie −
13
pe hfpt (f )ie , Corollary 5.2, states that {∆e }e≥1 is a non-negative, non-decreasing, unbounded sequence; in particular, min {e : ∆e 6= 0} ≥ 1 is well-defined, and we claim that
n
o
ℓ := min {e : ∆e 6= 0} ≤ L := max e : fpt (f )(e) 6= p − 1 ,
l e
m
the latter also being well-defined. Indeed, set µe := Jap %b bK+a . As 1 ≤ µe ≤ 2, the sequence
{n − µe }e≥1 is bounded above by n − 1, and therefore ∆e > n − µe for e ≫ 0. For such e ≫ 0,
Corollary 5.7 implies that fpt (f )(e) = p − 1, which demonstrates that L is well-defined. Note that,
by definition, ∆ℓ = λ(ℓ) − fpt (f )(ℓ) ≥ 1, so that fpt (f )(ℓ) ≤ λ(ℓ) − 1 ≤ p − 2; by definition of L, it
follows that ℓ ≤ L.
As fpt (f )(e) = p − 1 for e ≥ L + 1,
(5.2.1)
fpt (f ) = hfpt (f )iL +
1
∆L
1
E
= hλiL − L + L = hλiL − L ,
L
p
p
p
p
where E := ∆L − 1. In order to conclude this step of the proof, it suffices to note that
(5.2.2)
1 ≤ ∆ℓ ≤ ∆L ≤ n − µL ≤ n − 1;
indeed, the second bound in (5.2.2) follows from the fact that L ≥ ℓ, the third follows from Corollary 5.7, and the last from the bound 1 ≤ µe ≤ 2.
For point (2), we continue to use the notation adopted above. We begin by showing that
(5.2.3)
∆e = 0 for all 0 ≤ e ≤ L − 1 whenever p > (n − 2) · b.
As the sequence ∆e is non-negative and non-decreasing, it suffices to show that ∆L−1 = 0. Therefore, by way of contradiction, we suppose that ∆L−1 ≥ 1. By definition 0 ≤ fpt (f )(L) ≤ p − 2, and
hence
∆L = p · ∆L−1 + λ(L) − fpt (f )(L) ≥ λ(L) + 2.
Comparing this with (5.2.2) shows that λ(L) + 2 ≤ ∆L ≤ n − 1, so that
λ(L) ≤ n − 3.
On the other hand, if p > (n − 2) · b, then it follows from the explicit formulas in Lemma 2.5 that
q e−1
y
ap
% b · p − Jape % bK
p−b
(n − 2) · b − b
(e)
≥
>
= n − 3 for every e ≥ 1.
(5.2.4)
λ =
b
b
b
In particular, setting e = L in this identity shows that λ(L) > n−3, contradicting our earlier bound.
Thus, we conclude that (5.2.3) holds, which when combined with (5.2.2) shows that L = min {e : ∆e 6= 0}.
In summary, we have just shown that L = ℓ when p > (n − 2) · b. If we assume further that p ∤ b,
the desired bound L = ℓ ≤ ord(p, b) then follows from Corollary 5.2.
We now focus on point (3), and begin by observing that
(5.2.5)
fpt (f ) = λ(1) : · · · : λ(L−1) : λ(L) − ∆L : p − 1 (base p) whenever p > (n − 2) · b.
Indeed, by (5.2.3), the first L − 1 digits of fpt (f ) and λ agree, while fpt (f )(e) = p − 1 for e ≥ L + 1,
by definition of L. Finally, (5.2.3) shows that ∆L = λ(L) − fpt (f )(L) , so that fpt (f )(L) = λ(L) − ∆L .
Recall that, by the second point of Proposition 3.3, the first digit of fpt (f ) is its smallest digit,
and it follows from (5.2.5) that λ(1) ≤ λ(e) for all 1 ≤ e ≤ L, with this inequality being strict for e = L.
However, it follows from the explicit formulas in Lemma 2.5 that whenever p > b,
q
y
λ(1) ≤ λ(e) ⇐⇒ a · p − Jap % bK ≤ ape−1 % b · p − Jape % bK
q
y
⇐⇒ a ≤ ape−1 % b ,
14
where
q the second
y equivalence relies on the fact that p > b. Summarizing, we have just shown that
a ≤ ape−1 % b for all 1 ≤ e ≤ L whenever p > (n − 2) · b and p > b; relabeling our index, we see
that
a ≤ Jape % bK for all 0 ≤ e ≤ L − 1 whenever p > (n − 2) · b and p > b.
It remains to show that this bound is strict for 1 ≤ e ≤ L − 1. By contradiction, assume that
a = Jape % bK for some such e. In this case, a ≡ a · pe mod b, and as a and b are relatively prime,
we conclude that pe ≡ 1 mod b, so that ord(p, b) | e. However, by definition 1 ≤ e ≤ L − 1 ≤
ord(p, b) − 1, where the last inequality follows point (2). Thus, we have arrived at a contradiction,
and therefore conclude that our asserted upper bound is strict for 1 ≤ e ≤ L − 1.
To conclude our proof, it remains to establish the uniqueness statement in point (4). To this end,
let (L′ , E ′ ) denote any pair of integers satisfying the conclusions of point (1) of this Theorem; that
is,
′
fpt (f ) = hλiL′ − E ′ · p−L with 1 ≤ E ′ ≤ n − 1 − µL′ ≤ n − 2.
A modification of (5.2.4) shows that λ(e) > n − 2, and hence that λ(e) ≥ E ′ + 1, whenever p >
(n − 1) · b, and it follows that
′
′
′
fpt (f ) = hλiL′ − E ′ · p−L = .λ(1) : · · · : λ(L −1) : λ(L ) − (E + 1) : p − 1 whenever p > (n − 1) · b.
The uniqueness statement then follows from comparing this expansion with (5.2.5) and invoking
the uniqueness of non-terminating base p expansions.
Corollary 3.8. In the setting of Theorem 3.5, if fpt (f ) 6= λ and p ∤ b, then pM · fpt (f ) ∈ N, where
M := 2 · φ(b) + ⌈log2 (n − 1)⌉ , and φ denotes Euler’s phi function.
Proof. We adopt the notation used in the
3.5. In particular, ℓ ≤ L and fpt (f ) ∈
proof of Theorem
p−L · N. Setting s = ord(p, b), and k = logp (n − 1) in Lemma 2.7 shows that
(5.2.6)
∆ℓ+s+⌈log
⌈logp (n−1)⌉ + 1 ≥ n.
⌉≥p
p (n−1)
By definition of L, Corollary 5.7 states that ∆L ≤ n − 1, and as {∆e }e≥1 is non-decreasing, (5.2.6)
then shows that L is bounded above by ℓ + s + logp (n − 1) . To obtain a uniform bound, note that
ℓ ≤ s, by Corollary 5.2, while s ≤ φ(b), by definition, and logp (n − 1) ≤ log2 (n − 1), as p ≥ 2.
P
P
deg xi
Theorem 3.12. In the context of Theorem 3.5, suppose that deg xi > deg f , so that ρ := deg
f
,
then
fpt
(f
)
=
1.
is greater than 1. If p > n−3
ρ−1
Proof. We begin with the following elementary manipulations, the first of which relies on the assumption that ρ − 1 is positive: Isolating n − 3 in our assumption that p > (n − 3) · (ρ − 1)−1 − 1
shows that (p + 1) · (ρ − 1) > n − 3, and adding p + 1 and subtracting n from both sides then shows
that (p + 1) · ρ − n > p − 2; rounding up, we see that
(5.2.7)
⌈(p + 1) · ρ − n⌉ ≥ p − 1.
Assume, by means of contradiction, that fpt (f ) 6= 1. By hypothesis, 1 = min {ρ, 1}, and Corollary 5.2 then states that 1 = min {e : pe h1ie − pe hfpt (f )ie ≥ 1}; in particular,
νf (p) = fpt (f )(1) = p · hfpt (f )i1 ≤ p h1i1 − 1 = p − 2.
However, this bound allows us to apply Proposition 5.6, which when combined with (5.2), implies
that
νf (p) ≥ ⌈(p + 1) · ρ − n⌉ ≥ p − 1.
Thus, we have arrived at a contradiction, which allows us to conclude that fpt (f ) = 1.
15
6. APPLICATIONS TO LOG CANONICAL THRESHOLDS
Given a polynomial fQ over Q, we will denote its log canonical threshold by lct (fQ ). In this article,
we will not need to refer to the typical definition(s) of lct (fQ ) (e.g., via resolution of singularities),
and will instead rely on the limit in (6.0.8) below as our definition. However, so that the reader
unfamiliar with this topic may better appreciate (6.0.8), we present the following characterizations.
In what follows, we fix fQ ∈ Q[x1 , · · · , xn ].
n
n
(1) If π : X → AQ is a log resolution of the pair AQ , V(fQ ) , then lct (fQ ) is the supremum
over all λ > 0 such that the coefficients of the divisor Kπ − λ · π ∗ div(f ) are all greater than
−1; here, Kπ denotes the relative canonical divisor of π.
(2) For every λ > 0, consider the function Γλ (fQ ) : Cn → R given by
(z1 , · · · , zn ) 7→ |f (z1 , · · · , zn )|−2λ ,
where | · | ∈ R denotes the norm of a complex number; note that Γλ (fQ ) has a pole at all
(complex) zeros of fQ . In this setting, lct (fQ ) := sup {λ : Γλ (fQ ) is locally R-integrable} ,
where here, “locally R-integrable” means that we identify Cn = R2n , and require that this
function be (Lebesque) integrable in a neighborhood of every point in its domain.
(3) The roots of the Bernstein-Sato polynomial bfQ of fQ are all negative rational numbers, and
−lct (fQ ) is the largest such root [Kol97].
For more information on these invariants, the reader is referred to the surveys [BL04, EM06].
We now recall the striking relationship between F -pure and log canonical thresholds: Though
there are many results due to many authors relating characteristic zero and characteristic p > 0
invariants, the one most relevant to our discussion is the following theorem, which is due to
Mustaţă and the fourth author.
Theorem 6.1. [MZ, Corollary 3.5, 4.5] Given any polynomial fQ over Q, there exist constants C ∈ R>0 and N ∈ N (depending only on fQ ) with the following property: For p ≫ 0, either fpt (fp ) = lct (fQ ), or
1/p^N ≤ lct (fQ ) − fpt (fp ) ≤ C/p .
Note that, as an immediate corollary of Theorem 6.1,
(6.0.8)   fpt (fp ) ≤ lct (fQ ) for all p ≫ 0, and lim_{p→∞} fpt (fp ) = lct (fQ ).
We point out that (6.0.8) (which follows from the work of Hara and Yoshida) appeared in the
literature well before Theorem 6.1 (see, e.g., [MTW05, Theorem 3.3, 3.4]).
6.1. Regarding uniform bounds. Though the constants C ∈ R>0 and N ∈ N appearing in Theorem 6.1 are known to depend only on fQ , their determination is complicated (e.g., they depend
on numerical invariants coming from resolution of singularities), and are therefore not explicitly
described. In Theorem 6.2 below, we give an alternate proof of this result for homogeneous polynomials with an isolated singularity at the origin; in the process of doing so, we also identify
explicit values for C and N .
Theorem 6.2. If fQ ∈ Q[x1 , · · · , xn ] is homogeneous under some N-grading, with √(Jac (fQ )) = m, then lct (fQ ) = min { Σ deg xi / deg f , 1 }, which we write as a/b in lowest terms. Moreover, if fpt (fp ) ≠ lct (fQ ), then
1/( b · p^{ord(p,b)} ) ≤ lct (fQ ) − fpt (fp ) ≤ ( n − 1 − 1/b )/p   for p ≫ 0,
where ord(p, b) denotes the order of p mod b (which equals one when b = 1, by convention).
p
Proof. As the reduction of ∂k (f ) mod p equals ∂k (fp ) for large values of p, the equality Jac (fQ ) =
m reduces
n P modop for p ≫ 0. Taking p → ∞, it follows from Theorem 3.5 and (6.0.8) that lct (fQ ) =
min
(6.1.1)
deg xi
deg f , 1
, and in light of this, Theorem 3.5 states that
q L
y
ap % b
E
lct (fQ ) − fpt (fp ) =
+ L.
L
b·p
p
If lct (fQ ) 6= 1, then
q L
y
ap % b
1
1 − b−1
≤
≤
.
b · pL
b · pL
pL
Furthermore, Theorem 3.5q implies ythat 1 ≤ L ≤ φ(b) and 0 ≤ E ≤ n − 2 for p ≫ 0. If instead
lct (fQ ) = a = b = 1, then apL % b = φ(b) = 1, and hence
q L
y
ap % b
b−1
=
.
b · pL
pL
Moreover, in this case, Theorem 3.5 shows that L = 1 and 0 ≤ E ≤ n − 3 for p ≫ 0. Finally, it is left
to the reader to verify that substituting these inequalities into (6.1.1) produces the desired bounds
in each case.
Remark 6.3 (On uniform bounds). Of course, ord(p, b) ≤ φ(b), where φ denotes Euler’s phi function. By enlarging p, if necessary, it follows that the lower bound in Theorem 6.2 is itself bounded
below by p−φ(b)−1 . In other words, in the language of Theorem 6.1, we may take N = φ(b) + 1 and
C = n − 1 − b−1 .
Remark 6.4 (Regarding sharpness). The bounds appearing in Theorem 6.2 are sharp: If d > 2 and fQ = x1^d + · · · + xd^d , then lct (fQ ) = 1, and Theorem 6.2 states that
(6.1.2)   1/p ≤ lct (fQ ) − fpt (fp ) ≤ (d − 2)/p
whenever fpt (fp ) ≠ 1 and p ≫ 0. However, it is shown in [Hera, Corollary 3.5] that
lct (fQ ) − fpt (fp ) = 1 − fpt (fp ) = ( Jp % dK − 1 )/p
whenever p > d. If d is odd and p ≡ 2 mod d, then the lower bound in (6.1.2) is obtained, and similarly, if p ≡ d − 1 mod d, then the upper bound in (6.1.2) is obtained; in both these cases, Dirichlet’s theorem guarantees that there are infinitely many primes satisfying these congruence relations.
6.2. On the size of a set of bad primes. In this subsection, we record some simple observations
regarding the set of primes for which the F -pure threshold does not coincide with the log canonical
threshold, and we begin by recalling the
pcase of elliptic curves: Let fQ ∈ Q[x, y, z] be a homogeneous polynomial of degree three with Jac (fQ ) = m, so that E := V(f ) defines an elliptic curve
in P2Q . As shown in the proof of Theorem 6.2, the reductions fp ∈ Fp [x, y, z] satisfy these same
conditions for p ≫ 0, and thus define elliptic curves Ep = V(fp ) ⊆ P2Fp for all p ≫ 0. Recall that
the elliptic curve Ep is called supersingular if the natural Frobenius action on the local cohomology
2
module H(x,y,z)
(Fp [x, y, z]/(fp )) is injective, or equivalently, if (fp )p−1 ∈
/ (xp , y p , z p ) (see, e.g, [Sil09,
Chapters V.3 and V.4] for these and other characterizations of supersingularity). Using these descriptions, one can show that Ep is supersingular if and only if fpt (fp ) = 1 [MTW05, Example
4.6]. In light of this, Elkies’ well-known theorem on the set of supersingular primes, which states
that Ep is supersingular for infinitely many primes p, can be restated as follows.
Theorem 6.5. [Elk87] If fQ ∈ Q[x, y, z] is as above, the set of primes {p : fpt (fp ) 6= lct (fQ )} is
infinite.
Recall that given a set S of prime numbers, the density of S, δ(S), is defined as
δ(S) = lim_{n→∞} # {p ∈ S : p ≤ n} / # {p : p ≤ n} .
In the context of elliptic curves over Q, the set of primes {p : fpt (fp ) ≠ lct (fQ )}, which is infinite by Elkies’ result, may be quite large (i.e., have density 1/2), or may be quite small (i.e., have
density zero); see [MTW05, Example 4.6] for more information. This discussion motivates the
following question.
Question 6.6. For which polynomials fQ is the set of primes {p : fpt (fp ) 6= lct (fQ )} infinite? In
the case that this set is infinite, what is its density?
As illustrated by the case of an elliptic curve, Question 6.6 is quite subtle, and one expects it to
be quite difficult to address in general. However, as we see below, when the numerator of lct (fQ )
is not equal to 1, one is able to give a partial answer to this question using simple methods. Our
main tool will be Proposition 3.3, which provides us with a simple criterion for disqualifying a
rational number from being an F -pure threshold. We stress the fact that Proposition 6.7 is not
applicable when lct (fQ ) = 1, and hence sheds no light on the elliptic curve case discussed above.
Proposition 6.7. Let f_Q denote any polynomial over Q, and write lct(f_Q) = a/b in lowest terms. If a ≠ 1, then the set of primes for which lct(f_Q) is not an F-pure threshold (of any polynomial) is infinite, and contains all primes p such that p^e · a ≡ 1 mod b for some e ≥ 1. In particular,

δ({p : fpt(f_p) ≠ lct(f_Q)}) ≥ 1/φ(b).
Proof. As a and b are relatively prime, there exists c ∈ N such that a · c ≡ 1 mod b. We claim that

{p : p ≡ c mod b} ⊆ {p : p^e · a ≡ 1 mod b for some e ≥ 1}
                 ⊆ {p : lct(f_Q) is not an F-pure threshold in characteristic p > 0}.

Once we establish this, the proposition will follow, as δ({p : p ≡ c mod b}) = 1/φ(b) by Dirichlet's theorem. By definition of c, the first containment holds by setting e = 1, and so it suffices to establish the second containment. However, if p^e · a ≡ 1 mod b for some e ≥ 1, then Lemma 2.5 shows that

lct(f_Q)^{(e+1)} = (Ja·p^e % bK · p − Ja·p^{e+1} % bK)/b = (p − Ja·p^{e+1} % bK)/b.

On the other hand, Lemma 2.5 also shows that

lct(f_Q)^{(1)} = (a · p − Ja·p % bK)/b,

and as a ≥ 2, by assumption, we see that lct(f_Q)^{(1)} > lct(f_Q)^{(e+1)} for all p ≫ 0. In light of this, the second point of Proposition 3.3, which shows that the first digit of an F-pure threshold must be its smallest, shows that lct(f_Q) could not be the F-pure threshold of any polynomial in characteristic p > 0.
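The containments in this proof are easy to explore numerically. A minimal sketch (illustrative only: the value lct(f_Q) = a/b is supplied by hand rather than computed from a polynomial, and the choice a/b = 2/5 is a hypothetical example):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def bad_primes(a, b, bound):
    """Primes p <= bound with p^e * a = 1 (mod b) for some e >= 1; by Proposition 6.7
    (with gcd(a, b) = 1 and a != 1), a/b is not an F-pure threshold in characteristic p."""
    out = []
    for p in range(2, bound + 1):
        if not is_prime(p) or b % p == 0:
            continue
        x, seen = p % b, set()
        while x not in seen:               # the powers of p mod b eventually cycle
            seen.add(x)
            if (x * a) % b == 1:
                out.append(p)
                break
            x = (x * p) % b
    return out

# a/b = 2/5: c = 3 is the inverse of 2 mod 5, so every prime p = 3 (mod 5) must appear,
# already giving density at least 1/phi(5) = 1/4.
print(bad_primes(2, 5, 100))
```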
We conclude this section with the following example, which follows immediately from Corollary 4.2, and which illustrates a rather large family of polynomials whose set of “bad” primes
greatly exceeds the bound given by Proposition 6.7.
Example 6.8. If f_Q ∈ Q[x_1, · · · , x_{d−1}] is homogeneous (under the standard grading) of degree d with √(Jac(f_Q)) = m, then {p : p ≢ 1 mod d} ⊆ {p : fpt(f_p) ≠ lct(f_Q) = 1 − 1/d}. In particular,

δ({p : fpt(f_p) ≠ lct(f_Q)}) ≥ δ({p : p ≢ 1 mod d}) = 1 − 1/φ(d).
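A crude prime count illustrates the density appearing here; the sketch below (with a hypothetical choice of d and cutoff) simply compares the proportion of primes p ≢ 1 mod d against the limiting value 1 − 1/φ(d) predicted by Dirichlet's theorem:

```python
from math import gcd

d, bound = 7, 20000
sieve = bytearray([1]) * bound                       # simple sieve of Eratosthenes
sieve[0:2] = b"\x00\x00"
for n in range(2, int(bound ** 0.5) + 1):
    if sieve[n]:
        sieve[n * n::n] = bytearray(len(range(n * n, bound, n)))
primes = [p for p in range(2, bound) if sieve[p]]

not_one = [p for p in primes if p % d != 1]
phi_d = sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)
print(len(not_one) / len(primes), 1 - 1 / phi_d)     # empirical proportion vs 1 - 1/phi(7) = 5/6
```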
7. A SPECIAL CASE OF ACC AND LOCAL m-ADIC CONSTANCY FOR F-PURE THRESHOLDS
Motivated by the relationship between F -pure thresholds and log canonical thresholds, Blickle,
Mustaţă, and Smith conjectured the following.
Conjecture 7.1. [BMS09, Conjecture 4.4] Fix an integer n ≥ 1.
(1) The set {fpt (f ) : f ∈ L[x1 , · · · , xn ]} satisfies the ascending chain condition (ACC); i.e., it
contains no strictly increasing, infinite sequence.
(2) For every f ∈ L[x1 , · · · , xn ], there exists an integer N (which may depend on f ) such that
fpt (f ) ≥ fpt (f + g) for all g ∈ mN .
As discussed in [BMS09, Remark 4.5], the first conjecture implies the second, which states that the
F -pure threshold function f 7→ fpt (f ) is locally constant (in the m-adic topology).
In this section, we confirm the first conjecture for a restricted set of F-pure thresholds (see Proposition 7.3). Additionally, we confirm the second in the case that f is homogeneous under some N-grading with √(Jac(f)) = m (see Propositions 7.8 and 7.10).
7.1. A special case of ACC.
Definition 7.2. For every ω ∈ N^n, let W_ω denote the set of polynomials f ∈ L[x_1, · · · , x_n] satisfying the following conditions:
(1) √(Jac(f)) = m.
(2) f is homogeneous under the grading determined by (deg x_1, · · · , deg x_n) = ω.
(3) p ∤ deg f (and hence, p does not divide the denominator of min{ ∑ deg x_i / deg f, 1 }, in lowest terms).
Given N ∈ N, set W_{≼N} := ⋃_ω W_ω, where the union is taken over all ω = (ω_1, . . . , ω_n) ∈ N^n with ω_i ≤ N for each 1 ≤ i ≤ n.
Proposition 7.3. For every N ∈ N and µ ∈ (0, 1], the set

{fpt(f) : f ∈ W_{≼N}} ∩ (µ, 1]

is finite. In particular, this set of F-pure thresholds satisfies ACC.
Proof. Fix f ∈ W_{≼N} such that fpt(f) > µ. By definition, there exists an N-grading on L[x_1, · · · , x_n] such that deg x_i ≤ N for all 1 ≤ i ≤ n, and under which f is homogeneous. Moreover, by Lemma 5.1,

µ < fpt(f) ≤ (∑_{i=1}^n deg x_i)/deg f ≤ (n · N)/deg f.

Consequently, deg f ≤ (n · N)/µ, and it follows that

λ := min{ (∑_{i=1}^n deg x_i)/deg f , 1 } ∈ S := (0, 1] ∩ { a/b ∈ Q : b ≤ (n · N)/µ },

a finite set. We will now show that fpt(f) can take on only finitely many values: If fpt(f) ≠ λ, then by Corollary 3.8, there exists an integer M_λ, depending only on λ and n, such that p^{M_λ} · fpt(f) ∈ N. If M := max{M_λ : λ ∈ S}, it follows that fpt(f) ∈ { a/p^M : a ∈ N } ∩ (0, 1], a finite set.
7.2. A special case of local m-adic constancy of the F -pure threshold function. Throughout this
subsection, we fix an N-grading on L[x1 , · · · , xn ].
Lemma 7.4. Consider f ∈ m such that p^L · fpt(f) ∈ N for some L ∈ N. If g ∈ m, then

fpt(f + g) ≤ fpt(f) ⟺ (f + g)^{p^L · fpt(f)} ∈ m^{[p^L]}.

Proof. If (f + g)^{p^L · fpt(f)} ∈ m^{[p^L]}, then (f + g)^{p^s · fpt(f)} ∈ m^{[p^s]} for s ≥ L. Consequently, ν_{f+g}(p^s) < p^s · fpt(f) for s ≫ 0, and hence fpt(f + g) ≤ fpt(f). We now focus on the remaining implication. By the hypothesis, p^L · fpt(f) − 1 ∈ N, and hence the identity fpt(f) = (p^L · fpt(f) − 1)/p^L + 1/p^L shows that

⟨fpt(f)⟩_L = (p^L · fpt(f) − 1)/p^L.

If fpt(f + g) ≤ fpt(f), the preceding identity and Proposition 3.3 show that

ν_{f+g}(p^L) = p^L ⟨fpt(f + g)⟩_L ≤ p^L ⟨fpt(f)⟩_L = p^L fpt(f) − 1,

and consequently, this bound for ν_{f+g}(p^L) shows that (f + g)^{p^L fpt(f)} ∈ m^{[p^L]}.
Lemma 7.5. If h is homogeneous and h ∉ m^{[p^e]}, then deg h ≤ (p^e − 1) · ∑_{i=1}^n deg x_i.

Proof. Since h ∉ m^{[p^e]}, some supporting monomial of h is of the form x_1^{p^e − a_1} · · · x_n^{p^e − a_n}, where each a_i ≥ 1. As h is homogeneous, deg h equals the degree of this monomial, so

deg h = ∑_{i=1}^n (p^e − a_i) deg x_i ≤ (p^e − 1) ∑_{i=1}^n deg x_i.
Proposition 7.6. Fix f ∈ m homogeneous. If g ∈ [R]_{≥ deg f + 1}, then fpt(f) ≤ fpt(f + g).

Proof. It suffices to show that for every e ≥ 1, ν_f(p^e) ≤ ν_{f+g}(p^e); i.e., if N := ν_f(p^e), then (f + g)^N ∉ m^{[p^e]}. Suppose, by way of contradiction, that (f + g)^N = f^N + ∑_{k=1}^N \binom{N}{k} f^{N−k} g^k ∈ m^{[p^e]}; note that, as f^N ∉ m^{[p^e]} by definition, each monomial summand of f^N must cancel with one of ∑_{k=1}^N \binom{N}{k} f^{N−k} g^k. However, for any monomial summand µ of any f^{N−k} g^k, k ≥ 1,

deg µ ≥ (N − k) deg f + k(deg f + 1) > N deg f = deg(f^N),

and such cancelation is impossible.
Lemma 7.7. Fix f ∈ m homogeneous such that λ := ∑ deg x_i / deg f ≤ 1. If (p^e − 1) · λ ∈ N and g ∈ [R]_{≥ deg f + 1}, then (f + g)^{p^e ⟨λ⟩_e} ≡ f^{p^e ⟨λ⟩_e} mod m^{[p^e]}.

Proof. We claim that

(7.2.1)    f^{p^e ⟨λ⟩_e − k} g^k ∈ m^{[p^e]} for all 1 ≤ k ≤ p^e ⟨λ⟩_e.

Indeed, suppose that (7.2.1) is false, and let µ be a supporting monomial of some f^{p^e ⟨λ⟩_e − k} g^k, 1 ≤ k ≤ p^e ⟨λ⟩_e, with µ ∉ m^{[p^e]}. As g ∈ [R]_{≥ deg f + 1}, we also have that

(7.2.2)    deg µ ≥ deg f · (p^e ⟨λ⟩_e − k) + (deg f + 1) · k = deg f · p^e ⟨λ⟩_e + k.

However, as (p^e − 1) · λ ∈ N, it follows from Lemma 2.6 that p^e ⟨λ⟩_e = (p^e − 1) · λ. Substituting this into (7.2.2) shows that

deg µ ≥ deg f · (p^e − 1) · λ + k = k + (p^e − 1) ∑_{i=1}^n deg x_i,

which contradicts Lemma 7.5 as k ≥ 1. Thus, (7.2.1) holds, and it follows from the Binomial Theorem that (f + g)^{p^e ⟨λ⟩_e} ≡ f^{p^e ⟨λ⟩_e} mod m^{[p^e]}.
We are now able to prove our first result on the m-adic constancy of the F-pure threshold function, which does not require the isolated singularity hypothesis.

Proposition 7.8. Fix f ∈ m homogeneous such that λ := ∑ deg x_i / deg f ≤ 1, and suppose that either fpt(f) = λ, or fpt(f) = ⟨λ⟩_L and (p^L − 1) · λ ∈ N for some L ≥ 1. Then fpt(f + g) = fpt(f) for each g ∈ [R]_{≥ deg f + 1}.
Proof. By Proposition 7.6, it suffices to show that fpt(f) ≥ fpt(f + g). First say that fpt(f) = λ. It is enough to show that for all e ≥ 1, (f + g)^{ν_f(p^e)+1} ∈ m^{[p^e]}, so that ν_f(p^e) ≥ ν_{f+g}(p^e). By the Binomial Theorem, it suffices to show that for all 0 ≤ k ≤ ν_f(p^e) + 1, f^{ν_f(p^e)+1−k} g^k ∈ m^{[p^e]}. To this end, take any monomial µ of such an f^{ν_f(p^e)+1−k} g^k. Then

(7.2.3)    deg µ ≥ (ν_f(p^e) + 1 − k) · deg f + k · (deg f + 1) = (ν_f(p^e) + 1) · deg f + k ≥ (ν_f(p^e) + 1) · deg f.

By Proposition 3.3, ν_f(p^e) = p^e ⟨λ⟩_e, and by definition, ⟨α⟩_e ≥ α − 1/p^e for all 0 < α ≤ 1. Then by (7.2.3),

deg µ ≥ (p^e ⟨λ⟩_e + 1) · deg f ≥ (p^e (λ − 1/p^e) + 1) · deg f = p^e · λ · deg f = p^e · ∑ deg x_i.

We may now conclude that µ ∈ m^{[p^e]} by Lemma 7.5.

Now say that fpt(f) = ⟨λ⟩_L and (p^L − 1) · λ ∈ N for some L ≥ 1. By Lemma 7.4, it suffices to show that (f + g)^{p^L · fpt(f)} ∈ m^{[p^L]}. Indeed, p^L · fpt(f) > p^L ⟨fpt(f)⟩_L = ν_f(p^L) (the equality by Proposition 3.3), so that f^{p^L · fpt(f)} ∈ m^{[p^L]}; thus, (f + g)^{p^L · fpt(f)} ≡ f^{p^L · fpt(f)} ≡ 0 mod m^{[p^L]} by Lemma 7.7.
We see that the hypotheses of Proposition 7.8 are often satisfied in Example 7.9 below. We also see that the statement of the proposition is sharp in the sense that there exist f and g satisfying its hypotheses such that fpt(f) = ⟨λ⟩_L for some L ≥ 1, (p^L − 1) · λ ∉ N, and fpt(f + g) > fpt(f).

Example 7.9. Let f = x^{15} + xy^7 ∈ L[x, y], which is homogeneous with deg f = 15 under the grading determined by (deg x, deg y) = (1, 2), and has an isolated singularity at the origin when p ≥ 11. It follows from Theorem 4.4 that

fpt(f) = ⟨(1 + 2)/15⟩_L = ⟨1/5⟩_L,

where 1 ≤ L ≤ ord(p, 5) ≤ 4, or L = ∞ (i.e., fpt(f) = 1/5). Furthermore, as f is a binomial, we can use the algorithm given in [Herb], and recently implemented by Sara Malec, Karl Schwede, and the third author in an upcoming Macaulay2 package, to compute the exact value of fpt(f), and hence, the exact value of L for a fixed p. We list some of these computations in Figure 1.

We see that the hypotheses of Proposition 7.8 are often satisfied in this example, and it follows that fpt(f) = fpt(f + g) for every g ∈ [R]_{≥16} whenever either "∞" appears in the second column or "Yes" appears in the third. When p = 17, however, we have that fpt(f) = ⟨1/5⟩_1 = 3/17, and when g ∈ {x^{14}y, x^{12}y^2, y^8, x^{13}y^2, x^{14}y^2} ⊆ [R]_{≥16}, one may verify that (f + g)^3 ∉ m^{[17]}, so that it follows from Lemma 7.4 that fpt(f + g) > 3/17. For another example of this behavior, it can be computed that when p = 47 and g ∈ {x^{12}y^2, x^{10}y^3, x^8y^4, x^4y^6, x^9y^4, x^{10}y^4}, then fpt(f + g) > fpt(f).
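The assertion that (f + g)^3 ∉ m^{[17]} can be checked by direct expansion over F_17; the sketch below tests only the single choice g = x^{14}y (per the example, the other listed g behave similarly):

```python
P = 17

def mul(F, G):
    """Multiply polynomials over F_17, encoded as {(i, j): coefficient} for x^i y^j."""
    H = {}
    for (a, b), c in F.items():
        for (d, e), k in G.items():
            key = (a + d, b + e)
            H[key] = (H.get(key, 0) + c * k) % P
    return {m: c for m, c in H.items() if c}

f_plus_g = {(15, 0): 1, (1, 7): 1, (14, 1): 1}        # x^15 + x*y^7 + x^14*y
cube = mul(mul(f_plus_g, f_plus_g), f_plus_g)

# (f+g)^3 lies in m^[17] = (x^17, y^17) exactly when every surviving monomial has some
# exponent >= 17; a witness with both exponents <= 16 certifies non-membership.
witnesses = [m for m in cube if m[0] <= 16 and m[1] <= 16]
print(witnesses)        # expect [(16, 15)], the monomial 3*x^16*y^15
```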
We now present the main result of this subsection.
p    L   (p^L − 1) · 1/5 ∈ N?      p    L   (p^L − 1) · 1/5 ∈ N?      p     L   (p^L − 1) · 1/5 ∈ N?
11   1   Yes                       37   4   Yes                       67    1   No
13   1   No                        41   1   Yes                       71    ∞   –
17   1   No                        43   ∞   –                         73    3   No
19   2   Yes                       47   1   No                        79    2   Yes
23   4   Yes                       53   4   Yes                       83    2   No
29   ∞   –                         59   2   Yes                       97    1   No
31   1   Yes                       61   1   Yes                       101   1   Yes

FIGURE 1. Some data on F-pure thresholds of f = x^{15} + xy^7 ∈ L[x, y].
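The third column of Figure 1 depends only on p and L, so it can be rechecked mechanically; in the sketch below the L values are simply transcribed from the figure (they come from the fpt computations cited in Example 7.9 and are not recomputed here):

```python
# (p, L) pairs from Figure 1, omitting the rows with L = infinity.
data = {11: 1, 13: 1, 17: 1, 19: 2, 23: 4, 31: 1, 37: 4, 41: 1, 47: 1,
        53: 4, 59: 2, 61: 1, 67: 1, 73: 3, 79: 2, 83: 2, 97: 1, 101: 1}

def ord_mod(p, m):
    """Multiplicative order of p modulo m (assumes gcd(p, m) = 1)."""
    x, e = p % m, 1
    while x != 1:
        x, e = (x * p) % m, e + 1
    return e

for p, L in sorted(data.items()):
    assert 1 <= L <= ord_mod(p, 5) <= 4            # the constraint recorded in Example 7.9
    divisible = (pow(p, L) - 1) % 5 == 0           # is (p^L - 1) * (1/5) an integer?
    print(p, L, "Yes" if divisible else "No")
```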
Proposition 7.10. Suppose that f ∈ F_p[x_1, · · · , x_n] is homogeneous under some N-grading such that √(Jac(f)) = (x_1, . . . , x_n) and deg f ≥ ∑ deg x_i. Then fpt(f + g) = fpt(f) for each g ∈ [R]_{≥ n·deg f − ∑ deg x_i + 1}.
Proof. Let λ = ∑ deg x_i / deg f. If fpt(f) = λ, then Proposition 7.8 implies that fpt(f) = fpt(f + g). For the remainder of this proof, we will assume that fpt(f) ≠ λ. By Proposition 7.6, it suffices to show that fpt(f) ≥ fpt(f + g). Since fpt(f) = ⟨λ⟩_L − E/p^L for some integers E ≥ 0 and L ≥ 1 by Theorem 3.5, it is enough to show that (f + g)^{p^L fpt(f)} ∈ m^{[p^L]} by Lemma 7.4.

To this end, note that

(7.2.4)    fpt(f) = ⟨λ⟩_L − E/p^L ≥ λ − 1/p^L − E/p^L ≥ λ − (n − 1)/p^L,

where the first inequality follows from Lemma 2.6, and the second from our bounds on E. Suppose, by way of contradiction, that (f + g)^{p^L · fpt(f)} ∉ m^{[p^L]}. As

(f + g)^{p^L · fpt(f)} = f^{p^L · fpt(f)} + ∑_{k=1}^{p^L · fpt(f)} \binom{p^L · fpt(f)}{k} f^{p^L · fpt(f) − k} g^k,

the inequality p^L · fpt(f) > p^L ⟨fpt(f)⟩_L = ν_f(p^L) implies that f^{p^L · fpt(f)} ∈ m^{[p^L]}, and so there must exist 1 ≤ k ≤ p^L · fpt(f) for which f^{p^L · fpt(f) − k} g^k ∉ m^{[p^L]}. We will now show, as in the proof of Lemma 7.7, that this is impossible for degree reasons. Indeed, for such a k, there exists a supporting monomial µ of f^{p^L fpt(f) − k} g^k not contained in m^{[p^L]}, so that deg µ ≤ (p^L − 1) · ∑ deg x_i by Lemma 7.5. However, as g ∈ [R]_{≥ n·deg f − ∑ deg x_i + 1},

(7.2.5)    deg µ ≥ deg f · (p^L · fpt(f) − k) + k · (n · deg f − ∑ deg x_i + 1).

The derivative with respect to k of the right-hand side of (7.2.5) is (n − 1) deg f − ∑ deg x_i + 1, which is always nonnegative by our assumption that deg f ≥ ∑ deg x_i. Thus, the right-hand side of (7.2.5) is increasing with respect to k, and as k ≥ 1,

deg µ ≥ deg f · (p^L · fpt(f) − 1) + n · deg f − ∑ deg x_i + 1
      ≥ deg f · (p^L · λ − n) + n · deg f − ∑ deg x_i + 1
      = p^L · deg f · λ − ∑ deg x_i + 1 = (p^L − 1) · ∑ deg x_i + 1,

where the second inequality above is a consequence of (7.2.4). Thus, we have arrived at a contradiction, and we conclude that (f + g)^{p^L · fpt(f)} ∈ m^{[p^L]}, completing the proof.
REFERENCES

[BL04] Manuel Blickle and Robert Lazarsfeld. An informal introduction to multiplier ideals. In Trends in commutative algebra, volume 51 of Math. Sci. Res. Inst. Publ., pages 87–114. Cambridge Univ. Press, Cambridge, 2004.
[BMS06] Nero Budur, Mircea Mustaţǎ, and Morihiko Saito. Roots of Bernstein-Sato polynomials for monomial ideals: a positive characteristic approach. Math. Res. Lett., 13(1):125–142, 2006.
[BMS08] Manuel Blickle, Mircea Mustaţǎ, and Karen E. Smith. Discreteness and rationality of F-thresholds. Michigan Math. J., 57:43–61, 2008. Special volume in honor of Melvin Hochster.
[BMS09] Manuel Blickle, Mircea Mustaţă, and Karen E. Smith. F-thresholds of hypersurfaces. Trans. Amer. Math. Soc., 361(12):6549–6565, 2009.
[BS] Bhargav Bhatt and Anurag K. Singh. F-thresholds of Calabi-Yau hypersurfaces. arXiv:1307.1171.
[BST] Manuel Blickle, Karl Schwede, and Kevin Tucker. F-singularities via alterations. Preprint, arXiv:1107.3807.
[BSTZ09] Manuel Blickle, Karl Schwede, Shunsuke Takagi, and Wenliang Zhang. Discreteness and rationality of F-jumping numbers on singular varieties. Math. Ann., 347(4):917–949, 2010.
[Elk87] Noam D. Elkies. The existence of infinitely many supersingular primes for every elliptic curve over Q. Invent. Math., 89(3):561–567, 1987.
[EM06] Lawrence Ein and Mircea Mustaţă. Invariants of singularities of pairs. In International Congress of Mathematicians. Vol. II, pages 583–602. Eur. Math. Soc., Zürich, 2006.
[Fed83] Richard Fedder. F-purity and rational singularity. Trans. Amer. Math. Soc., 278(2):461–480, 1983.
[Har98] Nobuo Hara. A characterization of rational singularities in terms of injectivity of Frobenius maps. Amer. J. Math., 120(5):981–996, 1998.
[Har06] Nobuo Hara. F-pure thresholds and F-jumping exponents in dimension two. Math. Res. Lett., 13(5-6):747–760, 2006. With an appendix by Paul Monsky.
[Hera] Daniel J. Hernández. F-invariants of diagonal hypersurfaces. To appear in Proceedings of the AMS.
[Herb] Daniel J. Hernández. F-pure thresholds of binomial hypersurfaces. To appear in Proceedings of the AMS.
[Her12] Daniel J. Hernández. F-purity of hypersurfaces. Math. Res. Lett., 19(02):1–13, 2012.
[HW02] Nobuo Hara and Kei-Ichi Watanabe. F-regular and F-pure rings vs. log terminal and log canonical singularities. J. Algebraic Geom., 11(2):363–392, 2002.
[HY03] Nobuo Hara and Ken-Ichi Yoshida. A generalization of tight closure and multiplier ideals. Trans. Amer. Math. Soc., 355(8):3143–3174 (electronic), 2003.
[Kol97] János Kollár. Singularities of pairs. In Algebraic geometry—Santa Cruz 1995, volume 62 of Proc. Sympos. Pure Math., pages 221–287. Amer. Math. Soc., Providence, RI, 1997.
[MTW05] Mircea Mustaţǎ, Shunsuke Takagi, and Kei-ichi Watanabe. F-thresholds and Bernstein-Sato polynomials. In European Congress of Mathematics, pages 341–364. Eur. Math. Soc., Zürich, 2005.
[MZ] Mircea Mustaţǎ and Wenliang Zhang. Estimates for F-jumping numbers and bounds for Hartshorne-Speiser-Lyubeznik numbers. arXiv:1110.5687.
[Sch07] Karl Schwede. A simple characterization of Du Bois singularities. Compos. Math., 143(4):813–828, 2007.
[Sch08] Karl Schwede. Generalized test ideals, sharp F-purity, and sharp test elements. Math. Res. Lett., 15(6):1251–1261, 2008.
[Sil09] Joseph H. Silverman. The arithmetic of elliptic curves, volume 106 of Graduate Texts in Mathematics. Springer, Dordrecht, second edition, 2009.
[Smi97a] Karen E. Smith. F-rational rings have rational singularities. Amer. J. Math., 119(1):159–180, 1997.
[Smi97b] Karen E. Smith. Vanishing, singularities and effective bounds via prime characteristic local algebra. In Algebraic geometry—Santa Cruz 1995, volume 62 of Proc. Sympos. Pure Math., pages 289–325. Amer. Math. Soc., Providence, RI, 1997.
[Smi00] Karen E. Smith. The multiplier ideal is a universal test ideal. Comm. Algebra, 28(12):5915–5929, 2000. Special issue in honor of Robin Hartshorne.
[STZ12] Karl Schwede, Kevin Tucker, and Wenliang Zhang. Test ideals via a single alteration and discreteness and rationality of F-jumping numbers. Math. Res. Lett., 19(1):191–197, 2012.
[Tak04] Shunsuke Takagi. F-singularities of pairs and inversion of adjunction of arbitrary codimension. Invent. Math., 157(1):123–146, 2004.
[TW04] Shunsuke Takagi and Kei-ichi Watanabe. On F-pure thresholds. J. Algebra, 282(1):278–297, 2004.
DEPARTMENT OF MATHEMATICS, UNIVERSITY OF UTAH, SALT LAKE CITY, UT 84112
Email address: [email protected]

DEPARTMENT OF MATHEMATICS, UNIVERSITY OF VIRGINIA, CHARLOTTESVILLE, VA 22904
Email address: [email protected]

DEPARTMENT OF MATHEMATICS, UNIVERSITY OF MINNESOTA, MINNEAPOLIS, MN 55455
Email address: [email protected]

DEPARTMENT OF MATHEMATICS, UNIVERSITY OF NEBRASKA, LINCOLN, NE 68588
Email address: [email protected]
| 0 |
HOMOMORPHISMS INTO TOTALLY DISCONNECTED, LOCALLY
COMPACT GROUPS WITH DENSE IMAGE
arXiv:1509.00156v2 [math.GR] 3 Jan 2018
COLIN D. REID AND PHILLIP R. WESOLEK
Abstract. Let φ : G → H be a group homomorphism such that H is a totally disconnected locally compact (t.d.l.c.) group and the image of φ is dense. We show that all such
homomorphisms arise as completions of G with respect to uniformities of a particular kind.
Moreover, H is determined up to a compact normal subgroup by the pair (G, φ−1 (L)), where
L is a compact open subgroup of H. These results generalize the well-known properties of
profinite completions to the locally compact setting.
Contents
1. Introduction
2. Preliminaries
3. A general construction for completions
4. Classification and factorization of completions
5. Canonical completions of Hecke pairs
6. Invariant properties of completions
7. Completions compatible with homomorphisms
References
1. Introduction
For G a (topological) group, the profinite completion Ĝ of G is the inverse limit of the
finite (continuous) quotients of G. One can obtain other profinite groups by forming the
inverse limit of a suitable subset of the set of finite quotients of G. Such a profinite group H
is always a quotient of Ĝ, and obviously the composition map G → Ĝ → H has dense image.
On the other hand, given a (topological) group G, one can ask which profinite groups H
admit a (continuous) homomorphism ψ : G → H with dense image. Letting ι : G → Ĝ be the
canonical inclusion, it turns out that there is always a continuous quotient map ψ̃ : Ĝ → H such that ψ = ψ̃ ◦ ι; cf. [9, Lemma 3.2.1]. In this way one obtains a complete description of
all profinite groups H and homomorphisms ψ : G → H such that the image of G is dense in
H: all such groups and morphisms arise exactly by forming inverse limits of suitable sets of
finite quotients of G.
The first named author is an ARC DECRA fellow. Research supported in part by ARC Discovery Project
DP120100996.
The second author was supported by ERC grant #278469.
The aim of the present paper is to extend the well-known and well-established description
of homomorphisms to profinite groups with dense image to homomorphisms into totally disconnected locally compact (t.d.l.c.) groups with dense image. That is to say, we will develop
a theory of t.d.l.c. completions. Using the language of uniformities, we shall see that this
theory generalizes the profinite case.
Our approach generalizes previous work by Schlichting ([10]) and Belyaev ([1, §7]); Schlichting’s completion has also been studied in subsequent work, e.g. [11]. The novel contributions
of this work are to present a unified theory of t.d.l.c. completions and to identify properties
that hold for every completion.
1.1. Statement of results. We shall state our results in the setting of Hausdorff topological
groups. If one prefers, the group G to be completed can be taken to be discrete. The
topological group setting merely allows for finer control over the completions; for example,
given a topological group G, there is often an interesting difference between completions of
G as a topological group and completions of G as a discrete group.
Definition 1.1. For a topological group G, a (t.d.l.c.) completion map is a continuous
homomorphism ψ : G → H with dense image such that H is a t.d.l.c. group. We call H a
t.d.l.c. completion of G.
All t.d.l.c. completions arise as completions with respect to a certain family of uniformities.
Definition 1.2. Let G be a group and let S be a set of open subgroups of G. We say that
S is a G-stable local filter if the following conditions hold:
(a) S is non-empty;
(b) Any two elements of S are commensurate;
(c) Given a finite subset {V1 , . . . , Vn } of S, then ⋂_{i=1}^n V_i ∈ S, and given V ≤ W ≤ G such
that |W : V | is finite, then V ∈ S implies W ∈ S;
(d) Given V ∈ S and g ∈ G, then gV g −1 ∈ S.
Each G-stable local filter S is a basis at 1 for a (not necessarily Hausdorff) group topology
on G, and thus, there is an associated right uniformity Φr (S) on G. The completion with
respect to this uniformity, denoted by ĜS , turns out to be a t.d.l.c. group (Theorem 3.9); we
denote by βG,S : G → ĜS the canonical inclusion homomorphism. All t.d.l.c. completions
moreover arise in this way.
Theorem 1.3 (see Theorem 4.3). If G is a topological group and φ : G → H is a t.d.l.c. completion map, then there is a G-stable local filter S and a unique topological group isomorphism
ψ : ĜS → H such that φ = ψ ◦ β(G,S) .
We next consider completion maps φ : G → H where a specified subgroup U of G is the
preimage of a compact open subgroup of H. In this case, there are two canonical completions
of G such that U is the preimage of a compact open subgroup of the completion: the Belyaev
completion, denoted by ĜU , and the Schlichting completion, denoted by G//U . These
completions are the ‘largest’ and ‘smallest’ completions in the following sense. We denote by
βU : G → ĜU and βG/U : G → G//U the canonical inclusion homomorphisms.
Theorem 1.4 (see Theorem 5.4). Suppose that G is a group and that φ : G → H is a t.d.l.c.
completion map. Letting U ≤ G be the preimage of some compact open subgroup of H, then
there are unique continuous quotient maps ψ1 : ĜU → H and ψ2 : H → G//U with compact
kernels such that the following diagram commutes:
              G
    βU ↙     ↓ φ     ↘ βG/U
    ĜU —ψ1→  H  —ψ2→  G//U
We conclude by identifying several properties that are “independent” of the completion.
Doing so requires identifying a notion of size. Two subgroups U and V of a group G are
commensurate if U ∩ V has finite index in both U and V . A subgroup W ≤ G is commensurated in G if W is commensurate with gW g −1 for all g ∈ G. We write [U ] for the set
of closed subgroups commensurate with U . We call the collection [U ] a size if some (equivalently, any) W ∈ [U ] is commensurated; for [U ] a size, observe that [U ] = [gU g −1 ] for all
g ∈ G.
A compact open subgroup R of a t.d.l.c. group H is commensurated in H. Given a completion map ψ : G → H, the preimage U := ψ −1 (R) is an open subgroup of G that is
commensurated in G. We thus obtain a size [U ], and the size [U ] does not depend on the
choice of R. We say that [U ] is the size of ψ.
Theorem 1.5 (see §6). Let G be a topological group. For each of the following properties,
either every completion of G of size α has the property, or every completion of G of size α
fails to have the property.
(1) Being σ-compact.
(2) Being compactly generated.
(3) Being amenable.
(4) Being uniscalar.
(5) Having a quotient isomorphic to N where N is any specified t.d.l.c. group that has no
non-trivial compact normal subgroups.
Theorem 1.6 (See §6). Let G be a topological group. For each of the following properties,
either every second countable completion of G of size α has the property, or every second
countable completion of G of size α fails to have the property.
(1) Being elementary.
(2) Being elementary with transfinite rank β.
Acknowledgments 1.7. The first named author would like to thank Aleksander Iwanow for
pointing out the article [1] in response to an earlier preprint.
2. Preliminaries
A quotient of a topological group must have closed kernel (such that the resulting quotient
topology is Hausdorff). Topological group isomorphism is denoted by ≃. We use “t.d.”, “l.c.”,
and “s.c.” for “totally disconnected”, “locally compact”, and “second countable”, respectively.
For a topological group G, the set U (G) is defined to be the collection of compact open
subgroups of G.
Definition 2.1. A Hecke pair is a pair of groups (G, U ) where G is a topological group and
U is an open subgroup of G that is commensurated.
2.1. Uniformities. Our approach to completions is via uniform spaces; our discussion of
uniform spaces follows [2].
Definition 2.2. Let X be a set. A uniformity is a set Φ of binary relations on X, called
entourages, with the following properties:
(a) Each A ∈ Φ is reflexive, that is, {(x, x) | x ∈ X} ⊆ A.
(b) For all A, B ∈ Φ, there exists C ∈ Φ such that C ⊆ A ∩ B.
(c) For all A ∈ Φ, there exists B ∈ Φ such that
B ◦ B := {(x, z) | ∃y ∈ X : {(x, y), (y, z)} ⊆ B}
is a subset of A.
(d) For all A ∈ Φ, there exists B ∈ Φ such that the set {(y, x) | (x, y) ∈ B} is a subset of A.
A set with a uniformity is called a uniform space. Two uniformities Φ, Φ′ on a set X are
equivalent if every A ∈ Φ contains some A′ ∈ Φ′ and vice versa.
Definition 2.3. Let (X, Φ) be a uniform space. A filter f of subsets of X is called a minimal
Cauchy filter if f is a ⊆-least filter such that for all U ∈ Φ there is A ∈ f with A × A ⊆ U .
A basis at 1 of a topological group gives rise to two canonical uniformities:
Definition 2.4. Suppose that G is a topological group (not necessarily Hausdorff) and let B
be a basis at 1. The left B-uniformity Φl (B) consists of entourages of the form
Ul := {(x, y) | x−1 y ∈ U }
(U ∈ B).
The right B-uniformity Φr (B) consists of entourages of the form
Ur := {(x, y) | xy −1 ∈ U }
(U ∈ B).
For both the left and right uniformities, the uniformity itself depends on the choice of basis,
but the equivalence class of the uniformity does not. We will thus omit references to the basis
where it is not significant. (For definiteness, one can take B to be the set of all open identity
neighborhoods, but it is often convenient to take another basis.) We will also use the definite
article when referring to the left or the right uniformity.
Definition 2.5 ([2, II.3.7]). The completion of a uniform space (X, Φ) is defined to be
X̂ := {f | f is a minimal Cauchy filter}.
along with the uniformity Φ̂ given by entourages of the form
Û := {(f, g) | ∃A ∈ f ∩ g with A × A ⊆ U }
where U ∈ Φ.
There is a canonical, continuous completion map β : X → X̂ which has dense image,
defined by x 7→ fx where fx is the minimal Cauchy filter containing the neighborhoods of x.
Note that as a topological space, X̂ is determined by the equivalence class of Φ.
In particular, if G is a topological group equipped with the right uniformity, we let β :
G → Ĝ be the completion map associated to this uniformity. Since β has dense image, there
is at most one way to equip Ĝ with a continuous group multiplication and inverse such that
β is a homomorphism; if these exist, we can say that Ĝ is a topological group in a canonical
sense. The completion Ĝ admits such a group multiplication exactly when the left and right
uniformities are equivalent; equivalently, the inverse function preserves the set of minimal
Φr -Cauchy filters.
Theorem 2.6 ([2, III.3.4 Theorem 1]). Suppose that G is a topological group and that Φr is
the right uniformity. The completion Ĝ is a topological group if and only if the inverse map
carries minimal Φr -Cauchy filters to minimal Φr -Cauchy filters.
Theorem 2.7 ([2, III.3.4 Theorem 1]). Suppose that G is a topological group, Φr is the right
uniformity, and the completion Ĝ is a topological group. Then the following hold:
(1) The map β : G → Ĝ is a continuous homomorphism with dense image.
(2) Multiplication on Ĝ is defined as follows: given f, f ′ ∈ Ĝ, then f f ′ is the minimal Cauchy
filter of subsets of G generated by sets AB ⊂ G where A ∈ f and B ∈ f ′ .
3. A general construction for completions
We describe a procedure for producing t.d.l.c. completions. The main idea in the construction is to form uniform completions of G with respect to a family of group topologies that
are in general coarser than the natural topology of G.
Definition 3.1. Let G be a group and let S be a set of open subgroups of G. We say that
S is a G-stable local filter if the following conditions hold:
(a) S is non-empty;
(b) Any two elements of S are commensurate;
(c) S is a filter in its commensurability class: that is, given a finite subset {V1 , . . . , Vn } of S, then ⋂_{i=1}^n V_i ∈ S, and given V ≤ W ≤ G such that |W : V | is finite, then V ∈ S implies
W ∈ S;
(d) S is stable under conjugation in G: that is, given V ∈ S and g ∈ G, then gV g −1 ∈ S.
We say S is a G-stable local filter of size [U ] if in addition S ⊆ [U ].
Remark 3.2. If S is a G-stable local filter, then S is a filter of [V ] for any V ∈ S. Furthermore, [V ] must be stable under the conjugation action of G.
Lemma 3.3. Let G be a group and let S be a G-stable local filter. Then S is a basis at 1 for
a (not necessarily Hausdorff ) group topology on G.
Proof. Let T be the topology generated by all right translates of elements of S. Since S is
invariant under conjugation in G, it is clear that every left coset of an element of S is a union
of right cosets of elements of S and vice versa; hence T is invariant under inverses. We see
that the multiplication map m : (g, h) 7→ gh is continuous with respect to T by observing that
given U ∈ S and g, h ∈ G, then m−1 (U gh) contains the open neighborhood (U g) × (g −1 U gh)
of (g, h).
We remark that the largest quotient on which S induces a Hausdorff group topology is G/K, where K is the normal subgroup ⋂_{U∈S} U.
Equipping G with the group topology induced from S, we can form the left and right
uniformities Φl (S) and Φr (S).
Definition 3.4. Let S be a G-stable local filter. We define a right S-Cauchy filter f in
G to be a minimal Cauchy filter with respect to the uniformity Φr (S). In other words, f is a
filter of subsets of G with the following properties:
(a) For every V ∈ S, there is exactly one right coset V g of V in G such that V g ∈ f ;
(b) Every element of f contains a right coset of some element of S.
Left S-Cauchy filters are defined similarly with respect to the left uniformity. Notice that
for each g ∈ G, there is a corresponding principal right S-Cauchy filter fg generated by
{V g | V ∈ S}. Where the choice of S is clear, we will write ‘Cauchy’ to mean ‘S-Cauchy’.
The next series of results will establish that the hypotheses of Theorem 2.6 are satisfied,
so completing G with respect to Φr (S) produces a topological group.
Lemma 3.5. Let G be a group, N be a commensurated subgroup of G, and g ∈ G. Then
there are h1 , . . . , hn ∈ G such that for
T all h ∈ G, the set N g ∩ hN is a (possibly empty) union
of finitely many right cosets of N ∩ ni=1 hi N h−1
i .
Proof. Suppose N g ∩ hN ≠ ∅ and put R := N ∩ g^{−1}N g. For all h ∈ G, we have (N g ∩ hN )R =
N g ∩ hN , so N g ∩ hN is a union of left cosets of R in G. The left cosets of R in G that are
subsets of N g are exactly those of the form gtR for t ∈ g−1 N g; indeed,
xR ⊆ N g ⇔ g −1 xR ⊆ g−1 N g ⇔ g −1 x ∈ g−1 N g.
Since R has finite index in g−1 N g, we deduce that only finitely many left cosets of the form
gtR exist. It follows that the set {N g ∩ hN | h ∈ G and N g ∩ hN 6= ∅} is finite.
Let h1 , . . . , hn ∈ G satisfy
{N g ∩ hN | h ∈ G and N g ∩ hN 6= ∅} = {N g ∩ h1 N, . . . , N g ∩ hn N }.
T
Setting M := N ∩ ni=1 hi N h−1
i , we see that M (N g ∩ hN ) = N g ∩ hN for all h ∈ G with
N g ∩ hN 6= ∅. Therefore, N g ∩ hN is a union of right cosets of M . That this union is finite
follows as in the previous paragraph.
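Lemma 3.5 can also be verified by brute force in a small finite example (where every subgroup is trivially commensurated); the sketch below, with G = S_4, N a point stabilizer and g a 4-cycle chosen purely for illustration, checks the conclusion directly:

```python
from itertools import permutations

def compose(s, t):                     # (s∘t)(i) = s(t(i)); permutations as tuples
    return tuple(s[i] for i in t)

def conj(g, H):                        # g H g^{-1}
    gi = tuple(sorted(range(len(g)), key=lambda i: g[i]))    # the inverse permutation
    return {compose(compose(g, h), gi) for h in H}

G = list(permutations(range(4)))                   # S_4
N = {s for s in G if s[3] == 3}                    # stabilizer of the point 3
g = (1, 2, 3, 0)                                   # a 4-cycle

right = lambda H, x: frozenset(compose(h, x) for h in H)     # the right coset H x
Ng = right(N, g)

# The finitely many distinct nonempty sets N g ∩ h N as h ranges over G.
pieces = {frozenset(Ng & frozenset(compose(h, n) for n in N)) for h in G}
pieces.discard(frozenset())

# Any element of N g ∩ h N represents the left coset h N, so it can serve as an h_i.
M = set(N)
for piece in pieces:
    M &= conj(next(iter(piece)), N)

# Check the conclusion of Lemma 3.5: each piece is a union of right cosets of M.
for piece in pieces:
    assert piece == frozenset().union(*(right(M, x) for x in piece))
print(len(pieces), "nonempty intersections, |M| =", len(M))
```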
Lemma 3.6. Let G be a topological group, S be a G-stable local filter, and f be a set of
subsets of G. Then f is a right S-Cauchy filter in G if and only if f is a left S-Cauchy filter
in G.
Proof. By symmetry, it suffices to assume f is a right Cauchy filter and prove that f is a left
Cauchy filter.
Fixing V ∈ S, the filter f contains some right coset V g of V . Applying Lemma 3.5 to V
and g, we produce a finite intersection W of conjugates of V such that W ≤ V and such that
for each h ∈ G, the set V g ∩ hV is a union of right cosets of W . Since S is closed under
conjugation and finite intersection, we additionally have W ∈ S.
Since f is right Cauchy, there is a unique right coset W k of W contained in f , and since ∅ ∉ f , it must be the case that W k ⊆ V g. Observing that V g = ⋃_{h∈G} (V g ∩ hV ), there is some h ∈ G such that W k intersects V g ∩ hV . Lemma 3.5 then ensures that indeed W k ⊆ V g ∩ hV , hence hV ∈ f . We conclude that f contains a left coset of V for every V ∈ S. Since f is a filter and ∅ ∉ f , in fact f contains exactly one left coset of V , proving (a) of the definition of a left S-Cauchy filter.
Given any element A ∈ f , then A contains V g for some V ∈ S and g ∈ G; in particular, A
contains the left coset g(g−1 V g) of g−1 V g ∈ S, proving (b). We thus deduce that f is also a
left S-Cauchy filter.
Corollary 3.7. For G a topological group and S a G-stable local filter, the set of right S-Cauchy filters in G is preserved by the map on subsets induced by taking the inverse.
Proof. The map x 7→ x−1 induces a correspondence between the set of left Cauchy filters and
the set of right Cauchy filters. By Lemma 3.6, these two sets coincide, so the set of right
Cauchy filters is invariant under taking inverses.
We may now apply Theorem 2.6 to produce a completion of G with respect to Φr (S),
denoted ĜS . Specifically,
• The elements of ĜS are the (right) S-Cauchy filters in G.
• The set ĜS is equipped with a uniformity with entourages of the form
EU := {(f, f ′ ) | ∃g ∈ G : U g ∈ f ∩ f ′ }
(U ∈ S)
and topology generated by this uniformity.
• The map given by
β(G,S) : G → ĜS ; g 7→ fg
is continuous, since the topology induced by Φr (S) is coarser than the topology on G. The image is dense, and the kernel is ⋂ S.
• There is a unique continuous group multiplication on ĜS such that β(G,S) is a homomorphism. In fact, we can define multiplication on ĜS as follows: given f, f ′ ∈ ĜS ,
then f f ′ is the minimal Cauchy filter of subsets of G generated by sets AB ⊂ G where
A ∈ f and B ∈ f ′ .
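To see the construction in the simplest nontrivial case, take G = Z (discrete) and S = {p^k Z : k ≥ 0}, which is a G-stable local filter; the completion ĜS is then the group Z_p of p-adic integers, a minimal Cauchy filter is pinned down by a coherent choice of one coset of each p^k Z, and the product described in the last item above is computed coset-wise. The following toy sketch (truncated at finitely many levels, so only an approximation of the genuine inverse limit) is illustrative only and not part of the paper:

```python
P, DEPTH = 5, 8   # prime and number of levels p^1, ..., p^DEPTH kept in this toy model

class CauchyFilter:
    """A minimal Cauchy filter on Z for S = {p^k Z}, truncated at DEPTH levels.

    residues[k] is the residue r with  p^(k+1) Z + r  belonging to the filter;
    coherence means residues[k+1] is congruent to residues[k] modulo p^(k+1)."""

    def __init__(self, residues):
        for k in range(DEPTH - 1):
            assert residues[k + 1] % P ** (k + 1) == residues[k]
        self.residues = residues

    @classmethod
    def principal(cls, g):
        # beta_(G,S)(g): the filter generated by the cosets p^k Z + g.
        return cls([g % P ** (k + 1) for k in range(DEPTH)])

    def __mul__(self, other):
        # The group operation of Z is +, so the "product" filter is generated by the sets
        # A + B with A, B cosets of the two filters; (p^k Z + a) + (p^k Z + b) = p^k Z + (a + b).
        return CauchyFilter([(a + b) % P ** (k + 1)
                             for k, (a, b) in enumerate(zip(self.residues, other.residues))])

x = CauchyFilter.principal(7)
y = CauchyFilter.principal(-3)
print((x * y).residues)                       # the filter beta(7) * beta(-3) = beta(4)
print(CauchyFilter.principal(4).residues)     # identical: beta is a homomorphism
```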
Definition 3.8. For G a topological group and S a G-stable local filter, we call ĜS the
completion of G with respect to S.
We now establish a correspondence between t.d.l.c. completions and completions with respect to a G-stable local filter.
Theorem 3.9. If G is a group and S is a G-stable local filter, then the following hold:
(1) The topological group ĜS is a t.d.l.c. completion of G.
(2) There is a one-to-one correspondence between U (ĜS ) and S given as follows: For a compact open subgroup R of ĜS , the subgroup β_{(G,S)}^{−1}(R) is the corresponding element of S, and for V ∈ S, the closure of β_{(G,S)}(V ) in ĜS is the corresponding compact open subgroup of ĜS .
(3) For V ∈ S, the closure of β_{(G,S)}(V ) in ĜS is naturally isomorphic as a topological group to the profinite completion of V with respect to the quotients V /N such that N ∈ S and N ⊴ V .
Proof. For V ∈ S, set BV := {f ∈ ĜS | V ∈ f }. In view of the definition of the topology of
ĜS , the collection {BV | V ∈ S} is a base of identity neighborhoods in ĜS . Moreover, each
BV is closed under multiplication and inverse, so BV is a subgroup of ĜS . Therefore, ĜS has
a base of identity neighborhoods consisting of open subgroups.
Fix U ∈ S and write β := β(G,S) . The image β(U ) is a dense subgroup of BU , so in fact BU is the closure of β(U ). Define

N_U := { ⋂_{u∈U} uV u^{−1} | U ≥ V ∈ S }.

The set N_U is precisely the set of elements of S that are normal in U , and these subgroups necessarily have finite index in U . Form ÛS , the profinite completion of U with respect to the finite quotients {U/N | N ∈ N_U }. Representing ÛS as a closed subgroup of ∏_{N∈N_U} U/N in the usual way, we define θ : BU → ÛS by setting θ(f ) := (N g)_{N∈N_U}, where N g is the unique coset of N in f . One verifies that θ is an isomorphism of topological groups, proving (3).
The set {BV | V ∈ S} is thus a basis at 1 of compact open subgroups, so ĜS is a t.d.l.c.
group. Since β has a dense image, ĜS is a t.d.l.c. completion of G, verifying (1).
Finally, we have a map η : S → U (ĜS ) given by η : V 7→ BV . We observe from the
definition of BV that in fact V = β −1 (BV ), so η is injective, with inverse on U (ĜS ) ∩ η(S)
given by R 7→ β −1 (R). Let R be a compact open subgroup of ĜS . Since {BV | V ∈ S} is a
base of identity neighborhoods in ĜS , there is some V ∈ S such that BV ≤ R. The subgroup
BV has finite index in R, since R is compact and BV is open, so β −1 (BV ) = V has finite
index in β −1 (R). By the definition of an S-stable local filter, we therefore have β −1 (R) ∈ S.
We conclude that η is a bijection of the form required for (2).
For V ∈ S, we will often abuse notation slightly and say that the profinite completion V̂S equals the compact open subgroup of ĜS given by the closure of β(G,S) (V ). We will also write β in place of β(G,S)
when G and S are clear from the context.
4. Classification and factorization of completions
Theorem 3.9 gives a method for producing t.d.l.c. completions of a group G. In fact, just
as in the profinite case, we see that all t.d.l.c. completions of G arise in this way. We first
prove a criterion for whether a homomorphism from G to a t.d.l.c. group factors through ĜS .
Proposition 4.1. Let φ : G → H be a continuous homomorphism such that H is a t.d.l.c.
group and let S be a G-stable local filter. Then the following are equivalent:
(1) There is a continuous homomorphism ψ : ĜS → H such that φ = ψ ◦ β(G,S) .
(2) Every open subgroup of H contains φ(V ) for some V ∈ S.
Moreover, if (1) holds, then ψ is unique.
Proof. Suppose that (1) holds and let U be an open subgroup of H. The preimage ψ −1 (U )
is an open subgroup of ĜS , so R ≤ ψ −1 (U ) for some compact open subgroup R of ĜS .
Theorem 3.9 ensures that V := β −1 (R) ∈ S, and V is such that φ(V ) = ψ(β(V )) ≤ U . We
conclude (2). Additionally, since β(G) is dense in ĜS , the equation φ = ψ ◦ β(G,S) determines the restriction of ψ to β(G) and hence determines ψ uniquely as a continuous map.
Conversely, suppose that every open subgroup of H contains φ(V ) for some V ∈ S. For f ∈ ĜS , define

f̂ := ⋂ { cl(φ(V g)) | g ∈ G, V ∈ S, and V g ∈ f },

where cl denotes closure in H. Since H is a t.d.l.c. group, the open subgroups of H form a basis of identity neighborhoods. By (2), it follows that the set {φ(V ) | V ∈ S} contains arbitrarily small subgroups of H, so its intersection 1̂ is the trivial group.

For general f ∈ ĜS , |f̂ | ≤ 1 since f̂ is an intersection of cosets of arbitrarily small subgroups of H. Fix R ∈ U (H) and Q ∈ S such that φ(Q) ≤ R, so in particular cl(φ(Q)) is compact. Fix h ∈ G such that Qh ∈ f . Letting g ∈ G and V ∈ S be such that V g ∈ f , there is some k ∈ G such that (V ∩ Q)k ∈ f . The collection f is a proper filter, so we see that (V ∩ Q)k ⊆ V g ∩ Qh. In particular, φ((V ∩ Q)k) is contained in φ(V g). We can thus write f̂ as ⋂_{C∈C} C where

C = { cl(φ(V g)) | g ∈ G, V ∈ S, V g ⊆ Qh and V g ∈ f }.

For any Q1 g1 , . . . , Qn gn ∈ f such that Qi gi ⊆ Qh, the intersection ⋂_{i=1}^n Qi gi is non-empty as it is an element of f . Thus C is a family of closed subsets of the compact set cl(φ(Qh)) with the finite intersection property. We conclude that f̂ is nonempty, so |f̂ | = 1. We now define a function ψ : ĜS → H by setting ψ(f ) to be the unique element of f̂ . One verifies that ψ is a homomorphism satisfying φ = ψ ◦ β.
To see that ψ is continuous, fix (Vα )α∈I a basis at 1 of compact open subgroups for H and for each α ∈ I, choose Wα ∈ S such that φ(Wα ) ≤ Vα . As in the previous paragraph, we observe that ⋂_{α∈I} φ(Wα ) = {1}. Consider a convergent net fδ → f in ĜS . For each subgroup Wα , there is ηα ∈ I such that fγ f_{γ′}^{−1} contains Wα for all γ, γ′ ≥ ηα . We conclude that ψ(fγ f_{γ′}^{−1}) ∈ φ(Wα ) ≤ Vα for all such γ, γ′. In other words, ψ(fδ ) is a Cauchy net in H with respect to the right uniformity of H. Since H is locally compact, it is complete with respect to this uniformity, and ψ(fδ ) converges. It now follows that ψ(f ) = lim_{δ∈I} ψ(fδ ), so ψ is continuous as claimed, completing the proof that (2) implies (1).
As a corollary, we note the case of Proposition 4.1 when H is itself the completion with
respect to a G-stable local filter.
Corollary 4.2. Let G be a group with G-stable local filters S1 and S2 . Then the following
are equivalent:
(1) There is a continuous homomorphism ψ : ĜS1 → ĜS2 such that β(G,S2 ) = ψ ◦ β(G,S1 ) .
(2) For all V2 ∈ S2 , there exists V1 ∈ S1 such that V1 ≤ V2 .
Proof. Set β1 := β(G,S1 ) and β2 := β(G,S2 ) .
Suppose that (1) holds and take V2 ∈ S2 . There exists L ∈ U (ĜS2 ) such that V2 = β2−1 (L).
Applying Proposition 4.1 with the filter S1 gives V1 ∈ S1 such that β2 (V1 ) ≤ L. We deduce
that V1 ≤ V2 , verifying (2).
Conversely, suppose that (2) holds. For every open subgroup U of ĜS2 , we have β2 (V2 ) ≤ U
for some V2 ∈ S2 by Theorem 3.9(2). By hypothesis, there exists V1 ∈ S1 such that V1 ≤ V2 ,
so β2 (V1 ) ≤ U . The conclusion (1) now follows by Proposition 4.1.
We will now demonstrate that all t.d.l.c. completions arise as completions with respect to
G-stable local filters.
Theorem 4.3. If G is a group and H is a t.d.l.c. completion of G via φ : G → H, then the
set of all preimages of compact open subgroups of H is a G-stable local filter S, and moreover,
there is a unique topological group isomorphism ψ : ĜS → H such that φ = ψ ◦ β(G,S) .
Proof. Setting S := {φ−1 (M ) | M ∈ U (H)}, one verifies that S is a G-stable local filter.
Define ψ : ĜS → H as in Proposition 4.1, so ψ is the unique continuous homomorphism such
that φ = ψ ◦ β. It is easily verified that ψ is injective.
The group ĜS has a base of identity neighborhoods of the form {V̂S | V ∈ S}. Given
V ∈ S, say that φ−1 (P ) = V for P ∈ U (H). The image φ(V ) is dense in P , and the image
β(V ) is dense in V̂S . Since φ = ψ ◦ β, we infer that ψ(V̂S ) is also a dense subgroup of P . The
map ψ is continuous and V̂S is compact, so in fact ψ(V̂S ) = P . Hence, ψ is an open map.
Since the image of ψ is dense, it follows that ψ is surjective and thereby bijective.
The map ψ is an open continuous bijective homomorphism. We conclude that ψ is an
isomorphism of topological groups, completing the proof.
Remark 4.4. Theorem 4.3 shows that, up to isomorphism, all t.d.l.c. completions of a group
G have the form ĜS for S some G-stable local filter. This applies to the particular case where
the t.d.l.c. completion is a profinite group. For instance, the profinite completion of a discrete
group G is ĜS where S is the set of all subgroups of finite index in G.
We conclude this section by characterizing properties of the kernel of the homomorphism
ψ obtained in Corollary 4.2 in terms of the G-stable local filters. In light of Theorem 4.3,
these characterizations in fact apply to any continuous homomorphism ψ : H1 → H2 where
H1 and H2 are completions of G, via taking Si to be the set of preimages of the compact
open subgroups of Hi .
Proposition 4.5. Let G be a group with G-stable local filters S1 and S2 such that for all
V2 ∈ S2 there is V1 ∈ S1 with V1 ≤ V2 . Let ψ : ĜS1 → ĜS2 be the unique continuous
homomorphism such that β(G,S2 ) = ψ ◦ β(G,S1 ) , as given by Corollary 4.2. Then the following
holds:
(1) The kernel of ψ is compact if and only if S2 ⊆ S1 . If S2 ⊆ S1 , then ψ is also a quotient
map.
(2) The kernel of ψ is discrete if and only if there exists V1 ∈ S1 such that for all W1 ∈ S1
there is W2 ∈ S2 with V1 ∩ W2 ≤ W1 .
(3) The kernel of ψ is trivial if and only if for all V1 ∈ S1 ,

V1 = ⋂ {V2 ∈ S2 | V1 ≤ V2 }.
Proof. Set β1 := β(G,S1 ) and β2 := β(G,S2 ) .
Proof of (1). Suppose that ker ψ is compact. Taking V ∈ S2 , the group ψ^{−1}(V̂S2 ) is a compact open subgroup of ĜS1 whose preimage under β1 is V .
Hence, S2 ⊆ S1 .
Conversely, suppose that S2 ⊆ S1 and let V ∈ S2 . Given the construction of ψ in the proof
of Proposition 4.1, we see that ker ψ ≤ V̂S1 ; in particular, ker ψ is compact. We see from
Theorem 3.9 that ψ(V̂S1 ) = β2 (V ) = V̂S2 , so the image of ψ is open. Since the image of ψ is
also dense, it follows that ψ is surjective. We conclude that ψ is a quotient map with compact
kernel, as required.
Proof of (2). Suppose that ker ψ is discrete, so there is V1 ∈ S1 such that ker ψ ∩ (Vˆ1 )S1 =
{1}. The map ψ restricts to a topological group isomorphism from (Vˆ1 )S1 to its image. Take
W1 ∈ S1 and set Y := V1 ∩ W1 . The subgroup (Ŷ )S1 is open in (Vˆ1 )S1 , so ψ((Ŷ )S1 ) is open
in ψ((Vˆ1 )S1 ). There thus exists W2 ∈ S2 such that ψ((Vˆ1 )S1 ) ∩ (Ŵ2 )S2 ≤ ψ((Ŷ )S1 ). Since ψ
is injective on (Vˆ1 )S1 , it follows that
(Vˆ1 )S1 ∩ ψ −1 ((Ŵ2 )S2 ) ≤ (Ŷ )S1 .
We deduce that V1 ∩ W2 ≤ Y ≤ W1 since ψ ◦ β1 = β2 .
Conversely, suppose that V1 ∈ S1 is such that for all W1 ∈ S1 there is W2 ∈ S2 with
V1 ∩ W2 ≤ W1 . We claim that (Vˆ1 )S1 intersects ker ψ trivially, which will demonstrate that
ker ψ is discrete. Suppose that f ∈ (Vˆ1 )S1 ∩ ker ψ and suppose for contradiction that f is
non-trivial. There is then some W1 ∈ S1 such that f contains a nontrivial coset W1 g of W1 ;
we may assume that W1 ≤ V1 . Since f ∈ (Vˆ1 )S1 , we additionally have g ∈ V1 .
By the hypotheses, there is W2 ∈ S2 such that V1 ∩ W2 ≤ W1 , and there is Y ∈ S1
such that Y ≤ V1 ∩ W2 . Letting g′ ∈ G be such that Y g′ ∈ f , it must be the case that
Y g′ ⊆ W2 since ψ(f ) is trivial. On the other hand, Y g ′ ⊆ W1 g, so g ′ ∈ V1 . We now see that
g′ ∈ V1 ∩ W2 ≤ W1 and that W1 g = W1 g′ = W1 . Thus W1 g is the trivial coset of W1 , a
contradiction. We conclude that (Vˆ1 )S1 ∩ ker ψ = {1} as claimed.
Proof of (3). Suppose that ψ is injective. Let V1 ∈ S1 and set R := {V2 ∈ S2 | V1 ≤ V2 }. The subgroup (V̂1 )S1 is a compact open subgroup of ĜS1 , so K := ψ((V̂1 )S1 ) is a compact, hence profinite, subgroup of ĜS2 . It follows that K is the intersection of all compact open subgroups of ĜS2 that contain K.

All compact open subgroups of ĜS2 are of the form ŴS2 for some W ∈ S2 . For ŴS2 ≥ K compact and open, we thus have β_2^{−1}(K) ≤ W , and since β2 = ψ ◦ β1 , we deduce further that V1 = β_1^{−1}((V̂1 )S1 ) ≤ W . Hence, K = ⋂{ŴS2 | W ∈ R}.

For g ∈ ⋂ R, it is the case that β2 (g) ∈ ŴS2 for all W ∈ R, so β2 (g) ∈ K. In other words, ψ ◦ β1 (g) ∈ ψ((V̂1 )S1 ). As ψ is injective, β1 (g) ∈ (V̂1 )S1 , and thus, g ∈ β_1^{−1}((V̂1 )S1 ) = V1 , showing that ⋂ R ⊆ V1 . On the other hand, it is clear from the definition of R that V1 ⊆ ⋂ R, so equality holds as required.

Conversely, suppose that for all V1 ∈ S1 , it is the case that V1 = ⋂{V2 ∈ S2 | V1 ≤ V2 }. Fix f ∈ ker ψ. For each V1 ∈ S1 , we may write f = β1 (g)u for g ∈ G and u ∈ (V̂1 )S1 , since β1 (G) is dense in ĜS1 , and it follows that β2 (g) ∈ ψ((V̂1 )S1 ). We now infer that g ∈ V2 for all V2 ∈ S2 such that V2 ≥ V1 ; by our hypothesis, it follows that g ∈ V1 . Recalling that f = β1 (g)u, it follows that f ∈ (V̂1 )S1 . The subgroups (V̂1 )S1 form a basis at 1, so indeed f = 1. We conclude that ψ is injective as required.
5. Canonical completions of Hecke pairs
We now consider the t.d.l.c. completions H of G such that a specified subgroup U of G is
the preimage of a compact open subgroup of the completion. The pair (G, U ) is then a Hecke
pair.
Let us make several definitions to organize our discussion of t.d.l.c. completions. Recall
that all preimages of compact open subgroups of a given range group H are commensurate,
so the following definition does not depend on the choice of U .
Definition 5.1. Given a t.d.l.c. completion map φ : G → H, the size of φ is defined to be
[U ], where U is the preimage of a compact open subgroup of H. We also say that [U ] is the
size of H when the choice of completion map is not important, or can be inferred from the
context.
A completion map for a Hecke pair (G, U ) is a continuous homomorphism φ : G → H
with dense image such that H is a t.d.l.c. group and U is the preimage of a compact open
subgroup of H. We say that H is a completion of (G, U ). When H is also second countable,
we call H a second countable completion.
Given a Hecke pair (G, U ), there are two canonical G-stable local filters containing U ,
defined as follows: The Belyaev filter is [U ]. The Schlichting filter SG/U for (G, U ) is the
filter of [U ] generated by the conjugacy class of U – that is,
S_{G/U} := { V ∈ [U ] | ∃ g1 , . . . , gn ∈ G such that ⋂_{i=1}^n g_i U g_i^{−1} ≤ V }.
Definition 5.2. The Belyaev completion for (G, U ), denoted by ĜU , is defined to be Ĝ[U ] .
The canonical inclusion map β(G,[U ]) is denoted by βU . The Schlichting completion for
(G, U ), denoted by G//U , is defined to be ĜSG/U . The canonical inclusion map β(G,SG/U ) is
denoted by βG/U .
Remark 5.3. We stress that the commensurability class [U ] does not determine the Schlichting filter SG/U ; indeed, the only situation when there is only one Schlichting filter of a given
size is when the Schlichting filter is equal to the Belyaev filter of that size.
Given any G-stable local filter S of size [U ] that contains U , we have SG/U ⊆ S ⊆ [U ].
Amongst G-stable local filters that contain U , the Schlichting filter is minimal whilst [U ] is
maximal. The Belyaev and Schlichting completions are thus maximal and minimal completions of a Hecke pair in the following strong sense:
Theorem 5.4. Suppose that G is a group and that φ : G → H is a completion map. Letting
U ≤ G be the preimage of some compact open subgroup of H, then (G, U ) is a Hecke pair, H
is a completion of (G, U ), and there are unique continuous quotient maps ψ1 : ĜU → H and
ψ2 : H → G//U with compact kernels such that the following diagram commutes:
              G
    βU ↙     ↓ φ     ↘ βG/U
    ĜU —ψ1→  H  —ψ2→  G//U
Proof. Fix L a compact open subgroup of H and set U := φ−1 (L). The subgroup U is an
open commensurated subgroup of G, so (G, U ) is a Hecke pair. Since φ has dense image, we
conclude that H is a completion of (G, U ). Set S := {φ−1 (V ) | V ∈ U (H)}. By Theorem 4.3,
S is a G-stable local filter, and there is a unique topological group isomorphism ψ : ĜS → H
such that φ = ψ ◦ β(G,S) . Observe that U ∈ S by Theorem 3.9(2).
Since S contains U , Proposition 4.5 ensures there are unique continuous quotient maps π1
and π2 with compact kernels such that the following diagram commutes:
G
❏❏❏
✈✈
❏❏❏βG/U
✈✈
✈
❏❏❏
β
✈
(G,S)
❏❏
✈✈
✈
%
z
✈
/ ĜS
/ G//U.
βU
ĜU
π1
π2
It follows that ψ1 := ψ ◦ π1 and ψ2 := π2 ◦ ψ −1 make the desired diagram commute and both
are continuous quotient maps with compact kernels. Uniqueness follows since ψ, π1 , and π2
are unique.
Theorem 5.4 shows all possible completions of a Hecke pair (G, U ) differ only by a compact
normal subgroup. The locally compact, non-compact structure of a t.d.l.c. completion thus
depends only on the Hecke pair; contrast this with the many different profinite completions
a group can admit. We give precise statements illustrating this phenomenon in Section 6.
We conclude this section by making two further observations. First, the Schlichting completion has a natural description. Suppose (G, U ) is a Hecke pair and let σ(G,U ) : G → Sym(G/U )
be the permutation representation given by left multiplication. We consider Sym(G/U ) to be
equipped with the topology of pointwise convergence.
Proposition 5.5. For (G, U ) a Hecke pair, there is a unique topological group isomorphism ψ from G//U to the closure of σ(G,U ) (G) in Sym(G/U ) such that σ(G,U ) = ψ ◦ βG/U .
Proof. For Y ⊆ G, let Ŷ denote the closure of σ(G,U ) (Y ) in Sym(G/U ). The orbits of σ(U ) on G/U are finite, so Û is a profinite group. On the other hand, Û = Stab_Ĝ (U ), hence it is open. It now follows that Ĝ is a t.d.l.c. completion of G.

A basis for the topology on Ĝ is given by stabilizers of finite sets of cosets. Such stabilizers are exactly of the form ⋂_{g∈F} σ(g) Û σ(g^{−1}) with F ⊆ G finite. For every V ∈ U (Ĝ), the subgroup σ_{(G,U)}^{−1}(V ) therefore contains ⋂_{g∈F} gU g^{−1} for some F ⊆ G finite. The G-stable local filter S := {σ_{(G,U)}^{−1}(V ) | V ∈ U (Ĝ)} is thus exactly the Schlichting filter SG/U . Theorem 4.3 now implies the proposition.
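When U has finite index in G, the closure in Sym(G/U ) is simply the image, so Proposition 5.5 reduces to a finite computation: G//U is the image of the coset action, i.e. G/core_G (U ). A small illustrative sketch (the choice of G and U below is hypothetical and made purely for illustration):

```python
from itertools import permutations

def compose(s, t):                      # (s∘t)(i) = s(t(i)); permutations as tuples
    return tuple(s[i] for i in t)

G = list(permutations(range(4)))        # S_4
U = [s for s in G if s[3] == 3]         # a non-normal subgroup of finite index

# Left cosets G/U and the permutation action sigma_(G,U) by left multiplication.
cosets, seen = [], set()
for x in G:
    c = frozenset(compose(x, u) for u in U)
    if c not in seen:
        seen.add(c)
        cosets.append(c)

def sigma(x):
    return tuple(cosets.index(frozenset(compose(x, y) for y in c)) for c in cosets)

image = {sigma(x) for x in G}
# Here core_G(U) is trivial, so the Schlichting completion G//U is all of Sym-image ≅ S_4.
print(len(cosets), len(image))          # 4 cosets; image has 24 elements
```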
Via Theorem 5.4 and Proposition 5.5, we recover a result from the literature.
Corollary 5.6 (Shalom–Willis [11, Lemma 3.5 and Corollary 3.7]). Suppose that G is a
t.d.l.c. group and that φ : G → H is a completion map. If U is the preimage of a compact
open subgroup of H, then there is a unique quotient map ψ : H → G//U and closed embedding
ι : G//U → Sym(G/U ) such that σ(G,U ) = ι ◦ ψ ◦ φ.
We next show the Belyaev completion satisfies a stronger universality property.
Theorem 5.7. Let φ : G → H be a continuous homomorphism such that H is a t.d.l.c.
group. Suppose that U is a commensurated open subgroup of G such that φ(U ) is profinite.
Then there is a unique continuous homomorphism ψ : ĜU → H such that φ = ψ ◦ βU . If in
addition φ has dense image and φ(U ) is open in H, then ψ is a quotient map.
Proof. Let β := βU and set L := φ(U ). Let V be an open subgroup of H. Then V ∩ L is open
and of finite index in L, since L is compact. In particular, we see that W = φ−1 (V ) ∩ U is
an open subgroup of U of finite index. For all open subgroups V of H, there is thus W ∈ [U ]
such that φ(W ) ≤ V . We now obtain the unique continuous homomorphism ψ : ĜU → H
such that φ = ψ ◦ β via Proposition 4.1.
Now suppose that φ(U ) is open in H and that φ has dense image. The group Û , defined as the closure of β(U ) in ĜU , is a compact open subgroup of ĜU , so ψ(Û ) is compact in H; in particular, ψ(Û ) is closed in
H. We see that ψ(Û ) ≥ φ(U ), so ψ(Û ) is indeed open in H. The image of ψ is therefore an
open and dense subgroup of H, so ψ is surjective. Since H is a Baire space, it follows that ψ
is a quotient map.
A standard universal property argument now shows that Theorem 5.7 characterizes the
Belyaev completion up to topological isomorphism, so one can take Theorem 5.7 as the
definition of the Belyaev completion.
Remark 5.8. We see that the problem of classifying all continuous homomorphisms with
dense image from a specified group G (possibly discrete) to an arbitrary t.d.l.c. group can, in
principle, be broken into two steps:
(1) Classify the possible sizes of completion; in other words, classify the commensurability
classes α = [U ] of open subgroups of G that are invariant under conjugation. (This
typically amounts to classifying commensurated open subgroups.)
(2) For each such class α, form the Belyaev completion Ĝα and classify the quotients of Ĝα
with compact kernel.
A number of researchers have already considered (1). Shalom and Willis classify commensurated subgroups of many arithmetic groups in [11]. Other examples include classifications
of commensurated subgroups of almost automorphism groups ([5]) and Burger-Mozes simple
universal groups ([6]).
6. Invariant properties of completions
By Theorem 5.4, the t.d.l.c. completions of a group G of a given size differ only by a
compact normal subgroup, so ought to have the same “non-compact” properties. We here
make several precise statements showing that this intuition has substance.
6.1. First invariant properties.
Proposition 6.1. Let G be a group and let H be a t.d.l.c. completion of G of size α. Then,
(1) H is σ-compact if and only if |G : W | is countable for some (equivalently, any) W ∈ α;
(2) H is compactly generated if and only if G is generated by finitely many left cosets of W
for some (equivalently, any) W ∈ α.
Proof. Let β : G → H be the completion map and V ∈ U (H). By hypothesis, U := β −1 (V )
is an element of α.
For (1), if H is σ-compact, then V has only countably many left cosets in H. Since
U = β −1 (V ), it follows that |G : U | is countable. Conversely, suppose that |G : W | is
countable for some W ∈ α. It follows that |G : U | is countable. Since β(G) is dense in H,
there are only countably many left cosets of β(U ) in H, so H is σ-compact.
For (2), if H is compactly generated, then since β(G) is dense in H, there exists a finite
symmetric A ⊆ β(G) such that H = ⟨A⟩V; see for instance [12, Proposition 2.4]. Say
A = β(B) for a finite subset B of G. For every g ∈ G, there thus exists v ∈ V and g′ ∈ ⟨B⟩
such that β(g) = β(g′)v. Since β^{-1}(V) = U, it follows further that v = β(u) for some u ∈ U.
Thus, β(⟨B, U⟩) = β(G), and as ker β ≤ U, we infer that G = ⟨B, U⟩. In particular, G
is generated by finitely many left cosets of U. Conversely, suppose that G is generated by
finitely many left cosets of some W ∈ α. It follows that G is generated by finitely many left
cosets b_1 U, b_2 U, . . . , b_n U of U. The image β(⋃_{i=1}^{n} b_i U) generates a dense subgroup of H, and
hence the compact subset X := ⋃_{i=1}^{n} β(b_i)β(U) generates a dense open subgroup of H and
therefore generates H.
Corollary 6.2. For each of the following properties, either every completion of a group G of
size α has the property, or every completion of G of size α fails to have the property.
(1) Being σ-compact.
(2) Being compactly generated.
We next consider possible quotient groups and amenability.
Proposition 6.3. For each of the following properties, either every completion of a group G
of size α has the property, or every completion of G of size α fails to have the property.
(1) Having a quotient isomorphic to N where N is any specified t.d.l.c. group that has no
non-trivial compact normal subgroups.
(2) Being amenable.
Proof. For i ∈ {1, 2}, let φi : G → Hi be a completion map of size α and let Ui be the
preimage under φi of a compact open subgroup of Hi . The subgroup Ui is a member of α,
so by Theorem 5.4, there are quotient maps πi : ĜU → Hi with compact kernel. It therefore
suffices to show that for any t.d.l.c. group H and compact normal subgroup K of H, the
group H has the property if and only if H/K does.
For (1), if π : H → N is a quotient map, then π(K) is a compact normal subgroup of N .
Since N has no non-trivial compact normal subgroup, we deduce that K ≤ ker π, so N is a
quotient of H if and only if N is a quotient of H/K.
For (2), recall that every compact subgroup is amenable and that if L is a closed normal
subgroup of the locally compact group H, then H is amenable if and only if both H/L and
L are amenable. Since K is compact, we deduce that H is amenable if and only if H/K is
amenable.
Let us now consider topological countability axioms. These are more delicate and depend
on the choice of G-stable local filter, as opposed to only the size.
Proposition 6.4. Suppose G is a topological group and S is a G-stable local filter. Then ĜS
is first countable if and only if the set {V ∈ S | V ≤ U } is countable, for some (equivalently,
any) U ∈ S.
Proof. Fix U ∈ S. Let β : G → ĜS be the completion map and set SU := {V ∈ S | V ≤ U }.
By Theorem 3.9, we have |SU | = |O| where O is the set of open subgroups of U . If SU is
countable, then O is a countable base of identity neighborhoods, so ĜS is first countable.
Conversely if ĜS is first countable, then there is a countable base B of identity neighborhoods
consisting of open subsets of U . Since U is profinite, each B ∈ B contains a subgroup of U of
finite index, so there are only finitely many open subgroups of U that contain B. Hence, O
is countable, implying that SU is countable.
Proposition 6.5. Let G be a topological group and S be a G-stable local filter. Then ĜS
is second countable if and only if {V ∈ S | V ≤ U } and |G : U | are countable, for some
(equivalently, any) U ∈ S.
Proof. Via [4, (5.3)], a locally compact group is second countable if and only if it is σ-compact
and first countable. The proposition now follows from Propositions 6.4 and 6.1.
Corollary 6.6. If (G, U ) is a Hecke pair such that |G : U | is countable, then the Schlichting
completion G//U is a t.d.l.c.s.c. group.
6.2. Elementary groups. Let us now consider a more complicated algebraic property, the
property of being an elementary group.
Definition 6.7. The class E of elementary groups is the smallest class of t.d.l.c.s.c. groups
such that
(i) E contains all second countable profinite groups and countable discrete groups.
(ii) E is closed under taking closed subgroups.
(iii) E is closed under taking Hausdorff quotients.
(iv) E is closed under forming group extensions.
(v) If G is a t.d.l.c.s.c. group and G = ⋃_{i∈N} O_i where (O_i)_{i∈N} is an ⊆-increasing sequence
of open subgroups of G with Oi ∈ E for each i, then G ∈ E . We say that E is closed
under countable increasing unions.
The operations (ii)-(v) are often called the elementary operations. It turns out operations (ii) and (iii) follow from the others, and (iv) can be weakened to (iv)′ : E is closed under
extensions of profinite groups and discrete groups. These results are given by [12, Theorem
1.3].
Remark 6.8. If G is a t.d.l.c.s.c. group that is non-discrete, compactly generated, and
topologically simple, then G is not a member of E . The class E is thus strictly smaller than
the class of all t.d.l.c.s.c. groups.
The class of elementary groups comes with a canonical successor ordinal valued rank called
the decomposition rank and denoted by ξ(G); see [12, Section 4]. The key property of the
decomposition rank that we will exploit herein is that it is well-behaved under applying
natural group building operations.
For a t.d.l.c. group G, recall that the discrete residual
of G is defined to be Res(G) := ⋂{O ⊴ G | O open}.
Proposition 6.9. For G a non-trivial elementary group, the following hold.
(1) If H is a t.d.l.c.s.c. group, and ψ : H → G is a continuous, injective homomorphism,
then H is elementary with ξ(H) ≤ ξ(G). ([12, Corollary 4.10])
(2) If L ⊴ G is closed, then ξ(G/L) ≤ ξ(G). ([12, Theorem 4.19])
(3) If G = ⋃_{i∈N} O_i with (O_i)_{i∈N} an ⊆-increasing sequence of compactly generated open subgroups of G, then ξ(G) = sup_{i∈N} ξ(Res(O_i)) + 1. If G is compactly generated, then
ξ(G) = ξ(Res(G)) + 1. ([12, Lemma 4.12])
(4) If G is residually discrete, then G is elementary with ξ(G) ≤ 2. ([12, Observation 4.11])
(5) If G is elementary and lies in a short exact sequence of topological groups
{1} → N → G → Q → {1},
then ξ(G) ≤ (ξ(N ) − 1) + ξ(Q). (Here (ξ(N ) − 1) denotes the predecessor of ξ(N ), which
exists as ξ(N ) is a successor ordinal.) ([8, Lemma 3.8])
Proposition 6.10. Let G be a group.
(1) Either every second countable completion of G of size α is elementary, or all second
countable completions of G of size α are non-elementary.
(2) If every second countable completion of size α is elementary, then for any two second
countable completions H and L of G with size α, we have
ξ(L) ≤ 1 + ξ(H).
(3) If some second countable completion of size α is elementary with transfinite rank β, then
every second countable completion of size α is elementary with rank β.
Proof. Let H and L be second countable completions of G of size α. By Theorem 4.3 we
may assume H = ĜS1 and L = ĜS2 where S1 , S2 ⊆ α are G-stable local filters. Let S3 be
the smallest G-stable local filter containing S1 ∪ S2 . Via Proposition 6.5, {V ∈ Si | V ≤ U }
is countable for i ∈ {1, 2} and U ∈ Si . It follows that {V ∈ S3 | V ≤ U } is countable for
U ∈ S1 , and hence ĜS3 is first countable. By Corollary 6.2(1), ĜS3 is also σ-compact and
hence second countable.
By Proposition 4.5, both ĜS1 and ĜS2 are quotients of ĜS3 with compact kernel. It follows
that ĜSi is elementary for i ∈ {1, 2} if and only if ĜS3 is elementary; in particular, it is not
possible for ĜS1 to be elementary and ĜS2 to be non-elementary. This proves (1). Moreover,
if ĜS3 is elementary, then via Proposition 6.9 we obtain the inequalities
ξ(ĜSi ) ≤ ξ(ĜS3 ) ≤ 1 + ξ(ĜSi ) (i ∈ {1, 2}).
In particular, if ξ(ĜS3 ) is finite then ξ(ĜS1 ), ξ(ĜS2 ) ∈ {ξ(ĜS3 ) − 1, ξ(ĜS3 )}, and if ξ(ĜS3 ) is
transfinite then it is equal to both ξ(ĜS1 ) and ξ(ĜS2 ). This proves (2) and (3).
It can be the case that there are no second countable completions of a given size. In light
of Corollary 6.6, however, a group G has a second countable completion of size α if and only
if |G : U | is countable for some U ∈ α. With this in mind, we make the following definition.
Definition 6.11. A size α of a group G is called elementary if there is U ∈ α such that
|G : U | is countable and some (all) second countable completions are elementary. A Hecke
pair (G, U ) is called elementary if |G : U | is countable and some (all) s.c. completions are
elementary.
6.3. The scale function and flat subgroups. We conclude this section by considering the
scale function and flat subgroups in relation to completions; these concepts were introduced
in [13] and [14] respectively, although the term “flat subgroup” is more recent.
Definition 6.12. For G a t.d.l.c. group, the scale function s : G → Z is defined by
s(g) := min{ |gUg^{-1} : gUg^{-1} ∩ U| : U ∈ U(G) }.
A compact open subgroup U of G is tidy for g ∈ G if it achieves s(g). We say g is uniscalar
if s(g) = s(g−1 ) = 1.
A subset X of G is flat if there exists a compact open subgroup U of G such that for all
x ∈ X, the subgroup U is tidy for x; in this case, we say U is tidy for X. If X is a finitely
generated flat subgroup, the rank of X is the least number of generators for the quotient
group X/{x ∈ X | s(x) = 1}.
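To make the definition of the scale function concrete, here is a standard illustrative computation; the example group and the choice of U are not taken from the present paper.

% Worked example (an assumption made only for illustration): let G = \mathbb{Q}_p \rtimes \langle t \rangle,
% with t acting on \mathbb{Q}_p by multiplication by p^{-1}, and take U = \mathbb{Z}_p.
\[
  tUt^{-1} = p^{-1}\mathbb{Z}_p \supseteq \mathbb{Z}_p, \qquad
  |tUt^{-1} : tUt^{-1} \cap U| = |p^{-1}\mathbb{Z}_p : \mathbb{Z}_p| = p,
\]
\[
  t^{-1}Ut = p\mathbb{Z}_p \subseteq \mathbb{Z}_p, \qquad
  |t^{-1}Ut : t^{-1}Ut \cap U| = 1.
\]
One checks that every compact open subgroup of this G is of the form p^k Z_p, so no other choice of U improves on these indices; hence s(t) = p, s(t^{-1}) = 1, the subgroup U = Z_p is tidy for both t and t^{-1}, and t is not uniscalar.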
The scale function and flatness are clearly locally compact non-compact phenomena. In
relation to t.d.l.c. completions, they only depend on the size [U ].
Proposition 6.13. For φ : G → H a t.d.l.c. completion of size [U ], the following hold:
(1) For ŝ and s the scale functions for ĜU and H, s ◦ φ = ŝ ◦ βU .
(2) For X ⊆ G, the subset φ(X) is flat if and only if βU (X) is flat.
(3) If K ≤ G is a finitely generated subgroup, then φ(K) is flat with rank k if and only if
βU (K) is flat with rank k.
Proof. By Theorem 5.4, we can factorize φ as φ = π ◦βU , where π : ĜU → H is a quotient map
with compact kernel. The result [7, Lemma 4.9] ensures that s ◦ π = ŝ, hence s ◦ φ = ŝ ◦ βU ,
proving (1).
Appealing again to [7, Lemma 4.9], if U is tidy for g in ĜU , then π(U ) is tidy for π(g) in
H, and conversely if V is tidy for π(g) in H, then π −1 (V ) is tidy for g in ĜU . Therefore, if
βU (X) has a common tidy subgroup, then so does φ(X). Conversely, if φ(X) has a common
tidy subgroup V in H, then π −1 (V ) is a common tidy subgroup for βU (X). We conclude that
φ(X) has a common tidy subgroup if and only if βU (X) does, verifying (2).
Finally, if K is a subgroup of G, then φ(K) is a flat subgroup of H if and only if βU (K)
is a flat subgroup of ĜU by (2). The rank of φ(K) is the number of generators of the factor
φ(K)/LH where LH := {x ∈ φ(K) | s(x) = 1}. Letting LĜU be the analogous subgroup of
ĜU , it follows from (1) that the map π induces an isomorphism π̃ : βU (K)/LĜU → φ(K)/LH .
We conclude that βU (K) has rank k if and only if φ(K) has rank k, proving (3).
The next corollary is immediate from Proposition 6.13 and the fact the scale function is
continuous (see [13]).
Corollary 6.14. For G a group and U a commensurated open subgroup of G, either all t.d.l.c.
completions of G of size α are uniscalar, or no completion of size α is uniscalar.
7. Completions compatible with homomorphisms
For an injective homomorphism θ : G → L, we may wish to find a t.d.l.c. completion G̃ of G such that θ extends to an injective homomorphism from G̃ to L. More precisely, we say the t.d.l.c. completion map β : G → G̃ is compatible with θ if there is a continuous injective homomorphism ψ : G̃ → L such that θ = ψ ◦ β; note that in this case ψ is necessarily unique.
Here we do not insist that L be locally compact; indeed, in many interesting examples, L
itself will not be locally compact (see Remark 7.3 below). We can characterize the t.d.l.c.
completions compatible with θ in terms of commensurated subgroups.
Theorem 7.1. Let θ : G → L be an injective continuous homomorphism of topological groups.
(1) Suppose that H is an open commensurated subgroup of G such that the closure θ(H) of
θ(H) in L is profinite and set H ∗ := θ −1 (θ(H)). Then H ∗ is open and commensurated
in G, and there is a t.d.l.c. completion map β : G → ĜH,θ compatible with θ such that
H ∗ is the preimage of a compact open subgroup of ĜH,θ . Moreover, β is unique up to
isomorphisms of ĜH,θ and is determined by the pair ([H ∗ ], θ).
(2) Suppose that β : G → G̃ is a t.d.l.c. completion of G compatible with θ and ψ : G̃ → L is such that θ = ψ ◦ β, let U be a compact open subgroup of G̃, and set H := β^{-1}(U). Then H is a commensurated subgroup of G that is the preimage of a profinite subgroup θ(H) = ψ(U) of L, and G̃ ≃ ĜH,θ.
Proof. For (1), H is an open commensurated subgroup of G such that the closure K = θ(H) of
θ(H) in L is profinite. The image θ(G) additionally commensurates K; consider, for instance,
[5, Lemma 2.7]. We thus conclude that H ∗ := θ −1 (K) is commensurated in G. Now let
R be the set of closed subgroups of L that are commensurate with K and let S be the set
of θ-preimages of elements of R. The collection S forms a G-stable local filter, and setting
ĜH,θ := ĜS , we obtain a t.d.l.c. completion β : G → ĜH,θ .
Let L′ be the group ⟨θ(G), K⟩, equipped with the unique group topology such that the
inclusion of K into L′ is continuous and open. The map θ induces a continuous homomorphism
θ ′ from G to L′ , and Theorem 4.3 provides a unique topological group isomorphism ψ ′ :
ĜH,θ → L′ such that θ ′ = ψ ′ ◦ β. In particular, since the natural inclusion of L′ into L
is continuous, we obtain a continuous injective homomorphism ψ : ĜH,θ → L such that
θ = ψ ◦ β. Thus β is compatible with θ. It is also clear that given θ, the construction of β
is determined by the commensurability class of K among closed subgroups of L, and hence
by the commensurability class of H ∗ , since K = θ(H ∗ ) and the mapping · ↦ θ(·) preserves
commensurability classes of subgroups.
To see that β is unique up to isomorphisms of the range, Theorem 4.3 ensures that it is
enough to show the following: given a t.d.l.c. completion β(G,T ) : G → ĜT that is compatible
with θ, where T is a G-stable local filter and H ∗ ∈ T , then T = S. Suppose ψ2 is the
injective continuous homomorphism from ĜT to L such that θ = ψ2 ◦ β(G,T ) . The collection
T is the set of β(G,T ) -preimages of compact open subgroups of ĜT . The images of the compact
open subgroups of ĜT give rise to a collection R′ of compact subgroups of L, so T is the
set of θ-preimages of elements of R′ . We see that K ∈ R′ and that all elements of R′ are
commensurate with K, so R′ ⊆ R and hence T ⊆ S. The argument that S ⊆ T is similar.
For (2), U is commensurated in G̃, so H is commensurated in G. Let K := ψ(U). Since ψ is injective and continuous and U is compact, we see that K is closed in L and isomorphic to U as a topological group; in particular, K is profinite. The injectivity of ψ ensures that U = ψ^{-1}(K), so H = β^{-1}ψ^{-1}(K) = θ^{-1}(K). Since G has dense image in G̃, we see that β(H) is dense in U and hence θ(H) is dense in K. The fact that G̃ ≃ ĜH,θ follows from the uniqueness result established in part (1).
To illustrate the theorem, let us spell out what it means for certain classes of action. Given a group G acting faithfully on a structure X, a t.d.l.c. completion of the action is a faithful action of a t.d.l.c. group G̃ on the same structure such that G̃ contains a dense copy of G with its original action. Say that a unitary representation X of a group H is locally finite if X is
the closure of the union of an increasing family (Xi ) of finite-dimensional subrepresentations,
such that the kernel of the action of H on Xi has finite index for each i.
Corollary 7.2. For G a group with H a commensurated subgroup, suppose that one of the
following hold:
(1) G acts faithfully by permutations on a set X, and H has finite orbits on X.
(2) G acts faithfully by homeomorphisms on a compact metrizable zero-dimensional space X,
and the action of H on X is equicontinuous.
(3) X is a faithful complex unitary representation of G, and X is locally finite as a representation of H.
Then there is a unique t.d.l.c. completion G̃ ↷ X of the action such that H has compact open closure in G̃, which is continuous in cases (1) and (2), and strongly continuous in case (3).
Moreover, all t.d.l.c. completions of the action of G on X with the given continuity property
arise in this way.
Proof. For (1), consider G and H as subgroups of Sym(X) with the permutation topology.
As is well-known, a subgroup H of Sym(X) has compact closure if and only if it has finite
orbits. Furthermore, any topological permutation group that acts continuously on X must
map continuously into Sym(X), and conversely, Sym(X) itself acts continuously on X.
For (2), consider G and H as subgroups of Homeo(X) with the compact-open topology.
By the Arzelà–Ascoli theorem, the condition that H be equicontinuous on X is exactly the
condition that H has compact closure in Homeo(X). If G̃ is a group acting faithfully by homeomorphisms on X, then the corresponding homomorphism from G̃ to Homeo(X) is continuous if and only if the action of G̃ on X is continuous.
For (3), consider G and H as subgroups of the unitary group U(X) with the strong operator topology, and let H̄ denote the closure of H in U(X). Suppose that X is locally finite as a representation of H, with (X_i) the corresponding increasing family of finite-dimensional subrepresentations. For each i, H̄ acts on X_i with an open kernel of finite index, so H̄ is totally disconnected. In addition, given a net (h_j)_{j∈J} in H̄, there is a subnet (h_{j(k)}) that is eventually constant on each X_i. It follows that (h_{j(k)}) converges pointwise on X to a unitary map; in other words, h_{j(k)} converges in H̄. Thus H̄ is compact and hence a profinite group. Conversely, suppose H is a subgroup of G with compact closure H̄ in U(X). By standard results (see for instance [3, Theorem 5.2]), X is an orthogonal sum of finite-dimensional irreducible representations X_j of H̄, and on each X_j, H̄ acts as a compact Lie group. If H̄ is profinite, then the Lie quotients of H̄ are in fact finite, so H̄ acts on X_j with a kernel of finite index. We conclude that X is a locally finite representation of H̄ and hence of H. Therefore, H has profinite closure in U(X) if and only if X is locally finite as a representation of H.
In all cases, the conclusions now follow by Theorem 7.1.
Remark 7.3. If X is a countably infinite set, the Cantor set, or the infinite-dimensional
separable complex Hilbert space, then Sym(X), Homeo(X), and U(X) respectively are well-known examples of Polish groups that are not locally compact. As such, simply taking the
closure of the image of ψ : G → L, with L one of the aforementioned groups, will not
always produce a locally compact group, and moreover, there are interesting examples of
continuous actions of t.d.l.c. groups that do not arise from taking the closure in L. For
example, Thompson’s group V acts faithfully by homeomorphisms on the standard ternary Cantor set X ⊂ [0, 1] and has a commensurated subgroup H consisting of those elements of V that act by isometries of the visual metric. In particular, the action of H is equicontinuous. There is thus a unique t.d.l.c. completion Ṽ ↷ X of the action on X such that the closure of H in Ṽ is a compact open subgroup. The group Ṽ is known as Neretin’s group N_{2,2} of
piecewise homotheties of the ternary Cantor set, and it carries a strictly finer topology than
the one induced by Homeo(X).
References
[1] V. V. Belyaev. Locally finite groups containing a finite inseparable subgroup. Siberian Math. J., 34(2):218–
232, 1993.
[2] N. Bourbaki. General topology. Chapters 1–4. Elements of Mathematics (Berlin). Springer-Verlag, Berlin,
1989. Translated from the French, Reprint of the 1966 edition.
[3] G. B. Folland. A course in abstract harmonic analysis. CRC Press, Boca Raton, FL, 2000.
[4] A. S. Kechris. Classical descriptive set theory, volume 156 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1995.
[5] A. Le Boudec and P. Wesolek. Commensurated subgroups in tree almost automorphism groups. Groups
Geom. Dyn. to appear.
[6] F. Le Maître and P. Wesolek. On strongly just infinite profinite branch groups. J. Group Theory, 20(1):1–
32, 2017.
[7] C. D. Reid. Dynamics of flat actions on totally disconnected, locally compact groups. New York J. Math.,
22:115–190, 2016.
[8] C. D. Reid and P. R. Wesolek. Dense normal subgroups and chief factors in locally compact groups. Proc.
Lond. Math. Soc. to appear.
[9] L. Ribes and P. Zalesskii. Profinite groups, volume 40 of Ergebnisse der Mathematik und ihrer Grenzgebiete.
3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd
Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, second edition, 2010.
[10] G. Schlichting. Operationen mit periodischen Stabilisatoren. Arch. Math., 34(1):97–99, 1980.
[11] Y. Shalom and G. A. Willis. Commensurated subgroups of arithmetic groups, totally disconnected groups
and adelic rigidity. Geom. Funct. Anal., 23(5):1631–1683, 2013.
[12] P. Wesolek. Elementary totally disconnected locally compact groups. Proc. Lond. Math. Soc. (3),
110(6):1387–1434, 2015.
[13] G. Willis. The structure of totally disconnected, locally compact groups. Math. Ann., 300(2):341–363,
1994.
[14] G. Willis. Tidy subgroups for commuting automorphisms of totally disconnected groups: An analogue of
simultaneous triangularisation of matrices. New York J. Math., 10:1–35, 2004.
University of Newcastle, School of Mathematical and Physical Sciences, University Drive,
Callaghan NSW 2308, Australia
E-mail address: [email protected]
Binghamton University, Department of Mathematical Sciences, PO Box 6000, Binghamton
New York 13902 USA
E-mail address: [email protected]
Submitted to the Annals of Applied Probability
ITERATIVE COLLABORATIVE FILTERING
FOR SPARSE MATRIX ESTIMATION ∗
arXiv:1712.00710v1 [math.ST] 3 Dec 2017
By Christian Borgs‡, Jennifer Chayes‡, Christina Lee‡, and Devavrat Shah§
Microsoft Research New England‡ and Massachusetts Institute of
Technology§
The sparse matrix estimation problem consists of estimating the
distribution of an n × n matrix Y , from a sparsely observed single instance of this matrix where the entries of Y are independent
random variables. This captures a wide array of problems; special instances include matrix completion in the context of recommendation
systems, graphon estimation, and community detection in (mixed
membership) stochastic block models. Inspired by classical collaborative filtering for recommendation systems, we propose a novel iterative, collaborative filtering-style algorithm for matrix estimation in
this generic setting. We show that the mean squared error (MSE) of
our estimator converges to 0 at the rate of O(d²(pn)^{−2/5}) as long as ω(d⁵n) random entries from a total of n² entries of Y are observed (uniformly sampled), E[Y] has rank d, and the entries of Y have bounded support. The maximum squared error across all entries converges to 0 with high probability as long as we observe a little more, Ω(d⁵n ln²(n)) entries. Our results are the best known sample complexity results in this generality.
∗
A preliminary version of this work has been accepted to appear in proceedings of the
Neural Information Processing Systems Conference, 2017. The results have been improved,
strengthened, and expanded with new extensions since the preliminary version.
†
This work is supported in part by NSF under grants CMMI-1462158 and CMMI-1634259, by DARPA under grant W911NF-16-1-0551, and additionally by an NSF Graduate
Fellowship and Claude E. Shannon Research Assistantship.
MSC 2010 subject classifications: Primary 62F12, 62G05; secondary 60B20, 68W40
Keywords and phrases: graphon estimation, matrix estimation, latent variable models,
similarity based collaborative filtering, recommendation systems
CONTENTS

1 Introduction  3
 1.1 Problem Statement  3
 1.2 Related Work  4
 1.3 Summary of Contributions  7
2 Model, Assumptions and Variations  9
 2.1 Notations, Assumptions  9
  2.1.1 Mixed Membership Models  10
  2.1.2 Finite Degree Polynomials  11
 2.2 Variation: Asymmetric  12
 2.3 Variation: Categorical Labels  14
3 Algorithm  15
 3.1 Algorithm Intuition  15
 3.2 Algorithm Details  16
 3.3 Reducing computation by subsampling vertices  18
 3.4 Choosing radius parameter r  19
 3.5 Computational Complexity  20
 3.6 Belief Propagation and Non-Backtracking Operator  22
 3.7 Knowledge of the observation set E  22
4 Main Results  22
 4.1 Modified algorithm with subsampled Anchor vertices  28
 4.2 Asymmetric Matrix  30
 4.3 Categorical Valued Data  32
 4.4 Non-uniform sampling  33
5 Discussion  36
6 Proof Sketch  37
References  41
Author’s addresses  43

Appendices
A Proof of Lemmas  44
 A.1 Sparsity of Local Neighborhood Grows Exponentially  44
 A.2 Concentration of Paths within Local Neighborhood  49
 A.3 Showing that Majority of Local Neighborhoods are Good  58
 A.4 Concentration of Edges Between Local Neighborhoods  64
 A.5 Existence of Close Neighbors  82
 A.6 Error Bounds of Final Estimator  94
B Proof of Main Results  109
 B.1 Bounding the Max Error  109
 B.2 Using Subsampled Anchor Vertices  110
1. Introduction. We consider the question of sparse matrix completion
with noisy observations. As a prototype for such a problem, consider a noisy
observation of a social network where observed interactions are signals of
true underlying connections. We might want to predict the probability that
two users would choose to connect if recommended by the platform, e.g.
LinkedIn. As a second example, consider a recommendation system where
we observe movie ratings provided by users, and we may want to predict the
probability distribution over ratings for specific movie-user pairs. A popular
collaborative filtering approach suggests using “similarities” between pairs
of users to estimate the probability of a connection being formed or a movie
being liked. Traditionally, the similarities between pair of users in a social
network is computed by comparing the set of their friends or in the context
of movie recommendation, by comparing commonly rated movies. In the
sparse setting, however, most pairs of users have no common friends, or most
pairs of users have no commonly rated movies; thus there is insufficient data
to compute the traditional similarity metrics.
In this work, the primary interest is to provide a principled way to extend such a simple, intuitive approach to compute similarities between pairs of users so as to achieve sparse matrix completion. We propose to do so by incorporating information within a larger radius neighborhood rather than restricting only to immediate neighbors. In the process, we establish that it achieves the best-known sample complexity, which matches the well-known conjectured lower bound for a special instance of the generic problem, the mixed membership stochastic block model.
1.1. Problem Statement. The question discussed above can be mathematically formulated as a matrix estimation problem. Let F be an n × n
matrix which we would like to estimate, and let Z be a noisy signal of matrix F such that E[Z] = F . The available data is denoted by (E, M ), where
E ⊂ [n] × [n] denotes the subset of indices for which data is observed, and M
is the n × n symmetric data matrix where M (u, v) = Z(u, v) for (u, v) ∈ E,
and M(u, v) = 0 for (u, v) ∉ E. We can equivalently represent the data with
an undirected weighted graph G with vertex set [n], edge set E, and edge
weights given by M . We shall use graph and matrix notations in an interchangeable manner. Given the data (E, M ), we would like to estimate the
original matrix F . We assume a uniform sampling model, where each entry
is observed with probability p independently of all other entries.
We shall assume that each u ∈ [n] is associated to a latent variable αu ∈
X1 , which is drawn independently across indices [n] as per distribution P1
over a bounded compact space X1 . We shall assume that the expected data
4
BORGS-CHAYES-LEE-SHAH
matrix can be described by the latent function f , i.e. F (u, v) = f (αu , αv ),
where f : X1 × X1 → R is a symmetric function. We note that such a
structural assumption or the so-called Latent Variable Model is a canonical representation for exchangeable arrays as shown by Aldous and Hoover
Aldous (1981); Hoover (1981); Austin (2012). For each observation, we assume that E[Z(u, v)] = F (u, v), Z(u, v) is bounded and {Z(u, v)}1≤u<v≤n
are independent conditioned on the node latent variables.
The goal is to find the smallest p, as a function of n and structural properties of f, so that there exists an algorithm that can produce F̂, an estimate of matrix F, so that the Mean-Squared-Error (MSE) between F̂ and F,
    (1/n²) Σ_{u,v∈[n]} (F̂(u, v) − F(u, v))²,
converges to 0 as n → ∞.
1.2. Related Work. The matrix estimation problem introduced above,
as special cases, includes problems from different areas of literature: matrix
completion popularized in the context of recommendation systems, graphon
estimation arising from the asymptotic theory of graphs, and community
detection using the stochastic block model or its generalization known as the
mixed membership stochastic block model. The key representative results
for each of these are mentioned in Table 1. We discuss the scaling of the
sample complexity with respect to d (model complexity, usually rank) and
n for polynomial time algorithms, including results for both mean squared
error convergence, exact recovery in the noiseless setting, and convergence
with high probability in the noisy setting.
Now we provide a brief overview of prior works reported in Table 1. In the context of matrix completion, there has been much progress under the low-rank assumption and additive noise model. Most theoretically
founded methods are based on spectral decompositions or minimizing a loss
function with respect to spectral constraints Keshavan, Montanari and Oh
(2010a,b); Candes and Recht (2009); Candès and Tao (2010); Recht (2011);
Negahban and Wainwright (2011); Davenport et al. (2014); Chen and Wainwright (2015); Chatterjee (2015). A work that is closely related to ours is
by Lee et al. (2016). It proves that a similarity based collaborative filtering-style algorithm provides a consistent estimator for matrix completion under
the generic model when the latent function is Lipschitz, not just low rank;
however, it requires Õ(n3/2 ) samples. In a sense, our work can be viewed
as an algorithmic generalization of Lee et al. (2016) that handles the sparse
sampling regime and a generic noise model.
Table 1: Sample Complexity of Related Literature

| Setting | Paper | Sample Complexity | Data/Noise model | Expected matrix | Guarantee |
| matrix completion | KMO09 | ω(dn) | noiseless | rank d | MSE → 0 |
| matrix completion | KMO10 | Ω(dn max(log n, d)), ω(dn) | additive iid Gaussian | rank d | MSE → 0 |
| matrix completion | NW11 | ω(dn log n) | additive iid Gaussian | rank d | MSE → 0 |
| matrix completion | CW15 | Ω(n max(d, log² n)) | additive iid Gaussian | rank d | MSE → 0 |
| matrix completion | C14 | ω(dn log⁶ n) | independent bounded | rank d | MSE → 0 |
| matrix completion | LLSS16 | Ω(n^{3/2}) | additive iid bounded | Lipschitz | MSE → 0 |
| matrix completion | CT09 | Ω(dn log² n max(d, log⁴ n)) | noiseless | rank d | exact recovery |
| matrix completion | KMO09 | Ω(dn max(d, log n)) | noiseless | rank d | exact recovery |
| matrix completion | R11 | Ω(dn log² n) | noiseless | rank d | exact recovery |
| 1-bit matrix completion | CW15 | Ω(n max(d log n, log² n, d²)) | binary entries | rank d | MSE → 0 |
| 1-bit matrix completion | DPBW14 | Ω(n max(d, log n)), ω(dn) | binary entries | rank d | MSE → 0 |
| SBM | AS15a; AS16 | ω(n)* | binary entries | d blocks | partial recovery |
| SBM | AS15a | Ω(n log n)* | binary entries | d blocks (SBM) | exact recovery |
| SBM | XML14 | Ω(n log n)* | binary entries | rank d | MSE → 0 |
| MMSBM | AGHK13 | Ω(d² n polylog n) | binary entries | rank d | whp error → 0 |
| MMSBM | SH17 | Ω(d² n) | binary entries | rank d | detection |
| graphon | ACC13 | Ω(n²) | binary entries | monotone row sum | MSE → 0 |
| graphon | ZLZ16 | Ω(n²) | binary entries | piecewise Lipschitz | MSE → 0 |
| graphon | BCCG15 | ω(n) | binary entries | monotone row sum | MSE → 0 |
| matrix est. | This work | ω(d⁵ n) | independent bounded | rank d, Lipschitz | MSE → 0 |
| matrix est. | This work | Ω(d⁵ n log² n) | independent bounded | rank d, Lipschitz | whp error → 0 |

*result does not indicate dependence on d.
Most of the results in matrix completion require additive noise models,
which do not extend to setting when the observations are binary or quantized. The Universal Singular Value Thresholding (USVT) estimator Chatterjee (2015) is able to handle general bounded noise, although it requires a
few log factors more in its sample complexity. Our work removes the extra
log factors while still allowing for general bounded noise.
There is also a significant amount of literature which looks at the estimation problem when the data matrix is binary, also known as 1-bit matrix
completion, stochastic block model (SBM) parameter estimation, or graphon
estimation. The latter two terms are found within the context of community
detection and network analysis, as the binary data matrix can alternatively
be interpreted as the adjacency matrix of a graph – which is symmetric, by
definition. Under the SBM, each vertex is associated to one of d community
types, and the probability of an edge is a function of the community types of
both endpoints. Estimating the n×n parameter matrix becomes an instance
of matrix estimation. In SBM, the expected matrix is at most rank d due to
its block structure. Precise thresholds for cluster detection (better than random) and estimation have been established by Abbe and Sandon (2015a,b,
2016). Our work, both algorithmically and technically, draws insight from
this sequence of works, extending the analysis to a broader class of generative models through the design of an iterative algorithm, and improving the
technical results with precise MSE bounds.
The mixed membership stochastic block model (MMSBM) allows each
vertex to be associated to a length d vector, which represents its weighted
membership in each of the d communities. The probability of an edge is a
function of the weighted community memberships vectors of both endpoints,
resulting in an expected matrix with rank at most d. Recent work by Steurer
and Hopkins (2017) provides an algorithm for weak detection for MMSBM
with sample complexity d2 n, when the community membership vectors are
sparse and evenly weighted. They provide partial results to support a conjecture that d2 n is a computational lower bound, separated by a gap of d
from the information theoretic lower bound of dn. This gap was first shown
in the simpler context of the stochastic block model Decelle et al. (2011). Xu,
Massoulié and Lelarge (2014) proposed a spectral clustering method for inferring the edge label distribution for a network sampled from a generalized
stochastic block model. When the expected function has a finite spectrum
decomposition, i.e. low rank, then they provide a consistent estimator for
the sparse data regime, with Ω(n log n) samples.
Graphon estimation extends SBM and MMSBM to the generic Latent
Variable Model where the probability of an edge can be any measurable
function f of real-valued types (or latent variables) associated to each endpoint. Graphons were first defined as the limiting object of a sequence of
large dense graphs Borgs et al. (2008); Diaconis and Janson (2008); Lovász
(2012), with recent work extending the theory to sparse graphs Borgs et al.
(2014a,b, 2016); Veitch and Roy (2015). In the graphon estimation problem, we would like to estimate the function f given an instance of a graph
generated from the graphon associated to f . Gao, Lu and Zhou (2015);
Klopp, Tsybakov and Verzelen (2015) provide minimax optimal rates for
graphon estimation; however a majority of the proposed estimators are not
computable in polynomial time, since they require optimizing over an exponentially large space (e.g. least squares or maximum likelihood) Wolfe and
Olhede (2013); Borgs et al. (2015); Borgs, Chayes and Smith (2015); Gao,
Lu and Zhou (2015); Klopp, Tsybakov and Verzelen (2015). Borgs et al.
(2015) provided a polynomial time method based on degree sorting in the
special case when the expected degree function is monotonic. To our knowledge, existing positive results for sparse graphon estimation require either
strong monotonicity assumptions Borgs et al. (2015), or rank constraints as
assumed in the SBM, the 1-bit matrix completion, and in this work.
We call special attention to the similarity based methods which are able to
bypass the rank constraints, relying instead on smoothness properties of the
latent function f (e.g. Lipschitz) Zhang, Levina and Zhu (2015); Lee et al.
(2016). They hinge upon computing similarities between rows or columns
by comparing commonly observed entries. Similarity based methods, also
known in the literature as collaborative filtering, have been successfully
employed across many large scale industry applications (Netflix, Amazon,
Youtube) due to their simplicity and scalability Goldberg et al. (1992); Linden, Smith and York (2003); Koren and Bell (2011); Ning, Desrosiers and
Karypis (2015); however the theoretical results have been relatively sparse.
These recent results suggest that the practical success of these methods
across a variety of applications may be due to its ability to capture local
structure. A key limitation of this approach is that it requires a dense dataset
with sufficient entries in order to compute similarity metrics, requiring that
each pair of rows or columns has a growing number of overlapped observed
entries, which does not hold when p = o(n−1/2 ).
1.3. Summary of Contributions. In this work, we present a novel algorithm for estimating F = [F (i, j)] by extending the notion of similarity for
the sparse regime in an intuitive and simple way: rather than only considering
directly overlapped entries as done in Zhang, Levina and Zhu (2015); Lee
et al. (2016), we consider longer “paths” of data associated to each row,
8
BORGS-CHAYES-LEE-SHAH
expanding the set of associated datapoints until there is sufficient overlap.
We show that this does not introduce bias and variance due to the sparse
sampling. In fact, the MSE associated with the resulting estimate does converge to 0 as long as the latent function f when regarded as an integral
operator has finite spectrum with rank d and p = ω(d⁵ n^{−1}). More precisely,
if f is piecewise Lipschitz with rank d and the latent variables are sampled uniformly over the unit interval, we prove that the mean squared error
(MSE) of our estimates converges to zero at a rate of O(d²(pn)^{−1/2+θ}) as long as the sparsity p = ω(d^{max(1/(2θ), 4/(1−2θ))} n^{−1}) for some θ ∈ (0, 1/4) (i.e. ω(d⁵n) total observations for θ = 1/10). In addition, with high probability, the maximum squared error converges to zero at a rate of O(d²(pn)^{−1/2+θ}) as long as the sparsity p = Ω(d^{max(1/(2θ), 4/(1−2θ))} n^{−1} ln²(n)). Our analysis
applies to a generic noise setting as long as Z(i, j) has bounded support.
Our work takes inspiration from Abbe and Sandon (2015a,b, 2016), which
estimates clusters of the stochastic block model by computing distances from
local neighborhoods around vertices. We improve upon their analysis to provide MSE bounds for the general latent variable model with finite spectrum,
which includes a larger class of generative models such as mixed membership
stochastic block models, while they consider the stochastic block model with
non-overlapping communities. We show that our results hold even when the
rank d increases with n, as long as d = o((pn)^{min(2θ, (1−2θ)/4)}) for θ ∈ (0, 1/4). As
compared to spectral methods such as Keshavan, Montanari and Oh (2010b);
Recht (2011); Davenport et al. (2014); Chen and Wainwright (2015); Chatterjee (2015), our analysis handles the general bounded noise model and
holds for sparser regimes with respect to n for constant d, only requiring
p = ω(n−1 ).
In summary, as can be seen from Table 1, our result provides the best
sample complexity with respect to n for the general matrix estimation problem with bounded entries noise model and constant rank d, as the other
models either require extra log factors, or impose additional requirements
on the noise model or the expected matrix. Similarly, ours is the best known
sample complexity for high probability max-error convergence to 0 for the
general rank d bounded entries setting, as other results either assume block
constant or noiseless. Recently, Steurer and Hopkins (2017) showed a partial
result that this computational lower bound holds for algorithms that rely on
fitting low-degree polynomials to the observed data. Given the conjectured
lower bound (with partial support in Steurer and Hopkins (2017)), it seems
that our result is nearly optimal if not optimal in terms of its dependence
on both n for MSE convergence as well as high probability (near) exact
recovery.
2. Model, Assumptions and Variations.
2.1. Notations, Assumptions. Recall that the object of our interest is an n × n
symmetric matrix F ; Z is a noisy signal of matrix F such that E[Z] = F . The
available data is denoted by (E, M ), where E ⊂ [n]×[n] denotes the subset of
indices for which data is observed, and M is the n×n symmetric data matrix
where M(u, v) = Z(u, v) for (u, v) ∈ E, and M(u, v) = 0 for (u, v) ∉ E.
That is, our observations can be denoted as an undirected weighted graph
G with vertex set [n], edge set E, and edge weights given by M . For all
u, v ∈ [n], we shall assume that Z(u, v) are independent across indices with
E[Z(u, v)] = F (u, v); and F (u, v), Z(u, v) ∈ [0, 1].
We assume that f has finite spectrum with rank d when regarded as an
integral operator, i.e. for any αu , αv ∈ X1 ,
    f(α_u, α_v) = Σ_{k=1}^{d} λ_k q_k(α_u) q_k(α_v),
where λ_k ∈ R for 1 ≤ k ≤ d, and the q_k are orthonormal ℓ² functions for 1 ≤ k ≤ d such that
    ∫_{X_1} q_k(y)² dP_1(y) = 1   and   ∫_{X_1} q_k(y) q_h(y) dP_1(y) = 0 for k ≠ h.
We assume that there exists some B_q such that sup_{y∈[0,1]} |q_k(y)| ≤ B_q for all k. Let us define
(2.1)    φ(ξ) := ess inf_{α_0∈X_1} P_1({α ∈ X_1 s.t. Σ_{k=1}^{d} λ_k² (q_k(α) − q_k(α_0))² < ξ²}).
Let Λ denote the d × d diagonal matrix with {λ_k}_{k∈[d]} as the diagonal entries, and let Q denote the d × n matrix where Q(k, u) = q_k(α_u). Since Q is a random matrix depending on the sampled α, it is not guaranteed to be an orthonormal matrix (even though the q_k are orthonormal functions). By definition, it follows that F = QᵀΛQ. Let d_0 ≤ d be the number of distinct valued eigenvalues amongst λ_k, 1 ≤ k ≤ d. Let Λ̃ denote the d × d_0 matrix where Λ̃(a, b) = λ_a^{b−1}.
For convenience, we shall define ρ ∈ R^d as
(2.2)    ρ = [ρ_k]_{k∈[d]}, where ρ_k = E[q_k(α)] = ∫_{X_1} q_k(y) dP_1(y).
We note that the finite spectrum assumption also implies that the model
can be represented by latent variables in the d dimensional Euclidean space,
where the latent variable for node i would be the vector (q1 (αi ), . . . qd (αi )),
and the latent function would be linear, having the form
    f(q⃗, q⃗′) = Σ_k λ_k q_k q_k′ = q⃗ᵀ Λ q⃗′.
This condition also implies that the expected matrix F is low rank, which
includes scenarios such as the mixed membership stochastic block model and
finite degree polynomials. Although we assume observations are sampled
independently with probability p, we will also discuss a solution for dealing
with non-uniform sampling in Section 5. Since the finite spectrum condition
imposes that the model can be described by a linear function, we will not
need the additional Lipschitz condition, although we will need bounds on
the underestimator φ which captures the local geometry in the d-dimensional
representation.
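As a small illustrative sketch (not part of the paper), the following Python snippet generates synthetic data from a rank-d latent variable model of this form; the particular eigenvalues, the cosine eigenfunctions, and all constants are assumptions chosen only so that F(u, v) stays in [0, 1].

import numpy as np

rng = np.random.default_rng(0)
n, d, p = 500, 3, 0.05                         # illustrative sizes, not the paper's settings
alpha = rng.uniform(0.0, 1.0, size=n)          # latent variables, P1 = Unif[0, 1]

def q(k, x):
    # Bounded orthonormal functions on [0, 1]: q_0 = 1, q_k = sqrt(2) cos(pi k x) for k >= 1.
    return np.ones_like(x) if k == 0 else np.sqrt(2.0) * np.cos(np.pi * k * x)

lam = np.array([0.5, 0.2, 0.05])               # eigenvalues lambda_k, chosen so F is in [0, 1]
Q = np.stack([q(k, alpha) for k in range(d)])  # d x n matrix with Q[k, u] = q_k(alpha_u)
F = Q.T @ np.diag(lam) @ Q                     # expected matrix F = Q^T Lambda Q (rank d)

# Each entry is observed independently with probability p; here Z(u, v) ~ Bernoulli(F(u, v)),
# one example of a bounded noise model with E[Z] = F.
mask = rng.random((n, n)) < p
mask = np.triu(mask, 1)
mask = mask | mask.T                           # symmetric observation set E, no self-loops
Z = rng.random((n, n)) < F
Z = np.triu(Z, 1)
Z = (Z | Z.T).astype(float)
M = np.where(mask, Z, 0.0)                     # observed data matrix M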
2.1.1. Mixed Membership Models. We show that any mixed membership
model can be represented with a finite spectrum latent variable model. In
the mixed membership model, each node is associated to a vector π ∈ ∆d ,
sampled iid from a distribution P . For two nodes with respective types π
and π′, the observed interaction is
    f(π, π′) = Σ_{ij} π_i π′_j B_{ij} = πᵀ B π′,
where B ∈ [0, 1]d×d and assumed to be symmetric. Since B is symmetric,
there exists a diagonal decomposition B = U Λ̃U T with uk denoting the
eigenvectors, such that the function f : ∆d × ∆d → [0, 1] corresponds to
    f(π, π′) = Σ_{k=1}^{d} λ̃_k (u_kᵀ π)(u_kᵀ π′).
We can verify that
    ∫_{∆_d} ∫_{∆_d} f(π, π′) dP dP < ∞.
Let W : L²(∆_d, [0, 1]) → L²(∆_d, [0, 1]) denote the Hilbert-Schmidt integral operator associated to the kernel f, such that for some function η ∈ L²(∆_d, [0, 1]),
    (Wη)(π′) = ∫_{∆_d} f(π, π′) η(π) dP
             = ∫_{∆_d} Σ_{k=1}^{d} λ̃_k (u_kᵀ π)(u_kᵀ π′) η(π) dP
             = Σ_{k=1}^{d} λ̃_k (u_kᵀ π′) ∫_{∆_d} (u_kᵀ π) η(π) dP
             = Σ_{k=1}^{d} λ̃_k g_k(π′) ⟨g_k, η⟩,
where the inner product between two functions η, η′ is defined as
    ⟨η, η′⟩ = ∫_{∆_d} η(π) η′(π) dP,
and g_k(·) is a set of functions such that g_k(π) = u_kᵀ π. Therefore, the complement of the kernel of W must be contained within the span of the g_k, such that W must have finite spectrum with rank at most d.
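As an illustrative sketch of this special case (again, not from the paper), one can generate a mixed membership model as follows; the Dirichlet parameter and the community matrix B are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 4
Pi = rng.dirichlet(alpha=np.full(d, 0.3), size=n)   # n x d membership vectors pi_u on the simplex
B = 0.1 * np.ones((d, d)) + 0.6 * np.eye(d)         # symmetric community matrix with entries in [0, 1]
F = Pi @ B @ Pi.T                                   # F(u, v) = pi_u^T B pi_v, so rank(F) <= d
A = rng.random((n, n)) < F                          # binary interactions with P(edge) = F(u, v)
A = np.triu(A, 1)
A = (A | A.T).astype(float)                         # symmetric observed adjacency matrix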
2.1.2. Finite Degree Polynomials. We demonstrate that finite degree polynomials lead to latent variable models with finite spectrum. Let f (x, y) be
any finite degree symmetric polynomial, represented in the form
    f(x, y) = Σ_{i=0}^{d} Σ_{j=0}^{d} c_{ij} x^i y^j,
where symmetry implies that c_{ij} = c_{ji} for all i, j. Let x̄ = (1, x, x², . . . , x^d) and ȳ = (1, y, y², . . . , y^d), and let C denote the (d+1)×(d+1) matrix with entries [c_{ij}]. It follows that f(x, y) = x̄ᵀ C ȳ. Then we can use the same argument as above to show that the Hilbert-Schmidt integral operator associated to the kernel f has finite spectrum. Since C is symmetric, there exists a diagonal decomposition C = U Λ̃ Uᵀ with u_k denoting the eigenvectors, such that the function f corresponds to
    f(x, y) = Σ_{k=1}^{d} λ̃_k (u_kᵀ x̄)(u_kᵀ ȳ).
Let W : L²(X_1, [0, 1]) → L²(X_1, [0, 1]) denote the Hilbert-Schmidt integral operator associated to the kernel f, such that for some function η ∈ L²(X_1, [0, 1]),
    (Wη)(y) = ∫_{X_1} f(x, y) η(x) dP
            = ∫_{X_1} Σ_{k=1}^{d} λ̃_k (u_kᵀ x̄)(u_kᵀ ȳ) η(x) dP
            = Σ_{k=1}^{d} λ̃_k (u_kᵀ ȳ) ∫_{X_1} (u_kᵀ x̄) η(x) dP
            = Σ_{k=1}^{d} λ̃_k g_k(y) ⟨g_k, η⟩,
where g_k(·) is a set of functions such that g_k(y) = u_kᵀ ȳ. Therefore, the complement of the kernel of W must be contained within the span of the g_k, such that W must have finite spectrum with rank at most d.
2.2. Variation: Asymmetric. If the model that we would like to learn is
in fact asymmetric, we can actually transform it to an equivalent symmetric
model. Consider an n × m matrix F which we would like to learn, where E is
the set of observed indices generated via uniform sampling with density p,
and we assume the independent bounded noise model where Z(u, v) ∈ [0, 1],
and E[Z(u, v)] = Fuv . Row u ∈ [n] is associated with latent variable αu ∈ X1
drawn independently and as per distribution P1 , and column v ∈ [m] is
associated with latent variable βv ∈ X2 drawn independently and as per
distribution P2 . Let F (u, v) = f (αu , βv ) ∈ [0, 1], where f has finite spectrum
according to
    f(α_u, β_v) = Σ_{k=1}^{d} λ_k q_{1k}(α_u) q_{2k}(β_v),
where q1k and q2k are orthonormal `2 functions with respect to the measures
P1 and P2 respectively.
We show how to translate this to an approximately equivalent symmetric
model satisfying the assumptions in Section 2.1. We augment the latent
space to be X_1′ = (X_1, 0) ∪ (0, X_2) ⊂ X_1 × X_2, where
    P′(y) = (n/(n+m)) P_1  if y ∈ (X_1, 0),   and   P′(y) = (m/(n+m)) P_2  if y ∈ (0, X_2).
We define the symmetric function f 0 to be
    f′(θ_u, θ_v) = 0                 if θ_u, θ_v ∈ (X_1, 0),
    f′(θ_u, θ_v) = 0                 if θ_u, θ_v ∈ (0, X_2),
    f′(θ_u, θ_v) = f(α_u, β_v)       if θ_u = (α_u, 0), θ_v = (0, β_v),
    f′(θ_u, θ_v) = f(α_v, β_u)       if θ_u = (0, β_u), θ_v = (α_v, 0).
Then we can verify that f′ has finite spectrum
    f′(θ_u, θ_v) = Σ_{k=1}^{d} (λ_k √(nm)/(n+m)) q_k(θ_u) q_k(θ_v) − Σ_{k=1}^{d} (λ_k √(nm)/(n+m)) q_k′(θ_u) q_k′(θ_v),
with orthogonal eigenfunctions
    q_k(θ_u) = ((n+m)/(2n))^{1/2} q_{1k}(α_u)   if θ_u = (α_u, 0),
    q_k(θ_u) = ((n+m)/(2m))^{1/2} q_{2k}(β_u)   if θ_u = (0, β_u),
and
    q_k′(θ_u) = ((n+m)/(2n))^{1/2} q_{1k}(α_u)    if θ_u = (α_u, 0),
    q_k′(θ_u) = −((n+m)/(2m))^{1/2} q_{2k}(β_u)   if θ_u = (0, β_u).
We can verify that this in fact equals f′ as defined above, and that the q_k, q_k′ functions are orthonormal:
    ∫_{X_1′} q_k(y)² dP′(y) = (1/2) ∫_{X_1} q_{1k}(α)² dP_1(α) + (1/2) ∫_{X_2} q_{2k}(β)² dP_2(β) = 1/2 + 1/2 = 1.
The same holds for q_k′. For k ≠ h,
    ∫_{X_1′} q_k(y) q_h(y) dP′(y) = (1/2) ∫_{X_1} q_{1k}(α) q_{1h}(α) dP_1(α) + (1/2) ∫_{X_2} q_{2k}(β) q_{2h}(β) dP_2(β) = 0.
And for q_k′ and q_k,
    ∫_{X_1′} q_k(y) q_k′(y) dP′(y) = (1/2) ∫_{X_1} q_{1k}(α) q_{1k}(α) dP_1(α) + (1/2) ∫_{X_2} q_{2k}(β)(−q_{2k}(β)) dP_2(β) = 1/2 − 1/2 = 0.
Therefore, the (n + m) × (n + m) matrix F′ is effectively a permuted version of
    [ 0    F ]
    [ Fᵀ   0 ].
The only difference in these models is that in the original asymmetric model
we fix n vertices with latent type in X1 , and m vertices with latent type in
X2 , whereas in the symmetric model we sample n + m vertices total, where
each vertex is type X_1 with probability n/(n+m), and type X_2 with probability m/(n+m). These two models are asymptotically equivalent for large n, m, as long as the ratio remains fixed.
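The reduction above is mechanical; the following short sketch (illustrative only, not from the paper) carries it out on an observed n × m data matrix and its observation pattern.

import numpy as np

def symmetrize(M_obs, mask):
    """Embed asymmetric observations into the (n+m) x (n+m) symmetric model of Section 2.2.

    M_obs: n x m observed data; mask: n x m boolean observation pattern."""
    n, m = M_obs.shape
    big_M = np.zeros((n + m, n + m))
    big_mask = np.zeros((n + m, n + m), dtype=bool)
    big_M[:n, n:] = M_obs          # upper-right block F
    big_M[n:, :n] = M_obs.T        # lower-left block F^T
    big_mask[:n, n:] = mask
    big_mask[n:, :n] = mask.T
    return big_M, big_mask

# Any estimate F_hat_big of the symmetric matrix can be read back as F_hat_big[:n, n:].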
2.3. Variation: Categorical Labels. If the edge labels are categorical instead of real-valued, then it doesn’t make sense to compute the expected
value averaged over different edge label values, but rather the goal would
be instead to estimate the distribution over the different categories or labels. This is particularly suitable for a setting in which there is no obvious
metric between the categories such that an aggregate statistic would not be
meaningful.
Assume that there are a finite number m of category types, and that the distribution over categories is a function of the latent variables. Furthermore, we assume that
the model within each category satisfies the assumptions mentioned above in
Section 2.1. Assume a uniform sampling model with density p. Each u ∈ [n]
is associated to a latent variable αu ∈ X1 , drawn independently as per distribution P1 over a compacted bounded space X1 . F (u, v) = f (αu , αv ) ∈ ∆m ,
where f is a symmetric function, and ∆m denotes the m dimensional probability simplex. For each observed entry (u, v) ∈ E, the observed datapoint Z(u, v) ∈ [m] is drawn from the distribution f (αu , αv ), such that
P(Z(u, v) = i) = fi (αu , αv ). Assume that each of the functions {fi }i∈[m] has
finite spectrum with rank di when regarded as an integral operator,
    f_i(α_u, α_v) = Σ_{k=1}^{d_i} λ_{ik} q_{ik}(α_u) q_{ik}(α_v),
where the q_{ik} are orthonormal ℓ² functions such that
    ∫_{X_1} q_{ik}(y)² dP_1(y) = 1   and   ∫_{X_1} q_{ik}(y) q_{ih}(y) dP_1(y) = 0 for k ≠ h.
Our basic estimator can be modified to estimate a categorical distribution,
where the error is measured in terms of total variation distance. Let M^i be an n × n binary matrix where M^i_{uv} = I(M(u, v) = i), such that E[M^i_{uv} | (u, v) ∈ E] = P(Z(u, v) = i) = f_i(α_u, α_v). We can apply the algorithm to the data
(E, M i ) to estimate fi for each i. Since the estimates should sum to 1 across
all categories, we apply the algorithm to the data matrices associated to the
first m − 1 categories, i.e. M i for i ∈ [m − 1], and then we let the estimate
for the m-th category be equal to 1 minus the sum of the m − 1 earlier
estimates.
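A minimal sketch of this reduction (illustrative; the routine `estimate` stands in for the matrix estimation algorithm of Section 3 and is an assumption here):

import numpy as np

def estimate_categorical(M, mask, m, estimate):
    """M: n x n matrix with labels in {1, ..., m} on observed entries; mask: boolean observations."""
    n = M.shape[0]
    F_hat = np.zeros((m, n, n))
    for i in range(1, m):                                  # categories 1 .. m-1
        M_i = np.where(mask, (M == i).astype(float), 0.0)  # indicator data matrix M^i
        F_hat[i - 1] = estimate(M_i, mask)                 # estimate of P(Z(u, v) = i)
    F_hat[m - 1] = 1.0 - F_hat[: m - 1].sum(axis=0)        # m-th category = 1 - sum of the others
    return F_hat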
3. Algorithm. The algorithm that we propose uses the concept of local
approximation, first determining which datapoints are similar in value, and
then computing neighborhood averages for the final estimate. All similarity-based collaborative filtering methods have the following basic format:
1. Compute distances between pairs of vertices, e.g.,
(3.1)    dist(u, a) ≈ ∫_{X_1} (f(α_u, t) − f(α_a, t))² dP_1(t).
2. Form estimate by averaging over “nearby” datapoints,
(3.2)    F̂(u, v) = (1/|E_uv|) Σ_{(a,b)∈E_uv} M(a, b),
where E_uv := {(a, b) ∈ E s.t. dist(u, a) < ξ(n), dist(v, b) < ξ(n)}.
We will choose the threshold ξ(n) depending on dist in order to guarantee
that it is small enough to drive the bias to zero, ensuring the included datapoints are close in value, yet large enough to reduce the variance, ensuring
|Euv | diverges.
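The two-step scheme in (3.1)–(3.2) can be written in a few lines; the sketch below (illustrative, with hypothetical argument names) takes any precomputed pairwise distance matrix and averages the observed entries whose endpoints fall within the threshold.

import numpy as np

def cf_estimate(u, v, M, mask, dist, xi):
    """Estimate F(u, v) by averaging M(a, b) over {(a, b) in E : dist(u, a) < xi, dist(v, b) < xi}."""
    close_u = np.where(dist[u] < xi)[0]
    close_v = np.where(dist[v] < xi)[0]
    sub_mask = mask[np.ix_(close_u, close_v)]
    if not sub_mask.any():
        return np.nan                       # no nearby observed datapoints
    return M[np.ix_(close_u, close_v)][sub_mask].mean()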
3.1. Algorithm Intuition. Various similarity-based algorithms differ in
the distance computation (Step 1). For dense datasets, i.e. p = ω(n−1/2 ),
previous works have proposed and analyzed algorithms which approximate
the L2 distance of (3.1) by using variants of the finite sample approximation,
P
dist(u, a) = |O1ua | y∈Oua (F (u, y) − F (a, y))2 ,
(3.3)
where y ∈ Oua iff (u, y) ∈ E and (a, y) ∈ E Airoldi, Costa and Chan (2013);
Zhang, Levina and Zhu (2015); Lee et al. (2016). For sparse datasets, with
high probability, Oua = ∅ for almost all pairs (u, a), such that this distance
cannot be computed.
In this paper we are interested in the sparse setting when p is significantly smaller than n−1/2 , down to the lowest threshold of p = ω(n−1 ). If
we visualize the data via a graph with edge set E, then (3.3) corresponds
to comparing common neighbors of vertices u and a. A natural extension
when u and a have no common neighbors, is to instead compare the r-hop
neighbors of u and a, i.e. vertices y which are at distance exactly r from
both u and a. We compare the product of weights along edges in the path
from u to y and a to y respectively, which in expectation approximates
(3.4)    ∫_{X_1^{r−1}} f(α_u, t_1) (∏_{s=1}^{r−2} f(t_s, t_{s+1})) f(t_{r−1}, α_y) d(∏_{i∈[r−1]} P_1(t_i))
             = Σ_k λ_k^r q_k(α_u) q_k(α_y)
             = e_uᵀ Qᵀ Λ^r Q e_y.
We choose a large enough r such that there are sufficiently many “common”
vertices y which have paths to both u and a, guaranteeing that our distance
can be computed from a sparse dataset.
3.2. Algorithm Details. We present and discuss details of each step of the
algorithm, which primarily involves computing pairwise distances (or similarities) between vertices. The parameters of the algorithm are c1 , c2 , c3 , r, ξ1 (n)
and ξ2 (n).
Step 1: Sample Splitting. We partition the datapoints into disjoint sets,
which are used in different steps of the computation to minimize correlation
across steps for the analysis. Each edge in E is independently placed into E1 ,
E2 , or E3 , with probabilities c1 , c2 , and 1 − c1 − c2 respectively. Matrices M1 ,
M2 , and M3 contain information from the subset of the data in M associated
to E1 , E2 , and E3 respectively. M1 is used to define local neighborhoods of
each vertex, M2 is used to compute similarities of these neighborhoods, and
M3 is used to average over datapoints for the final estimate in (3.2).
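A small illustrative sketch of this splitting step (the constants c1 and c2 below are placeholders; the paper treats them as algorithm parameters):

import numpy as np

def split_samples(M, mask, c1=1.0/3, c2=1.0/3, seed=0):
    """Assign each observed edge independently to E1, E2, E3 with probabilities c1, c2, 1-c1-c2."""
    rng = np.random.default_rng(seed)
    coins = rng.random(M.shape)
    coins = np.triu(coins, 1)
    coins = coins + coins.T                  # use the same coin for (u, v) and (v, u)
    in1 = mask & (coins < c1)
    in2 = mask & (coins >= c1) & (coins < c1 + c2)
    in3 = mask & (coins >= c1 + c2)
    M1, M2, M3 = (np.where(s, M, 0.0) for s in (in1, in2, in3))
    return (M1, in1), (M2, in2), (M3, in3)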
Step 2: Expanding the Neighborhood. We first expand local neighborhoods of radius r around each vertex. Let Su,s denote the set of vertices
which are at distance s from vertex u in the graph defined by edge set E1 .
Specifically, i ∈ Su,s if the shortest path in G1 = ([n], E1 ) from u to i has
a length of s. Let Bu,s denote the set of vertices which are at distance at
most s from vertex u in the graph defined by E1, i.e. B_{u,s} = ⋃_{t=1}^{s} S_{u,t}. Let
Tu denote a breadth-first tree in G1 rooted at vertex u. The breadth-first
property ensures that the length of the path from u to i within Tu is equal
to the length of the shortest path from u to i in G1 . If there is more than one
valid breadth-first tree rooted at u, choose one uniformly at random. Let
Nu,r ∈ [0, 1]n denote the following vector with support on the boundary of
the r-radius neighborhood of vertex u (we also call Nu,r the neighborhood
boundary):
$$N_{u,r}(i) = \begin{cases} \prod_{(a,b) \in \mathrm{path}_{T_u}(u,i)} M_1(a, b) & \text{if } i \in S_{u,r}, \\ 0 & \text{if } i \notin S_{u,r}, \end{cases}$$
where $\mathrm{path}_{T_u}(u, i)$ denotes the set of edges along the path from u to i in the tree $T_u$. The sparsity of $N_{u,r}$ is equal to $|S_{u,r}|$, and the value of the coordinate $N_{u,r}(i)$ is equal to the product of weights along the path from u to i. Let $\tilde N_{u,r}$ denote the normalized neighborhood boundary, $\tilde N_{u,r} = N_{u,r}/|S_{u,r}|$. We will choose the radius $r = \frac{6 \ln(1/p)}{8 \ln(c_1 p n)}$.
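A sketch of this expansion step in Python is given below. The helper name is ours; as a simplification, ties among breadth-first tree parents are broken by visit order rather than uniformly at random, and the normalized boundary $\tilde N_{u,r}$ is returned directly.

```python
from collections import deque
import numpy as np

def neighborhood_boundary(M1, E1, u, r):
    """Sketch of Step 2: breadth-first tree of radius r rooted at u in the
    graph E1; N[i] is the product of M1-weights along the tree path from u
    to i, restricted to vertices at distance exactly r (the set S_{u,r})."""
    n = M1.shape[0]
    depth = np.full(n, -1)
    path_prod = np.zeros(n)
    depth[u], path_prod[u] = 0, 1.0
    frontier = deque([u])
    while frontier:
        a = frontier.popleft()
        if depth[a] == r:
            continue                              # do not expand past radius r
        for b in np.flatnonzero(E1[a]):
            if depth[b] == -1:                    # first visit fixes the tree parent
                depth[b] = depth[a] + 1
                path_prod[b] = path_prod[a] * M1[a, b]
                frontier.append(b)
    N = np.where(depth == r, path_prod, 0.0)
    boundary_size = int((depth == r).sum())
    return (N / boundary_size if boundary_size else N), boundary_size
```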
Step 3: Computing the distances. We present two variants for estimating
the distance.
1. For each pair (u, v), compute $\mathrm{dist}_1(u, v)$ according to
$$\mathrm{dist}_1(u, v) = \tfrac{1-c_1 p}{c_2 p} \left(\tilde N_{u,r} - \tilde N_{v,r}\right)^T M_2 \left(\tilde N_{u,r+1} - \tilde N_{v,r+1}\right).$$
2. For each pair (u, v), compute the distance according to
$$\mathrm{dist}_2(u, v) = \sum_{i \in [d']} z_i \Delta_{uv}(r, i),$$
where $\Delta_{uv}(r, i)$ is defined as
$$\Delta_{uv}(r, i) = \tfrac{1-c_1 p}{c_2 p} \left(\tilde N_{u,r} - \tilde N_{v,r}\right)^T M_2 \left(\tilde N_{u,r+i} - \tilde N_{v,r+i}\right),$$
and $z \in \mathbb{R}^{d'}$ is a vector that satisfies $\Lambda^{2r+2} \tilde\Lambda z = \Lambda^2 \mathbf{1}$. The vector z always exists and is unique because $\tilde\Lambda$ is a Vandermonde matrix (recall definitions from Section 2.1), and $\Lambda^{-2r} \mathbf{1}$ lies within the span of its columns.
Computing $\mathrm{dist}_1$ does not require knowledge of the spectrum of f. In our analysis we prove that the expected squared error of the estimate computed in (3.2) using $\mathrm{dist}_1$ converges to zero with n for $p = \omega(n^{-1+\epsilon})$ for some $\epsilon > 0$, i.e. p must be polynomially larger than $n^{-1}$. Although computing $\mathrm{dist}_2$ requires full knowledge of the eigenvalues $(\lambda_1, \dots, \lambda_d)$ to compute the vector z, the expected squared error of the estimate computed in (3.2) using $\mathrm{dist}_2$ converges to zero for $p = \omega(n^{-1})$, which includes the sparser settings where p is only larger than $n^{-1}$ by polylogarithmic factors. It seems plausible that the technique employed by Abbe and Sandon (2015b) could be used to design a modified algorithm which does not need prior knowledge of the spectrum. They achieve this for the stochastic block model case by bootstrapping the algorithm with a method which estimates the spectrum first and then computes pairwise distances with the estimated eigenvalues.
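Before moving to Step 4, a small sketch of the two distance statistics follows. The helper names are ours; since the coefficient vector z for $\mathrm{dist}_2$ requires the spectrum, the sketch simply assumes z is supplied.

```python
import numpy as np

def delta_uv(Nt_u, Nt_v, Nt_u_i, Nt_v_i, M2, c1, c2, p):
    """Sketch of one measurement Delta_uv(r, i):
    ((1 - c1*p)/(c2*p)) * (N~_{u,r} - N~_{v,r})^T M2 (N~_{u,r+i} - N~_{v,r+i})."""
    return (1.0 - c1 * p) / (c2 * p) * (Nt_u - Nt_v) @ M2 @ (Nt_u_i - Nt_v_i)

def dist1(Nt_u_r, Nt_v_r, Nt_u_r1, Nt_v_r1, M2, c1, c2, p):
    """dist1 uses the single measurement at radii (r, r+1)."""
    return delta_uv(Nt_u_r, Nt_v_r, Nt_u_r1, Nt_v_r1, M2, c1, c2, p)

def dist2(deltas, z):
    """dist2 combines the d' measurements with coefficients z (assumed given,
    since computing z requires knowledge of the spectrum)."""
    return float(np.dot(z, deltas))
```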
Step 4: Averaging datapoints to produce final estimate. The estimate $\hat F(u, v)$ is computed by averaging over nearby points defined by the distance estimates $\mathrm{dist}_1$ (or $\mathrm{dist}_2$). Recall that $B_q \ge 1$ was assumed in the model definition to upper bound $\sup_{y \in [0,1]} |q_k(y)|$.
Let $E_{uv1}$ denote the set of undirected edges (a, b) such that $(a, b) \in E_3$ and both $\mathrm{dist}_1(u, a)$ and $\mathrm{dist}_1(v, b)$ are less than $\xi_1(n) = 33 B_q d |\lambda_1|^{2r+1} (c_1 p n)^{-\frac12 + \theta}$. Here $\theta \in (0, \frac14)$ is a parameter whose choice may affect the performance of the algorithm. The final estimate $\hat F(u, v)$ produced by using $\mathrm{dist}_1$ is computed by averaging over the undirected edge set $E_{uv1}$,
$$\hat F(u, v) = \frac{1}{|E_{uv1}|} \sum_{(a,b) \in E_{uv1}} M_3(a, b). \tag{3.5}$$
Let $E_{uv2}$ denote the set of undirected edges (a, b) such that $(a, b) \in E_3$, and both $\mathrm{dist}_2(u, a)$ and $\mathrm{dist}_2(v, b)$ are less than $\xi_2(n) = 33 B_q d |\lambda_1| (c_1 p n)^{-\frac12 + \theta}$. Again, $\theta \in (0, \frac14)$ is a parameter whose choice may affect the performance of the algorithm. The final estimate $\hat F(u, v)$ produced by using $\mathrm{dist}_2$ is computed by averaging over the undirected edge set $E_{uv2}$,
$$\hat F(u, v) = \frac{1}{|E_{uv2}|} \sum_{(a,b) \in E_{uv2}} M_3(a, b). \tag{3.6}$$
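Putting the pieces together, a minimal end-to-end sketch for a single entry is given below, reusing the hypothetical helpers from the earlier sketches and the $\mathrm{dist}_1$ threshold $\xi_1(n)$; it is an illustration under those assumptions, not the exact implementation analyzed here.

```python
import numpy as np

def estimate_entry_dist1(M, observed, u, v, c1, c2, p, r, xi1):
    """End-to-end sketch of Steps 1-4 for one entry (u, v) using dist1."""
    (M1, M2, M3), (E1, E2, E3) = split_edges(M, observed, c1, c2)
    n = M.shape[0]
    # Normalized neighborhood boundaries at radii r and r+1 for every vertex.
    Nt = {(w, s): neighborhood_boundary(M1, E1, w, s)[0]
          for w in range(n) for s in (r, r + 1)}
    d_u = np.array([dist1(Nt[(u, r)], Nt[(a, r)], Nt[(u, r + 1)], Nt[(a, r + 1)],
                          M2, c1, c2, p) for a in range(n)])
    d_v = np.array([dist1(Nt[(v, r)], Nt[(b, r)], Nt[(v, r + 1)], Nt[(b, r + 1)],
                          M2, c1, c2, p) for b in range(n)])
    mask = E3 & np.outer(d_u < xi1, d_v < xi1)   # the set E_uv1
    return M3[mask].mean() if mask.any() else float("nan")
```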
3.3. Reducing computation by subsampling vertices. The most expensive part of the algorithm as stated above is that we would need to compute $n^2$ pairwise distances. If the network were generated from the stochastic block model, where the function f is piecewise constant, there would be only k types of vertices to distinguish amongst in order to determine the local distances. Therefore, if we had a representative vertex from each of the k communities, it would be sufficient to compare a vertex with the k representative vertices, instead of all n other vertices (which all repeat these k types). This idea was used in Abbe and Sandon (2016) to obtain a nearly-linear time algorithm for clustering.
In our setting, however, we do not have k distinct communities; rather, the function f may be smooth. We can extend this idea by subsampling sufficiently many "anchor" vertices $K \subset [n]$ that cover the space well, i.e. for any vertex $u \in [n]$, there exists some anchor vertex $i \in K$ which is "close" to u in the sense that $\|\Lambda Q(e_u - e_i)\|_2^2$ is small. Then for all n vertices, we compute the distances to each of the anchor vertices, and we let $\pi : [n] \to K$ be a mapping from each vertex to the anchor vertex that minimizes the estimated distance (as computed in Steps 2 and 3 above),
$$\pi(u) = \arg\min_{i \in K} \mathrm{dist}(u, i).$$
The final estimate is then given by
$$\hat F(u, v) = \hat F(\pi(u), \pi(v)) = \frac{1}{|E_{\pi(u)\pi(v)}|} \sum_{(a,b) \in E_{\pi(u)\pi(v)}} M_3(a, b),$$
where $E_{\pi(u)\pi(v)}$ denotes the set of undirected edges (a, b) such that $(a, b) \in E_3$ and both $\mathrm{dist}(\pi(u), a)$ and $\mathrm{dist}(\pi(v), b)$ are less than some threshold $\xi(n)$ (as defined in Step 4 above). We can compute $E_{\pi(u)\pi(v)}$ because, for each anchor vertex, we have estimates of the distances to all other vertices. The pairwise distance calculations required by the above algorithm scale as $O(n|K|)$.
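A small sketch of this anchor shortcut follows; the helper name and the assumption that an $n \times |K|$ matrix of estimated vertex-to-anchor distances has already been computed (e.g. with $\mathrm{dist}_1$ or $\mathrm{dist}_2$) are ours.

```python
import numpy as np

def anchor_estimates(M3, E3, dist_to_anchors, xi):
    """Sketch: map each vertex to its closest anchor, compute one averaged
    estimate per anchor pair, and reuse it via F_hat(u, v) = F_hat(pi(u), pi(v))."""
    K = dist_to_anchors.shape[1]
    nearest = np.argmin(dist_to_anchors, axis=1)   # pi(u), as an index in [K]
    close = dist_to_anchors < xi                   # closeness masks per anchor
    F_anchor = np.full((K, K), np.nan)
    for i in range(K):
        for j in range(K):
            mask = E3 & np.outer(close[:, i], close[:, j])
            if mask.any():
                F_anchor[i, j] = M3[mask].mean()
    return F_anchor[np.ix_(nearest, nearest)]      # n x n matrix of estimates
```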
3.4. Choosing radius parameter r. The parameter r used to grow the local neighborhoods in Step 1 of the algorithm must be chosen very precisely. When r is either too small or too large, the size of the neighborhood boundaries will be too small, such that there is not sufficient overlap. The vectors $\tilde N_{u,r}$ and $\tilde N_{v,r+1}$ will be too sparse, and the measurements $\tilde N_{u,r}^T M_2 \tilde N_{v,r+1}$ will be noisy. Therefore, our analysis recommends that r is chosen to satisfy the following conditions:
$$r + d' \le \frac{7 \ln(1/c_1 p)}{8 \ln(9 c_1 p n/8)} = \Theta\left(\frac{\ln(1/c_1 p)}{\ln(c_1 p n)}\right), \tag{3.7}$$
$$r + \tfrac12 \ge \frac{6 \ln(1/p)}{8 \ln(7 |\lambda_d|^2 c_1 p n/8|\lambda_1|)} = \Theta\left(\frac{\ln(1/p)}{\ln(c_1 p n)}\right). \tag{3.8}$$
In order to guarantee that there exists an integer value of r which satisfies (3.7) and (3.8), we need to impose additional restrictions on $c_1$, p, and $|\lambda_d|$ (we assume $|\lambda_1|$ is constant with respect to n). The assumption that $|\lambda_d| = \omega((c_1 p n)^{-1/4})$ guarantees that the left hand side of the second inequality in (3.7) grows asymptotically with n. We need to ensure that the difference between the upper and lower bounds on r is at least 1, which is guaranteed (when using $\mathrm{dist}_2$) by
$$\frac{7 \ln(1/c_1 p)}{8 \ln(9 c_1 p n/8)} - \frac{6 \ln(1/p)}{8 \ln(7 \lambda_d^2 c_1 p n/8|\lambda_1|)} \ge d' - \tfrac12 + 1.$$
Because $|\lambda_1|$ is constant with respect to n, asymptotically this inequality reduces to
$$\frac{7 \ln(1/c_1 p)}{8 \ln(c_1 p n)} - \frac{6 \ln(1/p)}{8 \ln(\lambda_d^2 c_1 p n)} \ge d' + \tfrac12.$$
If $p = o(n^{-1+\epsilon})$ for all $\epsilon > 0$, then for a constant $c_1 = \Theta(1)$, the inequality is satisfied asymptotically if we guarantee that $|\lambda_d| = \omega((c_1 p n)^{-1/15})$ and
$d = o(\ln(n)/\ln(pn))$. If $p = n^{-1+\epsilon}$ for some $\epsilon > 0$, then the inequality is satisfied asymptotically if we assume that $|\lambda_d| = \omega(n^{-\gamma})$ for all $\gamma > 0$, and $c_1$ is chosen such that $c_1 p n = \omega(1)$ and $c_1 p n = o(n^{(6\epsilon+1)/(8d'+11)}) = o((p^6 n^7)^{1/(8d'+11)})$. When the algorithm is using $\mathrm{dist}_1$, the bound simplifies by plugging in 1 instead of $d'$.
3.5. Computational Complexity. Let us discuss the precise computational complexity of the algorithm. Since the algorithm involves growing local neighborhoods and computing similarities between these neighborhoods, the cost of computation depends on the size of these neighborhoods, which is denoted by $|B_{u,r+d'}| = \sum_{s=1}^{r+d'} |S_{u,s}|$. In Lemmas A.1 and A.2, we proved that with high probability the neighborhood sizes grow exponentially, $|S_{u,s}| < (\frac{9 c_1 p n}{8})^s$, assuming that $(\frac{9 c_1 p n}{8})^s < (c_1 p)^{-7/8}$. The choice of $r = \frac{6 \ln(1/p)}{8 \ln(c_1 p n)}$ guarantees that this assumption holds for all $s \le r + d'$ for a sufficiently large matrix, as long as $d'$ is not too large (e.g. any constant). It follows that $|B_{u,r+d'}|$ is dominated by $|S_{u,r+d'}|$ and is bounded above by $(c_1 p)^{-7/8}$ with high probability.
First we discuss the original algorithm, which compares distances between all $n^2$ pairs of vertices. Recall that we split the edges into sets $E_1$, $E_2$, $E_3$, which takes $O(|E|) = O(pn^2)$ random coin tosses, which we assume to be $O(1)$ amount of work. Then we build the $(r+d')$-radius neighborhood vectors $N_{u,r+d'}$, which takes $O(|B_{u,r+d'}|) = O(p^{-7/8})$ time, for a total of $O(n p^{-7/8})$ over all n vertices. The bound of $p^{-7/8}$ comes from the analysis and is related to how the radius r is chosen to satisfy the conditions in Lemmas A.1 and A.2. These lemmas guarantee that the neighborhood grows exponentially with an expansion factor of $\Theta(c_1 p n)$. When r is too large, we "run out of vertices", such that the neighborhood growth rate slows down.
Next we compute the distances, which involve at most $d'$ ($\le d$) computations of the form $(\tilde N_{u,r} - \tilde N_{v,r})^T M_2 (\tilde N_{u,r+i} - \tilde N_{v,r+i})$. The complexity of computing the inner product of two vectors is bounded above by the sparsity of one of the vectors. By definition, the sparsity of $\tilde N_{u,r} - \tilde N_{v,r}$ is bounded by $(|B_{u,r}| + |B_{v,r}|) = O(p^{-7/8})$, and in expectation the sparsity of any row or column of $M_2$ is $c_2 p n$, such that the average sparsity of $(\tilde N_{u,r} - \tilde N_{v,r})^T M_2$ is bounded by $O(p^{-7/8} c_2 p n)$. Therefore the computation of $\mathrm{dist}_1(u, v)$ given the neighborhood vectors is bounded by $O(p^{-7/8} c_2 p n)$, with a total of $O(p^{-7/8} c_2 p n^3)$ for all $n^2$ pairwise distances. Since $\mathrm{dist}_2(u, v)$ requires computing the above expression for $i \in [d']$, this is further multiplied by $d'$. In addition, computing $\mathrm{dist}_2$ requires the vector z, which involves computing the pseudoinverse of a $d' \times d$ matrix. This takes at most $O(d^3)$ time. Finally we have to compute the estimate for each value by
averaging over datapoints within the neighborhoods. The number of datapoints included in the weighted average is $|E_{uv2}| = O(pn^2)$, which is as large as $O(pn^4)$ computation if we separately average for each of the $n(n-1)/2$ distinct locations to estimate. Thus, the total computational complexity can be bounded above as
$$O(pn^2 + n p^{-7/8} + d' p^{-7/8} p n^3 + d^3 + pn^4) = O(d' p^{-7/8} p n^3 + pn^4), \tag{3.9}$$
where $d^3 = O(pn^4)$ for $pn = \omega(1)$, since $d < n$.
Next we discuss the computational complexity of the modified algorithm, which selects anchor vertices and computes distances only to the anchor vertices. First we select the $|K|$ vertices at random, which takes $O(|K| \log(n))$ time. Then we split the edges into sets $E_1$, $E_2$, $E_3$, which takes $O(|E|) = O(pn^2)$ time. We build the r-radius neighborhood vectors $N_{u,r}$, which takes $O(|B_{u,r}|) = O((\frac{9 c_1 p n}{8})^r) = O(p^{-7/8})$ time, for a total of $O(n p^{-7/8})$ over all n vertices. Next we compute the distances, which involve at most $d'$ computations of the form $(\tilde N_{u,r} - \tilde N_{v,r})^T M_2 (\tilde N_{u,r+i} - \tilde N_{v,r+i})$. The complexity of computing the inner product of two vectors is bounded above by the sparsity of one of the vectors. Therefore, since the sparsity of $(\tilde N_{u,r} - \tilde N_{v,r})^T M_2$ is bounded by $(|B_{u,r}| + |B_{v,r}|) c_2 p n$, the computation is bounded by $O(p^{1/8} c_2 n)$ for each distance computation. This results in a total of $O(p^{1/8} c_2 n^2 |K|)$ for all $n|K|$ pairwise distances between any vertex and an anchor. If we are computing $\mathrm{dist}_2$, we have an additional $O(d^3)$ from calculating the pseudoinverse of $\tilde\Lambda$. As we compute the distances, we can keep track, for each vertex, of which anchor is the closest, such that we know $\pi(u)$ for every $u \in [n]$ without additional computational cost. Finally we have to compute the estimate for each value by averaging over datapoints within the neighborhoods. Since $\hat F(u, v) = \hat F(\pi(u), \pi(v))$, we only need to compute estimates for $|K|(|K|-1)/2$ locations, and the rest of the estimates follow directly from the mapping $\pi$. The number of datapoints included in the weighted average is $|E_{uv2}| = O(pn^2)$, which results in at most $O(pn^2|K|^2)$ computation for all locations to estimate. This results in a total computational complexity of
$$O(|K| \log(n) + pn^2 + n p^{-7/8} + d' p^{-7/8} p n^2 |K| + p n^2 |K|^2) = O(d' p^{-7/8} p n^2 |K| + p n^2 |K|^2). \tag{3.10}$$
That is, subsampling the vertices can significantly reduce the computational complexity in (3.9) if $|K| \ll n$. We will show in our analysis that it is sufficient to choose $|K|$ on the order of $\Theta(d^{-3/2} (c_1 p n)^{\frac14 + \frac{3\theta}{2}})$. If the
data is very sparse, then |K| may be only logarithmic in n and still achieve
the same error guarantees as if we computed all pairwise distances.
3.6. Belief Propagation and Non-Backtracking Operator. The idea of comparing vertices by looking at larger radius neighborhoods was introduced in
Abbe and Sandon (2015a), and has connections to belief propagation Decelle
et al. (2011); Abbe and Sandon (2016) and the non-backtracking operator
Krzakala et al. (2013); Karrer, Newman and Zdeborová (2014); Mossel, Neeman and Sly (2017); Massoulié (2014); Bordenave, Lelarge and Massoulié
(2015). The non-backtracking operator was introduced to overcome the issue of sparsity. For sparse graphs, vertices with high-degree dominate the
spectrum, such that the informative components of the spectrum get hidden
behind the high degree vertices. The non-backtracking operator avoids paths
that immediately return to the previously visited vertex in a similar manner
as belief propagation, and its spectrum has been shown to be more well-behaved, perhaps adjusting for the high-degree vertices, which get visited
very often by paths in the graph. In our algorithm, the neighborhood paths
are defined by first selecting a rooted tree at each vertex, thus enforcing
that each vertex along a path in the tree is unique. This is important in our
analysis, as it guarantees that the distribution of vertices at the boundary of
each subsequent depth of the neighborhood is unbiased, since the sampled
vertices are freshly visited.
3.7. Knowledge of the observation set E. In our algorithm, we assumed that we observed the edge set E. Specifically, this means that we are able to distinguish between entries of the matrix that have value zero because they are not observed, i.e. $(i, j) \notin E$, and entries that were observed to have value zero, i.e. $(i, j) \in E$ and $M(i, j) = Z(i, j) = 0$. This fits well for applications such as recommendations, where the system does know which entries are observed or not. Some social network applications contain this information (e.g. Facebook would know if it has recommended a link which was then ignored), but other network datasets may lack this information, e.g. we do not know if a link is absent because observations are sparse, or because observations are dense but the probability of an edge is small.
4. Main Results. We prove bounds on the estimation error of our algorithm in terms of the mean squared error (MSE), defined as
$$\mathrm{MSE} \equiv \mathbb{E}\left[\frac{1}{n(n-1)} \sum_{u \ne v} (\hat F(u, v) - F(u, v))^2\right].$$
It follows from the model that for any $\alpha_u, \alpha_v \in \mathcal{X}_1$,
$$\int_{\mathcal{X}_1} (f(\alpha_u, y) - f(\alpha_v, y))^2 \, dP_1(y) = \sum_{k=1}^{d} \lambda_k^2 (q_k(\alpha_u) - q_k(\alpha_v))^2 = \|\Lambda Q (e_u - e_v)\|_2^2.$$
The key part of the analysis is to show that the computed distances are in fact good estimates of $\|\Lambda Q(e_u - e_v)\|_2^2$. The analysis essentially relies on showing that the neighborhood growth around a vertex behaves according to its expectation, according to some properly defined notion. The radius r must be small enough to guarantee that the growth of the size of the neighborhood boundary is exponential, increasing at a factor of approximately $c_1 p n$. However, if the radius is too small, then the boundaries of the respective neighborhoods of the two chosen vertices would have a small intersection, so that estimating the similarities based on the small intersection of datapoints would result in high variance. Therefore, the choice of r is critical to the algorithm and analysis. We are able to prove bounds on the squared error when r is chosen to satisfy the following conditions:
$$r + d' \le \frac{7 \ln(1/c_1 p)}{8 \ln(9 c_1 p n/8)} = \Theta\left(\frac{\ln(1/c_1 p)}{\ln(c_1 p n)}\right),$$
$$r + \tfrac12 \ge \frac{6 \ln(1/p)}{8 \ln(7 |\lambda_d|^2 c_1 p n/8|\lambda_1|)} = \Theta\left(\frac{\ln(1/p)}{\ln(c_1 p n)}\right).$$
Recall that $d'$ ($\le d$) denotes the number of distinct valued eigenvalues in the spectrum of f, $(\lambda_1, \dots, \lambda_d)$, and determines the number of different radius "measurements" involved in computing $\mathrm{dist}_2(u, v)$. Computing $\mathrm{dist}_1(u, v)$ only involves a single measurement, thus the left hand side of (3.7) can be reduced to $r + 1$ instead of $r + d'$. When p is above a threshold, we choose $c_1$ to decrease with n to ensure (3.7) can be satisfied, sparsifying the edge set $E_1$ used for expanding the neighborhood around a vertex. When the sample probability is polynomially larger than $n^{-1}$, i.e. $p = n^{-1+\epsilon}$ for some $\epsilon > 0$, these constraints imply that r is a constant with respect to n. However, if $p = \tilde O(n^{-1})$, we will need r to grow with n at a rate of $6 \ln(1/p)/8 \ln(c_1 p n)$.
Theorems 4.1 and 4.2 provide bounds on the error of any given entry with high probability. These high probability error bounds naturally yield a bound on the mean squared error as well. We state the result for $\mathrm{dist}_1$ first.
Theorem 4.1. Let the following hold for some $\theta \in (0, \frac14)$, $\epsilon > 0$:
1. Conditions on sampling probability, p.
$$p = o(n^{-1+1/(5+8\theta)}), \qquad p = \omega(n^{-1+\epsilon}), \qquad \text{and} \qquad c_1 p n = \omega(1). \tag{4.1}$$
2. Conditions on neighborhood radius, r. We have $r = \Theta(1)$, i.e. not scaling with n, such that
$$\left(\tfrac{9 c_1 p n}{8}\right)^{r+1} \le (c_1 p)^{-7/8} \qquad \text{and} \qquad \left(\tfrac{7 \lambda_d^2 c_1 p n}{8 |\lambda_1|}\right)^{r + \frac12} \ge p^{-6/8}. \tag{4.2}$$
3. Condition on spectrum of $\Lambda$. The smallest eigenvalue $\lambda_d$ is such that
$$|\lambda_d| = \omega\left((c_1 p n)^{-\min(\frac14,\, \frac12 + \theta)}\right). \tag{4.3}$$
4. Condition on distribution of latent features. Define $\xi_{1LB}, \xi_{1UB}$ as
$$\xi_{1LB}^2 = \frac{B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}} \qquad \text{and} \qquad \xi_{1UB}^2 = \left(\frac{|\lambda_1|}{|\lambda_d|}\right)^{2r} \frac{65 B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}. \tag{4.4}$$
They satisfy
$$\phi(\xi_{1LB}) = \omega\left(\max\left(p,\ n^{-3/4},\ \xi_{1LB}^{-1} \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right)\right)\right). \tag{4.5}$$
For any $u, v \in [n]$, with probability greater than
$$1 - O\left(d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right) + \exp\left(-\tfrac{c_3 p n^2 \xi_{1UB}^2 \phi(\xi_{1LB})^2}{24}\right) + \exp\left(-\tfrac{(n-1)\phi(\xi_{1LB})}{8}\right)\right),$$
the error of the estimate produced by the algorithm when using $\mathrm{dist}_1$ and parameter r is bounded by
$$|\hat F(u, v) - f(\alpha_u, \alpha_v)| = O(B_q \sqrt{d}\, \xi_{1UB}) = O\left(\left(\frac{|\lambda_1|}{|\lambda_d|}\right)^{r} \left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
And subsequently
$$\mathrm{MSE} = O(B_q^2 d\, \xi_{1UB}^2) + O\left(d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right) + \exp\left(-\tfrac{c_3 p n^2 \xi_{1UB}^2 \phi(\xi_{1LB})^2}{24}\right) + \exp\left(-\tfrac{(n-1)\phi(\xi_{1LB})}{8}\right)\right).$$
The following result provides a similar guarantee for $\mathrm{dist}_2$.
Theorem 4.2. Let the following hold for some $\theta \in (0, \frac14)$:
1. Conditions on sampling probability, p.
$$p = o(n^{-1+1/(5+8\theta)}) \qquad \text{and} \qquad c_1 p n = \omega(1). \tag{4.6}$$
2. Conditions on neighborhood radius, r.
$$\left(\tfrac{9 c_1 p n}{8}\right)^{r+d'} \le (c_1 p)^{-7/8} \qquad \text{and} \qquad \left(\tfrac{7 \lambda_d^2 c_1 p n}{8 |\lambda_1|}\right)^{r + \frac12} \ge p^{-6/8}. \tag{4.7}$$
3. Condition on spectrum of $\Lambda$. The smallest eigenvalue $\lambda_d$ is such that
$$|\lambda_d| = \omega\left((c_1 p n)^{-\min(\frac14,\, \frac12 + \theta)}\right). \tag{4.8}$$
The number of distinct magnitude eigenvalues $d'$ satisfies
$$r \ge \frac{(d'-1)\ln(2|\lambda_1|/\lambda_{gap}) + \ln(d')}{\ln(2)}, \tag{4.9}$$
$$\frac{\theta}{2(1+2\theta)} \ln\left(\frac{1}{p}\right) \ge (d'-1)\ln(2|\lambda_1|/\lambda_{gap}) + \ln(d'). \tag{4.10}$$
4. Condition on distribution of latent features. Define $\xi_{2LB}, \xi_{2UB}$ as
$$\xi_{2LB}^2 = \frac{B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}} \qquad \text{and} \qquad \xi_{2UB}^2 = \frac{65 B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}. \tag{4.11}$$
They satisfy
$$\phi(\xi_{2LB}) = \omega\left(\max\left(p,\ n^{-3/4},\ \xi_{2LB}^{-1} \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right)\right)\right). \tag{4.12}$$
For any $u, v \in [n]$, with probability greater than
$$1 - O\left(d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right) + \exp\left(-\tfrac{\xi_{2UB}^2 c_3 p n^2 \phi(\xi_{2LB})^2}{24}(1 - o(1))\right) + \exp\left(-\tfrac{(n-1)\phi(\xi_{2LB})}{8}\right)\right),$$
the error of the estimate produced by the algorithm when using $\mathrm{dist}_2$ and parameter r is bounded by
$$|\hat F(u, v) - f(\alpha_u, \alpha_v)| = O(B_q \sqrt{d}\, \xi_{2UB}) = O\left(\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
And subsequently,
$$\mathrm{MSE} = O\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right) + O\left(d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right) + \exp\left(-\tfrac{c_3 p n^2 \xi_{2UB}^2 \phi(\xi_{2LB})^2}{24}\right) + \exp\left(-\tfrac{(n-1)\phi(\xi_{2LB})}{8}\right)\right).$$
In order to guarantee that the mean squared error converges to zero, we need to enforce that $d = o((c_1 p n)^{\min(2\theta, (1-2\theta)/4)})$. Thus, choosing $\theta = 1/10$, this condition would imply $p = \omega(d^5 n^{-1})$ for constant $c_1$. Furthermore, we can in fact obtain bounds on the maximum error over all entries with high probability, as presented in Theorems 4.3 and 4.4. The results follow from using a union bound to control the error among all entries. We first state the result for $\mathrm{dist}_1$.
Theorem 4.3. Let (4.1)–(4.5) hold for some $\theta \in (0, \frac14)$, $\epsilon > 0$. Then, with probability at least
$$1 - O\left(n^2 \exp\left(-\tfrac{\xi_{1UB}^2 c_3 p n^2 \phi(\xi_{1LB})^2}{24}(1 - o(1))\right) + n \exp\left(-\tfrac{(n-1)\phi(\xi_{1LB})}{8}\right) + n d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)\right),$$
the maximum error over all entries of the estimate produced by the algorithm when using $\mathrm{dist}_1$ and parameter r is bounded by
$$\|\hat F - F\|_{\max} \equiv \max_{i,j} |\hat F(i, j) - F(i, j)| = O(B_q \sqrt{d}\, \xi_{1UB}) = O\left(\left(\frac{|\lambda_1|}{|\lambda_d|}\right)^{r} \left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
The following result provides a similar guarantee for $\mathrm{dist}_2$.
Theorem 4.4. Let (4.6)–(4.12) hold for some $\theta \in (0, \frac14)$. Then, with probability at least
$$1 - O\left(n^2 \exp\left(-\tfrac{\xi_{2UB}^2 c_3 p n^2 \phi(\xi_{2LB})^2}{24}(1 - o(1))\right) + n \exp\left(-\tfrac{(n-1)\phi(\xi_{2LB})}{8}\right) + n d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)\right),$$
the maximum error over all entries of the estimate produced by the algorithm when using $\mathrm{dist}_2$ and parameter r is bounded by
$$\|\hat F - F\|_{\max} = O(B_q \sqrt{d}\, \xi_{2UB}) = O\left(\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
The probabilities of error stated in Theorems 4.3 and 4.4 have n in the coefficient, so we additionally need to enforce that $(c_1 p n)^{2\theta} = \Omega(d \ln(dn))$ to guarantee that $n d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)$ converges to zero.
Local geometry. The local geometry of the latent probability measure with respect to the latent function affects the error results through the function $\phi$. As an example, consider the case where the latent variables are sampled from the uniform distribution on [0, 1] and the function f is piecewise L-Lipschitz with minimum piece size $\ell \le 1$. We can show that $\phi(\xi) \ge \min(\ell, \xi/2L)$, which we can then plug into the results in the above theorems. The above bounds can be reduced, as the terms involving $\phi$ will be dominated by others. We can show that the mean squared error for the algorithm using $\mathrm{dist}_1$ is bounded by
$$\mathrm{MSE} = O(B_q^2 d\, \xi_{1UB}^2) = O\left(\left(\frac{|\lambda_1|}{|\lambda_d|}\right)^{2r} \frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right) = O\left(d^2 (c_1 p n)^{-\frac12 + \theta}\right).$$
Similarly, if $\phi(\xi) \ge \min(\ell, \xi/2L)$, the mean squared error bound for the algorithm using $\mathrm{dist}_2$ is bounded by
$$\mathrm{MSE} = O\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right) = O\left(d^2 (c_1 p n)^{-\frac12 + \theta}\right).$$
For Theorems 4.3 and 4.4, if $\phi(\xi) \ge \min(\ell, \xi/2L)$, the probability of error expression is dominated by the term $n d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)$.
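As a minimal sketch of where the bound $\phi(\xi) \ge \min(\ell, \xi/2L)$ comes from, assume (as elsewhere in the analysis) that $\phi(\xi)$ may be taken as any lower bound on the probability that a freshly sampled vertex a satisfies $\|\Lambda Q(e_u - e_a)\|_2 \le \xi$. If $\alpha_a$ lands in the same Lipschitz piece as $\alpha_u$ with $|\alpha_u - \alpha_a| \le \xi/2L$, then
$$\|\Lambda Q(e_u - e_a)\|_2^2 = \int_{\mathcal{X}_1} (f(\alpha_u, y) - f(\alpha_a, y))^2\, dP_1(y) \le L^2 |\alpha_u - \alpha_a|^2 \le \xi^2/4 \le \xi^2,$$
and under the uniform measure on [0, 1] the set of such $\alpha_a$ has mass at least $\min(\ell, \xi/2L)$, so one may take $\phi(\xi) \ge \min(\ell, \xi/2L)$.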
Comparing results of $\mathrm{dist}_1$ and $\mathrm{dist}_2$. We compare the simplified mean squared error bounds of Theorems 4.1 and 4.2 in the setting where the latent variables are sampled from the uniform distribution on [0, 1] and the function f is piecewise L-Lipschitz with minimum piece size $\ell \le 1$. Since $\phi(\xi) \ge \min(\ell, \xi/2L)$, the MSE bound when using $\mathrm{dist}_1$ reduces to
$$O(B_q^2 d\, \xi_{1UB}^2) = O\left((|\lambda_1|/|\lambda_d|)^{2r} B_q^3 d^2 |\lambda_1| (c_1 p n)^{-(\frac12 - \theta)}\right),$$
while the MSE bound when using $\mathrm{dist}_2$ reduces to
$$O(B_q^2 d\, \xi_{2UB}^2) = O\left(B_q^3 d^2 |\lambda_1| (c_1 p n)^{-(\frac12 - \theta)}\right).$$
The only difference is the factor of $(|\lambda_1|/|\lambda_d|)^{2r}$, which grows to be large when r grows asymptotically with respect to n. As observed from the conditions stated in (3.7), r is constant with respect to n when $p = n^{-1+\epsilon}$ for some $\epsilon > 0$.
In fact, the reason why the error blows up by a factor of $(|\lambda_1|/|\lambda_d|)^{2r}$ when using $\mathrm{dist}_1$ is that we compute the distance by summing products of weights over paths of length 2r. From (3.4), we see that in expectation, when we take the product of edge weights over a path of length r from u to y, instead of computing $f(\alpha_u, \alpha_y) = e_u^T Q^T \Lambda Q e_y$, the expression concentrates around $e_u^T Q^T \Lambda^r Q e_y$, which contains extra factors of $\Lambda^{r-1}$. Therefore, by computing over neighborhoods of radius r, the calculation in $\mathrm{dist}_1$ approximates $\|\Lambda^{r+1} Q(e_u - e_v)\|_2^2$ rather than our intended $\|\Lambda Q(e_u - e_v)\|_2^2$, thus leading to an error factor of $|\lambda_d|^{-2r}$. We chose $\xi_1(n)$ such that we multiply the estimated distance by $|\lambda_1|^{2r}$. Therefore, if $|\lambda_d| = |\lambda_1|$, then the error does not grow with r.
On the other hand, $\mathrm{dist}_2$ is computed from a combination of $d'$ measurements at different path lengths, using specific coefficients in order to adjust for this amplification. Each measurement approximates $\|\Lambda^{r+i} Q(e_u - e_v)\|_2^2$ for a different value of i, which can also be written as a linear combination of
the terms $(e_k^T Q(e_u - e_v))^2$. Essentially, the different measurements allow us to separate these terms for all k with distinct values of $\lambda_k$. This correction leads to an MSE bound of $O(B_q^3 d^2 |\lambda_1| (c_1 p n)^{-\frac12 + \theta})$, which converges even in the ultra sparse sampling regime of $p = \omega(n^{-1} d^{\max(1/(2\theta),\, 4/(1-2\theta))})$. For a choice of $\theta = 1/10$, this reduces to $p = \omega(d^5 n^{-1})$. As compared to information theoretic lower bounds, this sample complexity is optimal with respect to n, although the exponent of d is suboptimal.
4.1. Modified algorithm with subsampled anchor vertices. Recall that in Section 3.3, we discussed a modification of the algorithm to reduce computation by subsampling anchor vertices, comparing only to anchor vertices rather than computing all $n^2$ pairwise distances. Let K denote the set of anchor vertices. In order to prove error bounds for the modified algorithm for some entry located at index (u, v), we need to ensure that with high probability there exist anchor vertices within "close" distance of both u and v. Then we need the distance estimates between the vertices u, v and the anchor vertices to be accurate enough that the anchor vertices $\pi(u)$ and $\pi(v)$ which minimize $\mathrm{dist}(u, \pi(u))$ and $\mathrm{dist}(v, \pi(v))$ (for $\mathrm{dist} \in \{\mathrm{dist}_1, \mathrm{dist}_2\}$) are also close in terms of $\|\Lambda Q(e_u - e_{\pi(u)})\|_2$ and $\|\Lambda Q(e_v - e_{\pi(v)})\|_2$. Finally, since the algorithm estimates $\hat F(u, v) = \hat F(\pi(u), \pi(v))$, it only remains to show that $|\hat F(\pi(u), \pi(v)) - f(\alpha_{\pi(u)}, \alpha_{\pi(v)})|$ is bounded, which follows from the proof shown before.
We state the error bound and proof for the modified algorithm that uses $\mathrm{dist}_2$ to estimate pairwise distances, but an equivalent result holds when using $\mathrm{dist}_1$.
Theorem 4.5. Let (4.6)–(4.12) hold for some $\theta \in (0, \frac14)$. For some $\xi > 0$ and for any $u, v \in [n]$, with probability at least
$$1 - O\left(\exp(-|K|\phi(\xi)) + |K| d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)\right) - O\left(\exp\left(-\tfrac{\xi_{2UB}^2 c_3 p n^2 \phi(\xi_{2LB})^2}{24}(1 - o(1))\right)\right),$$
the error of the estimate produced by the modified algorithm presented in Section 3.3, using $\mathrm{dist}_2$ with parameter r and |K| anchor vertices, is upper bounded by
$$|\hat F(u, v) - f(\alpha_u, \alpha_v)| = O\left(B_q \sqrt{d}\, \min(\xi_{2UB}, \xi)\right).$$
In the piecewise Lipschitz case where $\phi(\xi) \ge \min(\ell, \xi/2L)$, it follows from the theorem that if we want to match the error bounds of Theorem 4.2, it is sufficient to choose the number of anchor vertices
$$|K| \ge \frac{L (c_1 p n)^{\frac14 + \frac{3\theta}{2}}}{2 (B_q^3 d^5 |\lambda_1|)^{1/2}} = O\left(d^{-3/2} (c_1 p n)^{\frac14 + \frac{3\theta}{2}}\right)$$
and
$$\xi = \left(\frac{B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2} = O(\xi_{2UB}).$$
Thus it follows that with probability at least
$$1 - O\left(|K| d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)\right),$$
the error of the estimate produced by the modified algorithm presented in Section 3.3 using $\mathrm{dist}_2$ with parameter r is upper bounded by
$$|\hat F(u, v) - f(\alpha_u, \alpha_v)| \le O\left(\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
We can also present bounds on the maximum error. If we want to bound
the maximum error, we need to bound the event that all vertices are close
to at least one anchor vertex. In order to prove this event holds with high
probability, we will additionally show that we can bound the number of balls
of diameter ξ needed to cover the space X1 with respect to the measure P1 .
Theorem 4.6. Let (4.6)–(4.12) hold for some $\theta \in (0, \frac14)$. In addition, let
$$|K|\, \phi(\xi/4)^2 \ge 2. \tag{4.13}$$
For some $\xi > 0$, with probability at least
$$1 - O\left(n d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right) + \exp\left(-\tfrac{|K|\phi(\xi/4)}{8}\right)\right) - O\left(|K|^2 \exp\left(-\tfrac{\xi_{2UB}^2 c_3 p n^2 \phi(\xi_{2LB})^2}{24}(1 - o(1))\right)\right),$$
the error of the estimate produced by the modified algorithm presented in Section 3.3, using $\mathrm{dist}_2$ with parameter r and |K| anchor vertices, is upper bounded by
$$\max_{(u,v) \in [n] \times [n]} |\hat F(u, v) - f(\alpha_u, \alpha_v)| = O\left(B_q \sqrt{d}\, \min(\xi_{2UB}, \xi)\right).$$
In the piecewise Lipschitz case where $\phi(\xi) \ge \min(\ell, \xi/2L)$, it follows from the theorem that if we want to match the max error bounds of Theorem 4.2, it is sufficient to choose the number of anchor vertices
$$|K| \ge \frac{16 L (c_1 p n)^{\frac14 + \frac{3\theta}{2}}}{(B_q^3 d^5 |\lambda_1|)^{1/2}} = O\left(d^{-3/2} (c_1 p n)^{\frac14 + \frac{3\theta}{2}}\right)$$
and
$$\xi = \left(\frac{B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2} = O(\xi_{2UB}).$$
Thus it follows that with probability at least
$$1 - O\left(n d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right)\right),$$
the error of the estimate produced by the modified algorithm presented in Section 3.3 using $\mathrm{dist}_2$ with parameter r is upper bounded by
$$\max_{(u,v) \in [n] \times [n]} |\hat F(u, v) - f(\alpha_u, \alpha_v)| = O\left(\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
4.2. Asymmetric Matrix. Consider an $n \times m$ matrix F which we would like to learn, where $F(u, v) = f(\alpha_u, \beta_v) \in [0, 1]$, and f has finite spectrum according to
$$f(\alpha_u, \beta_v) = \sum_{k=1}^{d} \lambda_k q_{1k}(\alpha_u) q_{2k}(\beta_v),$$
where $q_{1k}$ and $q_{2k}$ are orthonormal $L_2$ functions. We use the transformation presented in Section 2.2 and simply translate the results from the symmetric matrix model. The new dimensions are $(n+m) \times (n+m)$, the rank is 2d, the eigenvalues are each multiplied by $\sqrt{nm}/(n+m)$, and the maximum eigenfunction magnitude $B_q$ is multiplied by $\sqrt{(n+m)/\min(n, m)}$. Let $\phi$ denote the underestimator function for the probability measure after transforming to the symmetric latent model. We require that the ratio between the dimensions m and n of the matrix be constant, i.e. they must grow in proportion. We state the results for the algorithm when using $\mathrm{dist}_2$, although equivalent results also carry over if we use $\mathrm{dist}_1$.
Theorem 4.7. Let the following hold for some $\theta \in (0, \frac14)$:
0. Scaling of n, m.
$$\frac{n}{m} = \Theta(1). \tag{4.14}$$
1. Conditions on sampling probability, p.
$$p = o((n+m)^{-1+1/(5+8\theta)}) \qquad \text{and} \qquad c_1 p (n+m) = \omega(1). \tag{4.15}$$
2. Conditions on neighborhood radius, r.
$$\left(\tfrac{9 c_1 p (n+m)}{8}\right)^{r+d'} \le (c_1 p)^{-7/8} \qquad \text{and} \qquad \left(\tfrac{7 \lambda_d^2 c_1 p \sqrt{nm}}{8 |\lambda_1|}\right)^{r + \frac12} \ge p^{-6/8}. \tag{4.16}$$
3. Condition on spectrum of $\Lambda$. The smallest eigenvalue $\lambda_d$ is such that
$$|\lambda_d| = \omega\left((c_1 p (n+m))^{-\min(\frac14,\, \frac12 + \theta)}\right). \tag{4.17}$$
The number of distinct magnitude eigenvalues $d'$ satisfies
$$r \ge \frac{(2d'-1)\ln(2|\lambda_1|/\lambda_{gap}) + \ln(2d')}{\ln(2)}, \tag{4.18}$$
$$\frac{\theta}{2(1+2\theta)} \ln\left(\frac{1}{p}\right) \ge (2d'-1)\ln(2|\lambda_1|/\lambda_{gap}) + \ln(2d'). \tag{4.19}$$
4. Condition on distribution of latent features. Define $\xi_{2LB}, \xi_{2UB}$ as
$$\xi_{2LB}^2 = \sqrt{\frac{nm}{(n+m)\min(n,m)}}\ \frac{2 B_q d |\lambda_1|}{(c_1 p (n+m))^{\frac12 - \theta}} \quad \text{and} \quad \xi_{2UB}^2 = \sqrt{\frac{nm}{(n+m)\min(n,m)}}\ \frac{130 B_q d |\lambda_1|}{(c_1 p (n+m))^{\frac12 - \theta}}. \tag{4.20}$$
They satisfy
$$\phi(\xi_{2LB}) = \omega\left(\max\left(p,\ (n+m)^{-3/4},\ \xi_{2LB}^{-1} \exp\left(-\tfrac{\min(n,m)(c_1 p (n+m))^{2\theta}}{16(n+m) B_q^2 d}\right)\right)\right). \tag{4.21}$$
For any $(u, v) \in [n] \times [m]$, with probability greater than
$$1 - O\left(d \exp\left(-\tfrac{\min(n,m)(c_1 p (n+m))^{2\theta}}{16(n+m) B_q^2 d}\right) + \exp\left(-\tfrac{\xi_{2UB}^2 c_3 p (n+m)^2 \phi(\xi_{2LB})^2}{24}(1 - o(1))\right)\right) - O\left(\exp\left(-\tfrac{(n+m-1)\phi(\xi_{2LB})}{8}\right)\right),$$
the error of the estimate produced by the algorithm when using $\mathrm{dist}_2$ and parameter r is bounded by
$$|\hat F(u, v) - f(\alpha_u, \beta_v)| = O(B_q \sqrt{d}\, \xi_{2UB}) = O\left(\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p (n+m))^{\frac12 - \theta}}\right)^{1/2}\right).$$
It follows that the mean squared error is bounded by
$$\mathrm{MSE} = O\left(\frac{B_q^3 d^2 |\lambda_1|}{(c_1 p (n+m))^{\frac12 - \theta}} + d \exp\left(-\tfrac{\min(n,m)(c_1 p (n+m))^{2\theta}}{16(n+m) B_q^2 d}\right)\right) + O\left(\exp\left(-\tfrac{c_3 p (n+m)^2 \xi_{2UB}^2 \phi(\xi_{2LB})^2}{24}\right) + \exp\left(-\tfrac{(n+m-1)\phi(\xi_{2LB})}{8}\right)\right).$$
4.3. Categorical Valued Data. If the edge labels take values within m category types, we use the reduction presented in Section 2.3, where the data is split into m different matrices, each containing the information for a separate category (or edge label). For each category or label $i \in [m]$, the associated matrix $F_i$ represents the probability that each datapoint is labeled with i, such that $\mathbb{P}(Z(u, v) = i) = F_i(u, v) = f_i(\alpha_u, \alpha_v)$, where $f_i$ is a symmetric function having finite spectrum with rank $d_i$,
$$f_i(\alpha_u, \alpha_v) = \sum_{k=1}^{d_i} \lambda_{ik} q_{ik}(\alpha_u) q_{ik}(\alpha_v),$$
where $\{q_{ik}\}_{k \in [d_i]}$ are orthonormal $L_2$ functions, and $d_i'$ denotes the number of distinct valued eigenvalues. Let $r_i$, $B_{iq}$, $\phi_i$ denote the associated parameters of the model for each label or category.
The algorithm is then applied to each matrix separately to estimate the probability of each category across the different entries. Since we need the estimates across different categories for the same entry to sum to 1, we can simply let the estimate for the m-th category be one minus the sum of the estimates for the first $m-1$ categories. To obtain an error bound, we can simply use a union bound across the $m-1$ applications of the algorithm, which multiplies the error probability by $m-1$. We state the results for the algorithm when using $\mathrm{dist}_2$, although equivalent results also carry over if we use $\mathrm{dist}_1$.
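A small sketch of this reduction follows; the hypothetical `run_algorithm` stands in for the estimator above applied to a single 0/1-valued matrix and is not part of the formal description.

```python
import numpy as np

def estimate_label_distribution(Z, observed, num_labels, run_algorithm):
    """Sketch: split categorical data into per-label indicator matrices,
    estimate the first m-1 label probabilities separately, and set the last
    one to 1 minus their sum."""
    F_hat = []
    for i in range(1, num_labels):                    # labels 1 .. m-1
        indicator = np.where(observed, (Z == i).astype(float), 0.0)
        F_hat.append(run_algorithm(indicator, observed))
    F_hat.append(1.0 - sum(F_hat))                    # label m
    return F_hat
```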
Theorem 4.8. For some $\theta \in (0, \frac14)$, let (4.6) hold, as well as (4.7)–(4.12) for all $i \in [m-1]$ (with $r_i$, $d_i$, $\phi_i$, $\xi_{2LB}(i)$ and $\xi_{2UB}(i)$ in place of r, d, $\phi$, $\xi_{2LB}$ and $\xi_{2UB}$). Then, for any $(u, v) \in [n] \times [n]$, with probability greater than
$$1 - O\left(\sum_{i \in [m-1]} \left[ d_i \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_{iq}^2 d_i}\right) + \exp\left(-\tfrac{\xi_{2UB}(i)^2 c_3 p n^2 \phi_i(\xi_{2LB}(i))^2}{24}(1 - o(1))\right) + \exp\left(-\tfrac{(n-1)\phi_i(\xi_{2LB}(i))}{8}\right) \right]\right),$$
the total variation distance between the true label distribution and the estimate computed by combining the $m-1$ estimates for each label using the algorithm with $\mathrm{dist}_2$ is bounded by
$$\frac{1}{2} \sum_{i \in [m]} |\hat F_i(u, v) - f_i(\alpha_u, \alpha_v)| = O\left(\sum_{i \in [m-1]} \left(\frac{B_{iq}^3 d_i^2 |\lambda_{i1}|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right).$$
4.4. Non-uniform sampling. In the previous models, we always assumed
a uniform sampling model, where each entry is observed independently with
probability p. However, in reality the probability that entries are observed
may not be uniform across all pairs (i, j). In this section we discuss an
extension of our result that can handle variations in the sample probability as
long as they are still independent across entries, and the sample probability
is a function of the latent variables and scales in the same way with respect
to n across all entries. The idea is simply to apply the algorithm twice, first
to estimate the variations in the sample probability, and second to estimate
the data entries normalized by these variations in sample probability.
In order for our result to directly translate over, we assume that the data
is sampled in two phases, where a subset of the entries are first sampled
uniformly with probability p, and then this subset is further subsampled to
obtain the final set of entries for which data is observed, allowing for variation across entries in the second sampling phase. The algorithm is assumed
to have data on whether an entry is not observed because it was not sampled
in the first round or second round. One application for which this type of
data could be reasonable would be on an e-commerce platform, where users
might be shown items at random (with density p), and each user chooses
whether or not to buy and rate each product that s/he is shown, which
would then be a function of the user and product features. In this case, the
platform would have information about which user-product pairs were made
available to the user (uniformly with probability p), and whether or not we
observed a rating for each user-product pair. Given that a user decides to
purchase and rate the product, the expected rating will be according to
another latent function of the user and product features.
Assume a model in which the probability of observing (i, j) is given by
pg(αi , αj ), where p is the scaling factor (contains the dependence upon n
and is fixed across all edges), and g allows for constant factor variations in
the sample probability across entries as a function of the latent variables.
Let matrix X indicate the presence of an observation or not, and let matrix
M contain the data values for the subset of observed entries. We simply
apply our algorithm twice, first on matrix X to estimate the function g, and
then on data matrix M to estimate f times g. We can simply divide by the
estimate for g to obtain the estimate for f .
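A minimal sketch of this two-pass correction is given below; the hypothetical `run_algorithm` again stands in for the estimator described above, and clipping $\hat g$ away from zero is an implementation convenience, not part of the analysis.

```python
import numpy as np

def estimate_with_nonuniform_sampling(X, M, sampled, run_algorithm, eps=1e-6):
    """Sketch: `sampled` marks entries offered in the uniform first round,
    X marks entries actually observed, and M holds the observed values.
    Estimate g from X, f*g from M, then divide entrywise."""
    g_hat = run_algorithm(X, sampled)       # estimate of g(alpha_i, alpha_j)
    fg_hat = run_algorithm(M, sampled)      # estimate of f * g
    return fg_hat / np.maximum(g_hat, eps)  # clip to avoid division by ~0
```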
Assume that Xij distinguishes between an entry which is unobserved because of the uniform scaling parameter p, as opposed to unobserved because
of the edge sampling variation g(·, ·). This restriction is due to the fact that
our algorithm needs to distinguish between an entry which is observed to
have value zero as opposed to an entry which is not observed at all, as discussed in Section 3.7. In fact, for estimating the function g, this requirement
is reasonable, as the problem would be unidentifiable otherwise, since the
same observation set could result from multiplying p by some constant and
dividing the function g by the same constant.
We directly apply the theorems from before to obtain error bounds. Let $r, B_q, d, d', \lambda_1, \lambda_d, \xi_{2UB}, \xi_{2LB}, \phi$ refer to the parameters for estimating the function g, and let $\tilde r, \tilde B_q, \tilde d, \tilde d', \tilde\lambda_1, \tilde\lambda_{\tilde d}, \tilde\xi_{2UB}, \tilde\xi_{2LB}, \tilde\phi$ refer to the parameters for estimating the function f times g. We present the results for the algorithm using $\mathrm{dist}_2$; however, equivalent results follow when using $\mathrm{dist}_1$.
Theorem 4.9. Let the following hold for some $\theta \in (0, \frac14)$:
1. Conditions on sampling probability, p.
$$p = o(n^{-1+1/(5+8\theta)}) \qquad \text{and} \qquad c_1 p n = \omega(1). \tag{4.22}$$
2. Conditions on neighborhood radii, r, $\tilde r$.
$$\left(\tfrac{9 c_1 p n}{8}\right)^{r+d'} \le (c_1 p)^{-7/8} \quad \text{and} \quad \left(\tfrac{7 \lambda_d^2 c_1 p n}{8 |\lambda_1|}\right)^{r + \frac12} \ge p^{-6/8}, \tag{4.23}$$
$$\left(\tfrac{9 c_1 p n}{8}\right)^{\tilde r+\tilde d'} \le (c_1 p)^{-7/8} \quad \text{and} \quad \left(\tfrac{7 \tilde\lambda_{\tilde d}^2 c_1 p n}{8 |\tilde\lambda_1|}\right)^{\tilde r + \frac12} \ge p^{-6/8}.$$
3. Condition on spectrum of $\Lambda$, $\tilde\Lambda$. The smallest eigenvalues $\lambda_d$, $\tilde\lambda_{\tilde d}$ are such that
$$|\lambda_d|, |\tilde\lambda_{\tilde d}| = \omega\left((c_1 p n)^{-\min(\frac14,\, \frac12 + \theta)}\right). \tag{4.24}$$
The numbers of distinct magnitude eigenvalues $d'$ and $\tilde d'$ satisfy
$$r \ge \frac{(d'-1)\ln(2|\lambda_1|/\lambda_{gap}) + \ln(d')}{\ln(2)}, \tag{4.25}$$
$$\frac{\theta}{2(1+2\theta)} \ln\left(\frac{1}{p}\right) \ge (d'-1)\ln(2|\lambda_1|/\lambda_{gap}) + \ln(d'), \tag{4.26}$$
$$\tilde r \ge \frac{(\tilde d'-1)\ln(2|\tilde\lambda_1|/\tilde\lambda_{gap}) + \ln(\tilde d')}{\ln(2)}, \tag{4.27}$$
$$\frac{\theta}{2(1+2\theta)} \ln\left(\frac{1}{p}\right) \ge (\tilde d'-1)\ln(2|\tilde\lambda_1|/\tilde\lambda_{gap}) + \ln(\tilde d'). \tag{4.28}$$
4. Condition on distribution of latent features. Define $\xi_{2LB}, \xi_{2UB}, \tilde\xi_{2LB}, \tilde\xi_{2UB}$:
$$\xi_{2LB}^2 = \frac{B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}, \quad \xi_{2UB}^2 = \frac{65 B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}, \quad \tilde\xi_{2LB}^2 = \frac{\tilde B_q \tilde d |\tilde\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}, \quad \tilde\xi_{2UB}^2 = \frac{65 \tilde B_q \tilde d |\tilde\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}. \tag{4.29}$$
They satisfy
$$\phi(\xi_{2LB}) = \omega\left(\max\left(p,\ n^{-3/4},\ \xi_{2LB}^{-1} \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right)\right)\right), \qquad \tilde\phi(\tilde\xi_{2LB}) = \omega\left(\max\left(p,\ n^{-3/4},\ \tilde\xi_{2LB}^{-1} \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 \tilde B_q^2 \tilde d}\right)\right)\right). \tag{4.30}$$
For any $u, v \in [n]$, with probability greater than
$$1 - O\left(d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right) + \exp\left(-\tfrac{\xi_{2UB}^2 c_3 p n^2 \phi(\xi_{2LB})^2}{24}(1 - o(1))\right)\right) - O\left(\exp\left(-\tfrac{(n-1)\phi(\xi_{2LB})}{8}\right)\right) - O\left(\tilde d \exp\left(-\tfrac{(c_1 p n)^{2\theta}}{8 \tilde B_q^2 \tilde d}\right)\right) - O\left(\exp\left(-\tfrac{\tilde\xi_{2UB}^2 c_3 p n^2 \tilde\phi(\tilde\xi_{2LB})^2}{24}(1 - o(1))\right) + \exp\left(-\tfrac{(n-1)\tilde\phi(\tilde\xi_{2LB})}{8}\right)\right),$$
the error of the estimate produced by the algorithm when using $\mathrm{dist}_2$ is bounded by
$$|\hat F(u, v) - f(\alpha_u, \alpha_v)| = O\left(\left(\tilde B_q \sqrt{\tilde d}\, \tilde\xi_{2UB} + B_q \sqrt{d}\, \xi_{2UB}\right)\left(g(\alpha_u, \alpha_v) - B_q \sqrt{d}\, \xi_{2UB}\right)^{-1}\right)$$
$$= O\left(\left(\left(B_q^3 d^2 |\lambda_1|\right)^{1/2} + \left(\tilde B_q^3 \tilde d^2 |\tilde\lambda_1|\right)^{1/2}\right) (c_1 p n)^{-\frac{1-2\theta}{4}} \times \left(g(\alpha_u, \alpha_v) - O\left(\left(\frac{B_q^3 d^3 |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}\right)^{1/2}\right)\right)^{-1}\right).$$
The error bounds show that if $g(\alpha_i, \alpha_j)$ is very small, then the error bounds for estimating $f(\alpha_i, \alpha_j)$ are larger. This is as expected, since a small value of $g(\alpha_i, \alpha_j)$ means that the probability of observing the data is small, such that there are fewer observations available to estimate $f(\alpha_i, \alpha_j)$. Since $g(\alpha_i, \alpha_j)$ is a constant with respect to n, as n goes to infinity the estimator will still converge at the rate of $(pn)^{-\frac{1-2\theta}{4}}$.
Proof. We bound the error as a function of the error of the estimate $\hat g$ and the estimate $\widehat{fg}$.
$$\hat f_{uv} - f(\alpha_u, \alpha_v) = \frac{\widehat{fg}_{uv}}{\hat g_{uv}} - f(\alpha_u, \alpha_v)$$
$$= \frac{\widehat{fg}_{uv} - f(\alpha_u, \alpha_v) g(\alpha_u, \alpha_v)}{\hat g_{uv}} + \frac{f(\alpha_u, \alpha_v) g(\alpha_u, \alpha_v)}{\hat g_{uv}} - \frac{f(\alpha_u, \alpha_v)\, \hat g_{uv}}{\hat g_{uv}}$$
$$= \frac{\widehat{fg}_{uv} - f(\alpha_u, \alpha_v) g(\alpha_u, \alpha_v) + f(\alpha_u, \alpha_v)\left(g(\alpha_u, \alpha_v) - \hat g_{uv}\right)}{g(\alpha_u, \alpha_v) - \left(g(\alpha_u, \alpha_v) - \hat g_{uv}\right)}.$$
Therefore, using the fact that $|f(\alpha_u, \alpha_v)| \le 1$,
$$|\hat f_{uv} - f(\alpha_u, \alpha_v)| \le \frac{|\widehat{fg}_{uv} - f(\alpha_u, \alpha_v) g(\alpha_u, \alpha_v)| + |g(\alpha_u, \alpha_v) - \hat g_{uv}|}{g(\alpha_u, \alpha_v) - |g(\alpha_u, \alpha_v) - \hat g_{uv}|}.$$
By application of Theorem 4.2 for bounding $|\widehat{fg}_{uv} - f(\alpha_u, \alpha_v) g(\alpha_u, \alpha_v)|$ and $|g(\alpha_u, \alpha_v) - \hat g_{uv}|$, we obtain the desired result.
5. Discussion. In this paper, we presented a similarity-based collaborative filtering algorithm which is provably consistent in sparse sampling regimes, as long as the sample probability $p = \omega(n^{-1})$. The algorithm computes similarity between two indices (rows, nodes or vertices) by comparing their local neighborhoods. Our model assumes that the data matrix is generated according to a latent variable model, in which the weight on an observed edge (u, v) is equal in expectation to a function f over associated latent variables $\alpha_u$ and $\alpha_v$. We presented two variants for computing similarities (or distances) between vertices. Computing $\mathrm{dist}_1$ does not require knowledge of the spectrum of f, but the estimate requires p to be polynomially larger than $n^{-1}$ in order to guarantee that the expected squared error converges to zero. Computing $\mathrm{dist}_2$ uses the knowledge of the spectrum of f, but it provides an estimate that is provably consistent in a significantly sparser regime, only requiring that $p = \omega(n^{-1})$. The mean squared error of both algorithms is bounded by $O((pn)^{-\frac12 + \theta})$ for any $\theta \in (0, \frac14)$. Since the computation is based on comparing local neighborhoods within the graph, the algorithm can be easily implemented for large scale datasets where the data may be stored in a distributed fashion optimized for local graph computations.
In practice, we do not know the model parameters, and we would use cross validation to tune the radius r and threshold $\xi(n)$. If r is either too small or too large, then the vector $N_{u,r}$ will be too sparse. The threshold $\xi(n)$ trades off between bias and variance of the final estimate. Since we do not know the spectrum, $\mathrm{dist}_1$ may be easier to compute, and it still enjoys good properties as long as r is not too large. When the sampled observations are not uniform across entries, the algorithm may require further modifications to properly normalize for high-degree hub vertices, as the optimal choice of r may differ depending on the local sparsity. The key computational step of our algorithm involves comparing the expanded local neighborhoods of pairs of vertices to find the "nearest neighbors". The local neighborhoods can be computed in parallel, as they are independent computations. Furthermore, the local neighborhood computations are suitable for systems in which the data is distributed across different machines in a way that optimizes local neighborhood queries. The most expensive part of our algorithm involves computing similarities for all pairs of vertices in order to determine the set of nearest neighbors. However, it would be possible to use approximate nearest neighbor techniques to greatly reduce the computation, such that approximate nearest neighbor sets could be computed with significantly fewer than $n^2$ pairwise comparisons. Additionally, our modification of subsampling the vertices reduces the pairwise comparisons to $n|K|$. When the latent function governing the expected matrix behavior is piecewise Lipschitz, it is sufficient to choose $|K| = \Theta(d^{-3/2}(c_1 p n)^{\frac14 + \frac{3\theta}{2}})$, which is significantly smaller than $\Theta(n)$.
6. Proof Sketch. The final estimate F̂ (u, v) is computed by averaging over datapoints, as specified in (3.2). In our analysis, we will show
that dist1 (u, v) and dist2 (u, v) are close estimates for kΛQ(eu − ev )k22 ,
so that for a majority of the datapoints (a, b) ∈ Euv , it indeed holds that
F (a, b) ≈ F (u, v), thus upper bounding the bias. We additionally show that
the number of included datapoints, |Euv | is sufficiently large to reduce the
variance.
To ensure that F(u, v) is close to F(a, b) for $(a, b) \in E_{uv}$, it would be sufficient to bound $\|\Lambda Q(e_u - e_a)\|_2$ and $\|\Lambda Q(e_v - e_b)\|_2$, since
$$|f(\alpha_u, \alpha_v) - f(\alpha_a, \alpha_b)| = |e_u^T Q^T \Lambda Q e_v - e_a^T Q^T \Lambda Q e_b| = |e_u^T Q^T \Lambda Q (e_v - e_b) - (e_a - e_u)^T Q^T \Lambda Q e_b| \le B_q \sqrt{d}\, \|\Lambda Q(e_v - e_b)\|_2 + B_q \sqrt{d}\, \|\Lambda Q(e_u - e_a)\|_2,$$
where the last step follows from assuming that |qk (α)| ≤ Bq for all k ∈ [d]
and α ∈ X1 .
Since $E_{uv}$ is defined by either the distances computed by $\mathrm{dist}_1$ or $\mathrm{dist}_2$, by showing that $\mathrm{dist}_1(u, v)$ and $\mathrm{dist}_2(u, v)$ are good estimates of $\|\Lambda Q(e_u - e_v)\|_2^2$, it will follow that $\|\Lambda Q(e_v - e_b)\|_2$ and $\|\Lambda Q(e_u - e_a)\|_2$ are small for a majority of $(a, b) \in E_{uv}$. A key lemma of the proof is to show that with high probability, the difference between
$$\tfrac{1-c_1 p}{c_2 p} \left(\tilde N_{u,r} - \tilde N_{v,r}\right)^T M_2 \left(\tilde N_{u,r+i} - \tilde N_{v,r+i}\right) \tag{6.1}$$
and
$$(e_u - e_v)^T Q^T \Lambda^{2r+i+1} Q (e_u - e_v) = \|\Lambda^{r + \frac{i+1}{2}} Q(e_u - e_v)\|_2^2$$
is vanishing at a rate of $O(B_q d |\lambda_1|^{2r+i} (c_1 p n)^{-1/2+\theta})$. This results in bounds on both $\mathrm{dist}_1$ and $\mathrm{dist}_2$. There are a few steps involved in showing this bound.
In Lemma 6.1, we show high probability concentration results on $E_1$, which is used to expand the neighborhoods and determines $N_{u,r}$ and $S_{u,r}$. First we show that with high probability, the expansion factor $|S_{u,s}|/|S_{u,s-1}|$ is close to $c_1 p n$. Second, we show that for all $k \in [d]$, $e_k^T Q \tilde N_{u,s}$ grows as $\lambda_k^s$, according to its expected behavior. We similarly show that $\|\tilde N_{u,s}\|_1$ grows as $O(|\lambda_1|^s)$.
Lemma 6.1. For any $u \in [n]$ and given $\theta \in (0, \frac14)$, if $d = o((c_1 p n)^{2\theta})$, with probability at least
$$1 - 4(d+2) \exp\left(-\frac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right),$$
the following statements hold:
1. For all $s \in [r]$,
$$\frac{7 c_1 p n}{8} \le \frac{|S_{u,s}|}{|S_{u,s-1}|} \le \frac{9 c_1 p n}{8}.$$
2. For all $s \in [r]$, $k \in [d]$, $t \in [r-s]$,
$$\left|e_k^T Q \tilde N_{u,s+t} - e_k^T \Lambda^t Q \tilde N_{u,s}\right| \le \frac{2 |\lambda_k|^{t-1}}{(c_1 p n)^{\frac12 - \theta}} \left(\frac{|\lambda_d|^2}{2 |\lambda_1|}\right)^{s}.$$
3. For all $s \in [r]$,
$$\|\tilde N_{u,s}\|_1 \le B_q^2 d |\lambda_1|^s (1 + o(1)).$$
Next, we show high probability concentration results on $E_2$, which is used to measure edges between the neighborhood boundaries. Specifically, we can show that
$$\tfrac{1-c_1 p}{c_2 p}\, \tilde N_{u,r}^T M_2 \tilde N_{v,r'} \approx \tilde N_{u,r}^T Q^T \Lambda Q \tilde N_{v,r'} \tag{6.2}$$
$$\approx e_u^T Q^T \Lambda^{r+r'+1} Q e_v. \tag{6.3}$$
This allows us to prove bounds on $\|\Lambda Q(e_u - e_v)\|_2^2$ as a function of $\mathrm{dist}_1(u, v)$ and $\mathrm{dist}_2(u, v)$.
Lemma 6.2. For any $u, v \in [n]$ and given $\theta \in (0, \frac14)$, if $d = o((c_1 p n)^{2\theta})$, with probability at least
$$1 - 8(d+2) \exp\left(-\frac{(c_1 p n)^{2\theta}}{4 B_q^2 d}\right) - 8 d' \exp\left(-\frac{(1 - c_1 p)\, p^{1/4} (n-1)}{5 |\lambda_1| B_q^2 d}\right) - n \exp\left(-\frac{c_1 p^{-1/4}}{3}\right),$$
the following statements hold for large enough n:
$$\|\Lambda Q(e_u - e_v)\|_2^2 \ge |\lambda_1|^{-2r}\, \mathrm{dist}_1(u, v) - 32 B_q d |\lambda_1|^{2r+1} (c_1 p n)^{-1/4},$$
$$\|\Lambda Q(e_u - e_v)\|_2^2 \le |\lambda_d|^{-2r}\, \mathrm{dist}_1(u, v) + 32 B_q d |\lambda_1|^{2r+1} (c_1 p n)^{-1/4},$$
$$\|\Lambda Q(e_u - e_v)\|_2^2 \ge \mathrm{dist}_2(u, v) - 32 B_q d |\lambda_1| (c_1 p n)^{-\frac12 + \theta},$$
$$\|\Lambda Q(e_u - e_v)\|_2^2 \le \mathrm{dist}_2(u, v) + 32 B_q d |\lambda_1| (c_1 p n)^{-\frac12 + \theta}.$$
While the above lemmas provide concentration results for a single vertex
or a pair of vertices, we need to show that the above properties hold for a
majority of the vertices. Let Vu1 denote the set which consists of vertices
a such that dist1 (u, a) < ξ1 (n), and similarly let Vu2 denote the set which
consists of vertices a such that dist2 (u, a) < ξ2 (n). We can show that Vu1
and Vu2 consist mostly of vertices a such that kΛQ(eu − ea )k2 is small, and
they do not contain many vertices for which kΛQ(eu − ea )k2 is large.
If the latent variables are sampled uniformly from the unit interval and
we assume f is Lipschitz, we will be able to prove lower bounds on the
number of vertices for which kΛQ(eu − ea )k2 is small, i.e. vertices a such
that f (αu , αb ) is close to f (αa , αb ) for all b. Then we use Lemmas 6.1 and
6.2 to show that a majority of vertices have “good distance estimates”, which
leads to a lower bound on the fraction of truly close vertices within Vu1 and
Vu2 .
Let $W_{u1}$ denote the vertices within $V_{u1}$ whose true distance is also small, where the threshold is defined by the bounds presented in Lemma 6.2. A vertex $v \in W_{u1}$ if and only if $v \in V_{u1}$ and
$$\|\Lambda Q(e_u - e_v)\|_2^2 < \left(\frac{|\lambda_1|}{|\lambda_d|}\right)^{2r} \frac{65 B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}.$$
Lemma 6.2 implies that for a vertex v, with high probability, v ∈ Vu1 implies
that v ∈ Wu1 , which leads to a lower bound on the ratio |Wu1 |/|Vu1 |.
Similarly, we let $W_{u2}$ denote the vertices within $V_{u2}$ whose true distance is also small, where the threshold is defined by the bounds presented in Lemma 6.2. A vertex $v \in W_{u2}$ if and only if $v \in V_{u2}$ and
$$\|\Lambda Q(e_u - e_v)\|_2^2 < \frac{65 B_q d |\lambda_1|}{(c_1 p n)^{\frac12 - \theta}}.$$
Lemma 6.2 implies that for a vertex v, with high probability, v ∈ Vu2 implies that v ∈ Wu2 , which leads to a lower bound on the ratio |Wu2 |/|Vu2 |.
Therefore, we can show in Lemma 6.3 that the majority of vertices within
Vu1 and Vu2 are indeed truly close in function value.
Lemma 6.3. Assume that the latent variables are sampled from the uniform distribution on [0, 1] and the function f is L-Lipschitz. For any $u \in [n]$ and given $\theta \in (0, \frac14)$, if $d = o((c_1 p n)^{2\theta})$, with probability at least
$$1 - O\left(d \exp\left(-\frac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right)\right),$$
the fraction of good neighbors is bounded below by
$$\frac{|W_{u1}|}{|V_{u1}|} = 1 - O\left(\frac{(c_1 p n)^{\frac{1-2\theta}{4}}}{(B_q d |\lambda_1|)^{1/2}} \exp\left(-\frac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right)\right),$$
$$\frac{|W_{u2}|}{|V_{u2}|} = 1 - O\left(\frac{(c_1 p n)^{\frac{1-2\theta}{4}}}{(B_q d |\lambda_1|)^{1/2}} \exp\left(-\frac{(c_1 p n)^{2\theta}}{8 B_q^2 d}\right)\right).$$
Finally, we show high probability concentration results on E3 , which is
used to compute the final estimate. Conditioned on the neighbors being
chosen well, we show that E3 is evenly spread such that the majority of the
entries of Euv are within Wu1 × Wv1 or Wu2 × Wv2 . This bounds the bias
of the estimator. Then we show that the total number of these entries is
sufficiently large to reduce the variance from entry-level noise. These high
probability bounds on the squared error naturally lead to the stated bound
on the MSE. The results can be extended beyond L-Lipschitz and α ∼ U [0, 1]
as long as we can lower bound the probability that a randomly sampled
vertex a is within a close neighborhood of a vertex u with respect to the
quantity kΛQ(eu − ea )k22 . This includes piecewise Lipschitz functions and
higher dimensional latent variable representations.
References.
Abbe, E. and Sandon, C. (2015a). Community detection in general stochastic block
models: Fundamental limits and efficient algorithms for recovery. In Foundations of
Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on 670–688. IEEE.
Abbe, E. and Sandon, C. (2015b). Recovering communities in the general stochastic
block model without knowing the parameters. In Advances in neural information processing systems.
Abbe, E. and Sandon, C. (2016). Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap. Advances in neural information processing systems.
Airoldi, E. M., Costa, T. B. and Chan, S. H. (2013). Stochastic blockmodel approximation of a graphon: Theory and consistent estimation. In Advances in Neural
Information Processing Systems 692–700.
Aldous, D. J. (1981). Representations for partially exchangeable arrays of random variables. J. Multivariate Anal. 11 581 - 598.
Anandkumar, A., Ge, R., Hsu, D. and Kakade, S. (2013). A tensor spectral approach
to learning mixed membership community models. In Conference on Learning Theory
867–881.
Austin, T. (2012). Exchangeable random arrays. Technical Report, Notes for IAS workshop.
Bordenave, C., Lelarge, M. and Massoulié, L. (2015). Non-backtracking spectrum of
random graphs: community detection and non-regular ramanujan graphs. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on 1347–1357.
IEEE.
Borgs, C., Chayes, J. and Smith, A. (2015). Private graphon estimation for sparse
graphs. In Advances in Neural Information Processing Systems 1369–1377.
Borgs, C., Chayes, J. T., Lovász, L., Sós, V. T. and Vesztergombi, K. (2008).
Convergent sequences of dense graphs I: Subgraph frequencies, metric properties and
testing. Advances in Mathematics 219 1801–1851.
Borgs, C., Chayes, J. T., Cohn, H. and Zhao, Y. (2014a). An Lp theory of sparse
graph convergence I: limits, sparse random graph models, and power law distributions.
arXiv preprint arXiv:1401.2906.
Borgs, C., Chayes, J. T., Cohn, H. and Zhao, Y. (2014b). An Lp theory of sparse
graph convergence II: LD convergence, quotients, and right convergence. arXiv preprint
arXiv:1408.0744.
Borgs, C., Chayes, J. T., Cohn, H. and Ganguly, S. (2015). Consistent nonparametric
estimation for heavy-tailed sparse graphs. arXiv preprint arXiv:1508.06675.
Borgs, C., Chayes, J. T., Cohn, H. and Holden, N. (2016). Sparse exchangeable
graphs and their limits via graphon processes. arXiv preprint arXiv:1601.07134.
Candes, E. and Recht, B. (2009). Exact matrix completion via convex optimization.
Communications of the ACM 55 111–119.
Candès, E. J. and Tao, T. (2010). The power of convex relaxation: Near-optimal matrix
completion. IEEE Transactions on Information Theory 56 2053–2080.
Chatterjee, S. (2015). Matrix estimation by universal singular value thresholding. The
Annals of Statistics 43 177–214.
Chen, Y. and Wainwright, M. J. (2015). Fast low-rank estimation by projected
gradient descent: General statistical and algorithmic guarantees. arXiv preprint
arXiv:1509.03025.
Davenport, M. A., Plan, Y., van den Berg, E. and Wootters, M. (2014). 1-bit
matrix completion. Information and Inference 3 189–223.
Decelle, A., Krzakala, F., Moore, C. and Zdeborová, L. (2011). Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications.
Phys. Rev. E 84 066106.
Diaconis, P. and Janson, S. (2008). Graph limits and exchangeable random graphs.
Rendiconti di Matematica VII 33-61.
Gao, C., Lu, Y. and Zhou, H. H. (2015). Rate-optimal graphon estimation. The Annals
of Statistics 43 2624–2652.
Goldberg, D., Nichols, D., Oki, B. M. and Terry, D. (1992). Using Collaborative
Filtering to Weave an Information Tapestry. Commun. ACM.
Hoover, D. N. (1981). Row-column exchangeability and a generalized model for probability. In Exchangeability in Probability and Statistics (Rome, 1981) 281 - 291.
Karrer, B., Newman, M. E. J. and Zdeborová, L. (2014). Percolation on Sparse
Networks. Phys. Rev. Lett. 113 208702.
Keshavan, R. H., Montanari, A. and Oh, S. (2010a). Matrix completion from a few
entries. IEEE Transactions on Information Theory 56 2980–2998.
Keshavan, R. H., Montanari, A. and Oh, S. (2010b). Matrix completion from noisy
entries. Journal of Machine Learning Research 11 2057–2078.
Klopp, O., Tsybakov, A. B. and Verzelen, N. (2015). Oracle inequalities for network
models and sparse graphon estimation. To appear in Annals of Statistics.
Koren, Y. and Bell, R. (2011). Advances in Collaborative Filtering. In Recommender
Systems Handbook 145-186. Springer US.
Krzakala, F., Moore, C., Mossel, E., Neeman, J., Sly, A., Zdeborová, L. and Zhang, P. (2013). Spectral redemption in clustering sparse networks. Proceedings of the National Academy of Sciences 110 20935-20940.
Lee, C. E., Li, Y., Shah, D. and Song, D. (2016). Blind Regression: Nonparametric Regression for Latent Variable Models via Collaborative Filtering. In Advances in Neural
Information Processing Systems 29 2155–2163.
Linden, G., Smith, B. and York, J. (2003). Amazon.com Recommendations: Item-to-Item Collaborative Filtering. IEEE Internet Computing 7 76–80.
Lovász, L. (2012). Large networks and graph limits 60. American Mathematical Society
Providence.
Massoulié, L. (2014). Community Detection Thresholds and the Weak Ramanujan Property. In Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing. STOC ’14 694–703. ACM, New York, NY, USA.
Mossel, E., Neeman, J. and Sly, A. (2017). A proof of the block model threshold
conjecture. Combinatorica.
Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low-rank matrices
with noise and high-dimensional scaling. The Annals of Statistics 1069–1097.
Ning, X., Desrosiers, C. and Karypis, G. (2015). A Comprehensive Survey of Neighborhood-Based Recommendation Methods. In Recommender Systems Handbook 37-76. Springer US.
Recht, B. (2011). A simpler approach to matrix completion. Journal of Machine Learning
Research 12 3413–3430.
Steurer, D. and Hopkins, S. (2017). Bayesian estimation from few samples: community
detection and related problems.
Veitch, V. and Roy, D. M. (2015). The class of random graphs arising from exchangeable
random measures. arXiv preprint arXiv:1512.03099.
Wolfe, P. J. and Olhede, S. C. (2013). Nonparametric graphon estimation. arXiv
preprint arXiv:1309.5936.
Xu, J., Massoulié, L. and Lelarge, M. (2014). Edge label inference in generalized
stochastic block models: from spectral theory to impossibility results. In Conference on
Learning Theory 903–920.
Zhang, Y., Levina, E. and Zhu, J. (2015). Estimating network edge probabilities by
neighborhood smoothing. arXiv preprint arXiv:1509.08588.
One Memorial Drive, Cambridge, MA 02142
77 Massachusetts Avenue, Cambridge, MA 02139
APPENDIX A: PROOF OF LEMMAS
In this section we provide detailed proofs of the lemmas outlined above, which we split further into a sequence of smaller lemmas. We order them according to the concentration results associated to different sets of random variables in the dataset, $E_1$, $E_2$, $\{\theta_u\}_{u \in [n]}$, and $E_3$, which also roughly follows the steps of the algorithm. Lemma 6.1 follows directly from Lemmas A.2, A.4, A.8, and A.12. Lemma 6.2 follows directly from Lemmas A.12, A.19, A.17, and A.18. Lemma 6.3 follows directly from Lemmas A.12, A.13, A.20, A.21, and A.22.
In Figure 1, we show the dependencies amongst the Lemmas in Section 7. The main theorem results from the final Lemmas A.32, A.34, A.33, and A.35. The blue rectangles indicate Lemmas that prove probabilistic concentration statements, i.e. bounding the probabilities of bad events. The red ovals indicate Lemmas that show that good properties result from conditioning on the good events, which leads to bounds on the error; these are not probabilistic statements.

[Fig 1. Chart of dependencies amongst the Lemmas in Section 7 (nodes L 7.1 through L 7.35). Blue rectangles: Lemmas that bound probabilities of bad events. Red ovals: Lemmas that show good properties result from conditioning on good events.]
We introduce additional notation used within the proofs. For each vertex
u ∈ [n], let Fu,0 ⊂ Fu,1 ⊂ Fu,2 . . . Fu,n be the filtration associated with
revealing the latent vertex parameters and observed edge weights within
local neighborhoods of vertex u with respect to the edge set E1 . Note that
Su,s , Bu,s , {αv }v∈Su,s , Nu,s , and Ñu,s are all measurable with respect to
Fu,s . Let FE1 be the sigma algebra associated with revealing all latent vertex
parameters {αv }v∈[n] and edge weights of E1 . Let FE2 ,E1 be the sigma algebra
associated with revealing all latent vertex parameters {αv }v∈[n] and edge
weights of E1 and E2 . Note that dist1 (u, v), dist2 (u, v), Vu1 , Vu2 , Wu1 , and
Wu2 are all measurable with respect to FE2 ,E1 . We will use I(·) to denote the
indicator function, 1 to denote the vector of all ones, and ej to denote the
standard basis vector with a one at coordinate j and zero elsewhere.
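To make these objects concrete, the following is a minimal Python sketch (an illustration, not the authors' implementation) of how the breadth-first quantities $S_{u,s}$, the visited set $B_{u,s}$, $N_{u,s}$, and $\tilde{N}_{u,s}$ could be computed from the weighted adjacency structure $M_1$ of the edge set $E_1$. The propagation rule $N_{u,s+1}(i) = \sum_b \mathbb{I}((b,i) \in T_u) N_{u,s}(b) M_1(b,i)$ is taken from the proofs of Lemmas A.3 and A.7; names such as local_neighborhood are hypothetical.

    import random
    from collections import defaultdict

    def local_neighborhood(M1, u, depth):
        """BFS over the E_1 graph. M1 is a dict of edge weights in [0, 1], keyed by
        vertex pairs (either ordering). Returns, for each s <= depth: S[s] (vertices
        at distance s from u), N[s] (vertex -> product of weights along the BFS-tree
        path from u), and Ntil[s] = N[s] / |S[s]|. Ties for the BFS-tree parent are
        broken uniformly at random, matching the construction of T_u in the text."""
        adj = defaultdict(list)
        for (a, b), w in M1.items():
            adj[a].append((b, w))
            adj[b].append((a, w))
        S, N = {0: {u}}, {0: {u: 1.0}}
        visited = {u}                        # all vertices within distance s of u
        for s in range(depth):
            candidates = defaultdict(list)   # new vertex -> list of (parent, weight)
            for b in S[s]:
                for i, w in adj[b]:
                    if i not in visited:
                        candidates[i].append((b, w))
            S[s + 1], N[s + 1] = set(), {}
            for i, parents in candidates.items():
                b, w = random.choice(parents)   # uniformly random BFS-tree parent
                S[s + 1].add(i)
                N[s + 1][i] = N[s][b] * w
            visited |= S[s + 1]
        Ntil = {s: {i: v / max(len(S[s]), 1) for i, v in N[s].items()} for s in N}
        return S, N, Ntil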
A.1. Sparsity of Local Neighborhood Grows Exponentially. We first begin by showing concentration results pertaining to the sparsity of observed entries. Specifically, we prove in Lemma A.1 that $|S_{u,s+1}|$ is approximately equal to $|B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big)$ conditioned on $\mathcal{F}_{u,s}$. This will allow us to show in Lemma A.2 that the size of the $(s+1)$-radius boundary of vertex $u$ is approximately $c_1 p n$ times larger than the size of the $s$-radius boundary of vertex $u$.
Lemma A.1. For any $s \in \mathbb{Z}_+$ such that $|S_{u,s}| < (c_1 p)^{-7/8}$, and $t \in (0,\, 1 - c_1 p |S_{u,s}|)$,
$$\mathbb{P}\left( \Big| |S_{u,s+1}| - |B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big) \Big| > |B^c_{u,s}|\, |S_{u,s}|\, c_1 p\, t \ \Big|\ \mathcal{F}_{u,s} \right) \le 2 \exp\left( - \frac{t^2 |B^c_{u,s}|\, |S_{u,s}|\, c_1 p}{3} \right).$$
Proof. Conditioned on $\mathcal{F}_{u,s}$, the set of vertices which have a path to $u$ of length shorter than $s+1$ is denoted by $B_{u,s}$, so that the complement $B^c_{u,s}$ contains all vertices which are eligible to be in $S_{u,s+1}$. A vertex is at distance $s+1$ from vertex $u$ if it is connected to a vertex which is at distance $s$ from $u$, and it is not connected to any vertex at distance less than $s$ from $u$. Therefore, in order to count the number of vertices at distance $s+1$, we can count the number of vertices within $B^c_{u,s}$ which have at least one edge to some vertex in $S_{u,s}$.

By construction, an edge is in $E_1$ with probability $c_1 p$, such that for a vertex $j \in B^c_{u,s}$,
$$\mathbb{E}\big[\mathbb{I}(j \in S_{u,s+1}) \mid \mathcal{F}_{u,s}\big] = \mathbb{P}\big(\cup_{b \in S_{u,s}} \{(b,j) \in E_1\}\big) = 1 - \mathbb{P}\big(\cap_{b \in S_{u,s}} \{(b,j) \notin E_1\}\big) = 1 - (1 - c_1 p)^{|S_{u,s}|}.$$
For a different vertex $i \in B^c_{u,s}$ such that $i \neq j$, $\mathbb{I}(i \in S_{u,s+1})$ is independent from $\mathbb{I}(j \in S_{u,s+1})$ since the involved edge sets are disjoint. Therefore, $|S_{u,s+1}|$ conditioned on $\mathcal{F}_{u,s}$ is distributed as a Binomial random variable with parameters $|B^c_{u,s}|$ and $1 - (1 - c_1 p)^{|S_{u,s}|}$. Therefore,
$$\mathbb{E}\big[\, |S_{u,s+1}| \mid \mathcal{F}_{u,s}\big] = |B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big).$$
By Chernoff's bound, for some $\delta \in (0,1)$,
$$(\mathrm{A.1}) \quad \mathbb{P}\left( \Big| |S_{u,s+1}| - |B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big) \Big| > \delta\, |B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big) \right) < 2 \exp\left( - \frac{\delta^2 |B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big)}{3} \right).$$
By the condition that $|S_{u,s}| < (c_1 p)^{-7/8} = o((c_1 p)^{-1})$,
$$(\mathrm{A.2}) \quad 1 - (1 - c_1 p)^{|S_{u,s}|} \ge c_1 p |S_{u,s}|\big(1 - c_1 p |S_{u,s}|/2\big) = c_1 p |S_{u,s}| (1 - o(1)).$$
Therefore, for any $t \in (0,\, 1 - c_1 p |S_{u,s}|)$, we can verify that $\delta \in (0,1)$ for the choice of
$$\delta := \frac{t\, |S_{u,s}|\, c_1 p}{1 - (1 - c_1 p)^{|S_{u,s}|}}.$$
By substituting this choice of $\delta$ into (A.1), it follows that
$$\mathbb{P}\left( \Big| |S_{u,s+1}| - |B^c_{u,s}|\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big) \Big| > t\, |B^c_{u,s}|\, |S_{u,s}|\, c_1 p \ \Big|\ \mathcal{F}_{u,s} \right) \le 2 \exp\left( - \frac{t^2 |B^c_{u,s}|\, |S_{u,s}|^2 c_1^2 p^2}{3\big(1 - (1 - c_1 p)^{|S_{u,s}|}\big)} \right) \le 2 \exp\left( - \frac{t^2 |B^c_{u,s}|\, |S_{u,s}|\, c_1 p}{3} \right),$$
where the last inequality follows from $1 - (1 - c_1 p)^{|S_{u,s}|} \le c_1 p |S_{u,s}|$.
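A minimal numerical sketch (not part of the proof; all parameter values are our own illustrative assumptions) of the Binomial characterization of $|S_{u,s+1}|$ used above: each eligible vertex in $B^c_{u,s}$ joins $S_{u,s+1}$ independently with probability $1 - (1 - c_1 p)^{|S_{u,s}|}$.

    import numpy as np

    rng = np.random.default_rng(0)
    n_elig, c1p, S_size, trials = 100_000, 5e-4, 50, 2000   # assumed values
    q = 1 - (1 - c1p) ** S_size          # success probability per eligible vertex
    samples = rng.binomial(n_elig, q, size=trials)
    print("empirical mean:", samples.mean(), " predicted mean:", n_elig * q)
    print("empirical std :", samples.std(), " Binomial std  :", np.sqrt(n_elig * q * (1 - q)))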
Let us define
$$(\mathrm{A.3}) \quad A^1_{u,h} := \left\{ \Big| |S_{u,h}| - |B^c_{u,h-1}|\big(1 - (1 - c_1 p)^{|S_{u,h-1}|}\big) \Big| < |B^c_{u,h-1}|\, |S_{u,h-1}|\, c_1 p\, \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} \right\}.$$
Lemma A.2. Assuming that $c_1 p n = \omega(1)$, for any $u \in [n]$ and $s$ which satisfies
$$\left(\frac{9 c_1 p n}{8}\right)^s \le (c_1 p)^{-7/8},$$
conditioned on $\cap_{h=1}^{s} A^1_{u,h}$, for sufficiently large $n$, for all $h \in [s]$,
$$|S_{u,h}| \in \left[ \left(\frac{7 c_1 p n}{8}\right)^h,\ \left(\frac{9 c_1 p n}{8}\right)^h \right].$$
Proof. Recall the definition of event $A^1_{u,h}$ from (A.3). Using the fact that $(1 - c_1 p)^{|S_{u,h-1}|} \ge 1 - c_1 p |S_{u,h-1}|$ and $|B^c_{u,h-1}| \le n$, conditioned on event $A^1_{u,h}$,
$$|S_{u,h}| \le |B^c_{u,h-1}|\big(1 - (1 - c_1 p)^{|S_{u,h-1}|}\big) + |S_{u,h-1}|\, c_1 p\, \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} \le n\, c_1 p |S_{u,h-1}| + |S_{u,h-1}|\, c_1 p\, \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} = c_1 p n\, |S_{u,h-1}| (1 + o(1)),$$
where the last step follows from the assumption that $c_1 p n = \omega(1)$. Specifically, for $c_1 p n B_q^4 \ge 2^{6/(1-2\theta)}$, it follows from $\big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} \le 1/8$ that
$$|S_{u,h}| \le \frac{9}{8} c_1 p n\, |S_{u,h-1}|.$$
Therefore, conditioned on $\cap_{h=1}^{s} A^1_{u,h}$, for all $h \in [s]$,
$$(\mathrm{A.4}) \quad |S_{u,h}| \le \left(\frac{9}{8} c_1 p n\right)^h.$$
By assumption on the value of $s$, it follows that $|S_{u,h}| \le (c_1 p)^{-7/8} = o((c_1 p)^{-1})$ for all $h \in [s]$. Therefore,
$$(\mathrm{A.5}) \quad 1 - (1 - c_1 p)^{|S_{u,h}|} \ge c_1 p |S_{u,h}|\big(1 - c_1 p |S_{u,h}|/2\big) = c_1 p |S_{u,h}| (1 - o(1)).$$
Conditioned on event $A^1_{u,h}$,
$$|S_{u,h}| \ge |B^c_{u,h-1}|\big(1 - (1 - c_1 p)^{|S_{u,h-1}|}\big) - |S_{u,h-1}|\, c_1 p\, \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} \ge |B^c_{u,h-1}|\, c_1 p |S_{u,h-1}|\big(1 - c_1 p |S_{u,h-1}|/2\big) - |S_{u,h-1}|\, c_1 p\, \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta}$$
$$(\mathrm{A.6}) \quad \ge |B^c_{u,h-1}|\, c_1 p |S_{u,h-1}| \left( 1 - (c_1 p)^{1/8}/2 - \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} \right) = |B^c_{u,h-1}|\, c_1 p |S_{u,h-1}| (1 - o(1)),$$
where the last step followed from the assumption that $c_1 p n = \omega(1)$ and $c_1 p = o(1)$. By plugging (A.4) into the definition of $|B^c_{u,h-1}|$, it follows that
$$|B^c_{u,h-1}| = n - \sum_{t=0}^{h-1} |S_{u,t}| \ge n - \sum_{t=0}^{h-1} \left(\frac{9}{8} c_1 p n\right)^t = n - \frac{\left(\frac{9}{8} c_1 p n\right)^h - 1}{\frac{9}{8} c_1 p n - 1}.$$
By assumption on the value of $s$,
$$(\mathrm{A.7}) \quad |B^c_{u,h-1}| \ge n - \frac{(c_1 p)^{-7/8} - 1}{\frac{9}{8} c_1 p n - 1}.$$
Since $c_1 p = \omega(\frac{1}{n})$, we have that $n^{-1}(c_1 p)^{-15/8} = o(n^{7/8})$. Further, $(c_1 p)^{-1} = \omega(1)$. Therefore, it follows that
$$(\mathrm{A.8}) \quad |B^c_{u,h-1}| = n - o(n^{7/8}).$$
Therefore, by plugging into (A.6), it follows that
$$(\mathrm{A.9}) \quad |S_{u,h}| \ge c_1 p n\, |S_{u,h-1}| (1 - o(1)).$$
For sufficiently large $n$,
$$(\mathrm{A.10}) \quad |S_{u,h}| \ge \frac{7 c_1 p n}{8}\, |S_{u,h-1}|.$$
Therefore, conditioned on $\cap_{h=1}^{s} A^1_{u,h}$, for all $h \in [s]$,
$$(\mathrm{A.11}) \quad |S_{u,h}| \ge \left(\frac{7}{8} c_1 p n\right)^h.$$
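A short illustrative sketch (with assumed parameter values) of the growth range established in Lemma A.2: the $h$-radius boundary size lies between $(7 c_1 p n / 8)^h$ and $(9 c_1 p n / 8)^h$ for every depth $h$ small enough that the upper bound stays below $(c_1 p)^{-7/8}$.

    c1pn = 50.0          # assumed value of c_1 p n
    c1p = 1e-4           # assumed value of c_1 p
    cap = c1p ** (-7 / 8)
    h = 1
    while (9 * c1pn / 8) ** h <= cap:
        lower, upper = (7 * c1pn / 8) ** h, (9 * c1pn / 8) ** h
        print(f"depth {h}: |S_u,h| in [{lower:.3g}, {upper:.3g}]")
        h += 1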
A.2. Concentration of Paths within Local Neighborhood. We show concentration of the $(s+1)$-radius boundary of node $u$ in terms of $e_k^T Q N_{u,s+1}$ conditioned on $\mathcal{F}_{u,s}$ and $|S_{u,s+1}|$. Specifically, for every $k \in [d]$, we prove that $e_k^T Q \tilde{N}_{u,s+1}$ is approximately equal to $e_k^T \Lambda Q \tilde{N}_{u,s}$.
Lemma A.3. For any $k \in [d]$ and $s \in \mathbb{Z}_+$,
$$\mathbb{P}\left( \big| e_k^T Q \tilde{N}_{u,s+1} - e_k^T \Lambda Q \tilde{N}_{u,s} \big| > t \ \Big|\ \mathcal{F}_{u,s}, S_{u,s+1} \right) \le 2 \exp\left( - \frac{3 |S_{u,s+1}|\, t^2}{6 \|\tilde{N}_{u,s}\|_1 + 2 B_q t} \right).$$
Proof. By definition,
$$e_k^T Q N_{u,s+1} = \sum_{i \in S_{u,s+1}} N_{u,s+1}(i)\, q_k(\alpha_i) = \sum_{i \in S_{u,s+1}} \sum_{b \in S_{u,s}} \mathbb{I}\big((b,i) \in T_u\big)\, N_{u,s}(b)\, M_1(b,i)\, q_k(\alpha_i).$$
Due to the construction of $T_u$ as a BFS tree, for all $i$,
$$\sum_{b \in S_{u,s}} \mathbb{I}\big((b,i) \in T_u\big) = \mathbb{I}\big(i \in S_{u,s+1}\big) \in \{0,1\}.$$
If there were more than one possible parent for $i$ in the graph defined by edge set $E_1$, $T_u$ is constructed by simply choosing one uniformly at random. Therefore, conditioned on $\mathbb{I}(i \in S_{u,s+1})$, the parent of $i$ is equally likely to have been any of $b \in S_{u,s}$, such that
$$(\mathrm{A.12}) \quad \mathbb{E}\big[ \mathbb{I}\big((b,i) \in T_u\big) \mid \mathbb{I}(i \in S_{u,s+1}), \mathcal{F}_{u,s} \big] = \frac{1}{|S_{u,s}|}.$$
For $(b,i) \in T_u$, by construction, $M_1(b,i) = M(b,i) = Z(b,i)$. Recall that $Z(b,i) \in [0,1]$ is independent from the tree structure (the sampling of edges $E$), and its expectation is a function of $\alpha_b$ and $\alpha_i$.

Let us define $X_i = \sum_{b \in S_{u,s}} \mathbb{I}\big((b,i) \in T_u\big) N_{u,s}(b) Z(b,i) q_k(\alpha_i)$, such that
$$e_k^T Q N_{u,s+1} = \sum_{i \in S_{u,s+1}} X_i.$$
By the model assumptions that $N_{u,s}(b) \in [0,1]$, $Z(b,i) \in [0,1]$, and $|q_k(\alpha_i)| \in [0, B_q]$, it follows that $|X_i| \le B_q$. If $i \notin S_{u,s+1}$, then $X_i = 0$ with probability 1. We next compute the expectation and variance of $X_i$ and then proceed to apply Bernstein's inequality to bound $e_k^T Q N_{u,s+1}$.

Recall that $N_{u,s}$ and $S_{u,s}$ are fixed conditioned on $\mathcal{F}_{u,s}$. Conditioned on $\mathcal{F}_{u,s}$ and $S_{u,s+1}$, the $X_i$ are identically distributed for $i \in S_{u,s+1}$. We can verify that for $i \neq j$, $X_i$ is independent from $X_j$ conditioned on $\mathcal{F}_{u,s}$ and $S_{u,s+1}$ because $\alpha_i$ is independent from $\alpha_j$, and the variables $\mathbb{I}((b,i) \in T_u)$ are only correlated for the same $i$ but not correlated across different potential children vertices, i.e. the tree constraint only enforces that each child has exactly one parent, but allows parents to have arbitrarily many children. Therefore $e_k^T Q N_{u,s+1}$ is a sum of iid random variables.

We use the independence of the sampled edge set $E_1$ from the latent variables, along with (A.12) and the law of iterated expectations, to show that
$$\mathbb{E}[X_i \mid i \in S_{u,s+1}, \mathcal{F}_{u,s}] = \mathbb{E}\Big[ \sum_{b \in S_{u,s}} \mathbb{I}\big((b,i) \in T_u\big) N_{u,s}(b) Z(b,i) q_k(\alpha_i) \ \Big|\ i \in S_{u,s+1}, \mathcal{F}_{u,s} \Big] = \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)\, \mathbb{E}\big[ Z(b,i) q_k(\alpha_i) \mid \mathcal{F}_{u,s} \big] = \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)\, \mathbb{E}\big[ \mathbb{E}[Z(b,i) \mid \alpha_i, \mathcal{F}_{u,s}]\, q_k(\alpha_i) \mid \mathcal{F}_{u,s} \big].$$
For $b \in S_{u,s}$, $\alpha_b$ is fixed when conditioned on $\mathcal{F}_{u,s}$, such that $\mathbb{E}[Z(b,i) \mid \alpha_i, \mathcal{F}_{u,s}] = f(\alpha_b, \alpha_i)$. Additionally, by the spectral decomposition of $f$,
$$\mathbb{E}[X_i \mid i \in S_{u,s+1}, \mathcal{F}_{u,s}] = \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)\, \mathbb{E}\big[ f(\alpha_b, \alpha_i) q_k(\alpha_i) \mid \mathcal{F}_{u,s} \big] = \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)\, \mathbb{E}\Big[ \sum_{h \in [d]} \lambda_h q_h(\alpha_b) q_h(\alpha_i) q_k(\alpha_i) \ \Big|\ \mathcal{F}_{u,s} \Big] = \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)\, \lambda_k q_k(\alpha_b) = \frac{1}{|S_{u,s}|} e_k^T \Lambda Q N_{u,s}$$
$$(\mathrm{A.13}) \quad = e_k^T \Lambda Q \tilde{N}_{u,s},$$
where we used the orthonormality property of the eigenfunctions $q_h(\cdot)$. Using similar arguments, we can compute a bound on the variance of $X_i$, using the fact that $\sum_{b \in S_{u,s}} \mathbb{I}((b,i) \in T_u) \in \{0,1\}$, such that $\mathbb{I}((b,i) \in T_u)$ is active for at most one value of $b \in S_{u,s}$:
$$\mathrm{Var}[X_i \mid i \in S_{u,s+1}, \mathcal{F}_{u,s}] \le \mathbb{E}[X_i^2 \mid i \in S_{u,s+1}, \mathcal{F}_{u,s}] = \mathbb{E}\Big[ \sum_{b \in S_{u,s}} \mathbb{I}\big((b,i) \in T_u\big) N_{u,s}(b)^2 Z(b,i)^2 q_k(\alpha_i)^2 \ \Big|\ i \in S_{u,s+1}, \mathcal{F}_{u,s} \Big] = \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)^2\, \mathbb{E}\big[ Z(b,i)^2 q_k(\alpha_i)^2 \mid \mathcal{F}_{u,s} \big].$$
Because $Z(b,i) \in [0,1]$, it follows that $Z(b,i)^2 \le 1$. By assumption on $f$ and $q_k(\cdot)$, $\mathbb{E}[q_k(\alpha_i)^2] = \int_{\mathcal{X}_1} q_k(y)^2 dP_1(y) = 1$. Therefore,
$$\mathrm{Var}[X_i \mid i \in S_{u,s+1}, \mathcal{F}_{u,s}] \le \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)^2\, \mathbb{E}[q_k(\alpha_i)^2] \le \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b)^2.$$
By construction, $N_{u,s}(b) \in [0,1]$. Therefore $N_{u,s}(b)^2 \in [0, N_{u,s}(b)]$ and by definition $\tilde{N}_{u,s} = [N_{u,s}(b)/|S_{u,s}|]$. Subsequently,
$$(\mathrm{A.14}) \quad \mathrm{Var}[X_i \mid i \in S_{u,s+1}, \mathcal{F}_{u,s}] \le \sum_{b \in S_{u,s}} \frac{1}{|S_{u,s}|} N_{u,s}(b) = \|\tilde{N}_{u,s}\|_1.$$
By definition, $e_k^T Q \tilde{N}_{u,s+1} = \sum_{i \in S_{u,s+1}} X_i / |S_{u,s+1}|$. Also we have that $|X_i| \le B_q$. Therefore, by applying Bernstein's inequality and using (A.13) and (A.14), it follows that
$$\mathbb{P}\big( | e_k^T Q \tilde{N}_{u,s+1} - e_k^T \Lambda Q \tilde{N}_{u,s} | > t \mid \mathcal{F}_{u,s}, S_{u,s+1} \big) = \mathbb{P}\Big( \Big| \sum_{i} \big(X_i - \mathbb{E}[X_i]\big) \Big| > t |S_{u,s+1}| \ \Big|\ \mathcal{F}_{u,s}, S_{u,s+1} \Big) \le 2 \exp\left( - \frac{\tfrac{1}{2} |S_{u,s+1}|^2 t^2}{|S_{u,s+1}| \|\tilde{N}_{u,s}\|_1 + \tfrac{1}{3} B_q t |S_{u,s+1}|} \right) = 2 \exp\left( - \frac{3 |S_{u,s+1}| t^2}{6 \|\tilde{N}_{u,s}\|_1 + 2 B_q t} \right).$$
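For completeness, the version of Bernstein's inequality that we read the last display as invoking (a standard form, stated here as an aid rather than as part of the original proof) is the following, applied with $m = |S_{u,s+1}|$, variance proxy $V \le m \|\tilde{N}_{u,s}\|_1$ from (A.14), bound $B = B_q$ on the summands, and deviation level $\epsilon = t\,|S_{u,s+1}|$:
$$\mathbb{P}\left( \Big| \sum_{i=1}^{m} \big( Y_i - \mathbb{E}[Y_i] \big) \Big| > \epsilon \right) \le 2 \exp\left( - \frac{\tfrac{1}{2} \epsilon^2}{V + \tfrac{1}{3} B \epsilon} \right), \qquad \text{where } |Y_i| \le B \ \text{and} \ \sum_{i=1}^{m} \mathrm{Var}[Y_i] \le V.$$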
Let us define
$$(\mathrm{A.15}) \quad A^2_{u,h,k} := \left\{ \big| e_k^T Q \tilde{N}_{u,h} - e_k^T \Lambda Q \tilde{N}_{u,h-1} \big| < (c_1 p n)^{-\frac{1}{2}+\theta} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{h-1} \right\}.$$
Lemma A.4. For any $u \in [n]$, $k \in [d]$, and $r \in \mathbb{Z}_+$, conditioned on $\cap_{h=1}^{r} A^2_{u,h,k}$, for any $r_0 \in \mathbb{Z}_+$ and $\Delta \in [r_0]$ such that $r_0 + \Delta \le r$,
$$\big| e_k^T Q \tilde{N}_{u,r_0+\Delta} - e_k^T \Lambda^{\Delta} Q \tilde{N}_{u,r_0} \big| \le \frac{2 |\lambda_k|^{\Delta-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{r_0}.$$
Proof. Recall the definition of $A^2_{u,h,k}$ from (A.15). We can write the telescoping sum
$$e_k^T Q \tilde{N}_{u,r_0+\Delta} - e_k^T \Lambda^{\Delta} Q \tilde{N}_{u,r_0} = \sum_{h=1}^{\Delta} \left( e_k^T \Lambda^{\Delta-h} Q \tilde{N}_{u,r_0+h} - e_k^T \Lambda^{\Delta-h+1} Q \tilde{N}_{u,r_0+h-1} \right) = \sum_{h=1}^{\Delta} \lambda_k^{\Delta-h} \left( e_k^T Q \tilde{N}_{u,r_0+h} - e_k^T \Lambda Q \tilde{N}_{u,r_0+h-1} \right).$$
Conditioned on $\cap_{h=1}^{r_0+\Delta} A^2_{u,h,k}$,
$$\big| e_k^T Q \tilde{N}_{u,r_0+\Delta} - e_k^T \Lambda^{\Delta} Q \tilde{N}_{u,r_0} \big| \le \sum_{h=1}^{\Delta} \frac{|\lambda_k|^{\Delta-h}}{(c_1 p n)^{\frac{1}{2}-\theta}} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{r_0+h-1} = \frac{|\lambda_k|^{\Delta-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{r_0} \sum_{h=0}^{\Delta-1} \left( \frac{|\lambda_d|^2}{2 |\lambda_1| |\lambda_k|} \right)^{h}.$$
Recall that we ordered the eigenvalues such that $|\lambda_d|^2 \le |\lambda_1| |\lambda_k|$, which implies
$$\big| e_k^T Q \tilde{N}_{u,r_0+\Delta} - e_k^T \Lambda^{\Delta} Q \tilde{N}_{u,r_0} \big| \le \frac{|\lambda_k|^{\Delta-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{r_0} \sum_{h=0}^{\Delta-1} 2^{-h} \le \frac{2 |\lambda_k|^{\Delta-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{r_0}.$$
Lemma A.5. Assume that $|\lambda_k| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$. For any $u, v \in [n]$, $k \in [d]$, and $r \in \mathbb{Z}_+$, conditioned on $\cap_{h=1}^{r} (A^2_{u,h,k} \cap A^2_{v,h,k})$, for any $r_0 \in \mathbb{Z}_+$ and $\Delta_1, \Delta_2 \in [r_0]$ such that $r_0 + \Delta_1 \le r$ and $r_0 + \Delta_2 \le r$,
$$\Big| (e_k^T Q \tilde{N}_{u,r_0+\Delta_1})(e_k^T Q \tilde{N}_{v,r_0+\Delta_2}) - (e_k^T \Lambda^{\Delta_1} Q \tilde{N}_{u,r_0})(e_k^T \Lambda^{\Delta_2} Q \tilde{N}_{v,r_0}) \Big| \le \left( \frac{|\lambda_d|^2 |\lambda_k|}{2 |\lambda_1|} \right)^{r_0} \frac{4 B_q |\lambda_k|^{\Delta_1+\Delta_2-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} \,(1 + o(1)).$$
This bound also applies if $u = v$.
Proof. Recall the definition of A2u,h,k from (A.15). Using the fact that
2(xy − ab) = (x − a)(y + b) + (x + a)(y − b), it follows that
(eTk QÑu,r0 +∆1 )(eTk QÑv,r0 +∆2 ) − (eTk Λ∆1 QÑu,r0 )(eTk Λ∆2 QÑv,r0 )
1
= ((eTk QÑu,r0 +∆1 ) − (eTk Λ∆1 QÑu,r0 ))((eTk QÑv,r0 +∆2 ) + (eTk Λ∆2 QÑv,r0 ))
2
(A.16)
1
+ ((eTk QÑu,r0 +∆1 ) + (eTk Λ∆1 QÑu,r0 ))((eTk QÑv,r0 +∆2 ) − (eTk Λ∆2 QÑv,r0 )).
2
By rearranging, it follows that
(eTk QÑv,r0 +∆2 ) + (eTk Λ∆2 QÑv,r0 )
= (eTk QÑv,r0 +∆2 ) − (eTk Λr0 +∆2 Qev ) + (eTk Λ∆2 QÑv,r0 )
(A.17)
− (eTk Λr0 +∆2 Qev ) + 2(eTk Λr0 +∆2 Qev ).
By substituting (A.17) into (A.16), and then applying Lemma A.4, it follows
0 +∆2
0 +∆1
A2v,h,k ,
A2u,h,k ∩rh=1
that conditioned on ∩rh=1
(eTk QÑu,r0 +∆1 )(eTk QÑv,r0 +∆2 ) − (eTk Λ∆1 QÑu,r0 )(eTk Λ∆2 QÑv,r0 )
!
!
r
1 2|λk |∆1 −1
|λd |2 0 4|λk |r0 +∆2 −1
T r0 +∆2
≤
+ 2 ek Λ
Qev
1
2 (c1 pn) 12 −θ
2|λ1 |
(c1 pn) 2 −θ
!
!
r
∆2 −1
2|λ
|
|λd |2 0
1 4|λk |r0 +∆1 −1
k
T r0 +∆1
Qeu
.
+ 2 ek Λ
+
1
1
2
2|λ1 |
(c1 pn) 2 −θ
(c1 pn) 2 −θ
By the assumption that |qk (·)| ≤ Bq ,
eTk Λr0 +∆2 Qev = |λrk0 +∆2 qk (αv )| ≤ Bq |λk |r0 +∆2 .
Therefore, it follows that
(eTk QÑu,r0 +∆1 )(eTk QÑv,r0 +∆2 ) − (eTk Λ∆1 QÑu,r0 )(eTk Λ∆2 QÑv,r0 )
!
4|λ |∆1 +∆2 −2 |λ |2 |λ | r0
1
k
d
k
≤ 2(c1 pn)− 2 +θ + |λk |Bq
1
−θ
2|λ1 |
(c1 pn) 2
!
r
|λd |2 |λk | 0 4Bq |λk |∆1 +∆2 −1
=
(1 + o(1)) ,
1
2|λ1 |
(c1 pn) 2 −θ
1
where the last step follows from |λk | = ω((c1 pn)− 2 +θ ).
Lemma A.6. Assume that $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$. For any $u, v \in [n]$ and $r \in \mathbb{Z}_+$, conditioned on $\cap_{k=1}^{d} \cap_{h=1}^{r} (A^2_{u,h,k} \cap A^2_{v,h,k})$, for any $r_1, r_2 \in [r]$ and $r_0 \in \mathbb{Z}_+$ such that $r_0 \le \min(r_1, r_2)$,
$$\Big| \tilde{N}_{u,r_1}^T F \tilde{N}_{v,r_2} - \tilde{N}_{u,r_0}^T Q^T \Lambda^{r_1+r_2-2r_0+1} Q \tilde{N}_{v,r_0} \Big| \le \left( \frac{|\lambda_d|^2}{2} \right)^{r_0} \frac{4 B_q d\, |\lambda_1|^{r_1+r_2-2r_0}}{(c_1 p n)^{\frac{1}{2}-\theta}} \,(1 + o(1)).$$
Proof. Recall the definition of $A^2_{u,h,k}$ from (A.15). By definition, $F = Q^T \Lambda Q$, such that
$$\Big| \tilde{N}_{u,r_1}^T F \tilde{N}_{v,r_2} - \tilde{N}_{u,r_0}^T Q^T \Lambda^{r_1+r_2-2r_0+1} Q \tilde{N}_{v,r_0} \Big| = \Big| \tilde{N}_{u,r_1}^T Q^T \Lambda Q \tilde{N}_{v,r_2} - \tilde{N}_{u,r_0}^T Q^T \Lambda^{r_1+r_2-2r_0+1} Q \tilde{N}_{v,r_0} \Big| = \Big| \sum_{k} \lambda_k \Big( (e_k^T Q \tilde{N}_{u,r_1})(e_k^T Q \tilde{N}_{v,r_2}) - (e_k^T \Lambda^{r_1-r_0} Q \tilde{N}_{u,r_0})(e_k^T \Lambda^{r_2-r_0} Q \tilde{N}_{v,r_0}) \Big) \Big|.$$
By Lemma A.5, conditioned on $\cap_{k=1}^{d} \cap_{h=1}^{r} (A^2_{u,h,k} \cap A^2_{v,h,k})$, if we substitute $r_1 - r_0$ for $\Delta_1$ and $r_2 - r_0$ for $\Delta_2$, and since $|\lambda_k| \le |\lambda_1|$ for all $k$, it follows that
$$\Big| \tilde{N}_{u,r_1}^T F \tilde{N}_{v,r_2} - \tilde{N}_{u,r_0}^T Q^T \Lambda^{r_1+r_2-2r_0+1} Q \tilde{N}_{v,r_0} \Big| \le \sum_{k} |\lambda_k| \left( \frac{|\lambda_d|^2 |\lambda_k|}{2 |\lambda_1|} \right)^{r_0} \frac{4 B_q |\lambda_k|^{r_1+r_2-2r_0-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} (1 + o(1)) \le \left( \frac{|\lambda_d|^2}{2} \right)^{r_0} \frac{4 B_q d\, |\lambda_1|^{r_1+r_2-2r_0}}{(c_1 p n)^{\frac{1}{2}-\theta}} (1 + o(1)).$$
Next we prove concentration of $\|N_{u,s+1}\|_1$ conditioned on $\mathcal{F}_{u,s}$ and $S_{u,s+1}$, using similar arguments as in the proof of Lemma A.3.

Lemma A.7. For any $s \in \mathbb{Z}_+$,
$$\mathbb{P}\left( \big| \|\tilde{N}_{u,s+1}\|_1 - \rho^T \Lambda Q \tilde{N}_{u,s} \big| > t \ \Big|\ \mathcal{F}_{u,s}, S_{u,s+1} \right) \le 2 \exp\left( - \frac{3 |S_{u,s+1}|\, t^2}{6 \|\tilde{N}_{u,s}\|_1 + 2 t} \right),$$
where recall (cf. (2.2)) that $\rho = [\rho_k]_{k \in [d]}$ denotes the $d$-dimensional vector such that $\rho_k = \mathbb{E}[q_k(\alpha)] = \int_{\mathcal{X}_1} q_k(y)\, dP_1(y)$.
Proof. Since Nu,s+1 is a nonnegative vector,
X
kNu,s+1 k1 =
Nu,s+1 (i)
i
=
X
X
I((b, i) ∈ Tu )Nu,s (b)M1 (b, i).
i∈Su,s+1 b∈Su,s
Due to the construction of Tu as a BFS tree, for all i,
X
I((b, i) ∈ Tu ) = I(i ∈ Su,s+1 ) ∈ {0, 1}.
b∈Su,s
If there were more than one possible parent for i in the graph defined by
edge set E1 , Tu is constructed by simply choosing one uniformly at random.
Therefore, conditioned on I(i ∈ Su,s+1 ), the parent of i is equally likely to
have been any of b ∈ Su,s , such that
(A.18)
E [I((b, i) ∈ Tu ) | I(i ∈ Su,s+1 ), Fu,s ] =
1
.
|Su,s |
,
For (b, i) ∈ Tu , by construction, M1 (b, i) = M (b, i) = Z(b, i). Recall that
Z(b, i) ∈ [0, 1] is independent from the tree structure (the sampling of edges
E), and its expectationPis a function of αb and αi .
Let us define Xi = b∈Su,s I((b, i) ∈ Tu )Nu,s (b)Z(b, i), such that
X
kNu,s+1 k1 =
Xi .
i∈Su,s+1
By the model assumptions that Nu,s (b) ∈ [0, 1] and Z(b, i) ∈ [0, 1], it follows
that |Xi | ≤ 1. If i ∈
/ Su,s+1 , then Xi = 0 with probability 1. We next compute
the expectation and variance of Xi and then proceed to apply Bernstein’s
inequality to bound kNu,s+1 k1 .
Recall that Nu,s and Su,s are fixed conditioned on Fu,s . Conditioned on
Fu,s and Su,s+1 , Xi are identically distributed for i ∈ Su,s+1 . We can verify
that for i 6= j, Xi is independent from Xj conditioned on Fus and Su,s+1
because αi is independent from αj , and the variables I((b, i) ∈ Tu ) are only
correlated for the same i but not correlated across different potential children
vertices, i.e. the tree constraint only enforces that each child has exactly
one parent, but allows parents to have arbitrarily many children. Therefore
kNu,s+1 k1 is a sum of iid random variables.
We use the independence of sampled edge set E1 from the latent variables
Θ, along with (A.18) and the law of iterated expectations to show that
X 1
E[Xi | i ∈ Su,s+1 , Fu,s ] =
Nu,s (b)E[Z(b, i) | Fu,s ].
|Su,s |
b∈Su,s
For b ∈ Su,s , αb is fixed when conditioned on Fu,s , such that E[Z(b, i) | αi , Fu,s ] =
f (αb , αi ). Additionally, by the spectral decomposition of f ,
X 1
E[Xi | i ∈ Su,s+1 , Fu,s ] =
Nu,s (b)E[f (αb , αi ) | Fu,s ]
|Su,s |
b∈Su,s
X 1
X
=
Nu,s (b)
λk qk (αb )E[qk (αi )]
|Su,s |
b∈Sus
T
k
= ρ ΛQÑu,s .
Using similar arguments,
P we can also compute a bound on the variance of
Xi , using the fact that b∈Su,s I((b, i) ∈ Tu ) ∈ {0, 1}, such that I((b, i) ∈ Tu )
is only active for at most one value of b ∈ Su,s .
Var[Xi | i ∈ Su,s+1 , Fu,s ] ≤ E[Xi2 | i ∈ Su,s+1 , Fu,s ]
X
I((b, i) ∈ Tu )Nu,s (b)2 Z(b, i)2
= E
i ∈ Su,s+1 , Fu,s
b∈Su,s
X 1
=
Nu,s (b)2 E Z(b, j)2 | Fu,s
|Su,s |
b∈Su,s
Because Z(b, j) ∈ [0, 1] and Nu,s (b) ∈ [0, 1],
X
X
1
1
Var[Xi | i ∈ Su,s+1 , Fu,s ] ≤
Nu,s (b)2 ≤
Nu,s (b) = kÑu,s k1 .
|Su,s |
|Su,s |
b∈Su,s
b∈Su,s
By an application of Bernstein’s inequality,
!
2
3|S
|t
u,s+1
P kÑu,s+1 k1 − ρT ΛQÑu,s > t Fu,s , Su,s+1 ≤ 2 exp −
6kÑu,s k1 + 2t
Let us define
$$(\mathrm{A.19}) \quad A^3_{u,h} := \left\{ \big| \|\tilde{N}_{u,h}\|_1 - \rho^T \Lambda Q \tilde{N}_{u,h-1} \big| < (c_1 p n)^{-\frac{1}{2}+\theta} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{h-1} \right\}.$$
Lemma A.8. Assuming $c_1 p n = \omega(1)$, for any $u \in [n]$ and $s \in \mathbb{Z}_+$, conditioned on $\cap_{h=1}^{s} \big( A^3_{u,h} \cap_{k=1}^{d} A^2_{u,h,k} \big)$, it holds that
$$\|\tilde{N}_{u,s}\|_1 \le B_q^2\, d\, |\lambda_1|^s (1 + o(1)).$$
Proof. Recall the definition of $A^2_{u,h,k}$ from (A.15) and the definition of $A^3_{u,h}$ from (A.19). We first rearrange the expression $\|\tilde{N}_{u,s}\|_1$ into the sum of three expressions,
$$\|\tilde{N}_{u,s}\|_1 = \big( \|\tilde{N}_{u,s}\|_1 - \rho^T \Lambda Q \tilde{N}_{u,s-1} \big) + \big( \rho^T \Lambda Q \tilde{N}_{u,s-1} - \rho^T \Lambda^s Q e_u \big) + \rho^T \Lambda^s Q e_u \le \big| \mathbf{1}^T \tilde{N}_{u,s} - \rho^T \Lambda Q \tilde{N}_{u,s-1} \big| + \sum_{k} \rho_k \lambda_k \big( e_k^T Q \tilde{N}_{u,s-1} - e_k^T \Lambda^{s-1} Q e_u \big) + \sum_{k} \rho_k \lambda_k^s q_k(\alpha_u).$$
The first expression is upper bounded conditioned on $A^3_{u,s}$. The second expression is upper bounded by Lemma A.4 conditioned on $\cap_{h=1}^{s} \cap_{k=1}^{d} A^2_{u,h,k}$, choosing $r_0 = 0$ and $\Delta = s-1$. The third expression is bounded using the assumptions that $|q_k(\cdot)| \le B_q$, $|\rho_k| = |\mathbb{E}[q_k(\alpha)]| \le B_q$, and $|\lambda_k| \le |\lambda_1|$. Therefore it follows that
$$\|\tilde{N}_{u,s}\|_1 \le (c_1 p n)^{-\frac{1}{2}+\theta} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{s-1} + B_q \sum_{k} \frac{2 |\lambda_k|^{s-1}}{(c_1 p n)^{\frac{1}{2}-\theta}} + B_q^2 d |\lambda_1|^s \le (c_1 p n)^{-\frac{1}{2}+\theta} \left( \frac{|\lambda_d|^2}{2 |\lambda_1|} \right)^{s-1} + B_q d |\lambda_1|^s \left( 2 |\lambda_1|^{-1} (c_1 p n)^{-\frac{1}{2}+\theta} + B_q \right) = B_q^2 d |\lambda_1|^s (1 + o(1)).$$
A.3. Showing that Majority of Local Neighborhoods are Good. For each $a \in [n]$, we show concentration of $\sum_{b \in [n] \setminus a} \mathbb{I}((a,b) \in E_1)$. Let us define the event
$$(\mathrm{A.20}) \quad A^4 := \cap_{a \in [n]} \left\{ \sum_{b \in [n] \setminus a} \mathbb{I}\big( (a,b) \in E_1 \big) < c_1 \big( p + (n-1)^{-3/4} \big)(n-1) \right\}.$$

Lemma A.9.
$$\mathbb{P}\big( \neg A^4 \big) \le n \exp\left( - \frac{c_1 (n-1)^{1/4}}{3} \right).$$
Proof. We can show this easily because this is just a sum of Bernoulli random variables. For a fixed $a$,
$$\mathbb{E}\Big[ \sum_{b \in [n] \setminus a} \mathbb{I}\big( (a,b) \in E_1 \big) \Big] = c_1 p (n-1).$$
By Chernoff's bound,
$$\mathbb{P}\Big( \sum_{b \in [n] \setminus a} \mathbb{I}\big( (a,b) \in E_1 \big) > \big( 1 + p^{-1}(n-1)^{-3/4} \big)\, c_1 p (n-1) \Big) \le \exp\left( - \frac{c_1 (n-1)^{1/4}}{3} \right).$$
We use a union bound over $a \in [n]$ to get the final expression.
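For reference, the multiplicative Chernoff bound that we read this step (and the analogous step in Lemma A.23) as using is the large-deviation regime for a sum of independent Bernoulli variables: for $X = \sum_i X_i$ with $\mu = \mathbb{E}[X]$ and any $\delta \ge 1$,
$$\mathbb{P}\big( X > (1+\delta)\mu \big) \le \exp\left( - \frac{\delta \mu}{3} \right).$$
Here it is applied with $\delta = p^{-1}(n-1)^{-3/4}$, which is at least $1$ in the sparse regime $p = o(n^{-3/4})$ assumed elsewhere in the paper, so that $\delta \mu = c_1 (n-1)^{1/4}$, matching the stated exponent.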
Let us define
$$(\mathrm{A.21}) \quad A^5_{u,s-1} := \cap_{h=1}^{s} \big( A^1_{u,h} \cap A^3_{u,h} \cap_{k=1}^{d} A^2_{u,h,k} \big).$$
We bound the probability of the events $A^1_{u,s}$, $A^2_{u,s,k}$, and $A^3_{u,s}$ conditioned on $\mathcal{F}_{u,s-1}$ and $A^5_{u,s-1}$.
Lemma A.10. Assuming that $c_1 p n = \omega(1)$, for any $u \in [n]$ and $s$ which satisfies
$$\left(\frac{9 c_1 p n}{8}\right)^s \le (c_1 p)^{-7/8},$$
for sufficiently large $n$,
$$\mathbb{P}\big( \neg A^1_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \le 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \right) 2^{-s}.$$
Proof. Recall the definitions of events $A^1_{u,h}$ and $A^5_{u,s-1}$ from (A.3) and (A.21). The event $A^5_{u,s-1}$ contains the event $\cap_{h=1}^{s-1} A^1_{u,h}$, such that by Lemma A.2 and the assumption on $s$, conditioned on $A^5_{u,s-1}$ we can verify that $|S_{u,s}| < (c_1 p)^{-7/8}$. Choosing $t = \big(c_1 p n B_q^4\big)^{-\frac{1}{2}+\theta} = o(1)$, we can verify that $t < 1 - c_1 p |S_{u,s}| = 1 - o(1)$. By applying Lemma A.1 for this choice of $t$, it follows that
$$(\mathrm{A.22}) \quad \mathbb{P}\big( \neg A^1_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \le 2 \exp\left( - \frac{|B^c_{u,s-1}|\, |S_{u,s-1}|\, c_1 p}{3 (c_1 p n)^{1-2\theta} B_q^2} \right).$$
By (A.8) from the proof of Lemma A.2, $|B^c_{u,s-1}| \ge n(1 - o(1))$, such that for sufficiently large $n$,
$$|B^c_{u,s-1}|\, |S_{u,s-1}| \ge \frac{3}{4} n \left( \frac{7 c_1 p n}{8} \right)^{s-1}.$$
By plugging into (A.22) and using the inequality that for $s \ge 1$, $x^{s-1} \ge 1 + (x-1)(s-1)$ for $x \ge 1$, it follows that
$$\mathbb{P}\big( \neg A^1_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \le 2 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \left( \frac{7 c_1 p n}{8} \right)^{s-1} \right) \le 2 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \left( 1 + \Big( \frac{7 c_1 p n}{8} - 1 \Big)(s-1) \right) \right) = 2 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \right) \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \Big( \frac{7 c_1 p n}{8} - 1 \Big)(s-1) \right).$$
In the above, we used the fact that $7 c_1 p n / 8 > 1$ for $n$ large enough since $c_1 p n = \omega(1)$. For sufficiently large $n$,
$$\exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \Big( \frac{7 c_1 p n}{8} - 1 \Big) \right) \le \frac{1}{2},$$
such that
$$(\mathrm{A.23}) \quad \mathbb{P}\big( \neg A^1_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \le 2 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \right) 2^{-(s-1)}$$
$$(\mathrm{A.24}) \quad \le 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \right) 2^{-s}.$$
Lemma A.11. Assuming that $c_1 p n = \omega(1)$ and $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{4}})$, for any $u \in [n]$ and $s \in \mathbb{Z}_+$ which satisfies
$$\left(\frac{9 c_1 p n}{8}\right)^s \le (c_1 p)^{-7/8},$$
conditioned on $A^5_{u,s-1}$, for sufficiently large $n$,
$$\mathbb{P}\big( \neg A^2_{u,s,k} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \le 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{3 B_q^2 d} \right) 2^{-s}, \quad \text{and} \quad \mathbb{P}\big( \neg A^3_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \le 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{3 B_q^2 d} \right) 2^{-s}.$$
Proof. Recall the definitions of events A1u,h , A3u,h , A2u,h,k , and A5u,s−1
2 s−1
1
d|
from (A.3), (A.19), (A.15), and (A.21). By plugging in t = (c1 pn)− 2 +θ |λ
2|λ1 |
to Lemmas A.3 and A.7, it follows that
!
2
3|S
|t
u,s
P(¬A2u,s,k |Fu,s−1 , A5u,s−1 ) ≤ 2 exp −
6kÑu,s−1 k1 + 2Bq t
and
P(¬A3u,s |Fu,s−1 , A5u,s−1 )
3|Su,s |t2
≤ 2 exp −
6kÑu,s−1 k1 + 2t
!
We lower bound the expression in the exponent of the first bound using
Lemmas A.2 and A.8. For sufficiently large n,
3
|t2
7c1 pn
8
s
(c1 pn)−1+2θ
|λd |2
2|λ1 |
2(s−1)
3|Su,s
≥
2 s−1
1
6kÑu,s−1 k1 + 2Bq t
d|
6Bq2 d|λ1 |s−1 (1 + o(1)) + 2Bq (c1 pn)− 2 +θ |λ
2|λ1 |
7c1 pn
s−1
−1+2θ
3
(c1 pn)
8
7c1 pn|λd |4
≥
.
s−1
1
32|λ1 |3
|λd |2
6Bq2 d(1 + o(1)) + 2Bq (c1 pn)− 2 +θ 2|λ
2
1|
1
Assuming that |λd | = ω((c1 pn)− 4 ), then
Using the fact that
it follows that
x(s−1)
3|Su,s |t2
≥
6kÑu,s−1 k1 + 4Bq t
7c1 pn|λd |4
32|λ1 |3
> 1 for n large enough.
≥ 1 + (x − 1)(s − 1) for x, s ≥ 1, and |λd | ≤ |λ1 |,
7(c1 pn)2θ
16Bq2 d(1 + o(1))
7c1 pn|λd |4
1+
− 1 (s − 1) .
32|λ1 |3
Therefore, for sufficiently large n,
(c1 pn)2θ
(c1 pn)2θ 7c1 pn|λd |4
P(¬A2u,s,k |Fu,s−1 , A5u,s−1 ) ≤ 2 exp −
exp
−
−
1
(s
−
1)
.
3Bq2 d
3Bq2 d
32|λ1 |3
Further, because
7c1 pn|λd |4
32|λ1 |3
> 1, for n large enough
1
(c1 pn)2θ 7c1 pn|λd |4
−1
≤ ,
exp −
2
3
3Bq d
32|λ1 |
2
such that
P(¬A2u,s,k |Fu,s−1 , A5u,s−1 )
(c1 pn)2θ
2−(s−1)
≤ 2 exp −
3Bq2 d
(c1 pn)2θ
= 4 exp −
2−s .
3Bq2 d
Using similar arguments, for sufficiently large n, it follows that
(c1 pn)2θ
3
5
P(¬Au,s |Fu,s−1 , Au,s−1 ) ≤ 4 exp −
2−s .
3Bq2 d
We consider a vertex $u$ to have "good" neighborhood behavior if event $A^5_{u,r}$ holds. Next we show a lower bound on the probability that the neighborhood of a vertex behaves well.

Lemma A.12. Assuming that $c_1 p n = \omega(1)$ and $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{4}})$, for any $r \in \mathbb{Z}_+$ which satisfies
$$\left(\frac{9 c_1 p n}{8}\right)^r \le (c_1 p)^{-7/8},$$
for sufficiently large $n$,
$$\mathbb{P}\big( \neg A^5_{u,r} \big) \le 4(d+2) \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2 d} \right).$$
Proof. Recall the definition of event $A^5_{u,s-1}$ from (A.21).
$$\mathbb{P}\big( \neg A^5_{u,r} \big) = \mathbb{P}\Big( \cup_{s=1}^{r} \neg \big( A^1_{u,s} \cap A^3_{u,s} \cap_{k=1}^{d} A^2_{u,s,k} \big) \Big).$$
For a sequence of events $\{Z_h\}_{h \in [s]}$,
$$\mathbb{P}\big( \cup_{h \in [s]} \neg Z_h \big) = \mathbb{P}\big( \{ \neg Z_s \cap_{h \in [s-1]} Z_h \} \cup \{ \cup_{h \in [s-1]} \neg Z_h \} \big) = \mathbb{P}\big( \neg Z_s \mid \cap_{h \in [s-1]} Z_h \big)\, \mathbb{P}\big( \cap_{h \in [s-1]} Z_h \big) + \mathbb{P}\big( \cup_{h \in [s-1]} \neg Z_h \big) \le \mathbb{P}\big( \neg Z_s \mid \cap_{h \in [s-1]} Z_h \big) + \mathbb{P}\big( \cup_{h \in [s-1]} \neg Z_h \big).$$
We can repeatedly apply this bound to show that
$$\mathbb{P}\big( \cup_{h \in [s]} \neg Z_h \big) \le \sum_{t \in [s]} \mathbb{P}\big( \neg Z_t \mid \cap_{h \in [t-1]} Z_h \big).$$
By using this inequality for our sequence of events as described in $A^5_{u,s-1}$ and applying a union bound, it follows that
$$\mathbb{P}\big( \neg A^5_{u,r} \big) \le \sum_{s=1}^{r} \mathbb{P}\Big( \neg \big( A^1_{u,s} \cap A^3_{u,s} \cap_{k=1}^{d} A^2_{u,s,k} \big) \ \Big|\ A^5_{u,s-1} \Big) \le \sum_{s=1}^{r} \Big[ \mathbb{P}\big( \neg A^1_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) + \sum_{k=1}^{d} \mathbb{P}\big( \neg A^2_{u,s,k} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) + \mathbb{P}\big( \neg A^3_{u,s} \mid \mathcal{F}_{u,s-1}, A^5_{u,s-1} \big) \Big].$$
By Lemmas A.10 and A.11, for large enough $n$,
$$\mathbb{P}\big( \neg A^5_{u,r} \big) \le \sum_{s=1}^{r} \left[ 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \right) 2^{-s} + \sum_{k=1}^{d} 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{3 B_q^2 d} \right) 2^{-s} + 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{3 B_q^2 d} \right) 2^{-s} \right] \le 4 \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2} \right) + 4(d+1) \exp\left( - \frac{(c_1 p n)^{2\theta}}{3 B_q^2 d} \right) \le 4(d+2) \exp\left( - \frac{(c_1 p n)^{2\theta}}{4 B_q^2 d} \right).$$
Next we show that with high probability a large fraction of the vertices have good local neighborhood behavior. Let us define
$$(\mathrm{A.25}) \quad A^6_{u,v,r} := \left\{ \sum_{a \in [n] \setminus \{u,v\}} \mathbb{I}\big( A^5_{a,r} \big) \ge (n-2) \left( 1 - \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right) \right) \right\}.$$
Lemma A.13. Assuming that $c_1 p n = \omega(1)$ and $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{4}})$, for any $r \in \mathbb{Z}_+$ which satisfies
$$\left(\frac{9 c_1 p n}{8}\right)^r \le (c_1 p)^{-7/8},$$
for sufficiently large $n$,
$$\mathbb{P}\big( \neg A^6_{u,v,r} \big) \le 4(d+2) \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right).$$
Proof. Recall the definitions of events $A^5_{a,r}$ and $A^6_{u,v,r}$ from (A.21) and (A.25). We use Lemma A.12 to upper bound the probability of event $\neg A^5_{a,r}$ and then plug it into Markov's inequality to show that
$$\mathbb{P}\big( \neg A^6_{u,v,r} \big) = \mathbb{P}\left( \sum_{a \in [n] \setminus \{u,v\}} \mathbb{I}\big( \neg A^5_{a,r} \big) \ge (n-2) \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right) \right) \le \frac{\sum_{a \in [n] \setminus \{u,v\}} \mathbb{E}\big[ \mathbb{I}( \neg A^5_{a,r} ) \big]}{(n-2) \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right)} \le 4(d+2) \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right).$$
A.4. Concentration of Edges Between Local Neighborhoods. Suppose that we condition on all the latent variables $\{\alpha_i\}_{i \in [n]}$ and the edge set $E_1$ with its associated weights in $M_1$, such that the vectors $N_{u,r_1}$ and $N_{v,r_2}$ are determined. The remaining randomness in computing $\tilde{N}_{u,r_1}^T M_2 \tilde{N}_{v,r_2}$ comes from the sampling of the edges $E_2$ and their associated weights $M_2$, which are sampled independently with probability close to $c_2 p$. Furthermore, we use the boundedness assumption that $Z(i,j) \in [0,1]$ and $f(\alpha_i, \alpha_j) \in [0,1]$ to apply concentration inequalities on the edge weights.
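As a purely illustrative reading of this quantity, the sketch below computes the rescaled bilinear form $\frac{1-c_1 p}{c_2 p}\, \tilde{N}_{u,r_1}^T M_2 \tilde{N}_{v,r_2}$ whose concentration around $\tilde{N}_{u,r_1}^T F \tilde{N}_{v,r_2}$ is analyzed in Lemmas A.14 and A.15. The function name and the toy values are our own assumptions.

    import numpy as np

    def rescaled_bilinear_form(M2, N_u, N_v, c1, c2, p):
        """((1 - c1*p) / (c2*p)) * N_u^T M2 N_v for a symmetric weight matrix M2
        (zero on entries not sampled in E_2) and normalized neighborhood vectors."""
        return (1.0 - c1 * p) / (c2 * p) * float(N_u @ M2 @ N_v)

    # Toy usage with assumed values.
    rng = np.random.default_rng(1)
    n = 8
    M2 = np.triu(rng.random((n, n)) * (rng.random((n, n)) < 0.2), k=1)
    M2 = M2 + M2.T                      # symmetric, zero diagonal, sparse-ish weights
    N_u = np.zeros(n); N_u[:3] = 1 / 3
    N_v = np.zeros(n); N_v[5:] = 1 / 3
    print(rescaled_bilinear_form(M2, N_u, N_v, c1=1.0, c2=1.0, p=0.1))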
Lemma A.14. For any $u, v \in [n]$ and $r_1, r_2 \in \mathbb{Z}_+$,
$$\mathbb{P}\Big( \big| N_{u,r_1}^T M_2 N_{v,r_2} - \mathbb{E}_{E_2}\big[ N_{u,r_1}^T M_2 N_{v,r_2} \mid \mathcal{F}_{E_1} \big] \big| > t \ \Big|\ \mathcal{F}_{E_1} \Big) \le 2 \exp\left( - \frac{3 t^2 (1 - c_1 p)}{12 c_2 p\, N_{u,r_1}^T F N_{v,r_2} + 2(1 - c_1 p) t} \right).$$
M2 (i, j) = M (i, j) = Z(i, j), but for (i, j) ∈
/ E2 , M2 (i, j) = 0. Therefore we
T M N
can rewrite the expression Nu,r
as
2 v,r2
1
T
Nu,r
M2 Nv,r2 =
1
X
Nu,r1 (i)Nv,r2 (j)M2 (i, j)
i,j∈[n]
=
X
Nu,r1 (i)Nv,r2 (j)I((i, j) ∈ E2 )Z(i, j)
i,j∈[n]
=
X
(Nu,r1 (i)Nv,r2 (j) + Nu,r1 (j)Nv,r2 (i))I((i, j) ∈ E2 )Z(i, j).
i<j,(i,j)∈E
/ 1
The last step comes from the fact that our edges and associated weights are
undirected, such that Z(i, j) = Z(j, i), and that we do not sample self-edges,
such that (i, i) ∈
/ E2 , leading to the strict inequality that i < j.
Conditioned on FE1 , Nu,r1 and Nv,r2 are fixed. We have presented the
expression as a sum of independent random variables
conditioned on FE1 ,
n
since Z(i,
j)
are
independent
for
the
distinct
vertex
pairs. The sum is
2
n
over 2 − |E1 | terms, which are each bounded by 2, since Z(i, j) ∈ [0, 1], and
the entries in vectors Nu,r1 and Nv,r2 are also in [0, 1]. By independence, it
follows that
T
Var Nu,r
M2 Nv,r2 | FE1
1
X
(Nu,r1 (i)Nv,r2 (j) + Nu,r1 (j)Nv,r2 (i))2 Var [I((i, j) ∈ E2 )Z(i, j) | FE1 ]
=
i<j,(i,j)∈E
/ 1
X
≤
(Nu,r1 (i)Nv,r2 (j) + Nu,r1 (j)Nv,r2 (i))2 E I((i, j) ∈ E2 )Z(i, j)2
F E1 .
i<j,(i,j)∈E
/ 1
where the last line follows from the fact that Var[X] ≤ E[X 2 ]. We assumed
that Z(i, j) is independent from I((i, j) ∈ E2 ). Conditioned on (i, j) ∈
/ E1 ,
which occurs with probability 1 − c1 p, I((i, j) ∈ E2 ) is distributed according
to a Bernoulli(c2 p/(1−c1 p)) random variable. By assumption, Z(i, j) ∈ [0, 1]
such that E[Z(i, j)2 |αi , αj ] ≤ E[Z(i, j)|αi , αj ] = f (αi , αj ). Therefore,
T
Var Nu,r
M2 Nv,r2 | FE1
1
X
c2 p
≤
(Nu,r1 (i)Nv,r2 (j) + Nu,r1 (j)Nv,r2 (i))2 1−c
f (αi , αj ).
1p
i<j,(i,j)∈E
/ 1
By the fact that (a + b)2 ≤ 2a2 + 2b2 ,
T
Var Nu,r
M2 Nv,r2 | FE1
1
X
≤
(2Nu,r1 (i)2 Nv,r2 (j)2 + 2Nu,r1 (j)2 Nv,r2 (i)2 )
c2 p
1−c1 p f (αi , αj )
i<j,(i,j)∈E
/ 1
=
2c2 p
1−c1 p
X
Nu,r1 (i)2 Nv,r2 (j)2 f (αi , αj ).
i6=j,(i,j)∈E
/ 1
Since f (αi , αj ) ∈ [0, 1], and the entries of Nu,r1 and Nv,r2 are in [0, 1], it
follows that
X
T
2c2 p
Nu,r1 (i)Nv,r2 (j)f (αi , αj )
Var Nu,r
M
N
|
F
≤
2
v,r
E
2
1
1−c1 p
1
i6=j,(i,j)∈E
/ 1
≤
2c2 p
1−c1 p
X
Nu,r1 (i)Nv,r2 (j)f (αi , αj )
(i,j)∈E
/ 1
=
2c2 p
N T F Nv,r2 .
1 − c1 p u,r1
By Bernstein’s inequality,
T
T
M2 Nv,r2 − EE2 Nu,r
M2 Nv,r2 FE1 > t | FE1
P Nu,r
1
1
3t2 (1 − c1 p)
≤ 2 exp −
.
T FN
12c2 pNu,r
v,r2 + 2(1 − c1 p)t
1
Let us define the following event
$$(\mathrm{A.26}) \quad A^7_{u,v,r_1,r_2} := \left\{ \Big| N_{u,r_1}^T M_2 N_{v,r_2} - \mathbb{E}_{E_2}\big[ N_{u,r_1}^T M_2 N_{v,r_2} \mid \mathcal{F}_{E_1} \big] \Big| \le \Big( c_2 p\, |\lambda_1|^{r_1+r_2}\, |S_{u,r_1}|\, |S_{v,r_2}|\, p^{-1/4} \Big)^{1/2} \right\}.$$
Lemma A.15. Assuming that $c_1 p n = \omega(1)$, for any $u, v \in [n]$ and $r_1 < r_2$ which satisfy
$$(\mathrm{A.27}) \quad \left(\frac{7 |\lambda_d|^2 c_1 p n}{8 |\lambda_1|}\right)^{(r_1+r_2)/2} \ge p^{-6/8}, \quad \text{and} \quad \left(\frac{9 c_1 p n}{8}\right)^{r_1} \le \left(\frac{9 c_1 p n}{8}\right)^{r_2} \le (c_1 p)^{-7/8},$$
conditioned on $A^4 \cap A^5_{u,r_1} \cap A^5_{v,r_2} \cap A^7_{u,v,r_1,r_2}$, for sufficiently large $n$,
$$\Big| \frac{1-c_1 p}{c_2 p}\, \tilde{N}_{u,r_1}^T M_2 \tilde{N}_{v,r_2} - \tilde{N}_{u,r_1}^T F \tilde{N}_{v,r_2} \Big| \le (1 - c_1 p)\, c_2^{-1/2}\, |\lambda_d|^{r_1+r_2}\, p^{1/8} + (c_1 p n + c_1 n^{1/4})\, B_q^2 d\, |\lambda_1|^{r_1} \left( \frac{8}{7 c_1 p n} \right)^{r_2} (1 + o(1)).$$
Proof. Recall the definitions of events A4 , A5u,r1 , ∩A5v,r2 , and A7u,v,r1 ,r2
as given in (A.20), (A.21), and (A.26). Let us bound
h
i
1−c1 p
T
T
FE1 − Nu,r
F Nv,r2 .
c2 p EE2 Ñu,r1 M2 Nv,r2
1
Recall that M2 is symmetric, and for some edge (i, j) ∈ E2 , M2 (i, j) =
M (i, j) = Z(i, j), but for (i, j) ∈
/ E2 , M2 (i, j) = 0. Since we do not sample
T M N
self-edges, (i, i) ∈
/ E2 . Therefore we can rewrite the expression Nu,r
2 v,r2
1
as
X
T
M
N
Nu,r
=
Nu,r1 (i)Nv,r2 (j)M2 (i, j)
2
v,r
2
1
i,j∈[n]
=
X
Nu,r1 (i)Nv,r2 (j)I((i, j) ∈ E2 )Z(i, j).
i6=j,(i,j)∈E
/ 1
Conditioned on (i, j) ∈
/ E1 , which occurs with probability 1 − c1 p, I((i, j) ∈
E2 ) is distributed according to a Bernoulli(c2 p/(1 − c1 p)) random variable.
Conditioned on FE1 , Nu,r1 and Nv,r2 are fixed, and since we assumed that
Z(i, j) is independent from I((i, j) ∈ E2 ) and E[Z(i, j)|αi , αj ] = f (αi , αj ), it
follows that
X
c2 p
T
Nu,r1 (i)Nv,r2 (j)f (αi , αj )
EE2 [Nu,r1 M2 Nv,r2 | FE1 ] =
(1 − c1 p)
i6=j,(i,j)∈E
/ 1
T
T M N
Therefore the difference between EE2 [Nu,r
2 v,r2 | FE1 ] and Nu,r1 F Nv,r2
1
consists of the terms associated to edges in E1 and self edges,
T
T
1p
Nu,r
F Nv,r2 − 1−c
FE1
c2 p EE2 Nu,r1 M2 Nv,r2
1
X
X
Nu,r1 (i)Nv,r2 (i)f (αi , αi ).
Nu,r1 (i)Nv,r2 (j)f (αi , αj ) +
=
i∈[n]
(i,j)∈E1
By the fact that f (αi , αj ) ∈ [0, 1], and the entries in vectors Nu,r1 and Nv,r2
are in [0, 1], it follows that
T
1−c1 p
T
Nu,r
F
N
−
E
N
M
N
F
v,r
2
v,r
E
E
u,r
2
2
2
1
c2 p
1
1
X
X
X
≤
Nu,r1 (i)
I((i, j) ∈ E1 ) +
Nu,r1 (i).
i∈[n]
j∈[n]
Conditioned on A4 , for all i,
(n − 1)−3/4 )(n − 1), such that
i∈[n]
P
T
Nu,r
F Nv,r2 −
1
j∈[n] I((i, j)
1−c1 p
c2 p EE2
∈ E1 ) is bounded above by c1 (p +
T
Nu,r
M2 Nv,r2
1
FE1
≤ (c1 (p + (n − 1)−3/4 )(n − 1) + 1)kNu,r1 k1 .
Conditioned on A5u,r1 , by Lemma A.8 we can upper bound kNu,r1 k1 =
kÑu,r1 k1 |Su,r1 | such that
T
1−c1 p
T
E
N
M
N
F
Nu,r
F
N
−
2
v,r
v,r
E
E
u,r
2
2
2
1
c2 p
1
1
≤ (c1 (p + (n − 1)−3/4 )(n − 1) + 1)|Su,r1 |Bq2 d|λ1 |r1 (1 + o(1))
= (c1 pn + c1 n1/4 )|Su,r1 |Bq2 d|λ1 |r1 (1 + o(1)).
By combining the above bound with the bound given when conditioned
on A7u,v,r1 ,r2 , and then dividing both sides by |Su,r1 ||Sv,r2 |, it follows that
1−c1 p T
c2 p Ñu,r1 M2 Ñv,r2
= |Su,r1 |−1 |Sv,r2 |−1
≤ (1 − c1 p)
T
− Ñu,r
F Ñv,r2
1
1−c1 p T
c2 p Nu,r1 M2 Nv,r2
|λ1 |r1 +r2 p−1/4
c2 p|Su,r1 ||Sv,r2 |
T
− Nu,r
F Nv,r2
1
!1/2
+ (c1 pn + c1 n
1/4
)
Bq2 d|λ1 |r1
|Sv,r2 |
!
(1 + o(1))
Conditioned on A5u,r1 ∩ A5v,r2 , we can lower bound |Su,r1 | and |Sv,r2 | with
Lemma A.2 to show that
1−c1 p T
c2 p Ñu,r1 M2 Ñv,r2
≤ (1 − c1 p)
|λ1 |r1
T
− Ñu,r
F Ñv,r2
1
+r −1/4 1/2
2p
8
7c1 pn
c2 p
(r1 +r2 )/2
r2
+ (c1 pn + c1 n1/4 )Bq2 d|λ1 |r1 7c18pn
(1 + o(1))
(r1 +r2 )/2
−1/2
1|
= (1 − c1 p)c2 p−5/8 |λd |r1 +r2 7|λ8|λ
2
d | c1 pn
r2
(1 + o(1)).
+ (c1 pn + c1 n1/4 )Bq2 d|λ1 |r1 7c18pn
By assumption on r1 and r2 in (A.27),
1−c1 p T
T
c2 p Ñu,r1 M2 Ñv,r2 − Ñu,r1 F Ñv,r2
−1/2
≤ (1 − c1 p)c2 p−5/8 |λd |r1 +r2 p6/8
+ (c1 pn + c1 n
−1/2
= (1 − c1 p)c2
1/4
)Bq2 d|λ1 |r1
8
7c1 pn
r2
8
7c1 pn
r2
(1 + o(1))
|λd |r1 +r2 p1/8
+ (c1 pn + c1 n
1/4
)Bq2 d|λ1 |r1
(1 + o(1)).
We can bound the probability of event $A^7_{u,v,r_1,r_2}$.

Lemma A.16. Assume that $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$. For $r_1 \le r_2$ which satisfy
$$\left(\frac{9 c_1 p n}{8}\right)^{r_2} \le (c_1 p)^{-7/8}, \quad \text{and} \quad \left(\frac{7 \lambda_d^2\, c_1 p n}{8 |\lambda_1|}\right)^{(r_1+r_2)/2} \ge p^{-6/8},$$
conditioned on $A^5_{u,r_1} \cap A^5_{v,r_2}$, for sufficiently large $n$,
$$\mathbb{P}\big( \neg A^7_{u,v,r_1,r_2} \mid \mathcal{F}_{E_1}, A^5_{u,r_1}, A^5_{v,r_2} \big) \le 2 \exp\left( - \frac{(1 - c_1 p)\, p^{-1/4}}{5 |\lambda_1| B_q^2 d} \right).$$
Proof. Recall the definition of event A7u,v,r1 ,r2 from (A.26). For a choice
1/2
, by Lemma A.14
of t = c2 pλr11 +r2 |Sur1 ||Svr2 |p−1/4
P ¬A7u,v,r1 ,r2 | FE1 , A5u,r1 , A5v,r2
r1 +r2
−1/4
|Su,r1 ||Sv,r2 |p
3(1 − c1 p)c2 pλ1
≤ 2 exp −
r1 +r2
−1/4 1/2
T FN
|p
||S
|S
+
2(1
−
c
p)
c
pλ
12c2 pNu,r
v,r2
u,r1
1
2
v,r2
1
1
!
3(1 − c1 p)c2 λr11 +r2 p−1/4
= 2 exp −
.
r1 +r2 −5/4 1/2
T F Ñ
p
(|Sur1 ||Svr2 |)−1/2
12c2 Ñur
vr2 + 2(1 − c1 p) c2 λ1
1
T F Ñ
Conditioned on A5u,r1 and A5v,r2 , we can bound Ñur
vr2 by Lemma A.6,
1
choosing r0 = 0,
!
r1 +r2
4B
d|λ
|
q
1
T
Ñur
F Ñvr2 ≤ eTu QT Λr1 +r2 +1 Qev +
(1 + o(1))
1
1
(c1 pn) 2 −θ
!
4B
d
q
≤ |λ1 |r1 +r2 |λ1 |Bq2 d +
(1 + o(1))
1
(c1 pn) 2 −θ
= |λ1 |r1 +r2 +1 Bq2 d(1 + o(1)).
Conditioned on A5u,r1 and A5v,r2 , we can bound |Su,r1 ||Sv,r2 | by Lemma A.2,
such that for sufficiently large n,
1/2
2(1 − c1 p) c2 |λ1 |r1 +r2 p−5/4
(|Su,r1 ||Sv,r2 |)−1/2
−(r1 +r2 )/2
1/2
(r1 +r2 )/2 −5/8 7c1 pn
≤ 2(1 − c1 p)(c2 ) |λ1 |
p
8
(r1 +r2 )/2
8|λ1 |
1/2
r1 +r2 −5/8
= 2(1 − c1 p)(c2 ) |λd |
.
p
7|λd |2 c1 pn
Then we use the condition assumed on r1 and r2 to show that
1/2
2(1 − c1 p) c2 |λ1 |r1 +r2 p−5/4
(|Su,r1 ||Sv,r2 |)−1/2
≤ 2(1 − c1 p)(c2 )1/2 |λd |r1 +r2 p−5/8 p6/8
= 2(1 − c1 p)(c2 )1/2 |λd |r1 +r2 p1/8
= o(|λ1 |r1 +r2 +1 Bq2 d),
where the last step follows from p1/8 = o(1), |λ1 | and Bq are constants, and
d = Ω(1).
Therefore,
P ¬A7u,v,r1 ,r2 | FE1 , A5u,r2 , A5v,r2
3(1 − c1 p)c2 λr11 +r2 p−1/4
≤ 2 exp −
12c2 |λ1 |r1 +r2 +1 Bq2 d(1 + o(1))
!
(1 − c1 p)p−1/4
= 2 exp −
.
4|λ1 |Bq2 d(1 + o(1))
!
For sufficiently large n,
P
¬A7u,v,r1 ,r2
|
FE1 , A5u,r2 , A5v,r2
(1 − c1 p)p−1/4
≤ 2 exp −
5|λ1 |Bq2 d
!
.
Recall that our evaluation of the distance estimates $\mathrm{dist}_1$ and $\mathrm{dist}_2$ involves expressions of the form
$$\frac{1-c_1 p}{c_2 p} \big( \tilde{N}_{u,r} - \tilde{N}_{v,r} \big)^T M_2 \big( \tilde{N}_{u,r+i} - \tilde{N}_{v,r+i} \big).$$
By expanding this expression into its four terms, we can show by Lemma A.15 that this expression approximates
$$\big( \tilde{N}_{u,r} - \tilde{N}_{v,r} \big)^T F \big( \tilde{N}_{u,r+i} - \tilde{N}_{v,r+i} \big)$$
when conditioned on the events
$$A^4 \cap A^5_{u,r+i} \cap A^5_{v,r+i} \cap A^7_{u,v,r,r+i} \cap A^7_{v,u,r,r+i} \cap A^7_{u,u,r,r+i} \cap A^7_{v,v,r,r+i}.$$
This will allow us to prove bounds on $\mathrm{dist}_1(u,v)$ and $\mathrm{dist}_2(u,v)$.
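A minimal sketch (our own illustration, not the authors' code) of evaluating $\mathrm{dist}_1(u,v)$ from the expression recalled in the proof of Lemma A.17, namely $\frac{1-c_1p}{c_2 p}(\tilde{N}_{u,r}-\tilde{N}_{v,r})^T M_2 (\tilde{N}_{u,r+1}-\tilde{N}_{v,r+1})$; the function name is hypothetical.

    def dist1(N_u_r, N_v_r, N_u_r1, N_v_r1, M2, c1, c2, p):
        """Distance estimate dist_1(u, v). The N~ arguments are dense length-n
        numpy arrays; M2 may be dense or sparse (anything supporting @)."""
        left = N_u_r - N_v_r
        right = N_u_r1 - N_v_r1
        return (1.0 - c1 * p) / (c2 * p) * float(left @ (M2 @ right))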
Let us define the following event
$$(\mathrm{A.28}) \quad A^8_{u,v,r_1,r_2} := A^7_{u,v,r_1,r_2} \cap A^7_{v,u,r_1,r_2} \cap A^7_{u,u,r_1,r_2} \cap A^7_{v,v,r_1,r_2}.$$
Lemma A.17. Assume that $\theta \in (0, \frac{1}{4})$, $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$, $c_1 p n = \omega(1)$, and $p = o(n^{-1+1/(5+8\theta)})$. For some $u, v \in [n]$ and $r$ which satisfies
$$\left(\frac{7 |\lambda_d|^2 c_1 p n}{8 |\lambda_1|}\right)^{r+\frac{1}{2}} \ge p^{-6/8}, \quad \text{and} \quad \left(\frac{9 c_1 p n}{8}\right)^{r+1} \le (c_1 p)^{-7/8},$$
conditioned on $A^4 \cap A^5_{u,r+1} \cap A^5_{v,r+1} \cap A^8_{u,v,r,r+1}$, for sufficiently large $n$, $\|\Lambda Q (e_u - e_v)\|_2^2$ is lower bounded by
$$|\lambda_1|^{-2r}\, \mathrm{dist}_1(u,v) - \frac{32 B_q d |\lambda_1|}{(c_1 p n)^{\frac{1}{2}-\theta}},$$
and $\|\Lambda Q (e_u - e_v)\|_2^2$ is upper bounded by
$$|\lambda_d|^{-2r}\, \mathrm{dist}_1(u,v) + \frac{32 B_q d |\lambda_1|}{(c_1 p n)^{\frac{1}{2}-\theta}} \left( \frac{|\lambda_1|}{|\lambda_d|} \right)^{2r}.$$
Proof. Recall that
dist1 (u, v) =
1−c1 p
c2 p
Ñu,r − Ñv,r
T
M2 Ñu,r+1 − Ñv,r+1 .
By Lemma A.15, we upper bound the difference between dist1 (u, v) and
(A.29)
Ñu,r − Ñv,r
T
F Ñu,r+1 − Ñv,r+1 .
By Lemma A.6, plugging in r0 = 0, r1 = r, and r2 = r + 1, we can upper
bound the difference between (A.29) and
(eu − ev )T QT Λ2r+2 Q(eu − ev ).
Therefore,
dist1 (u, v) − (eu − ev )T QT Λ2r+2 Q(eu − ev )
≤ 4(1 −
+4
−1/2
c1 p)c2 |λd |2r+1 p1/8
4Bq d|λ1 |2r+1
+4
≤ 4(1 −
+4
= 4(1 −
8
7c1 pn
2r+1
|λd |2r+1 p1/8 + 4(c1 pn + c1 n1/4 )Bq2 d |λd | 1
|λ1 | 2
!
4Bq d|λ1 |2r+1
(1 + o(1))
8|λ1 |
7|λd |2 c1 pn
r+ 1
2
−1/2
c1 p)c2 |λd |2r+1 p1/8
+ 4((c1 pn)
1/2
+
1/2
c1 p−1/2 n−1/4 )Bq2 d
1
8 2 |λd |2r+1
1
(7|λ1 |) 2
−1/2
c1 p)c2 |λd |2r+1 p1/8
+
27
7|λ1 |
1
2
(1 + o(1))
6
p 8 (1 + o(1))
1/2
1/2
(c1 p5/4 n1/2 + c1 p1/4 n−1/4 )Bq2 d|λd |2r+1 (1 + o(1))
!
1
(c1 pn) 2 −θ
(1 + o(1)) .
Since θ ∈ (0, 41 ) and p = o(n−1+1/(5+8θ) ), it follows that p = o(n−4/5 ), which
can be used to show that the first term dominates the second term in (A.30)
because
1/2
2
(1 + o(1))
1
(A.30)
+4
1
!
(c1 pn) 2 −θ
4Bq d|λ1 |2r+1
8
7c1 pn
(1 + o(1))
1
(c1 pn) 2 −θ
4Bq d|λ1 |2r+1
r+1
(1 + o(1))
1
−1/2
)Bq2 d|λ1 |r
!
(c1 pn) 2 −θ
= 4(1 − c1 p)c2
+ 4(c1 pn + c1 n
1/4
1/2
1/2
1/2
c1 p5/4 n1/2 + c1 p1/4 n−1/4 = p1/8 (c1 p9/8 n1/2 + c1 p1/8 n−1/4 )
1/2
= p1/8 c1 o(n−9/10 n1/2 + n−1/10 n−1/4 )
1/2
= p1/8 c1 o(n−2/5 + n−7/20 )
= o(p1/8 )
By the assumption that p = o(n−1+1/(5+8θ) ) and θ ∈ (0, 14 ), it follows that
1
p1/8 = o((c1 pn)− 2 +θ ), such that the last term of (A.30) asymptotically
dominates both the first and second terms. Therefore, for sufficiently large
n,
(A.31)
dist1 (u, v) − (eu − ev )T QT Λ2r+2 Q(eu − ev ) ≤ 32Bq
d|λ1 |2r+1
1
(c1 pn) 2 −θ
!
.
By definition, since |λ1 | ≥ · · · ≥ |λd |,
(A.32)
kΛQ(eu − ev )k22 ≥ |λ1 |−2r (eu − ev )T QT Λ2r+2 Q(eu − ev ),
and
(A.33)
kΛQ(eu − ev )k22 ≤ |λd |−2r (eu − ev )T QT Λ2r+2 Q(eu − ev ).
By combining (A.32) and (A.31),
kΛQ(eu − ev )k22 ≥ |λ1 |−2r dist1 (u, v) − 32Bq
d|λ1 |2r+1
!
1
(c1 pn) 2 −θ
32Bq d|λ1 |
= |λ1 |−2r dist1 (u, v) −
.
1
(c1 pn) 2 −θ
Similarly, by combining (A.33) and (A.31),
kΛQ(eu − ev )k22 ≤ |λd |−2r dist1 (u, v) + 32Bq
d|λ1 |2r+1
!
1
(c1 pn) 2 −θ
32Bq d|λ1 | |λ1 | 2r
−2r
.
= |λd | dist1 (u, v) +
1
(c1 pn) 2 −θ |λd |
Lemma A.17 showed that $\mathrm{dist}_1$ approximates $\|\Lambda Q(e_u - e_v)\|_2^2$ well if $r$ is constant and the condition number $|\lambda_1|/|\lambda_d|$ is not too large. We can show that in fact $\mathrm{dist}_2$ approximates $\|\Lambda Q(e_u - e_v)\|_2^2$ well, adjusting for the distortion which was present in $\mathrm{dist}_1$.

Lemma A.18. Assume that $\theta \in (0, \frac{1}{4})$, $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$, $c_1 p n = \omega(1)$, and $p = o(n^{-1+1/(5+8\theta)})$. Let $\lambda_{\mathrm{gap}} = \min_{i \neq j} |\lambda_i - \lambda_j|$. Assume that $d'$ and $r$ satisfy
$$(\mathrm{A.34}) \quad r \ge \frac{(d'-1) \ln(2|\lambda_1|/\lambda_{\mathrm{gap}}) + \ln(d')}{\ln(2)},$$
$$(\mathrm{A.35}) \quad \frac{\theta}{2(1+2\theta)} \ln\Big(\frac{1}{p}\Big) \ge (d'-1) \ln(2|\lambda_1|/\lambda_{\mathrm{gap}}) + \ln(d'),$$
$$(\mathrm{A.36}) \quad \left(\frac{7 |\lambda_d|^2 c_1 p n}{8 |\lambda_1|}\right)^{r+\frac{1}{2}} \ge p^{-6/8},$$
$$(\mathrm{A.37}) \quad \left(\frac{9 c_1 p n}{8}\right)^{r+d'} \le (c_1 p)^{-7/8}.$$
For any $u, v \in [n]$, conditioned on $A^4 \cap A^5_{u,r+d'} \cap A^5_{v,r+d'} \cap_{i \in [d']} A^8_{u,v,r,r+i}$, for sufficiently large $n$,
$$\big| \mathrm{dist}_2(u,v) - \|\Lambda Q (e_u - e_v)\|_2^2 \big| \le 32 B_q d |\lambda_1| (c_1 p n)^{-\frac{1}{2}+\theta}.$$
Proof. Recall that d0 is the number of distinct valued eigenvalues in Λ,
thus it is upper bounded by d. The distance estimate dist2 is calculated
according to
dist2 (u, v) =
X
T
1p
z(i) 1−c
Ñ
−
Ñ
M
Ñ
−
Ñ
,
u,r
v,r
2
u,r+i
v,r+i
c2 p
i∈[d0 ]
0
where z ∈ Rd is a vector that satisfies
Λ2r+2 Λ̃z = Λ2 1.
Such a vector z always exists and is unique because Λ̃ is a Vandermonde
matrix, and Λ−2r 1 lies within the span of its columns. We will split the
difference into a few terms
dist2 (u, v) − kΛQ(eu − ev )k22
X
T
1p
=
z(i) 1−c
c2 p (Ñu,r − Ñv,r ) M2 (Ñu,r+i − Ñv,r+i )
i∈[d0 ]
− (Ñu,r − Ñv,r )T QT Λi+1 Q(Ñu,r − Ñv,r )
T
X
+
z(i) Ñu,r − Ñv,r QT Λi+1 Q Ñu,r − Ñv,r − kΛQ(eu − ev )k22 .
i∈[d0 ]
First let’s consider the last line. By definition,
T
X
z(i) Ñu,r − Ñv,r QT Λi+1 Q Ñu,r − Ñv,r
i∈[d0 ]
X
=
z(i)
X
i∈[d0 ]
2
T
λi+1
Ñ
−
Ñ
e
Q
u,r
v,r
k
k
k
By design,
recall that z is chosen such that Λ2r+2 Λ̃z = Λ2 1, which implies
P
that i∈[d0 ] z(i)λi−1
= λ−2r
for all k. Therefore, it follows that
k
k
2 X
2
X
X
−2r+2
T
T
z(i)
λi+1
Ñ
−
Ñ
=
λ
Ñ
−
Ñ
,
e
Q
e
Q
u,r
v,r
u,r
v,r
k
k
k
k
i∈[d0 ]
k
k
such that
T
X
z(i) Ñu,r − Ñv,r QT Λi+1 Q Ñu,r − Ñv,r − kΛQ(eu − ev )k22
i∈[d0 ]
=
X
=
X
=
X
2 X
T
Q
Ñ
−
Ñ
e
−
λ2k (eTk Q(eu − ev ))2
λ−2r+2
u,r
v,r
k
k
k
k
2
2
eTk Q Ñu,r − Ñv,r
λ2k λ−2r
k
− (eTk Q(eu − ev ))2
k
λ−2r+2
k
eTk Q
Ñu,r − Ñv,r
−
(eTk Λr Q(eu
2
− ev ))
.
k
By applying Lemma A.5 with a choice of r0 = 0, ∆1 = ∆2 = r, we can
bound each term of the summation to show that
T
X
z(i) Ñu,r − Ñv,r QT Λi+1 Q Ñu,r − Ñv,r − kΛQ(eu − ev )k22
i∈[d0 ]
≤
X
4Bq |λk |−1
λ2k
1
(c1 pn) 2 −θ
k
(A.38)
≤
!
4Bq d|λ1 |
1
(c1 pn) 2 −θ
(1 + o(1))
(1 + o(1)),
where the last step comes from the assumption that |λk | ≤ |λ1 |. Now let’s
consider bounding the difference
X
T
1p
z(i) 1−c
c2 p (Ñu,r − Ñv,r ) M2 (Ñu,r+i − Ñv,r+i )
i∈[d0 ]
− (Ñu,r − Ñv,r )T QT Λi+1 Q(Ñu,r − Ñv,r ) .
By Lemma A.15, we upper bound the difference
1−c1 p
c2 p
T
Ñu,r − Ñv,r M2 Ñu,r+i − Ñv,r+i
T
− Ñu,r − Ñv,r F Ñu,r+i − Ñv,r+i .
By Lemma A.6, plugging in r0 = r, r1 = r, and r2 = r + i, we can upper
bound the difference
T
T
Ñu,r − Ñv,r F Ñu,r+i − Ñv,r+i − Ñu,r − Ñv,r QΛi+1 Q Ñu,r − Ñv,r .
Therefore, by plugging in the corresponding bounds in Lemmas A.15 and
A.6,
1−c1 p
c2 p
T
Ñu,r − Ñv,r M2 Ñu,r+i − Ñv,r+i
T
− Ñur − Ñvr QT Λi+1 Q Ñu,r − Ñv,r
−1/2
≤ 4(1 − c1 p)c2
|λd |2r+i p1/8
+ 4(c1 pn + c1 n
+4
|λd |2
2
r
1/4
)Bq2 d|λ1 |r
4Bq d|λ1 |i
1
(c1 pn) 2 −θ
8
7c1 pn
r+i
(1 + o(1))
!
(1 + o(1)) .
By definition of z, Λ2r+2 Λ̃z = Λ2 1, where we recall that Λ̃(a, b) = λb−1
a .
b−1
Let us define a diagonal matrix D with Dbb = |λ1 |
such that (Λ̃D)ab =
b−1
λa
+
. Let (Λ̃D) denote the pseudoinverse of (Λ̃D) if there are repeated
|λ1 |
eigenvalues, such that
z = D−1 (Λ̃D)+ Λ−2r 1
and
zi = eTi D−1 (Λ̃D)+ Λ−2r
X
eh
h
=
X
+
−i+1
λ−2r
.
h (Λ̃D)ih |λ1 |
h
By a result of Gautschi (1962, "On inverses of Vandermonde and confluent Vandermonde matrices"), we can bound the sum of the entries of the pseudoinverse of a Vandermonde matrix with the following bound:
$$(\mathrm{A.39}) \quad \sum_{i \in [d']} \sum_{j \in [d']} \big| (\tilde{\Lambda} D)^+_{ji} \big| \le \sum_{i \in [d']} \prod_{j \neq i} \frac{|\lambda_1| + |\lambda_j|}{|\lambda_i - \lambda_j|} \le d' \left( \frac{2 |\lambda_1|}{\min_{i \neq j} |\lambda_i - \lambda_j|} \right)^{d'-1}.$$
We use $\lambda_{\mathrm{gap}}$ to denote the minimum gap between eigenvalues, $\min_{i \neq j} |\lambda_i - \lambda_j|$. (Although Gautschi gives the result for inverses, it extends to the pseudoinverse when there are repeated eigenvalues.) The above bound will become useful later.
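To make the role of $z$ concrete: it is defined only through the linear system $\Lambda^{2r+2} \tilde{\Lambda} z = \Lambda^2 \mathbf{1}$ recalled above, with $\tilde{\Lambda}(a,b) = \lambda_a^{b-1}$. The following is a minimal numerical sketch (our own illustration) of solving for $z$ with a pseudoinverse; the scaling matrix $D$ used in the text for the conditioning argument is not needed just to compute $z$, and the function name and toy eigenvalues are assumptions.

    import numpy as np

    def solve_z(lams, r):
        """Solve Lambda^{2r+2} * Vand(lams) * z = Lambda^2 * 1 for z, where
        Vand(lams)[a, b] = lams[a] ** b for b = 0, ..., d'-1. The pseudoinverse
        also covers repeated eigenvalues, as discussed in the text."""
        lams = np.asarray(lams, dtype=float)
        d = len(lams)
        vand = np.vander(lams, N=d, increasing=True)    # tilde-Lambda
        lhs = np.diag(lams ** (2 * r + 2)) @ vand        # Lambda^{2r+2} tilde-Lambda
        rhs = lams ** 2                                  # Lambda^2 * 1
        return np.linalg.pinv(lhs) @ rhs

    # Usage with assumed toy eigenvalues:
    print(solve_z([0.9, 0.5, -0.3], r=2))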
Therefore,
X
T
1p
z(i) 1−c
c2 p (Ñur − Ñvr ) M2 (Ñu,r+i − Ñv,r+i )
i∈[d0 ]
− (Ñur − Ñvr )T QT Λi+1 Q(Ñu,r − Ñv,r )
XX
+
T
−i+1 1−c1 p
=
λ−2r
h (Λ̃D)ih |λ1 |
c2 p (Ñur − Ñvr ) M2 (Ñu,r+i − Ñv,r+i )
i∈[d0 ] h
− (Ñur − Ñvr )T QT Λi+1 Q(Ñu,r − Ñv,r )
h
X
X
−1/2
+
−i+1
≤
λ−2r
(
Λ̃D)
|λ
|
4(1 − c1 p)c2 |λd |2r+i p1/8
1
h
ih
h
i∈[d0 ]
+4
|λd |2
2
r
4Bq d|λ1 |i
(1 + o(1))
1
(c1 pn) 2 −θ
r+i
i
+ 4(c1 pn + c1 n1/4 )Bq2 d|λ1 |r 7c18pn
(1 + o(1))
X |λ | 2r X
−1/2
|λd | i
d
≤ 4|λ1 |(1 − c1 p)c2 p1/8
(Λ̃D)+
ih |λ1 |
|λh |
i∈[d0 ]
h
+ 4|λ1 |
4Bq d
(1 + o(1))
1
(c1 pn) 2 −θ
X
|λd |2
2|λh |2
r X
(Λ̃D)+
ih
i∈[d0 ]
h
+ 4|λ1 |(c1 pn + c1 n1/4 )Bq2 d (1 + o(1))
X
8|λ1 |
7|λh |2 c1 pn
r+ 1
2
|λh |2
|λ1 |2
−1/2 1/8
≤ 4|λ1 |(1 − c1 p)c2
p
(Λ̃D)+
ih
i∈[d0 ]
h
XX
1/2 X
(Λ̃D)+
ih
h i∈[d0 ]
+ 4|λ1 |
4Bq d
1
(c1 pn) 2 −θ
+ 4|λ1 |(c1 pn + c1 n
2−r (1 + o(1))
XX
(Λ̃D)+
ih
h i∈[d0 ]
1/4
)Bq2 d (1
+ o(1))
X
h
p
6/8
X
i∈[d0 ]
(Λ̃D)+
ih
8
7|λ1 |c1 pn
1/2
.
8
7|λ1 |c1 pn
i− 1
2
We substitute in the bound from (A.39) to show that
X
T
1p
z(i) 1−c
c2 p (Ñur − Ñvr ) M2 (Ñu,r+i − Ñv,r+i )
i∈[d0 ]
− (Ñur − Ñvr )T QT Λi+1 Q(Ñu,r − Ñv,r )
0
−1/2 1/8 0 2|λ1 | d −1
≤ 4|λ1 |(1 − c1 p)c2 p d λgap
d0 −1
4Bq d
−r
0 2|λ1 |
+ 4|λ1 |
2
(1
+
o(1))
d
1 −θ
λgap
(c1 pn) 2
+
27 |λ1 |
7
1/2
(c1 pn + c1 n1/4 )Bq2 d (1 + o(1)) p6/8 (c1 pn)−1/2 d0
2|λ1 |
λgap
d0 −1
Since θ ∈ (0, 41 ) and p = o(n−1+1/(5+8θ) ), it follows that p = o(n−4/5 ), which
can be used to show that
1/2
1/2
c1 p5/4 n1/2 + c1 p1/4 n−1/4 = o(p1/8 ).
This implies that the first term asymptotically dominates the third term
in the right hand side of the above inequality. Then we put together the
previous bound with (A.38) to show that
dist2 (u, v) − kΛQ(eu − ev )k22
d0 −1
(a)
−1/2
1|
≤ 4|λ1 |(1 − c1 p)c2 p1/8 d0 2|λ
(1 + o(1))
λgap
d0 −1
4Bq d
−r
0 2|λ1 |
+
+ 4|λ1 |
2 (1 + o(1)) d λgap
1 −θ
(c1 pn) 2
(b)
≤
4Bq d|λ1 |
1
(c1 pn) 2 −θ
4Bq d|λ1 |
1
(c1 pn) 2 −θ
(1 + o(1))
(1 + o(1)).
To justify inequality (b), we can verify that the second term on the right
hand side of (a) is dominated by the third term due to the assumption stated
in (A.34), which implies that
2−r d0
2|λ1 |
λgap
d0 −1
≤ 1.
By the assumption that p = o(n−1+1/(5+8θ) ), it follows that p(1−2θ)/8(1+2θ) =
1
o((c1 pn)− 2 +θ ). The assumption stated in (A.35) implies that
p
θ/2(1+2θ) 0
d
2|λ1 |
λgap
d0 −1
≤ 1.
We combine these two observations to show that the first term on the right
hand side of (a) is dominated by the third term,
d0 −1
d0 −1
(1−2θ)/8(1+2θ) θ/2(1+2θ) 0 2|λ1 |
1|
p1/8 d0 2|λ
=
p
p
d
λgap
λgap
1
= o((c1 pn)− 2 +θ ).
Therefore, for sufficiently large n,
dist2 (u, v) − kΛQ(eu − ev )k22 ≤
32Bq d|λ1 |
1
(c1 pn) 2 −θ
.
We can bound the probability of the event $\cap_{i \in [d']} A^8_{u,v,r,r+i}$.

Lemma A.19. Assume that $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$. For $r, d'$ which satisfy
$$\left(\frac{9 c_1 p n}{8}\right)^{r+d'} \le (c_1 p)^{-7/8}, \quad \text{and} \quad \left(\frac{7 \lambda_d^2\, c_1 p n}{8 |\lambda_1|}\right)^{r+\frac{1}{2}} \ge p^{-6/8},$$
conditioned on $A^5_{u,r+d'} \cap A^5_{v,r+d'}$, for sufficiently large $n$,
$$\mathbb{P}\big( \cup_{i \in [d']} \neg A^8_{u,v,r,r+i} \mid \mathcal{F}_{E_1}, A^5_{u,r+d'}, A^5_{v,r+d'} \big) \le 8 d' \exp\left( - \frac{(1 - c_1 p)\, p^{-1/4}}{5 |\lambda_1| B_q^2 d} \right).$$

Proof. Recall the definition of $A^8_{u,v,r,r+i}$ from (A.28). The result follows directly from applying a union bound to the bound provided in Lemma A.16.
Next we show that for any $u, v \in [n]$, conditioned on $A^4$, $A^5_{u,r+d'}$, $A^5_{v,r+d'}$, and $A^6_{u,v,r+d'}$, with high probability a large fraction of the vertices $a \in [n]$ will satisfy $\cap_{i \in [d']} A^8_{u,a,r,r+i}$, and thus have good distance measurements $\mathrm{dist}_1(u,a)$ and $\mathrm{dist}_2(u,a)$. Let us define
$$(\mathrm{A.40}) \quad A^9_{u,r,l} := \left\{ \sum_{a \in [n] \setminus u} \mathbb{I}\Big( A^5_{a,r+l} \cap A^5_{u,r+l} \cap_{i=1}^{l} A^8_{u,a,r,r+i} \Big) \ge \left( 1 - \exp\left( - \frac{(1 - c_1 p)\, p^{-1/4}}{10 B_q^2 d} \right) \right) \sum_{a \in [n] \setminus u} \mathbb{I}\big( A^5_{a,r+l} \big) \right\}.$$
Lemma A.20. Assume that $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$. For $r, l$ which satisfy
$$\left(\frac{9 c_1 p n}{8}\right)^{r+d'} \le (c_1 p)^{-7/8}, \quad \text{and} \quad \left(\frac{7 \lambda_d^2\, c_1 p n}{8 |\lambda_1|}\right)^{r} \ge p^{-6/8},$$
for large enough $n$,
$$\mathbb{P}\big( \neg A^9_{u,r,l} \mid \mathcal{F}_{\Theta}, A^5_{u,r+l} \big) \le 8 l \exp\left( - \frac{(1 - c_1 p)\, p^{-1/4}}{10 |\lambda_1| B_q^2 d} \right).$$
Proof. Recall the definition of A9u,r,l from (A.40). For readability, let δ
and Vu∗ denote
!!
(1 − c1 p)p−1/4
,
δ = 1 − exp −
10|λ1 |Bq2 d
and
Vu∗ = {a ∈ [n] \ u : A5a,r+l }.
The event ¬A9u,r,l conditioned on FΘ is equivalent to
¬A9u,r,l =
X
I(¬A5a,r+l ∪li=1 ¬A8u,a,r,r+i ) ≥ (n − 1) − δ|Vu∗ |
a∈[n]\u
X
X
l
8
∗
5
=
I(∪i=1 ¬Au,a,r,r+i ) ≥ (n − 1) − δ|Vu |
I(¬Aa,r+l ) +
a∈[n]\u
a∈[n]\u:A5a,r+l
X
X
=
I(∪li=1 ¬A8u,a,r,r+i ) ≥ (n − 1) − δ|Vu∗ | −
I(¬A5a,r+l )
∗
a∈Vu
a∈[n]\u
X
l
8
∗
∗
=
I(∪i=1 ¬Au,a,r,r+i ) ≥ (n − 1) − δ|Vu | − ((n − 1) − |Vu |)
a∈[n]\u:A5
a,r+l
X
l
8
∗
=
I(∪i=1 ¬Au,a,r,r+i ) ≥ (1 − δ)|Vu | .
a∈[n]\u:A5
a,r+l
Therefore, by Lemma A.19 we can bound the probability of I(∪li=1 ¬A8u,a,r,r+i )
and apply Markov’s inequality to show that
h
i
P
l ¬A8
5
5
E
I(∪
)
|
A
,
A
i=1
u,a,r,r+i
a∈Vu∗
u,r+l
a,r+l
P(¬A9u,r,l | FΘ , A5u,r+l ) ≤
∗
(1 − δ)|Vu |
!
(1 − c1 p)p−1/4
(1 − δ)−1
≤ 8l exp −
5|λ1 |Bq2 d
!
(1 − c1 p)p−1/4
= 8l exp −
10Bq2 d
A.5. Existence of Close Neighbors. In Lemmas A.17 and A.18, we showed that the distance measurements $\mathrm{dist}_1$ and $\mathrm{dist}_2$ are good estimates of the true $L_2$ functional differences. We would like to show that with high probability a majority of the entries $(a,b)$ that are included in the final estimate of $\hat{F}(u,v)$ are indeed points which are close in functional value, i.e. $F(a,b) \approx F(u,v)$. First we show that with high probability there is a large enough set of vertices $a \in [n]$ such that $\|\Lambda Q(e_u - e_a)\|_2^2$ is small.

Recall the definition of $\phi$ from (2.1),
$$\phi(\xi) := \operatorname{ess\,inf}_{\alpha_0 \in \mathcal{X}_1} P_1\left( \Big\{ \alpha \in \mathcal{X}_1 \ \text{s.t.} \ \sum_{k} \lambda_k^2 \big( q_k(\alpha) - q_k(\alpha_0) \big)^2 < \xi^2 \Big\} \right).$$
By the assumption on the form of the function $f$ (that it has finite spectrum with an orthonormal decomposition represented by $\{\lambda_k, q_k\}_{k \in [d]}$), for any vertices $u, v$,
$$\int_{\mathcal{X}_1} \big( f(\alpha_u, y) - f(\alpha_v, y) \big)^2 dP_1(y) = \sum_{k \in [d]} \lambda_k^2 \big( q_k(\alpha_u) - q_k(\alpha_v) \big)^2 = \|\Lambda Q (e_u - e_v)\|_2^2.$$
In essence, for a measure one set of $\alpha_0 \in \mathcal{X}_1$, $\phi(\xi)$ is a lower bound on the probability that the latent variable associated to a randomly chosen vertex $v$, denoted by $\alpha_v$, is within a small $L_2$ distance of $\alpha_0$, in the sense that $f(\alpha_0, \cdot)$ and $f(\alpha_v, \cdot)$ are close. Because we assumed $\mathcal{X}_1$ is a compact metric space, and if $f$ satisfies some continuity conditions, then we can show that the function $\phi(\xi)$ is well defined and positive for all $\xi > 0$.

Depending on whether we use $\mathrm{dist}_1$ or $\mathrm{dist}_2$, we need the size of the neighborhood around $\alpha_0$ to be chosen as a function of the different bounds in Lemmas A.17
dist1 or dist2 are less than ξ1 (n) or ξ2 (n) respectively. Let us then define
the following thresholds which follow from plugging in ξ1 (n) or ξ2 (n) to the
bounds presented in Lemmas A.17 and A.18.
(A.41)
2
ξ1LB
:=
(A.42)
2
ξ1U
B
(A.43)
2
ξ2LB
:=
(A.44)
2
ξ2U
B :=
Bq d|λ1 |
,
1
(c1 pn) 2 −θ
|λ1 | 2r 65Bq d|λ1 |
:=
,
1
|λd |
(c1 pn) 2 −θ
Bq d|λ1 |
1
,
1
.
(c1 pn) 2 −θ
65Bq d|λ1 |
(c1 pn) 2 −θ
(A.45)
For some $\xi$, let us define the event
$$(\mathrm{A.46}) \quad A^{10}_{u,\xi} := \left\{ \sum_{a \in [n] \setminus u} \mathbb{I}\Big( \|\Lambda Q (e_u - e_a)\|_2^2 \le \xi^2 \Big) \ge \frac{(n-1)\,\phi(\xi)}{2} \right\}.$$
We bound the probability of these events in Lemma A.21. Under the assumption that the latent variables are sampled from the uniform distribution on $[0,1]$ and the function $f$ is piecewise $L$-Lipschitz with minimum piece size $\ell \le 1$, the function $\phi(\xi)$ is bounded by
$$\phi(\xi) \ge \min\left( \ell, \frac{\xi}{2L} \right),$$
because
$$\int_{\mathcal{X}_1} \big( f(\alpha_u, y) - f(\alpha_v, y) \big)^2 dP_1(y) = \int_{[0,1]} \big( f(\alpha_u, y) - f(\alpha_v, y) \big)^2 dy = \sum_{k \in [d]} \lambda_k^2 \big( q_k(\alpha_u) - q_k(\alpha_v) \big)^2 = \|\Lambda Q (e_u - e_v)\|_2^2.$$
If $f$ is piecewise constant, then $L$ is $0$, such that $\phi(\xi) \ge \ell$.
Lemma A.21. For some function $f$ (having spectral decomposition $\{\lambda_k, q_k\}_{k \in [d]}$) and latent probability measure $P_1$ such that
$$\phi(\xi) := \min_{\alpha_0 \in \mathcal{X}_1} P_1\left( \Big\{ \alpha \in \mathcal{X}_1 \ \text{s.t.} \ \sum_{k} \lambda_k^2 \big( q_k(\alpha) - q_k(\alpha_0) \big)^2 < \xi^2 \Big\} \right),$$
it holds that
$$\mathbb{P}\big( \neg A^{10}_{u,\xi} \big) \le \exp\left( - \frac{(n-1)\,\phi(\xi)}{8} \right).$$
Proof. Conditioned on $\alpha_u$,
$$\sum_{a \in [n] \setminus u} \mathbb{I}\big( \|\Lambda Q (e_u - e_a)\|_2 \le \xi \big)$$
is a sum of $(n-1)$ independent Bernoulli random variables with probability parameter
$$\mathbb{P}_{\alpha_a}\left( \sum_{k} \lambda_k^2 \big( q_k(\alpha_a) - q_k(\alpha_u) \big)^2 < \xi^2 \right) \ge \phi(\xi).$$
Therefore, it stochastically dominates a $\mathrm{Binomial}(n-1, \phi(\xi))$ random variable, such that by stochastic dominance, for any $t$ and $\alpha_u$,
$$\mathbb{P}\left( \sum_{a \in [n] \setminus u} \mathbb{I}\big( \|\Lambda Q (e_u - e_a)\|_2 \le \xi \big) \le t \ \Big|\ \alpha_u \right) \le \mathbb{P}\big( \mathrm{Binomial}(n-1, \phi(\xi)) \le t \mid \alpha_u \big).$$
We choose $t = \frac{1}{2}(n-1)\phi(\xi)$ and show by Chernoff's bound that
$$\mathbb{P}\left( \sum_{a \in [n] \setminus u} \mathbb{I}\big( \|\Lambda Q (e_u - e_a)\|_2 \le \xi \big) \le \frac{1}{2}(n-1)\phi(\xi) \right) = \int_{\mathcal{X}_1} \mathbb{P}\left( \mathrm{Binomial}(n-1, \phi(\xi)) \le \frac{1}{2}(n-1)\phi(\xi) \ \Big|\ \alpha_u \right) dP_1(\alpha_u) \le \int_{\mathcal{X}_1} \exp\left( - \frac{\phi(\xi)(n-1)}{8} \right) dP_1(\alpha_u) = \exp\left( - \frac{(n-1)\phi(\xi)}{8} \right).$$
Define
$$(\mathrm{A.47}) \quad V_u^1 := \{ a \in [n] : \mathrm{dist}_1(u,a) < \xi_1(n) \},$$
$$(\mathrm{A.48}) \quad W_u^1 := \{ a \in [n] : \mathrm{dist}_1(u,a) < \xi_1(n),\ \|\Lambda Q(e_u - e_a)\|_2^2 < \xi_{1UB}^2 \},$$
$$(\mathrm{A.49}) \quad V_u^2 := \{ a \in [n] : \mathrm{dist}_2(u,a) < \xi_2(n) \},$$
$$(\mathrm{A.50}) \quad W_u^2 := \{ a \in [n] : \mathrm{dist}_2(u,a) < \xi_2(n),\ \|\Lambda Q(e_u - e_a)\|_2^2 < \xi_{2UB}^2 \}.$$
In the following lemma we will assume that $\phi$ satisfies
$$\phi(\xi_{1LB}) = \omega\left( \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right) \right).$$
In fact, when the latent variables are sampled from the uniform distribution on $[0,1]$ and the function $f$ is piecewise $L$-Lipschitz with minimum piece size $\ell \le 1$, then $\phi(\xi) \ge \min(\ell, \xi/2L)$. By design, $\xi_{1LB}^2 = B_q d |\lambda_1| (c_1 p n)^{-\frac{1}{2}+\theta}$, such that any exponentially decaying term is dominated by $\phi(\xi_{1LB})$, which is at least polynomial in $(c_1 p n)$.
Lemma A.22. Assume that $|\lambda_d| = \omega((c_1 p n)^{-\frac{1}{2}+\theta})$ and
$$\phi(\xi_{1LB}) = \omega\left( \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right) \right).$$
For $p = o(n^{-1+1/(5+8\theta)})$ and $r$ which satisfies
$$\left(\frac{9 c_1 p n}{8}\right)^{r+1} \le (c_1 p)^{-7/8}, \quad \text{and} \quad \left(\frac{7 \lambda_d^2\, c_1 p n}{8 |\lambda_1|}\right)^{r+\frac{1}{2}} \ge p^{-6/8},$$
conditioned on
$$A^{10}_{u,\xi_{1LB}} \cap A^9_{u,r,1} \cap A^6_{u,v,r+1} \cap A^5_{u,r+1} \cap A^5_{v,r+1} \cap A^4,$$
it follows that
$$|V_u^1| \ge \frac{n \phi(\xi_{1LB})}{2}(1 - o(1)), \qquad |W_u^1| \ge \frac{n \phi(\xi_{1LB})}{2}(1 - o(1)), \qquad \frac{|W_u^1|}{|V_u^1|} \ge 1 - \frac{2}{\phi(\xi_{1LB})} \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right)(1 + o(1)).$$
Additionally assuming $\phi(\xi_{2LB}) = \omega\big( \exp\big( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \big) \big)$ and that $r, d'$ satisfy
$$(\mathrm{A.51}) \quad r \ge \frac{(d'-1) \ln(2|\lambda_1|/\lambda_{\mathrm{gap}}) + \ln(d')}{\ln(2)},$$
$$(\mathrm{A.52}) \quad \frac{\theta}{2(1+2\theta)} \ln\Big(\frac{1}{p}\Big) \ge (d'-1) \ln(2|\lambda_1|/\lambda_{\mathrm{gap}}) + \ln(d'),$$
$$(\mathrm{A.53}) \quad \left(\frac{9 c_1 p n}{8}\right)^{r+d'} \le (c_1 p)^{-7/8},$$
for any $u, v \in [n]$, conditioned on
$$A^{10}_{u,\xi_{2LB}} \cap A^9_{u,r,d'} \cap A^6_{u,v,r+d'} \cap A^5_{u,r+d'} \cap A^5_{v,r+d'} \cap A^4,$$
$$|V_u^2| \ge \frac{n \phi(\xi_{2LB})}{2}(1 - o(1)), \qquad |W_u^2| \ge \frac{n \phi(\xi_{2LB})}{2}(1 - o(1)), \qquad \frac{|W_u^2|}{|V_u^2|} \ge 1 - \frac{2}{\phi(\xi_{2LB})} \exp\left( - \frac{(c_1 p n)^{2\theta}}{8 B_q^2 d} \right)(1 + o(1)).$$
10
9
6
5
Proof. Recall the definitions of A10
u,ξ1LB , Au,ξ2LB , Au,r,1 , Au,v,r+d0 , Au,r+d0 ,
A5v,r+d0 from (A.46), (A.40), (A.25), and (A.21). Recall the definitions of
ξ1LB , ξ1U B , ξ2LB and ξ2U B from (A.41), (A.42), (A.43), and (A.44). Recall
the definitions of Vu1 , Wu1 , Vu2 , and Wu2 from (A.47), (A.48), (A.49), and
(A.50).
Vu1 always contains the vertex u since dist1 (u, u) = 0. For all vertices
2
a such that A5a,r+d0 and A8u,a,r,r+d0 hold, if kΛQ(eu − ea )k22 < ξ1LB
, then
Lemma A.17 along with the definition of ξ1LB implies that dist1 (u, a) <
1
(c1 pn)− 2 +2θ , such that a ∈ Vu1 . Therefore, the set Vu1 is lower bounded by
2
minus the number
the number of vertices such that kΛQ(eu − ea )k22 < ξ1LB
5
8
of vertices such that either Aa,r+1 or Aua,r,r+1 are violated (not satisfied).
(A.54)
|Vu1 | ≥ 1 +
X
a∈[n]\u
2
I(kΛQ(eu − ea )k22 < ξ1LB
)−
X
a∈[n]\u
I(¬A5a,r+1 ∪ ¬A8u,a,r,r+1 )
Conditioned on A9u,r,1 , A6u,v,r+1 , A5v,r+1 , and A5u,r+1 , by definition
X
I(¬A5a,r+1 ∪ ¬A5u,r+1 ∪ ¬A8u,a,r,r+1 )
a∈[n]\u
≤ (n − 1) −
(1 − c1 p)p−1/4
1 − exp −
10|λ1 |Bq2 d
!!
X
I(A5a,r+1 )
a∈[n]\u
!!
(1 − c1 p)p−1/4
(c1 pn)2θ
≤ (n − 1) − 1 − exp −
1 + (n − 2) 1 − exp −
10|λ1 |Bq2 d
8Bq2 d
!
!
(c1 pn)2θ
(1 − c1 p)p−1/4
+ exp −
≤ (n − 1) exp −
10|λ1 |Bq2 d
8Bq2 d
(A.55)
(c1 pn)2θ
≤ (n − 1) exp −
8Bq2 d
(1 + o(1)),
where the last inequality follows from the condition that p = o(n−1+1/(5+8θ) )
such that p−1/4 = ω((c1 pn)2θ ).
Conditioned on A10
u,ξ1LB , it follows from (A.55), (A.54), and the definition
of A10
u,ξ1LB
X
X
2
|Vu1 | ≥ 1 +
I(kΛQ(eu − ea )k22 < ξ1LB
)−
I(¬A5a,r+1 ∪ ¬A8u,a,r,r+1 )
a∈[n]\u
a∈[n]\u
φ(ξ1LB )
(c1 pn)2θ
≥ 1 + (n − 1)
− exp −
(1 + o(1))
2
8Bq2 d
φ(ξ1LB )
(c1 pn)2θ
≥n
− exp −
(1
−
o(1))
2
8Bq2 d
≥
nφ(ξ1LB )
(1 − o(1)).
2
2θ
1 pn)
where the last step followed from the assumption that φ(ξ1LB ) = ω exp − (c8B
.
2d
q
With similar arguments, we can bound |Vu2 | using instead Lemma A.18
9
6
5
and ξ2LB instead of ξ1LB to show that conditioned on A10
u,ξ2LB , Au,r,d0 , Au,v,r+d0 , Av,r+d0 ,
and A5u,r+d0 ,
(c1 pn)2θ
nφ(ξ2LB )
φ(ξ2LB )
|Vu2 | ≥ n
− exp −
(1
−
o(1))
≥
(1 − o(1)).
2
8Bq2 d
2
Wu1 always contains the vertex u since dist1 (u, u) = 0 and kΛQ(eu −
eu )k22 = 0. For all vertices a such that A5a,r+1 and A8u,a,r,r+1 hold, if kΛQ(eu −
2
2
ea )k22 < ξ1LB
< ξ1U
B , then Lemma A.17 implies that dist1 (u, a) < ξ1 (n),
and a ∈ Wu1 . Therefore, the set Wu1 is lower bounded by the number of
2
vertices such that kΛQ(eu − ea )k22 < ξ1LB
minus the number of vertices such
5
8
that either Aa,r+1 or Aua,r,r+1 are violated (not satisfied). Using the same
arguments as for proving the lower bound on |Vu1 |, we can obtain the same
lower bound on |Wu1 |,
X
X
2
I(¬A5a,r+1 ∪ ¬A8u,a,r,r+1 )
I(kΛQ(eu − ea )k22 < ξ1LB
)−
|Wu1 | ≥ 1 +
a∈[n]\u
a∈[n]\u
≥n
≥
φ(ξ1LB )
(c1 pn)2θ
(1 + o(1))
− exp −
2
8Bq2 d
nφ(ξ1LB )
(1 − o(1)).
2
We use similar arguments to lower bound |Wu2 |,
(c1 pn)2θ
φ(ξ2LB )
− exp −
(1 + o(1))
|Wu2 | ≥ n
2
8Bq2 d
≥
nφ(ξ2LB )
(1 − o(1)).
2
We also bound the ratio of |Wu1 | to |Vu1 | using (A.55) and the definition of
A10
u,ξ1LB ,
|Wu1 |
|Vu1 |
≥
1+
≥
1+
≥
≥
2
2
a∈[n]\u I(kΛQ(eu − ea )k2 < ξ1LB )
P
5
2
2
a∈[n]\u I(¬Aa,r+1
a∈[n]\u I(kΛQ(eu − ea )k2 < ξ1LB ) +
1LB )
1 + (n−1)φ(ξ
2
)
(c1 pn)2θ
(n − 1) φ(ξ1LB
+
exp
−
(1
+
o(1))
2
2
8Bq d
1+
P
P
nφ(ξ1LB )
2θ
1 pn)
nφ(ξ1LB ) + 2(n − 1) exp − (c8B
(1 + o(1))
2
qd
2θ
1 pn)
nφ(ξ1LB ) − 2(n − 1) exp − (c8B
(1 + o(1))
2d
q
nφ(ξ1LB )
n−1
2
(c1 pn)2θ
=1−
exp −
(1 + o(1))
n
φ(ξ1LB )
8Bq2 d
2
(c1 pn)2θ
=1−
exp −
(1 + o(1)).
φ(ξ1LB )
8Bq2 d
∪ ¬A8u,a,r,r+1 )
Using similar arguments we can bound the ratio of |Wu2 | to |Vu2 |, showing
that
|Wu2 |
2
(c1 pn)2θ
(1 + o(1)).
≥1−
exp −
|Vu2 |
φ(ξ2LB )
8Bq2 d
For each $a \in [n]$, we show concentration of $\sum_{b \in [n] \setminus a} \mathbb{I}((a,b) \in E_1 \cup E_2)$. Let us define the event
$$(\mathrm{A.56}) \quad A^{11} := \cap_{a \in [n]} \left\{ \sum_{b \in [n] \setminus a} \mathbb{I}\big( (a,b) \in E_1 \cup E_2 \big) < (c_1 + c_2)\big( p + (n-1)^{-3/4} \big)(n-1) \right\}.$$

Lemma A.23.
$$\mathbb{P}\big( \neg A^{11} \big) \le n \exp\left( - \frac{(c_1 + c_2)(n-1)^{1/4}}{3} \right).$$
Proof. We can show this easily because this is just a sum of Bernoulli random variables. For a fixed $a$,
$$\mathbb{E}\Big[ \sum_{b \in [n] \setminus a} \mathbb{I}\big( (a,b) \in E_1 \cup E_2 \big) \Big] = (c_1 + c_2)\, p\, (n-1).$$
By Chernoff's bound,
$$\mathbb{P}\Big( \sum_{b \in [n] \setminus a} \mathbb{I}\big( (a,b) \in E_1 \cup E_2 \big) > \big( 1 + p^{-1}(n-1)^{-3/4} \big)(c_1 + c_2)\, p\, (n-1) \Big) \le \exp\left( - \frac{(c_1 + c_2)(n-1)^{1/4}}{3} \right).$$
We use a union bound over $a \in [n]$ to get the final expression.
The estimate F̂ (u, v) is computed by averaging over nearby points defined by the distance estimates dist1 and dist2 . Let Euv1 denote the set
of undirected edges (a, b) such that (a, b) ∈ E3 and both dist1 (u, a) and
dist1 (v, b) are less than ξ1 (n). We remove duplicate entries, i.e. for some
90
BORGS-CHAYES-LEE-SHAH
a, b ∈ Vu1 ∩ Vv1 , we only count the edge (a, b) or (b, a) once. The final estimate F̂ (u, v) produced by using dist1 is computed by averaging over the
undirected edge set Euv1 ,
X
1
(A.57)
M3 (a, b).
F̂ (u, v) =
|Euv1 |
(a,b)∈Euv1
Let Euv2 denote the set of undirected edges (a, b) such that (a, b) ∈ E3 ,
and both dist2 (u, a) and dist2 (v, b) are less than ξ2 (n). The final estimate
F̂ (u, v) produced by using dist2 is computed by averaging over the undirected edge set Euv2 ,
X
1
F̂ (u, v) =
(A.58)
M3 (a, b).
|Euv2 |
(a,b)∈Euv2
We would like to show that amongst the entries which are eligible to
be in E3 , i.e. entries which are not in E1 or in E2 , the ratio of Wu1 × Wv1
∗ , n∗ , and w ∗
and Vu1 × Vv1 is lower bounded. Let us define n∗uv1 , wuv1
uv2
uv2
according to
(A.59)
1
n∗uv1 = |(Vu1 × Vv1 ) \ E1 , E2 | − |((Vu1 ∩ Vv1 ) × (Vu1 ∩ Vv1 )) \ E1 , E2 |
2
(A.60)
1
∗
wuv1
= |(Wu1 × Wv1 ) \ E1 , E2 | − |((Wu1 ∩ Wv1 ) × (Wu1 ∩ Wv1 )) \ E1 , E2 |
2
(A.61)
1
n∗uv2 = |(Vu2 × Vv2 ) \ E1 , E2 | − |((Vu2 ∩ Vv2 ) × (Vu2 ∩ Vv2 )) \ E1 , E2 |
2
(A.62)
1
∗
wuv2
= |(Wu2 × Wv2 ) \ E1 , E2 | − |((Wu2 ∩ Wv2 ) × (Wu2 ∩ Wv2 )) \ E1 , E2 |.
2
We will use Lemma A.23, to argue that A11 holds with high probability,
such that there are not “too many” entries which are already part of edge
sets E1 , E2 , leaving most entries still eligible to be sampled in E3 and used to
compute the final estimate.
We will additionally assume that
(c1 pn)2θ
−3/4
.
φ(ξ1LB ) = ω max p, n
, exp −
8Bq2 d
One example that would satisfy this condition would be if the latent variables
are sampled from the uniform distribution on [0, 1] and the function f is
91
ITERATIVE COLLABORATIVE FILTERING
piecewise L-Lipschitz with minimum piece size of ` ≤ 1, such that φ(ξ) ≥
1
2
min(`, ξ/2L). By design, ξ1LB
= Bq d|λ1 |(c1 pn)− 2 +θ . Using the assumption
that p = o(n−1+1/(5+8θ) ) for θ < 41 , and r = Θ(1), it follows that φ(ξ1LB )
dominates p and n−3/4 .
2θ
1 pn)
Lemma A.24. Assume that φ(ξ1LB ) = ω max p, n−3/4 , exp − (c8B
2d
q
1
and |λd | = ω((c1 pn)− 2 +θ ). For p = o(n−1+1/(5+8θ) ) for some θ <
ω(n−1+ ) for some > 0, and r = Θ(1) which satisfies
9c1 pn r+1
≤ (c1 p)−7/8 ,
8
and
7λ2d c1 pn
|λ1 |8
r+ 12
1
4,
p =
≥ p−6/8 ,
9
9
6
5
10
conditioned on A11 ∩ A10
u,ξ1LB ∩ Av,ξ1LB ∩ Au,r ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩
A5v,r+1 ∩ A4 , it follows that
n2 φ(ξ1LB )2
(1 − o(1))
8
n2 φ(ξ1LB )2
≥
(1 − o(1))
8
4
(c1 pn)2θ
≥1−
exp −
(1 + o(1)).
φ(ξ1LB )
8Bq2 d
n∗uv1 ≥
∗
wuv1
∗
wuv1
n∗uv1
Proof. Conditioned on A11 , by definition,
1
n∗uv1 ≥ |Vu1 | |Vv1 | − (c1 + c2 )(p + (n − 1)−3/4 )(n − 1) − |Vu1 ||Vv1 |
2
1
≥ |Vu1 |
|Vv1 | − (c1 + c2 )(pn + n1/4 ) ,
2
and
1
∗
wuv1
≥ |Wu1 | |Wv1 | − (c1 + c2 )(p + (n − 1)−3/4 )(n − 1) − |Wu1 ||Wv1 |
2
1
≥ |Wu1 |
|Wv1 | − (c1 + c2 )(pn + n1/4 ) .
2
By substituting bounds from Lemma A.22, it follows that
nφ(ξ1LB )
nφ(ξ1LB )
(1 − o(1))
(1 − o(1)) − (c1 + c2 )(pn + n1/4 ) .
n∗uv1 ≥
2
4
92
BORGS-CHAYES-LEE-SHAH
By assumption, φ(ξ1LB ) dominates p and n−3/4 , such that
n∗uv1 ≥
n2 φ(ξ1LB )2
(1 − o(1)),
8
and
∗
wuv1
φ(ξ1LB )
nφ(ξ1LB )
1/4
(1 − o(1))
(1 − o(1)) − (c1 + c2 )(pn + n )
≥
2
4
n2 φ(ξ1LB )2
=
(1 − o(1)).
8
Similarly, we can bound the ratio by
∗
wuv1
n∗uv1
|Wu1 | 12 |Wv1 | − (c1 + c2 )(pn + n1/4 )
≥
|Wu1 | 12 |Wv1 | − (c1 + c2 )(pn + n1/4 ) + |Wu1 |(|Vv1 | − |Wv1 |) + (|Vu1 | − |Wu1 |)|Vv1 |
|Wu1 |(|Vv1 | − |Wv1 |) + (|Vu1 | − |Wu1 |)|Vv1 |
=1−
1
|Wu1 | 2 |Wv1 | − (c1 + c2 )(pn + n1/4 ) + |Wu1 |(|Vv1 | − |Wv1 |) + (|Vu1 | − |Wu1 |)|Vv1 |
|Vu1 |(|Vv1 | − |Wv1 |) + (|Vu1 | − |Wu1 |)|Vv1 |
=1−
|Wu1 | 21 |Wv1 | − (c1 + c2 )(pn + n1/4 )
!
|Vu1 ||Vv1 |
1 − |Wv1 |/|Vv1 |
=1−
.
|Wu1 ||Wv1 | 12 − (c1 + c2 )(pn + n1/4 )|Wv1 |−1
By substitutingbounds from
Lemma A.22, and by the assumption that φ(ξ)
(c1 pn)2θ
dominates exp − 8B 2 d , it follows that
q
|Wu1 |
2
(c1 pn)2θ
=1−
exp −
(1 + o(1)) = 1 − o(1),
|Vu1 |
φ(ξ1LB )
8Bq2 d
which then implies
|Vu1 ||Vv1 |
= (1 + o(1))2 .
|Wu1 ||Wv1 |
Again substituting bounds from Lemma A.22, and by the assumption
that φ(ξ) dominates p and n−3/4 , it follows that
|Wv1 | ≥
nφ(ξ1LB )
(1 − o(1)) = ω(pn + n1/4 ),
2
93
ITERATIVE COLLABORATIVE FILTERING
such that
1 − |Wv1 |/|Vv1 |
1
1/4 )|W |−1
v1
2 − (c1 + c2 )(pn + n
2θ
(c
pn)
2
1
(1 + o(1))
exp
−
2
φ(ξ1LB )
8Bq d
=
1
2
1/4 )
−
(c
+
c
)(pn
+
n
(1
+
o(1))
1
2
2
nφ(ξ1LB )
−1
1
(c1 pn)2θ
2
(1 + o(1))
exp −
− o(1)(1 + o(1))
=
φ(ξ1LB )
8Bq2 d
2
4
(c1 pn)2θ
=
(1 + o(1)).
exp −
φ(ξ1LB )
8Bq2 d
Therefore we put it all together to show that
∗
wuv1
4
(c1 pn)2θ
2
≥ 1 − (1 + o(1))
exp −
(1 + o(1))
n∗uv1
φ(ξ1LB )
8Bq2 d
(c1 pn)2θ
4
exp −
(1 + o(1)).
=1−
φ(ξ1LB )
8Bq2 d
Lemma A.25.
2θ
1 pn)
Assume that φ(ξ2LB ) = ω max p, n−3/4 , exp − (c8B
2d
q
− 12 +θ
and |λd | = ω((c1 pn)
which satisfies
(A.63)
(A.64)
(A.65)
(A.66)
). For p =
o(n−1+1/(5+8θ) )
for θ ∈ (1,
1
4 ),
and r, d0
(d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 )
r≥
,
ln(2)
1
θ
ln
≥ (d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 ),
2(1 + 2θ)
p
2
r+ 12
7λd c1 pn
≥ p−6/8
|λ1 |8
0
9c1 pn r+d
≤ (c1 p)−7/8 ,
8
10
9
9
for any u, v ∈ [n], conditioned on A11 ∩ A10
u,ξ2LB ∩ Av,ξ2LB ∩ Au,r,d0 ∩ Av,r ∩
94
BORGS-CHAYES-LEE-SHAH
A6u,v,r+d0 ∩ A5u,r+d0 ∩ A5v,r+d0 ∩ A4 , it follows that
n2 φ(ξ2LB )2
(1 − o(1))
8
n2 φ(ξ2LB )2
≥
(1 − o(1))
8
4
(c1 pn)2θ
(1 + o(1)).
≥1−
exp −
φ(ξ2LB )
8Bq2 d
n∗uv2 ≥
∗
wuv2
∗
wuv2
n∗uv2
Proof. This proof follows the same argument as the proof of Lemma
A.24. Conditioned on A11 we use the definition of A11 along with substituting bounds from Lemma A.22 and using the assumption that p =
o(n−1+1/(5+8θ) ) in order to complete the proof.
A.6. Error Bounds of Final Estimator. Finally, conditioned on
{αu }u∈[n] , E1 , E2 , we want to show concentration properties of E3 , specifically to prove bounds on the final estimate F̂ (u, v). The estimate F̂ (u, v)
is computed by averaging over nearby points defined by the distance estimates dist1 and dist2 . Let Euv1 denote the set of undirected edges (a, b)
such that (a, b) ∈ E3 and both dist1 (u, a) and dist1 (v, b) are less than
1
ξ1 (n) = 33Bq d|λ1 |2r+1 (c1 pn)− 2 +θ . We remove duplicate entries, i.e. for some
a, b ∈ Vu1 ∩ Vv1 , we only count the edge (a, b) or (b, a) once. The final estimate F̂ (u, v) produced by using dist1 is computed by averaging over the
undirected edge set Euv1 ,
(A.67)
F̂ (u, v) =
1
|Euv1 |
X
M3 (a, b).
(a,b)∈Euv1
Let Euv2 denote the set of undirected edges (a, b) such that (a, b) ∈ E3 , and
1
both dist2 (u, a) and dist2 (v, b) are less than ξ2 (n) = 33Bq d|λ1 |(c1 pn)− 2 +θ .
The final estimate F̂ (u, v) produced by using dist2 is computed by averaging
over the undirected edge set Euv2 ,
(A.68)
F̂ (u, v) =
1
|Euv2 |
X
M3 (a, b).
(a,b)∈Euv2
∗ , n∗ , and w ∗ . We
In Lemmas A.24 and A.25, we bounded n∗uv1 , wuv1
uv2
uv2
will use those results to bound |Euv1 |, |Euv2 |, |Euv1 ∩ (Wu1 × Wv1 )|, and
∗ , n∗ , and w ∗
|Euv2 ∩ (Wu2 × Wv2 )|. Recall the definitions of n∗uv1 , wuv1
uv2
uv2
from (A.59), (A.60), (A.61), and (A.62).
95
ITERATIVE COLLABORATIVE FILTERING
Let us define the following events
n∗uv1 c3 p
12
(A.69) Au,v,1 := |Euv1 | ∈ (1 ± ξ1U B )
,
1 − c1 p − c2 p
n∗uv2 c3 p
12
(A.70) Au,v,2 := |Euv1 | ∈ (1 ± ξ2U B )
,
1 − c1 p − c2 p
∗ c p
wuv1
3
13
(A.71) Au,v,1 := |(Wu1 × Wv1 ) ∩ E3 | ≥ (1 − ξ1U B )
,
1 − c1 p − c2 p
∗ c p
wuv2
3
13
(A.72) Au,v,2 := |(Wu2 × Wv2 ) ∩ E3 | ≥ (1 − ξ2U B )
.
1 − c1 p − c2 p
Observe that by definition, Vu1 , Vv1 , Wu1 , Wv1 , Vu2 , Vv2 , Wu2 , and Wv2 are
measurable with respect to FE1 ,E2 .
Lemma A.26.
2θ
1 pn)
Assume that φ(ξ1LB ) = ω max p, n−3/4 , exp − (c8B
2d
q
− 12 +θ
o(n−1+1/(5+8θ) )
and |λd | = ω((c1 pn)
). For p =
for some θ <
ω(n−1+ ) for some > 0 and r = Θ(1) which satisfies
9c1 pn
8
r+1
and
7λ2d c1 pn
|λ1 |8
1
4,
p =
≤ (c1 p)−7/8 ,
r+ 12
≥ p−6/8 ,
9
9
6
5
10
conditioned on A11 ∩ A10
u,ξ1LB ∩ Av,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩
A5v,r+1 ∩ A4 ,
2
2
2
ξ1U
B c3 pn φ(ξ1LB )
(1 − o(1)) ,
P
≤ 2 exp −
24
2
ξ1U B c3 pn2 φ(ξ1LB )2
13
P ¬Au,v,1 |FE1 ,E2 ≤ exp −
(1 − o(1)) .
16
¬A12
u,v,1 |FE1 ,E2
13
Proof. We will first consider A12
u,v,1 , but the proof for Au,v,1 is identical.
∗
Conditioned on FE1 ,E2 , |Euv1 | is distributed as a sum of nuv1 Bernoulli random variables with parameter c3 p/(1−c1 p−c2 p), i.e. the probability that an
edge is in E3 conditioned that it is not in E1 ∪ E2 . Therefore in expectation,
E [|Euv1 | | FE1 ,E2 ] =
c3 pn∗uv1
.
1 − c1 p − c2 p
96
BORGS-CHAYES-LEE-SHAH
By Chernoff’s bound for binomials,
¬A12
u,v,1 |FE1 ,E2
P
2
∗
ξ1U
B nuv1 c3 p
≤ 2 exp −
3(1 − c1 p − c2 p)
We can lower bound n∗uv1 by Lemma A.24, such that
2
ξ1U
n2 φ(ξ1LB )2
12
B c3 p
(1 − o(1))
P ¬Au,v,1 |FE1 ,E2 ≤ 2 exp −
3(1 − c1 p − c2 p)
8
2
ξ1U B c3 pn2 φ(ξ1LB )2
≤ 2 exp −
(1 − o(1)) .
24
∗
We can use equivalent proof steps with the corresponding bounds for wuv1
from Lemma A.24 to show that
2
∗
ξ1U
13
B wuv1 c3 p
P ¬Au,v,1 |FE1 ,E2 ≤ exp −
2(1 − c1 p − c2 p)
2
ξ
c3 pn2 φ(ξ1LB )2
≤ exp − 1U B
(1 − o(1)) .
16
Lemma A.27.
2θ
1 pn)
Assume that φ(ξ2LB ) = ω max p, n−3/4 , exp − (c8B
2d
q
1
and |λd | = ω((c1 pn)− 2 +θ ). For p = o(n−1+1/(5+8θ) ) for some θ <
ω(n−1+ ) for some > 0, and r, d0 which satisfies
(A.73)
(A.74)
(A.75)
(A.76)
1
4,
p =
(d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 )
r≥
,
ln(2)
θ
1
ln
≥ (d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 ),
2(1 + 2θ)
p
r+ 12
2
7λd c1 pn
≥ p−6/8
|λ1 |8
0
9c1 pn r+d
≤ (c1 p)−7/8 ,
8
10
9
9
for any u, v ∈ [n], conditioned on A11 ∩ A10
u,ξ2LB ∩ Av,ξ2LB ∩ Au,r,d0 ∩ Av,r,d0 ∩
A6u,v,r+d0 ∩ A5u,r+d0 ∩ A5v,r+d0 ∩ A4 ,
2
ξ2U B c3 pn2 φ(ξ2LB )2
12
P ¬Au,v,2 |FE1 ,E2 ≤ 2 exp −
(1 − o(1))
24
2
ξ2U B c3 pn2 φ(ξ2LB )2
13
P ¬Au,v,2 |FE1 ,E2 ≤ exp −
(1 − o(1)) .
16
97
ITERATIVE COLLABORATIVE FILTERING
Proof. Using the same proof steps as the proof of Lemma A.26. Observe
that |Euv2 | and |Euv2 ∩ (Wu2 × Wv2 )| conditioned on FE1 ,E2 are Binomial
random variables. Therefore by applying Chernoff’s bound and using results
in Lemma A.25
2
∗
ξ2U
12
B nuv2 c3 p
P ¬Au,v,2 |FE1 ,E2 ≤ 2 exp −
,
3(1 − c1 p − c2 p)
2
ξ
c3 pn2 φ(ξ2LB )2
≤ 2 exp − 2U B
(1 − o(1)) ,
24
and
P
¬A13
u,v,2
2
∗
ξ2U
B wuv2 c3 p
≤ exp −
2(1 − c1 p − c2 p)
2
ξ2U B c3 pn2 φ(ξ2LB )2
≤ exp −
(1 − o(1)) .
16
Next, conditioned on FE1 ,E2 , we want to show concentration of
1
|Euv1 |
and
X
(a,b)∈Eu,v1
1
|Euv2 |
(f (αa , αb ) − M3 (a, b))
X
(f (αa , αb ) − M3 (a, b))
(a,b)∈Eu,v2
Let’s define the following events
1
X
(f
(α
,
α
)
−
M
(a,
b))
(A.77)
A14
:=
<
ξ
,
a
3
1U B
b
u,v,1
|Euv1 |
(a,b)∈Euv1
1
X
(A.78)
<
ξ
.
(f
(α
,
α
)
−
M
(a,
b))
A14
:=
a
3
2U B
b
u,v,2
|Euv2 |
(a,b)∈Euv2
Lemma A.28.
2θ
1 pn)
Assume that φ(ξ1LB ) = ω max p, n−3/4 , exp − (c8B
2d
q
− 12 +θ
o(n−1+1/(5+8θ) )
and |λd | = ω((c1 pn)
). For p =
for some θ <
ω(n−1+ ) for some > 0, and r = Θ(1) which satisfies
9c1 pn r+1
≤ (c1 p)−7/8 ,
8
1
4,
p =
98
BORGS-CHAYES-LEE-SHAH
and
7λ2d c1 pn
|λ1 |8
r+ 12
≥ p−6/8 ,
11
10
10
9
9
6
conditioned on A12
u,v,1 ∩ A ∩ Au,ξ1LB ∩ Av,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩
A5u,r+1 ∩ A5v,r+1 ∩ A4 , it holds that
P
¬A14
u,v,1
FE2 ,E1 , Euv1
2
2
c3 pn2 ξ1U
B φ(ξ1LB )
(1 − o(1)) .
≤ 2 exp −
16
Proof. First we show that the expression has zero mean.
X
E
(f (αa , αb ) − M3 (a, b)) FE1 ,E2 , Euv1
(a,b)∈Euv1
X
=
(f (αa , αb ) − E [Z(a, b) | FE1 ,E2 ]) = 0
(a,b)∈Euv1
We can also compute the variance. Each undirected edge is distinct, thus
conditioned on FE1 ,E2 and Euv1 , the individual terms of the summation are
independent, and Z(a, b) ∈ [0, 1], therefore
X
Var
(f (αa , αb ) − M3 (a, b)) FE1 ,E2 , Euv1
(a,b)∈Euv1
=
X
Var [Z(a, b) | FE1 ,E2 ]
(a,b)∈Euv1
≤
X
1
(a,b)∈Euv1
≤ |Euv1 |.
By Bernstein’s inequality,
P ¬A14
u,v1
2
3|Euv1 |ξ1U
B
.
FE2 ,E1 ,Θ , Euv ≤ 2 exp −
6 + 2ξ1U B
11
10
10
9
By Lemma A.24, conditioned on A12
u,v,1 ∩ A ∩ Au,ξ1LB ∩ Av,ξ1LB ∩ Au,r,1 ∩
99
ITERATIVE COLLABORATIVE FILTERING
A9v,r,1 ∩ A6u,v,r+1 ∩ A5u,r+1 ∩ A5v,r+1 ∩ A4 ,
c3 pn∗u,v,1
|Euv1 | ≥ (1 − ξ1U B )
1 − c1 p − c2 p
n2 φ(ξ1LB )2
≥ (1 − ξ1U B )c3 p
(1 − o(1))
8
c3 pn2 φ(ξ1LB )2
(1 − o(1)).
=
8
Therefore,
P
¬A14
u,v,1
FE1 ,E2 , Euv1
Lemma A.29.
2
3ξ1U
c3 pn2 φ(ξ1LB )2
B
≤ 2 exp −
(1 − o(1))
6 + 2ξ1U B
8
2
2
c3 pn2 ξ1U
B φ(ξ1LB )
(1 − o(1)) .
≤ 2 exp −
16
2θ
1 pn)
Assume that φ(ξ2LB ) = ω max p, n−3/4 , exp − (c8B
2d
q
1
and |λd | = ω((c1 pn)− 2 +θ ). For p = o(n−1+1/(5+8θ) ) for some θ < 14 , and r, d0
which satisfies
(A.79)
(A.80)
(A.81)
(A.82)
(d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 )
r≥
,
ln(2)
θ
1
ln
≥ (d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 ),
2(1 + 2θ)
p
2
r+ 12
7λd c1 pn
≥ p−6/8
|λ1 |8
0
9c1 pn r+d
≤ (c1 p)−7/8 ,
8
11
10
10
9
for any u, v ∈ [n], conditioned on A12
u,v,2 ∩ A ∩ Au,ξ2LB ∩ Av,ξ2LB ∩ Au,r,d0 ∩
9
6
5
5
4
Av,r,d0 ∩ Au,v,r+d0 ∩ Au,r+d0 ∩ Av,r+d0 ∩ A , it holds that
P
¬A14
u,v,2
FE1 ,E2 , Euv2
2
2
c3 pn2 ξ2U
B φ(ξ2LB )
≤ 2 exp −
(1 − o(1)) .
16
Proof. We use the same arguments from the proof of Lemma A.28 to
prove the bound on A14
u,v,2 . We apply Bernstein’s inequality for sums of independent bounded random variables, and we use Lemma A.25, conditioned
100
BORGS-CHAYES-LEE-SHAH
11
10
9
6
5
5
4
on A12
u,v,2 , A , Au,ξ2LB ∩ Au,r,d0 ∩ Au,v,r+d0 ∩ Au,r+d0 ∩ Av,r+d0 ∩ A , to show
that
2
2
c3 pn2 ξ2U
14
B φ(ξ2LB )
(1 − o(1)) .
P ¬Au,v,2 FE1 ,E2 , Euv2 ≤ 2 exp −
16
2θ
−1
1 pn)
In the next Lemma, we assume that φ(ξ1LB ) = ω ξ1LB
.
exp − (c8B
2d
q
We again verify that this is satisfied for a piecewise Lipschitz latent function,
as both φ(ξ1LB ) and ξ1LB decay polynomially in c1 pn.
2θ
−1
1 pn)
Lemma A.30. Assume that φ(ξ1LB ) = ω max p, n−3/4 , ξ1LB
exp − (c8B
2d
q
− 12 +θ
o(n−1+1/(5+8θ) )
and |λd | = ω((c1 pn)
). For p =
for some θ <
ω(n−1+ ) for some > 0, and r = Θ(1) which satisfies
9c1 pn r+1
≤ (c1 p)−7/8 ,
8
and
7λ2d c1 pn
|λ1 |8
(2r+1)/2
1
4,
p =
≥ p−6/8 ,
6
5
9
12
11
10
conditioned on A13
u,v,1 ∩ Au,v,1 ∩ A ∩ Au,ξ1LB ∩ Au,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩
A5v,r+1 A4 ,
1
X
|Euv1 |
(f (αa , αb ) − f (αu , αv )) = O
(a,b)∈Euv1
|λ1 |
|λd |
r
Bq3 d2 |λ1 |
1/2 !
1
(c1 pn) 2 −θ
.
Proof. We first decompose the sum into indices (a, b) ∈ (Wu1 × Wv1 )
and those indices in the complement.
1
X
|Euv1 |
=
(a,b)∈Euv1
1
X
|Euv1 |
+
(f (αa , αb ) − f (αu , αv ))
(f (αa , αb ) − f (αu , αv ))I((a, b) ∈ Wu1 × Wv1 )
(a,b)∈Euv1
1
|Euv1 |
X
(a,b)∈Euv1
(f (αa , αb ) − f (αu , αv ))I((a, b) ∈ (Wu1 × Wv1 )c ) .
ITERATIVE COLLABORATIVE FILTERING
101
Recall that Wu1 indicates vertices a such that kΛQ(ea − eu )k2 ≤ ξ1U B .
Note that
|(f (αa , αb ) − f (αu , αv ))| = |eTa QT ΛQeb − eTu QT ΛQev |
√
≤ Bq d (kΛQ(ea − eu )k2 + kΛQ(eb − ev )k2 ) ,
such that (a, b) ∈ Wu1 × Wv1 also implies that
√
(f (αa , αb ) − f (αu , αv )) ≤ 2Bq dξ1U B .
Therefore, the first term is bounded by
1
|Euv1 |
X
(a,b)∈Euv1
√
√
2Bq dξ1U B |Euv1 ∩ (Wu1 × Wv1 )|
2Bq dξ1U B I((a, b) ∈ Wu1 × Wv1 ) =
.
|Euv1 |
Because the function f takes value in [0, 1], the second term is trivially
bounded by
X
1
|Euv1 ∩ (Wu1 × Wv1 )|
.
I((a, b) ∈ (Wu1 × Wv1 )c ) = 1 −
|Euv1 |
|Euv1 |
c
(a,b)∈Euv1 ∩(Wu1 ×Wv1 )
12
Conditioned on A13
u,v,1 ∩ Au,v,1 ,
1
|Euv1 |
X
(f (αa , αb ) − f (αu , αv ))
(a,b)∈Euv1
√
|Euv1 ∩ (Wu1 × Wv1 )|
≤ 1 − (1 − 2Bq dξ1U B )
|Euv1 |
∗
wuv1 c3 p
(1
−
ξ
)
√
1U B
1−c1 p−c2 p
∗
≤ 1 − (1 − 2Bq dξ1U B )
n
c3 p
(1 + ξ1U B ) 1−cuv1
1 p−c2 p
∗
√
2ξ1U B
wuv1
.
≤ 1 − (1 − 2Bq dξ1U B ) 1 −
1 + ξ1U B n∗uv1
Therefore by Lemma A.24,
1
|Euv1 |
X
(f (αa , αb ) − f (αu , αv ))
(a,b)∈Euv1
≤ 1 − (1 − 2Bq
√
(c1 pn)2θ
exp −
(1 + o(1))
φ(ξ1LB )
8Bq2 d
4
(c1 pn)2θ
+
exp −
(1 + o(1)).
φ(ξ1LB )
8Bq2 d
2ξ1U B
dξ1U B ) 1 −
1 + ξ1U B
√
2ξ1U B
≤ 2Bq dξ1U B +
1 + ξ1U B
1−
4
102
BORGS-CHAYES-LEE-SHAH
2θ
−1
1 pn)
, such that the first and
By assumption, φ(ξ1LB ) = ω ξ1LB
exp − (c8B
2
qd
second term dominate and by the definition of ξ1U B ,
1
√
(f (αa , αb ) − f (αu , αv )) ≤ O(Bq dξ1U B )
X
|Euv1 |
(a,b)∈Euv1
=O
Lemma A.31.
|λ1 |
|λd |
r
Bq3 d2 |λ1 |
1
(c1 pn) 2 −θ
1/2 !
.
2θ
−1
1 pn)
Assume that φ(ξ2LB ) = ω max p, n−3/4 , ξ2LB
exp − (c8B
2d
q
− 12 +θ
and |λd | = ω((c1 pn)
which satisfies
). For p =
o(n−1+1/(5+8θ) )
for some θ <
1
4,
and
r, d0
(d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 )
r≥
,
ln(2)
θ
1
ln
≥ (d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 ),
2(1 + 2θ)
p
2
r+ 12
7λd c1 pn
≥ p−6/8
|λ1 |8
0
9c1 pn r+d
≤ (c1 p)−7/8 ,
8
(A.83)
(A.84)
(A.85)
(A.86)
10
12
11
10
for any u, v ∈ [n], conditioned on A13
u,v,2 ∩ Au,v,2 ∩ A ∩ Au,ξ2LB ∩ Av,ξ2LB ∩
A9u,r,d0 ∩ A9v,r,d0 ∩ A6u,v,r+d0 ∩ A5u,r+d0 ∩ A5v,r+d0 A4 ,
1
|Euv2 |
X
(a,b)∈Euv2
(f (αa , αb ) − f (αu , αv )) = O
Bq3 d2 |λ1 |
1
(c1 pn) 2 −θ
1/2 !
.
Proof. We prove this using a similar argument as the proof of Lemma
103
ITERATIVE COLLABORATIVE FILTERING
A.30. We first decompose the sum
1
X
|Euv2 |
=
1
X
|Euv2 |
+
(f (αa , αb ) − f (αu , αv ))
(a,b)∈Euv2
(f (αa , αb ) − f (αu , αv ))I((a, b) ∈ Wu2 × Wv2 )
(a,b)∈Euv2
1
X
|Euv2 |
(f (αa , αb ) − f (αu , αv ))I((a, b) ∈ (Wu2 × Wv2 )c ) .
(a,b)∈Euv2
For (a, b) ∈ Wu2 × Wv2 ,
√
|(f (αa , αb ) − f (αu , αv ))| ≤ 2Bq dξ2U B ,
such that the first term is bounded by
1
|Euv2 |
X
(a,b)∈Euv2
√
√
2Bq dξ2U B |Euv2 ∩ (Wu2 × Wv2 )|
2Bq dξ2U B I((a, b) ∈ Wu2 × Wv2 ) =
.
|Euv2 |
Because the function f takes value in [0, 1], the second term is trivially
bounded by
1
|Euv2 |
X
I((a, b) ∈ (Wu2 × Wv2 )c ) = 1 −
(a,b)∈Euv2 ∩(Wu2 ×Wv2 )c
|Euv2 ∩ (Wu2 × Wv2 )|
.
|Euv2 |
12
Conditioned on A13
u,v,2 ∩ Au,v,2 ,
1
|Euv2 |
X
(f (αa , αb ) − f (αu , αv ))
(a,b)∈Euv1
√
≤ 1 − (1 − 2Bq dξ2U B ) 1 −
1
2(c1 pn)− 4 +θ
1
1 + (c1 pn)− 4 +θ
Therefore by Lemma A.25, using the assumption that
(c1 pn)2θ
−1
φ(ξ1LB ) = ω ξ2LB exp −
,
8Bq2 d
!
∗
wuv2
.
n∗uv2
104
BORGS-CHAYES-LEE-SHAH
and the definition of ξ2U B ,
1
X
|Euv2 |
(f (αa , αb ) − f (αu , αv ))
(a,b)∈Euv2
√
= O(Bq dξ2U B ) = O
Bq3 d2 |λ1 |
1
(c1 pn) 2 −θ
1/2 !
.
Let us define the following events
14
13
12
11
10
10
9
9
A15
u,v,1 := Au,v,1 ∩ Au,v,1 ∩ Au,v,1 ∩ A ∩ Au,ξ1LB ∩ Av,ξ1LB ∩ Au,r,1 ∩ Av,r,1
(A.87)
∩ A6u,v,r+1 ∩ A5u,r+1 ∩ A5v,r+1 ∩ A4 ,
9
9
10
14
13
12
11
10
A15
u,v,2 := Au,v,2 ∩ Au,v,2 ∩ Au,v,2 ∩ A ∩ Au,ξ2LB ∩ Av,ξ2LB ∩ Au,r,d0 ∩ Av,r,d0
(A.88)
∩ A6u,v,r+d0 ∩ A5u,r+d0 ∩ A5v,r+d0 ∩ A4 .
Lemma A.32.
Let
(c1 pn)2θ
−3/4 −1
φ(ξ1LB ) = ω max p, n
, ξ1LB exp −
,
8Bq2 d
1
and |λd | = ω((c1 pn)− 2 +θ ). For p = o(n−1+1/(5+8θ) ) for some θ <
ω(n−1+ ) for some > 0, and r = Θ(1) which satisfies
9c1 pn
8
r+1
and
7λ2d c1 pn
|λ1 |8
1
4,
p =
≤ (c1 p)−7/8 ,
r+ 12
≥ p−6/8 ,
for any (u, v) ∈ [n] × [n], conditioned on A15
u,v,1 , the error of the estimate
output when using dist1 is bounded by
1/2 !
r 3 2
√
Bq d |λ1 |
|λ1 |
|F̂ (u, v) − f (αu , αv )| = O(Bq dξ1U B ) = O
.
1 −θ
|λd |
(c1 pn) 2
105
ITERATIVE COLLABORATIVE FILTERING
Proof. To obtain a final bound on the estimate, we separate out the
noise terms with the bias terms.
|F̂ (u, v) − f (αu , αv )| =
X
1
M3 (a, b) − f (αu , αv )
|Euv1 |
(a,b)∈Euv1
=
X
1
|Euv1 |
(M3 (a, b) − f (αa , αb ))
(a,b)∈Euv1
+
1
|Euv1 |
X
(f (αa , αb ) − f (αu , αv )) .
(a,b)∈Euv1
Conditioned on A14
u,v,1 , the first term is bounded by ξ1U B . By Lemma A.30,
the second term is bounded by O(Bq dξ1U B ). Combining these two bounds
leads to the desired result.
2θ
−1
1 pn)
exp − (c8B
Lemma A.33. Assume that φ(ξ2LB ) = ω max p, n−3/4 , ξ2LB
2d
q
− 12 +θ
and |λd | = ω((c1 pn)
which satisfies
(A.89)
(A.90)
(A.91)
(A.92)
). For p =
o(n−1+1/(5+8θ) )
for some θ <
1
4,
and
r, d0
(d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 )
r≥
,
ln(2)
θ
1
ln
≥ (d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 ),
2(1 + 2θ)
p
2
r+ 12
7λd c1 pn
≥ p−6/8
|λ1 |8
0
9c1 pn r+d
≤ (c1 p)−7/8 ,
8
for any (u, v) ∈ [n] × [n], conditioned on A15
u,v,2 , the error of the estimate
output when using dist2 is bounded by
1/2 !
√
Bq3 d2 |λ1 |
|F̂ (u, v) − f (αu , αv )| = O(Bq dξ2U B ) = O
.
1 −θ
(c1 pn) 2
Proof. The proof is equivalent to the proof of Lemma A.32, and follows
from combining the definition of A14
u,v,1 along with Lemma A.31.
106
BORGS-CHAYES-LEE-SHAH
Lemma A.34.
2θ
−1
1 pn)
Assume that φ(ξ1LB ) = ω max p, n−3/4 , ξ1LB
exp − (c8B
2d
q
1 1
and |λd | = ω((c1 pn)− min( 4 , 2 −θ) ). For p = o(n−1+1/(5+8θ) ) for some θ < 14 ,
c1 pn = ω(1), p = ω(n−1+ ) for some > 0, and r = Θ(1) which satisfies
9c1 pn
8
r+1
and
7λ2d c1 pn
|λ1 |8
≤ (c1 p)−7/8 ,
r+ 12
≥ p−6/8 ,
2
2
c pn2 ξ1U
(n−1)φ(ξ1LB )
(c1 pn)2θ
B φ(ξ1LB )
+ exp − 3
+
exp
−
.
P(¬A15
u,v,1 ) ≤ O d exp − 8B 2 d
24
8
q
If φ(ξ) ≥ min(`, ξ/2L), then
(c1 pn)2θ
P ¬A15
=
O
exp
−
.
u,v,1
8Bq2 d
Proof. For events Z1 , Z2 , Z3 , we can use the inequality which results
from
P(¬Z1 ∪ ¬Z2 ∪ ¬Z3 ) = P(¬Z1 |Z2 ∩ Z3 )P(Z2 ∩ Z3 ) + P(¬Z2 ∪ ¬Z3 )
≤ P(¬Z1 |Z2 ∩ Z3 ) + P(¬Z2 ∪ ¬Z3 ).
Applying the above inequality multiple times and applying the union bound
allows us to decompose the desired probability computation into each event
ITERATIVE COLLABORATIVE FILTERING
107
for which we have previously defined bounds.
P ¬A15
u,v,1
12
11
10
10
9
9
6
5
5
4
≤ P ¬A14
u,v,1 ∪ ¬Au,v,1 | A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
11
10
10
9
9
6
5
5
4
+ P ¬A13
u,v,1 | A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
10
9
9
6
5
5
4
+ P ¬A11 ∪ ¬A10
u,ξ1LB ∪ ¬Au,ξ1LB ∪ ¬Au,r,1 ∪ ¬Av,r,1 ∪ ¬Au,v,r+1 ∪ ¬Au,r+1 ∪ ¬Av,r+1 ∪ ¬A
12
11
10
10
9
9
6
5
5
4
≤ P ¬A14
u,v,1 | Au,v,1 ∩ A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
11
10
10
9
9
6
5
5
4
+ P ¬A12
u,v,1 | A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
11
10
10
9
9
6
5
5
4
+ P ¬A13
u,v,1 | A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
10
+ P ¬A11 + P(¬A10
u,ξ1LB ) + P(¬Au,ξ1LB )
+ P(¬A9u,r,1 ∪ ¬A5u,r+1 ) + P(¬A9v,r,1 ∪ ¬A5v,r+1 ) + P(¬A6u,v,r+1 ) + P ¬A4
9
9
6
5
5
4
10
12
11
10
≤ P ¬A14
u,v,1 | Au,v,1 ∩ A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
9
9
6
5
5
4
10
11
10
+ P ¬A12
u,v,1 | A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
11
10
10
9
9
6
5
5
4
+ P ¬A13
u,v,1 | A ∩ Au,ξ1LB ∩ Au,ξ1LB ∩ Au,r,1 ∩ Av,r,1 ∩ Au,v,r+1 ∩ Au,r+1 ∩ Av,r+1 ∩ A
9
5
10
+ P ¬A11 + P(¬A10
u,ξ1LB ) + P(¬Au,ξ1LB ) + P(¬Au,r,1 |Au,r+1 )
+ P(¬A5u,r+1 ) + P(¬A9v,r,1 |A5v,r+1 ) + P(¬A5v,r+1 ) + P(¬A6u,v,r+1 ) + P ¬A4 .
We then bound this expression by using Lemmas A.9, A.12, A.13, A.20,
A.21, A.23, A.26, and A.28 to show that
P ¬A15
u,v,1
2
2
2
c3 pn2 ξ1U
ξ1U B c3 pn2 φ(ξ1LB )2
B φ(ξ1LB )
≤ 2 exp −
(1 − o(1)) + exp −
(1 − o(1))
16
16
!
2
ξ1U B c3 pn2 φ(ξ1LB )2
(c1 + c2 )(n − 1)1/2
+ 2 exp −
(1 − o(1)) + n exp −
24
3
!
(1 − c1 p)p−1/4
(n − 1)φ(ξ1LB )
+ 16 exp −
+ 2 exp −
8
10|λ1 |Bq2 d
!
(c1 pn)2θ
(c1 pn)2θ
c1 (n − 1)1/4
+ 4(d + 2) exp −
+ 8(d + 2) exp −
+ n exp −
.
8Bq2 d
4Bq2 d
3
Using the assumption that p = o(n−1+1/(5+8θ) ) for some θ < 14 implies that
n1/4 − ln(n) = ω((c1 pn)2θ ) and p−1/4 = ω((c1 pn)2θ ), such that
2
2
c3 pn2 ξ1U
(c1 pn)2θ
(n−1)φ(ξ1LB )
B φ(ξ1LB )
P ¬A15
=
O
d
exp
−
+
exp
−
+
exp
−
.
u,v,1
24
8
8B 2 d
q
108
BORGS-CHAYES-LEE-SHAH
Furthermore, φ(ξ) ≥ min(`, ξ/2L) and r = Θ(1) also implies that (n −
2
2
2θ
1)φ(ξ1LB ) = ω((c1 pn)2θ ) and c3 pn2 ξ1U
B φ(ξ1LB ) = ω((c1 pn) ), such that
P
¬A15
u,v,1
(c1 pn)2θ
.
= O d exp −
8Bq2 d
2θ
−1
1 pn)
Assume that φ(ξ2LB ) = ω max p, n−3/4 , ξ2LB
exp − (c8B
2d
Lemma A.35.
q
1 1
and |λd | = ω((c1 pn)− min( 4 , 2 −θ) ). For p = o(n−1+1/(5+8θ) ) for some θ < 14 ,
c1 pn = ω(1), and some r, d0 which satisfies
(A.93)
(A.94)
(A.95)
(A.96)
(d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 )
,
r≥
ln(2)
θ
1
ln
≥ (d0 − 1) ln(2|λ1 |/λgap ) + ln(d0 ),
2(1 + 2θ)
p
2
r+ 12
7λd c1 pn
≥ p−6/8
|λ1 |8
0
9c1 pn r+d
≤ (c1 p)−7/8 ,
8
for any u, v ∈ [n],
P(¬A15
u,v,2 )
2
2θ
ξ
c pn2 φ(ξ
)2
1 pn)
2LB )
= O d exp − (c8B
+ exp − 2U B 3 24 2LB (1 − o(1)) + exp − (n−1)φ(ξ
.
2d
8
q
If φ(ξ) ≥ min(`, ξ/2L), then
(c1 pn)2θ
P ¬A15
=
O
d
exp
−
.
u,v,2
8Bq2 d
Proof. We use the same proof steps as the proof for Lemma A.34, and
plug in Lemmas A.9, A.12, A.13, A.20, A.21, A.23, A.27, and A.29 to show
ITERATIVE COLLABORATIVE FILTERING
109
that
P ¬A15
u,v,2
2
2
2
c3 pn2 ξ2U
ξ2U B c3 pn2 φ(ξ2LB )2
B φ(ξ2LB )
≤ 2 exp −
(1 − o(1)) + exp −
(1 − o(1))
16
16
!
2
ξ2U B c3 pn2 φ(ξ2LB )2
(c1 + c2 )(n − 1)1/2
+ 2 exp −
(1 − o(1)) + n exp −
24
3
!
(1 − c1 p)p−1/4
(n − 1)φ(ξ2LB )
+ 16d0 exp −
+ 2 exp −
8
10|λ1 |Bq2 d
!
(c1 pn)2θ
(c1 pn)2θ
c1 (n − 1)1/4
+ 4(d + 2) exp −
+ 8(d + 2) exp −
+ n exp −
.
8Bq2 d
4Bq2 d
3
Using the assumption that p = o(n−1+1/(5+8θ) ) for some θ < 14 implies that
n1/4 − ln(n) = ω((c1 pn)2θ ) and p−1/4 = ω((c1 pn)2θ ), such that
2
ξ2U B c3 pn2 φ(ξ2LB )2
(c1 pn)2θ
+
exp
−
P ¬A15
=
O
d
exp
−
(1
−
o(1))
2
u,v,2
24
8Bq d
2LB )
+ exp − (n−1)φ(ξ
.
8
Furthermore, φ(ξ) ≥ min(`, ξ/2L) also implies that (n−1)φ(ξ2LB ) = ω((c1 pn)2θ )
2
2
2θ
and c3 pn2 ξ2U
B φ(ξ2LB ) = ω((c1 pn) ), such that
P
¬A15
u,v,2
(c1 pn)2θ
.
= O d exp −
8Bq2 d
APPENDIX B: PROOF OF MAIN RESULTS
In this section we combine the Lemmas to show the Theorems presented
in Section 4. Theorem 4.1 follows from combining Lemmas A.34 and A.32
to obtain high probability error bounds for the estimate produced by dist1 .
Theorem 4.2 follows from combining Lemmas A.35 and A.33 to obtain high
probability error bounds for the estimate produced by dist2 .
B.1. Bounding the Max Error. If we want to bound the max error as
in Theorems 4.3 and 4.4, i.e. show that for all vertices the error is bounded,
the events we would need to hold (when using dist1 ) would be
13
12
11
10
9
5
4
∩(u,v)∈[n]×[n] (A14
u,v,1 ∩ Au,v,1 ∩ Au,v,1 ) ∩ A ∩u∈[n] (Au,ξ1LB ∩ Au,r,1 ∩ Au,r+1 ) ∩ A
110
BORGS-CHAYES-LEE-SHAH
If these events hold, then for all (u, v) ∈ [n] × [n],
√
|F̂ (u, v) − f (αu , αv )| = O(Bq dξ1U B ) = O
2
(F̂ (u, v) − f (αu , αv ))2 = O(Bq2 dξ1U
B) = O
|λ1 |
|λd |
|λ1 |
|λd |
r
2r
1/2 !
Bq3 d2 |λ1 |
1
(c1 pn) 2 −θ
Bq3 d2 |λ1 |
1
(c1 pn) 2 −θ
.
By using union bound along with Lemmas A.9, A.12, A.20, A.21, A.23, A.26,
and A.28, these events hold with probability at least
n(n + 1)
2
2
2
c pn2 ξ1U
ξ1U B c3 pn2 φ(ξ1LB )2
B φ(ξ1LB )
1 − n(n + 1) exp − 3
(1
−
o(1))
−
exp
−
(1
−
o(1))
16
16
2
2
ξ1U B c3 pn2 φ(ξ1LB )2
(c1 +c2 )(n−1)1/2
− 2n(n + 1) exp −
+
n
exp
−
(1
−
o(1))
24
3
!
−1/4
(n − 1)φ(ξ1LB )
(1 − c1 p)p
− n exp −
− 8n exp −
8
10|λ1 |Bq2 d
!
c1 (n − 1)1/4
(c1 pn)2θ
− n exp −
− 4n(d + 2) exp −
4Bq2 d
3
2
ξ
c pn2 φ(ξ
)2
(c1 pn)2θ
1LB )
.
= 1 − O n2 exp − 1U B 3 24 1LB (1 − o(1)) + n exp − (n−1)φ(ξ
+
nd
exp
−
2
8
4B d
q
Note that since we have n and n2 in the coefficient, we need the exponential term to decay sufficiently fast to guarantee that this probability converges to zero as n goes to infinity. If φ(ξ) ≥ min(`, ξ/2L) and (c1 pn)2θ =
ω(4Bq2 d log(nd)), then the second to last term dominates and the probability
of error reduces to
(c1 pn)2θ
(c1 pn)2θ
O n(d + 2) exp −
= O (d + 2) exp −
.
4Bq2 d
5Bq2 d
An equivalent result and proof holds for bounding the max error for estimates computed by dist2 .
B.2. Using Subsampled Anchor Vertices. Recall that in Section
3.3, we discussed a modification of the algorithm to reduce computation
by subsampling for anchor vertices,
and comparing only to anchor vertices
rather than computing all n2 pairwise distances. In order to prove error
bounds for the modified algorithm, we need to ensure that there exists an
anchor vertex that is within close distance from the target vertex u. Let
K denote the set of anchor vertices. For some location (u, v) we need to
first prove that with high probability there are anchor vertices within a
111
ITERATIVE COLLABORATIVE FILTERING
“close” distance to u and v. Then we need the distance estimates between
the vertices u, v and the anchor vertices to be close such that the anchor
vertices π(u) and π(v) which minimize dist2 (u, π(u)) and dist2 (v, π(v)) will
also be close in terms of kΛQ(eu − eπ(u) )k2 and kΛQ(ev − eπ(v) )k2 . Finally,
since the algorithm estimates F̂ (u, v) = F̂ (π(u), π(v)), it only remains to
show that |F̂ (π(u), π(v)) − f (απ(u) , απ(v) )| is bounded, which follows from
the proof which we showed before.
Define event
A0u,ξ := {min kΛQ(ei − eu )k22 ≤ ξ 2 }
i∈K
Lemma B.1.
P(¬A0u,ξ ) ≤ exp(−|K|φ(ξ)).
Proof. Because the set K is sampled at random amongst the vertices,
the latent variables are sampled i.i.d. from P1 . Recall that by definition,
φ(ξ) lower bounds the probability that a randomly sampled vertex satisfies
kΛQ(ei − eu )k22 ≤ ξ 2 . Therefore,
P(¬A0u,ξ ) = P(∩i∈K {kΛQ(ei − eu )k22 ≥ ξ 2 })
= P1 (kΛQ(ei − eu )k22 ≥ ξ 2 )|K|
= (1 − P1 (kΛQ(ei − eu )k22 ≤ ξ 2 ))|K|
≤ (1 − φ(ξ))|K|
≤ exp(−|K|φ(ξ)).
Proof of Theorem 4.5. Conditioned on the event that A4 ∩A5u,r+d0 ∩l∈K
(A5l,r+d0 ∩i∈[d0 ] A8l,u,r,r+i ), we proved in Lemma A.18 that for all l ∈ K,
1
|dist2 (u, l) − kΛQ(eu − el )k22 | ≤ 32Bq d|λ1 |(c1 pn)− 2 +θ .
Let π(u) denote arg mini∈K dist2 (i, u), and let π ∗ (u) denote arg mini∈K kΛQ(ei −
eu )k22 . If we additionally condition on A0u,ξ , by definition kΛQ(eπ∗ (u) −eu )k22 ≤
ξ 2 . It follows that
1
kΛQ(eu − eπ(u) )k22 − 32Bq d|λ1 |(c1 pn)− 2 +θ ≤ dist2 (π(u), u)
≤ dist2 (π ∗ (u), u)
1
≤ kΛQ(eu − eπ∗ (u) )k22 + 32Bq d|λ1 |(c1 pn)− 2 +θ
1
≤ ξ 2 + 32Bq d|λ1 |(c1 pn)− 2 +θ .
112
BORGS-CHAYES-LEE-SHAH
Therefore,
1
kΛQ(eu − eπ(u) )k22 ≤ ξ 2 + 64Bq d|λ1 |(c1 pn)− 2 +θ .
Similarly, conditioned on A0v,ξ ∩ A4 ∩ A5v,r+d0 ∩l∈K (A5l,r+d0 ∩i∈[d0 ] A8l,v,r,r+i ),
it follows that
1
kΛQ(ev − eπ(v) )k22 ≤ ξ 2 + 64Bq d|λ1 |(c1 pn)− 2 +θ .
Therefore, conditioned on A0u,ξ ∩A0v,ξ ∩A4 ∩A5u,r+d0 ∩A5v,r+d0 ∩l∈K (A5l,r+d0 ∩i∈[d0 ]
(A8l,u,r,r+i ∩ A8l,v,r,r+i )),
|f (απ(u) , απ(v) ) − f (αu , αv )| = |eTπ(u) QT ΛQeπ(v) − eTu QT ΛQev |
√
≤ Bq d kΛQ(eπ(u) − eu )k2 + kΛQ(eπ(v) − ev )k2
1/2
√
1
≤ 2Bq d ξ 2 + 64Bq d|λ1 |(c1 pn)− 2 +θ
.
Conditioned on A15
π(u),π(v),2 , by Lemma A.33, the error of the estimated
output when using dist2 is bounded by
1/2 !
√
Bq3 d2 |λ1 |
|F̂ (π(u), π(v)) − f (απ(u) , απ(v) )| = O(Bq dξ2U B ) = O
,
1 −θ
(c1 pn) 2
2
(F̂ (π(u), π(v)) − f (απ(u) , απ(v) )) =
2
O(Bq2 dξ2U
B)
=O
Bq3 d2 |λ1 |
1
(c1 pn) 2 −θ
.
Using the modification of the algorithm as specified in Section 3.3, we
estimate F̂ (u, v) = F̂ (π(u), π(v)). Therefore,
|F̂ (u, v) − f (αu , αv )| ≤ |F̂ (π(u), π(v)) − f (απ(u) , απ(v) )| + |f (απ(u) , απ(v) ) − f (αu , αv )|
1/2
√
√
1
≤ O(Bq dξ2U B ) + 2Bq d ξ 2 + 64Bq d|λ1 |(c1 pn)− 2 +θ
√
= O(Bq d min(ξ2U B , ξ)).
0
0
5
5
The event that this holds is A15
π(u),π(v),2 ∩Au,ξ ∩Av,ξ ∩Au,r+d0 ∩Av,r+d0 ∩l∈K
(A5l,r+d0 ∩i∈[d0 ] (A8l,u,r,r+i ∩ A8l,v,r,r+i )), which by Lemmas A.35, A.19, A.12,
ITERATIVE COLLABORATIVE FILTERING
113
B.1, is bounded above by
0
0
5
5
5
8
8
P(¬A15
π(u),π(v),2 ∪ ¬Au,ξ ∪ ¬Av,ξ ∪ ¬Au,r+d0 ∪ ¬Av,r+d0 ∪l∈K (¬Al,r+d0 ∪i∈[d0 ] (¬Al,u,r,r+i ∪ ¬Al,v,r,r+i )))
0
0
5
5
5
≤ P(¬A15
π(u),π(v),2 ) + P(¬Au,ξ ) + P(¬Av,ξ ) + P(¬Au,r+d0 ∪ ¬Av,r+d0 ∪l∈K ¬Al,r+d0 )
+ P(∪l∈K ∪i∈[d0 ] (¬A8l,u,r,r+i ∪ ¬A8l,v,r,r+i ) | A5u,r+d0 ∩ A5v,r+d0 ∩l∈K A5l,r+d0 )
X
0
0
5
5
≤ P(¬A15
)
+
P(¬A
)
+
P(¬A
)
+
P(¬A
)
+
P(¬A
)
+
P(¬A5l,r+d0 )
0
0
u,ξ
v,ξ
u,r+d
v,r+d
π(u),π(v),2
l∈K
+
X
+
X
P(∪i∈[d0 ] ¬A8l,u,r,r+i
|
A5u,r+d0
∩
A5v,r+d0
∩l∈K A5l,r+d0 )
l∈K
P(∪i∈[d0 ] ¬A8l,v,r,r+i | A5u,r+d0 ∩ A5v,r+d0 ∩l∈K A5l,r+d0 )
l∈K
!
(1 − c1 p)p−1/4
≤
+ 2 exp(−|K|φ(ξ)) + 16|K|d exp −
5|λ1 |Bq2 d
(c1 pn)2θ
+ 4(|K| + 2)(d + 2) exp −
4Bq2 d
2
2
2
c3 pn2 ξ2U
ξ2U B c3 pn2 φ(ξ2LB )2
B φ(ξ2LB )
≤ 2 exp −
(1 − o(1)) + exp −
(1 − o(1))
16
16
!
2
ξ2U B c3 pn2 φ(ξ2LB )2
(c1 + c2 )(n − 1)1/2
(1 − o(1)) + n exp −
+ 2 exp −
24
3
!
(n − 1)φ(ξ2LB )
(1 − c1 p)p−1/4
+ 2 exp −
+ 16d0 exp −
8
10|λ1 |Bq2 d
!
(c1 pn)2θ
(c1 pn)2θ
c1 (n − 1)1/4
+ 8(d + 2) exp −
+ n exp −
+ 4(d + 2) exp −
8Bq2 d
4Bq2 d
3
!
(1 − c1 p)p−1/4
2 exp(−|K|φ(ξ)) + 16|K|d0 exp −
5|λ1 |Bq2 d
(c1 pn)2θ
+ 4(|K| + 2)(d + 2) exp −
4Bq2 d
2
ξ2U B c3 pn2 φ(ξ2LB )2
(c1 pn)2θ
+ exp −
(1 − o(1))
.
= O exp(−|K|φ(ξ)) + |K|d exp −
4Bq2 d
24
P(¬A15
π(u),π(v),2 )
0
If we want to bound the maximum error, we need to in addition bound
the event that all vertices are close to at least one anchor vertex. In order
to prove this event holds with high probability, we will show that we can
114
BORGS-CHAYES-LEE-SHAH
bound the number of balls of diameter ξ needed to cover the space X1 with
respect to the measure P1 .
Lemma B.2. There exist a set of H disjoint subsets of the latent space
X1 , denoted Y1 . . . YH , such that the measure of the union of the sets is 1,
the number of subsets is bounded by φ(ξ/4)−1 , the measure of each subset is
bounded below by φ(ξ/4), and the diameter of each subset is bounded above
by ξ. In mathematical notation,
1.
2.
3.
4.
P1 (∪i∈[L] Yi ) = 1,
H ≤ φ(ξ/4)−1 ,
P1 (Yi ) ≥ φ(ξ/4) for all i ∈ [H],
For all i ∈ [H], for any a, b ∈ Yi , kΛQ(ea − eb )k22 ≤ ξ 2 .
Proof. We will prove the existence of this set by constructing it inductively. Let B(α0 , ξ/2) denote the ball of radius ξ/2 centered at point α0 .
(
)
Z
1/2
ξ
2
B(α0 , ξ/2) := α ∈ X1 s.t.
(f (α, y) − f (α0 , y)) dP1 (y)
≤
2
X1
By definition, the diameter of B(α0 , ξ/2) is at most ξ. Let C ⊂ X1 denote a finite set of “centers” which we will use to inductively construct the
subsets. First let C = ∅. As long as P1 (∪t∈C B(t, ξ/2)) < 1, there must exist some point α ∈ X1 \ (∪t∈C B(t, ξ/2)) such that P1 (B(α, ξ/4)) ≥ φ(ξ/4)
by definition of the function φ. Additionally, by the triangle inequality, we
can guarantee that for this choice of α, the ball B(α, ξ/4) is disjoint/nonoverlapping with the set ∪t∈C B(t, ξ/4), otherwise α would have to be within
distance ξ/2 of one of the centers. We then add α to the set C and repeat
the process. For each element that we add to C, P1 (∪t∈C B(t, ξ/2)) decreases
by at least φ(ξ/4), such that after choosing at most φ(ξ/4)−1 center, we will
have covered the space. Once this process finishes, we construct the disjoint
subsets by defining Yi to be all points that are within distance ξ/2 of the
i-th center in C for which the i-th center is the closest point within C. We
can verify that by construction, the four desired properties are satisfied.
Lemma B.3.
For |K|φ(ξ/4)2 ≥ 2,
P(∪u∈[n] ¬A0u,ξ )
|K|φ(ξ/4)
≤ exp −
8
Proof. Lemma B.2 proved that there exists a set of subsets Y1 . . . YH ,
each of diameter at most ξ, which cover measure 1 of the latent space.
ITERATIVE COLLABORATIVE FILTERING
115
Therefore, if we can ensure that there is at least one anchor vertex in each
of the H subsets, then it follows that P1 (∪i∈K B(i, ξ)) = 1, since B(t, ξ/2) ⊂
B(i, ξ) for i ∈ B(t, ξ/2).
Next we bound the probability that amongst the |K| randomly chosen
vertices, there is at least one in each of the H subsets. Consider iteratively
drawing each random anchor vertex. If there is some subset Yt which does not
yet contain an anchor vertex, the probability of randomly choosing a vertex
in Yt is at least φ(ξ/4), since by construction P1 (Yt ) ≥ φ(ξ/4). Therefore, for
each newly sampled anchor vertex, the probability that it is a member of an
unrepresented subset is at least φ(ξ/4) as long as there still exists subsets to
be represented. Thus the number of distinct subsets that the anchor vertices
cover stochastically dominates a Binomial random variable with parameters
|K| (the number of coupons drawn) and φ(ξ/4). The probability that all H
subsets are represented after |K| randomly chosen vertices, is lower bounded
by the probability that a Binomial(|K|, φ(ξ/4)) random variable is larger
than or equal to H, which is bounded above by φ(ξ/4)−1 .
By using Chernoff Hoeffding’s inequality for sums of Bernoulli random
variables, it follows that
P(∩t∈[L] {∃i ∈ Ks.t.i ∈ Yt }) ≥ P(Bin(|K|, φ(ξ/4)) ≥ H)
≥ P(Bin(|K|, φ(ξ/4)) ≥ φ(ξ/4)−1 )
(|K|φ(ξ/4) − φ(ξ/4)−1 )2
≥ 1 − exp −
2φ(ξ/4)|K|
(|K|φ(ξ/4)2 − 1)2
= 1 − exp −
.
2φ(ξ/4)3 |K|
If we condition on choosing |K| such that |K|φ(ξ/4)2 ≥ 2, then it follows
that
|K|2 φ(ξ/4)4
−1
P(Bin(|K|, φ(ξ/4)) ≥ φ(ξ/4) ) ≥ 1 − exp −
8φ(ξ/4)3 |K|
|K|φ(ξ/4)
= 1 − exp −
.
8
Therefore this probability is bounded by 1 − ε for
2
1
1
|K| ≥
max 4 ln
,
.
φ(ξ/4)
ε
φ(ξ/4)
116
BORGS-CHAYES-LEE-SHAH
Proof of Theorem 4.6. If we wanted the maximum error to be bounded,
we need to show the following events hold
13
12
11
10
4
0
5
8
∩(a,b)∈K×K (A14
a,b,2 ∩ Aa,b,2 ∩ Aa,b,2 ) ∩ A ∩a∈K Aa,ξ2LB ∩ A ∩u∈[n] (Au,ξ ∩ Au,r+d0 ∩l∈K,i∈[d0 ] Al,u,r,r+i )
If these events hold, as shown in Theorem 4.5, for all (u, v) ∈ [n] × [n],
√
|F̂ (u, v) − f (αu , αv )| = O(Bq d min(ξ2U B , ξ)).
By Lemmas B.3, A.9, A.12, A.19, A.21, A.23, A.27, and A.29, for |K|φ(ξ/4)2 ≥
2, these events hold with probability at least
2
2
2
c pn2 ξ2U
ξ2U B c3 pn2 φ(ξ2LB )2
|K|(|K|+1)
B φ(ξ2LB )
1 − |K|(|K| + 1) exp − 3
(1
−
o(1))
−
exp
−
(1
−
o(1))
16
2
16
!
2
2
2
ξ2U B c3 pn φ(ξ2LB )
(c1 + c2 )(n − 1)1/2
− 2|K|(|K| + 1) exp −
(1 − o(1)) + n exp −
24
3
!
|K|φ(ξ/4)
(1 − c1 p)p−1/4
(n − 1)φ(ξ2LB )
0
− exp(−
) − 8n|K|d exp −
− |K| exp −
8
8
5|λ1 |Bq2 d
!
c1 (n − 1)1/4
(c1 pn)2θ
− n exp −
− 4n(d + 2) exp −
4Bq2 d
3
2
ξ2U B c3 pn2 φ(ξ2LB )2
(c1 pn)2θ
|K|φ(ξ/4)
2
= 1 − O nd exp −
+ exp −
+ |K| exp −
(1 − o(1))
.
4Bq2 d
8
24
| 1 |
An Encoding for Order-Preserving Matching
Travis Gagie1 , Giovanni Manzini2 , and Rossano Venturini3
1
2
arXiv:1610.02865v2 [cs.DS] 17 Feb 2017
3
School of Computer Science and Telecommunications, Diego Portales
University and CEBIB, Santiago, Chile
[email protected]
Computer Science Institute, University of Eastern Piedmont, Alessandria,
Italy and IIT-CNR, Pisa, Italy
[email protected]
Department of Computer Science, University of Pisa, Pisa, Italy and
ISTI-CNR, Pisa, Italy
[email protected]
Abstract
Encoding data structures store enough information to answer the queries they are meant to
support but not enough to recover their underlying datasets. In this paper we give the first
encoding data structure for the challenging problem of order-preserving pattern matching. This
problem was introduced only a few years ago but has already attracted significant attention
because of its applications in data analysis. Two strings are said to be an order-preserving match
if the relative order of their characters is the same: e.g., 4, 1, 3, 2 and 10, 3, 7, 5 are an orderpreserving match. We show how, given a string S[1..n] over an arbitrary alphabet and a constant
c ≥ 1, we can build an O(n log log n)-bit encoding such that later, given a pattern P [1..m] with
m ≤ logc n, we can return the number of order-preserving occurrences of P in S in O(m) time.
Within the same time bound we can also return the starting position of some order-preserving
match for P in S (if such a match exists). We prove that our space bound is within a constant
factor of optimal; our query time is optimal if log σ = Ω(log n). Our space bound contrasts with
the Ω(n log n) bits needed in the worst case to store S itself, an index for order-preserving pattern
matching with no restrictions on the pattern length, or an index for standard pattern matching
even with restrictions on the pattern length. Moreover, we can build our encoding knowing only
how each character compares to O(logc n) neighbouring characters.
1998 ACM Subject Classification E.1 Data Structures; F.2.2 Nonnumerical Algorithms and
Problems; H.3 Information Storage and Retrieval.
Keywords and phrases Compact data structures; encodings; order-preserving matching.
Digital Object Identifier 10.4230/LIPIcs.CVIT.2016.23
1
Introduction
As datasets have grown even faster than computer memories, researchers have designed increasingly space-efficient data structures. We can now store a sequence of n numbers from
{1, . . . , σ} with σ ≤ n in about n words, and sometimes n log σ bits, and sometimes even
nH bits, where H is the empirical entropy of the sequence, and still support many powerful
queries quickly. If we are interested only in queries of the form “what is the position of
the smallest number between the ith and jth?”, however, we can do even better: regardless
of σ or H, we need store only 2n + o(n) bits to be able to answer in constant time [19].
Such a data structure, that stores enough information to answer the queries it is meant to
support but not enough to recover the underlying dataset, is called an encoding [37]. As well
© Travis Gagie, Giovanni Manzini and Rossano Venturini;
licensed under Creative Commons License CC-BY
42nd Conference on Very Important Topics (CVIT 2016).
Editors: John Q. Open and Joan R. Acces; Article No. 23; pp. 23:1–23:14
Leibniz International Proceedings in Informatics
Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany
23:2
An Encoding for Order-Preserving Matching
as the variant of range-minimum queries mentioned above, there are now efficient encoding
data structures for range top-k [12, 22, 25], range selection [33], range majority [34], range
maximum-segment-sum [21] and range nearest-larger-value [18] on sequences of numbers,
and range-minimum [24] and range nearest-larger-value [29, 30] on two-dimensional arrays
of numbers; all of these queries return positions but not values from the sequence or array.
Perhaps Orlandi and Venturini’s [35] results about sublinear-sized data structures for substring occurrence estimation are the closest to the ones we present in this paper, in that they
are more related to pattern matching than range queries: they showed how we can store a
sequence of n numbers from {1, . . . , σ} in significantly less than n log σ bits but such that
we can estimate quickly and well how often any pattern occurs in the sequence.
Encoding data structures can offer better space bounds than traditional data structures
that store the underlying dataset somehow (even in succinct or compressed form), and
possibly even security guarantees: if we can build an encoding data structure using only
public information, then we need not worry about it being reverse-engineered to reveal
private information. From the theoretical point of view, encoding data structures pose new
interesting combinatorial problems and promise to be a challenging field for future research.
In this paper we give the first encoding for order-preserving pattern matching, which asks
us to search in a text for substrings whose characters have the same relative order as those
in a pattern. For example, in 6, 3, 9, 2, 7, 5, 4, 8, 1, the order-preserving matches of 2, 1, 3 are
6, 3, 9 and 5, 4, 8. Kubica et al. [32] and Kim et al. [31] formally introduced this problem and
gave efficient online algorithms for it. Other researchers have continued their investigation,
and we briefly survey their results in Section 2. As well as its theoretical interest, this
problem has practical applications in data analysis. For example, mining for correlations
in large datasets is complicated by amplification or damping — e.g., the euro fluctuating
against the dollar may cause the pound to fluctuate similarly a few days later, but to a
greater or lesser extent — and if we search only for sequences of values that rise or fall by
exactly the same amount at each step we are likely to miss many potentially interesting
leads. In such settings, searching for sequences in which only the relative order of the values
is constrained to be the same is certainly more robust.
In Section 2 we review some previous work on order-preserving pattern matching. In
Section 3 we review the algorithmic tools we use in the rest of the paper. In Section 4
we prove our first result showing how, given a string S[1..n] over an arbitrary alphabet [σ]
and a constant c ≥ 1, we can store O(n log log n) bits — regardless of σ — such that later,
given a pattern P [1..m] with m < logc n, in O(n logc n) time we can scan our encoding
and report all the order-preserving matches of P in S. Our space bound contrasts with
the Ω(n log n) bits needed in the worst case, when log σ = Ω(log n), to store S itself, an
index for order-preserving pattern matching with no restriction on the pattern length, or
an index for standard pattern matching even with restrictions on the pattern length. (If
S is a permutation then we can recover it from an index for unrestricted order-preserving
pattern matching, or from an index for standard matching of patterns of length 2, even
when they do not report the positions of the matches. Notice this does not contradict
Orlandi and Venturini’s result, mentioned above, about estimating substring frequency, since
that permits additive error.) In fact, we build our representation of S knowing only how
each character compares to 2 logc n neighbouring characters. We show in Section 5 how to
adapt and build on this representation to obtain indexed
order-preserving pattern matching,
instead of scan-based, allowing queries in O m log3 n time but now reporting the position
of only one match.
In Section 6 we give our main result showing how to speed up our index using weak prefix
T. Gagie, G. Manzini and R. Venturini
23:3
search and other algorithmic improvements. The final index is able to count the number
of occurrences and return the position of an order-preserving match (if one exists) in O(m)
time. This query time is optimal if log σ = Ω(log n). Finally, in Section 7 we show that our
space bound is optimal (up to constant factors) even for data structures that only return
whether or not S contains any order-preserving matches.
2
Previous Work
Although recently introduced, order-preserving pattern matching has received considerable
attention and has been studied in different settings. For the online problem, where the
pattern is given in advance, the first contributions were inspired by the classical KnuthMorris-Pratt and Boyer-Moore algorithms [3, 10, 31, 32]. The proposed algorithms have
guaranteed linear time worst-case complexity or sublinear time average complexity. However,
for the online problem the best results in practice are obtained by algorithms based on the
concept of filtration, in which some sort of “order-preserving” fingerprint is applied to the
text and the pattern [4, 5, 6, 8, 9, 16, 13]. This approach was successfully applied also to
the harder problem of matching with errors [6, 23, 27].
There has also been work on indexed order-preserving pattern matching. Crochemore
et al. [11] showed how, given a string S[1..n], in O(n log(n)/ log log n) time we can build
an O(n log n)-bit index such that later, given a pattern P [1..m], we can return the starting
positions of all the occ order-preserving matches of P in S in optimal O(m + occ) time.
Their index is a kind of suffix tree, and other researchers [38] are trying to reduce the space
bound to n log σ + o(n log σ) bits, where σ is the size of the alphabet of S, by using a kind of
Burrow-Wheeler Transform instead (similar to [20]). Even if they succeed, however, when
σ = nΩ(1) the resulting index will still take linear space — i.e., Ω(n) words or Ω(n log n)
bits.
In addition to Crochemore et al.’s result, other offline solutions have been proposed
combining the idea of fingerprint and indexing. Chhabra et al. [7] showed how to speed
up the search by building an FM-index [17] on the binary string expressing whether in the
input text each element is smaller or larger than the next one. By expanding this approach,
Decaroli et al. [13] show how to build a compressed file format supporting order-preserving
matching without the need of full decompression. Experiments show that this compressed
file format takes roughly the same space as gzip and that in most cases the search is orders of
magnitude faster than the sequential scan of the text. We point out that these approaches,
although interesting for the applications, do not have competitive worst case bounds on the
search cost as we get from Crochemore et al.’s and in this paper.
3
Background
In this section we collect a set of algorithmic tools that will be used in our solutions. In the
following we report each result together with a brief description of the solved problem. More
details can be obtained by consulting the corresponding references. All the results hold in
the unit cost word-RAM model, where each memory word has size w = Ω(log n) bits, where
n is the input size. In this model arithmetic and boolean operations between memory words
require O(1) time.
Rank queries on binary vector. In the next solutions we will need to support Rank queries
on a binary vector B[1..n]. Given an index i, Rank(i) on B returns the number of 1s in the
prefix B[1..i]. We report here a result in [28].
CVIT 2016
23:4
An Encoding for Order-Preserving Matching
I Theorem 1. Given a binary vector B[1..n], we can support Rank queries in constant time
by using n + o(n) bits of space.
Elias-Fano representation. In the following we will need to encode an increasing sequence
of values in almost optimal space. There are several solutions to this problem, we report
here the result obtained with the, so-called, Elias-Fano representation [14, 15].
I Theorem
2. An increasing sequence of n values up to u can be represented by using
log nu + O(n) = n log nu + O(n) bits, so that we can access any value of the sequence in
constant time.
Minimal perfect hash functions. In our solution we will make use of Minimal perfect
hash functions (Mphf) [26] and Monotone minimal perfect hash functions (Mmphf) [1].
Given a subset of S = {x1 , x2 , . . . , xn } ⊆ U of size n, a minimal perfect hash function
has to injectively map keys in S to the integers in [n]. Hagerup and Tholey [26] show how
to build a space/time optimal minimal perfect hash function as stated by the following
theorem.
I Theorem 3. Given a subset of S ⊆ U of size n, there is a minimal perfect hash function
for S that can be evaluated in constant time and requires n log e + o(n) bits of space.
A monotone minimal perfect hash function is a Mphf h() that preserves the lexicographic
ordering, i.e., for any two strings x and y in the set, x ≤ y if and only if h(x) ≤ h(y). Results
on Mmphfs focus their attention on dictionaries of binary strings [1]. The results can be
easily generalized to dictionaries with strings over larger alphabets. The following theorem
reports the obvious generalization of Theorem 3.1 in [1] and Theorem 2 in [2].
I Theorem 4. Given a dictionary of n strings drawn from the alphabet [σ], there is a
monotone minimal perfect hash function h() that occupies O(n log(` log σ)) bits of space,
where ` is the average length of the strings in the dictionary. Given a string P [1..m], h(P )
is computed in O(1 + m log σ/w) time.
Weak prefix search. The Prefix Search Problem is a well-known problem in data-structure
design for strings. It asks for the preprocessing of a given set of n strings in such a way
that, given a query-pattern P , (the lexicographic range of) all the strings in the dictionary
which have P as a prefix can be returned efficiently in time and space.
Belazzougui et al. [2] introduced the weak variant of the problem that allows for a onesided error in the answer. Indeed, in the Weak Prefix Search Problem the answer to a query
is required to be correct only in the case that P is a prefix of at least one string in dictionary;
otherwise, the algorithm returns an arbitrary answer.
Due to these relaxed requirements, the data structures solving the problem are allowed
to use space sublinear in the total length of the indexed strings. Belazzougui et al. [2] focus
their attention on dictionaries of binary strings, but their results can be easily generalized
to dictionaries with strings over larger alphabets. The following theorem states the obvious
generalization of Theorem 5 in [2].
I Theorem 5. Given a dictionary of n strings drawn from the alphabet [σ], there exists a
data structure that weak prefix searches for a pattern P [1..m] in O(m log σ/w + log(m log σ))
time. The data structure uses O(n log(` log σ)) bits of space, where ` is the average length
of the strings in the dictionary.
T. Gagie, G. Manzini and R. Venturini
23:5
We remark that the space bound in [2] is better than the one reported above as it is
stated in terms of the hollow trie size of the indexed dictionary. This measure is always
within O(n log `) bits but it may be much better depending on the dictionary. However, the
weaker space bound suffices for the aims of this paper.
4
An Encoding for Scan-Based Search
As an introduction to our techniques, we show an O(n log log n) bit encoding supporting
scan-based order-preserving matching. Given a sequence S[1..n] we define the rank encoding
E(S)[1..n] as
E(S)[i] =
0.5
j
if S[i] is lexicographically smaller than any
character in {S[1], . . . , S[i − 1]},
j + 0.5
|{S[1], . . . , S[i − 1]}| + 0.5
if S[i] is larger than the lexicographically jth
character in {S[1], . . . , S[i − 1]} but smaller
than the lexicographically (j + 1)st,
if S[i] is equal to the lexicographically jth
character in {S[1], . . . , S[i − 1]},
if S[i] is lexicographically larger than any
character in {S[1], . . . , S[i − 1]}.
This is similar to the representations used in previous papers on order-preserving matching.
We can build E(S) in O(n log n) time. However, we would ideally need E(S[i..n]) for
i = 1, . . . , n, since P [1..m] has an order-preserving match in S[i..i + m − 1] if and only if
E(P ) = E(S[i..i + m − 1]). Assuming P has polylogarithmic size, we can devise a more
space efficient encoding.
I Lemma 6. Given S[1..n] and a constant c ≥ 1 let ` = logc n. We can store O(n log log n)
bits such that later, given i and m ≤ `, we can compute E(S[i..i + m − 1]) in O(m) time.
Proof. For every position i in S which is multiple of ` = logc n, we store the ranks of the
characters in the window S[i..i + 2`]. The ranks are values at most 2`, thus they are stored
in O(log `) bits each. We concatenate the ranks of each window in a vector V , which has
length O(n) and takes O(n log `) bits. Every range S[i..i + m − 1] of length m ≤ ` is fully
contained in at least one window and in constant time we can convert i into i0 such that
V [i0 ..i0 + m − 1] contains the ranks of S[i], . . . , S[i + m − 1] in that window.
Computing E(S[i..i + m − 1]) naïvely from these ranks would take O(m log m) time. We
can speed up this computation by exploiting the fact that S[i..i + m − 1] has polylogaritmic
length. Indeed, a recent result [36] introduces a data structure to represent a small dynamic
set S of O(wc ) integers of w bits each supporting, among the others, insertions and rank
queries in O(1) time. Given an integer x, the rank of x is the number of integers in S that
are smaller than or equal to x. All operations are supported in constant time for sets of size
O(wc ). This result allows us to compute E(S[i..i + m − 1]) in O(m) time. Indeed, we can
use the above data structure to insert S[i..i + m − 1]’s characters one after the other and
compute their ranks in constant time.
J
It follows from Lemma 6 that given S and c, we can store an O(n log log n)-bit encoding of
S such that later, given a pattern P [1..m] with m ≤ logc n, we can compute E(S[i..i+m−1])
CVIT 2016
23:6
An Encoding for Order-Preserving Matching
for each position i in turn and compare it to E(P ), and thus find all the order-preserving
matches of P in O(nm) time. (It is possible to speed this scan-based algorithm up by
avoiding computing each E(S[i..i+m−1]) from scratch but, since this is only an intermediate
result, we do not pursue it further here.) We note that we can construct the encoding
in Lemma 6 knowing only how each character of S compares to O(logc n) neighbouring
characters.
I Corollary 7. Given S[1..n] and a constant c ≥ 1, we can store an encoding of S in
O(n log log n) bits such that later, given a pattern P [1..m] with m ≤ logc n, we can find all
the order-preserving matches of P in S in O(nm) time.
We will not use Corollary 7 in the rest of this paper, but we state it as a baseline easily
proven from Lemma 6.
5
Adding an Index to the Encoding
Suppose we are given S[1..n] and a constant c ≥ 1. We build the O(n log log n)-bit encoding
of Lemma 6 for ` = logc n + log n and call it S` . Using S` we can compute E(S 0 ) for any
substring S 0 of S of length |S 0 | ≤ ` in O(|S 0 |) time. We now show how to complement S`
with a kind of “sampled suffix array” using O(n log log n) more bits, such that we can search
for a pattern P [1..m] with m ≤ logc n and return the starting position of
an order-preserving
match for P in S, if there is one. Out first solution has O m log3 n query time; we will
improve the query time to O(m) in the next section.
We define the rank-encoded suffix array R[1..n] of S such that R[i] = j if E(S[i..n]) is the
lexicographically jth string in {E(S[1..n]), E(S[2..n]), . . . , E(S[n])}. Note that E(S[i..n])
has length n − i + 1. Figure 1 shows an example.
Our algorithm consists of a searching phase followed by a verification phase. The goal of
the searching phase is to identify a range [l, r] in R which contains all the encodings prefixed
by E(P ), if any, or an arbitrary interval if P does not occur. The verification phase has
to check if there is at least an occurrence of P in this interval, and return one position at
which P occurs.
Searching phase. Similarly to how we can use a normal suffix array and S to support
normal pattern matching, we could use R and S to find all order-preserving matches for a
pattern P [1..m] in O(m log n) time via binary search, i.e., at each step we choose an index
i, extract S[R[i]..R[i] + m − 1], compute its rank encoding and compare it to E(P ), all in
O(m) time. If m ≤ ℓ we can compute E(S[R[i]..R[i] + m − 1]) using S_ℓ instead of S, still in
O(m) time, but storing R still takes Ω(n log n) bits.
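The following sketch makes this baseline binary search concrete, reusing rank_encode and S from the earlier sketches; it assumes the full array R is stored (unlike the O(n log log n)-bit version developed below) and uses 0-based indices.

```python
def build_R(S):
    """Rank-encoded suffix array (0-based): R[i] = j iff E(S[j..n-1]) is the
    lexicographically (i+1)-st string among all suffix encodings."""
    return sorted(range(len(S)), key=lambda j: rank_encode(S[j:]))

def op_search(S, R, P):
    """O(m log n)-style binary search over R.  Returns one order-preserving
    match position for P, or None if there is none."""
    m, ep = len(P), rank_encode(P)
    lo, hi = 0, len(S) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        es = rank_encode(S[R[mid]:R[mid] + m])
        if es == ep:
            return R[mid]
        if es < ep:              # lexicographic comparison of the encodings
            lo = mid + 1
        else:
            hi = mid - 1
    return None

R = build_R(S)
print(op_search(S, R, [2, 3, 1, 2]))   # 18
```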
Therefore, for our searching phase we sample and store only every sample-th element of
R, by position, and every element of R equal to 1 or n or to a multiple of sample, where
sample = ⌊log n/ log log n⌋. This takes O(n log log n) bits. Notice that we can still find in
O(m log n) time, via binary search in the sampled R, an order-preserving match for any pattern
P [1..m] that has at least sample order-preserving matches in S. If P has fewer than sample
order-preserving matches in S but we happen to have sampled a cell of R pointing to the starting
position of one of those matches, then our binary search still finds it. Otherwise, we find
an interval of length at most sample − 1 which contains pointers at least to all the order-preserving
matches for P in S; on this interval we perform the verification phase.
[Figure 1: The rank-encoded suffix array R[1..30] for S[1..30] = 3 9 7 2 3 5 6 8 4 3 6 5 9 5 2 2 0 1 5 6 0 5 4 3 1 2 5 6 7 1, with L[i], B[i] and D[i] computed for sample = 4. Stored values are shown in boldface.]
Verification phase. The verification phase receives a range R[l, r] (although R is not stored completely) and has to check if that range contains the starting position of an order-preserving match for P and, if so, return its position. This is done by adding auxiliary data
structures to the sampled entries of R.
Suppose that for each unsampled element R[i] = j we store the following data:
- the smallest number L[i] (if one exists) such that S[j − 1..j + L[i] − 1] has at most log^c n order-preserving matches in S;
- the rank B[i] = E(S[j−1..j+L[i]−1]^rev)[L[i]+1] ≤ L[i] + 1/2 of S[j−1] in S[j..j+L[i]−1], where the superscript rev indicates that the string is reversed;
- the distance D[i] to the cell of R containing j − 1 from the last sampled element x such that E(S[x..x + L[i]]) is lexicographically smaller than E(S[j − 1..j + L[i] − 1]).
Figure 1 shows the values in L, B and D for our example.
Assume we are given P [1..m] and i and told that S[R[i]..R[i] + m − 1] is an order-preserving
match for P, but we are not told the value R[i] = j. If R[i] is sampled, of course,
then we can return j immediately. If L[i] does not exist or is greater than m then P has at
least log^c n ≥ sample order-preserving matches in S, so we can find one in O(m) time: we
consider the sampled values from R that precede and follow R[i] and check with Lemma 6
whether there are order-preserving matches starting at those sampled values. Otherwise,
from L[i], B[i] and P , we can compute E(S[j − 1..j + L[i] − 1]) in O(m log m) time: we take
the length-L[i] prefix of P ; if B[i] is an integer, we prepend to P [1..L[i]] a character equal to
the lexicographically B[i]th character in that prefix; if B[i] is r + 0.5 for some integer r with
1 ≤ r < L[i], we prepend a character lexicographically between the lexicographically rth and
(r + 1)st characters in the prefix; if B[i] = 0.5 or B[i] = L[i] + 0.5, we prepend a character
lexicographically smaller or larger than any in the prefix, respectively. We can then find in
O(m log n) time the position in R of x, the last sampled element such that E(S[x..x + L[i]])
is lexicographically smaller than E(S[j − 1..j + L[i] − 1]). Adding D[i] to this position gives
us the position i′ of j − 1 in R. Repeating this procedure until we reach a sampled cell of
R takes O(m log² n/ log log n) = O(m log² n) time, and we can then compute and return
j. As the reader may have noticed, the procedure is very similar to how we use backward
stepping to locate occurrences of a pattern with an FM-index [17], so we refer to it as a
backward step at position i.
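The reconstruction of the prepended character from B[i] can be made concrete as follows; this is only the case analysis from the text (numeric characters are assumed for illustration, and the subsequent binary search over the sampled entries and the use of D[i] are not shown).

```python
def prepend_by_rank(prefix, b):
    """Given the length-L[i] prefix of P (as numbers) and the stored rank
    b = B[i], build a string whose rank encoding equals E(S[j-1..j+L[i]-1]),
    following the four cases described in the text."""
    s = sorted(prefix)
    if b == int(b):                      # S[j-1] equals the b-th smallest character
        c = s[int(b) - 1]
    elif b == 0.5:                       # S[j-1] is smaller than every character
        c = min(prefix) - 1
    elif b == len(prefix) + 0.5:         # S[j-1] is larger than every character
        c = max(prefix) + 1
    else:                                # strictly between the r-th and (r+1)-st smallest
        r = int(b)
        c = (s[r - 1] + s[r]) / 2
    return [c] + list(prefix)

# For instance, with L = 3, B = 1.5 and P = 2 3 1 2 (cf. the complete search example below):
print(prepend_by_rank([2, 3, 1], 1.5))               # [1.5, 2, 3, 1]
print(rank_encode(prepend_by_rank([2, 3, 1], 1.5)))  # [0.5, 1.5, 2.5, 0.5]
```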
Even if we do not really know whether S[R[i]..R[i] + m − 1] is an order-preserving match
for P, we can still start at the cell R[i] and repeatedly apply this procedure: if we do not
find a sampled cell after sample − 1 repetitions, then S[R[i]..R[i] + m − 1] is not an
order-preserving match for P; if we do, then we add the number of times we have repeated the
procedure to the contents of the sampled cell to obtain the contents of R[i] = j. Then,
using S_ℓ we compute E(S[j..j + m − 1]) in O(m) time, compare it to E(P) and, if they are
the same, return j. This still takes O(m log² n) time. Therefore, after our searching phase,
if we find an interval [l, r] of length at most sample − 1 which contains pointers to all the
order-preserving matches for P in S (instead of an order-preserving match directly), then
we can check each cell in that interval with this procedure, in a total of O(m log³ n) time.
If R[i] = j is the starting position of an order-preserving match for a pattern P [1..m]
with m ≤ log^c n that has at most sample order-preserving matches in S, then L[i] ≤ log^c n.
Moreover, if R[i′] = j − 1 then L[i′] ≤ log^c n + 1 and, more generally, if R[i″] = j − t then
L[i″] ≤ log^c n + t. Therefore, we can repeat the stepping procedure described above and find
j without ever reading a value in L larger than log^c n + log n and, since each value in B is
bounded in terms of the corresponding value in L, without ever reading a value in B larger
than log^c n + log n + 1/2. It follows that we can replace any values in L and B greater than
log^c n + log n + 1/2 by the flag −1, indicating that we can stop the procedure when we read
it. With this modification, each value in L and B takes O(log log n) bits so, since each value
in D is less than log^c n + log n and also takes O(log log n) bits, L, B and D take a total of
O(n log log n) bits. Since also the encoding S_ℓ from Lemma 6 with ℓ = log^c n + log n takes
O(n log log n) bits, the following intermediate theorem summarizes our results so far.
▶ Theorem 8. Given S[1..n] and a constant c ≥ 1, we can store an encoding of S in
O(n log log n) bits such that later, given a pattern P [1..m] with m ≤ log^c n, in O(m log³ n)
time we can return the position of an order-preserving match of P in S (if one exists).
A complete search example. Suppose we are searching for order-preserving matches for
P = 2 3 1 2 in the string S[1..30] shown in Figure 1. Binary search on R tells us that pointers
to all the matches are located in R strictly between R[16] = 28 and R[19] = 12, because
E(S[28..30]) = E(6 7 1) = 0.5 1.5 0.5
≺ E(P) = E(2 3 1 2) = 0.5 1.5 0.5 2
≺ E(S[12..14]) = E(5 9 5) = 0.5 1.5 1;
notice R[16] = 28 and R[19] = 12 are stored because 16, 28 and 12 are multiples of sample =
4.
We first check whether R[17] points to an order-preserving match for P . That is, we
assume (incorrectly) that it does; we take the first L[17] = 3 characters of P ; and, because
B[17] = 1.5, we prepend a character between the lexicographically first and second, say
1.5. This gives us 1.5 2 3 1, whose encoding is 0.5 1.5 2.5 0.5. Another binary search on R
shows that R[20] = 1 is the last sampled element x such that E(S[x..x + 3]), in this case
0.5 1.5 1.5 0.5, is lexicographically smaller than 0.5 1.5 2.5 0.5. Adding D[17] = 4 to 20, we
would conclude that R[24] = R[17] − 1 (which happens to be true in this case) and that
0.5 1.5 2.5 0.5 is a prefix of E(S[R[24]..n]) (which also happens to be true). Since R[24] = 6
is sampled, however, we compute E(S[7..10]) = 0.5 1.5 0.5 0.5 and, since it is not the same
as P ’s encoding, we reject our initial assumption that R[17] points to an order-preserving
match for P .
We now check whether R[18] points to an order preserving match for P . That is, we
assume (correctly this time) that it does; we take the first L[18] = 3 characters of P ; and,
because B[18] = 1.5, we prepend a character between the lexicographically first and second,
say 1.5. This again gives us 1.5 2 3 1, whose encoding is 0.5 1.5 2.5 0.5. As before, a binary
search on R shows that R[20] = 1 is the last sampled element x such that E(S[x..x + 3]) is
lexicographically smaller than 0.5 1.5 2.5 0.5. Adding D[18] = 5 to 20, we conclude (correctly)
that R[25] = R[18] − 1 and that 0.5 1.5 2.5 0.5 is a prefix of E(S[R[25]..n]).
Repeating this procedure with L[25] = 4, B[25] = 1 and D[25] = 3, we build a string
with encoding 0.5 1.5 2.5 0.5, say 2 3 4 1, and prepend a character equal to the lexicographically first, 1. This gives us 1 2 3 4 1, whose encoding is 0.5 1.5 2.5 3.5 1. Another binary search
shows that R[24] = 6 is the last sampled element x such that E(S[x..x + 4]) is lexicographically smaller than 0.5 1.5 2.5 3.5 1. We conclude (again correctly) that R[27] = R[18] − 2 and
that 0.5 1.5 2.5 3.5 1 is a prefix of E(S[R[27]..n]).
Finally, repeating this procedure with L[27] = 2, B[27] = 2.5 and D[27] = 3, we build a
string with encoding 0.5 1.5, say 1 2, and prepend a character lexicographically greater than
any currently in the string, say 3. This gives us 3 1 2, whose encoding is 0.5 0.5 1.5. A final
binary search shows that R[8] = 14 is the last sampled element x such that E(S[x..x + 2])
is lexicographically smaller than 0.5 0.5 1.5. We conclude (again correctly) that R[11] =
R[18] − 3 and that 0.5 0.5 1.5 is a prefix of E(S[R[11]..n]). Since R[11] = 16 is sampled, we
compute E(S[19..22]) = 0.5 1.5 0.5 2 and, since it matches P ’s encoding, we indeed report
S[19..22] as an order-preserving match for P .
6  Achieving O(m) query time
In this section we prove our main result:
▶ Theorem 9. Given S[1..n] and a constant c ≥ 1, we can store an encoding of S in
O(n log log n) bits such that later, given a pattern P [1..m] with m ≤ log^c n, in O(m) time
we can return the position of an order-preserving match of P in S (if one exists). In O(m)
time we can also report the total number of order-preserving occurrences of P in S.
Compared to Theorem 8, we improve the query time from O(m log³ n) to O(m). This
is achieved by speeding up several steps of the algorithm described in the previous section.
Speeding up pattern’s encoding. Given a pattern P [1..m], the algorithm has to compute
its encoding E(P [1..m]). Doing this naïvely as in the previous section would cost O(m log m)
time, which is, by itself, larger than our target time complexity. However, since m is
polylogarithmic in n, we can speed this up as we sped up the computation of the rank-encoding of S[i..i + m − 1] in the proof of Lemma 6, and obtain E(P) in O(m) time. Indeed,
we can insert P ’s characters one after the other in the data structures of [36] and compute
their ranks in constant time.
Dealing with short patterns. The approach used by our solution cannot achieve a
o(sample) query time. This is because we answer a query by performing Θ(sample) backward
steps regardless of the pattern’s length. This means that for very short patterns, namely
m = o(sample) = o(log n/ log log n), the solution cannot achieve O(m) query time. However,
we can precompute and store the answers for all these short patterns in o(n) bits. Indeed,
the encoding of a pattern of length at most m = o(log n/ log log n) is a binary string of
length o(log n). Thus, there are o(√n) possible encodings. For each of these encodings we
explicitly store the number of its occurrences and the position of one of them, in o(n) bits overall.
From now on, thus, we can safely assume that m = Ω(log n/ log log n).
Speeding up searching phase. The searching phase of the previous algorithm has two
important drawbacks. First, it costs O(m log n) time and, thus, it is obviously too expensive
for our target time complexity. Second, binary searching on the sampled entries in R gives
too imprecise results. Indeed, it finds a range [l, r] of positions in R which may be potential
matches for P . However, if the entire range is within two consecutive sampled positions,
we are only guaranteed that all the occurrences of P are in the range but there may exist
positions in the range which do not match P . This uncertainty forces us to explicitly check
every single position in the range until a match for P is found, if any. This implies that we
have to check r − l + 1 = O(sample) positions in the worst case. Since every check has a
cost proportional to m, this gives ω(m) query time.
We use the data structure for weak prefix search of Theorem 5 to index the encodings of all
suffixes of the text truncated at length ℓ = log^c n + log n. This way, we can find
the range [l, r] of suffixes prefixed by E(P [1..m]) in O(m log log n/w + log(m log log n)) =
O(m log log n/w + log log n) time with a data structure of size O(n log log n) bits. This is
because E(P [1..m]) is drawn from an alphabet of size O(log^c n), and both m and ℓ are in
O(log^c n). Apart from its faster query time, this solution has stronger guarantees. Indeed,
if the pattern P has at least one occurrence, the range [l, r] contains all and only the occurrences
of P. Instead, if the pattern P does not occur, [l, r] is an arbitrary and meaningless
range. In both cases, just a single check of any position in the range is enough to answer the
order-preserving query. This property gives an O(log n/ log log n) factor improvement over
the previous solution.
Speeding up verification phase. It is clear by the discussion above that the verification
phase has to check only one position in the range [l, r]. If the range contains at least one
sampled entry of R, we are done. Otherwise, we have to perform at most sample backward
steps as in the previous solution.
We now improve the computation of every single backward step. Assume we have to
compute a backward step at i, where R[i] = j. Before performing the backward step, we
have to compute the encoding E(S[j − 1..j + L[i] − 1]), given B[i], L[i], and E(S[j..j + u])
for some u ≥ L[i]. This is done as follows. We first prepend 0.5 to E(S[j..j + u]) and take its
prefix of length L[i]. Then, we increase by one every value in the prefix which is larger than
B[i]. These operations can be done in O(1 + L[i] log log n/w) = O(1 + m log log n/w) time
by exploiting word parallelism of the RAM model. Indeed, we can operate on O(w/ log log n)
symbols of the encoding in parallel.
Now the backward step at i is i′ = k + D[i], where k is the only sampled entry in R
whose encoding is prefixed by E(S[j − 1..j + L[i] − 1]). Notice that there cannot be more
than one, otherwise S[j − 1..j + L[i] − 1] would occur more than sample times, which was
excluded in the construction.
Thus, the problem is to compute k, given i and E(S[j − 1..j + L[i] − 1]). It is crucial to
observe that E(S[j − 1..j + L[i] − 1]) depends only on S and L[i] and not on the pattern P
we are searching for. Thus, there exists just one valid E(S[j − 1..j + L[i] − 1]) that could be
used at query time for a backward step at i. Notice that, if the pattern P does not occur,
the encoding that will be used at i may be different, but in this case it is not necessary to
compute a correct backward step. Consider the set E of all these, at most n, encodings. The
goal is to map each encoding in E to the sampled entry in R that it prefixes. This can be
done as follows. We build a monotone minimal perfect hash function h() on E to map each
encoding to its lexicographic rank. Obviously, the encodings that prefix a certain sampled
entry i in R form a consecutive range in the lexicographic ordering. Moreover, none of these
ranges overlaps because each encoding prefixes exactly one sampled entry. Thus, we can
use a binary vector B to mark each of these ranges, so that, given the lexicographic rank of
an encoding, we can infer the sampled entry it prefixes. The binary vector is obtained by
processing the sampled entries in R in lexicographic order and by writing the size of each range
in unary. It is easy to see that the sampled entry prefixed by x = E(S[j − 1..j + L[i] − 1])
can be computed as Rank1(h(x)) in constant time. The data structure that stores B and
supports Rank requires O(n) bits (see Theorem 1).
The evaluation of h() is the dominant cost, and, thus, a backward step is computed in
O(1 + m log log n/w) time. The overall space usage of this solution is O(n log log n) bits,
because B has at most 2n bits and h() requires O(n log log n) bits by Theorem 4.
Since we perform at most sample backward steps, it follows that the overall query time
is O(sample × (1 + m log log n/w)) = O(m). The equality follows by observing that sample =
O(log n/ log log n), m = Ω(log n/ log log n) and w = Ω(log n).
We finally observe that we could use the weak prefix search data structure instead of h()
to compute a backward step. However, this would introduce a term O(log n) in the query
time, which would be dominant for short patterns, i.e., m = o(log n).
Query algorithm. We report here the query algorithm for a pattern P [1..m], with m =
Ω(log n/ log log n). Recall that for shorter patterns we store all possible answers.
We first compute E(P [1..m]) in O(1 + m log log n/w) time. Then, we perform a weak
prefix search to identify the range [l, r] of encodings that are prefixed by E(P [1..m]) in
O(m log log n/w + log log n) time. If P has at least one occurrence, the search is guaranteed
to find the correct range; otherwise, the range may be arbitrary but the subsequent check
will identify the mistake and report zero occurrences.
In the checking phase, there are only two possible cases.
The first case occurs when [l, r] contains a sampled entry, say i, in R. Thus, we can use
the encoding from Lemma 6 to compare E(S[R[i]..R[i] + m − 1]) and E(P [1..m]) in O(m) time.
If they are equal, we report R[i]; otherwise, we are guaranteed that there is no occurrence
of P in S.
The second case is when there is no sampled entry in [l, r]. We arbitrarily select an index
i ∈ [l, r] and we perform a sequence of backward steps starting from i. If P has at least
one occurrence, we are guaranteed to find a sampled entry e in at most sample backward
steps. The overall time of these backward steps is O(sample × m log log n/w) = O(m). If
e is not found, we conclude that P has no occurrence. Otherwise, we explicitly compare
E(S[R[e]+b..R[e]+b+m−1]) and E(P [1..m]) in O(m) time, where b is the number of performed
backward steps. We report R[e] + b only in case of a successful comparison. Note that if P
occurs, then the number of its occurrences is r − l + 1.
7  Space Lower Bound
In this section we prove that our solution is space optimal. This is done by showing a lower
bound on the space that any data structure must use to solve the easier problem of just
establishing if a given pattern P has at least one order-preserving occurrence in S.
More precisely, in this section we prove the following theorem.
▶ Theorem 10. Any encoding data structure that indexes any S[1..n] over the alphabet [σ]
with log σ = Ω(log log n) and that, given a pattern P [1..m] with m = log n, establishes if P has
any order-preserving occurrence in S must use Ω(n log log n) bits of space.
By contradiction, we assume that there exists a data structure D that uses o(n log log n)
bits. We prove that this implies that we can store any string S[1..n] in less than n log σ bits,
which is clearly impossible.
We start by splitting S into n/m blocks of size m = log n characters each. Let Bi denote
the ith block in this partition. Observe that if we know both the list L(Bi ) of characters
that occur in Bi together with their number of occurrences and E(Bi ), we can recover Bi .
This is because E(Bi ) implicitly tells us how to permute the characters in L(Bi ) to obtain
Bi . Obviously, if we are able to reconstruct each Bi , we can reconstruct S. Thus, our goal
is to use D together with additional data structures to obtain E(Bi ) and L(Bi ), for any Bi .
We first directly encode L(Bi) for each i by encoding the sorted sequence of characters
with the Elias–Fano representation. By Theorem 2, we know that this requires m log(σ/m) + O(m)
bits. Summing up over all the blocks, the overall space used is n log(σ/m) + O(n) bits.
Now it remains to obtain the encodings of all the blocks. Consider the set E of the
encodings of all the substrings of S of length m. We do not store E because it would require
too much space. Instead, we use a minimal perfect hash function h() on E. This requires
O(n) bits by Theorem 3. This way each distinct encoding is bijectively mapped to a value
in [n]. For each block Bi , we store h(Bi ). This way, we are keeping track of those elements
in E that are blocks and their positions in S. This requires O(n) bits, because there are
n/ log n blocks and storing each value needs O(log n) bits.
We are now ready to retrieve the encoding of all the blocks, which is the last step to be
able to reconstruct S. This is done by searching in D for every possible encoding of exactly
m characters. The data structure will be able to tell us the ones that occur in S, i.e., we
are retrieving the entire set E. For each encoding e ∈ E, we check if h(e) is the hash of any
of the blocks. In this way we are able to associate the encodings in E to the original blocks.
Thus, we are able to reconstruct S by using D together with additional data structures which use
n log σ − n log log n + O(n) bits of space. This implies that D cannot use o(n log log n) bits.
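For concreteness, the space accounting behind the contradiction is sketched below (with m = log n, so that log(σ/m) = log σ − log log n; lower-order terms are not tracked precisely):

```latex
\underbrace{o(n\log\log n)}_{\text{the assumed }D}
\;+\;\underbrace{n\log\tfrac{\sigma}{m}+O(n)}_{\text{Elias--Fano encodings of the }L(B_i)}
\;+\;\underbrace{O(n)}_{\text{hash values }h(B_i)}
\;=\; n\log\sigma-n\log\log n+o(n\log\log n)+O(n)\;<\;n\log\sigma
```

for all sufficiently large n, contradicting the fact that n log σ bits are necessary to store an arbitrary string S over the alphabet [σ].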
8  Conclusion
We have given an encoding data structure for order-preserving pattern matching: given
a string S of length n over an arbitrary alphabet and a constant c ≥ 1, we can store
O(n log log n) bits such that later, given a pattern P of length m ≤ log^c n, in O(m) time we
can return the position of an order-preserving match of P in S (if one exists) and report the
number of such matches. Our space bound is within a constant factor of optimal, even for
only detecting whether a match exists, and our time bound is optimal when the alphabet
size is at least logarithmic in n. We can build our encoding knowing only how each character
of S compares to O(log^c n) neighbouring characters. We believe our results will help open
up a new line of research, where space is saved by restricting the set of possible queries or
by relaxing the acceptable answers, and that will help us deal with the rapid growth of datasets.
References
1. Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. Monotone minimal perfect hashing: searching a sorted table with O(1) accesses. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 785–794. SIAM, 2009.
2. Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. Fast prefix search in little space, with applications. In European Symposium on Algorithms, pages 427–438. Springer, 2010.
3. Djamal Belazzougui, Adeline Pierrot, Mathieu Raffinot, and Stéphane Vialette. Single and multiple consecutive permutation motif search. In International Symposium on Algorithms and Computation, pages 66–77. Springer, 2013.
4. Domenico Cantone, Simone Faro, and M. Oğuzhan Külekci. An efficient skip-search approach to the order-preserving pattern matching problem. In Stringology, pages 22–35, 2015.
5. Tamanna Chhabra, Simone Faro, M. Oğuzhan Külekci, and Jorma Tarhio. Engineering order-preserving pattern matching with SIMD parallelism. Software: Practice and Experience, 2016.
6. Tamanna Chhabra, Emanuele Giaquinta, and Jorma Tarhio. Filtration algorithms for approximate order-preserving matching. In International Symposium on String Processing and Information Retrieval, pages 177–187. Springer, 2015.
7. Tamanna Chhabra, M. Oğuzhan Külekci, and Jorma Tarhio. Alternative algorithms for order-preserving matching. In Stringology, pages 36–46, 2015.
8. Tamanna Chhabra and Jorma Tarhio. Order-preserving matching with filtration. In International Symposium on Experimental Algorithms, pages 307–314. Springer, 2014.
9. Tamanna Chhabra and Jorma Tarhio. A filtration method for order-preserving matching. Information Processing Letters, 116(2):71–74, 2016.
10. Sukhyeun Cho, Joong Chae Na, Kunsoo Park, and Jeong Seop Sim. A fast algorithm for order-preserving pattern matching. Information Processing Letters, 115(2):397–402, 2015.
11. Maxime Crochemore, Costas S. Iliopoulos, Tomasz Kociumaka, Marcin Kubica, Alessio Langiu, Solon P. Pissis, Jakub Radoszewski, Wojciech Rytter, and Tomasz Waleń. Order-preserving indexing. Theoretical Computer Science, 638:122–135, 2016.
12. Pooya Davoodi, Gonzalo Navarro, Rajeev Raman, and S. Srinivasa Rao. Encoding range minima and range top-2 queries. Philosophical Transactions of the Royal Society A, 372(2016):20130131, 2014.
13. Gianni Decaroli, Travis Gagie, and Giovanni Manzini. A compact index for order-preserving pattern matching. In Data Compression Conference, 2017. To appear.
14. Peter Elias. Efficient storage and retrieval by content and address of static files. Journal of the ACM, 21(2):246–260, 1974.
15. Robert M. Fano. On the number of bits required to implement an associative memory. Technical Report Memorandum 61, Project MAC, Computer Structures Group, Massachusetts Institute of Technology, 1971.
16. Simone Faro and M. Oğuzhan Külekci. Efficient algorithms for the order preserving pattern matching problem. In International Conference on Algorithmic Applications in Management, pages 185–196. Springer, 2016.
17. Paolo Ferragina and Giovanni Manzini. An experimental study of a compressed index. Information Sciences, 135(1):13–28, 2001.
18. Johannes Fischer. Combined data structure for previous- and next-smaller-values. Theoretical Computer Science, 412(22):2451–2456, 2011.
19. Johannes Fischer and Volker Heun. A new succinct representation of RMQ-information and improvements in the enhanced suffix array. In Combinatorics, Algorithms, Probabilistic and Experimental Methodologies, pages 459–470. Springer, 2007.
20. Arnab Ganguly, Rahul Shah, and Sharma V. Thankachan. pBWT: Achieving succinct data structures for parameterized pattern matching and related problems. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 397–407. SIAM, 2017.
21. Paweł Gawrychowski and Patrick K. Nicholson. Encodings of range maximum-sum segment queries and applications. In Annual Symposium on Combinatorial Pattern Matching, pages 196–206. Springer, 2015.
22. Paweł Gawrychowski and Patrick K. Nicholson. Optimal encodings for range top-k, selection, and min-max. In International Colloquium on Automata, Languages, and Programming, pages 593–604. Springer, 2015.
23. Paweł Gawrychowski and Przemysław Uznański. Order-preserving pattern matching with k mismatches. Theoretical Computer Science, 638:136–144, 2016.
24. Mordecai Golin, John Iacono, Danny Krizanc, Rajeev Raman, Srinivasa Rao Satti, and Sunil Shende. Encoding 2D range maximum queries. Theoretical Computer Science, 609:316–327, 2016.
25. Roberto Grossi, John Iacono, Gonzalo Navarro, Rajeev Raman, and Srinivasa Rao Satti. Encodings for range selection and top-k queries. In European Symposium on Algorithms, pages 553–564. Springer, 2013.
26. Torben Hagerup and Torsten Tholey. Efficient minimal perfect hashing in nearly minimal space. In Annual Symposium on Theoretical Aspects of Computer Science, pages 317–326. Springer, 2001.
27. Tommi Hirvola and Jorma Tarhio. Approximate online matching of circular strings. In International Symposium on Experimental Algorithms, pages 315–325. Springer, 2014.
28. Guy Jacobson. Space-efficient static trees and graphs. In 30th Annual Symposium on Foundations of Computer Science, pages 549–554. IEEE, 1989.
29. Varunkumar Jayapaul, Seungbum Jo, Rajeev Raman, Venkatesh Raman, and Srinivasa Rao Satti. Space efficient data structures for nearest larger neighbor. Journal of Discrete Algorithms, 36:63–75, 2016.
30. Seungbum Jo, Rajeev Raman, and Srinivasa Rao Satti. Compact encodings and indexes for the nearest larger neighbor problem. In International Workshop on Algorithms and Computation, pages 53–64. Springer, 2015.
31. Jinil Kim, Peter Eades, Rudolf Fleischer, Seok-Hee Hong, Costas S. Iliopoulos, Kunsoo Park, Simon J. Puglisi, and Takeshi Tokuyama. Order-preserving matching. Theoretical Computer Science, 525:68–79, 2014.
32. Marcin Kubica, Tomasz Kulczyński, Jakub Radoszewski, Wojciech Rytter, and Tomasz Waleń. A linear time algorithm for consecutive permutation pattern matching. Information Processing Letters, 113(12):430–433, 2013.
33. Gonzalo Navarro, Rajeev Raman, and Srinivasa Rao Satti. Asymptotically optimal encodings for range selection. In 34th International Conference on Foundation of Software Technology and Theoretical Computer Science, page 291, 2014.
34. Gonzalo Navarro and Sharma V. Thankachan. Encodings for range majority queries. In CPM, pages 262–272, 2014.
35. Alessio Orlandi and Rossano Venturini. Space-efficient substring occurrence estimation. Algorithmica, 74(1):65–90, 2016.
36. Mihai Patrascu and Mikkel Thorup. Dynamic integer sets with optimal rank, select, and predecessor search. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), pages 166–175. IEEE, 2014.
37. Rajeev Raman. Encoding data structures. In International Workshop on Algorithms and Computation, pages 1–7. Springer, 2015.
38. Rahul Shah. Personal communication, 2016.
arXiv:1710.09825v2 [cond-mat.dis-nn] 20 Mar 2018
On the role of synaptic stochasticity in training low-precision neural networks
Carlo Baldassi,1,2,3 Federica Gerace,2,4 Hilbert J. Kappen,5 Carlo Lucibello,2,4 Luca Saglietti,2,4 Enzo Tartaglione,2,4 and Riccardo Zecchina1,2,6
1 Bocconi Institute for Data Science and Analytics, Bocconi University, Milano, Italy
2 Italian Institute for Genomic Medicine, Torino, Italy
3 Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Italy
4 Dept. of Applied Science and Technology, Politecnico di Torino, Torino, Italy
5 Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, 6525 EZ Nijmegen, The Netherlands
6 International Centre for Theoretical Physics, Trieste, Italy
Stochasticity and limited precision of synaptic weights in neural network models are key aspects
of both biological and hardware modeling of learning processes. Here we show that a neural network
model with stochastic binary weights naturally gives prominence to exponentially rare dense regions
of solutions with a number of desirable properties such as robustness and good generalization performance, while typical solutions are isolated and hard to find. Binary solutions of the standard
perceptron problem are obtained from a simple gradient descent procedure on a set of real values
parametrizing a probability distribution over the binary synapses. Both analytical and numerical
results are presented. An algorithmic extension aimed at training discrete deep neural networks is
also investigated.
Learning can be regarded as an optimization process
over the connection weights of a neural network. In
nature, synaptic weights are known to be plastic, low
precision and unreliable, and it is an interesting issue
to understand if this stochasticity can help learning or
if it is an obstacle. The debate about this issue has a
long history and is still unresolved (see [1] and references therein). Here, we provide quantitative evidence
that the stochasticity associated with noisy low precision synapses can drive elementary supervised learning
processes towards a particular type of solutions which,
despite being rare, are robust to noise and generalize
well — two crucial features for learning processes.
In recent years, multi-layer (deep) neural networks
have gained prominence as powerful tools for tackling a
large number of cognitive tasks [2]. In a K-class classification task, neural network architectures are typically
trained as follows. For any input x ∈ X (the input space
X typically being a tensor space) and for a given set of
parameters W , called synaptic weights, the network defines a probability density function P (y | x, W ) over the
K possible outcomes. This is done through composition
of affine transformations involving the synaptic weights
W , element wise non-linear operators, and finally a softmax operator that turns the outcome of previous operations into a probability density function [3]. The weights
W are adjusted, in a supervised learning scenario, using
a training set D of M known input-output associations,
D = {(x^µ, y^µ)}_{µ=1..M}. The learning problem is reframed
into the problem of maximizing a log-likelihood L̃ (W )
over the synaptic weights W :
max_W L̃(W) := Σ_{(x,y)∈D} log P(y | x, W)    (1)
The maximization problem is approximately solved
using variants of the Stochastic Gradient Descent
(SGD) procedure over the loss function −L̃ (W ) [4].
In a Bayesian approach instead one is interested
in computing the posterior distribution P (W | D) ∝
P (D | W ) P (W ), where P (W ) is some prior over the
weights W . In deep networks, unfortunately, the exact computation of P (W | D) is typically infeasible and
various approximated approaches have been proposed
[5–7].
Shallow neural network models, such as the perceptron model for binary classification, are amenable to analytic treatment while exposing a rich phenomenology.
They have attracted great attention from the statistical
physics community for many decades [8–16]. In the perceptron problem we have binary outputs y ∈ {−1, +1},
while inputs x and weights W are N-component vectors. Under some statistical assumptions on the training
set D and for large N , single variable marginal probabilities P (Wi | D) can be computed efficiently, using
Belief Propagation [17–19]. The learning dynamics has
also been analyzed, in particular in the online learning
setting [11, 20]. In a slightly different perspective the
perceptron problem is often framed as the task of minimizing the error-counting Hamiltonian
min_W H(W) := Σ_{(x,y)∈D} Θ( −y Σ_{i=1}^{N} W_i x_i ),    (2)
where Θ (x) is the Heaviside step function, Θ (x) = 1
if x > 0 and 0 otherwise. As a constraint satisfaction
problem, it is said to be satisfiable (SAT) if zero-energy (i.e. H(W) = 0) configurations
exist, and unsatisfiable (UNSAT) otherwise. We call such configurations solutions. Statistical
physics analysis, assuming random and uncorrelated D, shows a sharp threshold at a
certain αc = M/N, when N grows large, separating a
SAT phase from an UNSAT one. Moreover, restricting
the synaptic space to binary values, Wi = ±1, leads to
a more complex scenario: most solutions are essentially
isolated and computationally hard to find [13, 21]. Some
efficient algorithms do exist though [12, 22] and generally land in a region dense of solutions. This apparent
inconsistency has been solved through a large deviation
analysis which revealed the existence of sub-dominant
and dense regions of solutions [14, 23]. This analysis introduced the concept of Local Entropy [14] which subsequently led to other algorithmic developments [24–26]
(see also [27] for related analysis).
In the generalization perspective, solutions within a
dense region may be loosely considered as representative of the entire region itself, and therefore act as better pointwise predictors than isolated solutions, since
the optimal Bayesian predictor is obtained averaging all
solutions [14].
Here, we propose a method to solve the binary perceptron problem (2) through a relaxation to a distributional space. We introduce a perceptron problem with
stochastic discrete weights, and show how the learning
process is naturally driven towards dense regions of solutions, even in the regime in which they are exponentially
rare compared to the isolated ones. In perspective, the
same approach can be extended to the general learning
problem (1), as we will show.
Denote with Qθ (W ) a family of probability distributions over W parametrized by a set of variables θ.
Consider the following problem:
max_θ L(θ) := Σ_{(x,y)∈D} log E_{W∼Q_θ} P(y | x, W)    (3)
Here L (θ) is the log-likelihood of a model where
for each training example (x, y) ∈ D the synaptic weights are independently sampled according to
Qθ (W ).
Within this scheme two class predictors can be devised for any input x: ŷ1(x) =
argmax_y P(y | x, Ŵ), where Ŵ = argmax_W Q_θ(W),
and ŷ2(x) = argmax_y ∫ dW P(y | x, W) Q_θ(W). In this
paper we will analyze the quality of the training error
given by the first predictor. Generally, dealing with
Problem (3) is more difficult than dealing with Problem (1), since it retains some of the difficulties of the
computation of P (W | D). Also notice that for any maximizer W* of Problem (1) we have that δ(W − W*) is
a maximizer of Problem (3) provided that it belongs to
the parametric family, as can be shown using Jensen’s
inequality. Problem (3) is a "distributional" relaxation
of Problem (1).
Optimizing L (θ) instead of L̃ (W ) may seem an unnecessary complication. In this paper we argue that
there are two reasons for dealing with this kind of task.
First, when the configuration space of each synapse
is restricted to discrete values, the network cannot be
trained with SGD procedures. The problem, while being
very important for computational efficiency and memory gains, has been tackled only very recently [5, 28].
Since variables θ typically lie in a continuous manifold
instead, standard continuous optimization tools can be
applied to L (θ). Also, the learning dynamics on L (θ)
enjoys some additional properties when compared to the
dynamics on L̃ (W ). In the latter case additional regularizers, such as dropout and L2 norm, are commonly
used to improve generalization properties [4]. The SGD
in the θ-space instead already incorporates the kind of
natural regularization intrinsic in the Bayesian approach
and the robustness associated to high local entropy [14].
[Figure 1. (Left) The training error and the squared norm against the number of training epochs, for α = 0.55 and N = 10001, averaged over 100 samples. (Right) Success probability in the classification task as a function of the load α for networks of size N = 1001, 10001, averaging 1000 and 100 samples respectively. In the inset we show the average training error at the end of GD as a function of α.]
Here we make a case for these arguments by a numerical and analytical study of the proposed approach for the binary perceptron. We also present promising preliminary numerical results on deeper networks.
Learning for the Stochastic Perceptron. Following
the above discussion, we now introduce our binary
stochastic perceptron model. For each input x presented, N synaptic weights W = (W1 , . . . , WN ), Wi ∈
{−1, +1}, are randomly extracted according to the distribution
Q_m(W) = ∏_{i=1}^{N} [ (1 + m_i)/2 · δ_{W_i,+1} + (1 − m_i)/2 · δ_{W_i,−1} ]    (4)
where δa,b is the Kronecker delta symbol. We will refer
to the set m = (mi )i , where mi ∈ [−1, 1] ∀i, as the
magnetizations or the control parameters. We choose
the probability P (y | x, W ) on the class y ∈ {−1, +1}
for a given input x as follows:
P(y | x, W) = Θ( y Σ_{i=1}^{N} W_i x_i ).    (5)
While other possibilities for P (y | x, W ) could be considered, this particular choice is directly related to the
form of the Hamiltonian in Problem (2), which we
ultimately aim to solve. Given a training set D = {(x^µ, y^µ)}_{µ=1..M}, we can then compute the log-likelihood
function of Eq. (3), with the additional assumption that
N is large and the central limit theorem applies. It reads
L(m) = Σ_{(x,y)∈D} log H( −y Σ_i m_i x_i / √( Σ_i (1 − m_i²) x_i² ) ),    (6)
where H(x) := ∫_x^∞ dz e^{−z²/2} / √(2π). Minimizing
−L (m) instead of finding the solutions of Problem (2)
allows us to use the simplest method for approximately
solving continuous optimization problems, the Gradient
Descent (GD) algorithm:
m_i^{t+1} ← clip( m_i^t + η ∂_{m_i} L(m^t) ).    (7)
We could have adopted the more efficient SGD approach; however, in our case simple GD is already effective. In the last equation η is a suitable learning rate and
clip(x) := max(−1, min(1, x)), applied element-wise.
Parameters are randomly initialized to small values,
m_i^0 ∼ N(0, N^{−1}). At any epoch t in the GD dynamics
a binarized configuration Ŵ_i^t = sign(m_i^t) can be used to
compute the training error Ê^t = (1/M) H(Ŵ^t). We consider a training set D where each input component x_i^µ is
sampled uniformly and independently in {−1, 1} (with
this choice we can set y^µ = 1 for all µ without loss of generality). The evolution of the network during GD is shown
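As an illustration of Eqs. (6)–(7), the following is a minimal NumPy sketch of the GD dynamics on the magnetizations; the learning rate, the number of epochs and the random seeds are illustrative choices, not the exact values used for Fig. 1.

```python
import numpy as np
from scipy.stats import norm

def train_stochastic_perceptron(X, y, epochs=500, lr=0.5, seed=0):
    """Gradient ascent on L(m) of Eq. (6) with the clipped update of Eq. (7).

    X: (M, N) array of patterns with entries in {-1, +1}; y: (M,) labels in {-1, +1}.
    Returns the binarized configuration sign(m) and the training-error history.
    """
    rng = np.random.default_rng(seed)
    M, N = X.shape
    m = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)           # m_i^0 ~ N(0, 1/N)
    errors = []
    for _ in range(epochs):
        den = np.sqrt(((1.0 - m**2) * X**2).sum(axis=1)) + 1e-12
        u = -y * (X @ m) / den                               # argument of H in Eq. (6)
        w = norm.pdf(u) / np.maximum(norm.cdf(-u), 1e-300)   # -H'(u)/H(u), H(u) = Phi(-u)
        # dL/dm_i = sum_mu w_mu * ( y_mu x_{mu,i}/den_mu - u_mu m_i x_{mu,i}^2 / den_mu^2 )
        grad = (w[:, None] * (y[:, None] * X / den[:, None]
                              - u[:, None] * m * X**2 / den[:, None]**2)).sum(axis=0)
        m = np.clip(m + lr * grad, -1.0, 1.0)                # Eq. (7)
        W_hat = np.where(m >= 0, 1.0, -1.0)                  # binarized configuration
        errors.append(np.mean(y * (X @ W_hat) <= 0))         # training error of Eq. (2)
    return W_hat, errors

# Random instance at load alpha = 0.55 (y can be set to all ones without loss of generality)
rng = np.random.default_rng(1)
N, alpha = 1001, 0.55
X = rng.choice([-1.0, 1.0], size=(int(alpha * N), N))
W_hat, errs = train_stochastic_perceptron(X, np.ones(int(alpha * N)))
print("final training error:", errs[-1])
```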
in Fig. 1. The training error goes progressively to zero
while the mean squared norm of the control variables, q_⋆^t = (1/N) Σ_i (m_i^t)², approaches one. Therefore the distribution Qm concentrates around a single configuration
as the training is progressing. This natural flow is similar to the annealing of the coupling parameter manually
performed in local entropy inspired algorithms [25, 26].
We also show in Fig. 1 the probability over the realizations of D of finding a solution of the binary problem as
a function of the load α = M/N. The algorithmic capacity of GD is approximately α_GD ≈ 0.63. This value has
to be compared to the theoretical capacity α_c ≈ 0.83,
above which there are almost surely no solutions [9],
and to state-of-the-art algorithms based on message passing heuristics, for which we have a range of capacities
α_MP ∈ [0.6, 0.74] [12, 22, 29]. Therefore GD reaches
loads only slightly worse than those reached by much
more finely tuned algorithms, a surprising result for such
a simple procedure. Also, for α slightly above α_GD
the training error remains comparably low, as shown
in Fig. 1. In our experiments most variants of the GD
procedure of Eq. (7) performed just as well: e.g. SGD,
or GD computed on the fields h_i^t = tanh^{−1}(m_i^t) rather
than the magnetizations [30]. Other update rules for
the control parameters can be derived as multiple passes
of on-line Bayesian learning [31, 32]. Variations of rule
(7) towards biological plausibility are discussed in the
SM.
Deep Networks. We applied our framework to deep
neural networks with binary stochastic weights and sign
activation functions. Using an uncorrelated neuron approximation, as in Ref. [6], we trained the network using
the standard SGD algorithm with backpropagation. We
give the details in the SM. On the MNIST benchmark
problem [33], using a network with three hidden layers
we achieved ∼ 1.7% test error, a very good result for a
network with binary weights and activations and with
no convolutional layers [34]. No other existing approach
to the binary perceptron problem has been extended yet
to deeper settings.
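As a rough illustration only (the exact deep-network recipe is given in the SM and is not reproduced here), a single layer with stochastic binary weights and sign activations can be propagated in the uncorrelated-neuron, central-limit approximation by tracking means and variances, in the spirit of Refs. [5–7]:

```python
import numpy as np
from scipy.stats import norm

def stochastic_binary_layer(x_mean, x_var, m):
    """Moment propagation through one layer with stochastic binary weights
    (means m, variances 1 - m^2) and sign activations, assuming independent
    inputs and Gaussian pre-activations (central limit theorem).
    x_mean, x_var: (batch, n_in) moments of the inputs; m: (n_out, n_in)."""
    a_mean = x_mean @ m.T                                       # E[sum_i W_i x_i]
    a_var = (x_var + x_mean**2) @ (1.0 - m**2).T + x_var @ (m**2).T
    p_plus = norm.cdf(a_mean / np.sqrt(a_var + 1e-12))          # P(sign(a) = +1)
    out_mean = 2.0 * p_plus - 1.0
    out_var = 1.0 - out_mean**2
    return out_mean, out_var
```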
Statistical mechanics Analysis. We now proceed
with the analytical investigation of the equilibrium
properties of the stochastic perceptron, which partly
motivates the good performance of the GD dynamics.
The starting point of the analysis is the partition function
Z = ∫_Ω ∏_i dm_i δ( Σ_i m_i² − q_⋆ N ) e^{β L(m)}    (8)
where Ω = [−1, 1]^N, β is an inverse temperature, and we constrained the squared norm to q_⋆ N in order to mimic the natural flow of q_⋆^t in the training process. The dependence on the training set D is implicit in the last equation. We shall denote with E_D the average over the training sets, with i.i.d. input and output components uniform in {−1, 1}. We investigate the average properties of the system for large N and fixed load α = M/N using the replica method in the Replica Symmetric (RS) ansatz [35]. Unfortunately the RS solution becomes locally unstable for very large β. Therefore, instead of taking the infinite β limit to maximize the likelihood, we will present the results obtained for β large but still in the RS region. The details of the free energy calculation and of the stability check can be found in the SM.
[Figure 2. (Left) Energy of the Binarized Configuration versus the control variable q_⋆. We show the equilibrium prediction of Eq. (9), and numerical results from the GD algorithm and from a GD variant where after each update we rescale the norm of m to q_⋆ until convergence, before moving to the next value of q_⋆ according to a certain schedule. The results are averaged over 20 random realizations of the training set with N = 10001. (Right) Entropy of binary solutions at fixed distance d from BCs of the spherical, binary and stochastic perceptron (q_⋆ = 0.7, 0.8 and 0.9 from bottom to top) at thermodynamic equilibrium. In both figures α = 0.55; moreover β = 20 for the stochastic perceptron and β = ∞ for the spherical and binary ones.]
Energy of the Binarized Configuration. We now analyze some properties of the mode of the distribution
Qm (W ), namely Ŵi = sign (mi ), that we call Binarized
Configuration (BC). The average training error per pattern is:
E = lim_{N→∞} (1/(αN)) E_D ⟨ Σ_{(x,y)∈D} Θ( −y Σ_i sign(m_i) x_i ) ⟩    (9)
where ⟨•⟩ is the thermal average over m according to the partition function (8), which implicitly depends on D, q_⋆ and β. The last equation can be computed
analytically within the replica framework (see SM). In
Fig. 2 (Left) we show that for large β the BC becomes a solution of the problem when q_⋆ approaches one. This
is compared to the values of the training error obtained from GD dynamics at corresponding values of q_⋆, and
to a modified GD dynamics where we let the system equilibrate at fixed q_⋆. The latter case, although we are
at finite N and we are considering a dynamical process that could suffer from the presence of local minima, is in
reasonable agreement with the equilibrium result of Eq.
(9).
Geometrical structure of the solution space. Most solutions of the binary perceptron problem are isolated
[13], while a subdominant but still exponentially large
number belongs to a dense connected region [14]. Solutions in the dense region are the only ones that are
algorithmically accessible. Here we show that BCs of
the stochastic binary perceptron typically belong to the
dense region, provided q_⋆ is high enough. To prove this
we count the number of solutions at a fixed Hamming
distance d from a typical BC (this corresponds to fixing
an overlap p = 1 − 2d). Following the approach of Franz
and Parisi [36] we introduce the constrained partition
function
Z(d, m) = Σ_W ∏_{(x,y)∈D} Θ( y Σ_i W_i x_i ) · δ( N(1 − 2d) − Σ_i sign(m_i) W_i ),    (10)
where the sum is over the {−1, +1}^N binary configurations. The Franz–Parisi entropy S(d) is then given
by
S(d) = lim_{N→∞} (1/N) E_D ⟨ log Z(d, m) ⟩.    (11)
We show how to compute S (d) in the SM. In Fig. 2
(Right) we compare S (d) for the stochastic perceptron
with the analogous entropies obtained by substituting the
expectation ⟨•⟩ over m in Eq. (11) with a uniform sampling from the solution space of the spherical (the model
of Ref. [8]) and the binary (as in Ref. [13]) perceptron.
The distance gap between the BC and the nearest binary solutions (i.e., the value of the distance after which
S(d) becomes positive) vanishes as q_⋆ is increased: in
this regime the BC belongs to the dense cluster and we
have an exponential number of solutions at any distance
d > 0. Typical binary solutions and binarized solutions
of the continuous perceptron are isolated instead (finite
gap, corresponding to S(d) = 0 at small distances). In
the SM we provide additional numerical results on the
properties of the energetic landscape in the neighbor-
hood of different types of solutions, showing that solutions in flatter basins achieve better generalization than
those in sharp ones.
Conclusions.
Our analysis shows that stochasticity in the synaptic connections may play a fundamental role in learning processes, by effectively reweighting
the error loss function, enhancing dense, robust regions,
suppressing narrow local minima and improving generalization.
The simple perceptron model allowed us to derive analytical results as well as to perform numerical tests.
Moreover, as we show in the SM, when considering
discretized priors, there exists a connection with the
dropout procedure which is popular in modern deep
learning practice. However, the most promising immediate application is in the deep learning scenario, where
this framework can be extended adapting the tools developed in Refs. [6, 7], and where we already achieved
state-of-the-art results in our preliminary investigations.
Hopefully, the general mechanism shown here can also
help to shed some light on biological learning processes,
where the role of low precision and stochasticity is still
an open question. Finally, we note that this procedure is
not limited to neural network models; for instance, application to constraint satisfaction problems is straightforward.
CB, HJK and RZ acknowledge ONR Grant N00014-17-1-2569.
[1] H. Sebastian Seung. Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron, 40(6):1063–1073, 2003.
[2] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[3] David J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[4] Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
[5] Daniel Soudry, Itay Hubara, and R. Meir. Expectation Backpropagation: parameter-free training of multilayer neural networks with real and discrete weights. Neural Information Processing Systems 2014, 2(1):1–9, 2014.
[6] José Miguel Hernández-Lobato and Ryan P. Adams. Probabilistic Backpropagation for scalable learning of Bayesian neural networks. Journal of Machine Learning Research, 37:1–6, 2015.
[7] Oran Shayar, Dan Levi, and Ethan Fetaya. Learning discrete weights using the local reparameterization trick. ArXiv e-prints, 2017.
[8] Elizabeth Gardner. The space of interactions in neural network models, 1988.
[9] Werner Krauth and Marc Mézard. Storage capacity of memory networks with binary couplings, 1989.
[10] Manfred Opper and Ole Winther. A mean field approach to Bayes learning in feed-forward neural networks. Physical Review Letters, 76(11):1964–1967, 1996.
[11] Sara Solla and Ole Winther. Optimal perceptron learning: an online Bayesian approach. In David Saad, editor, On-Line Learning in Neural Networks, pages 1–20. Cambridge University Press, 1998.
[12] Alfredo Braunstein and Riccardo Zecchina. Learning by message passing in networks of discrete synapses. Physical Review Letters, 96(3):030201, 2006.
[13] Haiping Huang and Yoshiyuki Kabashima. Origin of the computational hardness for learning with binary synapses. Physical Review E, 90(5):052813, 2014.
[14] Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical Review Letters, 115(12):128101, 2015.
[15] Silvio Franz and Giorgio Parisi. The simplest model of jamming. Journal of Physics A: Mathematical and Theoretical, 49(14):145001, 2016.
[16] Simona Cocco, Rémi Monasson, and Riccardo Zecchina. Analytical and numerical study of internal representations in multilayer neural networks with binary weights. Physical Review E, 54(1):717–736, 1996.
[17] Marc Mézard. The space of interactions in neural networks: Gardner's computation with the cavity method. Journal of Physics A: Mathematical and General, 22(12):2181–2190, 1989.
[18] Giorgio Parisi, Marc Mézard, and Miguel Angel Virasoro. Spin Glass Theory and Beyond. World Scientific, Singapore, 1987.
[19] Andrea Montanari and Marc Mézard. Information, Physics and Computation. Oxford University Press, 2009.
[20] David Saad. On-line Learning in Neural Networks. Cambridge University Press, 1998.
[21] Lenka Zdeborová and Marc Mézard. Constraint satisfaction problems with isolated solutions are hard. Journal of Statistical Mechanics: Theory and Experiment, 2008(12):P12004, 2008.
[22] Carlo Baldassi, Alfredo Braunstein, Nicolas Brunel, and Riccardo Zecchina. Efficient supervised learning in networks with binary synapses. Proceedings of the National Academy of Sciences, 104(26):11079–11084, 2007.
[23] Carlo Baldassi, Federica Gerace, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Learning may need only a few bits of synaptic precision. Physical Review E, 93(5):052313, 2016.
[24] Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Local entropy as a measure for sampling solutions in constraint satisfaction problems. Journal of Statistical Mechanics: Theory and Experiment, 2016(2):023301, 2016.
[25] Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proceedings of the National Academy of Sciences, 113(48):E7655–E7662, 2016.
[26] Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. ArXiv e-prints, 2016.
[27] Alfredo Braunstein, Luca Dall'Asta, Guilhem Semerjian, and Lenka Zdeborová. The large deviations of the whitening process in random constraint satisfaction problems. Journal of Statistical Mechanics: Theory and Experiment, 2016(5):053401, 2016.
[28] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1. ArXiv e-prints, 2016.
[29] Carlo Baldassi and Alfredo Braunstein. A max-sum algorithm for training discrete neural networks. Journal of Statistical Mechanics: Theory and Experiment, 2015(8):P08008, 2015.
[30] This has the advantage that it does not require clipping.
[31] Manfred Opper and Ole Winther. A Bayesian approach to on-line learning. In David Saad, editor, On-Line Learning in Neural Networks, pages 363–378, 1998.
[32] Thomas P. Minka. Expectation Propagation for approximate Bayesian inference. Uncertainty in Artificial Intelligence (UAI), 17(2):362–369, 2001.
[33] Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2010.
[34] Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, and Riccardo Zecchina. In preparation.
[35] Marc Mézard and Andrea Montanari. Information, Physics, and Computation. Oxford University Press, 2009.
[36] Silvio Franz and Giorgio Parisi. Recipes for metastable states in spin glasses. Journal de Physique I, 5(11):1401–1415, 1995.
On the role of synaptic stochasticity in training low-precision neural networks
Supplementary Material
Carlo Baldassi,1,2,3 Federica Gerace,2,4 Hilbert J. Kappen,5 Carlo Lucibello,2,4 Luca Saglietti,2,4 Enzo Tartaglione,2,4 and Riccardo Zecchina1,2,6
1 Bocconi Institute for Data Science and Analytics, Bocconi University, Milano, Italy
2 Italian Institute for Genomic Medicine, Torino, Italy
3 Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Italy
4 Dept. of Applied Science and Technology, Politecnico di Torino, Torino, Italy
5 Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, 6525 EZ Nijmegen, The Netherlands
6 International Centre for Theoretical Physics, Trieste, Italy
CONTENTS

I. Replica Symmetric solution and stability analysis
II. Energy of the Binarized Configuration
III. Franz-Parisi entropy
IV. Binary control variables
V. Stochastic deep networks
VI. A weighted perceptron rule
VII. Average energy around algorithmic solutions
References
I. REPLICA SYMMETRIC SOLUTION AND STABILITY ANALYSIS

In this Section we show how to compute the average free entropy of the stochastic perceptron model discussed in the main text, using the replica trick and under Replica Symmetric (RS) assumptions. The limit of validity of the RS ansatz will also be discussed. The statistical physics model of a perceptron with N stochastic binary synapses is defined by the partition function:

Z_N = \int_\Omega \prod_{i=1}^{N} dm_i \; \delta\!\left(\sum_{i=1}^{N} m_i^2 - q_\star N\right) e^{\beta L(m)} .   (1)
The partition function depends implicitly on the inverse temperature β, a training set D = \{y^\mu, x^\mu\}_{\mu=1}^{M} with M = αN for some α > 0, y^\mu ∈ \{−1, 1\}, x^\mu ∈ \{−1, 1\}^N, and a norm parameter q_\star. The integration is over the box Ω = [−1, 1]^N. For large N the log-likelihood is given by

L(m) = \sum_{\mu=1}^{M} \log H\!\left( - \frac{y^\mu \sum_{i=1}^{N} m_i x_i^\mu}{\sqrt{N(1-q_\star)}} \right),   (2)

with H(x) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right). As usual in statistical physics, we are interested in the large-N limit of the system, an assumption which has already been exploited in expressing L(m) as in Eq. (2). Also, the average E_D over random instances of the training set is considered: the x_i^\mu are uniformly and independently distributed over \{−1, 1\} and, without loss of generality, we set y^\mu = 1 ∀μ. Notice that although L(m) is concave on the box Ω, the norm constraint decomposes Ω into disjoint domains, therefore the maximization problem (i.e. the large-β limit) is non-trivial.
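As a concrete illustration of Eqs. (1)–(2), the following sketch (in Python/NumPy; not part of the original paper, and the toy data, array shapes and helper names are our own assumptions) evaluates H(x) and the log-likelihood L(m) for a configuration m satisfying the spherical constraint:

```python
import numpy as np
from scipy.special import erfc

def H(x):
    # H(x) = (1/2) erfc(x / sqrt(2)), the standard Gaussian tail probability
    return 0.5 * erfc(x / np.sqrt(2))

def log_likelihood(m, X, y, q_star):
    """Log-likelihood L(m) of Eq. (2); X has shape (M, N), y in {-1, +1}^M."""
    N = X.shape[1]
    pre_activations = y * (X @ m)                 # y^mu * sum_i m_i x_i^mu
    args = -pre_activations / np.sqrt(N * (1.0 - q_star))
    return np.sum(np.log(H(args)))

# toy usage with random data (illustrative only)
rng = np.random.default_rng(0)
N, M, q_star = 1001, 550, 0.55
X = rng.choice([-1.0, 1.0], size=(M, N))
y = np.ones(M)
m = np.sqrt(q_star) * rng.choice([-1.0, 1.0], size=N)   # satisfies sum_i m_i^2 = q_star * N
print(log_likelihood(m, X, y, q_star))
```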
We define the average asymptotic free entropy as

\phi = \lim_{N\to\infty} \frac{1}{N}\, E_D \log Z_N ,   (3)

where the limit is taken at fixed α. In order to compute φ we shall resort to the standard machinery of the replica method [13, 20]. The replicated partition function of the model is given by

E_D Z_N^n = E_D \int_{\Omega^{\otimes n}} \prod_{a=1}^{n}\prod_{i=1}^{N} dm_i^a \; \prod_{a=1}^{n} \delta\!\left(\sum_{i=1}^{N} (m_i^a)^2 - q_\star N\right) \prod_{\mu=1}^{M}\prod_{a=1}^{n} H^\beta\!\left(-\frac{\sum_{i=1}^{N} x_i^\mu m_i^a}{\sqrt{N(1-q_\star)}}\right).   (4)

After some manipulations, at the leading order in N, we obtain

E_D Z_N^n \sim \int \prod_{a} \frac{d\hat q_{aa}}{2\pi} \prod_{a<b} \frac{d\hat q_{ab}\, dq_{ab}}{2\pi}\; e^{N n \phi[\hat q, q]},   (5)
where the replicated free entropy is given by

\phi[\hat q, q] = -\frac{1}{2n}\sum_{a,b} \hat q_{ab}\, q_{ab} + G_S[\hat q] + \alpha\, G_E[q],   (6)

where we defined q_{aa} ≡ q_\star for convenience. In the last equation we also defined

G_S[\hat q] = \frac{1}{n}\log \int_{[-1,1]^n} \prod_a dm^a \; e^{\frac{1}{2}\sum_{ab} \hat q_{ab}\, m^a m^b},   (7)

G_E[q] = \frac{1}{n}\log \int \prod_a \frac{d\hat u^a\, du^a}{2\pi}\; e^{-\frac{1}{2}\sum_{ab} q_{ab}\,\hat u^a \hat u^b + i\sum_a \hat u^a u^a} \prod_a H^\beta\!\left(-\frac{u^a}{\sqrt{1-q_\star}}\right).   (8)
Saddle point evaluation of the replicated partition function yields the following identities:

\hat q_{ab} = -\alpha\, \langle\langle \hat u_a \hat u_b \rangle\rangle_E , \qquad a > b,   (9)
q_{ab} = \langle\langle m_a m_b \rangle\rangle_S , \qquad a > b,   (10)
q_{aa} \equiv q_\star = \langle\langle m_a^2 \rangle\rangle_S .   (11)

Here we denoted with ⟨⟨•⟩⟩_S and ⟨⟨•⟩⟩_E the expectations taken according to the single-body partition functions appearing in the logarithms of Eq. (7) and Eq. (8) respectively. Notice that the last equation is an implicit equation for \hat q_{aa}.
We perform the saddle point evaluation and analytic continuation n ↓ 0 within the Replica Symmetric (RS) ansatz. Therefore we have q_{ab} = q_0, \hat q_{ab} = \hat q_0 for a ≠ b and \hat q_{aa} = \hat q_1 ∀a. The RS prediction for the average free entropy is then given by

\phi_{RS} = \underset{q_0,\hat q_0,\hat q_1}{\mathrm{extr}} \left\{ \frac{1}{2}\left(q_0 \hat q_0 - q_\star \hat q_1\right) + G_S(\hat q_0, \hat q_1) + \alpha\, G_E(q_0, q_\star) \right\},   (12)

where

G_S(\hat q_0, \hat q_1) = \int Dz\, \log \int_{-1}^{1} dm\; e^{\frac{1}{2}(\hat q_1 - \hat q_0) m^2 + \sqrt{\hat q_0}\, z\, m},   (13)

G_E(q_0) = \int Dz\, \log \int Du\; H^\beta\!\left(-\frac{\sqrt{q_0}\, z + \sqrt{q_\star - q_0}\, u}{\sqrt{1-q_\star}}\right).   (14)

In the last equations we used the notation Dz = \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}. Saddle point conditions yield the set of equations
q_0 = -2\,\frac{\partial G_S}{\partial \hat q_0}\,; \qquad \hat q_0 = -2\alpha\,\frac{\partial G_E}{\partial q_0}\,; \qquad 0 = 2\,\frac{\partial G_S}{\partial \hat q_1} - q_\star ,   (15)

which we solve iteratively. The last equation is an implicit equation for \hat q_1.
This derivation is the starting point for the more complicated calculations of the energy of the Binarized Configurations (BCs) in Section II and of the Franz-Parisi entropy in Section III.
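The iterative solution of the RS saddle-point equations can be organized as a damped fixed-point loop; the sketch below is our own illustration (in Python), with placeholder update functions standing in for the numerical Gaussian integrals of Eqs. (13)–(15):

```python
import numpy as np

def solve_rs_equations(update_q0, update_qhat0, update_qhat1,
                       q0=0.5, qhat0=0.1, qhat1=0.1,
                       damping=0.5, tol=1e-8, max_iter=10_000):
    """Damped fixed-point iteration for the RS order parameters.

    The three `update_*` callables are assumed to evaluate the right-hand
    sides of the saddle-point equations (by numerically integrating over the
    Gaussian measures Dz, Du); they are placeholders, not definitions from
    the paper.
    """
    for _ in range(max_iter):
        q0_new    = update_q0(q0, qhat0, qhat1)      # q0    = -2  dG_S/dqhat0
        qhat0_new = update_qhat0(q0, qhat0, qhat1)   # qhat0 = -2a dG_E/dq0
        qhat1_new = update_qhat1(q0, qhat0, qhat1)   # solves 0 = 2 dG_S/dqhat1 - q_star
        deltas = (abs(q0_new - q0), abs(qhat0_new - qhat0), abs(qhat1_new - qhat1))
        q0    = damping * q0    + (1 - damping) * q0_new
        qhat0 = damping * qhat0 + (1 - damping) * qhat0_new
        qhat1 = damping * qhat1 + (1 - damping) * qhat1_new
        if max(deltas) < tol:
            break
    return q0, qhat0, qhat1
```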
While we conjecture φ_RS to be exact at low β, in the region of high β that we need to explore in order to maximize the log-likelihood L(m) it may be necessary to use a replica symmetry breaking formalism. A necessary, but not sufficient, condition for the validity of the RS formalism is the local stability of the free energy functional of Eq. (6) at the RS stationary point. The stability criterion involving the eigenvalues of the Hessian can be rephrased, with a slight adaptation of the derivation of Ref. [14], as

\alpha\, \gamma_E\, \gamma_S < 1.   (16)

Here γ_E and γ_S are the relevant eigenvalues of the Hessians of G_E[q] and G_S[\hat q] respectively, in the small-n limit.
They are given by

\gamma_E = \int Dz \left[ \hat u^2(z) - \left(\hat u(z)\right)^2 \right]^2,   (17)
\gamma_S = \int Dz \left[ m^2(z) - \left(m(z)\right)^2 \right]^2.   (18)
Figure 1. (RS instability lines; β_c versus q_\star.) Critical value β_c for the stability of the RS solution for different loads α = 0.5, 0.55, 0.6 (from top to bottom) as a function of q_\star. Above β_c the RS solution is unstable and a replica symmetry breaking ansatz should be considered to obtain the correct solution.
Expectations in the last equations are defined by

\hat u^k(z) \equiv \frac{\int \frac{d\hat u\, du}{2\pi}\; \hat u^k\, e^{-\frac{1}{2}(q_\star - q_0)\hat u^2 + i\hat u u + i\hat u\sqrt{q_0}\, z}\, H^\beta(u)}{\int \frac{d\hat u\, du}{2\pi}\; e^{-\frac{1}{2}(q_\star - q_0)\hat u^2 + i\hat u u + i\hat u\sqrt{q_0}\, z}\, H^\beta(u)},   (19)

m^k(z) \equiv \frac{\int_{-1}^{1} dm\; m^k\, e^{\frac{1}{2}(\hat q_1 - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}}{\int_{-1}^{1} dm\; e^{\frac{1}{2}(\hat q_1 - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}}.   (20)
In Fig. 1 we show the stability line β_c(q_\star), defined by the condition αγ_Eγ_S = 1, for different values of α. Due to numerical problems arising in computing the integrals at high β, we explore a small q_\star window. We note that β_c(q_\star) stays finite in the range of parameters we explored and that the β ↑ ∞ limit of the RS solution cannot be taken carelessly. Nonetheless β_c(q_\star) is generally quite high, although decreasing with α. In the main text, where we presented the results for α = 0.55, we set the inverse temperature to β = 20, where the RS results are supposedly correct and quantitatively close to the β = +∞ limit.
II. ENERGY OF THE BINARIZED CONFIGURATION

We now show how to compute the average energy E (also called training error) associated to a typical Binarized Configuration (BC). In the thermodynamic limit it is written as

E = \lim_{N\to\infty} E_D\!\left[\left\langle \Theta\!\left(-y^1 \sum_i \mathrm{sign}(m_i)\, x_i^1\right)\right\rangle\right],   (21)
where the average ⟨•⟩ is over m sampled according to the partition function (1) and Θ(x) is the Heaviside step function. Along the lines of the previous Section, we resort to the replica approach, although here the replica of index 1 is distinguished from the others:

E = \lim_{N\to\infty}\lim_{n\to 0} E_D \int_{\Omega^{\otimes n}} \prod_{a,i} dm_i^a \; \prod_{a=1}^{n}\delta\!\left(\sum_{i=1}^{N}(m_i^a)^2 - q_\star N\right)\, \Theta\!\left(-\sum_i \mathrm{sign}(m_i^1)\, x_i^1\right)\, e^{\beta\sum_a L(m^a)}.   (22)
In addition to the order parameters q_{ab} = \frac{1}{N}\sum_i m_i^a m_i^b of Section I and the conjugated Lagrangian multipliers \hat q_{ab}, we also have to introduce the overlaps p_a = \frac{1}{N}\sum_i \mathrm{sign}(m_i^1)\, m_i^a and the corresponding multipliers \hat p_a. We obtain the following expression for the mean energy E:

E = \lim_{N\to\infty}\lim_{n\to 0} \int \prod_{a<b} dq_{ab} \prod_{a\le b}\frac{d\hat q_{ab}}{2\pi} \prod_a \frac{dp_a\, d\hat p_a}{2\pi}\; e^{N n \tilde\phi[q,\hat q,p,\hat p]}\; \tilde E[q,p].   (23)
The free entropy functional \tilde\phi[q, \hat q, p, \hat p] in this case reads

\tilde\phi[q,\hat q,p,\hat p] = -\frac{1}{2n}\sum_{a,b}\hat q_{ab}\, q_{ab} - \frac{1}{n}\sum_a \hat p_a\, p_a + G_S[\hat q,\hat p] + \alpha\, G_E[q],   (24)

G_S[\hat q,\hat p] = \frac{1}{n}\log \int_{[-1,1]^n}\prod_a dm^a\, \exp\!\left(\frac{1}{2}\sum_{a,b}\hat q_{ab}\, m^a m^b + \mathrm{sign}(m^1)\sum_a \hat p_a\, m^a\right),   (25)

G_E[q] = \frac{1}{n}\log \int \prod_a \frac{du^a\, d\hat u^a}{2\pi} \prod_a H^\beta\!\left(-\frac{u^a}{\sqrt{1-q_\star}}\right)\exp\!\left(i\sum_a u^a \hat u^a - \frac{1}{2}\sum_{a,b} q_{ab}\,\hat u^a\hat u^b\right),   (26)
and the other term appearing in the integrand is given by

\tilde E[q,p] = \int \prod_a \frac{du^a\, d\hat u^a}{2\pi} \int \frac{d\tilde u\, d\hat{\tilde u}}{2\pi}\; \prod_a H^\beta\!\left(-\frac{u^a}{\sqrt{1-q_\star}}\right)\, \Theta(-\tilde u)\; e^{\,i\sum_a u^a\hat u^a - \frac{1}{2}\hat{\tilde u}^2 + i\tilde u\hat{\tilde u} - \frac{1}{2}\sum_{a,b} q_{ab}\hat u^a\hat u^b - \hat{\tilde u}\sum_a p_a\hat u^a}.   (27)
Saddle point evaluation of \tilde\phi with respect to p_a readily gives \hat p_a = 0. On this submanifold, \tilde\phi reduces to the functional φ of the previous Section, the matrices q_{ab} and \hat q_{ab} can be evaluated at the saddle point in terms of q_0, \hat q_1, \hat q_0 within the RS ansatz, and the analytic continuation to n = 0 is finally obtained. The saddle point conditions with respect to \hat p_a instead, i.e. ∂\tilde\phi/∂\hat p_a = 0, fix the parameters p_a ≡ \tilde p ∀a > 1 and p_1 ≡ p (with a slight abuse of notation: the scalar value p is not to be confused with the n-dimensional vector of the previous equations). In conclusion, in the small-n limit, after solving Eqs. (15) for q_0, \hat q_1, \hat q_0 we compute the saddle point values of p and \tilde p by
p = \int Dz\, \frac{\int_{-1}^{1} dm\; \mathrm{sign}(m)\, m\; e^{\frac{1}{2}(\hat q_* - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}}{\int_{-1}^{1} dm\; e^{\frac{1}{2}(\hat q_* - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}},   (28)

\tilde p = \int Dz\, \frac{\left[\int_{-1}^{1} dm\; \mathrm{sign}(m)\; e^{\frac{1}{2}(\hat q_* - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}\right]\left[\int_{-1}^{1} dm\; m\; e^{\frac{1}{2}(\hat q_* - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}\right]}{\left[\int_{-1}^{1} dm\; e^{\frac{1}{2}(\hat q_* - \hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}\right]^2}.   (29)
The value of E is then simply given by \tilde E evaluated at the saddle point. After some manipulation of the integrals appearing in Eq. (27) we finally arrive at

E = \int Dz\, \frac{\int Du\; H\!\left(\dfrac{\frac{p-\tilde p}{\sqrt{q_\star - q_0}}\, u + \frac{\tilde p}{\sqrt{q_0}}\, z}{\sqrt{1 - \frac{\tilde p^2}{q_0} - \frac{(p-\tilde p)^2}{q_\star - q_0}}}\right) H^\beta\!\left(-\dfrac{\sqrt{q_\star - q_0}\, u + \sqrt{q_0}\, z}{\sqrt{1-q_\star}}\right)}{\int Du\; H^\beta\!\left(-\dfrac{\sqrt{q_\star - q_0}\, u + \sqrt{q_0}\, z}{\sqrt{1-q_\star}}\right)}.   (30)

This result is presented as the equilibrium curve in Figure 2 (Left) of the main text.
III. FRANZ-PARISI ENTROPY

In the previous Section we obtained analytical results showing that typical BCs of the stochastic perceptron can achieve essentially zero training error if β and q_\star are large enough, and if the load α is below some critical capacity. These BCs are therefore solutions of the binary perceptron problem. While typical (most numerous) solutions of the binary perceptron problem are known to be isolated [17], we will show here that typical BCs belong to the dense solution region uncovered in Ref. [5]. Notice that, while for q_\star = 1 the stochastic perceptron reduces to the binary one, the limit q_\star → 1 of many observables will not be continuous due to this phenomenon. Most noticeably, as shown in [5], the generalization error of solutions in the dense region is typically lower than the generalization error of isolated solutions.
We are interested in counting the number of solutions of the binary perceptron problem at fixed Hamming distance d from a typical BC of the stochastic perceptron. For notational convenience we work at fixed overlap p = \frac{1}{N}\sum_i W_i\,\mathrm{sign}(m_i), which can be linked to d by the relation p = 1 − 2d. Following [12, 17] we define the Franz-Parisi entropy as

S(p) = \lim_{N\to\infty} \frac{1}{N}\, E_D \left\langle \log \sum_{W} \prod_{(x,y)\in D} \Theta\!\left(y \sum_i W_i x_i\right)\, \delta\!\left(pN - \sum_i \mathrm{sign}(m_i)\, W_i\right) \right\rangle,   (31)
where the expectation ⟨•⟩ over m is defined as usual according to the Gibbs measure of the stochastic perceptron given by Eq. (1). The sum over W runs over the binary configuration space \{−1, +1\}^N. The computation of S(p) is lengthy but straightforward, and can be done along the lines of Refs. [12, 17] using the replica method within the RS ansatz. Here we have the additional complication of some extra order parameters, due to the presence of the sign in the constraint. We present here just the final result. First, the order parameters q_0, \hat q_0 and \hat q_1 can be independently fixed by solving the saddle point equations (15). S(p) is then given by
S(p) = \underset{Q_0,\hat Q_0,s_0,s_1,\hat s_0,\hat s_1,\hat p}{\mathrm{extr}} \left\{ -\frac{1}{2}\hat Q_0(1-Q_0) + \hat s_0 s_0 - \hat s_1 s_1 - \hat p\, p + G_S^{FP}(\hat Q_0,\hat s_0,\hat s_1,\hat p) + \alpha\, G_E^{FP}(Q_0,s_0,s_1) \right\},   (32)
where the entropic contribution is given by

G_S^{FP}(\hat Q_0,\hat s_0,\hat s_1,\hat p) = \int Dz\, \frac{\int_{-1}^{1} dm \int D\eta\; e^{\frac{1}{2}(\hat q_1-\hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}\; A(m,\eta,z)}{\int_{-1}^{1} dm\; e^{\frac{1}{2}(\hat q_1-\hat q_0)m^2 + \sqrt{\hat q_0}\, z\, m}},   (33)

with

A(m,\eta,z) = \log 2\cosh\!\left[(\hat s_1 - \hat s_0)\, m + \hat p\,\mathrm{sign}(m) + \sqrt{\frac{\hat Q_0\hat q_0 - \hat s_0^2}{\hat q_0}}\,\eta + \frac{\hat s_0}{\sqrt{\hat q_0}}\, z\right],   (34)

and the energetic contribution by

G_E^{FP}(Q_0,s_0,s_1) = \int Dz_0\, \frac{\int D\eta \int Dz_1\; H^\beta\!\left(-\dfrac{\sqrt{q_0}\, z_0 + \sqrt{a}\, z_1 + \frac{s_1-s_0}{\sqrt{b}}\,\eta}{\sqrt{1-q_\star}}\right)\, \log H\!\left(-\dfrac{\sqrt{b}\,\eta + \frac{s_0}{\sqrt{q_0}}\, z_0}{\sqrt{1-Q_0}}\right)}{\int Dz_1\; H^\beta\!\left(-\dfrac{\sqrt{q_0}\, z_0 + \sqrt{q_\star - q_0}\, z_1}{\sqrt{1-q_\star}}\right)},   (35)

with

a = q_\star - q_0 - \frac{(s_1-s_0)^2}{Q_0 - s_0}\left(1 - \frac{s_0(q_0 - s_0)}{Q_0 q_0 - s_0^2}\right), \qquad b = \frac{Q_0 q_0 - s_0^2}{q_0}.   (36)
The extremization conditions of Eq. (32) read

\hat Q_0 = -2\alpha\,\frac{\partial G_E^{FP}}{\partial Q_0}; \qquad \hat s_0 = -\alpha\,\frac{\partial G_E^{FP}}{\partial s_0}; \qquad \hat s_1 = \alpha\,\frac{\partial G_E^{FP}}{\partial s_1};   (37)

Q_0 = 1 - 2\,\frac{\partial G_S^{FP}}{\partial \hat Q_0}; \qquad s_0 = -\frac{\partial G_S^{FP}}{\partial \hat s_0}; \qquad s_1 = \frac{\partial G_S^{FP}}{\partial \hat s_1}; \qquad 0 = \frac{\partial G_S^{FP}}{\partial \hat p} - p.   (38)
This system of coupled equations can once again be solved by iteration, with the last equation solved for \hat p at each step using Newton's method. The solution can then be plugged into Eq. (32), thus obtaining the final expression for the Franz-Parisi entropy. In Figure 2 (Right) of the main text we show the results for S(p) at α = 0.55, β = 20 and different values of q_\star. Due to convergence problems in finding the fixed point of Eqs. (37,38), some of the curves could not be continued to large values of d = (1 − p)/2. It is not clear whether the problem is purely numerical, caused by the many integrals appearing in the equations and by the large value of β, or whether it is a hint of a replica symmetry breaking transition. Nonetheless the region we are interested in exploring is that of low d, where the curve at q_\star = 0.9 reaches the origin, meaning that typical BCs are in the dense region of binary solutions at this point. In the same figure we also compare S(p) with two similar Franz-Parisi entropies, which we denote here by S_bin(p) and S_sph(p), for the binary and the spherical perceptron respectively. These two entropies are defined as in Eq. (31), the only difference being that the expectation ⟨•⟩ over m is taken according to
Z = \int d\nu(m)\, \prod_{\mu} \Theta\!\left(y^\mu \sum_i m_i x_i^\mu\right),   (39)

with d\nu(m) = \prod_i \left(\delta(m_i + 1) + \delta(m_i - 1)\right) dm_i in the binary case and d\nu(m) = \prod_i dm_i\; \delta\!\left(\sum_i m_i^2 - N\right) in the spherical one.
Figure 2. (S versus d; curves: analytic, BP N=1001.) Franz-Parisi entropies for α = 0.55 and q_\star = 0.7, 0.8, 0.9 (from top to bottom). (Purple) Average case Franz-Parisi entropy S(d) as given by Eq. (32) for β = 20. (Green) Single sample Franz-Parisi entropies computed with Belief Propagation, averaged over 100 samples.
We also derived and implemented a single sample version of the Franz-Parisi calculation, performed as follows. For a given realization of D we establish a slow schedule of q_\star values and perform GD on m_t, where after each update we rescale the squared norm of m_t to q_\star N, until convergence, before moving to the next value of q_\star. At any point of the schedule, the configuration m_t is binarized and the constrained entropy of the binary perceptron is computed using the Bethe approximation given by the Belief Propagation algorithm (see Ref. [6] for a detailed exposition). The results of the simulations are presented in Fig. 2. The slight deviation of the BP results from the analytic curves could be explained by several causes: 1) finite size effects; 2) the analytic prediction is for non-zero (although low) temperature; 3) the reference configuration m_t is selected through the GD dynamics, while in the analytic computation m is sampled according to the thermal measure defined by the partition function of Eq. (1).
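A minimal sketch of this single-sample protocol is given below (our own illustration in Python; the BP/Bethe entropy evaluation is left as a placeholder callable, and the order of clipping and rescaling is a simplification):

```python
import numpy as np

def gd_with_norm_schedule(grad_L, N, q_schedule, entropy_of_binarized,
                          lr=0.1, inner_steps=2000, seed=0):
    """Single-sample Franz-Parisi protocol (sketch).

    `grad_L(m, q_star)` is assumed to return the gradient of the log-likelihood;
    `entropy_of_binarized(W)` stands in for the BP/Bethe constrained entropy of
    sign(m). Both callables are assumptions of this sketch.
    """
    rng = np.random.default_rng(seed)
    m = rng.normal(scale=0.01, size=N)
    results = []
    for q_star in q_schedule:                     # slow schedule of q_star values
        for _ in range(inner_steps):              # gradient ascent on L(m)
            m = np.clip(m + lr * grad_L(m, q_star), -1.0, 1.0)
            m *= np.sqrt(q_star * N) / np.linalg.norm(m)   # rescale ||m||^2 to q_star * N
        W = np.sign(m)                            # binarize the current configuration
        results.append((q_star, entropy_of_binarized(W)))
    return results
```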
IV. BINARY CONTROL VARIABLES

In order to make a straightforward connection with the large deviation analyses proposed in Ref. [5], we have also considered the case in which the control variables m_i are discretized as well: m_i = \sqrt{q_\star}\,\sigma_i, with σ_i ∈ \{−1, 1\}. In this case the log-likelihood of the stochastic perceptron model reads:

L(\sigma) = \sum_{\mu=1}^{M} \log H\!\left(-\sqrt{\frac{q_\star}{1-q_\star}}\; \frac{y^\mu \sum_i \sigma_i x_i^\mu}{\sqrt{\sum_i (x_i^\mu)^2}}\right).   (40)
The statistical physics analysis proposed in the main text can be easily adapted to this case. Fig. 3 shows the average energy E, associated to a typical configuration σ, as a function of q_\star. The analytic results are found to be in reasonable agreement with the estimate of the training error obtained through a Markov Chain Monte Carlo on the system with Hamiltonian given by −L(σ), with inverse temperature β = 15.
Figure 3. (Training error versus q_\star; curves: MC, analytic.) Stochastic perceptron with binary control. Energy of the clipped center versus q_\star. Red curve, MC simulation at N = 1001, averaged over 100 samples. Green curve, analytic results determined through the replica approach. Storage load α = 0.55, inverse temperature β = 15.

Moreover, instead of assuming σ to be the parameters controlling Q_m(W), from which the stochastic binary synapses are sampled, it is instructive to take a different perspective: consider a model where the synapses are
binary and deterministic, and where we introduce a dropout mask [21], randomly setting to zero a fraction p of the
inputs. In this scenario, we can write the log-likelihood of obtaining a correct classification over the independent
realizations of the dropout mask for each datapoint. For large N the resulting log-likelihood is exactly that given
by Eq. (40), once we set q? = 1 − p. Moreover, in the case of a single layer network we are considering, the dropout
mask on the inputs could be equivalently applied to the synapses, as in the drop-connect scheme [22]. We can thus
see a clear analogy between the dropout/dropconnect schemes and the learning problem analyzed throughout this
paper, even though in standard machine learning practice the synaptic weights σi are not constrained to binary
values.
V. STOCHASTIC DEEP NETWORKS

The stochastic framework can be extended to train deep network models with binary synapses and binary activations using standard deep learning techniques, once some approximations to the log-likelihood estimation are taken into account. Since this extension is beyond the scope of the present paper, here we only sketch the training algorithm and give some preliminary results on its performance, reserving a detailed overview and extensive testing to a future publication [3].

Consider a multi-layer perceptron with L hidden neuron layers, with synaptic weights W_{ij}^\ell, \ell = 0, \dots, L, and sign activations:

\tau_i^{\ell+1} = \mathrm{sign}\!\left(\sum_j W_{ij}^{\ell}\, \tau_j^{\ell} + b_i^{\ell}\right), \qquad \ell = 0, \dots, L,   (41)
where τ^0 = x is the input of the network, and the b^\ell are continuous biases to be optimized. In our stochastic framework the weights W_{ij}^\ell are independent binary (±1) random variables with means m_{ij}^\ell to be optimized. For a fixed activation trajectory (τ^\ell)_\ell and wide layers, expectations with respect to W can be taken. Also, adapting the scheme of Ref. [15], the probabilistic iteration of the neuron activation distribution P_\ell(\tau^\ell) across the layers can be performed within a factorized approximation, in terms of the neuron activations' means a_i^\ell:

a_i^{\ell+1} = 2\, H\!\left(-\frac{\sum_j m_{ij}^{\ell}\, a_j^{\ell} + b_i^{\ell}}{\sqrt{\sum_j \left(1 - (m_{ij}^{\ell})^2 (a_j^{\ell})^2\right)}}\right) - 1.
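A minimal sketch of this factorized forward propagation is given below (Python/NumPy, our own illustration; the layer sizes, parameter containers and the erfc-based H are assumptions, not code from the paper). The returned vector plays the role of the last-layer means a^{L+1}:

```python
import numpy as np
from scipy.special import erfc

def H(x):
    return 0.5 * erfc(x / np.sqrt(2))

def mean_field_forward(x, means, biases, eps=1e-12):
    """Propagate activation means a^l through a stochastic binary network.

    means  : list of arrays m^l of shape (n_out, n_in), entries in (-1, 1)
    biases : list of arrays b^l of shape (n_out,)
    """
    a = x.astype(float)                                      # a^0 = input, entries in [-1, 1]
    for m, b in zip(means, biases):
        mu = m @ a + b                                       # mean pre-activation
        var = np.sum(1.0 - (m ** 2) * (a ** 2), axis=1)      # factorized variance
        a = 2.0 * H(-mu / np.sqrt(var + eps)) - 1.0          # new activation means
    return a

# toy usage (illustrative shapes only)
rng = np.random.default_rng(0)
sizes = [784, 801, 801, 801, 10]
means = [np.tanh(rng.normal(scale=0.1, size=(o, i))) for i, o in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(o) for o in sizes[1:]]
x = rng.choice([-1.0, 1.0], size=784)
print(mean_field_forward(x, means, biases).shape)            # (10,)
```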
Finally an approximated softmax layer can be defined on the last-layer output a^{L+1}, and consequently a cross-entropy loss function can be used in the training. We experimented with this approach on the MNIST dataset, where we trained networks with 3 hidden layers of width 801. We approximately minimized the loss function using the Adam optimizer with standard parameters and learning rate η = 10^{−2}. In our simulations we used the Julia deep learning library Knet [23], which provides automatic differentiation, backpropagation and GPU acceleration. At the end of the training the resulting binarized configuration, \hat W_{ij}^\ell = sign(m_{ij}^\ell), achieved ∼ 1.7% test error. Our implementation of the current state-of-the-art algorithm [10] on the same network, using batch normalization and with learning rate η = 10^{−3}, achieves approximately the same result. For the sake of comparison, we note that with a standard neural network with the same structure but with ReLU activations and continuous weights we obtain ∼ 1.4% test error. Given the heavy constraints on the weights, the discontinuity of the sign activation and the peculiarity of the training procedures, it is quite astonishing to observe only a slight degradation in the performance of binary networks when compared to their continuous counterparts. Further improvements of our results, up to ∼ 1.3% test error, can be obtained by applying dropout on the input and the hidden layers.
VI. A WEIGHTED PERCEPTRON RULE

In the main text we introduced a stochastic perceptron model where the stochastic synapses could be integrated out thanks to the central limit theorem. Therefore we could express the log-likelihood of the model L(m) as an easy-to-compute function of the parameters m governing the distribution Q_m(W) = \prod_{i=1}^{N} \left(\frac{1+m_i}{2}\,\delta_{W_i,+1} + \frac{1-m_i}{2}\,\delta_{W_i,-1}\right). We used deterministic gradient ascent as a procedure to optimize m. At convergence, the binarized configuration W_i = sign(m_i) is proposed as an approximate solution of the binary problem. This learning rule (i.e. Eq. (7) in the main text) can be rewritten as

m_i' = \mathrm{clip}\!\left(m_i + \frac{\eta}{M}\sum_{\mu=1}^{M} K\!\left(-\frac{y^\mu \bar h^\mu}{\sigma^\mu}\right)\left(\frac{y^\mu x_i^\mu}{\sigma^\mu} + \frac{(x_i^\mu)^2\, \bar h^\mu}{(\sigma^\mu)^3}\right)\right),   (42)

where we defined \bar h^\mu = \sum_i m_i x_i^\mu, \;\; \sigma^\mu = \sqrt{\sum_i (1-m_i^2)(x_i^\mu)^2} and K(x) = \partial_x \log H(x).
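A full-batch update of rule (42) can be sketched as follows (Python/NumPy, our own illustration; the closed form K(x) = −φ(x)/H(x), with φ the standard normal density, is a standard identity we use for convenience):

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

def H(x):
    return 0.5 * erfc(x / np.sqrt(2))

def K(x):
    # K(x) = d/dx log H(x) = -phi(x) / H(x)
    return -norm.pdf(x) / H(x)

def gd_step_rule_42(m, X, y, eta):
    """One full-batch update of the weighted perceptron rule (Eq. 42).
    X has shape (M, N), y in {-1, +1}^M, m in (-1, 1)^N."""
    h_bar = X @ m                                                  # mean pre-activations
    sigma = np.sqrt(np.maximum(((1 - m**2) * X**2).sum(axis=1), 1e-12))
    reward = K(-y * h_bar / sigma)                                 # modulating "reward" factor
    grad = (reward[:, None] * (y[:, None] * X / sigma[:, None]
            + (X**2) * (h_bar / sigma**3)[:, None])).mean(axis=0)
    return np.clip(m + eta * grad, -1.0, 1.0)
```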
We now proceed to modify the learning rule to test its adaptability to biological scenarios. As a premise, we note that the emergence of a discretized set of synaptic strengths, as encoded by our model, is an experimentally observed property of many neural systems [8, 19]. Inspection of (42) shows a Hebbian structure, where the synaptic strength is reinforced on the basis of presynaptic and postsynaptic activity, with a modulating factor K(−y^\mu \bar h^\mu/\sigma^\mu) that can be interpreted as a reward signal [18].

The sum over the examples in the training set can be replaced with the random extraction of a single index μ. In this way the algorithm can be naturally extended to an online learning scenario. We revert to the original stochastic variables, sampling W_i ∼ Q_{m_i} ∀i, and we replace the average pre-activation value \bar h^\mu with its realization h^\mu = \sum_i W_i x_i^\mu. Further butchering of (42) is obtained by crudely replacing \sigma^\mu with the constant \sigma = \sqrt{0.5 N}. The final stochastic rule reads
m_i' = \mathrm{clip}\!\left(m_i + \eta\, K\!\left(-\frac{y^\mu h^\mu}{\sigma}\right)\left(\frac{y^\mu x_i^\mu}{\sigma} + \frac{(x_i^\mu)^2\, h^\mu}{\sigma^3}\right)\right).   (43)

Figure 4. (Left: training error E and squared norm q versus training epoch; right: success probability versus load α, inset: final training error E (%); curves: N = 1001, N = 10001.) Performance of learning rule Eq. (43). Results on system size N = 1001 are averaged over 100 samples, learning rate η = 10^{−2}σ. Experiments at N = 10001 are averaged over 10 samples, learning rate η = 10^{−3}σ. (Left) The training error and the squared norm against the number of training epochs for α = 0.55 and N = 1001. (Right) Success probability within 2000 epochs in the classification task as a function of the load α = M/N. In the inset we show the average training error (in percentage) at the end of GD.
We measure the performance of rule (43) on randomly generated training sets with uniform i.i.d. x_i^\mu = ±1 and y^\mu = ±1. We present the results in Fig. 4.
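For completeness, a single online step of the stochastic rule (43) can be sketched as follows (Python, our own illustration; K is the same reward function as in the previous sketch):

```python
import numpy as np

def online_step_rule_43(m, x, y, eta, K, rng=np.random.default_rng()):
    """One online update of the stochastic rule (Eq. 43) on a single pattern (x, y)."""
    N = m.size
    sigma = np.sqrt(0.5 * N)                                     # crude constant replacing sigma^mu
    W = np.where(rng.random(N) < (1 + m) / 2, 1.0, -1.0)         # sample W_i ~ Q_{m_i}
    h = W @ x                                                    # stochastic pre-activation
    update = eta * K(-y * h / sigma) * (y * x / sigma + (x**2) * h / sigma**3)
    return np.clip(m + update, -1.0, 1.0)
```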
We observe a degradation in performance with respect to rule (42) and longer convergence times. Nonetheless,
the algorithm is still able to efficiently classify an extensive number of examples (for the considered system sizes)
up to a load α = M/N ∼ 0.45. As with gradient descent, above the algorithmic threshold we observe a graceful
increase of the training error of the binarized configurations returned by the algorithm.
Learning rule (43) could be utterly simplified if we discarded the last term on the right hand side and replaced K(x) with the Heaviside theta function Θ(x), which takes value 0 for negative arguments and 1 otherwise. The new rule would read

m_i' = \mathrm{clip}\!\left(m_i + \eta\,\Theta(-y^\mu h^\mu)\, y^\mu x_i^\mu\right).   (44)

We first observe that, if we choose h^\mu = \sum_i \mathrm{sign}(m_i)\, x_i^\mu, we obtain a variant of the clipped perceptron (CP) algorithm, analyzed in Refs. [1, 4]. The performance of this rule was shown to degrade rapidly with N. For example, rule (44) with deterministic h^\mu fails to find a solution within 2000 epochs at N = 2001 and α = 0.3. Instead, we find that if we consider h^\mu = \sum_i W_i x_i^\mu, with W_i sampled according to m_i, we obtain a stochastic version of the rule able to succeed in the same setting. We note also that the ordinary perceptron rule, m_i' = \mathrm{clip}\!\left(m_i + \eta\,\Theta(-y^\mu \bar h^\mu)\, y^\mu x_i^\mu\right), is not able to provide binary solutions, even at very low α.
Although a proper assessment of the scaling behaviour of these algorithms with N is beyond the scope of this work, we report that rule (43) consistently outperforms both variants of rule (44). Moreover its performance can be further improved by using the actual value \sigma^\mu instead of σ. In the next section we show that the stochastic variant of (44), which is a novel contribution of this paper and is tied to the stochastic framework we have investigated, typically lands in a flatter region compared to the deterministic version.

Figure 5. (Ê (%) versus d; curves: teacher, SA, CP, CP−S, GD, BP+R.) The energy of Eq. (45) as a function of the Hamming distance dN from the teacher and from solutions found by different algorithms. N = 1001 and α = 0.4 in all simulations. Curves are averaged over 40 samples.
VII. AVERAGE ENERGY AROUND ALGORITHMIC SOLUTIONS

In order to characterize the energy landscape around a reference configuration W, a standard tool is the constrained entropy S(W, d) (also called local entropy), counting the number of solutions (i.e. zero energy configurations) at distance d from W. The average properties of S(W, d) have been analyzed in the main text. If for any small d > 0 we have S(W, d) > 0, we say that the configuration W belongs to a dense cluster. Otherwise, we say that W is isolated [5, 17].

Along with S(W, d), we can consider a simpler observable that can help to build a picture of a heterogeneous energy landscape, made of wide and sharp basins. Following Ref. [7], we thus define the average misclassification error made by configurations at distance d from W as

\hat E(W,d) = E_{W'|W,d}\left[\frac{1}{M}\sum_{\mu=1}^{M}\Theta\!\left(-y^\mu \sum_i W_i' x_i^\mu\right)\right],   (45)

where the expectation is defined by

E_{W'|W,d}\,\bullet = \frac{\sum_{W'} \bullet \times \delta\!\left(N(1-2d) - \sum_i W_i W_i'\right)}{\sum_{W'} \delta\!\left(N(1-2d) - \sum_i W_i W_i'\right)}.   (46)

Notice that configurations W' participating in E_{W'|W,d} are not required to be solutions of the problem: we can easily sample them by choosing dN spins at random to be flipped in W. In our tests the expectation is approximated by 10^3 Monte Carlo samples.
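This Monte Carlo estimate of Ê(W, d) can be sketched as follows (Python/NumPy, our own illustration; the default number of samples mirrors the 10^3 used in our tests):

```python
import numpy as np

def local_energy(W, X, y, d, n_samples=1000, rng=None):
    """Monte Carlo estimate of the average misclassification error
    E_hat(W, d) of Eq. (45): flip d*N randomly chosen spins of W and
    average the training error over the flipped configurations."""
    rng = rng or np.random.default_rng()
    N = W.size
    k = int(round(d * N))                              # number of spins to flip
    errors = []
    for _ in range(n_samples):
        Wp = W.astype(float).copy()
        flip = rng.choice(N, size=k, replace=False)
        Wp[flip] *= -1
        errors.append(np.mean(y * (X @ Wp) <= 0))      # fraction of violated patterns
    return float(np.mean(errors))
```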
          Teacher    SA         CP         CP-S       GD         BP+R
Accuracy  1          0.578(3)   0.628(3)   0.644(3)   0.642(3)   0.657(2)

Table I. Generalization accuracy in the teacher-student scenario. N = 1001, α = 0.4, averages over 40 samples.
We explored the behavior of Ê(W, d) as a function of d for different solutions W of the problem, obtained from different algorithms. We compare: the Gradient Descent (GD) algorithm investigated in the main text (Eq. 7); the two variants of rule (44) (from the previous section), the one with deterministic h^\mu = \sum_i \mathrm{sign}(m_i)\, x_i^\mu (CP) and the one with stochastic h^\mu sampled according to m (CP-S); the Belief Propagation algorithm with reinforcement heuristic (BP+R) of Ref. [9]; Simulated Annealing (SA) on the Hamiltonian \sum_\mu \Theta_1\!\left(-y^\mu \sum_i W_i x_i^\mu / \sqrt{N}\right), where Θ_1(x) = x\,Θ(x).
In order to compare the properties of the algorithmic solutions and the typical isolated solutions, it is useful to consider the so-called teacher-student scenario [11]. As before, we generate uniformly i.i.d. x_i^\mu = ±1, but we assign the labels according to a teacher configuration W^T (we can choose W_i^T = 1 ∀i without loss of generality). In this scenario, the teacher has the same statistical properties as the typical solutions of the training set, and is therefore an isolated solution itself [5, 17].

Results are presented in Fig. 5. For the isolated teacher we see a rapid increase of the average energy around the origin. The same happens for solutions discovered by SA, which we can classify as isolated as well. Also, SA was the slowest algorithm to reach a solution (unsurprisingly, since it is known to scale badly with the system size [2, 16, 17]).
Solutions from CP-S, GD and BP+R instead are surrounded by a much flatter average landscape and, remarkably, they all give similar results. These three algorithms are implicitly or explicitly devised to reach robust basins: GD and CP-S are derived within our stochastic framework, while the reinforcement term in BP+R has been shown in [2] to be linked to local entropy maximization. Solutions from the CP algorithm, while not in basins as sharp as the ones found by SA, do not achieve the same robustness as those from the other three algorithms.

We give some additional details on the simulation protocol. The setting chosen, N = 1001 and α = 0.4 (a low load regime in the teacher-student scenario), is such that each algorithm finds a solution on each instance within a small computational time (a few minutes). As soon as a solution W is discovered the algorithm is stopped. Results could be slightly different for some algorithms if they were allowed to venture further within the basin of solutions. For CP and CP-S we set η = 2 × 10^{−3}, while η = 0.1 for GD. The reinforcement parameter in BP+R is updated according to 1 − r_{t+1} = (1 − r_t)(1 − 10^{−3}), while the inverse temperature schedule in SA is β_{t+1} = β_t (1 + 5 × 10^{−3}).

In Table I we report the probability of correctly classifying a new example generated by the teacher, usually called generalization accuracy, for each of the algorithms investigated. We note a clear correlation between the flatness of the basin as seen in Fig. 5 and the ability to classify new examples correctly, with SA having the worst performance and BP+R the best.
[1] Carlo Baldassi. Generalization Learning in a Perceptron with Binary Synapses. Journal of Statistical Physics, 136(5):902–
916, sep 2009.
[2] Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo
Zecchina. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic
algorithmic schemes. Proceedings of the National Academy of Sciences, 113(48):E7655–E7662, nov 2016.
[3] Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, and Riccardo
Zecchina. In preparation.
14
[4] Carlo Baldassi, Alfredo Braunstein, Nicolas Brunel, and Riccardo Zecchina. Efficient supervised learning in networks
with binary synapses. Proceedings of the National Academy of Sciences, 104(26):11079–11084, jun 2007.
[5] Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Subdominant Dense
Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses.
Physical Review Letters, 115(12):128101, 2015.
[6] Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Local entropy as a measure
for sampling solutions in constraint satisfaction problems. Journal of Statistical Mechanics: Theory and Experiment,
2016(2):023301, 2016.
[7] Carlo Baldassi and Riccardo Zecchina. Efficiency of quantum vs. classical annealing in nonconvex learning problems.
Proceedings of the National Academy of Sciences, 115(7):1457–1462, feb 2018.
[8] Thomas M Bartol, Cailey Bromer, Justin P Kinney, Michael A Chirillo, Jennifer N Bourne, Kristen M Harris, and
Terrence J Sejnowski. Hippocampal spine head sizes are highly precise. bioRxiv, page 016329, 2015.
[9] Alfredo Braunstein and Riccardo Zecchina. Learning by Message Passing in Networks of Discrete Synapses. Physical
Review Letters, 96(3):030201, jan 2006.
[10] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized Neural Networks:
Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. ArXiv e-prints, page 9, feb 2016.
[11] Andreas Engel. Statistical mechanics of learning. Cambridge University Press, 2001.
[12] Silvio Franz and Giorgio Parisi. Recipes for metastable states in spin glasses. Journal de Physique I, 5(11):1401–1415,
1995.
[13] Elizabeth Gardner. The space of interactions in neural network models, 1988.
[14] Elizabeth Gardner and Bernard Derrida. Optimal storage properties of neural network models, 1988.
[15] José Miguel Hernández-Lobato and Ryan P Adams. Probabilistic Backpropagation for Scalable Learning of Bayesian
Neural Networks. Journal of Machine Learning Research, 37:1–6, feb 2015.
[16] Heinz Horner. Dynamics of learning for the binary perceptron problem. Zeitschrift für Physik B Condensed Matter,
86(2):291–308, 1992.
[17] Haiping Huang and Yoshiyuki Kabashima. Origin of the computational hardness for learning with binary synapses.
Physical Review E, 90(5):052813, nov 2014.
[18] Yonatan Loewenstein and H. Sebastian Seung. Operant matching is a generic outcome of synaptic plasticity based on the
covariance between reward and neural activity. Proceedings of the National Academy of Sciences, 103(41):15224–15229,
2006.
[19] Daniel H. O’Connor, Gayle M. Wittenberg, and Samuel S.-H. Wang. Graded bidirectional synaptic plasticity is composed
of switch-like unitary events. Proceedings of the National Academy of Sciences, 102(27):9679–9684, 2005.
[20] Giorgio Parisi, Marc Mézard, and Miguel Angel Virasoro. Spin glass theory and beyond. World Scientific Singapore,
1987.
[21] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way
to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014.
[22] Li Wan, Matthew Zeiler, Sixin Zhang, Yann Lecun, and Rob Fergus. Regularization of neural networks using dropconnect.
In Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013.
[23] Deniz Yuret. Knet: beginning deep learning with 100 lines of julia. In Machine Learning Systems Workshop at NIPS
2016, 2016.
COMPUTATIONAL LIMITS OF A DISTRIBUTED ALGORITHM FOR SMOOTHING SPLINE

By Zuofeng Shang† and Guang Cheng∗
Indiana University-Purdue University at Indianapolis and Purdue University
arXiv:1512.09226v2 [math.ST] 21 Jul 2017
July 19, 2017

In this paper, we explore the statistical versus computational tradeoff to address a basic question in the application of a distributed algorithm: what is the minimal computational cost in obtaining statistical optimality? In the smoothing spline setup, we observe a phase transition phenomenon for the number of deployed machines, which ends up being a simple proxy for computing cost. Specifically, a sharp upper bound for the number of machines is established: when the number is below this bound, statistical optimality (in terms of nonparametric estimation or testing) is achievable; otherwise, statistical optimality becomes impossible. These sharp bounds partly capture intrinsic computational limits of the distributed algorithm considered in this paper, and turn out to be fully determined by the smoothness of the regression function. As a side remark, we argue that sample splitting may be viewed as an alternative form of regularization, playing a similar role as the smoothing parameter.
∗ Assistant Professor.
† Corresponding Author. Professor. Research Sponsored by NSF (CAREER Award DMS-1151692, DMS-1418042), Simons Fellowship in Mathematics and Office of Naval Research (ONR N00014-15-1-2331). Guang Cheng gratefully acknowledges Statistical and Applied Mathematical Sciences Institute (SAMSI) for the hospitality and support during his visit in the 2013-Massive Data Program.

1. Introduction. In the parallel computing environment, the divide-and-conquer (D&C) method distributes data to multiple machines, and then aggregates local estimates computed from each machine to produce a global one. Such a distributed algorithm often requires a growing number of machines in order to process an increasingly large dataset. A practically relevant question is "how many processors do we really need in this parallel computing?" or "shall we allocate all our computational resources in the data analysis?" Such questions are related to the minimal computational cost of this distributed method (which will be defined more precisely later).

The major goal of this paper is to provide some "theoretical" insights for the above questions from a statistical perspective. Specifically, we consider a classical nonparametric regression setup:

(1.1)   y_l = f(l/N) + \epsilon_l, \qquad l = 0, 1, \dots, N-1,

where the \epsilon_l's are iid random errors with E\{\epsilon_l\} = 0 and \mathrm{Var}(\epsilon_l) = 1, in the following distributed algorithm:
Entire Data (N) —Divide→ Subset 1 (n), Subset 2 (n), …, Subset s (n), processed by Machine 1, Machine 2, …, Machine s; Machine j computes a local estimate \hat f_j; a super-machine then aggregates the local estimates into the D&C estimate \bar f = (1/s)\sum_{j=1}^{s}\hat f_j (to be compared with the oracle estimate \hat f_N computed from the entire data).

We assume that the total sample size is N, the number of machines is s and the size of each subsample is n. Hence, N = s × n. Each machine produces an individual smoothing spline estimate \hat f_j, to be defined in (2.2) ([13]).
A known property of the above D&C strategy is that it can preserve statistical efficiency for a wide-ranging choice of s (as demonstrated in Figure 1), say log s/log N ∈ [0, 0.4], while largely reducing the computational burden as log s/log N increases (as demonstrated in Figure 2). An important observation from Figure 1 is that there is an obvious blowup of the mean squared error of \bar f when the above ratio is beyond some threshold, e.g., 0.8 for N = 10000. Hence, we are interested in knowing whether there exists a critical value of log s/log N in theory, beyond which statistical optimality no longer exists. For example, the mean squared error will never achieve the minimax optimal lower bound (at the rate level) no matter how smoothing parameters are tuned. Such a sharpness result partly captures the computational limit of the particular D&C algorithm considered in this paper, also complementing the upper bound results in [10, 16, 17].

Fig 1. (MSE versus log(s)/log(N); curves: N=500, N=1000, N=10000.) Mean-square errors (MSE) of \bar f based on 500 independent replications under different choices of N and s. The values of MSE stay at low levels for various choices of s with log s/log N ∈ [0, 0.7]. True regression function is f_0(z) = 0.6 b_{30,17}(z) + 0.4 b_{3,11}(z) with b_{a_1,a_2} the density function for Beta(a_1, a_2).
Fig 2. (Computing time in seconds versus log(s)/log(N), N=10000.) Computing time of \bar f based on a single replication under different choices of s when N = 10,000. The larger the s, the smaller the computing time.
Our first contribution is to establish a sharp upper bound on s under which \bar f achieves the minimax optimal rate N^{−m/(2m+1)}, where m represents the smoothness of f_0. By "sharp" upper bound, we mean the largest possible upper bound for s that still allows statistical optimality. This result is established by directly computing (non-asymptotic) upper and lower bounds on the mean squared error of \bar f. These two bounds hold uniformly as s diverges, and thus imply that the rate of the mean squared error transits once s reaches the rate N^{2m/(2m+1)}, which we call a phase transition in divide-and-conquer estimation. In fact, the choice of the smoothing parameter, denoted as λ, also plays a very subtle role in the above phase transition. For example, λ is not necessarily chosen at an optimal level when s attains the above bound, as illustrated in Figure 3.

Our second contribution is a sharp upper bound on s under which a simple Wald-type testing method based on \bar f is minimax optimal in the sense of [6]. It is not surprising that our testing method is consistent regardless of whether s is fixed or diverges at any rate. Rather, this sharp bound is entirely determined by analyzing its (non-asymptotic) power. Specifically, we find that our testing method is minimax optimal if and only if s does not grow faster than N^{(4m−1)/(4m+1)}. Again, we observe a subtle interplay between s and λ, as depicted in Figure 3.

One theoretical insight obtained in our setup is that a smoother regression function can be optimally estimated or tested in a shorter time. In addition, Figure 3 implies that s and λ play an interchangeable role in obtaining statistical optimality. Therefore, we argue that it might be tempting to view sample splitting as an alternative form of regularization, complementing the use of penalization in smoothing splines. In practice, we propose to select λ via a distributed version of generalized cross validation (GCV); see [14].

In the end, we want to mention that our theoretical results are developed in one-dimensional models under fixed design. This setting allows us to develop proofs based on exact analysis of various Fourier series, coupled with properties of the circulant Bernoulli polynomial kernel matrix.
Fig 3. (Axes: a and b; marked values 2m/(2m+1), 4m/(4m+1), (4m−1)/(4m+1).) Two lines indicate the choices of s ≍ N^a and λ ≍ N^{−b} leading to the minimax optimal estimation rate (left) and the minimax optimal testing rate (right), whereas (a, b)'s outside these two lines lead to suboptimal rates. Results are based on smoothing spline regression with regularity m ≥ 1.
The major goal of this work is to provide some theoretical insights in a relatively simple setup, which are useful in extending our results to more general setups such as random or multi-dimensional designs. Efforts toward this direction have been made by [8], who derived upper bounds on s for optimal estimation or testing in various nonparametric models when the design is random and multi-dimensional.
2. Smoothing Spline Model. Suppose that we observe samples from model (1.1). The regression function f is smooth in the sense that it belongs to an m-order (m ≥ 1) periodic Sobolev space:

S^m(I) = \left\{ \sum_{\nu=1}^{\infty} f_\nu \varphi_\nu(\cdot) : \sum_{\nu=1}^{\infty} f_\nu^2 \gamma_\nu < \infty \right\},

where I := [0, 1] and, for k = 1, 2, \dots,

\varphi_{2k-1}(t) = \sqrt{2}\cos(2\pi k t), \quad \varphi_{2k}(t) = \sqrt{2}\sin(2\pi k t), \quad \gamma_{2k-1} = \gamma_{2k} = (2\pi k)^{2m}.
The entire dataset is distributed to each machine in a uniform manner as follows. For j = 1, \dots, s, the jth machine is assigned the samples (Y_{i,j}, t_{i,j}), where

Y_{i,j} = y_{is-s+j-1} \quad \text{and} \quad t_{i,j} = \frac{is-s+j-1}{N}

for i = 1, \dots, n. Obviously, t_{1,j}, \dots, t_{n,j} are evenly spaced points (with a gap 1/n) across I. At the jth machine, we have the following sub-model:

(2.1)   Y_{i,j} = f(t_{i,j}) + \epsilon_{i,j}, \qquad i = 1, \dots, n,

where \epsilon_{i,j} = \epsilon_{is-s+j-1}, and obtain the jth sub-estimate as

\hat f_j = \arg\min_{f \in S^m(I)} \ell_{j,n,\lambda}(f).

Here, \ell_{j,n,\lambda} represents a penalized square criterion function based on the jth subsample:

(2.2)   \ell_{j,n,\lambda}(f) = \frac{1}{2n}\sum_{i=1}^{n}\left(Y_{i,j} - f(t_{i,j})\right)^2 + \frac{\lambda}{2} J(f,f),

with λ > 0 being a smoothing parameter and J(f,g) = \int_I f^{(m)}(t)\, g^{(m)}(t)\, dt.¹
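To make the distributed procedure concrete, here is a minimal numerical sketch (our own illustration in Python/NumPy, anticipating the representer-theorem form of the sub-estimates derived in the Appendix, Eq. (6.1); the kernel truncation level, toy data and helper names are assumptions, not part of the paper):

```python
import numpy as np

def periodic_kernel(x, y, m, kmax=200):
    """Truncated reproducing kernel K(x, y) = 2 sum_k cos(2 pi k (x - y)) / (2 pi k)^{2m}."""
    k = np.arange(1, kmax + 1)
    diff = x[:, None] - y[None, :]
    return 2.0 * np.sum(np.cos(2 * np.pi * diff[..., None] * k) / (2 * np.pi * k) ** (2 * m), axis=-1)

def fit_subsample(t, Y, lam, m):
    """Smoothing spline sub-estimate: c_hat = n^{-1} (Sigma + lam I)^{-1} Y,
    with Sigma = [K(t_i, t_i')/n], and f_hat(x) = sum_i c_hat_i K(t_i, x)."""
    n = len(t)
    Sigma = periodic_kernel(t, t, m) / n
    c_hat = np.linalg.solve(Sigma + lam * np.eye(n), Y) / n
    return lambda x: periodic_kernel(np.atleast_1d(x), t, m) @ c_hat

def dandc_estimate(y, s, lam, m):
    """Divide-and-conquer estimate f_bar = (1/s) sum_j f_hat_j under the uniform allocation."""
    N = len(y)
    n = N // s
    fits = []
    for j in range(s):
        idx = np.arange(n) * s + j                     # zero-based version of l = is - s + j - 1
        fits.append(fit_subsample(idx / N, y[idx], lam, m))
    return lambda x: np.mean([f(x) for f in fits], axis=0)

# toy usage (illustrative only)
rng = np.random.default_rng(1)
N, s, m = 1200, 8, 2
t_all = np.arange(N) / N
y = np.sin(2 * np.pi * t_all) + 0.3 * rng.normal(size=N)
f_bar = dandc_estimate(y, s, lam=N ** (-2 * m / (2 * m + 1)), m=m)
print(f_bar(np.array([0.1, 0.5, 0.9])))
```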
3. Minimax Optimal Estimation. In this section, we investigate the impact of the number of machines on the mean squared error of \bar f. Specifically, Theorem 3.1 provides a (non-asymptotic) upper bound for this mean squared error, while Theorem 3.2 provides a (non-asymptotic) lower bound. Notably, both bounds hold uniformly as s diverges. From these bounds, we observe an interesting phase transition phenomenon: \bar f is minimax optimal if s does not grow faster than N^{2m/(2m+1)} and an optimal λ ≍ N^{−2m/(2m+1)} is chosen, but the minimax optimality breaks down if s grows even slightly faster (no matter how λ is chosen). Hence, the upper bound on s is sharp. Moreover, λ does not need to be optimal when this bound is attained. In some sense, proper sample splitting can compensate for a sub-optimal choice of λ.

In this section, we assume that the \epsilon_l's are iid zero-mean random variables with unit variance. Denote the mean squared error as

MSE_{f_0}(f) := E_{f_0}\{\|f - f_0\|_2^2\},

where \|f\|_2 = \sqrt{\int_I f(t)^2\, dt}. For simplicity, we write E_{f_0} as E later. Define h = \lambda^{1/(2m)}.

Theorem 3.1. (Upper Bounds of Variance and Squared Bias) Suppose h > 0, and N is divisible by n. Then there exist absolute positive constants b_m, c_m ≥ 1 (depending on m only) such that
(3.1)   E\{\|\bar f - E\{\bar f\}\|_2^2\} \le b_m \left( N^{-1} + (Nh)^{-1} \int_0^{\pi n h} \frac{1}{(1+x^{2m})^2}\, dx \right),

(3.2)   \|E\{\bar f\} - f_0\|_2 \le c_m \sqrt{J(f_0)\,(\lambda + n^{-2m} + N^{-1})}

for any fixed 1 ≤ s ≤ N.

From (6.2) and (6.3) in the Appendix, we can tell that \bar f − E\{\bar f\} is irrelevant to f_0. So is the upper bound for the (integrated) variance in (3.1). However, this is not the case for the (integrated) bias \|E\{\bar f\} − f_0\|_2, whose upper bound depends on f_0 through its norm J(f_0). In particular, the (integrated) bias becomes zero if f_0 is in the null space, i.e., J(f_0) = 0, according to (3.2).
Since

(3.3)   MSE_{f_0}(\bar f) = E\{\|\bar f - E\{\bar f\}\|_2^2\} + \|E\{\bar f\} - f_0\|_2^2,

Theorem 3.1 says that

(3.4)   MSE_{f_0}(\bar f) \le b_m \left( N^{-1} + (Nh)^{-1} \int_0^{\pi n h} \frac{1}{(1+x^{2m})^2}\, dx \right) + c_m^2\, J(f_0)\,(\lambda + n^{-2m} + N^{-1}).

¹ For simplicity, we denote J(f, f) = J(f) later.
When we choose h ≍ N^{−1/(2m+1)} and n^{−2m} = O(λ), it can be seen from (3.4) that \bar f is minimax optimal, i.e., \|\bar f − f_0\|_2 = O_P(N^{−m/(2m+1)}). Obviously, the above two conditions hold if

(3.5)   λ ≍ N^{−2m/(2m+1)} and s = O(N^{2m/(2m+1)}).

From now on, we define the optimal choice of λ as N^{−2m/(2m+1)}, denoted as λ*, according to [16]. Alternatively, the minimax optimality can be achieved if s ≍ N^{2m/(2m+1)} and nh = o(1), i.e., λ = o(λ*). In other words, a sub-optimal choice of λ can be compensated by a proper sample splitting strategy. See Figure 3 for the subtle relation between s and λ. It should be mentioned that λ* depends on N (rather than n) for achieving the optimal estimation rate. In practice, we propose to select λ via a distributed version of GCV; see [14].

Remark 3.1. Under random design and uniformly bounded eigenfunctions, Corollary 4 in [16] showed that the above rate optimality is achieved under the following upper bound on s (and λ = λ*): s = O(N^{(2m−1)/(2m+1)}/log N). For example, when m = 2, their upper bound is N^{0.6}/log N (versus N^{0.8} in our case). We improve their upper bound by applying a more direct proof strategy.

To understand whether our upper bound can be further improved, we prove a lower bound result in a "worst case" scenario. Specifically, Theorem 3.2 implies that once s is beyond the above upper bound, the rate optimality will break down for at least one true f_0.
Theorem 3.2. (Lower Bound of Squared Bias) Suppose h > 0, and N is divisible by n. Then for any constant C > 0, it holds that

\sup_{f_0 \in S^m(I),\, J(f_0) \le C} \|E\{\bar f\} - f_0\|_2^2 \ge C\,(a_m n^{-2m} - 8N^{-1}),

where a_m ∈ (0, 1) is an absolute constant depending on m only, for any fixed 1 < s < N.

It follows by (3.3) that

(3.6)   \sup_{f_0 \in S^m(I),\, J(f_0) \le C} MSE_{f_0}(\bar f) \ge \sup_{f_0 \in S^m(I),\, J(f_0) \le C} \|E\{\bar f\} - f_0\|_2^2 \ge C\,(a_m n^{-2m} - 8N^{-1}).

It is easy to check that the above lower bound is strictly slower than the optimal rate N^{−2m/(2m+1)} if s grows faster than N^{2m/(2m+1)}, no matter how λ is chosen. Therefore, we claim that N^{2m/(2m+1)} is a sharp upper bound of s for obtaining an averaged smoothing spline estimate.
In the end, we provide a graphical interpretation of our sharp bound result. Let s = N^a for 0 ≤ a ≤ 1 and λ = N^{−b} for 0 < b < 2m. Define ρ_1(a), ρ_2(a) and ρ_3(a) by

Upper bound of squared bias:  N^{-\rho_1(a)} \asymp \lambda + n^{-2m} + N^{-1},
Lower bound of squared bias:  N^{-\rho_2(a)} \asymp \max\{n^{-2m} - N^{-1}, 0\},
Upper bound of variance:      N^{-\rho_3(a)} \asymp N^{-1} + (Nh)^{-1}\int_0^{\pi n h} \frac{1}{(1+x^{2m})^2}\, dx,

based on Theorems 3.1 and 3.2. A direct examination reveals that

\rho_1(a) = \min\{2m(1-a),\, 1,\, b\},
\rho_2(a) = \begin{cases} 2m(1-a), & a > (2m-1)/(2m), \\ \infty, & a \le (2m-1)/(2m), \end{cases}
\rho_3(a) = \max\{a,\, (2m-b)/(2m)\}.
Fig 4. (ρ1(a), ρ2(a), ρ3(a) versus a, three panels.) Plots of ρ1(a), ρ2(a), ρ3(a) versus a, indicated by thick solid lines, under λ = N^{−2m/(2m+1)}. ρ1(a), ρ2(a) and ρ3(a) indicate the upper bound of squared bias, the lower bound of squared bias and the upper bound of variance, respectively. ρ2(a) is plotted only for (2m − 1)/(2m) < a ≤ 1; when 0 ≤ a ≤ (2m − 1)/(2m), ρ2(a) = ∞, which is omitted.
Figure 4 displays ρ1, ρ2, ρ3 for λ = N^{−2m/(2m+1)}. It can be seen that when a ∈ [0, 2m/(2m+1)], the upper bounds of the squared bias and the variance remain at the same optimal rate N^{−2m/(2m+1)}, while the exact bound of the squared bias increases above N^{−2m/(2m+1)} when a ∈ (2m/(2m+1), 1). This explains why the transition occurs at the critical point a = 2m/(2m+1) (even though the upper bound of the variance decreases below N^{−2m/(2m+1)} when a ∈ (2m/(2m+1), 1)).
It should be mentioned that when λ ≠ N^{−2m/(2m+1)}, i.e., b ≠ 2m/(2m+1), suboptimal estimation almost always occurs. More explicitly, b < 2m/(2m+1) yields ρ1(a) < 2m/(2m+1) for any 0 ≤ a ≤ 1, while b > 2m/(2m+1) yields ρ2(a) < 2m/(2m+1) for any 2m/(2m+1) < a ≤ 1 and ρ3(a) < 2m/(2m+1) for any 0 ≤ a < 2m/(2m+1). The only exception is a = 2m/(2m+1), which yields ρ1 = ρ2 = ρ3 = 2m/(2m+1) for any b > 2m/(2m+1).
Remark 3.2. As a side remark, we notice that each machine is assigned n ≍ N^{1/(2m+1)} samples when s attains its upper bound in the estimation regime. This is very similar to local polynomial estimation, where approximately N^{1/(2m+1)} local points are used for obtaining optimal estimation (although we realize that our data is distributed in a global manner).

Remark 3.3. Under repeated curves with a common design, [2] observed a similar phase transition phenomenon for the minimax rate of a two-stage estimate, where the rate transits when the number of sample curves is nearly N^{2m/(2m+1)}. This coincides with our observation for s. However, the common design assumption, upon which their results crucially rely, clearly does not apply to our divide-and-conquer setup, and our proof techniques are significantly different. Rather, Theorems 3.1 and 3.2 imply that the results in [2] may still hold for a non-common design.
4. Minimax Optimal Testing. In this section, we consider the nonparametric testing problem:

(4.1)   H_0: f = 0 \quad \text{vs.} \quad H_1: f \in S^m(I)\setminus\{0\}.

In general, testing f = f_0 (for a known f_0) is equivalent to testing f_* ≡ f − f_0 = 0, so (4.1) incurs no loss of generality. Inspired by the classical Wald test ([11]), we propose a simple test statistic based on \bar f:

T_{N,\lambda} := \|\bar f\|_2^2.

We find that testing consistency essentially requires no condition on the number of machines, whether it is fixed or diverges at any rate. However, our power analysis, which is non-asymptotically valid, depends on the number of machines in a nontrivial way. Specifically, we discover that our test method is minimax optimal in the sense of Ingster ([6]) when s does not grow faster than N^{(4m−1)/(4m+1)} and λ is chosen optimally (different from λ*, though), but it is no longer optimal once s is beyond the above threshold (no matter how λ is chosen). This is a similar phase transition phenomenon to the one we observe in the estimation regime. Again, we notice that an optimal choice of λ may not be necessary if the above upper bound on s is attained.

In this section, we assume that the model errors \epsilon_{i,j} are iid standard normal for technical convenience. In fact, our results can be generalized to the likelihood ratio test without assuming Gaussian errors. This extension is possible (technically tedious, though) since the likelihood ratio statistic can be approximated by T_{N,\lambda} through a quadratic expansion; see [9].

Theorem 4.1 implies the consistency of our proposed test method with the following testing rule:

\phi_{N,\lambda} = I\left(|T_{N,\lambda} - \mu_{N,\lambda}| \ge z_{1-\alpha/2}\, \sigma_{N,\lambda}\right),

where \mu_{N,\lambda} := E_{H_0}\{T_{N,\lambda}\}, \sigma_{N,\lambda}^2 := \mathrm{Var}_{H_0}\{T_{N,\lambda}\} and z_{1−α/2} is the (1 − α/2) × 100 percentile of N(0, 1). The conditions required in Theorem 4.1 are so mild that our proposed test is consistent no matter whether the number of machines is fixed or diverges at any rate.
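A minimal sketch of the Wald-type D&C test is given below (Python, our own illustration, reusing dandc_estimate from the earlier sketch; here the null mean and variance of T are approximated by Monte Carlo simulation under H_0, a stand-in for the exact \mu_{N,\lambda} and \sigma_{N,\lambda} analyzed in the paper):

```python
import numpy as np
from scipy.stats import norm

def wald_test_statistic(f_bar, grid):
    """T_{N,lambda} = ||f_bar||_2^2, approximated on a fine grid over I = [0, 1]."""
    vals = f_bar(grid)
    return np.mean(vals ** 2)          # Riemann approximation of the squared L2 norm

def dandc_wald_test(y, s, lam, m, alpha=0.05, n_null_sims=200, grid_size=512):
    """Wald-type test phi_{N,lambda}; rejects H_0 when |T - mu| >= z_{1-alpha/2} * sigma."""
    grid = np.arange(grid_size) / grid_size
    T_obs = wald_test_statistic(dandc_estimate(y, s, lam, m), grid)
    rng = np.random.default_rng(0)
    null_T = [wald_test_statistic(dandc_estimate(rng.normal(size=len(y)), s, lam, m), grid)
              for _ in range(n_null_sims)]
    mu, sigma = np.mean(null_T), np.std(null_T)
    return int(abs(T_obs - mu) >= norm.ppf(1 - alpha / 2) * sigma)
```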
Theorem 4.1. (Testing Consistency) Suppose that h → 0, n → ∞ as N → ∞, and lim_{N→∞} nh exists (which could be infinity). Then, under H_0,

\frac{T_{N,\lambda} - \mu_{N,\lambda}}{\sigma_{N,\lambda}} \xrightarrow{d} N(0, 1), \quad \text{as } N \to \infty.
Our next theorem analyzes the non-asymptotic power of T_{N,\lambda}, in which we pay particular attention to the impact of s on the separation rate of testing, defined as

d_{N,\lambda} = \sqrt{\lambda + n^{-2m} + \sigma_{N,\lambda}}.

Let B = \{f \in S^m(I) : J(f) \le C\} for a positive constant C.

Theorem 4.2. (Upper Bound) Suppose that h → 0, n → ∞ as N → ∞, and lim_{N→∞} nh exists (which could be infinity). Then for any ε > 0, there exist C_ε, N_ε > 0 such that for any N ≥ N_ε,

(4.2)   \inf_{f \in B,\, \|f\|_2 \ge C_\varepsilon d_{N,\lambda}} P_f(\phi_{N,\lambda} = 1) \ge 1 - \varepsilon.
Under the assumptions of Theorem 4.1, it can be shown that (see (6.40) in the Appendix)

(4.3)   \sigma_{N,\lambda}^2 \asymp \begin{cases} \dfrac{n}{N^2}, & \text{if } \lim_{N\to\infty} nh = 0, \\[4pt] \dfrac{1}{N^2 h}, & \text{if } \lim_{N\to\infty} nh > 0. \end{cases}

Given a range of λ leading to lim_{N→∞} nh > 0, we have by (4.3) that d_{N,\lambda} = \sqrt{\lambda + (N h^{1/2})^{-1}}. An optimal choice of λ (satisfying the above requirement) is λ** := N^{−4m/(4m+1)}, since it leads to the optimal separation rate d*_{N,λ} := N^{−2m/(4m+1)}; see [6]. Meanwhile, the constraint lim_{N→∞} nh > 0 (together with the choice of λ**) implies that

(4.4)   s = O(N^{(4m−1)/(4m+1)}).

The above discussion illustrates that we can always choose λ** to obtain a minimax optimal test (just as in the single dataset case [9]) as long as s does not grow faster than N^{(4m−1)/(4m+1)}. In the case that lim_{N→∞} nh = 0, the minimax optimality can be maintained if s ≍ N^{(4m−1)/(4m+1)}, h = o(1) and nh = o(1). Such a selection of s gives us a lot of freedom in choosing λ, which needs to satisfy λ = o(λ**). A complete picture depicting the relation between s and λ is given in Figure 3.
We further discover in Theorem 4.3 that the upper bound (4.4) turns out to be sharp.

Theorem 4.3. (Lower Bound) Suppose that s ≫ N^{(4m−1)/(4m+1)}, h → 0, n → ∞ as N → ∞, and lim_{N→∞} nh exists (which could be infinity). Then there exists a positive sequence β_{N,λ} with lim_{N→∞} β_{N,λ} = ∞ such that

(4.5)   \limsup_{N\to\infty}\; \inf_{f \in B,\, \|f\|_2 \ge \beta_{N,\lambda} d^*_{N,\lambda}} P_f(\phi_{N,\lambda} = 1) \le \alpha.

Recall that 1 − α is the pre-specified significance level.

Theorem 4.3 says that when s ≫ N^{(4m−1)/(4m+1)}, the test φ_{N,λ} is no longer powerful even when \|f\|_2 ≫ d*_{N,λ}. In other words, our test method fails to be optimal. Therefore, we claim that N^{(4m−1)/(4m+1)} is a sharp upper bound on s to ensure our test is minimax optimal.
Remark 4.1. As a side remark, the existence of lim_{N→∞} nh can be replaced by the following weaker condition, under which the results in Theorems 4.1, 4.2 and 4.3 still hold:

Condition (R): either \lim_{N\to\infty} nh = 0 or \inf_{N\ge 1} nh > 0.

Condition (R) aims to exclude irregularly behaved s, such as in the following case where s oscillates too much with N:

(4.6)   s = \begin{cases} N^{b_1}, & N \text{ is odd}, \\ N^{b_2}, & N \text{ is even}, \end{cases}

where h ≍ N^{−c} for some c > 0 and b_1, b_2 ∈ [0, 1] satisfy b_1 + c ≥ 1 and b_2 + c < 1. Clearly, Condition (R) fails under (4.6).
5. Discussions. This paper offers "theoretical" suggestions on the allocation of data. In a relatively simple distributed algorithm, i.e., m-order periodic splines with evenly spaced design, our recommendation proceeds as follows:
• Distribute to s ≍ N^{2m/(2m+1)} machines for obtaining an optimal estimate;
• Distribute to s ≍ N^{(4m−1)/(4m+1)} machines for performing an optimal test.
However, data-dependent formulae are still needed to pick the right number of machines in practice. This might be possible in light of Figure 3, which indicates that sample splitting could be an alternative form of tuning. As for the choice of λ, we prove that it should be chosen in the order of N even when each subsample has size n. Hence, a distributed version of the generalized cross validation method is applied to each sub-sample; see [14]. Another theoretically interesting direction is how much adaptive estimation (where m is unknown) can affect the computational limits.

Acknowledgments. We thank PhD student Meimei Liu at Purdue for the simulation study.

6. Appendix. Proofs of our results are included in this section.
6.1. Proofs in Section 3.

Proof of Theorem 3.1. We do a bit of preliminary analysis before proving (3.1) and (3.2). It follows from [13] that (S^m(I), J) is a reproducing kernel Hilbert space with reproducing kernel function

K(x, y) = \sum_{\nu=1}^{\infty} \frac{\varphi_\nu(x)\varphi_\nu(y)}{\gamma_\nu} = 2\sum_{k=1}^{\infty} \frac{\cos(2\pi k(x-y))}{(2\pi k)^{2m}}, \qquad x, y \in I.
For convenience, define K_x(·) = K(x, ·) for any x ∈ I. It follows from the representer theorem ([13]) that the optimization problem (2.2) has a solution

(6.1)   \hat f_j = \sum_{i=1}^{n} \hat c_{i,j}\, K_{t_{i,j}}, \qquad j = 1, 2, \dots, s,

where \hat c_j = (\hat c_{1,j}, \dots, \hat c_{n,j})^T = n^{-1}(\Sigma_j + \lambda I_n)^{-1} Y_j, Y_j = (Y_{1,j}, \dots, Y_{n,j})^T, I_n is the n × n identity matrix, and \Sigma_j = [K(t_{i,j}, t_{i',j})/n]_{1 \le i, i' \le n}. It is easy to see that \Sigma_1 = \Sigma_2 = \dots = \Sigma_s. For convenience, denote \Sigma = \Sigma_1. Similarly, define

K'(x, y) = \sum_{\nu=1}^{\infty} \frac{\varphi_\nu(x)\varphi_\nu(y)}{\gamma_\nu^2} = 2\sum_{k=1}^{\infty} \frac{\cos(2\pi k(x-y))}{(2\pi k)^{4m}}, \qquad x, y \in I.

For 1 ≤ j ≤ s, let \Omega_j = [K'(t_{i,j}, t_{i',j})/n]_{1 \le i, i' \le n}. It is easy to see that \Omega_1 = \Omega_2 = \dots = \Omega_s. For convenience, denote \Omega = \Omega_1, and let \Phi_{\nu,j} = (\varphi_\nu(t_{1,j}), \dots, \varphi_\nu(t_{n,j})).
It is easy to examine that

(6.2)   \bar f = \sum_{\nu=1}^{\infty} \frac{\sum_{j=1}^{s} \Phi_{\nu,j}(\Sigma + \lambda I_n)^{-1} Y_j}{N\gamma_\nu}\, \varphi_\nu = \sum_{\nu=1}^{\infty} \frac{\sum_{j=1}^{s} \Phi_{\nu,j}(\Sigma + \lambda I_n)^{-1} (f_{0,j} + \epsilon_j)}{N\gamma_\nu}\, \varphi_\nu,

and

(6.3)   E\{\bar f\} = \sum_{\nu=1}^{\infty} \frac{\sum_{j=1}^{s} \Phi_{\nu,j}(\Sigma + \lambda I_n)^{-1} f_{0,j}}{N\gamma_\nu}\, \varphi_\nu,

where f_{0,j} = (f_0(t_{1,j}), \dots, f_0(t_{n,j}))^T and \epsilon_j = (\epsilon_{1,j}, \dots, \epsilon_{n,j})^T.
We now look at Σ and Ω. For 0 ≤ l ≤ n − 1, let

c_l = (2/n) Σ_{k=1}^∞ cos(2πkl/n) / (2πk)^{2m},   d_l = (2/n) Σ_{k=1}^∞ cos(2πkl/n) / (2πk)^{4m}.

Since c_l = c_{n−l} and d_l = d_{n−l} for l = 1, 2, . . . , n − 1, Σ and Ω are both symmetric circulant matrices of order n. Let ε = exp(2π√−1/n). Ω and Σ share the same normalized eigenvectors

x_r = (1/√n)(1, ε^r, ε^{2r}, . . . , ε^{(n−1)r})^T,   r = 0, 1, . . . , n − 1.

Let M = (x_0, x_1, . . . , x_{n−1}) and denote by M* the conjugate transpose of M. Clearly, M M* = I_n, and Σ, Ω admit the decompositions

(6.4)    Σ = M Λ_c M*,   Ω = M Λ_d M*,

where Λ_c = diag(λ_{c,0}, λ_{c,1}, . . . , λ_{c,n−1}) and Λ_d = diag(λ_{d,0}, λ_{d,1}, . . . , λ_{d,n−1}) with λ_{c,l} = c_0 + c_1 ε^l + · · · + c_{n−1} ε^{(n−1)l} and λ_{d,l} = d_0 + d_1 ε^l + · · · + d_{n−1} ε^{(n−1)l}.
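The decomposition (6.4) is the classical fact that circulant matrices are diagonalized by the discrete Fourier basis. The following short numerical check (our own illustration, not part of the proof) confirms it for a random symmetric circulant matrix.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)          # first row c_0, ..., c_{n-1}
c[1:] = (c[1:] + c[1:][::-1]) / 2   # enforce c_l = c_{n-l} (symmetric circulant)
Sigma = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

eps = np.exp(2j * np.pi / n)
# columns x_r = (1, eps^r, ..., eps^{(n-1)r})^T / sqrt(n), as in the text
M = np.array([[eps ** (i * r) for r in range(n)] for i in range(n)]) / np.sqrt(n)
lam = np.array([sum(c[t] * eps ** (t * l) for t in range(n)) for l in range(n)])

assert np.allclose(Sigma, M @ np.diag(lam) @ M.conj().T)
print("Sigma = M diag(lambda) M* verified; eigenvalues are real:",
      np.allclose(lam.imag, 0))
```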
Direct calculations show that

(6.5)    λ_{c,l} = 2 Σ_{k=1}^∞ 1/(2πkn)^{2m}   for l = 0,
         λ_{c,l} = Σ_{k=1}^∞ 1/[2π(kn − l)]^{2m} + Σ_{k=0}^∞ 1/[2π(kn + l)]^{2m}   for 1 ≤ l ≤ n − 1,

(6.6)    λ_{d,l} = 2 Σ_{k=1}^∞ 1/(2πkn)^{4m}   for l = 0,
         λ_{d,l} = Σ_{k=1}^∞ 1/[2π(kn − l)]^{4m} + Σ_{k=0}^∞ 1/[2π(kn + l)]^{4m}   for 1 ≤ l ≤ n − 1.

It is easy to examine that
(6.7)    λ_{c,0} = 2 c̄_m (2πn)^{−2m},   λ_{d,0} = 2 d̄_m (2πn)^{−4m},

and for 1 ≤ l ≤ n − 1,

(6.8)    λ_{c,l} = 1/[2π(n − l)]^{2m} + 1/(2πl)^{2m} + Σ_{k=2}^∞ 1/[2π(kn − l)]^{2m} + Σ_{k=1}^∞ 1/[2π(kn + l)]^{2m},
         λ_{d,l} = 1/[2π(n − l)]^{4m} + 1/(2πl)^{4m} + Σ_{k=2}^∞ 1/[2π(kn − l)]^{4m} + Σ_{k=1}^∞ 1/[2π(kn + l)]^{4m},

where c̄_m := Σ_{k=1}^∞ k^{−2m}, c_m := Σ_{k=2}^∞ k^{−2m}, d̄_m := Σ_{k=1}^∞ k^{−4m}, d_m := Σ_{k=2}^∞ k^{−4m}, and

c_m (2πn)^{−2m} ≤ Σ_{k=2}^∞ 1/[2π(kn − l)]^{2m} ≤ c̄_m (2πn)^{−2m},
c_m (2πn)^{−2m} ≤ Σ_{k=1}^∞ 1/[2π(kn + l)]^{2m} ≤ c̄_m (2πn)^{−2m},
d_m (2πn)^{−4m} ≤ Σ_{k=2}^∞ 1/[2π(kn − l)]^{4m} ≤ d̄_m (2πn)^{−4m},
d_m (2πn)^{−4m} ≤ Σ_{k=1}^∞ 1/[2π(kn + l)]^{4m} ≤ d̄_m (2πn)^{−4m}.
For simplicity, we denote I = E{kf¯−E{f¯}k22 } and II = kE{f¯}−f0 k22 . Hence, MSEf0 (f¯) = I +II.
Proof of (3.1). Using (6.4)–(6.8), we get that

I = Σ_{ν=1}^∞ Σ_{j=1}^s E{|Φ_{ν,j}(Σ + λI_n)^{−1} ε_j|²} / (N² γ_ν²)
  = Σ_{ν=1}^∞ Σ_{j=1}^s trace((Σ + λI_n)^{−1} Φ_{ν,j}^T Φ_{ν,j} (Σ + λI_n)^{−1}) / (N² γ_ν²)
  = (n/N²) Σ_{j=1}^s trace((Σ + λI_n)^{−1} [Σ_{ν=1}^∞ Φ_{ν,j}^T Φ_{ν,j} / (n γ_ν²)] (Σ + λI_n)^{−1})
  = (n/N²) Σ_{j=1}^s trace((Σ + λI_n)^{−1} Ω (Σ + λI_n)^{−1})
  = (1/N) trace(M (Λ_c + λI_n)^{−1} Λ_d (Λ_c + λI_n)^{−1} M*)
  = (1/N) Σ_{l=0}^{n−1} λ_{d,l} / (λ + λ_{c,l})²
  ≤ 2 d̄_m / [N (2 c̄_m + (2πn)^{2m} λ)²] + (1 + d̄_m) N^{−1} Σ_{l=1}^{n−1} [(2π(n−l))^{−4m} + (2πl)^{−4m}] / [λ + (2π(n−l))^{−2m} + (2πl)^{−2m}]²
  ≤ 2 d̄_m / [N (2 c̄_m + (2πn)^{2m} λ)²] + 2(1 + d̄_m) N^{−1} Σ_{1≤l≤n/2} [(2πl)^{−4m} + (2π(n−l))^{−4m}] / [λ + (2πl)^{−2m} + (2π(n−l))^{−2m}]²
  ≤ 2 d̄_m / [N (2 c̄_m + (2πn)^{2m} λ)²] + 4(1 + d̄_m) N^{−1} Σ_{1≤l≤n/2} (2πl)^{−4m} / (λ + (2πl)^{−2m})²
  ≤ 2 d̄_m / [N (2 c̄_m + (2πn)^{2m} λ)²] + [2(1 + d̄_m)/(π N h)] ∫_0^{πnh} dx / (1 + x^{2m})²
  ≤ b_m [1/N + (1/(N h)) ∫_0^{πnh} dx / (1 + x^{2m})²],

where b_m ≥ 1 is an absolute constant depending on m only. This proves (3.1).
Proof of (3.2). Throughout, let η = exp(2π√−1/N). For 1 ≤ j, l ≤ s, define

Σ_{j,l} = (1/n) Σ_{ν=1}^∞ Φ_{ν,j}^T Φ_{ν,l} / γ_ν,
σ_{j,l,r} = (2/n) Σ_{k=1}^∞ cos(2πk(r/n − (j−l)/N)) / (2πk)^{2m},   r = 0, 1, . . . , n − 1.
It can be shown that Σ_{j,l} is a circulant matrix with elements σ_{j,l,0}, σ_{j,l,1}, . . . , σ_{j,l,n−1}; therefore, by [1] we get that

(6.9)    Σ_{j,l} = M Λ_{j,l} M*,

where M is the same as in (6.4) and Λ_{j,l} = diag(λ_{j,l,0}, λ_{j,l,1}, . . . , λ_{j,l,n−1}), with λ_{j,l,r}, for r = 1, . . . , n − 1, given by

(6.10)   λ_{j,l,r} = Σ_{t=0}^{n−1} σ_{j,l,t} ε^{rt}
                  = (2/n) Σ_{t=0}^{n−1} Σ_{k=1}^∞ cos(2πk(t/n − (j−l)/N)) ε^{rt} / (2πk)^{2m}
                  = (1/n) Σ_{k=1}^∞ [η^{−k(j−l)} Σ_{t=0}^{n−1} ε^{(k+r)t} + η^{k(j−l)} Σ_{t=0}^{n−1} ε^{(r−k)t}] / (2πk)^{2m}
                  = Σ_{q=1}^∞ η^{−(qn−r)(j−l)} / [2π(qn − r)]^{2m} + Σ_{q=0}^∞ η^{(qn+r)(j−l)} / [2π(qn + r)]^{2m},

and for r = 0, given by

(6.11)   λ_{j,l,0} = Σ_{t=0}^{n−1} σ_{j,l,t}
                  = (1/n) Σ_{k=1}^∞ [η^{−k(j−l)} Σ_{t=0}^{n−1} ε^{kt} + η^{k(j−l)} Σ_{t=0}^{n−1} ε^{−kt}] / (2πk)^{2m}
                  = Σ_{q=1}^∞ [η^{qn(j−l)} + η^{−qn(j−l)}] / (2πqn)^{2m}.
For p ≥ 0, 1 ≤ v ≤ n, 0 ≤ r ≤ n − 1 and 1 ≤ j ≤ s, define

A_{p,v,r,j} = (1/s) Σ_{l=1}^s λ_{j,l,r} x_r* Φ_{2(pn+v)−1,l}^T,   B_{p,v,r,j} = (1/s) Σ_{l=1}^s λ_{j,l,r} x_r* Φ_{2(pn+v),l}^T.

By direct calculation, we have for 1 ≤ v ≤ n − 1,

(6.12)   Φ_{2(pn+v)−1,l} x_r = √(n/2) [η^{(pn+v)(l−1)} I(r + v = n) + η^{−(pn+v)(l−1)} I(v = r)],
         Φ_{2(pn+v),l} x_r = √(−n/2) [η^{(pn+v)(l−1)} I(r + v = n) − η^{−(pn+v)(l−1)} I(v = r)],

and

(6.13)   Φ_{2(pn+n)−1,l} x_r = √(n/2) I(r = 0) [η^{(p+1)n(l−1)} + η^{−(p+1)n(l−1)}],
         Φ_{2(pn+n),l} x_r = √(−n/2) I(r = 0) [η^{(p+1)n(l−1)} − η^{−(p+1)n(l−1)}].
Let I(·) be an indicator function. Then we have for p ≥ 0, 1 ≤ j ≤ s and 1 ≤ v, r ≤ n − 1,

(6.14)   B_{p,v,r,j} = (1/s) Σ_{l=1}^s λ_{j,l,r} x_r* Φ_{2(pn+v),l}^T
  = −√(−n/2) (1/s) Σ_{l=1}^s [Σ_{q=1}^∞ η^{−(qn−r)(j−l)} / [2π(qn − r)]^{2m} + Σ_{q=0}^∞ η^{(qn+r)(j−l)} / [2π(qn + r)]^{2m}]
      × [η^{−(pn+v)(l−1)} I(r + v = n) − η^{(pn+v)(l−1)} I(r = v)]
  = −√(−n/2) [ Σ_{u≥−p/s} η^{−(pn+v)(j−1)} I(r + v = n) / [2π(uN + pn + v)]^{2m}
             − Σ_{u≥(p+1)/s} η^{(pn+v)(j−1)} I(r = v) / [2π(uN − pn − v)]^{2m}
             + Σ_{u≥(p+1)/s} η^{−(pn+v)(j−1)} I(r + v = n) / [2π(uN − pn − v)]^{2m}
             − Σ_{u≥−p/s} η^{(pn+v)(j−1)} I(r = v) / [2π(uN + pn + v)]^{2m} ]
  = a_{p,v} x_r* Φ_{2(pn+v),j}^T,

where a_{p,v} = Σ_{u≥−p/s} 1/[2π(uN + pn + v)]^{2m} + Σ_{u≥(p+1)/s} 1/[2π(uN − pn − v)]^{2m}, for p ≥ 0, 1 ≤ v ≤ n − 1.

For v = n, similar calculations give that

(6.15)   B_{p,n,r,j} = −√(−n/2) I(r = 0) [ Σ_{u≥−p/s} η^{−(pn+n)(j−1)} / [2π(uN + pn + n)]^{2m}
             − Σ_{u≥−p/s} η^{(pn+n)(j−1)} / [2π(uN + pn + n)]^{2m}
             + Σ_{u≥(p+2)/s} η^{−(pn+n)(j−1)} / [2π(uN − pn − n)]^{2m}
             − Σ_{u≥(p+2)/s} η^{(pn+n)(j−1)} / [2π(uN − pn − n)]^{2m} ]
  = a_{p,n} x_r* Φ_{2(pn+n),j}^T,

where a_{p,n} = Σ_{u≥−p/s} 1/[2π(uN + pn + n)]^{2m} + Σ_{u≥(p+2)/s} 1/[2π(uN − pn − n)]^{2m}, for p ≥ 0.
Similarly, we have p ≥ 0, 1 ≤ j ≤ s and 1 ≤ v, r ≤ n − 1,
X
p
η −(pn+v)(j−1)
Ap,v,r,j =
n/2
I(r + v = n)
[2π(uN + pn + v)]2m
u≥−p/s
X
+
u≥(p+1)/s
η (pn+v)(j−1)
I(r = v)
[2π(uN − pn − v)]2m
X
η −(pn+v)(j−1)
I(r + v = n)
[2π(uN − pn − v)]2m
u≥(p+1)/s
X
η (pn+v)(j−1)
+
I(r = v)
[2π(uN + pn + v)]2m
+
u≥−p/s
= ap,v x∗r ΦT2(pn+v)−1,j ,
(6.16)
and for v = n,
Ap,n,r,j
X
p
=
n/2I(r = 0)
u≥−p/s
X
+
η (pn+n)(j−1)
[2π(uN − pn − n)]2m
u≥(p+2)/s
X
η −(pn+n)(j−1)
[2π(uN − pn − n)]2m
u≥(p+2)/s
X
η (pn+n)(j−1)
= ap,n x∗r ΦT
+
2(pn+n)−1,j .
[2π(uN + pn + n)]2m
+
(6.17)
η −(pn+n)(j−1)
[2π(uN + pn + n)]2m
u≥−p/s
It is easy to check that both (6.14) and (6.16) hold for r = 0. Summarizing (6.14)–(6.17), we have
that for p ≥ 0, 1 ≤ j ≤ s, 1 ≤ v ≤ n and 0 ≤ r ≤ n − 1,
(6.18)
Ap,v,r,j
= ap,v x∗r ΦT2(pn+v)−1,j ,
Bp,v,r,j
= ap,v x∗r ΦT2(pn+v),j .
To show (3.2), let f¯j = (E{f¯(t1,j )}, . . . , E{f¯(tn,j )})T , for 1 ≤ j ≤ s. It follows by (6.3) that
∞ Ps
−1
X
l=1 Φν,l (Σ + λIn ) f0,l T
¯
fj =
Φν,j
N γν
ν=1
!
s
∞
1 X 1 X ΦTν,j Φν,l
=
(Σ + λIn )−1 f0,l
s
n
γν
l=1
s
X
=
1
s
=
s
1X
s
ν=1
Σj,l (Σ + λIn )−1 f0,l
l=1
l=1
M Λj,l (Λc + λIn )−1 M ∗ f0,l ,
together with (6.18), leading to that
s
M ∗ f¯j
=
1X
Λj,l (Λc + λIn )−1 M ∗ f0,l
s
l=1
1 Ps
∗ T
=
=
fµ0
µ=1
∞
X
l=1
λj,l,n−1 x∗n−1 ΦT
µ,l
λ+λc,n−1
1 Ps
∞ X
n
X
+
0
f2(pn+v)−1
p=0 v=1
∞ X
n
X
p=0 v=1
+
0
f2(pn+v)
0
f2(pn+v)−1
∞ X
n
X
p=0 v=1
1
s
∞ X
n
X
l=1
s
0
f2(pn+v)
p=0 v=1
∞ X
n
X
p=0 v=1
=
Ps
..
.
0
f2(pn+v)−1
p=0 v=1
∞ X
n
X
+
=
1
s
λj,l,0 x0 Φµ,l
λ+λc,0
l=1
s
0
f2(pn+v)
Ps
l=1
1
s
1
s
Ps
l=1
λj,l,0 x∗0 ΦT
2(pn+v)−1,l
λ+λc,0
..
.
λj,l,n−1 x∗n−1 ΦT
2(pn+v)−1,l
λ+λc,n−1
l=1
Ps
λj,l,0 x∗0 ΦT
2(pn+v),l
λ+λc,0
..
.
λj,l,n−1 x∗n−1 ΦT
2(pn+v),l
λ+λc,n−1
Ap,v,0,j
λ+λc,0
..
.
Ap,v,n−1,j
λ+λc,n−1
Bp,v,0,j
λ+λc,0
..
.
Bp,v,n−1,j
λ+λc,n−1
ap,v
∗ T
λ+λc,0 x0 Φ2(pn+v)−1,j
..
.
ap,v
∗
T
λ+λc,n−1 xn−1 Φ2(pn+v)−1,j
ap,v
∗ T
λ+λc,0 x0 Φ2(pn+v),j
..
.
ap,v
∗
T
λ+λc,n−1 xn−1 Φ2(pn+v),j
.
On the other hand,
M ∗ f0,j
=
=
∞
X
fµ0 M ∗ ΦTµ,j
µ=1
∞ X
n
X
0
M ∗ ΦT2(pn+v)−1,j
f2(pn+v)−1
p=0 v=1
+
∞ X
n
X
0
M ∗ ΦT2(pn+v),j
f2(pn+v)
p=0 v=1
∗ ΦT
x
0
2(pn+v)−1,j
∞ X
n
X
..
0
=
f2(pn+v)−1
.
p=0 v=1
∗
T
xn−1 Φ2(pn+v)−1,j
∗ ΦT
x
0
2(pn+v),j
∞ X
n
X
..
0
.
+
f2(pn+v)
.
p=0 v=1
T
∗
xn−1 Φ2(pn+v),j
Therefore,
M ∗ (f¯j − f0,j ) =
∞ X
n
X
p=0 v=1
+
(6.19)
0
f2(pn+v)−1
∞ X
n
X
p=0 v=1
where bp,v,r =
ap,v
λ+λc,r
0
f2(pn+v)
bp,v,0 x∗0 ΦT2(pn+v)−1,j
..
.
bp,v,n−1 x∗n−1 ΦT2(pn+v)−1,j
bp,v,0 x∗0 ΦT2(pn+v),j
..
,
.
∗
T
bp,v,n−1 xn−1 Φ2(pn+v),j
− 1, for p ≥ 0, 1 ≤ v ≤ n and 0 ≤ r ≤ n − 1.
It holds the trivial observation bks+g,v,r = bg,v,r for k ≥ 0, 0 ≤ g ≤ s − 1, 1 ≤ v ≤ n
√
P∞
0
0
and 0 ≤ r ≤ n − 1. Define Cg,r =
−1f2(kN
k=0 (f2(kN +gn+n−r)−1 −
+gn+n−r) ) and Dg,r =
√
P∞
0
0
k=0 (f2(kN +gn+r)−1 + −1f2(kN +gn+r) ), for 0 ≤ g ≤ s − 1 and 0 ≤ r ≤ n − 1. Also denote Cg,r
and Dg,r as their conjugate. By (6.12) and (6.13), and direct calculations we get that, for 1 ≤ j ≤ s
and 1 ≤ r ≤ n − 1,
δj,r ≡
=
(6.20)
∞
n
X
X
p=0
r
n
2
0
f2(pn+v)−1
bp,v,r x∗r ΦT2(pn+v)−1,j
v=1
∞ h
X
p=0
+
n
X
0
bp,v,r x∗r ΦT2(pn+v),j
f2(pn+v)
v=1
0
(f2(pn+n−r)−1
−
0
+(f2(pn+r)−1
+
√
√
0
−1f2(pn+n−r)
)bp,n−r,r η −(pn+n−r)(j−1)
i
0
−1f2(pn+r)
)bp,r,r η (pn+r)(j−1) ,
!
leading to that
s
X
j=1
2
|δj,r |
=
s
∞
√
n X Xh 0
0
)bp,n−r,r η −(pn+n−r)(j−1)
(f2(pn+n−r)−1 − −1f2(pn+n−r)
2
p=0
j=1
0
+(f2(pn+r)−1
+
=
0
−1f2(pn+r)
)bp,r,r η (pn+r)(j−1)
i
2
s
s−1
n X X
Cg,r bg,n−r,r η −(gn+n−r)(j−1) + Dg,r bg,r,r η (gn+r)(j−1)
2
g=0
j=1
=
√
n
2
s−1
X
s
X
(Cg,r bg,n−r,r η −(gn+n−r)(j−1) + Dg,r bg,r,r η (gn+r)(j−1) )
g,g 0 =0 j=1
0
0
×(Cg0 ,r bg0 ,n−r,r η (g n+n−r)(j−1) + Dg0 ,r bg0 ,r,r η −(g n+r)(j−1) )
=
s−1
NX
(|Cg,r |2 b2g,n−r,r + Cg,r Ds−1−g,r bg,n−r,r bs−1−g,r,r
2
g=0
+Dg,r Cs−1−g,r bg,r,r bs−1−g,n−r,r + |Dg,r |2 b2g,r,r )
(6.21)
=
s−1
NX
|Cg,r bg,n−r,r + Ds−1−g,r bs−1−g,r,r |2
2
g=0
≤ N
= N
s−1
X
g=0
s−1
X
g=0
(|Cg,r |2 b2g,n−r,r + |Ds−1−g,r |2 b2s−1−g,r,r )
(|Cg,r |2 b2g,n−r,r + |Dg,r |2 b2g,r,r ).
It is easy to see that for 0 ≤ g ≤ s − 1 and 1 ≤ r ≤ n − 1,
|Cg,r |
2
= (
∞
X
+(
k=0
≤
∞
X
≤
k=0
×
(6.22)
and
2
|Dg,r |
0
2
f2(kN
+gn+n−r) )
0
2
0
2
2m
(|f2(kN
+gn+n−r)−1 | + |f2(kN +gn+n−r) | )(kN + gn + n − r)
k=0
∞
X
∞
X
k=0
k=0
∞
X
×
(6.23)
0
2
f2(kN
+gn+n−r)−1 )
(kN + gn + n − r)−2m
0
2
0
2
2m
(|f2(kN
+gn+n−r)−1 | + |f2(kN +gn+n−r) | )(kN + gn + n − r)
2m
(gn + n − r)−2m ,
2m − 1
≤
∞
X
k=0
×
0
2
0
2
2m
(|f2(kN
+gn+r) | + |f2(kN +gn+r)−1 | )(kN + gn + r)
2m
(gn + r)−2m .
2m − 1
2
For 1 ≤ g ≤ s − 1, we have ag,n−r ≤ λc,r , which further leads to |bg,n−r,r | ≤ 2. Meanwhile, by
(6.5), we have
0 ≤ λc,r − a0,r ≤ (2π(n − r))−2m + 2c̄m (2πn)−2m ≤ (1 + 2c̄m )(2π(n − r))−2m .
Then we have
λ + λc,r − a0,r
λ + λc,r
λ + (1 + 2c̄m )(2π(n − r))−2m
≤
λ + (2πr)−2m + (2π(n − r))−2m
λ + (2π(n − r))−2m
≤ (1 + 2c̄m )
,
λ + (2πr)−2m + (2π(n − r))−2m
|b0,r,r | =
leading to
2
λ + (2π(n − r))−2m
≤ r
(1 + 2c̄m )
λ + (2π(n − r))−2m + (2πr)−2m
λ + (2π(n − r))−2m
−2m
2
≤ r
(1 + 2c̄m )
λ + (2π(n − r))−2m + (2πr)−2m
r−2m b20,r,r
−2m
2
≤ (2π)2m (1 + 2c̄m )2 (λ + (πn)−2m ).
(6.24)
The last inequality can be proved in two different cases: 2r ≤ n and 2r > n. Similarly, it can be
shown that (n − r)−2m b20,n−r,r ≤ (2π)2m (1 + 2c̄m )2 (λ + (πn)−2m ).
Then we have by (6.22)–(6.24) that
n−1
s−1
XX
r=1 g=0
|Dg,r |2 b2g,r,r ≤
n−1
s−1 X
∞
XX
r=1 g=1 k=0
0
2
0
2
2m
(|f2(kN
+gn+r) | + |f2(kN +gn+r)−1 | )(kN + gn + r)
n−1 ∞
XX
2m
0
2
0
2
2m
×
(gn + r)−2m 22 +
(|f2(kN
+r) | + |f2(kN +r)−1 | )(kN + r)
2m − 1
r=1 k=0
2m −2m 2
×
r
b0,r,r
2m − 1
n−1
s−1 X
∞
XX
0
2
0
2
≤ c0m (λ + n−2m )
(|f2(kN
+gn+r) | + |f2(kN +gn+r)−1 | )
r=1 g=0 k=0
2m
×(2π(kN + gn + r))
(6.25)
,
2m
8m
, (1 + 2c̄m )2 2m−1
}. Similarly, one can show that
where c0m = max{(2π)−2m 2m−1
n−1
s−1
XX
r=1 g=0
(6.26)
|Cg,r |2 b2g,n−r,r
≤
c0m (λ
+n
−2m
)
n−1
s−1 X
∞
XX
r=1 g=0 k=0
2m
×(2π(kN + gn + r))
.
0
2
0
2
(|f2(kN
+gn+r) | + |f2(kN +gn+r)−1 | )
Combining (6.25) and (6.26) we get that
n−1
s
XX
r=1 j=1
(6.27)
|δj,r |2 ≤ 2c0m (λ + n−2m )N
n−1
s−1 X
∞
XX
r=1 g=0 k=0
2m
×(2π(kN + gn + r))
2
2
0
0
(|f2(kN
+gn+r) | + |f2(kN +gn+r)−1 | )
.
To the end of proof of (3.2), by (6.19) we have for 1 ≤ j ≤ s,
δj,0 ≡
=
∞
n
X
X
0
f2(pn+v)−1
bp,v,0 x∗0 ΦT2(pn+v)−1,j
+
n
X
0
f2(pn+v)
bp,v,0 x∗0 ΦT2(pn+v),j
p=0
v=1
∞
X
0
0
f2(pn+n)−1
bp,n,0 x∗0 ΦT2(pn+n)−1,j + f2(pn+n)
bp,n,0 x∗0 ΦT2(pn+n),j
v=1
p=0
=
r
=
r
!
√
n Xh 0
0
)bp,n,0 η −(p+1)n(j−1)
(f2(pn+n)−1 − −1f2(pn+n)
2
p=0
i
√
0
0
+(f2(pn+n)−1 + −1f2(pn+n)
)bp,n,0 η (p+1)n(j−1)
r s−1 " ∞
√
nX X 0
0
−(gn+n)(j−1)
=
(f2(kN +gn+n)−1 − −1f2(kN
+gn+n) )bg,n,0 η
2
g=0 k=0
#
∞
X
√
0
0
(gn+n)(j−1)
+
(f2(kN
+gn+n)−1 + −1f2(kN +gn+n) )bg,n,0 η
(6.28)
∞
k=0
s−1 h
X
n
2
g=0
i
Cg,0 bg,n,0 η −(gn+n)(j−1) + Dg,n bg,n,0 η (gn+n)(j−1) ,
which, together with Cauchy-Schwartz inequality, (6.22)–(6.23), and the trivial fact |bg,n,0 | ≤ 2 for
0 ≤ g ≤ s − 1, leads to
s
X
j=1
|δj,0 |
2
≤ n
s
s−1
X
X
j=1
Cg,0 bg,n,0 η
+n
g=0
s
s−1
X
X
j=1
s−1
s−1
X
X
= N
|Cg,0 |2 b2g,n,0 +
|Dg,n |2 b2g,n,0
g=0
(6.29)
−(gn+n)(j−1) 2
≤ 2c0m n−2m N
Dg,n bg,n,0 η (gn+n)(j−1)
2
g=0
g=0
s−1 X
∞
X
g=0 k=0
0
2
0
2
2m
(|f2(kN
.
+gn+n)−1 | + |f2(kN +gn+n) | ) × (2π(kN + gn + n))
Combining (6.27) and (6.29) we get that
s X
n
X
j=1 i=1
(E{f¯}(ti,j ) − f0 (ti,j ))2 =
n−1
s−1
XX
r=0 g=0
|δj,r |2
≤ 2c0m (λ + n−2m )N
(6.30)
×(2π(kN + gn +
∞
n X
s−1 X
X
0
2
0
2
(|f2(kN
+gn+i) | + |f2(kN +gn+i)−1 | )
i=1 g=0 k=0
i))2m = 2c0m (λ
+ n−2m )N J(f0 ).
Next we will apply (6.30) to show (3.2). Since fbj is the minimizer of `j,n,λ (f ), it satisfies for
1 ≤ j ≤ s,
n
1X
−
(Yi,j − fbj (ti,j ))Kti,j + λfbj = 0.
n
i=1
Taking expectations, we get that
n
1X
(E{fbj }(ti,j ) − f0 (ti,j ))Kti,j + λE{fbj },
n
i=1
therefore, E{fbj } is the minimizer to the following functional
n
`0j (f ) =
1 X
λ
(f (ti,j ) − f0 (ti,j ))2 + J(f ).
2n
2
i=1
Define gj = E{fbj }. Since `0j (gj ) ≤ `0j (f0 ), we get
n
λ
1 X
λ
(gj (ti,j ) − f0 (ti,j ))2 + J(gj ) ≤ J(f0 ).
2n
2
2
i=1
This means that J(gj ) ≤ J(f0 ), leading to
(6.31)
Note that E{f¯} =
1
s
Ps
and m ≥ 1 we get that
s
s
j=1
j=1
p
1 X (m)
1 X (m)
k
gj k2 ≤
kgj k2 ≤ J(f0 ).
s
s
j=1 gj .
Define g(t) = (E{f¯}(t) − f0 (t))2 . By [4, Lemma (2.24), pp. 58], (6.31)
Z 1
N −1
1 X
g(l/N ) −
g(t)dt
N
0
l=0
(6.32)
≤
2
N
Z
0
≤
2 1
k
N s
≤
2 1
k
N s
1
s
s
1X 0
1X
gj (t) − f0 (t) ×
gj (t) − f00 (t) dt
s
s
j=1
s
X
j=1
s
X
j=1
j=1
gj − f0 k2 × k
(m)
gj
1
s
(m) 2
k2
− f0
s
X
j=1
≤
gj0 − f00 k2
8J(f0 )
.
N
Combining (6.30) and (6.32) we get that
kE{f¯} − f0 k22 ≤ c2m J(f0 )(λ + n−2m + N −1 ),
where c2m = max{8, 2c0m }. This completes the proof of (3.2).
P
0
0
Proof of Theorem 3.2. Suppose f0 = ∞
ν=1 fν ϕν with fν satisfying
(
Cn−1 (2π(n + r))−2m , ν = 2(n + r) − 1, 1 ≤ r ≤ n/2,
0 2
(6.33)
|fν | =
0,
otherwise.
It is easy to see that J(f0 ) =
P
0
2
1≤r≤n/2 |f2(n+r)−1 | (2π(n
+ r))2m ≤ C.
Consider the decomposition (6.19) and let δj,r be defined as in (6.20) and (6.28). It can be easily
checked that Cg,r = 0 for 1 ≤ r ≤ n/2 and 0 ≤ g ≤ s − 1. Furthermore, for 1 ≤ r ≤ n/2,
λc,r − a1,r =
∞
X
(2π(un + r))−2m +
u=0
∞
X
−
u=1
∞
∞
X
X
(2π(un − r))−2m −
(2π(uN + n + r))−2m
u=1
u=0
(2π(uN − n − r))−2m ≥ (2πr)−2m .
Therefore,
b21,r,r
=
≥
(6.34)
λ + λc,r − a1,r
λ + λc,r
2
λ + (2πr)−2m
λ + 2(1 + c̄m )(2πr)−2m
2
≥
1
.
4(1 + c̄m )2
Using (6.21) and (6.34), we have
s
X
j=1
(f¯j − f0,j )T (f¯j − f0,j ) =
≥
=
s n−1
X
X
j=1 r=0
X
X
X
1≤r≤n/2
=
X
1≤r≤n/2
≥
where am =
1
16(3π)2m (1+c̄m )2
X
1≤r≤n/2
|δj,r |2
s−1
NX
|Cg,r bg,n−r,r + Ds−1−g,r bs−1−g,r,r |2
2
g=0
s−1
NX
|Ds−1−g,r |2 b2s−1−g,r,r
2
g=0
s−1
NX
|Dg,r |2 b2g,r,r
2
g=0
N
|D1,r |2 b21,r,r
2
≥
N
8(1 + c̄m )2
≥
16(3π)2m (1
X
1≤r≤n/2
NC
+ c̄m )2
0
|f2(n+r)−1
|2
n−2m ≡ am N Cn−2m ,
< 1 is an absolute constant depending on m only. Then the conclusion
follows by (6.32). Proof is completed.
6.2. Proofs in Section 4.
s
X
1≤r≤n/2 j=1
1≤r≤n/2
=
|δj,r |2
Proof of Theorem 4.1. For 1 ≤ j, l ≤ s, define
Ωj,l =
σ
ej,l,r =
∞
1 X ΦTν,j Φν,l
,
n
γν2
ν=1
∞ cos 2πk r − j−l
X
n
N
2
, r = 0, 1, . . . , n − 1.
4m
n
(2πk)
k=1
Clearly Ωj,l is a circulant matrix with elements σ
ej,l,0 , σ
ej,l,1 , . . . , σ
ej,l,n−1 . Furthermore, by arguments
(6.9)–(6.11) we get that
Ωj,l = M Γj,l M ∗ ,
(6.35)
where M is the same as in (6.4), and Γj,l = diag(δj,l,0 , δj,l,1 , . . . , δj,l,n−1 ), with δj,l,r , for r = 1, . . . , n−
1, given by the following
(6.36)
δj,l,r =
∞
∞
X
X
η −(qn−r)(j−l)
η (qn+r)(j−l)
+
,
[2π(qn − r)]4m
[2π(qn + r)]4m
q=1
q=0
and for r = 0, given by
(6.37)
δj,l,0 =
∞
X
η qn(j−l) + η −qn(j−l)
q=1
(2πqn)4m
Define A = diag((Σ + λIn )−1 , . . . , (Σ + λIn )−1 ) and
{z
}
|
.
s
Ω1,1 Ω1,2 · · ·
Ω1,s
Ω2,1 Ω2,2 · · ·
B=
··· ···
···
Ωs,1 Ωs,2 · · ·
Ω2,s
.
···
Ωs,s
Note that B is N × N symmetric. Under H0 , it can be shown that
kf¯k22
=
∞ Ps
X
ν=1
=
+ λIn )−1 l
N γν2
l=1 Φν,l (Σ
s
1 X T
j (Σ + λIn )−1
Ns
j,l=1
s
X
=
1
Ns
=
1 T
1 T
ABA =
∆,
Ns
Ns
2
∞
1 X ΦTν,j Φν,l
n
γν2
ν=1
!
Tj (Σ + λIn )−1 Ωj,l (Σ + λIn )−1 l
j,l=1
where = (T1 , . . . , Ts )T and ∆ ≡ ABA.
(Σ + λIn )−1 l
2
This implies that TN,λ = T ∆/(N s) with µN,λ = trace(∆)/(N s) and σN,λ
= 2trace(∆2 )/(N s)2 .
Define U = (TN,λ − µN,λ )/σN,λ . Then for any t ∈ (−1/2, 1/2),
log E{exp(tU )} = log E{exp(tT ∆/(N sσN,λ ))} − tµN,λ /σN,λ
1
= − log det(IN − 2t∆/(N sσN,λ )) − tµN,λ /σN,λ
2
2
= ttrace(∆)/(N sσN,λ ) + t2 trace(∆2 )/((N s)2 σN,λ
)
3
+O(t3 trace(∆3 )/((N s)3 σN,λ
)) − tµN,λ /σN,λ
3
)).
= t2 /2 + O(t3 trace(∆3 )/((N s)3 σN,λ
3 ) = o(1) in order to conclude the proof.
It remains to show that trace(∆3 )/((N s)3 σN,λ
2 ) and trace(∆3 ). We start from the
In other words, we need to study trace(∆2 ) (used in σN,λ
former. By direct calculations, we get
trace(∆2 ) = trace(A2 BA2 B)
s
s
X
X
trace
M (Λc + λIn )−2 Γl,j (Λc + λIn )−2 Γj,l M ∗
=
j=1
l=1
=
s
X
j,l=1
s n−1
X
X
trace (Λc + λIn )−2 Γl,j (Λc + λIn )−2 Γj,l =
j,l=1 r=0
For 1 ≤ g ≤ s and 0 ≤ r ≤ n − 1, define
Ag,r =
∞
X
p=0
1
.
[2π(pN + gn − r)]4m
Using (6.36) and (6.37), it can be shown that for r = 1, 2, . . . , n − 1,
s
X
j,l=1
2
|δj,l,r |
=
s
s
X
X
Ag,r η
−gn(j−l)
j,l=1 g=1
=
s
X
j,l=1
+
+
s
X
Ag,n−r η
s
X
0
Ag,r Ag0 ,r η −(g−g )n(j−l) +
= s2
(6.38)
≥ s2
g=1
s
X
Ag,r Ag0 ,n−r η
A2g,r + s2
−(g+g 0 −1)n(j−l)
+
(6.39)
g=1
A2g,r =
s
X
Ag,n−r Ag0 ,r η
(g+g 0 −1)n(j−l)
s
X
g=1
s
X
A2g,n−r + 2s2
s
X
Ag,r As+1−g,n−r
g=1
A2g,n−r .
g=1
Since
s
X
0
Ag,n−r Ag0 ,n−r η (g−g )n(j−l)
g,g 0 =1
A2g,r + s2
g=1
s
X
g,g 0 =1
g,g 0 =1
s
X
2
(g−1)n(j−l)
g=1
g,g 0 =1
s
X
|δj,l,r |2
.
(λ + λc,r )4
s
X
g=1
∞
X
p=0
2
1
1
≥
,
4m
[2π(pN + gn − r)]
[2π(n − r)]8m
we get that
n−1
X
2
trace(∆ ) ≥
P
P
s2 ( sg=1 A2g,r + sg=1 A2g,n−r )
(λ + λc,r )4
r=1
1
n−1
X [2π(n−r)]
8m
2
≥ s
(λ + λc,r
r=1
≥
2s2
(2 + 2c̄m )4
=
s2
8(1 + c̄m )4
X
1≤r≤n/2
X
1≤r≤n/2
s2
h−1
8(1 + c̄m )4
≥
+
Z
1
(2πr)8m
)4
(λ
1
(2πr)8m
1
4
+ (2πr)
2m )
1
(1 + (2πrh)2m )4
nh/2
h
1
dx.
(1 + (2πx)2m )4
Meanwhile, (6.38) indicates that for 1 ≤ r ≤ n − 1,
s
X
j,l=1
|δj,l,r |2 ≤ 2s2
s
X
A2g,r + 2s2
g=1
s
X
A2g,n−r .
g=1
From (6.39) we get that for 1 ≤ r ≤ n − 1,
s
X
g=1
A2g,r ≤
cm
,
(2π(n − r))8m
where cm > 0 is a constant depending on m only.
Similar analysis to (6.38) shows that
s
X
j,l=1
2
|δj,l,0 |
s
s
X
X
=
Ag,0 (η gn(j−l) + η −gn(j−l) )
2
j,l=1 g=1
= 2s
2
≤ 4s2
s
X
g=1
s
X
g=1
A2g,0
+ 2s
2
s−1
X
Ag,0 As−g,0 + 2s2 A2s,0
g=1
A2g,0 ≤ cm s2 (2πn)−8m .
Therefore,
2
trace(∆ ) ≤
4s2
Ps
(λ
2
g=1 Ag,0
+ λc,0 )4
n
X
2
+ 2s
n−1
X
r=1
Ps
2
g=1 Ag,r
(λ +
1
(1 + (2πrh)2m )4
r=1
Z nh
1
≤ 4cm s2 h−1
dx.
2m )4
(1
+
(2πx)
0
≤ 4cm s
2
+
Ps
2
g=1 Ag,n−r
λc,r )4
By the above statements, we get that
2
σN,λ
= 2trace(∆2 )/(N s)2
(6.40)
(
n
,
N2
1
,
N 2h
if nh → 0,
if limN nh > 0.
To the end, we look at the trace of ∆3 . By direct examinations, we have
trace(∆3 ) = trace(ABA2 BA2 BA)
s
s
X
X
=
trace [
M (Λc + λIn )−2 Γj,l (Λc + λIn )−2 Γl,k M ∗ ]
j,k=1
=
l=1
λIn )−2 Γk,j M ∗
×M (Λc +
s
X
trace (Λc + λIn )−2 Γj,l (Λc + λIn )−2 Γl,k (Λc + λIn )−2 Γk,j
j,k,l=1
=
s
n−1
X
X δj,l,r δl,k,r δk,j,r
.
(λ + λc,r )6
j,k,l=1 r=0
For r = 1, 2, . . . , n − 1, it can be shown that
∞
∞
qn(j−l)
−qn(j−l)
X
X
η
η
+
δj,l,r δl,k,r δk,j,r =
(2π(qn − r))4m
(2π(qn + r))4m
q=0
q=1
∞
∞
−qn(l−k)
qn(l−k)
X
X
η
η
×
+
4m
(2π(qn − r))
(2π(qn + r))4m
q=1
q=0
∞
∞
−qn(k−j)
qn(k−j)
X
X
η
η
.
×
(6.41)
+
4m
(2π(qn − r))
(2π(qn + r))4m
q=1
q=0
We next proceed to show that for 1 ≤ r ≤ n − 1,
(6.42)
s
X
l,j,k=1
δj,l,r δl,k,r δk,j,r
96m
≤
12m − 1
4m
4m − 1
3
3
s
1
1
+
12m
(2π(n − r))
(2πr)12m
.
Using the trivial fact that Ag,r ≤
s
∞
X
X
=
=
=
=
≤
≤
j,l,k=1 q1 =1
s
s
X
X
4m
4m−1
1
,
(2π(gn−r))4m
the first term in (6.41) satisfies
∞
∞
X
X
η −q1 n(j−l)
η −q2 n(j−l)
η −q3 n(j−l)
(2π(q1 n − r))4m
(2π(q2 n − r))4m
(2π(q3 n − r))4m
Ag1 ,r η −g1 n(j−l)
j,l,k=1 g1 =1
s
X
q2 =1
s
X
Ag2 ,r η −g2 n(l−k)
g2 =1
Ag1 ,r Ag2 ,r Ag3 ,r
g1 ,g2 ,g3 =1
s
X
Ag1 ,r Ag2 ,r Ag3 ,r
3
g1 ,g2 ,g3 =1
s
X
s3
A3g,r
g=1
4m
4m − 1
×
s
X
q3 =1
s
X
Ag3 ,r η −g3 n(k−j)
g3 =1
η −g1 n(j−l) η −g2 n(l−k) η −g3 n(k−j)
j,l,k=1
s
s
s
X
X
X
(g3 −g1 )n(j−1)
(g1 −g2 )n(l−1)
(g2 −g3 )n(k−1)
η
η
j=1
η
l=1
k=1
s
X
1
(2π(gn − r))12m
g=1
3
12m
4m
1
s3
.
12m − 1 4m − 1
(2π(n − r))12m
s3
Similarly, one can show that all other terms in (6.41) are upper bounded by
3
12m
4m
1
1
3
s
+
.
12m − 1 4m − 1
(2π(n − r))12m (2πr)12m
Therefore, (6.42) holds. It can also be shown by (6.37) and similar analysis that
s
X
(6.43)
j,l,k=1
δj,l,0 δl,k,0 δk,j,0 ≤ s3 (2πn)−12m .
Using (6.42) and (6.43), one can get that
s
n−1
X
X δj,l,r δl,k,r δk,j,r
(λ + λc,r )6
3
trace(∆ ) =
l,j,k=1 r=0
. s
3
r=1
n
X
1
(2π(n−r))12m
+
1
(2πr)12m
(λ + λc,r )
+s
3
(λ
1
(2πn)12m
+ λc,0 )12m
1
(1 + (2πrh)2m )6
r=1
(
Z nh
s3 n,
if nh → 0,
1
3 −1
. s h
dx
2m
6
(1 + (2πx) )
s3 h−1 , if limN nh > 0.
0
. s3
(6.44)
n−1
X
Combining (6.40) and (6.44), and using the assumptions n → ∞, h → 0, we get that
(
n−1/2 ,
if nh → 0,
3
3 3
trace(∆ )/((N s) σN,λ ) .
= o(1).
h1/2 , if limN nh > 0.
Proof is completed.
Proof of Theorem 4.2. Throughout the proof, we assume that data Y1 , . . . , YN are generated from the sequence of alternative hypotheses: f ∈ B and kf k2 ≥ Cε dN,λ . Define fj =
(f (t1,j ), . . . , f (tn,j ))T for 1 ≤ j ≤ s. Then it can be shown that
N sTN,λ = N s
=
=
=
∞
X
f¯ν2
ν=1
s
X
YjT (Σ + λIn )−1 Ωj,l (Σ + λIn )−1 Yl
j,l=1
s
X
YjT M (Λc + λIn )−1 Γj,l (Λc + λIn )−1 M ∗ Yl
j,l=1
s
X
fTj M (Λc + λIn )−1 Γj,l (Λc + λIn )−1 M ∗ fl
j,l=1
s
X
+
fTj M (Λc + λIn )−1 Γj,l (Λc + λIn )−1 M ∗ l
j,l=1
s
X
Tj M (Λc + λIn )−1 Γj,l (Λc + λIn )−1 M ∗ fl
+
j,l=1
s
X
+
Tj M (Λc + λIn )−1 Γj,l (Λc + λIn )−1 M ∗ l
j,l=1
≡ T1 + T2 + T3 + T4 .
(6.45)
Next we will analyze all the four terms in the above. Let f =
1 ≤ l ≤ s, define dl,r =
dl,r =
x∗r fl .
∞ X
n
X
Then it holds that
f2(pn+v)−1 x∗r ΦT2(pn+v)−1,l
+
p=0 v=1
Using (6.12) and (6.13), we get that for 1 ≤ r ≤ n − 1,
dl,r
(6.46)
∞ X
n
X
P∞
ν=1 fν ϕν .
For 0 ≤ r ≤ n − 1 and
f2(pn+v) x∗r ΦT2(pn+v),l .
p=0 v=1
r
n
η −(pn+v)(l−1) I(r + v = n) + η (pn+v)(l−1) I(r = v)
=
f2(pn+v)−1
2
p=0 v=1
r
∞ n−1
X
X
n −(pn+v)(l−1)
+
f2(pn+v) − −
η
I(r + v = n) − η (pn+v)(l−1) I(r = v)
2
p=0 v=1
r ∞ h
√
nX
=
(f2(pn+n−r)−1 − −1f2(pn+n−r) )η −(pn+n−r)(l−1)
2
p=0
i
√
+(f2(pn+r)−1 + −1f2(pn+r) )η (pn+r)(l−1) ,
∞ n−1
X
X
and for r = 0,
dl,0 =
∞
X
f2(pn+n)−1 x∗0 ΦT2(pn+n)−1,l +
=
(6.47)
f2(pn+n) x∗0 ΦT2(pn+n),l
p=0
p=0
r
∞
X
n
2
∞ h
X
p=0
(f2(pn+n)−1 −
+(f2(pn+n)−1 +
√
√
−1f2(pn+n) )η −(pn+n)(l−1)
i
−1f2(pn+n) )η (pn+n)(l−1) .
We first look at T1 . It can be examined directly that
s
X
T1 =
(dj,0 , . . . , dj,n−1 )diag
j,l=1
(6.48)
=
n−1
X
r=0
Ps
δj,l,n−1
δj,l,0
,...,
2
(λ + λc,0 )
(λ + λc,n−1 )2
× (dl,0 , . . . , dl,n−1 )T
j,l=1 δj,l,r dj,r dl,r
.
(λ + λc,r )2
Using similar arguments as (6.14)–(6.18), one can show that for p ≥ 0, 1 ≤ v ≤ n, 0 ≤ r ≤ n − 1
and 1 ≤ j ≤ s,
s
1X
δj,l,r x∗r ΦT2(pn+v)−1,l = bp,v x∗r ΦT2(pn+v)−1,j ,
s
l=1
s
1X
δj,l,r x∗r ΦT2(pn+v),l = bp,v x∗r ΦT2(pn+v),j ,
s
(6.49)
l=1
where
bp,v =
P
P
1
1
u≥−p/s (2π(uN +pn+v))4m +
u≥(p+1)/s (2π(uN −pn−v))4m , for 1 ≤ v ≤ n −
P
P
1
1
u≥(p+2)/s (2π(uN −pn−n))4m , for v = n.
u≥−p/s (2π(uN +pn+n))4m +
1,
By (6.49), we have
s
X
δj,l,r dj,r dl,r =
s
X
dj,r
j=1
j,l=1
=
s
X
l=1
dj,r
j=1
+
s
X
s
X
l=1
∞ X
n
X
p=0 v=1
=
s
X
j=1
+
dj,r
∞ X
n
X
δj,l,r dl,r
∞ X
n
X
δj,l,r
f2(pn+v)−1 x∗r Φ2(pn+v)−1,l
p=0 v=1
f2(pn+v) x∗r ΦT2(pn+v),l
∞ X
n
X
f2(pn+v)−1
p=0 v=1
f2(pn+v)
p=0 v=1
= s
s
X
j=1
+
dj,r
∞ X
n
X
p=0 v=1
p=0 v=1
δj,l,r x∗r ΦT2(pn+v)−1,l
l=1
s
X
l=1
∞ X
n
X
s
X
δj,l,r x∗r ΦT2(pn+v),l
f2(pn+v)−1 bp,v x∗r ΦT2(pn+v)−1,j
f2(pn+v) bp,v x∗r ΦT2(pn+v),j .
It then follows from (6.46) and (6.47), trivial facts bs−1−g,r = bg,n−r and Cg,n−r = Dg,r (both Cg,r
and Dg,r are defined similarly as those in the proof of Theorem 3.1, but with f0 therein replaced
by f ), and direct calculations that for 1 ≤ r ≤ n − 1
s
X
√
sn X X h
(f2(pn+n−r)−1 + −1f2(pn+n−r) )η (pn+n−r)(j−1)
2
j=1 p=0
i
√
+(f2(pn+r)−1 − −1f2(pn+r) )η −(pn+r)(j−1)
∞
s
δj,l,r dj,r dl,r =
j,l=1
∞ h
X
√
×
(f2(pn+n−r)−1 − −1f2(pn+n−r) )bp,n−r η −(pn+n−r)(j−1)
p=0
+(f2(pn+r)−1 +
√
−1f2(pn+r) )bp,r η (pn+r)(j−1)
i
s s−1 ∞
√
N XXXh
(f2(kN +gn+n−r)−1 + −1f2(kN +gn+n−r) )η (gn+n−r)(j−1)
2
j=1 g=0 k=0
i
√
+(f2(kN +gn+r)−1 − −1f2(kN +gn+r) )η −(gn+r)(j−1)
=
×
s−1 X
∞ h
X
g=0 k=0
(f2(kN +gn+n−r)−1 −
√
−1f2(kN +gn+n−r) )bks+g,n−r η −(gn+n−r)(j−1)
i
√
+(f2(kN +gn+r)−1 + −1f2(kN +gn+r) )bks+g,r η (gn+r)(j−1)
s
s−1
s−1
X
X
X
N
=
Cg,r η (gn+n)(j−1) +
Dg,r η −gn(j−1)
2
g=0
j=1 g=0
s−1
s−1
X
X
×
bg,n−r Cg,r η −(gn+n)(j−1) +
bg,r Dg,r η gn(j−1)
g=0
=
g=0
s−1
s−1
X
N s X
bg,n−r |Cg,r |2 +
bs−1−g,r Cg,r Ds−1−g,r
2
g=0
g=0
s−1
s−1
X
X
+
bs−1−g,n−r Dg,r Cs−1−g,r +
bg,r |Dg,r |2 ,
g=0
which leads to
(6.50)
n−1
X
r=1
g=0
Ps
j,l=1 δj,l,r dj,r dl,r
(λ + λc,r )2
Since J(f ) ≤ C, equivalently,
(6.51)
P∞
X
n−1
Ns X
=
2
2
ν=1 (f2ν−1
r=1
Ps−1
g=0 bg,r |Cs−1−g,r +
(λ + λc,r )2
Dg,r |2
.
2 )(2πν)2m ≤ C, we get that
+ f2ν
2
2
(f2r−1
+ f2r
) ≥ kf k22 − C(2πn)−2m .
1≤r≤n/2
Meanwhile, for 1 ≤ r ≤ n/2, using similar arguments as (6.25) and (6.26) one can show that there
exists a constant c0m relying on C and m s.t.
|Cs−1,r + D0,r |2 =
f2r−1 +
∞
X
k=0
+ f2r +
∞
X
k=1
and
2
|Cs−1,r + D0,r | (2πr)
2m
"
f2(kN +r) −
∞
X
k=0
f2(kN +r)−1
k=1
f2(kN +N −r)
!2
!2
1 2
2
(f
+ f2r−1
) − c0m N −2m ,
2 2r−1
≥
(6.52)
f2(kN +N −r)−1 +
∞
X
∞
∞
X
X
2
≤ 4 (
f2(kN +N −r)−1 ) + (
f2(kN +N −r) )2
k=0
∞
X
+(
≤
k=0
2
f2(kN +r)−1 ) + (
k=0
∞
X
4
k=0
+4
∞
X
k=0
+4
+4
∞
X
k=0
∞
X
∞
X
f2(kN +r) )
k=0
2
f2(kN
+N −r)−1 (2π(kN
#
(2πr)2m
2m
+ N − r))
∞
X
k=0
(2π(kN + N − r))−2m
∞
X
2
2m
f2(kN +N −r) (2π(kN + N − r))
(2π(kN + N − r))−2m
k=0
∞
X
2
2m
f2(kN
(2π(kN
+
r))
(2π(kN + r))−2m
+r)−1
2
f2(kN
+r) (2π(kN
8m
2m − 1
8m
+
2m − 1
2m
+ r))
k=0
≤
2
k=0
∞
X
−2m
(2π(kN + r))
k=0
∞
X
!
× (2πr)2m
2
2
−2m
(f2(kN
+N −r)−1 + f2(kN +N −r) )γkN +N −r (2π(N − r))
k=0
∞
X
2
2
−2m
(f2(kN
+r)−1 + f2(kN +r) )γkN +r (2πr)
k=0
!
× (2πr)2m ,
which, together with the fact N ≥ 2r for 1 ≤ r ≤ n/2, leads to that
X
(6.53)
|Cs−1,r + D0,r |2 (2πr)2m ≤ c0m .
1≤r≤n/2
Furthermore, it can be verified that for 1 ≤ r ≤ n/2,
λ2c,r − b0,r
(2πr)−2m ≤
(λ + λc,r )
((2πr)−2m + (2π(n − r))−2m + c̄m (2πn)−2m )2 − (2πr)−4m
(2πr)−2m
((2πr)−2m + (2π(n − r))−2m )2
≤ c0m n−2m ,
(6.54)
which leads to that
(λ + λc,r )2 − b0,r
(2πr)−2m =
(λ + λc,r )2
(6.55)
λ2c,r − b0,r
λ2 + 2λλc,r
−2m
(2πr)
+
(2πr)−2m
(λ + λc,r )2
(λ + λc,r )2
≤ 2λ + c0m n−2m .
Then, using (6.48)–(6.50) and (6.51)–(6.55) one gets that
T1 ≥
=
(6.56)
≥
Ns
2
X
1≤r≤n/2
b0,r |Cs−1,r + D0,r |2
(λ + λc,r )2
X
X
)2
(λ + λc,r − b0,r
Ns
|Cs−1,r + D0,r |2 −
|Cs−1,r + D0,r |2
2
(λ + λc,r )2
1≤r≤n/2
1≤r≤n/2
Ns 1
2
0
−2m
0
−2m
0
0
−2m
kf k2 − cm n
− cm N
− cm (2λ + cm n
) ≥ C 0 N sσN,λ ,
2
2
where the last inequality follows by kf k22 ≥ 4C 0 (λ + n−2m + σN,λ ) for a large constant C 0 satisfying
2C 0 > 2c0m + (c0m )2 . To achieve the desired power, we need to enlarge C 0 further. This will be
described later. Combining (6.56) with (6.40) and (6.56) we get that
(6.57)
T1 s uniformly for f ∈ B with kf k22 ≥ 4C 0 d2N,λ .
Terms T2 and T3 can be handled similarly. To handle T2 , note that T2 = fT ∆, where f =
(fT1 , . . . , fTs )T , = (T1 , . . . , Ts )T , and ∆ is defined in the proof of Theorem 4.1. We need to establish
∆ ≤ sIN . Define an arbitrary a = (aT1 , . . . , aTs )T ∈ RN , where each aj is an (real) n-vector. Let
ξj = M ∗ aj and ξ = (ξ1T , . . . , ξsT )T . For simplicity, put ξj = (ξj,0 , . . . , ξj,n−1 )T for 1 ≤ j ≤ s. Then
based on (6.37) and (6.36), we have
aT ∆a = ξ ∗ [(Λc + λIn )−1 Γj,l (Λc + λIn )−1 ]1≤j,l≤s ξ
=
n−1
X
s
X
n−1
X
s
ξj,r ξl,r
r=0 j,l=1
≤
r=1
+
≤ s
2s
δj,l,r
(λ + λc,r )2
1
(2π(n−r))4m
P∞
(λ
Ps
1
q=1 (2πqn)4m
n−1
s
XX
r=0 j=1
P
1
(2πr)4m
+ λc,r )2
+
(λ + λc,0 )2
s
2
j=1 |ξj,r |
2
j=1 |ξj,0 |
|ξj,r |2 = sξ ∗ ξ = saT a,
therefore, ∆ ≤ sIN . This leads to that, uniformly for f ∈ B with kf k22 ≥ 4C 0 d2N,λ , Ef {T22 } =
fT ∆2 f ≤ sT1 . Together with (6.57), we get that
1/2
sup
(6.58)
Pf |T2 | ≥ ε−1/2 T1 s1/2 ≤ ε.
f ∈B
√
kf k2 ≥2 C 0 dN,λ
Note that (6.58) also applies to T3 . By Theorem 4.1, (T4 /(N s) − µN,λ )/σN,λ is OP (1) uniformly for
f . Therefore, we can choose Cε0 > 0 s.t. Pf (|T4 /(N s) − µN,λ |/σN,λ ≥ Cε0 ) ≤ ε as N → ∞.
It then follows by (6.56), (6.57) and (6.58) that for suitable large C 0 (e.g., C 0 ≥ 2(Cε0 + z1−α/2 )),
√
uniformly for f ∈ B with kf k2 ≥ 2 C 0 dN,λ ,
Pf |TN,λ − µN,λ |/σN,λ ≥ z1−α/2 ≤ 3ε, as N → ∞.
Proof is completed.
Proof of Theorem 4.3. Define BN = bN 2/(4m+1) c, the integer part of N 2/(4m+1) . We prove
the theorem in two cases: limN nh > 0 and nh = o(1).
Case I: limN nh > 0.
In this case, it can be shown by s N (4m−1)/(4m+1) (equivalently n BN , leading to BN h nh
hence BN h → ∞) that n−6m h−4m+1/2 N (BN /n)6m . Choose g to be an integer satisfying
n−6m h−4m+1/2 N g 6m (BN /n)6m .
(6.59)
Construct an f =
(6.60)
P∞
fν2 =
ν=1 fν ϕν
(
with
C
n−1 (2π(gn
+ r))−2m , ν = 2(gn + r) − 1, r = 1, 2, . . . , n − 1,
0,
otherwise.
It can be seen that
(6.61)
J(f ) =
s−1
X
2
f2(gn+r)−1
(2π(gn + r))2m = C,
r=1
and
n−1
X
kf k22 =
2
f2(gn+r)−1
r=1
n−1
C X
(2π(gn + r))−2m
n−1
=
r=1
(6.62)
2
≥ C(2π(gn + n))−2m = βN,λ
N −4m/(4m+1) ,
2
where βN,λ
= C[BN /(2π(gn + n))]2m . Due to (6.59) and n BN , we have gn + n 2BN , which
further implies βN,λ → ∞ as N → ∞.
Using the trivial fact bs−2−g,n = bg,n for 0 ≤ g ≤ s − 2, one can show that
s−2
s
s−1
X
X
Ns X
2
δj,l,0 dj,0 dl,0 =
2
|Cg0 ,0 |2 bg0 ,n +
Cs−2−g0 ,0 Cg0 ,0 bg0 ,n + Cs−1,0
bs−1,n
2
j,l=1
g 0 =0
g 0 =0
s−2
X
2
Ds−2−g0 ,n Dg0 ,n bg0 ,n + Ds−1,n
bs−1,n
+
g 0 =0
(6.63)
≤ 2N s
s−1
X
g 0 =0
|Cg0 ,0 |2 bg0 ,n = 0,
where the last equality follows by a trivial observation Cg0 ,0 = 0. It follows by (6.63), (6.48) and
(6.50) that
n−1
T1 =
≤
=
(6.64)
=
Ns X
2
Ps−1
g 0 =0 bg 0 ,r |Cs−1−g 0 ,r
(λ + λc,r )2
+ Dg0 ,r |2
r=1
P
P
n−1
n−1
2
2
X s−1
X s−1
g 0 =0 bg 0 ,r |Cs−1−g 0 ,r |
g 0 =0 bg 0 ,r |Dg 0 ,r |
Ns
+
N
s
(λ + λc,r )2
(λ + λc,r )2
r=1
r=1
P
P
n−1
n−1
2
2
X s−1
X s−1
g 0 =0 bs−1−g 0 ,n−r |Cg 0 ,n−r |
g 0 =0 bg 0 ,r |Dg 0 ,r |
Ns
+
N
s
(λ + λc,r )2
(λ + λc,r )2
r=1
r=1
P
2
n−1
n−1
2
X s−1
X bg,r f2(gn+r)−1
g 0 =0 bg 0 ,r |Dg 0 ,r |
2N s
= 2N s
,
(λ + λc,r )2
(λ + λc,r )2
r=1
r=1
where the last equality follows from the design of f , i.e., (6.60). Now it follows from (6.64) and the
fact bg,r ≤ c0m (2π(gn + r))−4m , for some constant c0m depending on m only, that
T1 ≤ 2N s
=
≤
(6.65)
n−1
X c0m (2π(gn
r=0
C
+ r))−4m n−1
(2π(gn + r))−2m
(λ + λc,r )2
n−1
2N sc0m C X (2π(gn + r))−6m
n−1
(λ + λc,r )2
r=1
0
2cm C(2π)−6m N s(gn)−6m h−4m
sh−1/2 N sσN,λ ,
where the last “” follows from (4.3).
By (6.58) we have that
1/2
|T2 + T3 | = T1 s1/2 OPf (1) = oPf (sh−1/4 ) = oPf (N sσN,λ ).
Hence, by (6.45) and Theorem 4.1 we have
TN,λ − µN,λ
σN,λ
=
=
T1 + T2 + T3 T4 /(N s) − µN,λ
+
N sσN,λ
σN,λ
T4 /(N s) − µN,λ
d
+ oPf (1) −→ N (0, 1).
σN,λ
Consequently, as N → ∞
inf
f ? ∈B
kf ? k2 ≥βN,λ N −2m/(4m+1)
Pf ? (φN,λ = 1) ≤ Pf (φN,λ = 1) → α.
This shows the desired result in Case I.
Case II: nh = o(1).
The proof is similar to Case I although a bit technical difference needs to be emphasized. Since
n BN , it can be shown that N n−2m−1/2 (BN /n)6m . Choose g to be an integer satisfying
(6.66)
N n−2m−1/2 g 6m (BN /n)6m .
Let f =
P∞
J(f ) = C
ν=1 fν ϕν
and kf k22
with fν satisfying (6.60). Similar to (6.61) and (6.62) one can show that
2 N −4m/(4m+1) , where β 2
2m . It is clear that
≥ βN,λ
N,λ = C[BN /(2π(gn + n))]
βN,λ → ∞ as N → ∞. Then similar to (6.63), (6.48), (6.50) and (6.65) one can show that
T1
n−1
X
2
bg,r f2(gn+r)−1
≤
2N s
≤
n−1
2N sc0m C X (2π(gn + r))−6m
n−1
(λ + λc,r )2
≤
r=1
(λ + λc,r )2
r=1
0
2cm C(2π)−2m N sg −6m n−2m
sn1/2 N sσN,λ ,
where the last line follows by (6.66) and (4.3). Then the desired result follows by arguments in the
rest of Case I. Proof is completed.
REFERENCES
[1] Brockwell, P. J. and Davis, R. A. (1987). Time Series: Theory and Methods. Springer, New York.
[2] Cai, T. T. and Yuan, M. (2011). Optimal Estimation of the Mean Function Based on Discretely Sampled Functional Data: Phase Transition. Annals of Statistics, 39, 2330–2355.
[3] de Jong, P. (1987). A central limit theorem for generalized quadratic forms. Probability Theory & Related Fields,
75, 261–277.
[4] Eggermont, P. P. B. and LaRiccia, V. N. (2009). Maximum Penalized Likelihood Estimation: Volume II. Springer
Series in Statistics.
[5] Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning. Springer.
[6] Ingster, Yu I. (1993). Asymptotically minimax hypothesis testing for nonparametric alternatives I–III. Mathematical Methods of Statistics, 2, 85–114; 3, 171–189; 4, 249–268.
[7] Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference. Springer: New York.
[8] Liu, M., Shang, Z. and Cheng, G. (2017). How Many Machines Can We Use in Parallel Computing? Preprint.
[9] Shang, Z. and Cheng, G. (2013). Local and Global Asymptotic Inference in Smoothing Spline Models. Annals of
Statistics, 41, 2608–2638.
[10] Shang, Z. and Cheng, G. (2015). A Bayesian Splitotic Theory for Nonparametric Models. Arxiv:1508.04175.
[11] Shao, J. (2003). Mathematical Statistics, 2nd Ed. Springer Texts in Statistics. Springer, New York.
[12] Sollich, P. and Williams, C. K. (2005). Understanding Gaussian process regression using the equivalent kernel.
In Deterministic and statistical methods in machine learning. Springer, 211-228.
[13] Wahba, G. (1990). Spline Models for Observational Data. SIAM, Philidelphia.
[14] Xu, G., Shang, Z. and Cheng, G. (2017). Optimal tuning for divide-and-conquer kernel ridge regression with
massive data. arXiv preprint arXiv:1612.05907
[15] Yang, Y., Shang, Z. and Cheng, G. (2017). Non-asymptotic theory for nonparametric testing. https://arxiv.org/abs/1702.01330.
[16] Zhang, Y., Duchi, J. C., Wainwright, M. J. (2015). Divide and Conquer Kernel Ridge Regression: A Distributed
Algorithm with Minimax Optimal Rates. Journal of Machine Learning Research, 16, 3299–3340.
[17] Zhao, T., Cheng, G. and Liu, H. (2016) A Partially Linear Framework for Massive Heterogeneous Data. Annals
of Statistics, 44, 1400-1437.
[18] Zhou, D.-X. (2002). The Covering Number in Learning Theory. Journal of Complexity, 18, 739-767.
Department of Mathematical Sciences, Indiana University-Purdue University at Indianapolis, 420 University Blvd, Indianapolis, IN 46202. Email: [email protected]
Department of Statistics, Purdue University, 250 N. University Street, West Lafayette, IN 47907. Email: [email protected]
A Homological Theory of Functions
Greg Yang
Harvard University
[email protected]
April 7, 2017
arXiv:1701.02302v3 [math.AC] 6 Apr 2017
Abstract
In computational complexity, a complexity class is given by a set of problems or functions,
and a basic challenge is to show separations of complexity classes A 6= B especially when A
is known to be a subset of B. In this paper we introduce a homological theory of functions
that can be used to establish complexity separations, while also providing other interesting
consequences. We propose to associate a topological space SA to each class of functions A, such
that, to separate complexity classes A ⊆ B0 , it suffices to observe a change in “the number of
holes”, i.e. homology, in SA as a subclass B ⊆ B0 is added to A. In other words, if the homologies
of SA and SA∪B are different, then A 6= B0 . We develop the underlying theory of functions
based on combinatorial and homological commutative algebra and Stanley-Reisner theory, and
recover Minsky and Papert’s result [12] that parity cannot be computed by nonmaximal degree
polynomial threshold functions. In the process, we derive a “maximal principle” for polynomial
threshold functions that is used to extend this result further to arbitrary symmetric functions.
A surprising coincidence is demonstrated, where the maximal dimension of “holes” in SA upper
bounds the VC dimension of A, with equality for common computational cases such as the class
of polynomial threshold functions or the class of linear functionals in F2 , or common algebraic
cases such as when the Stanley-Reisner ring of SA is Cohen-Macaulay. As another interesting
application of our theory, we prove a result that a priori has nothing to do with complexity
separation: it characterizes when a vector subspace intersects the positive cone, in terms of
homological conditions. By analogy to Farkas’ result doing the same with linear conditions, we
call our theorem the Homological Farkas Lemma.
1
Introduction
1.1
Intuition
Let A ⊆ B0 be classes of functions. To show that B0 6= A, it suffices to find some B ⊆ B0 such that
A ∪ B 6= A.
In other words, we want to add something to A and watch it change.
Let’s take a step back
Consider a more general setting, where A and B are “nice” subspaces of a larger topological
space C. We can produce a certificate of A ∪ B 6= A by observing a difference in the number of
“holes” of A ∪ B and A. Figure 1 shows two examples of such certificates.
[Figure 1: Certifying A ∪ B 6= A by noting that the numbers of 1-dimensional holes are different between A ∪ B and A. (a) A and B are both contractible (do not have holes), but their union A ∪ B has a hole. (b) A has a hole in its center, but B covers it, so that A ∪ B is now contractible.]
Sometimes, however, there could be no difference between the number of holes in A ∪ B and
A. For example, if B in Figure 1a is slightly larger, then A ∪ B no longer has a hole in the center
(see Figure 2). But if we take a slice of A ∪ B, we observe a change in the number of connected
components (zeroth dimensional holes) from A to A ∪ B.
[Figure 2: A ∪ B and A are both contractible, but if we look at a section L of A ∪ B, we see that L ∩ A has 2 connected components, but L ∩ (A ∪ B) has only 1.]
From this intuition, one might daydream of attacking complexity separation problems this way:
1. For each class A, associate a unique topological space (specifically, a simplicial complex) SA .
2. Compute the number of holes in SA and SA∪B of each dimension, and correspondingly for each
section by an affine subspace.
3. Attempt to find a difference between these quantities (a “homological” certificate).
It turns out this daydream is not so dreamy after all!
This work is devoted to developing such a homological theory of functions for complexity separation, which incidentally turns out to have intricate connection to other areas of computer science
and combinatorics. Our main results can be summarized as follows: 1) Through our homological framework, we recover Marvin Minsky and Seymour Papert’s classical result that polynomial
threshold functions do not compute parity unless degree is maximal [12], and in fact we discover
multiple proofs, each “corresponding to a different hole”; the consideration of lower dimension holes
yields a maximal principle for polynomial threshold functions that is used to extend Minsky and
Papert’s result to arbitrary symmetric functions [3]. 2) We show that an algebraic/homological
quantity arising in our framework, the homological dimension dimh A of a class A, upper bounds the
VC dimension dimVC A of A. Informally, this translates to the following remarkable statement: “The
highest dimension of any holes in SA or its sections upper bounds the number of samples needed
to learn an unknown function from A, up to multiplicative constants.” We furthermore show that
equality holds in many common cases in computation (for classes like polynomial thresholds, F2
linear functionals, etc) or in algebra (when the Stanley-Reisner ring of SA is Cohen-Macaulay). 3)
We formulate the Homological Farkas Lemma, which characterizes by homological conditions when
a linear subspace intersects the interior of the positive cone, and obtain a proof for free from our
homological theory of functions.
While the innards of our theory relies on homological algebra and algebraic topology, we give
an extended introduction in the remainder of this section to the flavor of our ideas in what follows,
assuming only comfort with combinatorics, knowledge of basic topology, and a geometric intuition
for “holes.” A brief note about notation: [n] denotes the set {0, . . . , n − 1}, and [n → m] denotes
the set of functions from domain [n] to codomain [m]. The notation f :⊆ A → B specifies a partial
function from domain A to codomain B. † represents the partial function with empty domain.
1.2
An Embarrassingly Simple Example
Let linfund ≅ (Fd2 )∗ be the class of linear functionals of a d-dimensional vector space V over F2 . If
d ≥ 2, then linfund does not compute the indicator function I1 of the singleton set {1 := 11 · · · 1}.
This is obviously true, but let’s try to reason via a “homological way.” This will provide intuition
for the general technique and set the stage for similar analysis in more complex settings.
Let g : 0 → 0, 1 → 1. Observe that for every partial linear functional h ⊃ g strictly extending
g, I1 intersects h nontrivially. (Because I1 is zero outside of g, and every such h must send at least
one element to zero outside of g). I claim this completes the proof.
Why?
Combinatorially, this is because if I1 were a linear functional, then for any 2-dimensional subspace W of V containing {0, 1}, the partial function h :⊆ V → F2 with dom h = W,

h(u) = g(u) if u ∈ dom g,   h(u) = 1 − I1(u) if u ∈ dom h \ dom g,

is a linear functional, and by construction, does not intersect I1 on W \ {0, 1}.
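If you prefer to see this by brute force, here is a small Python sketch (ours, purely illustrative and not part of the paper's development) that enumerates all F2 linear functionals on F_2^d, confirms that I1 is not one of them for d ≥ 2, and checks the key observation above: every linear functional extending g meets I1 outside dom g.

```python
from itertools import product

def linear_functionals(d):
    """All F_2 linear functionals x -> <a, x> mod 2 on F_2^d, as dicts."""
    points = list(product(range(2), repeat=d))
    for a in product(range(2), repeat=d):
        yield {x: sum(ai * xi for ai, xi in zip(a, x)) % 2 for x in points}

d = 3
points = list(product(range(2), repeat=d))
zero, ones = tuple([0] * d), tuple([1] * d)
I1 = {x: 1 if x == ones else 0 for x in points}   # indicator of the all-ones vector
fs = list(linear_functionals(d))

print(any(f == I1 for f in fs))                   # False: I1 is not linear for d >= 2

# every linear functional extending g (0 -> 0, 1...1 -> 1) agrees with I1 somewhere
# outside dom g = {0, 1...1}
for f in fs:
    if f[zero] == 0 and f[ones] == 1:
        assert any(f[x] == I1[x] for x in points if x not in (zero, ones))
print("every extension of g intersects I1 outside dom g")
```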
Homologically, we are really showing the following
The space associated to linfund , in its section by an affine subspace corresponding to g, “has a hole” that is “filled up” when I1 is added to linfund .
“Wait, what? I’m confused. I don’t see anything in the proof resembling a hole?”
1.3
The Canonical Suboplex
OK. No problem. Let’s see where the holes come from.
[Figure 3: (a) Step 1 and Step 2 for linfun02. Step 1: each simplex is labeled with a function f ∈ linfun02, represented as a row vector. Step 2: each vertex of each simplex is labeled by an input/output pair, here presented in the form of a column vector to a scalar. The collection of input/output pairs in a simplex Ff recovers the graph of f. Each face of Ff has an induced partial function label, given by the collection of input/output pairs on its vertices (not explicitly shown). (b) Step 3 for linfun02. The simplices Ff are glued together according to their labels. For example, F[0 0] and F[0 1] are glued together by their vertices with the common label [1 0]T 7→ 0, and not anywhere else because no other faces share a common label.]
Let’s first define the construction of the simplicial complex SC associated to any function class
C, called the canonical suboplex. In parallel, we give the explicit construction in the case of
C = linfun0d := linfun2 {0 7→ 0}. This is the same class as linfun2 , except we delete 0 from
the domain of every function. It gives rise to essentially the same complex as linfun2 , and we will
recover Slinfun2 explicitly at the end.
Pick a domain, say [n] = {0, . . . , n − 1}. Let C ⊆ [n → 2] be a class of boolean functions on [n].
We construct a simplicial complex SC as follows:
1. To each f ∈ C we associate an (n − 1)-dimensional simplex Ff ∼
= 4n−1 , which will be a facet
of SC .
2. Each of the n vertices of Ff is labeled by an input/output pair i 7→ f (i) for some i ∈ [n], and
each face G of Ff is labeled by a partial function f ⊆ f , whose graph is specified by the labels
of the vertices of G. See Figure 3a for the construction in Step 1 and Step 2 for linfun02 .
3. For each pair f, g ∈ C, Ff is glued together with Fg along the subsimplex G (in both facets)
with partial function label f ∩ g. See Figure 3b for the construction for linfun02 .
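As a concrete (and entirely optional) illustration of Steps 1–3, the sketch below builds the facet labels for linfun02 and computes the face along which two facets are glued; all function and variable names here are our own and not from the paper.

```python
from itertools import product

def facets(functions, n):
    """Steps 1-2: one facet per function; its vertex labels are the pairs (i, f(i))."""
    return {name: frozenset((i, f[i]) for i in range(n))
            for name, f in functions.items()}

def shared_face(F, name1, name2):
    """Step 3: two facets are glued along the face whose label is the partial
    function they have in common (the input/output pairs shared by both)."""
    return F[name1] & F[name2]

# the class linfun'_2 on the three nonzero inputs of F_2^2
inputs = [(0, 1), (1, 0), (1, 1)]                 # input i stands for inputs[i]
linfun2_prime = {}
for a, b in product(range(2), repeat=2):          # the row vector [a b]
    linfun2_prime[f"[{a} {b}]"] = {i: (a * x + b * y) % 2
                                   for i, (x, y) in enumerate(inputs)}

F = facets(linfun2_prime, n=3)
# F_[0 0] and F_[0 1] meet only in the vertex labelled "input 1 (= [1 0]^T) maps to 0"
print(shared_face(F, "[0 0]", "[0 1]"))           # frozenset({(1, 0)})
```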
This is the simplicial complex associated to the class C, called the canonical suboplex SC of
C. Notice that in the case of linfun0d , the structure of “holes” is not trivial at all: Slinfun0d has 3
holes in dimension 1 but no holes in any other dimension. An easy way to visualize this is to pick
one of the triangular holes; If you put your hands around the edge, pull the hole wide, and flatten
the entire complex onto a flat plane, then you get Figure 4a.
It is easy to construct the canonical suboplex of linfund from that of linfun0d : Slinfund is just
a cone over Slinfun0d , where the cone vertex has the label [0 0]T 7→ 0 (Figure 4b). This is because
every function in linfund shares this input/output pair. Note that a cone over any base has no
hole in any dimension, because any hole can be contracted to a point in the vertex of the cone.
This is a fact we will use very soon.
Let’s give another important example, the class of all functions. If C = [n → 2], then one can
see that SC is isomorphic to the 1-norm unit sphere (also known as orthoplex) S1^{n−1} := {‖x‖_1 = 1 : x ∈ R^n} (Figure 5a).

[Figure 4: (a) The shape obtained by stretching Slinfun0d along one of its triangular holes and then flattening the entire complex onto a flat plane. This deformation preserves all homological information, and from this picture we see easily that Slinfun0d has 3 holes, each of dimension 1. (b) The canonical suboplex of linfund is just a cone over that of linfun0d. Here we show the case d = 2.]

[Figure 5: (a) The canonical suboplex of [3 → 2]. (b) SC(a7→b) is an affine section of SC. (c) We may recover Slinfun0d as a linear cut through the “torso” of Slinfund.]

For general C, SC can be realized as a subcomplex of S1^{n−1}. Indeed, for
C = linfun02 ⊆ [3 → 2], it is easily seen that SC is a subcomplex of the boundary of an octahedron,
which is isomorphic to S12 .
Let C ⊆ [n → 2], and let f :⊆ [n] → [2] be a partial function. Define the filtered class C f to
be
{g \ f : g ∈ C, g ⊇ f} ⊆ [[n] \ dom f → [2]]
Unwinding the definition: C f is obtained by taking all functions of C that extend f and
ignoring the inputs falling in the domain of f.
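In code (a throwaway sketch of ours, with functions represented as dicts), the filtering operation is simply:

```python
from itertools import product

def filter_class(C, f):
    """The filtered class C|f: keep the functions of C that extend the partial
    function f, then forget the inputs lying in dom(f)."""
    out = []
    for g in C:
        if all(g.get(i) == v for i, v in f.items()):      # g extends f
            out.append({i: g[i] for i in g if i not in f})
    return out

# example: all of [3 -> 2] filtered by the partial function {0 -> 1}
C = [dict(enumerate(vals)) for vals in product(range(2), repeat=3)]
print(filter_class(C, {0: 1}))   # the four functions on the remaining domain {1, 2}
```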
The canonical suboplex SCf can be shown to be isomorphic to an affine section of SC , when the
latter is embedded as part of the L1 unit sphere S1n−1 . Figure 5b shows an example when f has
a singleton domain. Indeed, recall linfun0d is defined as linfund {0 7→ 0}, and we may recover
Slinfun0d as a linear cut through the “torso” of Slinfund (Figure 5c).
“OK. I see the holes. But how does this have anything to do with our proof of
I1 6∈ linfund ?”
Figure 6: A continuous deformation of Slinfun02 into a complete graph with 4 vertices (where we ignore the
sharp bends of the “outer” edges).
Hold on tight! We are almost there.
First let me introduce a “duality principle” in algebraic topology called the Nerve Lemma.
Readers familiar with it can skip ahead to the next section.
1.4
Nerve Lemma
Note that the canonical suboplex of linfun02 can be continuously deformed as shown in Figure 6 into a 1-dimensional complex (a graph), so that all of the holes are still preserved. Such a deformation produces a complex
• whose vertices correspond exactly to the facets of the original complex, and
• whose edges correspond exactly to intersections of pairs of facets,
all the while preserving the holes of the original complex, and producing no new ones.
Such an intuition of deformation is vastly generalized by the Nerve Lemma:
Lemma 1.1 (Nerve Lemma (Informal)). Let U = {Ui}i be a “nice” cover (to be explained below) of a topological space X. The nerve NU of U is defined as the simplicial complex with vertices {Vi : Ui ∈ U}, and with a simplex {Vi}i∈S for each index set S such that ∩{Ui : i ∈ S} is nonempty.
Then, for each dimension d, the set of d-dimensional holes in X is bijective with the set of d-dimensional holes in NU.
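Computing a nerve is mechanical; the following small sketch (ours, not from the paper) builds it directly from the definition, with the cover given as named point sets.

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a cover {vertex_name: set_of_points}: include the simplex S
    whenever the sets indexed by S have a common point."""
    names = list(cover)
    simplices = []
    for k in range(1, len(names) + 1):
        for S in combinations(names, k):
            if set.intersection(*(cover[v] for v in S)):
                simplices.append(frozenset(S))
    return simplices

# cover of the 4-cycle a-b-c-d-a by its four closed edges
cover = {"ab": {"a", "b"}, "bc": {"b", "c"}, "cd": {"c", "d"}, "da": {"d", "a"}}
print(sorted(map(sorted, nerve(cover))))
# The output lists the four vertices and the four edges {ab,bc}, {bc,cd}, {cd,da},
# {da,ab}: the nerve is again a 4-cycle, with the same single 1-dimensional hole
# as the circle it covers.
```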
What kind of covers are nice? Open covers in general spaces, or subcomplex covers in simplicial (or CW) complexes, are considered “nice”, if in
addition they satisfy the following requirements (acyclicity).
• Each set of the cover must have no holes.
• Each nontrivial intersection of a collection of sets must have no holes.
The example we saw in Figure 7 is an application of the Nerve Lemma for the
cover by facets. Another example is the star cover: For vertex V in a complex,
the open star St V of V is defined as the union of all open simplices whose
closure meets V (see Figure 7 for an example). If the cover U consists of the
open stars of every vertex in a simplicial complex X, then NU is isomorphic
to X as complexes.
[Figure 7: The open star St P of vertex P.]
OK! We are finally ready to make the connection to complexity!
[Figure 8: (a) The canonical suboplex of linfun2 {[0 0]T 7→ 0, [1 1]T 7→ 1} is isomorphic to the affine section as shown, and it has two disconnected components, and thus “a single zeroth dimensional hole.” (b) When we add I1 to linfund to obtain D := linfund ∪ {I1}, SDg now does not have any hole!]
1.5
The Connection
It turns out that Slinfun0d = Slinfund (07→0) (a complex of dimension 2d − 2) has holes in dimension
d − 1. The proof is omitted here but will be given in Section 2.3.6. This can be clearly seen in our
example when d = 2 (Figure 4a), which has 3 holes in dimension d − 1 = 1. Furthermore, for every
partial linear functional h (a linear functional defined on a linear subspace), Slinfund h also has
holes, in dimension d − 1 − dim(dom h). Figure 8a shows an example for d = 2 and h = [1 1]T 7→ 1.
But when we add I1 to linfund to obtain D := linfund ∪ {I1 }, SDg now does not have any
hole! Figure 8b clearly demonstrates the case d = 2. For general d, note that Slinfun0d has a “nice”
cover by the open stars
C := {St V : V has label u 7→ r for some u ∈ Fd2 \ {0} and r ∈ F2 }.
When we added I1 to form D, the collection C 0 := C ∪ 4I1 obtained by adding the simplex of I1
to C is a “nice” cover of SD . Thus the nerve NC 0 has the same holes as SD , by the Nerve Lemma.
But observe that NC 0 is a cone! . . . which is what our “combinatorial proof” of I1 6∈ linfund really
showed.
More precisely, a collection of stars S := {St V : V ∈ V} has nontrivial intersection iff there is a partial linear functional extending the labels of each V ∈ V. We showed I1 intersects every partial linear functional strictly extending g : 0 7→ 0, 1 7→ 1. Therefore, a collection of stars S in C intersects nontrivially iff ∩(S ∪ {4I1}) 6= ∅. In other words, in the nerve of C′, 4I1 forms the vertex of a cone over all other St V ∈ C. In our example of linfun2, this is demonstrated in Figure 9.
Thus, to summarize:
• NC′, being a cone, has no holes.
• By the Nerve Lemma, SDg has no holes either.
• Since Slinfund g has holes, we know D 6= linfund, i.e. I1 6∈ linfund, as desired.
[Figure 9: The nerve NC′ overlaid on D = linfun2 ∪ {I1}. Note that NC′ is a cone over its base of 2 points.]
While this introduction took some length to explain the logic of our approach, much of this is automated in the theory we develop in this paper, which leverages existing works on Stanley-Reisner theory and cellular resolutions.
***
In our proof, we roughly did the following
• (Local) Examined the intersection of I1 with fragments of functions in linfund .
• (Global) Pieced together the fragments with nontrivial intersections with I1 to draw conclusions about the “holes” I1 creates or destroys.
This is the local-global philosophy of this homological approach to complexity, inherited from
algebraic topology. This is markedly different from conventional wisdom in computer science, which
seeks to show that a function, such as f = 3sat, has some property that no function in a class,
say C = P, has. In that method, there is no global step that argues that some global property of C
changes after adding f into it.
Using our homological technique, we show, in Section 3, a proof of Minsky and Papert’s classical result that the class polythrkd of polynomial thresholds of degree k in d variables does not contain the parity function parityd unless k = d (Theorem 3.40). Homologically, there are many reasons. By considering high dimensions, we deduce that Spolythrkd has a hole in dimension Σ_{i=0}^{k} (d choose i) that is filled in by parityd. By considering low dimensions, we obtain a maximal principle for polynomial threshold functions from which we obtain not only Minsky and Papert’s result but also extensions to arbitrary symmetric functions. This maximal principle, Theorem 3.51, says
Theorem 1.2 (Maximal Principle for Polynomial Threshold). Let C := polythrkd , and let f :
{−1, 1}d → {−1, 1} be a function. We want to know whether f ∈ C.
Suppose there exists a function g ∈ C (a “local maximum” for approximating f ) such that
• for each h ∈ C that differs from g on exactly one input u, we have g(u) = f (u) = ¬h(u).
If g 6= f , then f 6∈ C. (In other words, if f ∈ C, then the “local maximum” g must be a “global
maximum”).
Notice that the maximal principle very much follows the local-global philosophy. The “local
maximum” condition is saying that when one looks at the intersection with f of g and its “neighbors” (local), these intersections together form a hole that f creates when added to C (global). The
homological intuition, in more precise terms, is that a local maximum g 6= f ∈ C implies that the
filtered class C (f ∩ g) consists of a single point with label g, so that when f is added to C, a
zero-dimensional hole is created.
We also obtain an interesting characterization of when a function can be weakly represented by a degree bounded polynomial threshold function. A real function ϕ : U → R on a finite set U is said to weakly represent a function f : U → {−1, 1} if ϕ(u) > 0 ⇐⇒ f(u) = 1 and ϕ(u) < 0 ⇐⇒ f(u) = −1, but we don’t care what happens when ϕ(u) = 0. Our homological theory of functions essentially says that f ∈ polythrkd (“f is strongly representable by a polynomial of degree k”) iff, for each partial function g, Spolythrkd ∪ {f} filtered by g has the same number of holes as Spolythrkd filtered by g in each dimension. But, intriguingly, f is weakly representable by a polynomial of degree k iff Spolythrkd ∪ {f} has the same number of holes as Spolythrkd in each dimension (Corollary 3.46) — in other words, we only care about filtering by g = † but no other partial functions.
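For small instances one can test degree-bounded sign representation directly with a linear program; the sketch below (ours, relying on scipy) checks the strict condition f(u)·ϕ(u) > 0, which after rescaling is the feasibility of f(u)·ϕ(u) ≥ 1, and reproduces the Minsky–Papert phenomenon for parity.

```python
import numpy as np
from itertools import combinations, product
from scipy.optimize import linprog

def monomials(d, k):
    """Index sets of the multilinear monomials of degree <= k on {-1,1}^d."""
    return [S for r in range(k + 1) for S in combinations(range(d), r)]

def sign_representable(f, d, k):
    """Is there a degree-<=k polynomial phi with f(u)*phi(u) > 0 for all u?
    By rescaling, this is equivalent to feasibility of f(u)*phi(u) >= 1,
    which we check with a linear program.  (Illustrative sketch only.)"""
    mons = monomials(d, k)
    cube = list(product([-1, 1], repeat=d))
    # constraint rows:  -f(u) * prod_{i in S} u_i  <=  -1
    A = np.array([[-f(u) * np.prod([u[i] for i in S]) for S in mons] for u in cube])
    b = -np.ones(len(cube))
    res = linprog(np.zeros(len(mons)), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * len(mons), method="highs")
    return res.success

parity = lambda u: int(np.prod(u))
print(sign_representable(parity, d=3, k=2))  # False: parity needs full degree
print(sign_representable(parity, d=3, k=3))  # True
```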
1.6
Dimension theory
Let C ⊆ [n → 2]. The VC Dimension dimVC C of C is the size of the largest set U ⊆ [n] such that C|U = {0, 1}U.
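For intuition, VC dimension is easy to compute by brute force on small classes; here is a quick sketch (ours, illustrative only) that checks shattering directly from the definition.

```python
from itertools import combinations

def vc_dimension(C, n):
    """Brute-force VC dimension of a class C of functions [n] -> {0,1},
    each function given as a tuple of its n values."""
    dim = 0
    for k in range(1, n + 1):
        for U in combinations(range(n), k):
            patterns = {tuple(f[i] for i in U) for f in C}
            if len(patterns) == 2 ** k:      # U is shattered
                dim = k
                break
        else:
            return dim
    return dim

# example: threshold functions on a line of 5 points have VC dimension 1
C = [tuple(int(i >= t) for i in range(5)) for t in range(6)]
print(vc_dimension(C, 5))  # 1
```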
Consider the following setting of a learning problem: You have an oracle, called the sample
oracle, such that every time you call upon it, it will emit a sample (u, h(u)) from an unknown
8
1
INTRODUCTION
distribution P over u ∈ [n], for a fixed h ∈ C. This sample is independent of all previous and all
future samples. Your task is to learn the identity of h with high probability, and with small error
(weighted by P ).
A central result of statistical learning theory says roughly that
Theorem 1.3 ([10]). In this learning setting, one only needs O(dimVC C) samples to learn h ∈ C
with high probability and small error.
It is perhaps surprising, then, that the following falls out of our homological approach.
Theorem 1.4 (Colloquial version of Theorem 3.11). Let C ⊆ [n → 2]. Then dimVC C is upper
bounded by one plus the highest dimension, over any partial function g, of any hole in SCg . This
quantity is known as the homological dimension dimh C of C.
In fact, equality holds for common classes in the theory of computation like linfund and
polythrkd , and also when certain algebraic conditions hold. More precisely — for readers with
algebraic background —
Theorem 1.5 (Colloquial version of Corollary 3.34). dimVC C = dimh C if the Stanley-Reisner ring
of SC is Cohen-Macaulay.
These results suggest that our homological theory captures something essential about computation, that it’s not a coincidence that we can use “holes” to prove complexity separation.
1.7
Homological Farkas
Farkas’ Lemma is a simple result from linear algebra, but it is an integral tool for proving weak
and strong dualities in linear programming, matroid theory, and game theory, among many other
things.
Lemma 1.6 (Farkas’ Lemma). Let L ⊆ Rn be a linear subspace not contained in any coordinate
hyperplanes, and let P = {x ∈ Rn : x > 0} be the positive cone. Then either
• L intersects P , or
• L is contained in the kernel of a nonzero linear functional whose coefficients are all nonnegative.
but not both.
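Numerically, the two alternatives can be separated by a pair of linear programs. The sketch below is ours, not from the text; it assumes scipy is available and represents L by a matrix B whose rows span it. It either finds a strictly positive point of L or a nonzero nonnegative functional vanishing on L.

```python
# Hedged illustration of the Farkas alternative above (not part of the paper).
import numpy as np
from scipy.optimize import linprog

def farkas_alternative(B):
    """B: (k x n) matrix whose rows span L."""
    k, n = B.shape
    # (1) Is there y with y @ B >= 1 entrywise?  Then x = y @ B lies in L and x > 0.
    res1 = linprog(np.zeros(k), A_ub=-B.T, b_ub=-np.ones(n),
                   bounds=[(None, None)] * k)
    if res1.success:
        return "positive point", res1.x @ B
    # (2) Otherwise find c >= 0 with sum(c) = 1 and B @ c = 0: a nonnegative
    #     functional whose kernel contains L.
    res2 = linprog(np.zeros(n),
                   A_eq=np.vstack([B, np.ones((1, n))]),
                   b_eq=np.concatenate([np.zeros(k), [1.0]]),
                   bounds=[(0, None)] * n)
    return "nonnegative functional", res2.x

print(farkas_alternative(np.array([[1.0, 1.0, 1.0]])))    # L = span(1,1,1): positive point
print(farkas_alternative(np.array([[1.0, -1.0, 0.0]])))   # no positive point; functional found
```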
Farkas’ Lemma is a characterization of when a linear subspace intersects the positive cone in
terms of linear conditions. An alternate view important in computer science is that Farkas’ Lemma
provides a linear certificate for when this intersection does not occur. Analogously, our Homological Farkas’ Lemma will characterize such an intersection in terms of homological conditions, and
simultaneously provide a homological certificate for when this intersection does not occur.
Before stating the Homological Farkas’ Lemma, we first introduce some terminology.
For g : [n] → {1, −1}, let P_g ⊆ R^n denote the open cone whose points have signs given by g. Consider the intersection ∆_g of P_g with the unit sphere S^{n−1}, and its interior ∆̊_g. ∆̊_g is homeomorphic to an open simplex. For g ≠ ¬1, define Λ(g) to be the union of the facets F of ∆_g such that ∆̊_g and ∆̊_1 sit on opposite sides of the affine hull of F. Intuitively, Λ(g) is the part of ∂∆_g that can be seen from an observer in ∆̊_1 (illustrated by Figure 10a).
The following homological version of Farkas’ Lemma naturally follows from our homological
technique of analyzing the complexity of threshold functions.
Figure 10: (a) An example of a Λ(g). Intuitively, Λ(g) is the part of ∂∆_g that can be seen from an observer in ∆_1. (b) An illustration of the Homological Farkas Lemma. The horizontal dash-dotted plane intersects the interior of ∆_1, but its intersection with any of the Λ(f), f ≠ 1, ¬1, has no holes. The vertical dash-dotted plane misses the interior of ∆_1, and we see that its intersection with Λ(g) as shown has two disconnected components.
Theorem 1.7 (Homological Farkas' Lemma, Theorem 3.43). Let L ⊆ R^n be a linear subspace. Then either
• L intersects the positive cone P = P_1, or
• L ∩ Λ(g) for some g ≠ 1, ¬1 is nonempty and has holes,
but not both.
Figure 10b illustrates an example application of this result.
One direction of the Homological Farkas' Lemma has the following intuition. As mentioned before, Λ(g) is essentially the part of ∂∆_g visible to an observer Tom in ∆̊_1. Since the simplex is convex, the image Tom sees is also convex. Suppose Tom sits right on L (or imagine L to be a subspace of Tom's visual field). If L indeed intersects ∆̊_1, then for L ∩ Λ(g) he sees some affine space intersecting a convex body, and hence a convex body in itself. Since Tom sees everything (i.e. his vision is homeomorphic with the actual points), L ∩ Λ(g) has no holes, just as Tom observes. In other words, if Tom is inside ∆̊_1, then he cannot tell Λ(g) is nonconvex by his vision alone, for any g. Conversely, the Homological Farkas' Lemma says that if Tom is outside of ∆̊_1 and if he looks away from ∆̊_1, he will always see a nonconvex shape in some Λ(g).
As a corollary to Theorem 1.7, we can also characterize when a linear subspace intersects a region
in a linear hyperplane arrangement (Corollary 3.55), and when an affine subspace intersects a region
in an affine hyperplane arrangement (Corollary 3.56), both in terms of homological conditions. A
particular simple consequence, when the affine subspace either intersects the interior or does not
intersect the closure at all, is illustrated in Figure 11.

Figure 11: Example application of Corollary 3.57. Let the hyperplanes (thin lines) be oriented such that the square S at the center is on the positive side of each hyperplane. The bold segments indicate the Λ of each region. Line 1 intersects S, and we can check that its intersection with any bold component has no holes. Line 2 does not intersect the closure of S, and we see that its intersection with Λ(f) is two points, so has a "zeroth dimension" hole. Line 3 does not intersect the closure of S either, and its intersection with Λ(g) consists of a point in the finite plane and another point on the circle at infinity.

The rest of this paper is organized as follows. Section 2 builds the theory underlying our complexity separation technique. Section 2.1 explains some of the conventions we adopt in this work and, more importantly, reviews basic facts from combinatorial commutative algebra and collects important lemmas for later use. Section 2.2 defines the central objects of study in our theory, the Stanley-Reisner ideal and the canonical ideal of each function class. The section ends by giving a characterization of when an ideal is the Stanley-Reisner ideal of a class. Section 2.3 discusses how to extract homological data of a class from its ideals via cellular resolutions. We construct cellular
resolutions for the canonical ideals of many classes prevalent in learning theory, such as conjunctions,
linear thresholds, and linear functionals over finite fields. Section 2.4 briefly generalizes definitions
and results to partial function classes, which are then used in Section 2.5. This section explains,
when combining old classes to form new classes, how to also combine the cellular resolutions of the
old classes into cellular resolutions of the new classes.
Section 3 reaps the seeds we have sowed so far. Section 3.1 looks at notions of dimension, the
Stanley-Reisner dimension and the homological dimension, that naturally appear in our theory and
relates them to VC dimension, a very important quantity in learning theory. We observe that in
most examples discussed in this work, the homological dimension of a class is almost the same as
its VC dimension, and prove that the former is always at least the latter. Section 3.2 characterizes
when a class has Stanley-Reisner ideal and canonical ideal that induce Cohen-Macaulay rings, a
very well studied type of rings in commutative algebra. We define Cohen-Macaulay classes and show
that their homological dimensions are always equal to their VC dimensions. Section 3.3 discusses
separation of computational classes in detail, and gives simple examples of this strategy in action.
Here a consequence of our framework is the Homological Farkas Lemma. Section 3.4 formulates
and proves the maximal principle for threshold functions, and derives an extension of Minsky and
Papert’s result for general symmetric functions. Section 3.5 further extends Homological Farkas
Lemma to general linear or affine hyperplane arrangements. Section 3.6 examines a probabilistic
interpretation of the Hilbert function of the canonical ideal, and shows its relation to hardness of
approximation.
Finally, Section 5 considers major questions of our theory yet to be answered and future directions of research.
2 Theory

2.1 Background and Notation
In this work, we fix k to be an arbitrary field. We write N = {0, 1, . . . , } for the natural numbers.
Let n, m ∈ N and A, B be sets. The notation f :⊆ A → B specifies a partial function f whose
domain dom f is a subset of A, and whose codomain is B. The words “partial function” will often
be abbreviated “PF.” We will use Sans Serif font for partial (possibly total) functions, ex. f, g, h,
11
2.1
Background and Notation
but will use normal font if we know a priori a function is total, ex. f, g, h. We denote the empty
function, the function with empty domain, by †. We write [n] for the set {0, 1, . . . , n − 1}. We write
[A → B] for the set of total functions from A to B and [⊆ A → B] for the set of partial functions
from A to B. By a slight abuse of notation, [n → m] (resp. [⊆ n → m]) is taken to be a shorthand for [[n] → [m]] (resp. [⊆ [n] → [m]]). The set [2^d] is identified with [2]^d via binary expansion (ex: 5 ∈ [2^4] is identified with (0, 1, 0, 1) ∈ [2]^4). A subset of [A → B] (resp. [⊆ A → B]) is referred to as a class (resp. partial class), and we use C, D (resp. C, D), and so on to denote it. Often, a bit vector v ∈ [2^d] will be identified with the subset of [d] of which it is the indicator function.
For A ⊆ B, relative set complement is written B \ A; when B is clearly the universal set from
context, we also write Ac for the complement of A inside B. If {a, b} is any two-element set, we
write ¬a = b and ¬b = a.
Denote the n-dimensional simplex {v ∈ R^n : Σ_i v_i = 1} by ∆^n. Let X, Y be topological spaces
(resp. simplicial complexes, polyhedral complexes). The join of X and Y as a topological space
(resp. simplicial complex, polyhedral complex) is denoted by X ? Y . We abbreviate the quotient
X/∂X to X/∂.
We will use some terminologies and ideas from matroid theory in Section 2.3.5 and Section 3.3.
Readers needing more background can consult the excellently written chapter 6 of [22].
2.1.1 Combinatorial Commutative Algebra
Here we review the basic concepts of combinatorial commutative algebra. We follow [11] closely.
Readers familiar with this background are recommended to skip this section and come back as
necessary; the only difference in presentation from [11] is that we say a labeled complex is a cellular
resolution when in more conventional language it supports a cellular resolution.
Let k be a field and S = k[x] be the polynomial ring over k in n indeterminates x = x0 , . . . , xn−1 .
Definition 2.1. A monomial in k[x] is a product x^a = x_0^{a_0} · · · x_{n−1}^{a_{n−1}} for a vector a = (a_0, . . . , a_{n−1}) ∈ N^n of nonnegative integers. Its support supp x^a is the set of i where a_i ≠ 0. We say x^a is squarefree if every coordinate of a is 0 or 1. We often use symbols σ, τ, etc. for squarefree exponents, and identify them with the corresponding subset of [n].
An ideal I ⊆ k[x] is called a monomial ideal if it is generated by monomials, and is called a squarefree monomial ideal if it is generated by squarefree monomials.
Let ∆ be a simplicial complex.
Definition 2.2. The Stanley-Reisner ideal of ∆ is defined as the squarefree monomial ideal
I_∆ = ⟨x^τ : τ ∉ ∆⟩
generated by the monomials corresponding to the nonfaces τ of ∆. The Stanley-Reisner ring of ∆ is the quotient ring S/I_∆.
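Concretely, the generators of I_∆ are the minimal nonfaces of ∆, which are easy to enumerate for small complexes. The sketch below is our own illustration (the encoding of a complex as a downward-closed set of frozensets is an assumption of the snippet, not the paper's notation).

```python
# Illustrative sketch: Stanley-Reisner ideal generators = minimal nonfaces of Delta.
from itertools import combinations

def close_downward(facets):
    faces = set()
    for F in facets:
        for r in range(len(F) + 1):
            faces.update(frozenset(s) for s in combinations(sorted(F), r))
    return faces

def minimal_nonfaces(n, faces):
    """A nonface tau is minimal iff every subset obtained by dropping one vertex is a face."""
    mins = []
    for r in range(1, n + 1):
        for tau in combinations(range(n), r):
            tau_f = frozenset(tau)
            if tau_f not in faces and all(tau_f - {v} in faces for v in tau_f):
                mins.append(sorted(tau_f))
    return mins

# Example: the hollow triangle (boundary of a 2-simplex) on vertices {0,1,2}.
faces = close_downward([{0, 1}, {1, 2}, {0, 2}])
print(minimal_nonfaces(3, faces))   # [[0, 1, 2]]  ->  I_Delta = <x0*x1*x2>
```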
Definition 2.3. The squarefree Alexander dual of a squarefree monomial ideal I = ⟨x^{σ_1}, . . . , x^{σ_r}⟩ is defined as
I^⋆ = m^{σ_1} ∩ · · · ∩ m^{σ_r},
where m^σ denotes the ideal ⟨x_i : i ∈ σ⟩. If ∆ is a simplicial complex and I = I_∆ its Stanley-Reisner ideal, then the simplicial complex ∆^⋆ Alexander dual to ∆ is defined by I_{∆^⋆} = (I_∆)^⋆.
Proposition 2.4 (Prop 1.37 of [11]). The Alexander dual of a Stanley-Reisner ideal I∆ can in fact
be described as the ideal hxτ : τ c ∈ ∆i, with minimal generators xτ where τ c is a facet of ∆.
Definition 2.5. The link of σ inside the simplicial complex ∆ is
linkσ ∆ = {τ ∈ ∆ : τ ∪ σ ∈ ∆ & τ ∩ σ = ∅},
the set of faces that are disjoint from σ but whose unions with σ lie in ∆.
Definition 2.6. The restriction of ∆ to σ is defined as
∆|_σ = {τ ∈ ∆ : τ ⊆ σ}.
Definition 2.7. A sequence
F• : 0 ← F_0 ←(φ_1)− F_1 ← · · · ← F_{l−1} ←(φ_l)− F_l ← 0
of maps of free S-modules is called a complex if φ_i ◦ φ_{i+1} = 0 for all i. The complex is exact in homological degree i if ker φ_i = im φ_{i+1}. When the free modules F_i are N^n-graded, we require each homomorphism φ_i to be degree-preserving.
Let M be a finitely generated N^n-graded module. We say F• is a free resolution of M over S if F• is exact everywhere except in homological degree 0, where M = F_0/im φ_1. The image in F_i of the homomorphism φ_{i+1} is the ith syzygy module of M. The length of F• is the greatest homological degree of a nonzero module in the resolution, which is l here if F_l ≠ 0.
The following lemma says that if every minimal generator of an ideal J is divisible by x0 , then
its resolutions are in bijection with the resolutions of J/x0 , the ideal obtained by forgetting variable
x0 .
Lemma 2.8. Let I ⊆ S = k[x0 , . . . , xn−1 ] be a monomial ideal generated by monomials not divisible
by x0 . A complex
F• : 0 ← F0 ← F1 ← · · · ← Fl−1 ← Fl ← 0
resolves x0 I iff for S/x0 = k[x1 , . . . , xn−1 ],
F• ⊗S S/x0 : 0 ← F0 /x0 ← F1 /x0 ← · · · ← Fl−1 /x0 ← Fl /x0 ← 0
resolves I ⊗S S/x0 .
Definition 2.9. Let M be a finitely generated N^n-graded module and
F• : 0 ← F_0 ← F_1 ← · · · ← F_{l−1} ← F_l ← 0
be a minimal graded free resolution of M. If F_i = ⊕_{a∈N^n} S(−a)^{β_{i,a}}, then the ith Betti number of M in degree a is the invariant β_{i,a} = β_{i,a}(M).
Proposition 2.10 (Lemma 1.32 of [11]). β_{i,a}(M) = dim_k Tor^S_i(k, M)_a.
Proposition 2.11 (Hochster's formula, dual version). All nonzero Betti numbers of I_∆ and S/I_∆ lie in squarefree degrees σ, where
β_{i,σ}(I_∆) = β_{i+1,σ}(S/I_∆) = dim_k H̃^{i−1}(link_{σ^c} ∆^⋆; k).
Proposition 2.12 (Hochster's formula). All nonzero Betti numbers of I_∆ and S/I_∆ lie in squarefree degrees σ, where
β_{i−1,σ}(I_∆) = β_{i,σ}(S/I_∆) = dim_k H̃^{|σ|−i−1}(∆|_σ; k).
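Hochster's formula makes these Betti numbers computable by elementary linear algebra. The sketch below is an illustration of ours, not part of the text: it works over GF(2) for simplicity (the propositions hold over an arbitrary field k, and dimensions can in general depend on the field), and it uses reduced homology in place of cohomology, as noted below.

```python
# Hedged illustration of Hochster's formula over GF(2): compute beta_{i,sigma}(S/I_Delta).
from itertools import combinations

def close_downward(facets):
    faces = set()
    for F in facets:
        for r in range(len(F) + 1):
            faces.update(frozenset(s) for s in combinations(sorted(F), r))
    return faces

def rank_gf2(rows):
    basis = {}                                   # pivot bit -> row (rows are int bitmasks)
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h in basis:
                r ^= basis[h]
            else:
                basis[h] = r
                break
    return len(basis)

def reduced_betti_gf2(faces):
    by_dim = {}
    for F in faces:
        by_dim.setdefault(len(F) - 1, []).append(F)   # the empty face has dimension -1
    index = {d: {F: j for j, F in enumerate(Fs)} for d, Fs in by_dim.items()}
    def boundary_rank(d):                        # rank of the boundary map C_d -> C_{d-1}
        if d not in by_dim or d - 1 not in by_dim:
            return 0
        rows = []
        for F in by_dim[d]:
            mask = 0
            for v in F:
                mask |= 1 << index[d - 1][F - {v}]
            rows.append(mask)
        return rank_gf2(rows)
    top = max(by_dim)
    return {d: len(by_dim.get(d, [])) - boundary_rank(d) - boundary_rank(d + 1)
            for d in range(-1, top + 1)}

def hochster_betti(n, faces):
    nonzero = {}
    for r in range(n + 1):
        for sigma in combinations(range(n), r):
            sub = {F for F in faces if F <= frozenset(sigma)}
            for d, h in reduced_betti_gf2(sub).items():
                if h:
                    nonzero[(len(sigma) - d - 1, sigma)] = h   # beta_{i,sigma}(S/I_Delta)
    return nonzero

# Example: hollow triangle, I_Delta = <x0*x1*x2>.
faces = close_downward([{0, 1}, {1, 2}, {0, 2}])
print(hochster_betti(3, faces))   # {(0, ()): 1, (1, (0, 1, 2)): 1}
```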
Note that since we are working over a field k, the reduced cohomology can be replaced by
reduced homology, since these two have the same dimension.
Instead of algebraically constructing a resolution of an ideal I, one can sometimes find a labeled
simplicial complex whose simplicial chain is a free resolution of I. Here we consider a more general
class of complexes, polyhedral cell complexes, which can have arbitrary polytopes as faces instead
of just simplices.
Definition 2.13. A polyhedral cell complex X is a finite collection of convex polytopes, called
faces or cells of X, satisfying two properties:
• If P is a polytope in X and F is a face of P, then F is in X.
• If P and Q are in X, then P ∩ Q is a face of both P and Q.
In particular, if X contains any point, then it contains the empty cell ∅, which is the unique cell
of dimension −1.
Each closed polytope P in this collection is called a closed cell of X; the interior of such a
polytope, written P̊, is called an open cell of X. By definition, the interior of any point polytope
is the empty cell.
The complex with only the empty cell is called the irrelevant complex. The complex with
no cell at all is called the void complex.
The void complex is defined to have dimension −∞; any other complex X is defined to have
dimension dim(X) equal to the maximum dimension of all of its faces.
Examples include any polytope or the boundary of any polytope.
Each polyhedral cell complex X has a natural reduced chain complex, which specializes to the
usual reduced chain complex for simplicial complexes X.
Definition 2.14. Suppose X is a labeled cell complex, by which we mean that its r vertices
have labels that are vectors a1 , . . . , ar in Nr . The label aF on an arbitrary face F of X is defined
as the coordinatewise maximum maxi∈F ai over the vertices in F . The monomial label of the
face F is xaF . In particular, the empty face ∅ is labeled with the exponent label 0 (equivalently,
the monomial label 1 ∈ S). When necessary, we will refer explicitly to the labeling function λ,
defined by λ(F ) = aF , and express each labeled cell complex as a pair (X, λ).
Definition 2.15. Let X be a labeled cell complex. The cellular monomial matrix supported
on X uses the reduced chain complex of X for scalar entries, with the empty cell in homological
degree 0. Row and column labels are those on the corresponding faces of X. The cellular free
chain complex FX supported on X is the chain complex of Nn -graded free S-modules (with basis)
represented by the cellular monomial matrix supported on X. The free complex FX is a cellular
resolution if it has homology only in degree 0. We sometimes abuse notation and say X itself is
a cellular resolution if FX is.
Proposition 2.16. Let (X, λ) be a labeled complex. If FX is a cellular resolution, then it resolves S/I where I = ⟨x^{a_V} : V ∈ X is a vertex⟩. FX is in addition minimal iff for each cell F of X, λ(F) ≠ λ(G) for each proper face G of F.
Proposition 2.17. If X is a minimal cellular resolution of S/I, then βi,a (I) is the number of
i-dimensional cells in X with label a.
Given two vectors a, b ∈ N^n, we write a ⪯ b and say a precedes b if b − a ∈ N^n. Similarly, we write a ≺ b if a ⪯ b but a ≠ b. Define X_{⪯a} = {F ∈ X : a_F ⪯ a} and X_{≺a} = {F ∈ X : a_F ≺ a}.
Let us say a cell complex is acyclic if it is either irrelevant or has zero reduced homology. In
the irrelevant case, its only nontrivial reduced homology lies in degree −1.
Lemma 2.18 (Prop 4.5 of [11]). X is a cellular resolution iff X_{⪯b} is acyclic over k for all b ∈ N^n. For X with squarefree monomial labels, this is true iff X_{⪯b} is acyclic over k for all b ∈ [2]^n.
When FX is acyclic, it is a free resolution of the monomial quotient S/I, where I = ⟨x^{a_v} : v ∈ X is a vertex⟩ is generated by the monomial labels on vertices.
It turns out that even if we only have a nonminimal cellular resolution, it can still be used to
compute the Betti numbers.
Proposition 2.19 (Thm 4.7 of [11]). If X is a cellular resolution of the monomial quotient S/I, then the Betti numbers of I can be calculated as
β_{i,b}(I) = dim_k H̃_{i−1}(X_{≺b}; k)
as long as i ≥ 1.
Lemma 2.18 and Proposition 2.19 will be used repeatedly in the sequel.
We will also have use for the dual concept of cellular resolutions, cocellular resolutions, based
on the cochain complex of a polyhedral cell complex.
Definition 2.20. Let X 0 ⊆ X be two polyhedral cell complexes. The cochain complex C • (X, X 0 ; k)
of the cellular pair (X, X 0 ) is defined by the exact sequence
0 → C • (X, X 0 ; k) → C • (X; k) → C • (X 0 ; k) → 0.
The ith relative cohomology of the pair is H i (X, X 0 ; k) = H i C • (X, X 0 ; k).
Definition 2.21. Let Y be a cell complex or a cellular pair. Then Y is called weakly colabeled if the labels on faces G ⊆ F satisfy a_G ⪯ a_F. In particular, if Y has an empty cell, then it must be labeled as well. Y is called colabeled if, in addition, every face label a_G equals the join ∨ a_F of all the labels on facets F ⊇ G. Again, when necessary, we will specifically mention the labeling function λ(F) = a_F and write the cell complex (or pair) as (Y, λ).
We have the following well-known lemma from the theory of CW complexes.
Lemma 2.22. Let X be a cell complex. A collection R of open cells in X is a subcomplex of X iff ∪R is closed in X.
If Y = (X, X 0 ) is a cellular pair, then we treat Y as the collection of (open) cells in X \ X 0 ,
for the reason that C i (X, X 0 , k) has as a basis the set of open cells of dimension i in X \ X 0 . As
Y being a complex is equivalent to Y being the pair (Y, {}) (where {} is the void subcomplex), in
the sense that the reduced cochain complex of Y is isomorphic to the cochain complex of the pair
(Y, {}), we will only speak of cellular pairs from here on when talking about colabeling.
Definition 2.23. Let Y = (X, A) be a cellular pair and U a subcollection of open cells of Y . We
say U is realized by a subpair (X 0 , A0 ) ⊆ (X, A) (i.e. X 0 ⊆ X, A0 ⊆ A) if U is the collection of
open cells in X 0 \ A0 .
Definition 2.24. Define Y_{⪯b} (resp. Y_{≺b} and Y_b) as the collection of open cells with label ⪯ b (resp. ≺ b and equal to b).
We often consider Y_{⪯b}, Y_{≺b}, and Y_b as subspaces of Y, the unions of their open cells.
Proposition 2.25. Let Y be a cellular pair and U = Y_{⪯b} (resp. Y_{≺b} and Y_b). Then U is realized by the pair (Ū, ∂U), where the first of the pair is the closure Ū of U as a subspace in Y, and the second is the partial boundary ∂U := Ū \ U.
Proof. See Appendix A.
Note that if X′ is the irrelevant complex, then H^i(X, X′; k) = H^i(X; k), the unreduced cohomology of X. If X′ is the void complex, then H^i(X, X′; k) = H̃^i(X; k), the reduced cohomology of X. Otherwise X′ contains a nonempty cell, and it is well known that H^i(X, X′; k) ≅ H̃^i(X/X′; k). In particular, when X′ = X, H^i(X, X′; k) ≅ H̃^i(•; k) = 0.
Definition 2.26. Let Y be a cellular pair (X, X 0 ), (weakly) colabeled. The (weakly) cocellular
monomial matrix supported on Y has the cochain complex C • (Y ; k) for scalar entries, with top
dimensional cells in homological degree 0. Its row and column labels are the face labels on Y .
The (weakly) cocellular free complex FY supported on Y is the complex of Nn -graded free
S-modules (with basis) represented by the cocellular monomial matrix supported on Y . If FY is
acyclic, so that its homology lies only in degree 0, then FY is a (weakly) cocellular resolution.
We sometimes abuse notation and say Y is a (weakly) cocellular resolution if FY is.
Proposition 2.27. Let (Y, λ) be a (weakly) colabeled complex or pair. If FY is a (weakly) cocellular
resolution, then FY resolves I = hxaF : F is a top dimensional cell of Y i. It is in addition minimal
iff for each cell F of Y , λ(F ) 6= λ(G) for each cell G strictly containing F .
We say a cellular pair (X, X 0 ) is of dimension d if d is the maximal dimension of all (open) cells
in X \ X 0 . If Y is a cell complex or cellular pair of dimension d, then a cell F of dimension k with
label aF corresponds to a copy of S at homological dimension d − k with degree xaF . Therefore,
Proposition 2.28. If Y is a d-dimension minimal (weakly) cocellular resolution of ideal I, then
βi,a (I) is the number of (d − i)-dimensional cells in Y with label a.
We have an acyclicity lemma for cocellular resolutions similar to Lemma 2.18.
Lemma 2.29. Let Y = (X, A) be a weakly colabeled pair of dimension d. For any U ⊆ X, write Ū for the closure of U inside X. Y is a cocellular resolution iff for any exponent sequence a, K := Y_{⪯a} satisfies one of the following:
1) The partial boundary ∂K := K̄ \ K contains a nonempty cell, and H^i(K̄, ∂K) is 0 for all i ≠ d and is either 0 or k when i = d, or
2) The partial boundary ∂K is void (in particular does not contain the empty cell), and H̃^i(K̄) is 0 for all i ≠ d and is either 0 or k when i = d, or
3) K is void.
Proof. See Appendix A.
Lemma 2.30. Suppose Y = (X, A) is a weakly colabeled pair of dimension d. If Y supports a
cocellular resolution of the monomial ideal I, then the Betti numbers of I can be calculated for all
i as
β_{i,b}(I) = dim_k H^{d−i}(Y_{⪯b}, ∂Y_{⪯b}; k).
Proof. See Appendix A.
Like with boundaries, we abbreviate the quotient K̄/∂K to K/∂, so in particular the equation above can be written as
β_{i,b}(I) = dim_k H̃^{d−i}(Y_{⪯b}/∂; k).
Figure 12: linfun2_1 and linfun2_2 suboplexes. (a) The linfun2_1 suboplex. Dashed lines indicate facets of the complete suboplex not in S_{linfun2_1}. Label 00 is the identically zero function; label 01 is the identity function. (b) S_{linfun2_2} is a cone over what is shown, which is a subcomplex of the boundary complex of an octahedron. The cone's vertex has label ((0, 0), 0), so that every top-dimensional simplex meets it, because every linear functional sends (0, 0) ∈ (F_2)^2 to 0.
2.2 The Canonical Ideal of a Function Class
Definition 2.31. An n-dimensional orthoplex (or n-orthoplex for short) is defined as any polytope combinatorially equivalent to {x ∈ R^n : ‖x‖_1 ≤ 1}, the unit disk under the 1-norm in R^n. Its boundary is a simplicial complex and has 2^n facets. A fleshy (n − 1)-dimensional suboplex, or suboflex, is the simplicial complex formed by any subset of these 2^n facets. The complete (n − 1)-dimensional suboplex is defined as the suboplex containing all 2^n facets. In general, a suboplex is any subcomplex of the boundary of an orthoplex.
For example, a 2-dimensional orthoplex is equivalent to a square; a 3-dimensional orthoplex is
equivalent to an octahedron.
Let C ⊆ [n → 2] be a class of finite functions. There is a natural fleshy (n − 1)-dimensional suboplex S_C associated to C. To each f ∈ C we associate an (n − 1)-dimensional simplex F_f ≅ ∆^{n−1}, which will be a facet of S_C. Each of the n vertices of F_f is labeled by a pair (i, f(i)) for some i ∈ [n], and each face G of F_f is labeled by a partial function f ⊆ f, whose graph is specified by the labels of the vertices of G. For each pair f, g ∈ C, F_f is glued together with F_g along the subsimplex G (in both facets) with partial function label f ∩ g. This produces S_C, which we call the canonical suboplex of C.
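The gluing construction is easy to carry out combinatorially: the faces of S_C are exactly the partial functions that extend to some member of C. The sketch below is our own illustration (members of C are encoded as tuples and faces as frozensets of (input, value) pairs; this encoding is an assumption of the snippet).

```python
# Illustrative sketch: build the canonical suboplex S_C as its set of faces.
from itertools import combinations

def canonical_suboplex(C, n):
    faces = set()
    for f in C:
        graph = [(i, f[i]) for i in range(n)]
        for r in range(n + 1):
            faces.update(frozenset(s) for s in combinations(graph, r))
    return faces

# Example: delta functions on [3].
delta3 = {tuple(int(i == j) for i in range(3)) for j in range(3)}
S = canonical_suboplex(delta3, 3)
print(len([F for F in S if len(F) == 3]))   # 3 facets, one per delta function
print(frozenset({(0, 0), (1, 0)}) in S)     # True: extends delta_2
print(frozenset({(0, 1), (1, 1)}) in S)     # False: no delta function maps two inputs to 1
```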
Example 2.32. Let [n → 2] be the set of boolean functions with n inputs. Then S_{[n→2]} is the complete (n − 1)-dimensional suboplex. Each cell of S_{[n→2]} is associated with a unique partial function f :⊆ [n] → [2], so we write F_f for such a cell.
Example 2.33. Let f ∈ [n → 2] be a single boolean function with domain [n]. Then S{f } is a single
(n − 1)-dimensional simplex.
Example 2.34. Let linfun2_d ⊆ [2^d → 2] be the class ((F_2)^d)^∗ of linear functionals mod 2. Figure 12 shows S_{linfun2_d} for d = 1 and d = 2.
The above gluing construction actually makes sense for any C ⊆ [n → m] (with general codomain [m]), even though the resulting simplicial complex will no longer be a subcomplex of S_{[n→2]}. However, we will still call this complex the canonical suboplex of C and denote it S_C as well. We name any such complex an m-suboplex. The (n − 1)-dimensional m-suboplex S_{[n→m]} is called the complete (n − 1)-dimensional m-suboplex.
The canonical suboplex of C ⊆ [n → m] can be viewed as the object generated by looking at the metric space C_p on C induced by a probability distribution p on [n], and varying p over all distributions in ∆^{n−1}. This construction seems to be related to certain topics in computer science
like derandomization and involves some category theoretic techniques. It is however not essential
to the homological perspective expounded upon in this work, and thus its details are relegated to
the appendix (See Appendix B).
Definition 2.35. Let C ⊆ [n → m]. Write S for the polynomial ring k[x] with variables xi,j for
i ∈ [n], j ∈ [m]. We call S the canonical base ring of C. The Stanley-Reisner ideal IC of C
is defined as the Stanley-Reisner ideal of SC with respect to S, such that xi,j is associated to the
“vertex” (i, j) of SC (which might not actually be a vertex of SC if no function f in C computes
f (i) = j).
The canonical ideal IC? of C is defined as the Alexander dual of its Stanley-Reisner ideal.
By Proposition 2.4, the minimal generators of I_C^⋆ are monomials x^σ where σ^c is the graph of a function in C. Let us define Γf to be the complement of graph f in [n] × [m], for any partial function f :⊆ [n] → [m]. Therefore, I_C^⋆ is minimally generated by the monomials {x^{Γf} : f ∈ C}. When the codomain [m] = [2], Γf = graph(¬f), the graph of the negation of f, so we can also write
I_C^⋆ = ⟨x^{graph ¬f} : f ∈ C⟩.
Example 2.36. Let [n → 2] be the set of boolean functions with domain [n]. Then I_{[n→2]} is the ideal ⟨x_{i,0} x_{i,1} : i ∈ [n]⟩, and I^⋆_{[n→2]} is the ideal ⟨x^{Γf} : f ∈ [n → 2]⟩ = ⟨x^{graph g} : g ∈ [n → 2]⟩.
Example 2.37. Let f ∈ [n → 2]. The singleton class {f } has Stanley-Reisner ideal hxi,¬f (i) : i ∈ [n]i
and canonical ideal hxΓf i.
The Stanley-Reisner ideal IC of a class C has a very concrete combinatorial interpretation.
Proposition 2.38. Let C ⊆ [n → m]. IC is generated by all monomials of the following forms:
1. xu,i xu,j for some u ∈ [n], i 6= j ∈ [m], or
2. xgraph f for some partial function f :⊆ [n] → [m] such that f has no extension in C, but every
proper restriction of f does.
It can be helpful to think of case 1 as encoding the fact that C is a class of functions, and so for
every function f , f sends u to at most one of i and j. For this reason, let us refer to monomials
of the form xu,i xu,j , i 6= j as functional monomials with respect to S and write FMS , or FM
when S is clear from context, for the set of all functional monomials. Let us also refer to a PF f
of the form appearing in case 2 as an extenture of C, and denote by ex C the set of extentures of
C. In this terminology, Proposition 2.38 says that IC is minimally generated by all the functional
monomials and xgraph f for all extentures f ∈ ex C.
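Extentures can be enumerated by brute force for small classes, which is a convenient way to get one's hands on I_C. The following sketch is ours and purely illustrative; it checks minimality by removing one point of the domain at a time, which suffices since any restriction of an extendable partial function is itself extendable.

```python
# Illustrative sketch: brute-force extentures ex C of a finite class C in [n -> m].
from itertools import combinations, product

def extends(total, partial):
    return all(total[i] == v for i, v in partial)

def extentures(C, n, m):
    exts = []
    for r in range(1, n + 1):
        for dom in combinations(range(n), r):
            for vals in product(range(m), repeat=r):
                pf = tuple(zip(dom, vals))
                if any(extends(h, pf) for h in C):
                    continue                       # pf has an extension in C
                codim1_ok = all(any(extends(h, pf[:i] + pf[i+1:]) for h in C)
                                for i in range(r))
                if codim1_ok:
                    exts.append(dict(pf))
    return exts

# Example: C = delta functions on [3] (exactly one input mapped to 1).
C = {tuple(int(i == j) for i in range(3)) for j in range(3)}
print(extentures(C, 3, 2))
# Extentures: every pair of inputs both sent to 1, and the all-zero assignment.
```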
Proof. The minimal generators of I_C are monomials x^a ∈ I_C such that x^a/x_{u,i} ∉ I_C for any (u, i) ∈ a. By the definition of I_C, a is a nonface, but each proper subset of a is a face of the canonical suboplex S_C of C. Certainly pairs of the form {(u, i), (u, j)} for u ∈ [n], i ≠ j ∈ [m] are not faces of S_C, but each strict subset of such a pair is a face unless (u, i) ∉ S_C or (u, j) ∉ S_C; in either case x_{(u,i)} or x_{(u,j)} falls into case 2. If a minimal generator ω is not a pair of such form, then its exponent b cannot contain such a pair {(u, i), (u, j)} either, or else ω is divisible by x_{u,i} x_{u,j}. Therefore b is the graph of a partial function f :⊆ [n] → [m]. In particular, there is no f ∈ C extending f, or else graph f would be a face of S_C. But every proper restriction of f must have an extension in C. Thus ω is of the form stated in the proposition. One can also quickly see that x^{graph f} for any such f is a minimal generator of I_C.
18
2
THEORY
Taking the minimal elements of the above set, we get the following.
Proposition 2.39. The minimal generators of I_C for C ⊆ [n → m] are
{x^{graph f} : f ∈ ex C} ∪ {x_{u,i} x_{u,j} ∈ FM : (u ↦ i) ∉ ex C, (u ↦ j) ∉ ex C}.
Are all ideals with minimal generators of the above form a Stanley-Reisner ideal of a function
class? It turns out the answer is no. If we make suitable definitions, the above proof remains valid
if we replace C with a class of partial functions (see Proposition 2.85). But there is the following
characterization of the Stanley-Reisner ideal of a (total) function class.
Proposition 2.40. Let I ⊆ S be an ideal minimally generated by {x^{graph f} : f ∈ F} ∪ {x_{u,i} x_{u,j} ∈ FM : (u ↦ i) ∉ F, (u ↦ j) ∉ F} for a set of partial functions F. Then I is the Stanley-Reisner ideal of a class of total functions C precisely when the following condition (?) holds:
For any subset F ⊆ F, if F(u), defined as {f(u) : f ∈ F, u ∈ dom f}, is equal to [m] for some u ∈ [n], then either |F(v)| > 1 for some v ≠ u in [n], or
∨_u F := ∪_{f ∈ F, u ∈ dom f} f ↾ (dom f \ {u})      (?)
is a partial function extending some h ∈ F.
Lemma 2.41. For I minimally generated as above, I = IC for some C iff for any partial f :⊆ [n] →
[m], xgraph f 6∈ I implies xgraph f 6∈ I for some total f extending f.
Proof of Lemma 2.41. Let ∆_I be the Stanley-Reisner complex of I. Then each face of ∆_I is the graph of a partial function, as I has all functional monomials as generators. A set of vertices σ is a face iff x^σ ∉ I. Now I = I_C for some C iff ∆_I is a generalized suboflex, iff the maximal cells of ∆_I are all (n − 1)-dimensional simplices, iff every cell is contained in such a maximal cell, iff x^{graph f} ∉ I implies x^{graph f} ∉ I for some total f extending f.
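The criterion of Lemma 2.41 can be tested by brute force for small n and m. In the sketch below (ours, purely illustrative) we use the observation that for an ideal I of the stated form, x^{graph f} lies in I exactly when f extends one of the generating partial functions, since the graph of a partial function never contains the support of a functional monomial.

```python
# Illustrative brute-force check of the criterion in Lemma 2.41.
from itertools import combinations, product

def extends(f, g):
    return all(i in f and f[i] == v for i, v in g.items())

def is_stanley_reisner_of_total_class(gens, n, m):
    def in_ideal(f):                      # x^{graph f} in I  <=>  f extends some generator
        return any(extends(f, g) for g in gens)
    for r in range(n + 1):
        for dom in combinations(range(n), r):
            for vals in product(range(m), repeat=r):
                f = dict(zip(dom, vals))
                if in_ideal(f):
                    continue
                totals = (dict(zip(range(n), tv)) for tv in product(range(m), repeat=n))
                if not any(extends(t, f) and not in_ideal(t) for t in totals):
                    return False
    return True

# gens = extentures of the delta-function class on [3] (cf. the discussion of ex C above).
gens = [{0: 0, 1: 0, 2: 0}, {0: 1, 1: 1}, {0: 1, 2: 1}, {1: 1, 2: 1}]
print(is_stanley_reisner_of_total_class(gens, 3, 2))              # True: I = I_C for C = delta_3
print(is_stanley_reisner_of_total_class([{0: 0}, {0: 1}], 3, 2))  # False: no total f survives
```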
Proof of Proposition 2.40. (⇒). We show the contrapositive. Suppose for some F ⊆ F and u ∈ [n], F(u) = [m] but |F(v)| ≤ 1 for all v ≠ u, and g := ∨_u F does not extend any f ∈ F. Then x^{graph g} ∉ I, and every total f ⊇ g must contain some f ∈ F, and so x^{graph f} ∈ I. Therefore I ≠ I_C for any C.
(⇐). Suppose (?) is true. We show that for any nontotal function f :⊆ [n] → [m] such that x^{graph f} ∉ I, there is a PF h that extends f by one point, such that x^{graph h} ∉ I. By simple induction, this would show that I = I_C for some C.
Choose u ∉ dom f. Construct F := {g ∈ F : u ∈ dom g, f ⊇ g ↾ (dom g \ {u})}.
If F(u) ≠ [m], then we can pick some i ∉ F(u), and set h(u) = i and h(v) = f(v) for all v ≠ u. If h ⊇ k for some k ∈ F, then k ∈ F, but then k(u) ≠ h(u) by assumption. Therefore h does not extend any PF in F, and x^{graph h} ∉ I.
If F(u) = [m], then by (?), either |F(v)| > 1 for some v ≠ u or ∨_u F extends some h ∈ F. The former case is impossible, as f ⊇ g ↾ (dom g \ {u}) for all g ∈ F. The latter case is also impossible, as it implies that x^{graph f} ∈ I.
2.3 Resolutions
Sometimes we can find the minimal resolution of the Stanley-Reisner ideal of a class. For example,
consider the complete class [n → 2]. Its Stanley-Reisner ideal is hxi,0 xi,1 : i ∈ [n]i as explained in
Example 2.36.
Theorem 2.42. Let X be an (n − 1)-simplex, whose vertex i is labeled by monomial xi,0 xi,1 . Then
X is a minimal cellular resolution of S/I[n→2] .
Proof. The vertex labels of X generate I[n→2] , and each face label is distinct from other face labels,
so if X is a cellular resolution, then it resolves S/I[n→2] and is minimal. Therefore it suffices to show
that FX is exact. By Lemma 2.18, we need to show that Xb is acyclic over k for all b ⊆ [n] × [2].
Xb can be described as the subcomplex generated by the vertices {i : (i, 0), (i, 1) ∈ b}, and hence
is a simplex itself and therefore contractible. This completes the proof.
Corollary 2.43. The Betti numbers of I_{[n→2]} are nonzero only at degrees of the form
σ = Π_{i∈U} x_{i,0} x_{i,1}
for subsets U ⊆ [n]. In such cases,
β_{i,σ}(I_{[n→2]}) = I(i = |U| − 1).
Similar reasoning also gives the minimal resolution of any singleton class.
Theorem 2.44. Suppose f ∈ [n → 2]. Let X be an (n − 1)-simplex, whose vertex i is labeled by
variable xi,¬f (i) . Then X is a minimal cellular resolution of S/I{f } .
Corollary 2.45. The Betti numbers of I_{{f}} are nonzero only at degrees of the form
σ = Π_{i∈U} x_{i,¬f(i)}
for subsets U ⊆ [n]. In such cases,
β_{i,σ}(I_{{f}}) = I(i = |U| − 1).
However, in general, minimally resolving the Stanley-Reisner ideal of a class seems difficult.
Instead, we turn to the canonical ideal, which appears to more readily yield cellular resolutions,
and, as we will see, whose projective dimension corresponds to the VC dimension of the class under an algebraic condition. For example, a single point with label x^{Γf} minimally resolves S/I^⋆_{{f}} for any f ∈ [n → 2].
We say (X, λ) is a cellular resolution of a class C if (X, λ) is a cellular resolution of S/IC? .
In the following, we construct the cellular resolutions of many classes that are studied in Computational Learning Theory. As a warmup, we continue our discussion of [n → 2] by constructing a
cellular resolution of its canonical ideal.
Theorem 2.46. Let P be the n-dimensional cube [0, 1]^n, where vertex v ∈ [2]^n is labeled with the monomial Π_{i=1}^{n} x_{i,v_i}. Then P minimally resolves [n → 2].
Proof. We first show that this labeled cell complex on the cube is a cellular resolution. Let σ ⊆
[n] × [2]. We need to show that Pσ is acyclic. If for some i, (i, 0) 6∈ σ & (i, 1) 6∈ σ, then Pσ is
empty and thus acyclic. Otherwise, σ c defines a partial function f :⊆ [n] → [2]. Then Pσ is the
“subcube”
{v ∈ [0, 1]n : vi = ¬f (i), ∀i ∈ dom f },
and is therefore acyclic. This shows that P is a resolution. It is easy to see that all faces of P have
unique labels in this form, and hence the resolution is minimal as well.
P resolves S/I^⋆_{[n→2]} by Example 2.36.
The above proof readily yields the following description of [n → 2]'s Betti numbers.
Corollary 2.47. The Betti numbers for I^⋆_{[n→2]} are nonzero only at degrees of the form Γf for partial functions f :⊆ [n] → [2]. More precisely,
β_{i,Γf}(I^⋆_{[n→2]}) = I(|dom f| = n − i).
We made a key observation in the proof of Theorem 2.46, that when neither (i, 0) nor (i, 1) is
in σ for some i, then Pσ is empty and thus acyclic. A generalization to arbitrary finite codomains
is true for all complexes X we are concerned with:
Lemma 2.48. Let (X, λ) be a labeled complex in which each vertex i is labeled with Γf_i for a partial function f_i :⊆ [n] → [m]. Then the face label λ(F) for a general face F is Γ(∩_{i∈F} f_i). A fortiori, X_{⪯σ} is empty whenever σ is not of the form Γg for some partial function g :⊆ [n] → [m].
Proof. Treating the exponent labels, which are squarefree, as sets, we have
λ(F) = ∪_{i∈F} Γf_i = (∩_{i∈F} graph f_i)^c = (graph ∩_{i∈F} f_i)^c = Γ(∩_{i∈F} f_i).
If σ is not of the form Γg, then for some a ∈ [n] and b ≠ b′ ∈ [m], (a, b), (a, b′) ∉ σ. But every exponent label is all but at most one of the pairs (a, ∗). So X_{⪯σ} is empty.
If we call a complex as described in the lemma partial-function-labeled, or PF-labeled for short, then any PF-labeled complex has a set of partial function labels, or PF labels for short, along with its monomial/exponent labels. If f_F denotes the partial function label of face F and a_F denotes the exponent label of face F, then they can be interconverted via
f_F = a_F^c,  a_F = Γf_F,
where on the right we identify a partial function with its graph. Lemma 2.48 therefore says that F ⊆ G implies f_F ⊇ f_G, and f_F = ∩_{i∈F} f_i, for faces F and G. When we wish to be explicit about the PF labeling function, we use the symbol µ, such that µ(F) = f_F, and refer to labeled complexes as pairs (X, µ) or triples (X, λ, µ). We can furthermore reword Lemma 2.18 for the case of PF-labeled complexes. Write X_{⊇f} (resp. X_{⊃f}) for the subcomplex with partial function labels weakly (resp. strictly) extending f.
Lemma 2.49. A PF-labeled complex X is a cellular resolution iff X_{⊇f} is acyclic over k for all partial functions f.
A PF-colabeled complex or pair is defined similarly. The same interconversion equations hold.
We can likewise reword Lemma 2.29.
Lemma 2.50. Let (X, A) be a weakly PF-colabeled complex or pair of dimension d. (X, A) is a
cocellular resolution if for any partial function f, (X, A)⊇f is either
1) representable as a cellular pair (Y, B) – that is, X \A as a collection of open cells is isomorphic
to Y \ B as a collection of open cells, such that H i (Y, B) is 0 for all i 6= d, or
2) a complex Y (in particular it must contain a colabeled empty cell) whose reduced cohomology
vanishes at all dimensions except d.
Because any cellular resolution of a class C only has cells with degree Γf for some PF f, the Betti
numbers βi,σ (IC? ) can be nonzero only when σ = Γg for some PF g. We define the Betti numbers
of a class C as the Betti numbers of its canonical ideal IC? , and we denote βi,f (C) := βi,Γf (IC? ).
Finally we note a trivial but useful proposition and its corollary.
Proposition 2.51. Let C ⊆ [n → m], and let f :⊆ [n] → [m]. The subset of functions extending f, {h ∈ C : f ⊆ h}, is the intersection ∩_{i∈dom f} {h ∈ C : (i, f(i)) ⊆ h} of the collection of sets which extend the point restrictions of f.
If partial functions g_1, . . . , g_k ∈ [⊆ n → m] satisfy ∪_t g_t = f, then we also have
{h ∈ C : f ⊆ h} = ∩_{t=1}^{k} {h ∈ C : g_t ⊆ h}.
Corollary 2.52. Let f :⊆ [n] → [m]. Suppose X is a PF-labeled complex. If partial functions g_1, . . . , g_k ∈ [⊆ n → m] satisfy ∪_t g_t = f, then
X_{⊇f} = ∩_{t=1}^{k} X_{⊇g_t}.
With these tools in hand, we are ready to construct cellular resolutions of more interesting
function classes.
2.3.1 Delta Functions
Let delta_n ⊆ [n → 2] be the class of delta functions δ_i(j) = I(i = j). Form the abstract simplex X with vertex set [n]. Label each vertex i with δ_i and induce PF labels on all higher-dimensional faces in the natural way. One can easily check the following lemma.
Lemma 2.53. For any face F ⊆ [n] with |F | > 1, its PF label fF is the function defined on [n] \ F ,
sending everything to 0. Conversely, for every partial f :⊆ [n] → [2] with im f ⊆ {0}, there is a
unique face F with fF = f as long as n − | dom f | ≥ 2.
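The lemma is easy to confirm computationally for small n. The sketch below is ours and purely illustrative: it computes the PF label of each face as the pointwise intersection of its delta functions and compares it with the predicted label.

```python
# Illustrative brute-force check of Lemma 2.53 for small n.
from itertools import combinations

def delta(i, n):
    return tuple(int(i == j) for j in range(n))

def pf_label(face, n):
    """Intersection of the delta functions delta_i, i in face, as a partial function."""
    members = [delta(i, n) for i in face]
    return {j: members[0][j] for j in range(n)
            if all(m[j] == members[0][j] for m in members)}

n = 5
for size in range(2, n + 1):
    for face in combinations(range(n), size):
        # Lemma 2.53 predicts: domain = [n] \ face, all values 0.
        expected = {j: 0 for j in range(n) if j not in face}
        assert pf_label(face, n) == expected
print("Lemma 2.53 verified for n =", n)
```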
Theorem 2.54. X is a (n − 1)-dimensional complex that minimally resolves deltan .
Proof. We apply Lemma 2.49: We show for any f :⊆ [n] → [2], X⊇f is acyclic.
If f sends two distinct elements to 1, then X⊇f is empty. If f sends exactly one element i to
1, then X⊇f is the single point i. If f is the empty function, then X⊇f is the whole simplex and
thus acyclic. Otherwise, im f = {0}. If n − | dom f | = 1, then there is exactly one delta function
extending f, so X⊇f is again a point. If n − | dom f | ≥ 2, then by Lemma 2.53, X⊇f is exactly one
face F with fF = f, and therefore acyclic.
X is furthermore minimal because all PF labels are distinct.
Tabulating the faces by their labels, we obtain
Corollary 2.55. For i > 0, βi,f (deltan ) is nonzero only when im f ⊆ {0} and n − | dom f| ≥ 2,
and i = n − | dom f| − 1. In that case, βi,f (deltan ) = 1. In particular, the top dimensional Betti
number is βn−1,† (deltan ) = 1.
2.3.2 Weight-k Functions
Write o := 0 ∈ [n → 2], the function that sends all inputs to 0. Let wt(f, k)_n ⊆ [n → 2] be the class consisting of all functions g such that there are exactly k inputs u ∈ [n] with g(u) ≠ f(u). This is a generalization of delta, as wt(o, 1)_n = delta_n. WLOG, we consider the case f = o in this section. Consider the hyperplane H_k := {v ∈ R^n : Σ_i v_i = k} and the polytope given by
P_n^k := [0, 1]^n ∩ H_k.
We inductively define its labeling function µ^k_n and show that (P_n^k, µ^k_n) is a minimal cellular resolution of wt(f, k)_n.
For n = 1, P_1^0 and P_1^1 are both a single point. Set µ_1^0(P_1^0) = (0 ↦ 0) and µ_1^1(P_1^1) = (0 ↦ 1). Then trivially, (P_1^0, µ_1^0) is the minimal resolution of wt(o, 0) = {0 ↦ 0} and (P_1^1, µ_1^1) is the minimal resolution of wt(o, 1) = {0 ↦ 1}.
Suppose that µ^k_m is defined and that (P_m^k, µ^k_m) is a minimal cellular resolution of wt(o, k)_m for all 0 ≤ k ≤ m. Consider n = m + 1 and fix k. Write, for each u ∈ [n], b ∈ [2], F_{u,b} := [0, 1]^u × {b} × [0, 1]^{n−u−1} for the corresponding facet of [0, 1]^n. Then P_n^k has boundary given by
∪_{u∈[n], b∈[2]} F_{u,b} ∩ H_k.
But we have F_{u,0} ∩ H_k ≅ P_{n−1}^k and F_{u,1} ∩ H_k ≅ P_{n−1}^{k−1} (here ≅ means affinely isomorphic). Thus, if G is a face of F_{u,b} ∩ H_k, we define the labeling functions
µ^k_n(G) : [n] → [2],
i ↦ µ^{k−b}_{n−1}(G)(i) if i < u,
i ↦ b if i = u,
i ↦ µ^{k−b}_{n−1}(G)(i − 1) if i > u.
If we represent functions as strings over {0, 1, .} (where . signifies "undefined"), then essentially µ^k_n(G) is obtained by inserting b ∈ {0, 1} at the uth position in µ^{k−b}_{n−1}(G). It is easy to see that, when G is both a face of F_{u,b} ∩ H_k and a face of F_{u′,b′} ∩ H_k, the above definitions of µ^k_n(G) coincide. Finally, we set µ^k_n(P_n^k) = †. This finishes the definition of µ^k_n.
In order to show that (P_n^k, µ^k_n) is a minimal cellular resolution, we note that by the induction hypothesis, it suffices to show that (P_n^k)_{⊇†} = P_n^k is acyclic, since (P_n^k)_{⊇(u↦b)∪f} ≅ (P_{n−1}^{k−b})_{⊇f} is acyclic. But of course this is trivial given that P_n^k is a polytope. By an easy induction, the vertex labels of P_n^k are exactly the functions of wt(o, k)_n. Thus
Theorem 2.56. (P_n^k, µ^k_n) as defined above is a minimal resolution of wt(o, k)_n.
Corollary 2.57. For k ≠ 0, n, the class C := wt(o, k)_n has Betti number β_{i,†}(C) = I(i = n − 1). Furthermore, for each PF f, β_{i,f}(wt(o, k)_n) is nonzero for at most one i, where it is 1.
2.3.3 Monotone Conjunction
Let L = {l_1, . . . , l_d} be a set of literals. The class of monotone conjunctions monconj_d over L is defined as the set of functions that can be represented as a conjunction of a subset of L. We represent each h ∈ monconj_d as the set of literals L(h) in its conjunctive form, and for each subset (or indicator function thereof) T of literals, let Λ(T) denote the corresponding function. For example, Λ{l_1, l_3} is the function that takes v ∈ [2]^d to 1 iff v_1 = v_3 = 1.
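For concreteness, the class monconj_d and the map Λ are easy to materialize as truth tables. The sketch below is our own illustration (the literal l_i is identified with the coordinate index i − 1, an encoding assumption of the snippet).

```python
# Illustrative sketch: enumerate monconj_d and evaluate Lambda(T).
from itertools import product

def Lambda(T, d):
    """Monotone conjunction of the literals indexed by T, as a function on [2]^d."""
    return {v: int(all(v[i] == 1 for i in T)) for v in product([0, 1], repeat=d)}

def monconj(d):
    subsets = (tuple(i for i in range(d) if (mask >> i) & 1) for mask in range(2 ** d))
    return [Lambda(T, d) for T in subsets]

d = 3
fns = monconj(d)
print(len(fns), "functions;", len({tuple(sorted(f.items())) for f in fns}), "distinct")
# 8 functions, all distinct: each subset of literals gives a different monotone conjunction.
print(Lambda((0, 2), d)[(1, 0, 1)], Lambda((0, 2), d)[(1, 1, 0)])   # 1, 0
```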
Theorem 2.58. Let X be the d-cube in which each vertex V ∈ [2]d has partial function label (that
is in fact a total function) fV = Λ(V ), where on the RHS V is considered an indicator function for
a subset of literals. Then X resolves monconjd minimally.
We first show that the induced face labels of X are unique, and hence if X is a resolution, it is
minimal. This will follow from the following three lemmas.
Lemma 2.59. Let w be a partial function w :⊆ [d] → [2]. Let Σw be the set of monotone conjunctions {h : l_i ∈ L(h) if w(i) = 1 and l_i ∉ L(h) if w(i) = 0}. Then the intersection of functions (not literals) ∩Σw is the partial function Λ(w) := f :⊆ [2^d] → [2],
f(v) = 0 if v_i = 0 for some i with w(i) = 1,
f(v) = 1 if v_i = 1 for all i with w(i) = 1 and for all i where w(i) is undefined,
f(v) undefined otherwise.
When w is a total function considered as a bit vector, Λ(w) coincides with the previous definition
of Λ.
If F is the face of the cube resolution X with the vertices {V : V ⊇ w} (here treating V ∈ [2]d ∼
=
[d → 2] as a function), then the partial function label of F is f.
Proof. f is certainly contained in ∩Σw. To see that the inclusion is an equality, we show that for any v not of the two cases above, there are two functions h, h′ that disagree on v. Such a v satisfies v_i = 1 for all i with w(i) = 1 but v_i = 0 for some i where w(i) is undefined. There is some h ∈ Σw with L(h) containing the literal l_i and there is another h′ ∈ Σw with l_i ∉ L(h′). These two functions disagree on v.
The second statement can be checked readily. The third statement follows from Lemma 2.48.
Lemma 2.60. For any partial function f of the form in Lemma 2.59, there is a unique partial
function w :⊆ d → 2 with f = Λ(w), and hence there is a unique cell of X with PF label f.
Proof. The set A := w−1 1 ∪ (dom w)c is the set {i ∈ d : vi = 1, ∀v ∈ f −1 1}, by the second case in
f’s definition. The set B := w−1 1 is the set of i ∈ d such that the bit vector v with vi = 0 and
vj = 1 for all j 6= i is in f −1 0, by the first case in f’s definition. Then dom w = (A \ B)c , and
w−1 0 = (dom w) \ (w−1 1).
Lemma 2.61. The face labels of X are all unique.
Proof. Follows from Lemma 2.59 and Lemma 2.60.
Proof of Theorem 2.58. We show that X is a resolution (minimal by the above) by applying Lemma 2.49. Let f :⊆ [2^d] → [2] be a partial function and let g_0, g_1 be defined by g_t = f ↾ f^{−1}(t) for t = 0, 1, so that f = g_0 ∪ g_1. By Corollary 2.52, X_{⊇f} = X_{⊇g_0} ∩ X_{⊇g_1}. We first show that X_{⊇g_1} is a face of X, and thus is itself a cube. If h ∈ monconj_d is a conjunction, then
it can be seen that h extends g_1 iff L(h) ⊆ L_1 := ∩_{v∈dom g_1} {l_i : v_i = 1} (check this!). Thus X_{⊇g_1} is the subcomplex generated by the vertices V whose coordinates V_i satisfy V_i = 0 for all i ∉ L_1.
subcomplex is precisely a face of X.
Now we claim that each cell of X⊇g0 ∩ X⊇g1 is a face of a larger cell which contains the vertex
W with W1 = 1, ∀i ∈ L1 and Wi = 0, ∀i 6∈ L1 . This would imply that X⊇g0 ∩ X⊇g1 is contractible
via the straight line homotopy to W .
We note that if h, h0 ∈ monconjd and L(h) ⊆ L(h0 ), then h extends g0 only if h0 also extends
g0 . (Indeed, h extends g0 iff ∀v ∈ dom g0 , vk = 0 while lk ∈ L(h) for some k. This still holds for
h0 if h0 contains all literals appearing in h). This means that, if F is a face of X⊇g0 , then the face
F 0 generated by {V 0 : ∃V ∈ F, V ⊆ V 0 } (where V and V 0 are identified with the subset of literals
they correspond to) is also contained in X⊇g0 ; F 0 can alternatively be described geometrically as
the intersection [0, 1]d ∩ (F + [0, 1]d ). If furthermore F is a face of X⊇g1 , then F 0 ∩ X⊇g1 contains
W as a vertex, because W is inclusion-maximal among vertices in X⊇g1 (when identified with sets
for which they are indicator functions). This proves our claim, and demonstrates that X_{⊇f} is contractible. Therefore, X is a (minimal) resolution of I^⋆_{monconj_d} by construction.
Corollary 2.62. βi,f (monconjd ) is nonzero iff f = Λ(w) for some PF w :⊆ [d] → [2] and i =
d − | dom w|, and in that case it is 1. In particular, the top dimensional nonzero Betti number is
βd,17→1 (monconjd ) = 1.
Proof. This follows from Lemma 2.59 and Lemma 2.60.
We will refer to X as the cube resolution of monconjd .
2.3.4 Conjunction
Define L′ := ∪_{i=1}^{d} {l_i, ¬l_i}. The class of conjunctions conj_d is defined as the set of functions that can be represented as a conjunction of a subset of L′. In particular, conj_d contains the null function ⊥ : v ↦ 0, ∀v, which can be written as the conjunction l_1 ∧ ¬l_1.
We now describe the polyhedral cellular resolution of conj_d, which we call the cone-over-cubes resolution, denoted COC_d. Each nonnull function h has a unique representation as a conjunction of literals in L′. We define L(h) to be the set of such literals and Λ̃ to be the inverse function taking a set of consistent literals to the conjunction function. We assign a vertex V_h ∈ {−1, 0, 1}^d × {0} ⊆ R^{d+1} to each nonnull h by
(V_h)_i = 1 if l_i ∈ L(h), (V_h)_i = −1 if ¬l_i ∈ L(h), and (V_h)_i = 0 otherwise,
for all 1 ≤ i ≤ d (and of course (V_h)_{d+1} = 0), so that the PF label f_{V_h} = h. We put in COC_d all faces of the (2, 2, . . . , 2) (d times) pile-of-cubes: these are the collection of 2^d d-dimensional unit cubes with vertices among {−1, 0, 1}^d × {0}. This describes all faces over nonnull functions.
Finally, we assign the coordinate V⊥ = (0, . . . , 0, 1) ∈ Rd+1 , and put in COCd the (d + 1)dimensional polytope C which has vertices Vh for all h ∈ conjd , and which is a cone over the
pile of cubes, with vertex V⊥ . (Note that this is an improper polyhedron since the 2d facets of C
residing on the base, the pile of cubes, all sit on the same hyperplane.)
Figure 13 shows the cone-over-cubes resolution for d = 2.
Theorem 2.63. COCd is a (d + 1)-dimensional complex that minimally resolves conjd .
Figure 13: Cone-over-cubes resolution of conj_2. Labels are PF labels.
Proof. Let X = COC_d. We first show that X is a resolution of conj_d. We wish to prove that for any f :⊆ [2^d] → [2], the subcomplex X_{⊇f} is acyclic.
First suppose that im f = {0}. Then X⊇f is a subcomplex that is a cone with V⊥ as the vertex,
and hence contractible.
Otherwise f sends some point u ∈ [2^d] ≅ [2]^d to 1. All h ∈ conj_d extending f must have L(h) be a subset of {l_i^{u_i} : i ∈ [d]}, where l_i^{u_i} is the literal l_i if u_i = 1 and ¬l_i if u_i = 0. The subcomplex of X consisting of these h is a single d-cube of the pile, given by the opposite pair of vertices 0 and 2u − 1 in R^d considered as the hyperplane containing the pile. But then this case reduces to the reasoning involved in the proof that the cube resolution resolves monconj_d. Hence we conclude that X_{⊇f} is acyclic for all f, and therefore X resolves conj_d.
We prove the uniqueness of PF labels and therefore the minimality of X through the following
series of propositions.
Each face of COCd containing the vertex V⊥ is a cone over some subpile-of-subcubes, which has
vertices Pw = {V ∈ {−1, 0, 1}d × {0} : Vi = w(i), ∀i ∈ dom w} for some PF w :⊆ [d] → {−1, 1}. We
shall write Cw for a face associated with such a w. Obviously dim Cw = d + 1 − | dom w|.
Proposition 2.64. Let W ∈ {−1, 0, 1}^d × {0} be defined by W_i = w(i) for all i ∈ dom w, and W_i = 0 otherwise. Thus W is the "center" of the subpile-of-subcubes mentioned above. Its PF label is a total function f = f_W ∈ conj_d.
Then the face C_w has a unique PF label Λ0(w) := f ↾ f^{−1}(0) = f ∩ ⊥ as a partial function :⊆ [2^d] → [2].
Proof. By Lemma 2.48, the PF label of C_w is the intersection of the PF labels of its vertices. Since Λ0(w) = f ∩ ⊥, we have Λ0(w) ⊇ f_{C_w}.
Because L(f) ⊆ L(f_V) for all V ∈ P_w, f(u) = 0 implies f_V(u) = 0 for all V ∈ P_w. Thus Λ0(w) ⊆ f_V for all V ∈ C_w, whence Λ0(w) = f_{C_w} as desired.
Uniqueness follows from the uniqueness of the preimage of 0.
The rest of the faces in COC_d reside in the base, and for each face F, ∪{L(f_V) : V ∈ F} contains at most one of each pair {¬l_i, l_i}. Define the partial order ⊴ on {−1, 0, 1}^d as the product order of the partial order in which 0 ◁ +1 and 0 ◁ −1. It is easy to check that V ⊴ W implies L(f_V) ⊆ L(f_W), which further implies f_V^{−1}(0) ⊆ f_W^{−1}(0) and f_V^{−1}(1) ⊇ f_W^{−1}(1). Each face F can be described uniquely by the least and the greatest vertices in F under this order, which we denote resp. as min F and max F.
Then the vertices in F are precisely those that fall in the interval [min F, max F] under the partial order ⊴.
Proposition 2.65. Let F be a face residing in the base of COC_d and write V := min F and W := max F. Then F has a unique PF label f_F = Λ(V, W) :⊆ [2^d] → {−1, 0, 1}, defined by
u ↦ 0 if u_i = (1 − V_i)/2 for some i with V_i ≠ 0,
u ↦ 1 if u_i = (1 + W_i)/2 for all i with W_i ≠ 0,
undefined otherwise.
Proof. By the observation above, we see that f_F = ∩_{U∈F} f_U has f_F^{−1}(0) = f_V^{−1}(0) and f_F^{−1}(1) = f_W^{−1}(1). Both sets are exactly of the form described above.
It remains to check that the map F 7→ fF is injective. Let f = fF for some face F . We have
(max F )i = 1 iff ∀u ∈ f −1 (1), ui = 1 and (max F )0 = 0 iff ∀u ∈ f −1 (1), ui = −1. Thus f determines
max F . Let v be the bit vector defined by vj = (1 + (max F )j )/2 if (max F )j 6= 0 and vj = 0
otherwise. Let v i denote v with the ith bit flipped. Then (min F )i 6= 0 iff f(v i ) = 0. For all other
i, we have (min F )i = (max F )i . This proves the uniqueness of the label fF .
Proposition 2.66. Every face of COCd has a unique PF label.
Proof. The only thing remaining to check after the two propositions above is that faces incident
on the vertex V⊥ have different PF labels from all other faces. But it is obvious that functions of
the form in the previous proposition have nonempty preimage of 1, so cannot equal Λ0 (w) for any
w.
Summarizing our results, we have the following
Theorem 2.67. βi,f (conjd ) is nonzero iff f = Λ0 (w) for some w :⊆ [d] → {−1, 1} and i =
d + 1 − | dom w| or f = Λ(V, W ) for some V, W ∈ {−1, 0, 1}d , V E W . In either case, the Betti
number is 1.
In particular, the top dimensional nonzero Betti number is βd+1,† (conjd ) = 1.
2.3.5 Threshold Functions
Let U ⊆ R^d be a finite set of points. We are interested in the class of linear threshold functions linthr_U on U, defined as the set of functions of the form
u ↦ 1 if c · u > r, and u ↦ 0 if c · u ≤ r,
for some c ∈ R^d, r ∈ R. We shall assume U affinely spans R^d; otherwise, we replace R^d with the affine span of U, which does not change the class linthr_U.
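For small U, the class linthr_U can be enumerated by testing each labeling for realizability with a linear program; mixing the strict and non-strict inequalities is harmless because scaling (c, r) by a positive factor preserves both kinds of constraints. The sketch below is ours, not from the text, and assumes scipy is available.

```python
# Illustrative sketch: enumerate linthr_U for a small finite U by LP feasibility.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def realizable(U, labels):
    # Variables (c_1..c_d, r):  c.u - r >= 1 where label 1,  c.u - r <= 0 where label 0.
    d = len(U[0])
    A_ub, b_ub = [], []
    for u, y in zip(U, labels):
        row = list(u) + [-1.0]                      # coefficient of r is -1
        if y == 1:
            A_ub.append([-x for x in row]); b_ub.append(-1.0)
        else:
            A_ub.append(row); b_ub.append(0.0)
    res = linprog(np.zeros(d + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (d + 1))
    return res.success

def linthr(U):
    return [labels for labels in product([0, 1], repeat=len(U)) if realizable(U, labels)]

U = [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # the 2-bit case, so linthr_U = linthr_2
print(len(linthr(U)))   # 14: every Boolean function on 2 bits except XOR and its negation
```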
When U = {−1, 1}^d, this is the class of linear threshold functions on d bits, and we write linthr_d for linthr_U in this case. Define
m_s(u_0, . . . , u_{d−1}) = (u_0 · · · u_{s−2} u_{s−1}, u_0 · · · u_{s−2} u_s, · · · , u_{d−s} · · · u_{d−2} u_{d−1})
as the function that outputs the degree-s monomials of its input. For U = M_k, the image of {−1, 1}^d under the map
m_{≤k} : u ↦ (m_1 u, · · · , m_k u),
linthr_U becomes polythr^k_d, the class of polynomial threshold functions on d bits with degree bound k.
We will construct a minimal cocellular resolution for linthr_U, which will turn out to be homeomorphic as a topological space to the d-sphere S^d.¹
We first vectorize the set U by mapping each point to a vector, u ↦ ~u, (u_1, . . . , u_d) ↦ (u_1, . . . , u_d, 1). We refer to the image of U under this vectorization as ~U. Each oriented affine hyperplane H in the original affine space R^d (including the hyperplane at infinity, i.e. all points get labeled positive or all points get labeled negative) corresponds naturally and bijectively to a vector hyperplane ~H in R^{d+1}, which can be identified by the normal vector ν(~H) on the unit sphere S^d ⊆ R^{d+1} perpendicular to ~H and oriented the same way.
For each vector ~u ∈ R^{d+1}, the set of oriented vector hyperplanes ~H that contain ~u is exactly the set of those which have their normal vectors ν(~H) residing on the equator E_{~u} := ~u^⊥ ∩ S^d of S^d. This equator divides S^d \ E_{~u} into two open sets: v · ~u > 0 for all v in one (let's call this set R^+_{~u}) and v · ~u < 0 for all v in the other (let's call this set R^−_{~u}). Note that ∩{E_{~u} : u ∈ U} is empty, since we have assumed at the beginning that U affinely spans R^d, and thus ~U spans R^{d+1} (as vectors). The set of all such equators for all ~u divides S^d into distinct open subsets, which form the top-dimensional (open) cells of a cell complex. More explicitly, each cell F (not necessarily top-dimensional and possibly empty) of this complex has a presentation as ∩{A_{~u} : u ∈ U}, where each A_{~u} is one of {E_{~u}, R^+_{~u}, R^−_{~u}}. If the cell is nonempty, then this presentation is unique and we assign the PF label f_F :⊆ U → [2] defined by
f_F(u) = 1 if A_{~u} = R^+_{~u}, f_F(u) = 0 if A_{~u} = R^−_{~u}, and f_F(u) undefined otherwise.
It is easily seen that any point in F is ν(~H) for some oriented affine hyperplane H such that U \ dom f_F lies on H, f_F^{−1}(1) lies on the positive side of H, and f_F^{−1}(0) lies on the negative side of H.
If F = ∅ is the empty cell, then we assign the empty function f∅ = † as its PF label.
Figure 14 illustrates this construction.
We claim this labeling gives a minimal cocellular resolution X of linthrU . We show this via
Lemma 2.50.
Suppose f is the empty function. Then X_{⊇f} = X, which is a complex with nontrivial reduced cohomology only at dimension d, where the rank of its cohomology is 1 (case 2 of Lemma 2.50).
Now suppose f is nonempty. Then X_{⊇f} = ∩_{u:f(u)=1} R^+_{~u} ∩ ∩_{u:f(u)=0} R^−_{~u} is an intersection of open half-spheres. It is either empty (case 3 of Lemma 2.50) or is homeomorphic, along with its boundary in S^d, to the open d-disk and its boundary (D^d, ∂D^d), which has cohomology only at degree d because D^d/∂ ≅ S^d, where its rank is 1 (case 1 of Lemma 2.50).
Thus our claim is verified. X is in fact minimal, as each cell has a unique monomial label. We
have proved the following.
Theorem 2.68. The colabeled complex X constructed as above is a minimal cocellular resolution
of linthrU .
Definition 2.69. The colabeled complex X is called the coBall resolution of linthrU , written
coBallU .
¹For readers familiar with hyperplane arrangements: the cocellular resolution is essentially S^d intersecting the fan of the hyperplane arrangement associated with the matroid on U. The partial function labels on the resolution are induced from the covector labelings of the fan.
Figure 14: Cocellular resolution of linthr_U, where U = {u_0, u_1, u_2, u_3} = {(−1, −1), (−1, 1), (1, −1), (1, 1)}. Each equator is the orthogonal space to the vector ~u_i; the case for i = 3 is demonstrated in the figure. The text in monofont gives the PF labels of vertices visible in this projection. For example, 1..0 represents the PF that sends u_0 to 1 and u_3 to 0, and is undefined elsewhere.
X can be made a polytope in an intuitive way, by taking the convex hull of all vertices on X 2 .
In addition, we can obtain a minimal polyhedral cellular resolution Y by taking the polar of this
polytope and preserving the labels across polars. Then the empty cell of X becomes the unique
dimension d + 1 cell of Y . We call this cellular resolution Y the ball resolution, written BALLU
or BALL when U is implicitly understood, of linthrU .
For any partial function f :⊆ U → [2], define σf to be the function
+ if f(u) = 1
σf(u) = − if f(u) = 0
0 if f(u) is undefined.
Let L(U ) := {sgn(ψ U ) : ψ is a affine linear map} be the poset of covectors of U , under the
pointwise order 0 < +, −, with smallest element 0. Therefore the cocircuits (minimal covectors)
are the atoms of L(U ). Recall that L(U ) has a rank function defined as
(
rank(a) = 0
if a is a cocircuit;
rank(b) = 1 + rank(a) if b covers a.
and rank(0) = −1.
From the construction of coBallU , it should be apparent that each PF label is really a covector
(identified by σ). There is an isomorphism between L(U ) and the face poset of coBallU :
F ⊆ G ⇐⇒ fF ⊆ fG ⇐⇒ σfF ≤ σfG .
Noting that rank(σfF ) = dim F (and in particular, rank(σf∅ ) = −1 = dim ∅), this observation
yields the following via Proposition 2.28
2
As remarked in the previous footnote, X is the intersection of a polyhedral fan with the unit sphere. Instead of
intersecting the fan with a sphere, we can just truncate the fans to get a polytope.
29
2.3
Resolutions
Theorem 2.70. The Betti number βi,f (linthrU ) is nonzero only when σf is a covector of U . In
this case, βi,f (linthrU ) = 1 if i = d−rank(σf), and 0 otherwise. In particular, the top dimensional
Betti number of linthrU is βd+1,† (linthrU ) = 1.
Via Hochster’s dual formula, this means that the canonical suboplex of linthrU is a homological
(d + 1)-sphere.
Let’s look at the example of U = Md , so that linthrU = polythrdd = [{−1, 1}d → 2]. In
~ is an orthgonal basis for R2d , and thus the equators of coBallU are cut out by a set of
this case, U
pairwise orthogonal hyperplanes. In other words, under a change of coordinates, coBallU is just the
d
sphere S 2 −1 cut out by the coordinate hyperplanes, and therefore is combinatorially equivalent to
the complete suboplex of dimension 2d − 1, with the PF labels given by the 12 (sgn +1) function.
Its polar, BALLU , just recovers the cube resolution of [2d → 2] as discussed in the beginning of
Section 2.3.
When linthrU = polythrkd , notice a very natural embedding of the cocellular resolution
coBallMk ,→ coBallMk+1
as the section of coBallMk+1 cut out by the orthogonal complement of
{w
~ γ : γ is a monomial of degree k + 1},
where wγ is all 0s except at the position where γ appears in m≤k+1 , and ~· is the vectorization
function as above, appending a 1 at the end. This corresponds to the fact that a polynomial
threshold of degree k is just a polynomial threshold of degree k + 1 whose coefficents for degree
k + 1 monomials are all zero.
This is in fact a specific case of a much more general phenomenon. Let’s call a subset P ⊆ Rn
openly convex if P is convex and
∀u, v ∈ P, ∃τ > 1 : τ u + (1 − τ )v ∈ P.
Examples include any open convex set in Rn , any affine subspace of Rn , and the intersections of
any of the former and any of the latter. Indeed, if P and Q are both openly convex, then, P ∩ Q
is convex: for any u, v ∈ P ∩ Q, if the definition of openly convex for P yields τ = ρ > 1 and that
for Q yields τ = ρ0 > 1, then we may take τ = min(ρ, ρ0 ) for P ∩ Q, which works because P ∩ Q is
convex.
An openly convex set is exactly one which is convex and, within its affine span, is equal to the
interior of its closure.
Our proof that coBallU is a minimal cocellular resolution can be refined to show the following
Theorem 2.71. Let U ⊆ Rn be a point set that affinely spans Rn . Let L be an openly convex cone
of the vector space Rn+1 . Define Y to be the intersection of X = coBallU with L, such that each
nonempty open cell Y ∩ F̊ of Y gets the same exponent label λY (Y ∩ F̊ ) = λX (F̊ ) as the open cell
F̊ of X, and Y has the empty cell ∅ with monomial label xλY (∅) = 1 ∈ S iff L is vector subspace.
Then Y is a minimal cocellular resolution of hxλY (F̊ ) : F̊ is a top dimensional cell in Y i.
We will need a technical lemma, distinguishing the case when L is a vector subspace and when
it is not.
Lemma 2.72. Let L be an openly convex cone in Rq . Then either L equals its vector span, or
there is an open coordinate halfspace (i.e. {v ∈ Rq : vj > 0} or {v ∈ Rq : vj < 0}) that contains L.
30
2
THEORY
Proof. See Appendix A.
Proof of Theorem 2.71. It suffices to show that Y⊇f for any PF f satisfies one of the three conditions
of Lemma 2.29, and the minimality would follow from the uniqueness of labels.
If L is a vector subspace, Y⊇† = Y is a sphere (condition 2). Otherwise, Y⊇† = Y is contained in
an open halfspace H, and thus by projection from the origin onto an affine subspace ∂H 0 parallel
to ∂H, Y is homeomorphic to ∂H 0 ∩ L, an openly convex set of dimension dim Y (condition 1).
Whether L is a vector space, for any nonempty PF f, Y⊇f is the intersection of the unit sphere
(the underlying space of coBallMd ), L, and a number of open halfspaces, and thus the intersection
of openly convex sets contained in an open halfspace. This is again homeomorphic to an openly
convex set of dimension dim Y via projection to an affine subspace, if it is not empty. (condition
1/condition 3).
Linear functionals on Md are bijective with real functions on the boolean d-cube {−1, 1}d .
Therefore the cone L represents a cone of real functions when U = {−1, 1}d , and Y is a minimal
cellular resolution of the threshold functions of L. In other words, we have the following corollary
Corollary 2.73. Let C ⊆ [{−1, 1}d → 2] be the class obtained by strongly thresholding an openly
convex cone L of real functions {−1, 1}d → R, i.e. C = { 21 (sgn(f )+1) : f ∈ L, ∀u ∈ {−1, 1}d [f (u) 6=
0]}. Then C has a minimal cocellular resolution of dimension equal to the dimension of the affine
hull of L.
This corollary specializes to the case when L is any vector subspace of boolean functions. The
examples explored in the beginning of this section took L as degree bounded polynomials. We
make the following formal definitions.
Definition 2.74. Let L be a cone of real functions on {−1, 1}d . Suppose C = { 12 (sgn(f ) + 1) : f ∈
L, f (u) 6= 0, ∀u ∈ {−1, 1}d }. We say C is the strongly thresholded class of L, written C = thr L.
We call C thresholded convex if L is openly convex. We call C thresholded linear if L is linear.
While this corollary produces minimal cocellular resolutions for a large class of functions, it does
not apply to all classes. For example, the corollary shows that the Betti numbers of thresholded
convex classes are either 0 or 1, but as we show in the next section, the linear functionals over finite
fields have very large Betti numbers, so cannot be a thresholded convex class.
2.3.6
Linear Functionals over Finite Fields
d
Let p be a prime power. Define linfunpd ∼
= Fd∗
p ⊆ [p → p] to be the class of linear functionals
p
over the d-dimensional vector space [p]d ∼
= Fdp . We will refer to elements of linfund as covectors.
Denote the affine span of a set of elements g1 , . . . , gk by Lg1 , . . . , gk M. In this section we construct
the minimal resolution of linfundp .
Fix a linear order C on Fd∗
p . We construct as follows a DAG Td of depth d + 1 (with levels 1, ...,
d + 1), whose nodes are of the form (f, V ) where V is an affine subspace of the dual space Fd∗
p and
f is the C-least element of V . (Therefore if any affine subspace appears in a node, then it appears
only in that node — indeed, every affine subspace appears in exactly one node.)
There is only one node at level 1, which we call the root. This is the C-least element along with
V = Fd∗
p .
For any node (f, V ) where dim V > 1, we add as its children the nodes (g, W ) where W is a
codimension-1 affine subspace of V not containing f , and g is the C-least element of W . By simple
induction, one sees that all affine subspaces appearing on level i of Td has dimension d − i. In
31
2.3
Resolutions
particular, the nodes at level d + 1, the leaf nodes, are all of the form (f, {f }). This completes the
construction of Td .
For each path (f1 , V1 = Fd∗
p ), (f2 , V2 ), . . . , (fd+1 , Vd+1 ) from the root to a leaf node, we have by
construction f1 C f2 C · · · C fd C fd+1 . Therefore, every such path is unique.
Lemma 2.75. Any node (f, V ) at level i of Td has exactly pd−i − 1 children.
Proof. The children of (f, V ) are in bijection with the set of codimension-1 affine subspaces of V not
containing f . Each nonzero covector in V ∗ defines a vector hyperplane in V , whose cosets determine
p parallel affine hyperplanes. Exactly one of these affine hyperplanes contain f . Covectors f and g
in V ∗ determine the same hyperplane if f = cg for some constant c ∈ Fdp , c 6= 0. As remarked above,
d−i
V has dimension d − i, and so has cardinality pd−i . Therefore there are p p−1−1 (p − 1) = pd−i − 1
affine hyperplanes of V not containing f .
Q
d−i − 1) maximal paths in the DAG T . (When d = 0,
Lemma 2.76. There are Up (d) := d−1
d
i=0 (p
Up (d) := 1.)
Proof. Immediately follows from the previous lemma.
For example, suppose p = 2 and C is the right-to-left lexicographic order on the covectors:
0 · · · 00 C 0 · · · 01 C 0 · · · 10 C · · · C 1 · · · 10 C 1 · · · 11, where a covector (x1 , . . . , xd ) 7→ a1 x1 + · · · ad xd
is abbreviated as the bitstring a1 a2 · · · ad . When d = 3, the root is (000, F3∗
2 ). There are then seven
dimension 3 − 1 = 2 affine planes in F3p not containing 000, so seven nodes at level 1:
• Covector 001 for all affine planes containing 001, which are
{001, 111, 101, 011}, {001, 111, 100, 010}, {001, 101, 110, 010}, {001, 100, 110, 011}.
• There are 3 other affine planes, which correspond to the following nodes
1. (100, {111, 110, 100, 101})
2. (010, {111, 011, 010, 110})
3. (010, {010, 011, 101, 100})
Or, suppose we choose to order covectors by the number of 1s and then lexicographically,
0 · · · 000 ≺ 0 · · · 001 ≺ 0 · · · 010 ≺ 0 · · · 100 ≺ · · · 10 · · · 000 ≺ 0 · · · 011 ≺ 0 · · · 101 ≺ 0 · · · 110 ≺ · · · ≺
1 · · · 11. Then the DAG will be exactly the same as above.
Once we have built such a DAG Td , we can construct the corresponding cellular resolution X
?
3 The cellular resolution will be simplicial and pure of dimension d. Its vertex set is
of Ilinfun
p.
d
linthrp ∼
= Fd∗ ; each vertex has itself as the PF label. For each maximal path
d
p
(f1 , V1 = Fd∗
p ), (f2 , V2 ), . . . , (fd+1 , Vd+1 ),
we add a top simplex (of dimension d) with the vertex set {f1 , f2 , . . . , fd+1 }. As usual, the PF label
of a face F ⊆ linthrpd is just the intersection of the PF labels of its vertices.
Lemma 2.77. For an k-dimensional face F of X, its PF label is a linear functional on a vector
subspace of Fdp of dimension d − k.
3
If we treat Td as a poset, then the cellular resolution as a complex is a quotient of the order complex of Td by
identifying (f, V ) with (g, W ) iff f = g.
32
2
THEORY
Proof. F has kT+ 1 vertices, f0 , . . . , fk . Their intersection is the partial function defined on the
subspace W = ki=1 ker(fi − fi−1 ), and equals the restriction of fi to W for any i. The affine independence of {f0 , . . . , fk } implies the vector independence of {(f1 − f0 ), . . . , (fk − fk−1 )}. Therefore
W has codimension k, as desired.
Now, to show that X is a minimal resolution, we will require the following lemma.
Lemma 2.78. Fix any linear order C on Fd∗
p . Suppose (g1 , g2 , . . . , gk ) is a sequence of covectors
such that gi is the C-least element of the affine space generated by (gi , gi+1 , . . . , gk ). Then there is
a maximal path in Td containing (g1 , . . . , gk ) as a subsequence.
Proof. We proceed by induction on k. When k = 0, the claim is vacuously true. Assume k ≥ 1. We
will show that there is a path ℘ from the root to a node (g1 , V ) with V containing W = Lg1 , . . . , gk M,
the affine subspace generated by g1 , . . . , gk . Then we apply the induction hypothesis with Fd∗
p
replaced by W and (g1 , g2 , . . . , gk ) replaced by (g2 , g3 , . . . , gk ) to obtain a path from (g1 , V ) to a
leaf node, which would give us the desired result.
The first node of ℘ is of course the root. We maintain the invariant that each node (f, W 0 )
added to ℘ so far satisfies W 0 ⊇ W . If we have added the nodes (f1 , V1 ), (f2 , V2 ), . . . , (fj , Vj ) in p,
then either fj = g1 , in which case we are done, or Vj is strictly larger than W . In the latter case,
there exists an affine subspace Vj+1 of Vj with W ⊆ Vj+1 ⊂ Vj and dim Vj+1 = dim Vj − 1, and
we add (fj+1 , Vj+1 ) to the path ℘, with fj+1 being the C-least element of Vj+1 . This process must
terminate because the dimension of Vj decreases with j, and when it does, we must have fj = g1 ,
and the path ℘ constructed will satisfy our condition.
Theorem 2.79. X is a d-dimensional complex that minimally resolves linfunpd .
Proof. To prove that X is a cellular resolution, it suffices to show that X⊇f for any partial function
f :⊆ [pd → p] is acyclic. The set of f ∈ Fd∗
p extending f is an affine subspace W . Our strategy is to
prove that if {g1 , . . . , gk } generates an affine subspace of W and is a face of X, then {g, g1 , . . . , gk }
is also a face of X, where g is the C-least element of W . This would show that X⊇f is contractible
and thus acyclic. But this is precisely the content of Lemma 2.78: Any such {g1 , . . . , gk } can be
assumed to be in the order induced by being a subsequence of a maximal path of Td . This means in
particular that gi is the least element of Lgi , . . . , gk M. A fortiori, {g, g1 , g2 , . . . , gk } must also satisfy
the same condition because g is the least element of W . Therefore Lemma 2.78 applies, implying
that {g, g1 , g2 , . . . , gk } is a face of X, and X is a cellular resolution as desired.
The resolution is minimal since the PF label of any face P is a covector defined on a strictly
larger subspace than those of its subfaces.
Definition 2.80. The resolution X is called the flag resolution, FLAGpd , of linfunpd with respect
to C.
Theorem 2.81. The Betti number βi,g (linfunpd ) is nonzero only when g is a linear functional
defined on a subspace of Fdp , and i = d − dim dom g. In this case, it is equal to Up (i) (as defined in
Lemma 2.76).
Proof. All the cells in the resolution X have exponent labels of the form Γg as stated in the theorem,
and by Lemma 2.77, such cells must have dimension i = d − dim(dom g). It remains to verify that
the number B of cells with PF label g is Up (i).
The subset of Fd∗
p that extends g is an affine subspace W of dimension d − dim dom g = i. The
number B is the number of sequences (g0 , . . . , gi ) ∈ W i+1 such that gj is the C-least element of
Lgj , . . . , gi M for each j, and such that Lg0 , . . . , gi M = W . If we treat W ∼
= Fi∗
p and construct Ti on W ,
then B is exactly the number of maximal paths of Ti , which is Up (i) by Lemma 2.76.
33
2.3
Resolutions
1011
..11
1..1
...1
1101
.1.1
110.
1100
0111
Figure 15: An example of a nonpure minimal resolution of a boolean function class ⊆ [4 → 2]. The labels
are PF labels. For example, ..11 represents a partial function sending 2 and 3 to 1, and undefined elsewhere.
1000
0010
.0.0
.00.
..00
1...
00..
..1.
1111
0001
...1
0..0
0.0.
0100
.1..
Figure 16: Another example of nonpure minimal resolution of a boolean function class ⊆ [4 → 2]. Only the
vertices and edges are labeled (with PF labels). Note that the maximal cells are the three triangles incident
on the vertex 1111 and the tetrahedron not incident on 1111. They have the same PF label, the empty
function †. Therefore it is possible for a boolean function class to have nonzero Betti numbers in different
dimensions for the same degree.
As discussed in Section 2.3.5, we have the following corollary because the Betti numbers of
linfun2d can be greater than 1.
Corollary 2.82. linfun2d is not a thresholded convex class.
2.3.7
Abnormal Resolutions
All of the classes exhibited above have pure minimal resolutions, but this need not be the case
in general. Figure 15 gives an example of a nonpure minimal resolution of a class ⊆ [4 → 2]. It
consists of a segment connected to a (solid) triangle. This example can be generalized as follows.
Let C ⊆ [n + 1 → 2] be {¬δi = Ind(u 6= i) : i ∈ [n]} ∪ {g := I(u 6∈ {n − 1, n})}. Let X be the
simplicial complex on vertex set C, consisting of an (n − 1)-dimensional simplex on {¬δi : i ∈ [n]},
and a segment attaching ¬δn−1 to g. With the natural PF labels, X minimally resolves C and is
nonpure.
Definition 2.83. We say a class C has pure Betti numbers if for every PF f, βi,f (C) 6= 0 for at
most one i.
All of the classes above discussed in the previous sections have pure Betti numbers. But this
is not true in general. Figure 16 shows a minimal resolution of a class D ⊆ [4 → 2] that has three
34
2
THEORY
triangles and one tetrahedron as its top cells, and they all have the empty function as the PF
label. Thus β2,† (D) = β3,† (D) = 1. This example can be generalized as follows. Let D ⊆ [n → 2]
be {δi : i ∈ [n]} ∪ {1}. Let X be the simplicial complex on vertex set D, consisting of an (n − 1)dimensional simplex on {δi : i ∈ [n]} and triangles on each triple {δi , δj , 1} for each i 6= j. With the
natural PF labels, X is a minimal cellular resolution of D, and the cells with PF label † are exactly
the (n − 1)-dimensional simplex and each of the triangles incident on 1. Thus the gap between
the highest nontrivial Betti number and the lowest nontrivial Betti number for the same partial
function can be linear in the size of the input space.
2.4
Partial Function Classes
Most of the definitions we made actually apply almost verbatim to partial function classes C ⊆ [⊆
n → 2]. Here we list the corresponding definitions for PF classes and the propositions that hold
PF classes as well as for function classes. We omit the proofs as they are similar to the ones given
before.
Definition 2.84. Let C ⊆ [⊆ n → m]. The canonical suboplex SC of C is the subcomplex of the
complete (n − 1)-dimensional m-suboplex consisting of all cells Ff where f has an extension in C.
The canonical base ring S of C is the same as the canonical base ring of [n → m]. The
Stanley-Reisner ideal IC of C is defined as the Stanley-Reisner ideal of SC with respect to S.
The canonical ideal of C is the dual ideal IC? of its Stanley-Reisner ideal. It is generated by
{xΓf : f ∈ C}, and generated minimally by {xΓf : f ∈ C is maximal}.
A Betti number βi,b (IC? ) is nonzero only if b = Γf for some partial function f with extension in
C. Thus we define βi,f (C) = βi,Γf (IC? ).
Proposition 2.85 (Counterpart of Proposition 2.38). Let C ⊆ [⊆ n → m]. Each minimal generator
of IC is either 1) xu,i xu,j for some u ∈ [n], i 6= j ∈ [m], or 2) xgraph f for some partial function
f :⊆ [n] → [m] such that f has no extension in C, but every proper restriction of f does. In addition,
the set of all such monomials is exactly the set of minimal generators of IC .
Definition 2.86. Let C ⊆ [⊆ n → m]. A labeled complex (X, λ) is a (co)cellular resolution of
partial class C if (X, λ) is a (co)cellular resolution of S/IC? .
Proposition 2.87 (Counterpart of Lemma 2.48). If (X, λ) is a cellular resolution
T of a partial class
C ⊆ [⊆ n → m], then it is PF-labeled as well. The PF label λ(F ) of a face F is V ∈F fV .
Lemma 2.49 and Lemma 2.50 give conditions on when a PF-(co)labeled complex is a resolution,
and they apply verbatim to resolutions of partial classes as well. Proposition 2.51 and Corollary 2.52
hold as well when C is replaced by a partial class C, but we will not use them in the sequel.
2.5
Combining Classes
We first give a few propositions on obtaining resolutions of a combination of two classes C and D
from resolutions of C and D.
Proposition 2.88. Let I and J be two ideals of the same polynomial ring S. If (XI , λI ) is a
polyhedral cellular resolution of S/I, and (XJ , λJ ) is a cellular resolution of S/J, then the join (XI ?
XJ , λI ?λJ ) is a cellular resolution of S/(I +J), where we define λI ?λJ (F ?G) := lcm(λI (F ), λJ (G)).
Proof. Let a be an exponent sequence. (XI ? XJ )a is precisely (XI )a ? (XJ )a , which is acyclic
when both (XI )a and (XJ )a are acyclic. So XI ? XJ is a resolution.
The 0-cells of XI ? XJ are just the 0-cells of XI union the 0-cells of XJ , with the same labels,
so XI ? XJ resolves I + J.
35
2.5
Combining Classes
Note however that in general XI ? XJ is not minimal even when XI and XJ both are.
Proposition 2.89. Let C and D be classes ⊆ [m → n]. If (XC , λC ) is a cellular resolution of C,
and (XD , λD ) is a cellular resolution of D, then the join (XC ? XD , λC ? λD ) is a cellular resolution
of C ∪ D. If µC is the PF labeling function of XC and µD is the PF labeling function of XD , then
the PF labeling function of XC ? XD is given by
µC ? µD (F ? G) := µC (F ) ∩ µD (G).
Proof. By the above proposition, (XC ?XD , λC ?λD ) resolves IC? +ID? , which has minimal generators
{xΓf : f ∈ C ∪ D}. The characterization of µC ? µD follows from the the definition of λC ? λD .
We will need to examine the “difference” between the Betti numbers of I + J and those of I
and J. The following lemma gives a topological characterization of this difference.
Lemma 2.90. Let I and J be two monomial ideals of the same polynomial ring S. Suppose (XI , λI )
is a polyhedral cellular resolution of S/I, and (XJ , λJ ) is a cellular resolution of S/J. Label XI ×XJ
by the function λI × λJ : F × G 7→ lcm(λI (F ), λJ (G)) for nonempty cells F and G; the empty cell
has exponent label 0. If σ is an exponent sequence, then there is a long exact sequence
e i ((XI × XJ )≺σ ) → H
e i ((XI )≺σ ) ⊕ H
e i ((XJ )≺σ ) → H
e i ((XI ? XJ )≺σ ) → · · ·
··· → H
where i decreases toward the right.
Proof. One can check that (XI ? XJ )≺σ is the homotopy pushout of (XI )≺σ ← (XI × XJ )≺σ →
(XJ )≺σ . The lemma then follows from the homotopy pushout exact sequence.
We also have an algebraic version.
Lemma 2.91. Let I and J be two monomial ideals of the same polynomial ring S. For each
exponent sequence a, there is a long exact sequence
· · · → kβi,a (I∩J) → kβi,a (I) ⊕ kβi,a (J) → kβi,a (I+J) → kβi−1,a (I∩J) → · · ·
Proof. We have a short exact sequence
0 → I ∩ J → I ⊕ J → I + J → 0.
By Proposition 2.10, we can apply Tor(−, k) to obtain the long exact sequence as stated.
The ideal I ∩ J is generated by {lcm(mi , mj ) : mi ∈ mingen(I), mj ∈ mingen(J)}. When I = IC?
and J = ID? , I ∩ J = hxΓ(f ∩g) : f ∈ C, g ∈ D}. Define the Cartesian Intersection C D of C and D
? .
to be {f ∩ g : f ∈ C, g ∈ D}. This is a class of partial functions, and we can check IC? ∩ ID? = ICD
So the above lemma can be restated as follows
Lemma 2.92. Let C, D ⊆ [n → m]. For each PF f :⊆ [n] → [m], there is a long exact sequence
· · · → kβi,f (CD) → kβi,f (C) ⊕ kβi,f (D) → kβi,f (C∪D) → kβi−1,f (CD) → · · ·
Next, we seek to produce from cellular resolutions of C and D a cellular resolution of the Cartesian Union C q D of two classes C ⊆ [U → V ], D ⊆ [U 0 → V 0 ], defined as the class with elements
f q g : U t U 0 → V t V 0 for f ∈ C, g ∈ D, defined by
(
f (u) if u ∈ U
f q g(u) =
g(u) else.
We start with the general version for ideals, and specialize to function classes.
36
2
THEORY
Proposition 2.93. Let I be an ideal of polynomial ring S and let J be an ideal of polynomial ring
T such that S and T share no variables. If (XI , λI ) resolves S/I and (XJ , λJ ) resolves S/J, then
(XI × XJ , λI q λJ ) resolves the ideal S/(I ⊗ J) with I ⊗ J := (I ⊗ T )(S ⊗ J) in the ring S ⊗ T ,
where λI q λJ (F × G) = λI (F )λJ (G) for any cells F ∈ XI and G ∈ XJ . (Here, tensor ⊗ is over
base ring k). Furthermore, if XI and XJ are both minimal then (XI × XJ , λI q λJ ) is minimal as
well.
Proof. Let ω 0 , . . . , ω p−1 be minimal monomial generators of I and let γ 0 , . . . , γ q−1 be minimal
monomial generators of J. The ideal I ⊗ J is generated by {ω i γ j : (i, j) ∈ [p] × [q]}, which are
furthermore minimal because {ω i }i and {γ j }j are respectively minimal, and S and T share no
variables. The complex XI × XJ has vertices Vi × Vj0 for vertices Vi ∈ XI and Vj ∈ XJ . If Vi has
label ω i and Vj0 has label γ j , then Vi × Vj0 has label ω i γ j via λI q λJ . Thus XI × XJ resolves
S/(I ⊗ J), if it is a resolution.
And in fact, it is, because for any exponent sequence a wrt S and exponent sequence b wrt T ,
(XI × XJ )aqb = (XI )a × (XJ )b , which is acyclic (Here a q b is the exponent sequence whose
values on variables in S come from a and whose values on variables in T come from b).
The faces of a cell F × G ∈ XI × XJ are
{F × G0 : G0 ⊆ ∂G, dim G0 = dim G − 1} ∪ {F 0 × G : F 0 ⊆ ∂F, dim F 0 = dim F − 1}.
If λI (F ) 6= λI (F 0 ) for any F 0 ⊂ F and λJ (G) 6= λJ (G0 ) for any G0 ⊂ G, then λI q λJ (F × G) =
λI (F )λJ (G) is not equal to any of λI (F 0 )λJ (G) or λI (F )λJ (G0 ) for any of the above F 0 or G0 .
Therefore (XI × XJ , λI q λJ ) is minimal if XI and XJ are.
Proposition 2.94. Let C ⊆ [U → V ] and D ⊆ [U 0 → V 0 ]. If (XC , λC ) is a cellular resolution of
C, and (XD , λD ) is a cellular resolution of D, then the product (XC × XD , λC q λD ) is a cellular
resolution of C q D. Furthermore, if XC and XD are both minimal then (XC × XD , λC q λD ) is
minimal as well.
Finally, we want to construct cellular resolutions of restrictions of a function class to a subset
of its input space.
Definition 2.95. Let C ⊆ [U → V ] and U 0 ⊆ U . Then the restriction class C U 0 ⊆ [U 0 → V ]
is defined as C U 0 = {f U 0 : f ∈ C}.
Again we start with a general algebraic version and then specialize to restriction classes.
Proposition 2.96. Let X := {xi : i ∈ [n]} and Y := {yj : j ∈ [m]} be disjoint sets of variables.
Let I be an ideal of polynomial ring S = k[X t Y]. Suppose (X, λ) resolves I. Then (X, λ Y )
resolves the ideal I/hxi − 1 : xi ∈ Xi in the ring k[Y], where
λ Y (F ) := λ(F )/hxi − 1 : xi ∈ Xi.
Essentially, if we just ignore all the variables in X then we still get a resolution, though most
of the time the resulting resolution is nonminimal even if the original resolution is.
Proof. The subcomplex (X, λ Y )ya for a monomial ya in k[Y] is exactly the subcomplex (X, λ)x1 ya ,
and hence acyclic.
One can easily see that the Stanley-Reisner ideal of C U 0 is IC /hxu,v − 1 : u 6∈ U 0 , v ∈ V i
and similarly the canonical ideal of C U 0 is IC? /hxu,v − 1 : u 6∈ U 0 , v ∈ V i (both ideals are of the
polynomial ring S[xu,v : u ∈ U 0 , v ∈ V ]). Then the following corollary is immediate.
37
2.5
Combining Classes
A
B
zn−1
S
Figure 17: S as the union A ∪ B.
Proposition 2.97. Let C ⊆ [U → V ] and U 0 ⊆ U . If (X, λ) is a cellular resolution of C, then
(X, λ U 0 × V ) resolves C U 0 , where λ U 0 × V := λ {xu,v : u ∈ U 0 , v ∈ V }. Similarly, if C is
an algebraic free resolution of IC? , then C U 0 := C/hxu,v − 1 : u 6∈ U 0 , v ∈ V i is an algebraic free
?
resolution of ICU
0.
Finally we show that there is a series of exact sequences relating the Betti numbers of C ⊆ [n → 2]
to the Betti numbers of C U ⊆ [n]. All of the below homology are with respect to k.
Definition 2.98. Let C ⊆ [n → m] and f :⊆ [n] → [m]. The class C filtered by f, C f, is
{f \ f : f ⊆ f ∈ C}. For any U ⊆ [n] × [m] that forms the graph of a partial function f, we also
write C U = C f.
It should be immediate that SCU = linkU SC , so that by Hochster’s dual formula,
e i−1 (SCf ).
e i−1 (linkgraph f SC ) = dimk H
βi,f (C) = dimk H
Consider the standard embedding of the complete (n − 1)-dimensional suboplex S1n−1 ∼
= {z ∈
: kzk1 = 1}. Then SC ⊆ S1n−1 is the union of two open sets: A := SC ∩ {z ∈ Rn : |zn−1 | < 2/3}
and B := SC ∩ {z ∈ Rn : |zn−1 | > 1/3} (see Figure 17). If all functions in C sends n − 1 to
the same output, then B is homotopy equivalent to a single point; otherwise B contracts to 2
points. A deformation retracts onto SC[n−1] . The intersection A ∩ B deformation retracts to the
disjoint union of two spaces, respectively homeomorphic to the links of SC with respect to the
vertices (n − 1, 0), (n − 1, 1) ∈ [n] × [2]. We therefore have the following long exact sequence due to
Mayer-Vietoris
Rn
e i+1 (SC ) → H
e i (SC(n−1,0) ) ⊕ H
e i (SC(n−1,1) ) → H
e i (SC[n−1] ) ⊕ H
e i (B) → H
e i (SC ) → · · ·
··· → H
If every function f ∈ C has f (n − 1) = 1, then C (n − 1, 1) = C [n − 1]; a similar thing
happens if all f (n − 1) = 0. So suppose C {n − 1} = [2]. Then B ' ••, and neither C (n − 1, 0)
nor C (n − 1, 1) are empty. Therefore the long exact sequence simplifies down to
e i+1 (SC ) → H
e i (SC(n−1,0) ) ⊕ H
e i (SC(n−1,1) ) → H
e i (SC[n−1] ) ⊕ ZI(i=0) → H
e i (SC ) → · · ·
··· → H
Note that for any simplicial complex ∆, the link and restriction operations commute:
linkτ (∆ σ) = (linkτ ∆) σ.
38
2
THEORY
Correspondingly, for function class C, filtering and restricting commute:
C U V = C V U.
Let U := graph f for some f :⊆ [n−1] → [2] and denote U0 := U ∪{(n−1, 0)}, U1 := U ∪{(n−1, 1)}.
The above long exact sequence generalizes to the following, by replacing C with C U and applying
the commutativity above:
e i+1 (SCU ) → H
e i (SCU ) ⊕ H
e i (SCU ) → H
e i (SC[n−1]U ) ⊕ ZI(i=0) → H
e i (SCU ) → · · ·
··· → H
0
1
This yields via Hochster’s formulas the following sequence relating the Betti numbers of C and
C [n − 1].
Theorem 2.99. Let C ⊆ [n → 2], f :⊆ [n − 1] → [2], and f0 := f ∪ (n − 1 7→ 0), f1 := f ∪ (n − 1 7→ 1).
We have an exact sequence
· · · → kβi+1,f (C) → kβi,f0 (C)+βi,f1 (C) → kβi,f (C[n−1])+I(i=−1) → kβi,f (C) → · · ·
Using Theorem 2.99 we can recapitulate the following fact about deletion in oriented matroids.
Below we write V \ u for V \ {u} in the interest of clarity.
Corollary 2.100. Let V be a point configuration with affine span Rd and u ∈ V . Suppose V \ u
has affine span Rd−e , where e is either 0 or 1. Then τ ∈ {−, 0, +}V \u is a covector of rank r of
V \ u iff one of the following is true:
1. τ− := τ ∪ (u 7→ −) is a covector of rank r + e of V .
2. τ+ := τ ∪ (u 7→ +) is a covector of rank r + e of V .
3. τ0 := τ ∪ (u 7→ 0) is a covector of rank r + e of V , but τ− and τ+ are not covectors of V .
Proof. Let C = linthrV and D = linthrV \u = C (V \ u). Write f := σ −1 τ, f0 := σ −1 τ0 , f+ :=
σ −1 τ+ , f− := σ −1 τ− . βi,f (C) = 1 iff σf is a covector of V of rank d − i by Theorem 2.70.
If Item 1 is true, but not Item 2, then τ0 cannot be a covector of V (or else subtracting a small
multiple of τ− from τ0 yields τ+ ). As C and D both have pure Betti numbers, we have an exact
sequence
0 → kβj,f− (C) → kβj,f (D) → 0
where j = d − rank τ− . This yields that τ is a covector of rank d − e − j = rank τ− − e. The case
that Item 2 is true but not Item 1 is similar.
If Item 1 and Item 2 are both true, then τ0 must also be a covector. Furthermore, it must be
the case that rank τ− = rank τ+ = rank τ0 + 1. Again as C and D have pure Betti numbers, we have
an exact sequence
0 → kβj+1,f0 (C) → kβj,f− (C)+βj,f+ (C) → kβj,f (D) → 0
where j = d − rank τ− . Thus τ is a covector of rank d − e − j = rank τ− − e.
Finally, if Item 3 is true, we immediately have an exact sequence
0 → kβj,f (D) → kβj,f0 (C) → 0
with j = d − rank τ0 , so τ is a covector of rank d − e − j = rank τ0 − e.
In general, if C ⊆ [n → 2] and C [n − 1] are known to have pure Betti numbers, then
Theorem 2.99 can be used to deduce the Betti numbers of C [n − 1] directly from those of C. This
strategy is employed in the proof of Corollary 3.32 in a later section. It is an open problem to
characterize when a class has pure Betti numbers.
39
3
Applications
3.1
Dimension Theory
In this section we investigate the relationships between VC dimension and other algebraic quantities
derived from the Stanley-Reisner ideal and the canonical ideal.
Definition 3.1. Suppose C ⊆ [n → 2]. We say C shatters a subset U ⊆ [n] if C U = [U → 2].
The VC dimension of C, dimVC C, is defined as the largest k such that there is a subset U ⊆ [n]
of size k that is shattered by C. The VC radius of C, radVC C, is defined as the largest k such that
all subsets of [n] of size k are shattered by C.
The VC dimension is a very important quantity in statistical and computational learning theory. For example, suppose we can obtain data points (u, f (u)) by sampling from some unknown
distribution u ∼ P, where f is an unknown function known to be a member of a class C. Then
the number of samples required to learn the identity of f approximately with high probability is
O(dimVC C) [10]. Simultaneous ideas also popped up in model theory [17]. In this learning theory perspective, an extenture f of C is what is called a minimal nonrealizable sample: there is no
function in C that realizes the input/output pairs of f, but there is such functions for each proper
subsamples (i.e. restrictions) of f.
Note that C shatters U iff ICU = IC ⊗S S/JU equals hxu,0 xu,1 : u ∈ U i as an ideal of S/JU , where
JU = hxū,v − 1 : ū 6∈ U, v ∈ V i. In other words, every nonfunctional minimal monomial generator
of IC gets killed when modding out by JU ; so C shatters U iff every extenture of C is defined on a
point outside U . Therefore if we choose U to be any set with |U | < min{| dom f| : f ∈ ex C}, then C
shatters U . Since dom f is not shattered by C if f is any extenture, this means that
Theorem 3.2. For any C ⊂ [n → 2] not equal to the whole class [n → 2],
radVC C = min{| dom f| : f ∈ ex C} − 1.
Define the collapsing map π : k[xu,0 , xu,1 : u ∈ [n]] → k[xu : u ∈ [n]] by π(xu,i ) = xu . If
U ⊆ [n] is shattered by C, then certainly all subsets of U are also shattered by C. Thus the collection
of shattered sets form an abstract simplicial complex, called the shatter complex SHC of C.
Theorem 3.3. Let I be the the Stanley-Reisner ideal of the shatter complex SHC in the ring
S 0 = k[xu : u ∈ [n]]. Then π∗ IC = I + hx2u : u ∈ [n]i. Equivalently, U ∈ SHC iff xU 6∈ π∗ IC .
Proof. U is shattered by C iff for every f : U → [2], f has an extension in C, iff xgraph f 6∈ IC , ∀f :
U → [2], iff xU 6∈ π∗ IC .
We immediately have the following consequence.
Theorem 3.4. dimVC C = max{|U | : xU 6∈ π∗ IC }.
Recall the definition of projective dimension [11].
Definition 3.5. The length of a minimal resolution of a module M is the called the projective
dimension, projdim M , of M .
We make the following definitions in the setting of function classes.
Definition 3.6. For any C ⊆ [n → 2], the homological dimension dimh C is defined as the
projective dimension of IC? , the length of the minimal resolution of IC? . The Stanley-Reisner
dimension dimSR C is defined as the projective dimension of the Stanley-Reisner ring S/IC .
40
3
APPLICATIONS
[n → 2]
{f }
deltan
monconjd
conjd
linthrd
polythrkd
linfun2d
dimh
n
0
n−1
d
d+1
d+1
Σk0
d
dimSR
n
n
n+1
2d+1 − d − 1
2d+1 − d − 1
2d+1 − d − 1
2d+1 − Σk0 = 2d + Σdk+1
2d+1 − d − 1
dimVC
n
0
1
d [14]
d [14]
d + 1 [1]
Σk0 [1]
d
Table 1: Various notions of dimensions for boolean function classes investigated in this work. Σkj :=
Pk
d
i=j i . The VC dimensions without citation can be checked readily.
One can quickly verify the following lemma.
Lemma 3.7. If S/IC has a minimal cellular resolution X, then dimSR C = dim X + 1. If C has a
minimal cellular resolution X, then dimh C = dim X. The same is true for cocellular resolutions
Y if we replace dim X with the difference between the dimension of a top cell in Y and that of a
bottom cell in Y .
Recall the definition of regularity [11].
Definition 3.8. The regularity of a Nn -graded module M is
reg M = max{|b| − i : βi,b (M ) 6= 0},
where |b| =
Pn
j=1 bi .
There is a well known duality between regularity and projective dimension.
Proposition 3.9. [11, thm 5.59] Let I be a squarefree ideal. Then projdim(S/I) = reg(I ? ).
This implies that the Stanley-Reisner dimension of C is equal to the regularity of IC? . For each
minimal resolutions we have constructed, it should be apparent that max{|Γf| − i : βi,f (C) 6= 0}
occurs when i is maximal, and thus for such an f with smallest domain it can be computed as
#variables − | dom f| − dimh C. Altogether, by the results of Section 2.3, we can tabulate the
different dimensions for each class we looked at in this work in Table 1.
For all classes other than deltan , we see that dimh is very close to dimVC . We can in fact show
the former is always at least thte latter.
Proposition 3.10. Let C ⊆ [U → V ] and U 0 ⊆ U . Then dimh C ≥ dimh C U 0 .
Proof. Follows from Proposition 2.97.
Theorem 3.11. For any C ⊆ [n → 2], dimh C ≥ dimVC C.
Proof. Let U ⊆ [n] be the largest set shattered by C. We have by the above proposition that
dimh C ≥ dimh C U . But C U is the complete function class on U , which has the cube minimal
resolution of dimension |U |. Therefore dimh C ≥ |U | = dimVC C.
As a consequence, we have a bound on the number of minimal generators of an ideal I expressable as a canonical ideal of a class, courtesy of the Sauer-Shelah lemma [10].
41
3.1
Dimension Theory
Corollary 3.12. Suppose ideal I equals IC? for some C ⊆ [n → 2]. Then I is minimally generated
by a set no larger than O(nd ), where d is the projective dimension of I.
However, in contrast to VC dimension, note that homological dimension is not monotonic:
delta2d ⊆ conjd but the former has homological dimension 2d while the latter has homological
dimension d + 1. But if we know a class C ⊆ [n → 2] has dimh C = dimVC C, then C ⊆ D implies
dimh C ≤ dimh D by the monotonicity of VC dimension. We write this down as a corollary.
Corollary 3.13. Suppose C, D ⊆ [n → 2]. If dimh C = dimVC C, then C ⊆ D only if dimh C ≤ dimh D.
The method of restriction shows something more about the Betti numbers of C.
Theorem 3.14. C shatters U ⊆ [n] iff for every partial function f :⊆ U → [2], there is some
g :⊆ [n] → [2] extending f such that β|U |−| dom f|,g (C) ≥ 1.
Proof. The backward direction is clear when we consider all total function f : U → [2].
?
From any (algebraic) resolution F of IC? , we get an (algebraic) resolution F U of ICU
by ignoring
the variables {xu,v : u 6∈ U, v ∈ [2]}. If for some f :⊆ U → [2], for all g :⊆ [n] → [2] extending f,
β|U |−| dom f|,g C = 0, then there is the (|U | − | dom f|)th module of F U has no summand of degree
Γg, which violates the minimality of the cube resolution of C U .
There is also a characterization of shattering based on the Stanley-Reisner ideal of a class. We
first prove a trivial but important lemma.
e n (∆) 6= 0 iff ∆ is complete.4
Lemma 3.15. Suppose ∆ is an n-dimensional suboplex. Then H
Proof. The backward direction is clear.
Write S1n for the complete n-dimensional suboplex. Suppose ∆ 6= S1n . Choose an n-dimensional
simplex F not contained in ∆. Let ∇ be the complex formed by the n-dimensional simplices not
contained in ∆ or equal to F . By Mayer-Vietoris for simplicial complexes, we have a long exact
sequence
e n (∇) ⊕ H
e n (∆) → H
e n (∇ ∪ ∆) → H
e n−1 (∇ ∩ ∆) → · · ·
e n (∇ ∩ ∆) → H
··· → H
Now ∇ ∪ ∆ is just S1n \ int F , which is homeomorphic to an n-dimensional disk, and hence cone m (∇ ∪ ∆) = 0, ∀m > 0, and therefore H
e m (∇ ∩ ∆) ∼
e m (∇) ⊕ H
e m (∆), ∀m > 0.
tractible. Hence H
=H
e
e
e n (∆) = 0, as
But ∇ ∩ ∆ has dimension at most n − 1, so Hn (∇ ∩ ∆) = 0, implying Hn (∇) = H
desired.
Theorem 3.16. Let C ⊆ [n → 2]. Suppose U ⊆ [n] and let τ = U × [2]. Then C shatters U iff
β|U |−1,τ (IC ) 6= 0.
Proof. C shatters U iff C U = [U → 2]. The canonical suboplex of C U is SCU = SC τ . By the
e |U |−1 (SC τ ) 6= 0 iff H
e |U |−1 (SC τ ; k) 6= 0. By Hochster’s
above lemma, SC τ is complete iff H
formula (Proposition 2.12), the dimension of this reduced cohomology is exactly β|U |−1,τ (IC ).
The above yields another proof of the dominance of homological dimension over projective
dimension.
4
The proof given actually works as is when ∆ is any pure top dimensional subcomplex of a simplicial sphere.
42
3
APPLICATIONS
Second proof of Theorem 3.11. By Proposition 3.9, dimh C + 1 = projdim(S/IC? ) = reg(IC ). By
Theorem 3.16, the largest shattered set U must satisfy β|U |−1,U ×[2] (IC ) 6= 0, so by the definition of
regularity,
dimVC C = |U | = |U × [2]| − (|U | − 1) − 1 ≤ reg(IC ) − 1 = dimh C.
From the same regularity argument, we obtain a relation between homological dimension and
the maximal size of any minimal nonrealizable samples.
Theorem 3.17. For any minimal nonrealizable sample f of C, we have
|f| ≤ dimh C + 1.
Proof. Again, dimh C + 1 = reg(IC ). For each extenture (i.e. minimal nonrealizable sample) f,
xgraph f is a minimal generator of IC , so we have β0,graph f (IC ) = 1. Therefore,
|f| ≤ reg(IC ) = dimh C + 1.
It is easy to check that equality holds for C = monconj, linfun, polythr.
Combining Theorem 3.3, Theorem 3.14, and Theorem 3.16, we have the equivalence of three
algebraic conditions
Corollary 3.18. Let C ⊆ [n → 2] and U ⊆ [n]. The following are equivalent
1. C shatters U .
2. xU 6∈ π∗ IC .
3. ∀f :⊆ U → [2], there is some g :⊆ [n] → [2] extending f such that β|U |−| dom f |,g (C) ≥ 1.
4. β|U |−1,U ×[2] (IC ) 6= 0.
The above result together with Corollary 3.13 implies several algebraic conditions on situations
in which projective dimension of an ideal is monotone. Here we write down one of them.
Corollary 3.19. Let S = k[xu,i : u ∈ [n], i ∈ [2]]. Suppose ideals I and J of S are generated by
monomials of the form xΓf , f ∈ [n → 2]. If max{|U | : xU 6∈ π∗ I} = projdim I, then I ⊆ J implies
projdim I ≤ projdim J.
3.2
Cohen-Macaulayness
We can determine the Betti numbers of dimension 1 of any class of boolean functions. Let C ⊆
[n → 2]. Write C⊇f := {h ∈ C : h ⊇ f}. Then we have the following theorem.
Theorem 3.20. The 1-dimensional Betti numbers satisfy
(
1 if |C⊇f | = 2
β1,f (C) =
0 otherwise.
More precisely, let {f : f ∈ C} be a set of basis, each with degree Γf , and define
M
φ:
Sf IC? , φ(f ) = xΓf .
f ∈C
43
3.2
Cohen-Macaulayness
Let ω f,g = xΓf /xΓ(f ∩g) and ζf,g := ω f,g g − ω g,f f . Then ker φ has minimal generators
{ζf,g : C⊇f = {f, g}, f ≺ g},
where ≺ is lexicographic ordering (or any linear order for that matter).
We will use the following lemma from [6].
Lemma 3.21 ([6] Lemma 15.1 bis). ker φ is generated by {ζh,h0 : h, h0 ∈ C}.
Proof of Theorem 3.20. It’s clear that the latter claim implies the former claim about Betti numbers.
We first show that G = {ζf,g : C⊇f = {f, g}, f ≺ g} is a set of generators as claimed. By the
lemma above, it suffices to show that ζh,h0 for any two functions h ≺ h0 ∈ C can be expressed as a
linear combinations of G. Denote by kf − gk1 the L1 distance n − | dom(f ∩ g)|. We induct on the
size of the disagreement p = kh − h0 k1 . When p = 1, ζf,g ∈ G, so there’s nothing to prove. Suppose
the induction hypothesis is satisfied for p ≤ q and set p = q +1. Let f = h∩h0 . If C⊇f has size 2 then
we are done. So assume |C⊇f | ≥ 3 and let h00 be a function in C⊇f distinct from h or h00 . There must
be some u, u0 ∈ [n]\dom f such that h(u) = h00 (u) = ¬h0 (u) and h0 (u0 ) = h00 (u0 ) = ¬h(u0 ). Indeed, if
such a u does not exist, then h00 (v) = h0 (v) for all v ∈ [n] \ dom f, and thus h00 = h0 , a contradiction;
similarly, if u0 does not exist, we also derive a contradiction. Therefore kh − h00 k1 , kh0 − h00 k ≤ q,
and by induction hypothesis, ζh,h00 and ζh0 ,h00 are both expressible as linear combination of G, and
thus ζh,h0 = ζh,h00 − ζh00 ,h0 is also expressible this way. This proves that G is a set of generators.
For any partial f, if C⊇f = {f, g}, then the degree xΓf strand of φ is the map of vector spaces
kωf,g g ⊕ kωg,f f
→ kxΓf , (ω, ω 0 ) 7→ ω + ω 0
whose kernel is obviously kζf,g . Therefore, G must be a minimal set of generators.
Definition 3.22. Let C ⊆ [n → 2] and f, g ∈ C. If Cf ∩g = {f, g}, then we say f and g are
neighbors in C, and write f ∼C g, or f ∼ g when C is clear from context.
Next, we discuss the conditions under which S/IC and S/IC? could be Cohen-Macaulay. Recall
the definition of Cohen-Macaulayness.
Definition 3.23 ([11]). A monomial quotient S/I is Cohen-Macaulay if its projective dimension
is equal to its codimension codim S/I := min{supp ω : ω ∈ mingen(I ? )}.
Cohen-Macaulay rings form a well-studied class of rings in commutative algebra that yields
to a rich theory at the intersection of algebraic geometry and combinatorics. The mathematician
Melvin Hochster famously wrote “Life is really worth living” in a Cohen-Macaulay ring [9].
By [5, Prop 1.2.13], we have that S/I is Cohen-Macaulay for I squarefree only if every minimal
generator of I ? has the same support size. Then the following theorem shows that requiring S/IC?
to be Cohen-Macaulay filters out most interesting function classes, including every class considered
above except for singleton classes. We first make a definition to be used in the following proof and
in later sections.
Definition 3.24. Let D ⊆ [n → m]. We say
S D is full if for every pair (u, v) ∈ [n] × [m], there is
some function h ∈ D with h(u) = v — i.e. {graph h : h ∈ D} = [n] × [m].
Theorem 3.25. Let C ⊆ [n → 2]. The following are equivalent
1. S/IC? is Cohen-Macaulay.
44
3
APPLICATIONS
2. Under the binary relation ∼, C⊇f forms a tree for every PF f :⊆ [n] → [2].
3. dimh C ≤ 1.
Proof. We will show the equivalence of the first two items; the equivalence of the second and third
items falls out during the course of the proof.
First suppose that C is not full. Then IC has a minimal generator xu,b for some u ∈ [n], b ∈ [2]. If
S/IC? is Cohen-Macaulay, then all minimal generators of IC must have the same support size, so for
each functional monomial xv,0 xv,1 , either xv,0 or xv,1 is a minimal generator of IC . This means that
?
C is a singleton class, and thus is a tree under ∼ trivially. Conversely, S/I{f
} is Cohen-Macaulay
?
for any f ∈ [n → 2] because the projective dimension of S/I{f } is dimh {f } + 1 = 1 which is the
common support size of I{f } (Theorem 2.44).
Now assume C is full. Then mingen(IC ) ⊇ FM and min{| supp ω| : ω ∈ mingen(IC )} = 2. Hence
S/IC? is Cohen-Macaulay iff the projective dimension of S/IC? is 2 iff the homological dimension of
C is 1. This is equivalent to saying that the 1-dimensional cell complex X with vertices f ∈ C and
edges f ∼ g minimally resolves IC? with the obvious labeling, which is the same as the condition
specified in the theorem.
Corollary 3.26. Let C ⊆ [n → 2]. If S/IC? is Cohen-Macaulay, then C has a minimal cellular
resolution and has pure Betti numbers which are 0 or 1.
Example 3.27. Let o : [n] → [2] be the identically zero function. The class C := deltan ∪ {o}
satisfies S/IC? being Cohen-Macaulay. Indeed, f ∼C g iff {f, g} = {δi , o} for some i, so ∼C forms a
star graph with o at its center. For each nonempty f :⊆ [n] → [2], if im f = {0}, then C⊇f contains
o and thus is again a star graph. If f(i) = 1 for a unique i, then C⊇f = δi , which is a tree trivially.
Otherwise, C⊇f = ∅, which is a tree vacuously.
It seems unlikely that any class C with Cohen-Macaulay S/IC? is interesting computationally, as
Theorem 3.25 and Theorem 3.11 imply the VC dimension of C is at most 1. By the Sauer-Shelah
lemma [10], any such class C ⊆ [n → 2] has size at most n + 1.
In contrast, the classes C ⊆ [n → 2] with Cohen-Macaulay S/IC form a larger collection, and
they all have cellular resolutions. For this reason, we say C is Cohen-Macaulay if S/IC is CohenMacaulay.
Definition 3.28. Let n be the n-dimensional cube with vertices [2]n . A cublex (pronounced
Q-blex) is a subcomplex of n .
n has a natural PF labeling η = η that labels each vertex V ∈ [2]n with the corresponding
function η(V ) : [n] → [2] with η(V )(i) = Vi , and the rest of the PF labels are induced via intersection
as in Lemma 2.48. Specifically, each face Fw is associated to a unique PF w :⊆ [n] → [2], such
that Fw consists of all vertices V with η(V ) ⊇ w; we label such a Fw with η(Fw ) = w. A cublex X
naturally inherits η , which we call the canonical PF label function of X.
Rephrasing Reisner’s Criterion [11, thm 5.53], we obtain the following characterization.
Proposition 3.29 (Reisner’s Criterion). C ⊆ [n → 2] is Cohen-Macaulay iff
e i−1 (SCf ; k) = 0 for all i 6= n − | dom f|.
βi,f (C) = dimk H
Theorem 3.30. Let C ⊆ [n → 2]. The following are equivalent.
1. C is Cohen-Macaulay.
45
3.2
Cohen-Macaulayness
2. dimSR C = n.
3. C = {η (V ) : V ∈ X} for some cublex X such that X⊇f is acyclic for all f :⊆ [n] → [2].
Proof. (1 ⇐⇒ 2). This is immediate after noting that codim S/IC = n.
(3 =⇒ 2). X is obviously a minimal cellular resolution of C, and for each f, the face Ff with
PF label f, if it exists, has dimension n − | dom f|, so Reisner’s Criterion is satisfied.
(2 =⇒ 3). Let X be the cubplex containing all faces Ff such that βi,f (C) 6= 0 for i = n−| dom f|.
e i−1 (SCf ; k) 6= 0 iff SCf is the complete (i − 1)-dimensional suboplex
This is indeed a complex: H
by Lemma 3.15; hence for any g ⊇ f, SCg is the complete (j − 1)-dimensional suboplex, where
j = n − | dom g|, implying that βj,g (C) = 1.
We prove by induction on poset structure of f :⊆ [n] → [2] under containment that X⊇f is
acyclic for all f. The base case of f being total is clear. Suppose our claim is true for all g ⊃ f. If
X⊇f is an (n − | dom f|)-dimensional cube, then we are done. Otherwise,
X⊇f =
[
X⊇g .
g⊃f
| dom g|=| dom f|+1
By induction hypothesis, each of X⊇g is acyclic, so the homology of X⊇f is isomorphic to the
homology of the nerve N of {X⊇g }. We have for any collection F of such g,
\
X⊇g 6= ∅ ⇐⇒ ∃f ∈ C ∀g ∈ F[f ⊇ g].
g∈F
e • (SCf ; k) = 0 (since Xf is empty),
Therefore N is isomorphic to SCf as simplicial complexes. As H
X⊇f is acyclic as well.
X is obviously minimal since it has unique PF labels, and its vertex labels are exactly C.
The minimal cublex cellular resolution of Cohen-Macaulay C constructed in the proof above is
called the canonical cublex resolution of C.
Corollary 3.31. If C ⊆ [n → 2] is Cohen-Macaulay, then C has a minimal cellular resolution and
has pure Betti numbers which are 0 or 1.
It should be easy to see that if C is Cohen-Macaulay, then so is the filtered class C f for any
PF f :⊆ [n] → [2]. It turns out this is also true for restrictions of C.
Corollary 3.32 (Cohen-Macaulayness is preserved under restriction). If C ⊆ [n → 2] is CohenMacaulay, then so is C U for any U ⊆ [n]. Its canonical cublex resolution is the projection of the
canonical cublex resolution of C onto the subcube Fw of n , where w :⊆ [n] → [2] takes everything
outside U to 0. Consequently, β•,f (C U ) = 0 iff β•,f 0 (C) = 0 for all f 0 ⊇ f extending f to all of
[n] \ U .
Proof. It suffices to consider the case U = [n − 1] and then apply induction. Fix f :⊆ [n − 1] → [2],
and let f0 := f ∪ (n − 1 7→ 0), f1 = f ∪ (n − 1 7→ 1). We wish to show βi,f (C U ) = 0 for all
i 6= n − 1 − | dom f|. We have three cases to consider.
1. β•,f0 (C) = β•,f1 (C) = 0. Certainly, β•,f (C) would also have to be 0 (the existence of the
subcube Ff would imply the existence of Ff0 and Ff1 in the canonical cublex resolution of C).
By Theorem 2.99, this implies β•,f (C U ) = 0 as well.
46
3
APPLICATIONS
2. WLOG βi,f0 (C) = I(i = n − | dom f| − 1) and β•,f1 (C) = 0. Again, β•,f (C) = 0 for the same
reason. So Theorem 2.99 implies βi,f (C U ) = I(i = n − | dom f| − 1).
3. βi,f0 (C) = βi,f1 (C) = I(i = n − | dom f| − 1). Then C = [n → 2] and therefore βi,f (C) = I(i =
n − | dom f|). Theorem 2.99 yields an exact sequence
0 → kβj+1,f (CU ) → kβj+1,f (C) → kβj,f0 (C)+βj,f1 (C) → kβj,f (CU ) → 0,
where j = n − | dom f| − 1. Because C has pure Betti numbers by Corollary 3.31, the only
solution to the above sequence is βi,f (C U ) = I(i = n − | dom f| − 1).
This shows by Proposition 3.29 that C U is Cohen-Macaulay. The second and third statements
then follow immediately.
Lemma 3.33. If C ⊆ [n → 2] is Cohen-Macaulay, then βi,f (C) = I(i = n − | dom f|) iff f ∈ C for
all total f extending f.
Proof. βi,f (C) = I(i = n − | dom f|) iff linkf (SC ) is the complete suboplex iff f ∈ C for all total f
extending f.
Corollary 3.34. If C ⊆ [n → 2] is Cohen-Macaulay, then dimh C = dimVC C.
Proof. dimh C is the dimension of the largest cube in the canonical cublex resolution of C, which
by the above lemma implies C shatters a set of size dimh C. Therefore dimh C ≤ dimVC C. Equality
then follows from Theorem 3.11.
Example 3.35. The singleton class {f }, delta ∪ {o} as defined in Example 3.27, and the complete
class [n → 2] are all Cohen-Macaulay. However, inspecting Table 1 shows that, for d ≥ 1, none of
delta, monconj, conj, linthr, or linfun on d-bit inputs are Cohen-Macaulay, as their StanleyReisner dimensions are strictly greater than 2d . Likewise, polythrkd is not Cohen-Macaulay unless
k = d. Consequently, the converse of Corollary 3.34 cannot be true.
Example 3.36. We can generalize delta ∪ {o} as follows. Let nb(f )kn be the class of functions
on [n] that differs from f ∈ [n → 2] on at most k inputs. Then nb(f )kn is Cohen-Macaulay; its
canonical cublex resolution is the cublex with top cells all the k-dimensional cubes incident on f .
For example, delta ∪ {o} = nb(o)1n .
Finally, we briefly mention the concept of sequential Cohen-Macaulayness, a generalization of
Cohen-Macaulayness.
Definition 3.37 ([18]). A module M is sequential Cohen-Macaulay if there exists a finite filtration
0 = M0 ⊆ M1 ⊆ · · · ⊆ M r = M
of M be graded submodules Mi such that
1. Each quotient Mi /MI−1 is Cohen-Macaulay, and
2. dim(M1 /M0 ) < dim(M2 /M1 ) < · · · < dim(Mr /Mr−1 ), where dim denotes Krull dimension.
Sequentially Cohen-Macaulay rings S/I satisfy projdim S/I = max{| supp a| : xa ∈ mingen(I ? )}
by a result of [7]. If S/IC is sequentially Cohen-Macaulay, this means it is actually Cohen-Macaulay,
since all minimal generators of IC? have the same total degree. Thus what can be called “sequentially
Cohen-Macaulay” classes coincide with Cohen-Macaulay classes.
47
3.3
Separation of Classes
P
link{P } S
S
Figure 18: The link of suboplex S with respect to vertex P is homeomorphic to the intersection of S with
a hyperplane.
3.3
Separation of Classes
In this section, unless specificed otherwise, all homologies and cohomologies are taken against k.
?
Suppose C, D ⊆ [n → m]. If C ⊆ D, then C ∪ D = D, and IC? + ID? = IC∪D
= ID? . In particular, it must
be the case that for every i and σ,
?
βi,σ (IC? + ID? ) = βi,σ (IC∪D
) = βi,σ (ID? ).
Thus C ⊂ D if for some i and f, βi,f (C) 6= βi,f (C ∪ D). The converse is true too, just by virtue of β0,−
encoding the elements of each class. By Theorem 3.20, C ⊂ D already implies that β1,− must differ
between the two classes. However, we may not expect higher dimensional Betti numbers to certify
strict inclusion in general, as the examples in Section 2.3.7 show.
This algebraic perspective ties into the topological perspective discussed in the introduction as
follows. Consider C ⊆ [2d → {−1, 1}] and a PF f :⊆ [2d ] → {−1, 1}. By Hochster’s dual formula
e i−1 (SCf ; k) = dimk H
e i−1 (linkgraph f SC ; k). When f = †, this
(Proposition 2.11), βi,f (C) = dimk H
quantity is the “number of holes of dimension i−1” in the canonical suboplex of C. When graph f =
{(u, f(u))} has a singleton domain, linkgraph f SC is the section of SC by a hyperplane. More precisely,
d
d
if we consider SC as embedded the natural way in S12 −1 = {z ∈ R2 : kzk1 = 1} (identifying each
coordinate with a v ∈ [2d ] ∼
= [2]d ), linkgraph f SC is homeomorphic to SC ∩{z : zu = f(u)/2}. Figure 18
illustrates this. For general f, we have the homeomorphism
linkgraph f SC ∼
= SC ∩ {z : zu = f(u)/2, ∀u ∈ dom f}.
Thus comparing the Betti numbers of D and C ∪ D is the same as comparing “the number of holes”
of SD and SC∪D and their corresponding sections.
If PF-labeled complex (XC , µC ) resolves C and PF-labeled complex (XD , µD ) resolves D, then
the join (XC ? XD , µC ? µD ) resolves C ∪ D by Proposition 2.89. The Betti numbers can then be
computed by
e i−1 ((XC ? XD )⊃f ; k)
βi,f (C ∪ D) = dimk H
via Proposition 2.19. Here are some simple examples illustrating this strategy.
Theorem 3.38. Let d ≥ 2. Let I1 ∈ [2d → 2] be the indicator function u 7→ I(u = 1 = 1 · · · 1 ∈
[2]d ). Consider the partial linear functional g : 0 → 0, 1 → 1. Then βi,g (linfun2d ∪ {I1 }) = 0 for
all i.
48
3
APPLICATIONS
The proof below is in essence the same as the proof of I1 6∈ linfun2d given in the introduction,
but uses the theory we have developed so far. The application of the Nerve Lemma there is here
absorbed into the Stanley-Reisner and cellular resolution machineries.
Proof. Let (X, µ) be the flag resolution of linfun2d and • be the one point resolution of {I1 }. Then
X ? • is the cone over X, with labels µ0 (F ? •) = µ(F ) ∩ I1 and µ0 (F ) = µ(F ) for cells F in X.
Consider Z := (X ? •)⊃g . Every cell F of X in Z has PF label a linear functional on a linear
subspace of Fd2 strictly containing V := {0, 1}. As such, µ(F ) ∩ I1 strictly extends g, because µ(F )
sends something to 0 outside of V. This means Z is a cone over X⊃g , and thus is acyclic. Therefore
βi,g (linfun2d ∪ {I1 }) = 0 for all i.
But βd−1,g (linfun2d ) is nonzero, so we obtain the following corollary.
Corollary 3.39. I1 6∈ linfun2d for d ≥ 2.
Theorem 3.38 says the following geometrically: the canonical suboplex of linfun2d g (a complex
d
of dimension 22 − 2) has holes in dimension d − 1, but these holes are simultaneously covered up
when we add I1 to linfun2d .
Theorem 3.40. Let parityd be the parity function on d bits. Then βi,† (polythrkd ∪{parityd }) =
0 for all i if k < d.
∼ {1, −1}, a 7→ (−1)a ,
Let us work over {−1, 1} instead of {0, 1}, under the bijection {0, 1} =
so that parityd (u0 , . . . , ud−1 ) = u0 · · · ud−1 for u ∈ {−1, 1}d and polythrkd consists of sgn(p) for
polynomials p with degree at most k not taking 0 on any point in {−1, 1}d .
Proof. Fix k < d. Let (X, µ) denote the ball resolution of polythrkd and • be the one point
resolution of {f }. Then X ? • is the cone over X, with labels µ0 (F ? •) = µ(F ) ∩ f and µ0 (F ) = µ(F )
for cells F in X.
Consider Z := (X ? •)⊃† . Every PF label f :⊆ {−1, 1}d → {−1, 1} of X intersects parityd
nontrivially if f 6= †. Otherwise, suppose p is a polynomial function such that p(u) > 0 ⇐⇒ f(u) =
1, p(u) < 0 ⇐⇒ f(u) = −1, and p(u) = 0 ⇐⇒ uQ6∈ dom f. Then by discrete Fourier transform 5 ,
the coeffient of p for the monomial parityd (u) = d−1
i=0 ui is
X
p(a)parityd (a) < 0
a∈{−1,1}d
because whenever p(a) is nonzero, its sign is the opposite of parityd (a). This contradicts k < d.
Thus in particular, the PF label of every cell of X except for the top cell (with PF label †) intersects
parityd nontrivially. Therefore Z is a cone and thus β•,† (polythrkd ∪ {parityd }) = 0.
P
But βe,† (polythrkd ) = 1, where e = kj=0 dj is the homological dimension of polythrkd . So
we recover the following result by Minsky and Papert.
Corollary 3.41 ([12]). parityd 6∈ polythrkd unless k = d.
From the analysis below, we will see in fact that adding parityd to polythrkd causes changes
to Betti numbers in every dimension up to dimVC polythrkd = dimh polythrkd , so in some sense
parityd is maximally homologically separated from polythrkd . This “maximality” turns out to be
equivalent to the lack of weak representation Corollary 3.46.
5
See the opening chapter of [15] for a good introduction to the concepts of Fourier analysis of boolean functions.
49
3.3
Separation of Classes
By Lemma 2.90, the “differences” between the Betti numbers of C ∪ D and those of C and of
D are given by the homologies of (XC × XD , µC × µD )⊃f . Suppose C consists of a single function
f . Then XC is a single point with exponent label Γf . (XC × XD , µC × µD ) is thus isomorphic to
XD as complexes, but the exponent label of each nonempty cell F ∈ XC × XD isomorphic to cell
F 0 ∈ XD is now lcm(λD (F 0 ), Γf ), and the PF label of F is µD (F 0 ) ∩ f ; the empty cell ∅ ∈ XC × XD
has the exponent label 0. We denote this labeled complex by (XD )f .
Notice that (XD )f is a (generally nonminimal) cellular resolution of the PF class Df := D{f },
because (XD )f
⊇f = (XD )⊇f whenever f ⊆ f and empty otherwise, and therefore acyclic. So the
(dimensions of) homologies of (XC × XD )⊃f are just the Betti numbers of Df . This is confirmed
by Lemma 2.92. Another perspective is that SCD is the intersection SC ∩ SD , so by Mayer-Vietoris,
?
ICD
gives the “difference” in Betti numbers between β•,− (C) + β•,− (D) and β•,− (C ∪ D).
ID?f determines the membership of f through several equivalent algebraic conditions.
Lemma 3.42. Let D ⊆ [n → m] be a full class (see Definition 3.24). Then the following are
equivalent:
1. f ∈ D
2. ID?f is principally generated by xΓf
3. ID?f is principal
4. βi,f (Df ) = 1 for exactly one partial f when i = 0 and equals 0 for all other i.
5. βi,f (Df ) = 0 for all f and all i ≥ 1.
6. βi,f (Df ) = 0 for all f 6= f and all i ≥ 1.
Proof. (1 =⇒ 2 =⇒ 3) If f ∈ D, then ID?f is principally generated by xΓf .
(3 =⇒ 2 =⇒ 1) If ID?f is principal, then it’s generated by xΓg for some partial function g.
This implies that h∩f ⊆ g =⇒ graph h ⊆ Γf ∪graph g, ∀h ∈ D. But taking the union over all h ∈ D
contradicts our assumption on D unless g = f . Thus there is some h ∈ D with h ∩ f = f =⇒ h = f .
(3) ⇐⇒ 4) This should be obvious.
(4 ⇐⇒ 5 ⇐⇒ 6) The forward directions are obvious. Conversely, if ID?f has more than one
minimal generator, then its first syzygy is nonzero and has degrees Γf , implying the negation of
Item 5 and Item 6.
Thus ID?f by itself already determines membership of f ∈ D. It also yields information on the
Betti numbers of C ∪ D via Lemma 2.92. Thus in what follows, we study ID?f in order to gain insight
into both of the membership question and the Betti number question.
Let us consider the specific case of D = linthrU , with minimal cocellular resolution coBallU =
(Y, µ). Then linthrU f has minimal cocellular resolution (Y, µf ), where we relabel cells F of Y
by µf (F ) = µ(F ) ∩ f , so that, for example, the empty cell still has PF label the empty function.
~ forms a set of orthogonal basis for
Choose U to be a set of n points such that the vectorization U
Rn . Then linthrU = [U → 2], and Y is homeomorphic to the unit sphere S n−1 as a topological
space and is isomorphic to the complete (n − 1)-dimensional suboplex as a simplicial complex. It
has 2n top cells 4g , one forTeach function g ∈ [U → 2]; in general, it has a cell 4f for each PF
f :⊆ U → 2, satisfying 4f = f ⊇f 4f .
Γf
Let us verify that βi,f (linthrf
U ) equals βi,Γf (hx i) = I(i = 0 & f = f ) by Lemma 2.30. For
any f ⊆ f , define f ♦ f to be the total function
f ♦ f : u 7→ f (u) ∀u ∈ dom f,
50
u 7→ ¬f (u) ∀u 6∈ dom f.
3
APPLICATIONS
∂ f 4f ♦ f
4f
4f ♦ f
4¬f
Figure 19: The bold segments form the partial boundary ∂ f 4f ♦ f . In particular, this partial boundary
contains three vertices. It is exactly the part of 4f ♦ f visible to a spider in the interior of 4¬f , if light
travels along the sphere.
Define the (f, f)-star F(f, f) to be the collection of open cells 4̊g with PF label f ⊆ g ⊆ f ♦ f.
This is exactly the collection of open cells realized by the cellular pair (4f ♦ f , ∂ f 4f ♦ f ), where
∂ f 4f ♦ f denotes the partial boundary of 4f ♦ f that is the union of the closed cells with PF labels
(f ♦ f) \ (i 7→ f(i)) for each i ∈ dom f. In particular, F(f, f ) is realized by (4f , ∂4f ), and F(f, †) is
realized by (4¬f , {}) (where {} is the void complex). In the following we suppress the subscript
to write (4, ∂ f 4) for the sake of clarity. When f 6= †, f , ∂ f 4 is the union of faces intersecting 4¬f ;
intuitively, they form the subcomplex of faces directly visible from an observer in the interior of
4¬f . This is illustrated in Figure 19.
Then the part of (YU , µf
U ) with PF label f is exactly the (f, f)-star. If f 6= f , the closed top
cells in ∂ f 4 all intersect at the closed cell with PF label f ♦ f \ f = ¬(f \ f), and thus their union
∂ f 4 is contractible. This implies via the relative cohomology sequence
e j (∂ f 4) ← H
e j (4) ← H j (4, ∂ f 4) ← H
e j−1 (∂ f 4) ← · · ·
··· ← H
e j (4) = dimk H j (4, ∂ f 4) = βn−1−j,f (linthrf ). If f = f , then ∂ f 4 = ∂4, so
that 0 = dimk H
U
f
e j (4/∂) ∼
H j (4, ∂ f 4) ∼
=H
= kI(j=n−1) . This yields βk,f (linthrU ) = I(k = 0).
The analysis of the Betti numbers of any thresholded linear class thr L is now much easier
given the above. As discussed in Section 2.3.5, the cocellular resolution (Z, µZ ) of thr L is just the
intersection of coBallU = (Y, µ) with L, with the label of an intersection equal to the label of the
original cell, i.e. Z = Y ∩ L, µZ (F ∩ L) = µ(F ). Similarly, the cocellular resolution of (thr L)f
f
f
is just (Z, µf
Z ) with Z = Y ∩ L, µZ (F ∩ L) = µ (F ). If L is not contained in any coordinate
hyperplane of Y , then thr L is full. By Lemma 3.42, f ∈ thr L iff βi,f (thr Lf ) = 0 for all i ≥ 1.
This is equivalent by Lemma 2.30 to the statement that for all f, the degree Γf part of (Z, µf
Z ),
FL (f, f) := F(f, f) ∩ L, has the homological constraint
H dim Z−i (FL (f, f), ∂FL (f, f)) = H dim Z−i (4f ♦ f ∩ L, ∂ f 4f ♦ f ∩ L) = 0, ∀i ≥ 0.
But of course, f ∈ thr L iff L ∩ 4̊f 6= ∅. We therefore have discovered half of a remarkable theorem.
Theorem 3.43 (Homological Farkas). Let L be a vector subspace of dimension l ≥ 2 of Rn not
contained in any coordinate hyperplane, let P denote the positive cone {v ∈ Rn : v > 0}, and
let 1 : [n] → {−1, 1}, j 7→ 1. For any g : [n] → {−1, 1}, define Ξ(g) to be the topological space
represented by the complex ∂FL (1, 1 ∩ g). Then the following are equivalent:6
6
Our proof will work for all fields
k of any characteristic, so the cohomologies can actually be taken against Z.
51
3.3
Separation of Classes
1. L intersects P.
e • (Ξ(g); k) = 0 as long as 4g ∩ L 6= ∅.
2. For all g 6= 1, ¬1, H
This theorem gives homological certificates for the non-intersection of a vector subspace with the
positive cone, similar to how Farkas’ lemma [22] gives linear certificates for the same thing. Let’s
give some intuition for why it should be true. As mentioned before, ∂F(1, 1 ∩ g) is essentially the
part of 41 ♦(1∩g) = 4g visible to an observer Tom in 4̊¬1 , if we make light travel along the surface
of the sphere, or say we project everything into an affine hyperplane. Since the simplex is convex,
the image Tom sees is also convex. If L indeed intersects 4̊1 (equivalently 4̊¬1 ), then for Ξ(g)
he sees some affine space intersecting a convex body, and hence a convex body in itself. As Tom
stands in the interior, he sees everything (i.e. his vision is bijective with the actual points), and the
obvious contraction he sees will indeed contract Ξ(g) to a point, and Ξ(g) has trivial cohomology.
Conversely, this theorem says that if Tom is outside of 4̊1 (equivalently 4̊¬1 ), then he will be
able to see the nonconvexity of ∂F(1, 1 ∩ g) for some g, such that its intersection with an affine
space is no longer contractible to a single point.
Proof of 1 =⇒ 2. Note that Ξ(g) is a complex of dimension at most l − 2, so it suffices to prove
the following equivalent statement:
e l−2−i (Ξ(g); k) = 0 for all i ≥ 0 as long as 4g ∩ L 6= ∅.
For all g 6= 1, ¬1, H
L intersects P iff L intersects 4̊1 iff 1 ∈ thr L. By Lemma 3.42, this implies βi,f (thr Lf ) = 0 for
all f 6= 1, † and i ≥ 0. As we observed above, this means
H l−1−i (FL (1, f), ∂FL (1, f)) = 0, ∀i ≥ 0.
Write A = FL (1, f) and B = ∂FL (1, f) for the sake of brevity. Suppose A = 41 ♦ f ∩L is nonempty.
Then for f 6= † as we have assumed, both A and B contain the empty cell, and therefore we have a
relative cohomology long exact sequence with reduced cohomologies,
e l−2−i (B) ← H
e l−2−i (A) ← H l−2−i (A, B) ← · · ·
· · · ← H l−1−i (A, B) ← H
e • (A) = 0, we have
Because H
e l−2−i (B), ∀i.
H l−1−i (A, B) ∼
=H
This yields the desired result after observing that 1 ♦ f 6= 1, ¬1 iff f 6= 1, †.
Note that we cannot replace L with any general openly convex cone, because we have used
Lemma 3.42 crucially, which requires thr L to be full, which can happen only if L is a vector
subspace, by Lemma 2.72.
The reverse direction is actually quite similar, using the equivalences of Lemma 3.42. But
straightforwardly applying the lemma would yield a condition on when g = ¬1 as well which boils
down to L ∩ 4¬1 6= ∅, that significantly weakens the strength of the theorem.7 To get rid of this
condition, we need to dig deeper into the structures of Betti numbers of thr L.
Theorem 3.44. Suppose L is linear of dimension l, thr L is a full class, and f 6∈ thr L. Let g be
such that σg is the unique covector of L of the largest support with g ⊆ f (where we let g = 0 if no
such covector exists). We say g is the projection of f to thr L, and write g = Π(f, L). Then the
following hold:
7
Note that the condition says L intersects the closed cell 4¬1 , not necessarily the interior, so it doesn’t completely
trivialize it.
52
3
APPLICATIONS
1. βi,g (thr Lf ) = I(i = l − 1 − rank σg). (Here rank denotes the rank wrt matroid of L as
defined in Section 2.3.5)
2. For any h 6⊇ g, β•,h (thr Lf ) = 0.
3. For any PF r with domain disjoint from dom g, βi,r∪g (thr Lf ) = βi,r (thr Lf ([n] \ dom g)).
Note that such a σg would indeed be unique, since any two covectors with this property are
consistent and thus their union gives a covector with weakly bigger support.
Proof. (Item 1) The assumption on g is exactly that L intersects 4f at 4̊g ⊆ 4f and g is the
S
maximal such PF. Then F(f, g) ∩ L = f ♦ g⊇h⊇g 4̊h ∩ L = 4̊g ∩ L. Therefore βi,g (thr Lf ) =
e l−1−i ((4g ∩ L)/∂) = I(l − 1 − i = dim(4g ∩ L)) (note that when 4g ∩ L is a point (resp. the
H
empty cell), the boundary is the empty cell (resp. the empty space), so that this equality still holds
in those cases). But dim(4g ∩ L) is rank σg. So the Betti number is I(i = l − 1 − rank σg).
?
(Item 2) We show that Ithr
is generated by monomials of the form xΓf for f ⊇ g. It suffices
Lf
to demonstrate that for any function h ∈ thr L, the function h o g defined by
(
g(u) if u ∈ dom g
h o g(u) :=
h(u) otherwise.
is also in thr L, as f ∩ (h o g) ⊇ f ∩ h.
Let ϕ ∈ L be a function ϕ : U → R such that sgn(ϕ) = σg. If ψ ∈ L is any function, then for
sufficiently small > 0, sgn(ψ + ϕ) = sgn(ψ) o sgn(ϕ) = sgn(ψ) o g. Since L is linear, ψ + ϕ ∈ L,
and we have the desired result.
?
(Item 3) As shown above, the minimal generators of Ithr
are all divisible by xΓg . The result
Lf
then follows from Lemma 2.8.
Corollary 3.45. Suppose L is linear of dimension l ≥ 2 and thr L is a full class. Then f ∈ thr L
iff βi,f (thr Lf ) = 0 for all f 6= f, † and all i ≥ 1.
If l ≥ 1, then we also have f ∈ thr L iff βi,f (thr Lf ) = 0 for all f 6= f, † and all i ≥ 0.
Proof. We show the first statement. The second statement is similar.
The forward direction follows from Lemma 3.42. If β•,† (thr Lf ) = 0, then the same lemma
also proves the backward direction.
So assume otherwise, and in particular, f 6∈ thr L. By Theorem 3.44, it has to be the case that
βi,† (thr Lf ) = I(i = l − 1 − (−1)) = I(i = l) since rank 0 = −1. Consequently, βl−1,f (thr Lf ) 6= 0
for some f ⊃ †. If l ≥ 2, then this contradicts the right side of the equivalence, as desired.
We can now finish the proof of Theorem 3.43.
Proof of 2 =⇒ 1 in Theorem 3.43. Assume l ≥ 2. (2) says exactly that βj,f (thr Lf ) = 0 for all
f 6= 1, † and all j ≥ 1. So by Corollary 3.45, 1 ∈ thr L and therefore thr L intersects P.
From the literature of threshold functions in theoretical computer science, we say a real function
ϕ : U → R on a finite set U weakly represents a function f : U → {−1, 1} if ϕ(u) > 0 ⇐⇒
f (u) = 1 and ϕ(u) < 0 ⇐⇒ f (u) = −1, but we don’t care what happens when ϕ(u) = 0. In these
terms, we have another immediate corollary of Theorem 3.44.
f
Corollary 3.46. A function f is weakly representable by polythrkd iff βi,† ((polythrkd )
53
) = 0, ∀i.
3.3
Separation of Classes
This is confirmed by Theorem 3.40. By Lemma 2.92, this result means that f is weakly representable by polythrkd iff adding f to polythrkd did not change the homology of Spolythrk .
d
Remark 3.47. Item 3 of Theorem 3.44 reduces the characterization of Betti numbers of thr Lf to
the case when f is not “weakly representable” by thr L.
The following theorem says that as we perturb a function f 6∈ thr L by a single input u to obtain
f u , a nonzero Betti number βi,f of “codimension 1” of thr Lf remains a nonzero Betti number of
u
“codimension 1” of thr Lf if we truncate f.
Theorem 3.48 (Codimension 1 Stability). Suppose L is linear of dimension l ≥ 2 and thr L
is a full class. Let f be a function not in thr L and write g = Π(f, L). Assume rank σg = s
(so that βl−s−1,g (thr Lf ) = 1) and let f :⊆ [n] → [2] be such that βl−s−2,f (thr Lf ) 6= 0. Then
βl−s−2,f (thr Lf ) = 1.
Furthermore, if | dom(f \ g)| > 1 and u ∈ dom(f \ g), set f 0 := f \ (u 7→ f (u)). Then we have
u
βl−s−2,f 0 (thr Lf ) = 1
where
(
f (v)
f u (v) :=
¬f (v)
if v 6= u
if v = u.
Proof. By Theorem 3.44, it suffices to show this for g = †; then s = −1.
Recall FL (f, f) = F(f, f)∩L. By Lemma 2.30, βl−1,f (thr Lf ) = dimk H 0 (FL (f, f), ∂FL (f, f)).
If ∂FL (f, f) contains more than the empty cell, then the RHS is the zeroth reduced cohomology of the connected space FL (f, f)/∂, which is 0, a contradiction. Therefore ∂FL (f, f) = {∅},
i.e. as geometric realizations, L does not intersect ∂F(f, f). Consequently, βl−1,f (thr Lf ) =
dimk H 0 (4f ♦ f , {∅}) = dimk H 0 (4f ♦ f ) = 1.
Now f u ♦ f 0 = f ♦ f, and ∂ f 0 4f ♦ f ⊂ ∂ f 4f ♦ f since f 0 ⊂ f. Therefore ∂FL (f u , f 0 ) ⊆ ∂FL (f, f)
u
also does not intersect L. So βl−1,f 0 (thr Lf ) = dimk H 0 (4f ♦ f , {∅}) = dimk H 0 (4f ♦ f ) = 1.
Below we give some examples of the computation of Betti numbers of thr Lf .
Theorem 3.49. Let f := parityd ∈ [{−1, 1}d → {−1, 1}] and C := linthrd ⊆ [{−1, 1}d →
{−1, 1}]. Then βi,f ∩1 (Cf ) = (2d−1 − 1)I(i = 1).
Proof. Let (Y, µ) be the coball resolution of linthrd . Consider the cell F1 of Y with PF label 1.
It has 2d facets since
(
r 7→ −1 if r = s
Is :=
r 7→ 1
if r 6= s
is a linear threshold function (it “cuts” out a corner of the d-cube), so that each facet of F1 is the
cell Gs := F1∩Is with PF label
(
r 7→ 1
if r = s
1 ∩ Is =
undefined otherwise.
If for s, s0 ∈ {−1, 1}d , f (s) = f (s0 ), then Gs and Gs0 do not share a codimension 2 face (do not
share a facet of their own). (If they do, then
(
r 7→ + if r =
6 s, s0
r 7→ 0 else
54
3
APPLICATIONS
is a covector of the d-cube. But this means that (s, s0 ) is an edge of the d-cube, implying that
f (s) 6= f (s0 )).
S
Now note that ∂ f ∩1 F1 = {Gs : f (s) = 1}. For each Gs 6⊆ ∂ f ∩1 F1 , we have ∂Gs ⊆ ∂ f ∩1 F1 by
Fn/2
the above reasoning. Let α := d + 1 = dimh C and n = 2d . Therefore, ∂ f ∩1 R1 ∼
= S α−2 \ i=1 Dn−2 ,
the (α − 2)-sphere with n/2 holes. So
(
n/2−1 if m = α − 3
m
e (∂ f ∩1 F1 ) = Z
H
0
otherwise
Hence
e α−1−i (F1 , ∂ f ∩1 F1 ; k)
βi,f ∩1 (C) = dimk H
e α−2−i (∂ f ∩1 F1 )
= rank H
= (n/2 − 1)I(α − 2 − i = α − 3)
= (2d−1 − 1)I(i = 1).
⊆ [{−1, 1}d →
Theorem 3.50. Let f := parityd ∈ [{−1, 1}d → {−1, 1}] and C := polythrd−1
d
f
d−1
{−1, 1}]. Then βi,f ∩1 (C ) = (2
− 1)I(i = 1).
Proof. Let (Y, µ) be the coball resolution of linthrd . Consider the cell F1 of Y with PF label 1.
It has 2d facets since
(
r 7→ −1 if r = s
Is :=
r 7→ 1
if r 6= s
is a linear threshold function (it “cuts” out a corner of the d-cube), so that each facet of F1 is the
cell Gs := F1∩Is with PF label
(
r 7→ 1
if r = s
1 ∩ Is =
undefined otherwise.
Note that a function g : {−1, 1}d → {−1, 0, 1} is the sign function of a polynomial p with degree
d − 1 iff im(gf ) ⊇ {−1, 1} (i.e. g hits both 1 and −1). Indeed, by Fourier Transform, the degree
constraint on p is equivalent to
hp, f i =
X
p(u)f (u) = 0.
u∈{−1,1}d
If im gf = im sgn(p)f does not contain −1, then this quantity is positive as long as p 6= 0, a
contradiction. So suppose im gf ⊇ {−1, 1}. Set mp := #{u : g(u)f (u) = 1} and mn := #{u :
g(u)
g(u)f (u) = −1}. Define the polynomial p by p(u) = g(u)
mp if g(u)f (u) = 1 and p(u) = mn if
g(u)f (u) = −1. Then hp, f i = 0 and sgn p = g by construction, as desired.
As ∂ f ∩1 F1 is the complex with the facets F := {Gs : f (s) = 1}, to find its homology it suffices
to consider the nerve of the facet cover. For a function g : {−1, 1}d → {−1, 0, 1}, write ḡ for the
partial function g g −1 ({−1, 1}) (essentially, we are marking as undefined all inputs that g send to
0). But by the above, any proper subset G ⊂ F must have nontrivial intersection (which is a cell
55
3.4
The Maximal Principle for Threshold Functions
T
with PF label ḡ for some g with im g ⊇ {−1, 1}), while F must have PF label a subfunction g of
h̄ for
(
1 if f (u) = −1
h(u) =
0 otherwise.
T
Again, by the last paragraph, this implies that F = ∅. Therefore, in summary, the nerve NF is
e j (∂ f ∩1 F1 ) ∼
e j (NF ) =
the boundary of a (n/2 − 1)-dimensional simplex, where n = 2d , so that H
=H
d−1
I(j = n/2 − 2) · Z. Let α = dimh polythrd = 2d − 1. Then
e α−1−i (F1 , ∂ 1∩f F1 ; k)
βi,1∩f (C) = dimk H
e α−2−i (∂ 1∩f F1 )
= rank H
= I(α − 2 − i = n/2 − 2)
= I(i = 2d−1 − 1)
as desired.
3.4
The Maximal Principle for Threshold Functions
By looking at the 0th Betti numbers of thr Lf , we can obtain a “maximal principle” for thr L.
Theorem 3.51. Suppose there exists a function g ∈ thr L such that
• g 6= f and,
• for each h ∈ thr L that differs from g on exactly one input u, we have g(u) = f (u) = ¬h(u).
Then βi,f ∩g (thr Lf ) = I(i = 0) and f 6∈ thr L. Conversely, any function g ∈ thr L satisfying
βi,f ∩g (thr Lf ) = I(i = 0) also satisfies condition (3.51).
Informally, Theorem 3.51 says that if we look at the partial order on thr L induced by the
mapping from thr L to the class of partial functions, sending g to g ∩ f , then, assuming f is in
thr L, any function g that is a “local maximum” in thr L under this partial order must also be a
global maximum and equal to f . We shall formally call any function g ∈ thr L satisfying condition
3.51 a local maximum with respect to f .
Proof. Let coBall = (Y, µ) be the minimal cocellular resolution of thr L. Let Fg denote the face of
Y with label g. Each facet of Fg has the label g ∩ h for some h differing from g on exactly one
input. Condition (3.51) thus says that ∂ f ∩g Fg = ∂Fg . Therefore, if l = dim L,
e l−1−i (Fg /∂ f ∩g ; k)
βi,f ∩g (thr Lf ) = dimk H
e l−1−i (Fg /∂; k)
= dimk H
= I(l − 1 − i = dim Fg )
= I(i = 0)
This shows that f 6∈ thr L as desired.
For the converse statement, we only need to note that the Betti number condition implies
∂ f ∩g Fg = ∂Fg , by reversing the above argument.
For any f : {−1, 1}d → {−1, 1}, define thrdeg f to be the minimal degree of any polynomial P
with 0 6∈ P ({−1, 1}d ) and sgn(P ) = f . The maximal principle enables us to compute thrdeg f for
any symmetric f (a result that appeared in [3]).
56
3
APPLICATIONS
Theorem 3.52. Suppose f : {−1, 1}d → {−1, 1} is symmetric, i.e. f (u) = f (π · u) for any
permutation π. Let r be the number of times f changes signs. Then thrdeg f = r.
P
Proof. To show thrdeg f ≤ r: Let s(u) := i (1 − ui )/2. Because f is symmetric, it is a function
of s, say f¯(s(u)) = f (u) 8 . Suppose WLOG f¯(0) >Q
0 and f¯ changes signs between s and s + 1 for
s = t1 , . . . , tr . Then define the polynomial Q(s) := ri=1 (ti + 12 − s). One can immediately see that
sgn Q(s) = f¯(s) = f (u). Therefore thrdeg f ≤ r.
Q
1
To show thrdeg f ≥ r: Let k = r − 1 and consider the polynomial Q0 (s) = r−1
i=1 (ti + 2 − s) and
0
k
its sign function ḡ(s) = sgn Q (s) ∈ polythrd . We show that g(u) = ḡ(s(u)) a local maximum.
Since ḡ(s) = f¯(s) for all s ∈ [0, tr ], it suffices to show that for any v with s(v) > tr , the function
(
g(u)
if u 6= v
g v (u) :=
¬g(u) if u = v.
is not in polythrkd . WLOG, assume v = (−1, . . . , −1, 1, . . . , 1) with σ := s(v) −1’s in front. For
the sake of contradiction, suppose there exists degree k P
polynomial P with sgn P = g v . Obtain
through symmetrization the polynomial R(z1 , . . . , zσ ) :=P π∈Sσ P (π · z, 1, . . . , 1). R is a symmetric
polynomial, so expressable as a univariate R0 (q) in q := j (1 − zj )/2 ∈ [0, σ] on the Boolean cube.
Furthermore, sgn R0 (q) = ḡ(q) for all q 6= σ, and sgn R0 (σ) = −ḡ(σ). Thus R0 changes sign k + 1
times on [0, σ] but has degree at most k, a contradiction. This yields the desired result.
The proof above can be extended to give information on the zeroth Betti numbers of polythrkd
{f }. Suppose f is again symmetric, and as in the proof above,
r := thrdeg f
X
s(u) :=
(1 − ui )/2
i
f¯(s(u)) = f (u)
Q(s) := f¯(0)
r
Y
1
(ti + − s)
2
i=1
where f¯ changes signs between s and s + 1 for s = t1 , . . . , tr .
Q
Theorem 3.53. Let k < r and a < b ∈ [r] be such that b − a = k − 1. Set Q0 (s) = f¯(ta ) bi=a (ti +
1
0
k
2 − s) and g(u) := ḡ(s(u)) := sgn Q (s(u)). Then βi,f ∩g (polythrd {f }) = I(i = 0).
Proof. We prove the equivalent statement (by Theorem 3.51) that g is a local maximum. Since
ḡ(s) = f¯(s) for s ∈ [ta , tb + 1], we just need to show that for any v with s(v) 6∈ [ta , tb + 1], the
function
(
g(u)
if u 6= v
g v (u) :=
¬g(u) if u = v.
is not in polythrkd .
If s(v) > tb + 1, then WLOG assume v = (−1, . . . , −1, 1, . . . , 1) with σ := s(v) −1’s in front.
For the sake of contradiction, suppose there exists degree k polynomial P with sgn P = g v . Obtain
8
f can be expressed as a polynomial in {−1, 1}d , and by the fundamental theorem of symmetric polynomials, f
d
is a polynomial in the elementary symmetric
Ppolynomials. But with respect to the Boolean cube {−1, 1} , all higher
symmetric polynomials are polynomials in i ui , so in fact f is a univariate polynomial in s.
57
3.5
Homological Farkas
P
through symmetrization the polynomial R(z1 , . . . , zσ ) :=P π∈Sσ P (π · z, 1, . . . , 1). R is a symmetric
polynomial, so expressable as a univariate R0 (q) in q := j (1 − zj )/2 ∈ [0, σ] on the Boolean cube.
Furthermore, sgn R0 (q) = ḡ(q) on q ∈ [0, σ − 1], and sgn R0 (σ) = −ḡ(σ). Thus R0 changes sign k + 1
times on [0, σ] but has degree at most k, a contradiction.
If s(v) < ta , then WLOG assume v = (1, . . . , 1, −1, . . . , −1) with σ := s(v) −1’s in the back.
For the sake of contradiction, suppose there exists degree kPpolynomial P with sgn P = g v . Obtain
through symmetrization the polynomial R(z1 , . . . , zσ ) := π∈Sσ P
P(π · z, −1, . . . , −1). R is a symmetric polynomial, so expressable as a univariate R0 (q) in q := j (1 − zj )/2 ∈ [0, d − σ] on the
Boolean cube. Furthermore, sgn R0 (q) = ḡ(q + σ) on q ∈ [1, σ − 1], and sgn R0 (0) = −ḡ(σ). Thus
R0 changes sign k + 1 times on [0, d − σ] but has degree at most k, a contradiction.
3.5
Homological Farkas
Theorem 3.43 essentially recovers Theorem 1.7, after we define Λ(g) to be ∂FL (¬1, ¬1 ∩ g), and
utilize the symmetry 1 ∈ thr L ⇐⇒ ¬1 ∈ thr L. Then Λ(g) indeed coincides with the union of
facets of 4g whose linear spans separates 4g and 41 .
We can generalize the homological Farkas’ lemma to arbitrary linear hyperplane arrangements.
Let H = {Hi }ni=1 be a collection of hyperplanes in Rk , and {wi }i be a collection of row matrices
such that
Hi = {x ∈ Rk : wi x = 0}.
Set W to be the matrix with rows wi . For b ∈ {−, +}n , define Rb := {x ∈ Rk : sgn(W x) = b}.
Thus R+ = {x ∈ Rk : W x > 0}. Suppose W has full rank (i.e. the normals wi to Hi span the
whole space Rk ), so that W is an embedding Rk Rn . Each region Rb is the preimage of Pb , the
cone in Rn with sign b. Therefore, Rb is linearly isomorphic to im W ∩ Pb , via W .
Let L ⊆ Rk be a linear subspace of dimension l. Then
L ∩ R+ 6= ∅ ⇐⇒ W (L) ∩ P+ 6= ∅
e l−2−i (W (L) ∩ Λ(b)) = 0∀i ≥ 0]
⇐⇒ ∀b 6= +, −, [W (L) ∩ Λ(b) 6= ∅ =⇒ H
e l−2−i (L ∩ W −1 Λ(b)) = 0∀i ≥ 0]
⇐⇒ ∀b 6= +, −, [L ∩ W −1 Λ(b) 6= ∅ =⇒ H
This inspires the following definition.
Definition 3.54. Let H = {Hi }ni=1 and W be as above, with W having full rank. Suppose
b ∈ {−, +}n . Then ΛH (b) is defined as the union of the facets of Rb ∩ S k−1 whose linear spans
separate Rb and R+ .
One can immediately see that ΛH (b) = W −1 Λ(b).
In this terminology, we have shown the following
Corollary 3.55. Let H = {Hi }ni=1 be a collection of linear hyperplanes in Rk whose normals span
Rk (This is also called an essential hyperplane arrangement.). Suppose L ⊆ Rk is a linear subspace
of dimension l. Then either
• L ∩ R+ 6= ∅, or
e l−2−i (L ∩ ΛH (b)) 6= 0 for some i ≥ 0,
• there is some b 6= +, −, such that L ∩ ΛH (b) 6= ∅ and H
but not both.
58
3
APPLICATIONS
Rb
ΛA (b)
A
sphere at infinity
¬A
Λ¬A (¬b)
¬R¬b
Figure 20: Illustration of the symbols introduced so far.
This corollary can be adapted to the affine case as follows. Let A = {Ai }ni=1 be an essential
oriented affine hyperplane arrangement in Rk−1 . The hyperplanes A divide Rk−1 into open, signed
regions Rb , b ∈ {−, +}n such that Rb lies on the bi side of Ai . We can define ΛA (b) as above, as
the union of facets F of Rb such that Rb falls on the negative side of the affine hull of F , along
with their closures in the “sphere at infinity.”
~ = {A
~ i }n to be the
Let Hb := {(x, b) : x ∈ Rk−1 }. Treating Rk−1 ,→ Rk as H1 , define A
i=1
oriented linear hyperplanes vectorizing Ai in Rk . Vectorization produces from each Rb two cones
~ b, R
~ ¬b ⊆ Rk , defined by
R
~ b := {v ∈ Rk : ∃c > 0, cv ∈ Rb }
R
~ ¬b := {v ∈ Rk : ∃c < 0, cv ∈ Rb }.
R
Define ¬A as the hyperplane arrangement with the same hyperplanes as A but with orientation
reversed. Let ¬Rb denote the region in ¬A with sign b. Set Λ¬A (b) analogously for ¬A, as the
union of facets F of ¬Rb such that ¬Rb falls on the negative side of the affine hull of F , along their
closures in the “sphere at infinity.” Thus the natural linear identity between ¬A and A identifies
¬R¬b with Rb , and Λ¬A (¬b) with the union of facets not in ΛA (b). See Figure 20.
~ i ∩H1 = Ai as oriented hyperplanes, and by symmetry, A
~ i ∩H−1 =
Note that, by construction, A
¬Ai . By projection with respect to the origin, A and ¬A can be glued along the “sphere at infinity”
~ i ∩ S n−1 }i . Similarly, Rb and ¬Rb can be glued together along a subspace of the “sphere
to form {A
~ b , and ΛA (b) and Λ¬A (b) can be glued together likewise to obtain Λ ~ (b).
at infinity” to obtain R
A
~ b = Rb t∞ ¬Rb
We denote this “gluing at infinity” construction by − t∞ −, so that we write R
and ΛA~ (b) = ΛA (b) t∞ Λ¬A (b).
~ be its vectorization in Rk .
Let N be an affine subspace of Rk−1 of dimension l − 1, and let N
Then
~ ∩R
~ + 6= ∅ ⇐⇒ N
~ ∩R
~ − 6= ∅
N ∩ R+ 6= ∅ ⇐⇒ N
~ ∩ Λ ~ (b) 6= ∅ =⇒ H
e l−2−i (N
~ ∩ Λ ~ (b)) = 0∀i ≥ 0]
⇐⇒ ∀b 6= +, −, [N
A
A
~ ∩ Λ ~ (b) = (N ∩ ΛA (b)) t∞ (N ∩ Λ¬A (b)), so we get the following
But N
A
Corollary 3.56. N does not intersect R+ iff there is some b 6= +, − such that (N ∩ ΛA (b)) t∞
(N ∩ Λ¬A (b)) is nonempty and is not nulhomotopic.
59
3.5
3
Homological Farkas
2
f
g
Λ(g)
Λ(f )
1
Figure 21: Example application of Corollary 3.57. Let the hyperplanes (thin lines) be oriented such that
the square at the center is R+ . The bold segments indicate the Λ of each region. Line 1 intersects R+ ,
and we can check that its intersection with any bold component is nulhomotopic. Line 2 does not intersect
R+ , and we see that its intersection with Λ(f ) is two points, so has nontrivial zeroth reduced cohomology.
Line 3 does not intersect R+ either, and its intersection with Λ(g) consists of a point in the finite plane and
another point on the circle at infinity.
When N does not intersect the closure R+ , we can just look at ΛA (b) and the component at
infinity for a homological certificate.
Corollary 3.57. Let A = {Ai }ni=1 be an affine hyperplane arrangement in Rk−1 whose normals
affinely span Rk−1 . Suppose R+ is bounded and let N be an affine subspace of dimension l − 1.
Then the following hold:
e • (N ∩ ΛA (b)) = 0.
1. If R+ ∩ N 6= ∅, then for all b 6= +, −, N ∩ ΛA (b) 6= ∅ =⇒ H
2. If R+ ∩ N = ∅, then for each j ∈ [0, l − 2], there exists b 6= +, − such that N ∩ ΛA (b) 6= ∅
e j (N ∩ ΛA (b)) = 0 for some j.
and H
~ := A
~ ∪ {H0 }, where H0 is the linear hyperplane of Rk with last
Proof. (Item 1) Consider B
~ 0 for the region with sign c with respect to
coordinate 0, oriented toward positive side. Write R
c
~ + does not intersect H0 other than at the
B (where c ∈ {−, +}n+1 ). Because R+ is bounded, R
0
~
~ ∩ Λ ~ (c) is nulhomotopic if nonempty.
origin. Then N ∩ R+ 6= ∅ ⇐⇒ N ∩ R+ 6= ∅ ⇐⇒ ∀c, N
B
~ ∩ Λ ~ (c) ∼
Note that for any b ∈ {−, +}n , we have ΛB~ (bb+) ∼
= ΛA (b), and N
= N ∩ ΛA (b). (Here
B
bb+ means + appended to b). Substituing c = bb+ into the above yields the result.
(Item 2) The most natural proof here adopts the algebraic approach.
~ ) ⊆ [[n + 1] →
~ Consider C := thr W ~ (N
Let WB~ : Rk → Rn+1 be the embedding matrix for B.
B
~ as
{−, +}]. This is the class of functions corresponding to all the sign vectors achievable by N
+
~ Define C := C . Since N ∩ R+ = ∅, βi,† (C) = I(i = l)
it traverses through the regions of B.
by Theorem 3.44. By the minimality of Betti numbers, for all j ≥ 0, βl−1−j,f (C) 6= 0 for some
e j (Ξ ~ (+ ♦ f) ∩ N
~ ) 6= 0
f :⊆ [n + 1] → {−, +}, f 6= †, + with n + 1 6∈ dom f. But this means that H
B
~ ∼
by the proof of Theorem 3.44. Of course, (+ ♦ f)(n + 1) = −, meaning that ΞB~ (+ ♦ f) ∩ N
=
ΛA (¬(+ ♦ f)) ∩ N . For the desired result, we just set b = ¬(+ ♦ f).
Figure 21 gives an example application of Corollary 3.57.
60
3
APPLICATIONS
3.6
Probabilistic Interpretation of Hilbert Function
In this section we exhibit a probabilistic interpretation of the Hilbert function of a canonical ideal.
For a graded module M over S = k[x0 , . . . , xn−1 ], the graded Hilbert function HF(M ; a)
takes an exponent sequence a to the dimension over k of the component of M with degree a. Its
generating function
X
HF(M ; a)xa
HS(M ; x) =
a
is called the graded Hilbert series of M . It is known [11] that
K(M ; a)
HF(M ; a) = Qn−1
i=0 (1 − xi )
for some polynomial K in n variables. This polynomial is called the K-polynomial of M . If one
performs fractional decomposition on this rational function, then one can deduce that the Hibert
function coincides with a polynomial when a has large total degree. (This is briefly demonstrated
below for the N-graded version). This polynomial is called the Hilbert polynomial and is written
HP(M ; a).
Let χM (x) denote the graded Euler characteristic of a module M :
XX
χM (x) =
(−1)i βi,a (M )xa .
i≥0 a0
For example, for M = IC? , we write χC (x) := χIC? (x), and it takes the form
χC (x) =
X
f ∈C
xΓf −
X
xΓ(f ∩g) + · · · .
f ∼C g
It is shown in [11, Thm 4.11, 5.14] that
χI ? (1 − x) = K(I ? ; 1 − x) = K(S/I; x) = χS/I (x)
for any squarefree monomial ideal I.
Now let C ⊆ [n → 2], and let S be its canonical base ring. For any f 6∈ C, the minimal
generators of IC?f are xΓ(f ∩g) for g ∈ C “closest” to f . In particular, for every function h ∈ C,
| dom f ∩ h| ≤ | dom f| = 2d+1 − totdeg xΓf for some xΓf ∈ mingen(IC?f ).
Definition 3.58. Define the hardness of approximating f with C as ℵ(f, C) = min{2d −
| dom f ∩ h| : h ∈ C}
Then ℵ(f, C) is the smallest total degree of any monomial appearing in χCf minus 2d .
Therefore,
log χCf (ζ, . . . , ζ)
− 2d
ζ→0+
log ζ
log K(S/ICf ; ϑ, . . . , ϑ)
= lim
− 2d
ϑ→1−
log 1 − ϑ
log HS(S/ICf ; ϑ, . . . , ϑ)
= lim
+ 2d
ϑ→1−
log 1 − ϑ
ℵ(f, C) = lim
61
3.6
Probabilistic Interpretation of Hilbert Function
where the last equality follows from
HS(S/I; t, . . . t) = K(S/I; t, . . . , t)/(1 − t)2
d+1
.
The N-graded Hilbert series expands into a Laurent polynomial in (1 − t),
HS(S/I; t, . . . , t) =
a−1
a−r−1
+ ··· +
+ a0 + · · · as ts
r+1
(1 − t)
(1 − t)
such that the N-graded Hilbert polynomial HP(S/I; t, . . . , t) has degree r. Thus
ℵ(f, C) = 2d − (r + 1)
= 2d − deg HP(S/ICf ; t, . . . , t) − 1
= 2d − totdeg HP(S/ICf ) − 1
d+1 −1
d+1
Note that total number of monomials in degree k is k+2
= Θ(k 2 −1 ). Therefore, if we
2d+1 −1
define ℘(k; f, C) to be the probability
monomial ω of degree k has supp ω ⊆ f ∩ h
that a random
for some h ∈ C, then ℘(k; f, C) = Θ
HP(S/ICf ;k)
k2d+1 −1
, and
log ℘(k; f, C)
− 2d .
k→∞
log k
ℵ(f, C) = − lim
Now, ℘(k) is really the probability that a PF f has extension in C and is extended by f , where
f is chosen from the distribution Qk that assigns a probability to f proportional to the number of
monomials of total degree k whose support is f. More precisely,
k−1
Qk (f) =
| dom f|−1
k+2d+1 −1
2d+1 −1
Note that there is a nonzero probability of choosing an invalid partial function, i.e. a monomial
that is divisible by xu,0 xu,1 for some u ∈ [2d ]. Under this distribution, a particular PF of size d + 1
is Θ(k) times as likely as any particular PF of size d. As k → ∞, Qk concentrates more and more
probability on the PFs of large size.
By a similar line of reasoning using C instead of Cf , we see that deg HP(S/IC ) + 1 = 2d , so we
define ℵ(C) = 0. Therefore the probability that a PF f has extension in C when f is drawn from Qk
is
d
℘(k; C) ∼ k −2 .
We deduce that
Theorem 3.59. The probability that a PF f drawn from Qk is extended by f when it is known to
have extension in C is Θ(k −ℵ(f,C) ).
Note that we are assuming f and C are fixed, and in particular when we are interested in a
parametrized family (fd , Cd ), there might be dependence on d that is not written here. The main
point we want to make, however, is that the Betti numbers of C and Cf affect the behavior of
these classes under certain kinds of probability distributions. By considering higher Betti numbers
and their dependence on the parameter d, we may compute the dependence of ℘ on d as well.
Conversely, carrying over results from subjects like statistical learning theory could yield bounds
on Betti numbers this way.
62
4
DISCUSSION
4
Discussion
We have presented a new technique for complexity separation based on algebraic topology and
Stanley-Reisner theory, which was used to give another proof of Minsky and Papert’s lower bound
on the degree of polynomial threshold function required to compute parity. We also explored the
connection between the algebraic/topological quantity dimh C and learning theoretical quantity
dimVC C, and surprisingly found that the former dominates the latter, with equality in common
computational classes. The theory created in this paper seems to have consequences even in areas
outside of computation, as illustrated the Homological Farkas Lemma. Finally, we exhibited a
probabilistic interpretation of the Hilbert function that could provide a seed for future developments
in hardness of approximation.
4.1
Geometric Complexity Theory
For readers familiar with Mulmuley’s Geometric Complexity program [13], a natural question is
perhaps in what ways is our theory different? There is a superficial similarity in that both works
associate mathematical objects to complexity classes and focus on finding obstructions to equality of
complexity classes. In the case of geometric complexity, each class is associated to a variety, and the
obstructions sought are of representation-theoretic nature. In our case, each class is associated to
a labeled simplicial complex, and the obstructions sought are of homological nature. But beyond
this similarity, the inner workings of the two techniques are quite distinct. Whereas geometric
complexity focuses on using algebraic geometry and representation theory to shed light on primarily
the determinant vs permanent question, our approach uses combinatorial algebraic topology and
has a framework general enough to reason about any class of functions, not just determinant and
permanent. This generality allowed, for example, the unexpected connection to VC dimension. It
remains to be seen, however, whether these two algebraic approaches are related to each other in
some way.
4.2
Natural Proofs
So this homological theory is quite different from geometric complexity theory. Can it still reveal
new insights on the P = NP problem? Based on the methods presented in this paper, one might
?
try to show P/poly 6= NP by showing that the ideal ISIZE(d
c ){3SAT } is not principal, for any c and
d
large enough d. Could Natural Proofs [16] present an obstruction?
A predicate P : [2d → 2] → [2] is called natural if it satisfies
• (Constructiveness) It is polynomial time in its input size: there is an 2O(d) -time algorithm
that on input the graph of a function f ∈ [2d → 2], outputs P(f ).
• (Largeness) A random function f ∈ [2d → 2] satisfies P(f ) = 1 with probability at least
1
n.
Razborov and Rudich’s celebrated result says that
Theorem 4.1. [16] Suppose there is no subexponentially strong one-way functions. Then there
exists a constant c such that no natural predicate P maps SIZE(dc ) ⊆ [2d → 2] to 0.
This result implicates that common proof methods used for proving complexity separation of
lower complexity classes, like Hastad’s switching lemma used in the proof of parity 6∈ AC0 [2],
cannot be used toward P vs NP.
?
In our case, since SIZE(dc ) has 2poly(d) functions, naively computing the ideal ISIZE(d
c ){3SAT } is
d
already superpolynomial time in 2d , which violates the “constructiveness” of natural proofs. Even
63
4.3
Homotopy Type Theory
?
if the ideal ISIZE(d
c )3SATd is given to us for free, computing the syzygies of a general ideal is NP-
hard in the number of generators Ω(2d ) [4]. Thus a priori this homological technique is not natural
(barring the possibility that in the future, advances in the structure of SSIZE(dc ) yield poly(2d )-time
?
algorithms for the resolution of ISIZE(d
c ){3SAT } ).
d
4.3
Homotopy Type Theory
A recent breakthrough in the connection between algebraic topology and computer science is the
emergence of Homotopy Type Theory (HoTT) [19]. This theory concerns itself with rebuilding the
foundation of mathematics via a homotopic interpretation of type theoretic semantics. Some of the
key observations were that dependent types in type theory correspond to fibrations in homotopy
theory, and equality types correspond to homotopies. One major contribution of this subfield is the
construction of a new (programming) language which “simplifies” the semantics of equality type, by
proving, internally in this language, that isomorphism of types “is equivalent” to equality of types.
It also promises to bring automated proof assistants into more mainstream mathematical use. As
such, HoTT ties algebraic topology to the B side (logic and semantics) of theoretical computer
science.
Of course, this is quite different from what is presented in this paper, which applies algebraic
topology to complexity and learning theory (the A side of TCS). However, early phases of our
homological theory were inspired by the “fibration” philosophy of HoTT. In fact, the canonical
suboplex was first constructed as a sort of “fibration” (which turned out to be a cosheaf, and not
a fibration) as explained in Appendix B. It remains to be seen if other aspects of HoTT could be
illuminating in future research.
5
Future Work
In this work, we have initiated the investigation of function classes through the point of view of
homological and combinatorial commutative algebra. We have built a basic picture of this mathematical world but left many questions unanswered. Here we discuss some of the more important
ones.
Characterize when dimVC = dimh , or just approximately. We saw that all of the interesting
computational classes discussed in this work, for example, linthr and linfun, have homological
dimensions equal to their VC dimensions. We also showed that Cohen-Macaulay classes also satisfy
this property. On the other hand, there are classes like delta whose homological dimensions are
very far apart from their VC dimensions. A useful criterion for when this equality can occur,
or when dimh = O(dimVC ), will contribute to a better picture when the homological properties
of a class reflect its statistical/computational properties. Note that adding the all 0 function to
delta drops its homological dimension back to its VC dimension. So perhaps there is a notion of
“completion” that involves adding a small number of functions to a class to round out the erratic
homological behaviors?
Characterize the Betti numbers of thr Lf . We showed that the Betti numbers of thr Lf
has nontrivial structure, and that some Betti numbers correspond to known concepts like weak
representation of f . However, we only discovered a corner of this structure. In particular, what do
the “middle dimension” Betti numbers look like? We make the following conjecture.
64
5
FUTURE WORK
Conjecture 5.1. Let C = polythrkd and f 6∈ C. For every PF f in Cf , there is some i for which
βi,f \f (C) is nonzero.
It can be shown that this is not true for general thr L classes, but computational experiments
suggest this seems to be true for polynomial thresholds.
How do Betti numbers of thr Lf change under perturbation of f ? We proved a stability
theorem for the “codimension 1” Betti numbers. In general, is there a pattern to how the Betti
numbers respond to perturbation, other than remaining stable?
Does every boolean function class have a minimal cellular or cocellular resolution? It
is shown in [20] that there exist ideals whose minimal resolutions are not (CW) cellular. A natural
question to ask here is whether this negative results still holds when we restrict to canonical ideals
of boolean, or more generally finite, function classes. If so, we may be able to apply techniques
from algebraic topology more broadly.
When does a class C have pure Betti numbers? If we can guarantee that restriction preserves
purity of Betti numbers, then Theorem 2.99 can be used directly to determine the Betti numbers
of restriction of classes. Is this guarantee always valid? How do we obtain classes with pure Betti
numbers?
Under what circumstances can we expect separation of classes using high dimensional
Betti numbers? Betti numbers at dimension 0 just encode the members of a class, and Betti
numbers at dimension 1 encode the “closeness” relations on pairs of functions from the class. On
the other hand, the maximal dimension Betti number of thr Lf encodes information about weak
representation of f . So it seems that low dimension Betti numbers reflect more raw data while
higher dimension Betti numbers reflect more “processed” data about the class, which are probably
more likely to yield insights different from conventional means. Therefore, the power of our method
in this view seems to depend on the dimension at which differences in Betti number emerges (as
we go from high dimension to low dimension).
Extend the probabilistic interpretation of Hilbert function. One may be able to manipulate the distribution Qk in Section 3.6 to arbitrary shapes when restricted to total functions, by
modifying the canonical ideal. This may yield concrete connections between probabilistic computation and commutative algebra.
Prove new complexity separation results using this framework We have given some
examples of applying the homological perspective to prove some simple, old separation results, but
hope to find proofs for nontrivial separations in the future.
65
Appendices
A
Omitted Proofs
Proof of Proposition 2.25. The set of open cells in U \ ∂U is obviously U. So we need to show that
U and ∂U are both subcomplex of Y . The first is trivial by Lemma 2.22.
Suppose U = Yb . An open cell F̊ is in ∂U only if its label aF 6 b. But then any cell in its
boundary ∂F must fall inside ∂U as well, because its exponent label majorizes aF . Thus the closed
cell satsifies F ∈ ∂U. This shows ∂U is closed and thus a subcomplex by Lemma 2.22.
The case of U = Y≺b has the same proof.
For U = Yb , the only difference is the proof of ∂U being closed. We note that an open cell F̊ is
in ∂U iff F̊ ∈ U and its label aF b. Thus any open cell G̊ in its boundary ∂F falls inside ∂U as
well, because its exponent label aG aF b. So F ∈ ∂U, and ∂U is closed, as desired.
Proof of Lemma 2.29. Let E be the chain complex obtained from cochain complex F(X,A) by placing
cohomological degree d at homological degree 0. For each a, we show the degree xa part Ea of E
has rank 0 or 1 homology at homological degree 0 and trivial homology elsewhere iff one of the
three conditions are satisfied.
As a homological chain L
complex, Ea consists of free modules Eai at each homological degree i
isomorphic to a direct sum F ∈∆d−i ((X,A)a ) S, where ∆i (X, A) denotes the pure i-skeleton of the
pair (X, A) (i.e. the collection of open cells of dimension i in X \ A). Writing SF for the copy of
the base ring S corresponding to the cell F , the differential is given componentwise by
X
d : Eai → SG ∈ Eai−1 , a 7→
sign(F, G)aF .
facetsF ⊂G
If K is void, this chain is identically zero.
Otherwise if ∂K is empty, then Ea just reproduces the reduced simplicial cochain complex of K
— reduced because the empty cell is in K and thus has a corresponding copy of S at the highest
e d−i (K) is nonzero only possible at i = 0, as desired,
homological degree in Ea . Then Hi (Ea ) = H
and at this i, the rank of the homology is 0 or 1 by assumption.
Finally, if ∂K contains a nonempty cell, then Ea recovers the relative cochain complex for
(K, ∂K). Then Hi (Ea ) = H̃ d−i (K, ∂K) is nonzero only possible at i = 0, where the rank of the
homology is again 0 or 1.
This proves the reverse direction (⇐).
For the forward direction (⇒), suppose ∂K only contains an empty cell (i.e. does not satisfy
conditions 1 and 2). Then Ea is the nonreduced cohomology chain complex of K, and therefore it
must be the case that H i (K) = Hd−i (Ea ) = 0 at all i 6= d. But H 0 (K) = 0 implies K is empty,
yielding condition 3.
Otherwise, if ∂K is void, this implies condition 2 by the reasoning in the proof of the backward
direction. Similarly, if ∂K is nonempty, this implies condition 1.
Proof of Lemma 2.30. Let E be the chain complex obtained from cochain complex FY by placing
cohomological degree d at homological degree 0. Then βi,b (I) = dimk Hi (E⊗k)b = dimk H d−i (FY ⊗
k)b . But the degree b part of FY ⊗ k is exactly the cochain complex of the collection of open cells
Yb . By Proposition 2.25, Yb is realized by (Y b , ∂Yb ), so H d−i (FY ⊗ k)b = H d−i (Y b , ∂Yb ), which
yields the desired result.
66
B
COSHEAF CONSTRUCTION OF THE CANONICAL SUBOPLEX
1
2
id
1
¬id
⊥
1
2
1
>
(0, 1)
41
( 12 , 12 )
(1, 0)
Figure B.1: “Gluing” together the metric spaces [2 → 2]p for all p ∈ 41 . The distances shown are L1
distances of functions within each “fiber.” ⊥ is the identically 0 function; > is the identically 1 function; id
is the identity; and ¬id is the negation function. If we ignore the metric and “untangle” the upper space,
we get the complete 1-dimensional suboplex.
Proof of Lemma 2.72. WLOG, we can replace Rq with the span of L, so we assume L spans Rq .
We show by induction on q that if L 6⊆ H for every open halfspace, then 0 ∈ L. This would imply
our result: As L is open in Rq , there is a ball contained in L centered at the origin. Since L is a
cone, this means L = Rq .
Note that L 6⊆ H for every open coordinate halfspace H is equivalent to that L ∩ H 6= ∅ for
every open coordinate halfspace H. Indeed, if L ∩ H 0 = ∅, then Rq \ H 0 contains the open set L, and
thus the interior int(Rq \ H) is an open coordinate halfspace that contains L. If L intersects every
open coordinate halfspace, then certainly it cannot be contained in any single H, or else int(Rq \ H)
does not intersect L.
We now begin the induction. The base case of q = 1: L has both a positive point and negative
point, and thus contains 0 because it is convex.
Suppose the induction hypothesis holds for q = p, and let q = p + 1. Then for any halfspace
H, L ∩ H and L ∩ int(Rq \ H) are both nonempty, and thus L intersects the hyperplane ∂H by
convexity. Certainly L ∩ ∂H intersects every open coordinate halfspace of ∂H because the latter
are intersections of open coordinate halfspaces of Rq with ∂H. So by the induction hypothesis,
L ∩ ∂H contains 0, and therefore 0 ∈ L as desired.
B
Cosheaf Construction of the Canonical Suboplex
Let C ⊆ [n → m] and p be a probability distribution on [n]. p induces an L1 metric space Cp by
d(f, g) = n1 kf − gk1 . If we vary p over 4n−1 , then Cp traces out some kind of shape that “lies over”
4n−1 . For C = [2 → 2], this is illustrated in Figure B.1. In this setting, Impagliazzo’s Hardcore
Lemma [2] would say something roughly like the following:
Let C ⊆ [n → 2] and C be the closure of C under taking majority over “small”
subsets of C. For any f ∈ [n → 2], either in the fiber [n → 2]U over the uniform
distribution U, f is “close” to C, or in the fiber [n → 2]q for some q “close” to
U, f is at least distance 1/2 + from C.
Thus this view of “fibered metric spaces” may be natural for discussion of hardness of approximation
or learning theory.
67
B.1
Cosheaves and Display Space
If we ignore the metrics and “untangle” the space, we get the canonical suboplex of [2 → 2],
the complete 1-dimensional suboplex. In general, the canonical suboplex of a class C ⊆ [n → m]
can be obtained by “gluing” together the metric spaces Cp for all p ∈ 4n−1 , so that there is a
map Υ : SC → 4n−1 whose fibers are Cp (treated as a set). But how do we formalize this “gluing”
process?
In algebraic topology, one usually first tries to fit this picture into the framework of fibrations
or the framework of sheaves. But fibration is the wrong concept, as our “fibers” over the “base
space” 4n−1 are not necessarily homotopy equivalent, as seen in Figure B.1. So SC cannot be the
total space of a fibration over base space 4n−1 . Nor is it the étalé space of a sheaf, as one can
attest to after some contemplation.
It turns out the theory of cosheaves provide the right setting for this construction.
B.1
Cosheaves and Display Space
Definition B.1. A precosheaf is a covariant functor F : O(X) → Set from the poset of open
sets of a topological space X to the category of sets. For each inclusion ı : U ,→ V , the set map
Fı : F(U ) → F(V ) is called the inclusion map from F(U ) to F(V ).
A precosheaf F is further called a cosheaf if it satisfies
the following cosheaf condition: For
S
every open covering {Ui }i of an open set U ⊆ X with {Ui }i = U ,
a
a
a
F(Uk ) ←
F(Uk ∩ Ul ) →
F(Ul )
k
k6=l
l
has pushout F(U ). Here each arrow is the coproduct of inclusion maps.
There is a concept of costalk dual to the concept of stalks in sheaves.
Definition B.2. Let F : O(X) → Set be a cosheaf and let p ∈ X. Then the costalk Fp is defined
as the cofiltered limit
Fp := lim F(U ),
U ∈p
of F(U ) over all open U containing p.
Analogous to the étalé space of a sheaf, cosheaves have something called a display space [8]
that compresses all of its information in a topological space. We first discuss the natural cosheaf
associated to a continuous map.
Let ψ : Y → X be a continuous map between locally path-connected spaces Y and X. We have
a cosheaf F ψ : O(X) → Set induced as follows: For each U ∈ O(X), F ψ (U ) = π0 (ψ −1 U ), where π0
denotes the set of connected components. For an inclusion ı : U ,→ V , F ψ (ı) maps each component
in Y of ψ −1 U into the component of ψ −1 V that it belongs to. For open cover {Ui }i with union U ,
a
a
a
F ψ (Uk ) ←
F ψ (Uk ∩ Ul ) →
F ψ (Ul )
k
k6=l
l
has pushout F ψ (U ). Indeed, this is just the standard gluing construction of pushouts in Set for
each component of ψ −1 U . (An alternative view of F ψ is that it is the direct image cosheaf of F id ,
where id : Y → Y is the identity).
Now we reverse the construction. Let X be a topological space, and F : O(X) → Set be a
cosheaf. We construct the display space Y and a map ψ : Y → X such that F ∼
= F ψ . For the
points of Y , we will take the disjoint union of all costalks,
G
|Y | :=
Fp .
p∈X
68
B
COSHEAF CONSTRUCTION OF THE CANONICAL SUBOPLEX
Then the set-map |ψ| underlying the desired ψ will be
Fp 3 y 7→ p ∈ X.
Now we topologize Y by exhibiting a basis. For any U ∈ O(X), there is a canonical map
G
G
gU :=
mp,U :
Fp → F(U )
p∈U
p∈U
formed by the coproduct of the limit maps mp,U : Fp → F(U ). Then each fiber of gU is taken as
an open set in Y : For each s ∈ F(U ), we define
[s, U ] := gU−1 (s)
as an open set. Note that [s, U ]∩[t, U ] = ∅ if s, t ∈ F(U ) but s 6= t. We claim that for s ∈ F(U ), t ∈
F(V ),
G
(1)
[s, U ] ∩ [t, V ] = {[r, U ∩ V ] : F(iU )(r) = s & F(iV )(r) = t}
where iU : U ∩ V → U and iV : U ∩ V → V are the inclusions. The inclusion of the RHS
into the LHS should be clear. For the opposite direction, suppose p ∈ U ∩ V and y ∈ Fp with
gU (y) = mp,U (y) = s and gV (y) = mp,V (y) = t. Since Fp is the cofiltered limit of {F(W ) : p ∈ W },
we have the following commutative diagram
Fp
mp,U ∩V
mp,U
mp,V
F(U ∩ V )
F (k)
F (j)
F(U )
F(V )
Therefore there is an r ∈ F(U ∩ V ) such that mp,U ∩V (y) = r and F(j)(r) = s and F(k)(r) = t.
Then y ∈ [r, U ∩ V ] ⊆ RHS of 1. Our claim is proved, and {[s, U ] : s ∈ F(U )} generates a
topological basis for Y .
Finally, to complete the verification that F ∼
= F ψ , we show that F(U ) ∼
= π0 (ψ −1 U ), natural
over all U ∈ O(X). It suffices to prove that for each U ∈ O(X) and s ∈ F(U ), [s, U ] is connected;
then F(U ) 3 s 7→ [s, U ] is a natural isomorphism.
Suppose for some s ∈ F(U) this is not true: there exists a nontrivial partition ⋃_{i∈A}[s_i, U_i] ⊔ ⋃_{j∈B}[s_j, U_j] = [s, U] of [s, U] by the open sets ⋃_{i∈A}[s_i, U_i] and ⋃_{j∈B}[s_j, U_j]. We assume WLOG that ⋃_{i∈A}U_i ∪ ⋃_{j∈B}U_j = U (in case that for some x ∈ U, F_x = ∅, we extend each U_i and U_j to cover x). Then by the cosheaf condition, the pushout of
∐_{k∈A∪B} F(U_k) ← ∐_{k≠l} F(U_k ∩ U_l) → ∐_{l∈A∪B} F(U_l)
is F(U). By assumption, F(U_i ⊆ U)(s_i) = s for all i ∈ A ∪ B. So there must be some i ∈ A, j ∈ B and t ∈ F(U_i ∩ U_j) such that F(U_i ∩ U_j ⊆ U_i)(t) = s_i and F(U_i ∩ U_j ⊆ U_j)(t) = s_j. This implies that [t, U_i ∩ U_j] ⊆ [s_i, U_i] ∩ [s_j, U_j]. If X is first countable and locally compact Hausdorff, or if X is metrizable, then by Lemma B.3, [t, U_i ∩ U_j] is nonempty, and therefore ⋃_{i∈A}[s_i, U_i] ∩ ⋃_{j∈B}[s_j, U_j] ≠ ∅, a contradiction, as desired.
Lemma B.3. If X is first countable and locally compact Hausdorff, or if X is metrizable, then
[s, U ] is nonempty for every U ∈ O(X) and s ∈ F(U ).
Proof. We give the proof for the case when X is first countable and locally compact Hausdorff.
The case of metrizable X is similar.
For each x ∈ X, fix a countable local basis {x} ⊆ ··· ⊆ B_x^n ⊆ B_x^{n−1} ⊆ ··· ⊆ B_x^2 ⊆ B_x^1, with the property that the closure cl(B_x^n) is contained in B_x^{n−1} and is compact. Fix such a U and s ∈ F(U). Let U_0 := U and s_0 := s. We will form a sequence ⟨U_i, s_i⟩ as follows. Given U_{i−1} and s_{i−1}, for each point x ∈ U_{i−1}, choose a k_x > i such that B_x^{k_x} is contained in U_{i−1}. These sets {B_x^{k_x}}_x form an open covering of U_{i−1}, and by the cosheaf condition, for some x, im F(B_x^{k_x} ⊆ U_{i−1}) contains s_{i−1}. Then set U_i := B_x^{k_x} and choose any element of F(B_x^{k_x} ⊆ U_{i−1})^{−1}(s_{i−1}) to be s_i. Hence by construction s_i ∈ F(U_i).
Following this procedure for all i ∈ N, we obtain a sequence ⟨U_i, s_i⟩_{i≥0} with the property that U_0 ⊇ cl(U_1) ⊇ U_1 ⊇ cl(U_2) ⊇ U_2 ⊇ ···. As each cl(U_i) is compact, ⋂_i cl(U_i), and hence ⋂_i U_i = ⋂_i cl(U_i), is nonempty. Let z be one of its elements. Then U_i ⊆ B_z^i for all i ≥ 1. Therefore z must be the unique element of ⋂_i U_i, and the sequence ⟨U_i⟩_i is a local basis of z. Furthermore, ⟨s_i⟩_i is an element of the costalk at z, as it can easily be seen to be an element of the inverse limit lim_{i→∞} F(U_i) = lim{F(V) : z ∈ V}. This shows that [s, U] is nonempty.
Note that without assumptions on X, Lemma B.3 cannot hold. In fact, something quite extreme
can happen.
Proposition B.4. There exists a cosheaf F : O(X) → Set whose costalks are all empty.
Proof. This proof is based on Waterhouse’s construction [21]. Let X be an uncountable set with
the cofinite topology. Define F(U ) to be the set of injective functions from the finite set X \ U to
the integers. The map F(U ⊆ V) just restricts a function g : X \ U → Z to g|_{X\V} : X \ V → Z.
One can easily check that the cosheaf sequence is a pushout. Thus F is a cosheaf.
For any x ∈ X, each point of the inverse limit of {F(U ) : x ∈ U } has the following description:
a sequence of injective functions ⟨f_A : A → Z⟩_A indexed by finite sets A ⊆ X, such that if A ⊆ B are both finite sets, then f_A ⊆ f_B. Such a sequence would determine an injective function ⋃_A f_A : X → Z, but that is impossible as X was assumed to be uncountable.
Back to our case of the canonical suboplex. For any S = S_{[n→m]}, there is a canonical embedding Ξ : S ↪ Δ^{mn−1} ⊆ R^{mn}, defined by taking the vertex V_{u,i}, (u, i) ∈ [n] × [m], to e_{u,i}, the basis vector of R^{mn} corresponding to (u, i), and taking each convex combination Σ_{u=0}^{n−1} p(u)V_{u,f(i)} in the simplex associated to f : [n] → [m] to Σ_{u=0}^{n−1} p(u)e_{u,f(i)}. The map Υ : S_C → Δ^{n−1} we sketched in the beginning of this section can then be formally described as Υ = Π ◦ Ξ|_{S_C}, where Π is the linear projection defined by e_{u,i} ↦ e_u ∈ Δ^{n−1}. As we have shown, Υ induces a cosheaf F^Υ : O(Δ^{n−1}) → Set, sending each open U ⊆ Δ^{n−1} to π₀(Υ^{−1}U). For example, if U is in the interior of Δ^{n−1}, then F^Υ(U) has size equal to the size of C. If U is a small ball around the vertex e_u, then F^Υ(U) is bijective with the set of values C takes on u ∈ [n]. It is easy to check that the costalk F_p^Υ at each point p ∈ Δ^{n−1} is just π₀(Υ^{−1}p) = |C_p|, the set underlying the metric space C_p, so we have successfully “glued” together the pieces into a topological space encoding the separation information in C.
One may naturally wonder whether the cosheaf homology of such a cosheaf matches the homology of the display space. One can show that this is indeed the case for our canonical suboplex, via
identification of the cosheaf homology with Čech homology and an application of the acyclic cover
lemma.
What is disappointing about this construction is of course that it ignores metric information in
all of the costalks Cp . Directly replacing Set with the category Met of metric spaces with metric
maps (maps that do not increase distance) does not work, because it does not have coproducts. It
remains an open problem whether one can find a suitable category to replace Set such that SC can
still be expressed as the display space of a cosheaf on 4n−1 , while preserving metric information
in each costalk, and perhaps more importantly, allows the expression of results like Impagliazzo’s
Hardcore Lemma in a natural categorical setting. Perhaps a good starting point is to notice that
the embedding Ξ actually preserves the L1 metric within each fiber Cp .
References
[1] Martin Anthony. Discrete Mathematics of Neural Networks. 2001.
[2] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. 2009. OCLC:
443221176.
[3] J. Aspnes, R. Beigel, M. Furst, and S. Rudich. The expressive power of voting polynomials.
Combinatorica, 14(2):135–148, June 1994.
[4] Dave Bayer and Mike Stillman. Computation of Hilbert functions. Journal of Symbolic Computation, 14(1):31–50, 1992.
[5] Winfried Bruns and Jürgen Herzog. Cohen-Macaulay Rings. Cambridge University Press,
1998.
[6] David Eisenbud. Commutative algebra: with a view toward algebraic geometry. 1994. OCLC:
891662214.
[7] Sara Faridi. The projective dimension of sequentially Cohen-Macaulay monomial ideals.
arXiv:1310.5598 [math], October 2013. arXiv: 1310.5598.
[8] J. Funk. The display locale of a cosheaf. Cahiers de Topologie et Géométrie Différentielle Catégoriques, 36(1):53–93, 1995.
[9] Melvin Hochster. The canonical module of a ring of invariants. Contemp. Math, 88:43–83,
1989.
[10] Michael Kearns and Umesh Vazirani. An Introduction to Computational Learning Theory.
January 1994.
[11] Ezra Miller and Bernd Sturmfels. Combinatorial commutative algebra. Number 227 in Graduate texts in mathematics. Springer, New York, 2005. OCLC: ocm55765389.
[12] Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, 1969.
[13] K. Mulmuley and M. Sohoni. Geometric Complexity Theory I: An Approach to the P vs. NP
and Related Problems. SIAM Journal on Computing, 31(2):496–526, January 2001.
[14] Thomas Natschläger and Michael Schmitt. Exact VC-dimension of Boolean monomials. Information Processing Letters, 59(1):19–20, July 1996.
[15] Ryan O’Donnell. Analysis of boolean functions. Cambridge University Press, New York, NY,
2014.
[16] Alexander A Razborov and Steven Rudich. Natural Proofs. Journal of Computer and System
Sciences, 55(1):24–35, August 1997.
[17] Saharon Shelah. A combinatorial problem; stability and order for models and theories in
infinitary languages. Pacific Journal of Mathematics, 41(1):247–261, 1972.
[18] Richard P. Stanley. Combinatorics and commutative algebra. Number v. 41 in Progress in
mathematics. Birkhäuser, Boston, 2nd ed edition, 1996.
[19] The Univalent Foundations Program. Homotopy Type Theory: Univalent Foundations of
Mathematics. arXiv preprint arXiv:1308.0729, 2013.
[20] Mauricio Velasco. Minimal free resolutions that are not supported by a CW-complex. Journal
of Algebra, 319(1):102–114, January 2008.
[21] William C. Waterhouse. An empty inverse limit. Proceedings of the American Mathematical
Society, 36(2):618, 1972.
[22] Günter M. Ziegler. Lectures on Polytopes, volume 152 of Graduate Texts in Mathematics.
Springer New York, New York, NY, 1995.
| 0 |
Approximate Clustering with Same-Cluster Queries
Nir Ailon1 , Anup Bhattacharya2 , Ragesh Jaiswal2 , and Amit Kumar2
arXiv:1704.01862v3 [cs.DS] 4 Oct 2017
1 Technion, Haifa, Israel.⋆
2 Department of Computer Science and Engineering, Indian Institute of Technology Delhi.⋆⋆
Abstract. Ashtiani et al. proposed a Semi-Supervised Active Clustering framework (SSAC), where
the learner is allowed to make adaptive queries to a domain expert. The queries are of the kind “do two
given points belong to the same optimal cluster?”, and the answers to these queries are assumed to be
consistent with a unique optimal solution. There are many clustering contexts where such same cluster
queries are feasible. Ashtiani et al. exhibited the power of such queries by showing that any instance of
the k-means clustering problem, with additional margin assumption, can be solved efficiently if one is
allowed O(k2 log k + k log n) same-cluster queries. This is interesting since the k-means problem, even
with the margin assumption, is NP-hard.
In this paper, we extend the work of Ashtiani et al. to the approximation setting showing that a few
of such same-cluster queries enables one to get a polynomial-time (1 + ε)-approximation algorithm for
the k-means problem without any margin assumption on the input dataset. Again, this is interesting
since the k-means problem is NP-hard to approximate within a factor (1 + c) for a fixed constant
0 < c < 1. The number of same-cluster queries used is poly(k/ε) which is independent of the size n
of the dataset. Our algorithm is based on the D2 -sampling technique, also known as the k-means++
seeding algorithm. We also give a conditional lower bound on the number of same-cluster queries
showing that if the Exponential Time Hypothesis (ETH) holds, then any such efficient query algorithm needs to make Ω(k/poly log k) same-cluster queries. Our algorithm can be extended for the case when
the oracle is faulty, that is, it gives wrong answers to queries with some bounded probability. Another
result we show with respect to the k-means++ seeding algorithm is that a small modification to the k-means++ seeding algorithm within the SSAC framework converts it to a constant factor approximation
algorithm instead of the well known O(log k)-approximation algorithm.
1
Introduction
Clustering is extensively used in data mining and is typically the first task performed when trying to understand large data. Clustering basically involves partitioning given data into groups or
clusters such that data points within the same cluster are similar as per some similarity measure.
Clustering is usually performed in an unsupervised setting where data points do not have any labels associated with them. The partitioning is done using some measure of similarity/dissimilarity
between data elements. In this work, we extend the work of Ashtiani et al. [AKBD16] who explored
the possibility of performing clustering in a semi-supervised active learning setting for center based
clustering problems such as k-median/means. In this setting, which they call Semi-Supervised Active Clustering framework or SSAC in short, the clustering algorithm is allowed to make adaptive
queries of the form:
do two points from the dataset belong to the same optimal cluster?
where query answers are assumed to be consistent with a unique optimal solution. Ashtiani et
al. [AKBD16] started the study of understanding the strength of this model. Do hard clustering
problems become easy in this model? They explore such questions in the context of center-based
clustering problems. Center based clustering problems such as k-means are extensively used to
analyze large data clustering problems. Let us define the k-means problem in the Euclidean setting.
⋆
⋆⋆
Email address: [email protected]
Email addresses: {anupb, rjaiswal, amitk}@cse.iitd.ac.in
Definition 1 (k-means problem). Given a dataset X ⊆ Rd containing n points, and a positive
integer k, find a set of k points C ⊆ Rd (called centers) such that the following cost function is
minimized:
Φ(C, X) = Σ_{x∈X} min_{c∈C} D(x, c).
D(x, c) denotes the squared Euclidean distance between c and x. That is, D(x, c) = ||x − c||2 .
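For concreteness, the cost function Φ(C, X) of Definition 1 can be computed directly. The following short Python sketch (ours, not part of the original formulation) does so with NumPy:

```python
import numpy as np

def kmeans_cost(X, C):
    """Phi(C, X): sum over points x in X of the squared Euclidean
    distance from x to its closest center in C."""
    X = np.asarray(X, dtype=float)   # shape (n, d)
    C = np.asarray(C, dtype=float)   # shape (k, d)
    # Pairwise squared distances, shape (n, k).
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()
```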
Note that the k optimal centers c1 , ..., ck of the k-means problem define k clusters of points in
a natural manner. All points for which the closest center is ci belong to the ith cluster. This is
also known as the Voronoi partitioning and the clusters obtained in this manner using the optimal
k centers are called the optimal clusters. Note that the optimal center for the 1-means problem for any dataset X ⊆ Rd is the centroid of the dataset, denoted by µ(X) := (Σ_{x∈X} x)/|X|. This means
that if X1 , ...., Xk are the optimal clusters for the k-means problem on any dataset X ⊆ Rd and
c1 , ..., ck are the corresponding optimal centers, then ∀i, ci = µ(Xi ). The k-means problem has been
widely studied and various facts are known about this problem. The problem is tractable when
either the number k of clusters or the dimension d equal to 1. However, when k > 1 or d > 1, then
the problem is known to be NP-hard [Das08, Vat09, MNV12]. There has been a number of works
of beyond the worst-case flavour for k-means problem in which it is typically assumed that the
dataset satisfies some separation condition, and then the question is whether this assumption can
be exploited to design algorithms providing better guarantees for the problem. Such questions have
led to different definitions of separation and also some interesting results for datasets that satisfy
these separation conditions (e.g., [ORSS13, BBG09, ABS12]). Ashtiani et al. [AKBD16] explored
one such separation notion that they call the γ-margin property.
Definition 2 (γ-margin property). Let γ > 1 be a real number. Let X ⊆ Rd be any dataset and
k be any positive integer. Let PX = {X1 , ..., Xk } denote k optimal clusters for the k-means problem.
Then this optimal partition of the dataset PX is said to satisfy the γ-margin property iff for all
i 6= j ∈ {1, ..., k} and x ∈ Xi and y ∈ Xj , we have:
γ · ||x − µ(Xi )||< ||y − µ(Xi )||.
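The γ-margin property of Definition 2 is also straightforward to check for a given clustering. The sketch below is an illustrative Python check (the function name and interface are ours):

```python
import numpy as np

def satisfies_gamma_margin(clusters, gamma):
    """Check the gamma-margin property: for every pair of distinct clusters
    X_i, X_j, every x in X_i and y in X_j must satisfy
    gamma * ||x - mu(X_i)|| < ||y - mu(X_i)||."""
    mus = [np.mean(Xi, axis=0) for Xi in clusters]
    for i, Xi in enumerate(clusters):
        for j, Xj in enumerate(clusters):
            if i == j:
                continue
            in_dist = np.linalg.norm(Xi - mus[i], axis=1).max()   # farthest own point
            out_dist = np.linalg.norm(Xj - mus[i], axis=1).min()  # closest foreign point
            if not gamma * in_dist < out_dist:
                return False
    return True
```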
Qualitatively, what this means is that every point within some cluster is closer to its own cluster
center than a point that does not belong to this cluster. This seems to be a very strong separation
property. Ashtiani et al. [AKBD16] showed that the k-means clustering problem is NP-hard even when restricted to instances that satisfy the γ-margin property for γ = √3.4 ≈ 1.84. Here is the
formal statement of their hardness result.
Theorem 1 (Theorem 10 in [AKBD16]). Finding an optimal solution to k-means objective
function is NP-hard when k = Θ(n^ε) for any ε ∈ (0, 1), even when there is an optimal clustering that satisfies the γ-margin property for γ = √3.4.
In the context of the k-means problem, the same-cluster queries within the SSAC framework are
decision questions of the form: Do points x, y with x ≠ y belong to the same optimal cluster? 3 Following is the main question explored by Ashtiani et al. [AKBD16]:
Under the γ-margin assumption, for a fixed γ ∈ (1, √3.4], how many queries must be made
in the SSAC framework for k-means to become tractable?
3
In case where the optimal solution is not unique, the same-cluster query answers are consistent with respect to
any fixed optimal clustering.
Ashtiani et al. [AKBD16] addressed the above question and gave a query algorithm. Their
algorithm, in fact, works for a more general setting where the clusters are not necessarily optimal.
In the more general setting, there is a target clustering X̄ = X̄1 , ..., X̄k of the given dataset X ⊆ Rd
(not necessarily optimal clusters) such that these clusters satisfy the γ-margin property (i.e., for
all i, x ∈ X̄i , and y ∈
/ X̄i , γ · ||x − µ(X̄i )||< ||y − µ(X̄i )||) and the goal of the query algorithm is to
output the clustering X̄. Here is the main result of Ashtiani et al.
Theorem 2 (Theorems 7 and 8 in [AKBD16]). Let δ ∈ (0, 1) and γ > 1. Let X ⊆ Rd
be any dataset containing n points, k be a positive integer, and X1 , ..., Xk be any target clustering
of X that satisfies the γ-margin property. Then there is a query algorithm A that makes
O(k log n + k² · (log k + log 1/δ)/(γ − 1)⁴) same-cluster queries and with probability at least (1 − δ) outputs the clustering X1, ..., Xk. The running time of algorithm A is O(kn log n + k² · (log k + log 1/δ)/(γ − 1)⁴).
The above result is a witness to the power of the SSAC framework. We extend this line of
work by examining the power of same-cluster queries in the approximation algorithms domain. Our
results do not assume any separation condition on the dataset (such as γ-margin as in [AKBD16])
and they hold for any dataset.
Since the k-means problem is NP-hard, an important line of research work has been to obtain
approximation algorithms for the problem. There are various efficient approximation algorithms
for the k-means problem, the current best approximation guarantee being 6.357 by Ahmadian et
al. [ANFSW16]. A simple approximation algorithm that gives an O(log k) approximation guarantee is the k-means++ seeding algorithm (also known as D 2 -sampling algorithm) by Arthur and
Vassilvitskii [AV07]. This algorithm is commonly used in solving the k-means problem in practice.
As far as approximation schemes or in other words (1 + ε)-approximation algorithms (for arbitrary
small ε < 1) are concerned, the following is known: It was shown by Awasthi et al. [ACKS15]
that there is some fixed constant 0 < c < 1 such that there cannot exist an efficient (1 + c) factor approximation unless P = NP. This result was improved by Lee et al. [LSW17] where it was
shown that it is NP-hard to approximate the k-means problem within a factor of 1.0013. However,
when either k or d is a fixed constant, then there are Polynomial Time Approximation Schemes
(PTAS) for the k-means problem.4 Cohen-Addad et al. [CAKM16] and Friggstad et al. [FRS16] gave a
PTAS for the k-means problem in constant dimension. For fixed constant k, various PTASs are
known [KSS10, FMS07, JKS14, JKY15]. Following is the main question that we explore in this
work:
For arbitrary small ε > 0, how many same-cluster queries must be made in an efficient
(1 + ε)-approximation algorithm for k-means in the SSAC framework? The running time
should be polynomial in all input parameters such as n, k, d and also in 1/ε.
Note that this is a natural extension of the main question explored by Ashtiani et al. [AKBD16].
Moreover, we have removed the separation assumption on the data. We provide an algorithm that
makes poly(k/ε) same-cluster queries and runs in time O(nd · poly(k/ε)). More specifically, here is
the formal statement of our main result:
Theorem 3 (Main result: query algorithm). Let 0 < ε ≤ 1/2, k be any positive integer, and
X ⊆ Rd . Then there is a query algorithm A that runs in time O(ndk 9 /ε4 ) and with probability at
least 0.99 outputs a center set C such that Φ(C, X) ≤ (1 + ε) · ∆k(X). Moreover, the number of same-cluster queries used by A is O(k9/ε4). Here ∆k(X) denotes the optimal value of the k-means objective function.
4 This basically means an algorithm that runs in time polynomial in the input parameters but may run in time exponential in 1/ε.
Note that unlike Theorem 2, our bound on the number of same-cluster queries is independent of
the size of the dataset. We find this interesting and the next natural question we ask is whether this
bound on the number of same-cluster queries is tight in some sense. In other words, does there exist
a query algorithm in the SSAC setting that gives (1+ε)-approximation in time polynomial in n, k, d
and makes significantly fewer queries than the one given in the theorem above? We answer this
question in the negative by establishing a conditional lower bound on the number of same-cluster
queries under the assumption that ETH (Exponential Time Hypothesis) [IP01, IPZ01] holds. The
formal statement of our result is given below.
Theorem 4 (Main result: query lower bound). If the Exponential Time Hypothesis (ETH)
holds, then there exists a constant c > 1 such that any c-approximation query algorithm for the
k-means problem that runs in time poly(n, d, k) makes at least k/(poly log k) queries.
Faulty query setting The existence of a same-cluster oracle that answers such queries perfectly may
be too strong an assumption. A more reasonable assumption is the existence of a faulty oracle that
can answer incorrectly but only with bounded probability. Our query approximation algorithm can
be extended to the setting where answers to the same-cluster queries are faulty. More specifically, we
can get wrong answers to queries independently but with probability at most some constant q < 1/2.
Also note that in our model the answer for a same-cluster query does not change with repetition.
This means that one cannot ask the same query multiple times and amplify the probability of
correctness. We obtain (1 + ε)-approximation guarantee for the k-means clustering problem in this
setting. The main result is given as follows.
Theorem 5. Let 0 < ε ≤ 1/2, k be any positive integer, and X ⊆ Rd . Consider a faulty SSAC
setting where the response to every same-cluster query is incorrect with probability at most some
constant q < 1/2. In such a setting, there is a query algorithm AE that with probability at least 0.99,
outputs a center set C such that Φ(C, X) ≤ (1 + ε) · ∆k (X). Moreover, the number of same-cluster
queries used by AE is O(k15 /ε8 ).
The previous theorems summarise the main results of this work which basically explores the
power of same-cluster queries in designing fast (1 + ε)-approximation algorithms for the k-means
problem. We will give the proofs of the above theorems in Sections 3, 4, and 5. There are some
other simple and useful contexts, where the SSAC framework gives extremely nice results. One
such context is the popular k-means++ seeding algorithm. This is an extremely simple sampling
based algorithm for the k-means problem that samples k centers in a sequence of k iterations. We
show that within the SSAC framework, a small modification of this sampling algorithm converts it
to one that gives constant factor approximation instead of O(log k)-approximation [AV07] that is
known. This is another witness to the power of same-cluster queries. We begin the technical part
of this work by discussing this result in Section 2. Some of the basic techniques involved in proving
our main results will be introduced while discussing this simpler context.
Other related work Clustering problems have been studied in different semi-supervised settings.
Basu et al. [BBM04] explored must-link and cannot-link constraints in their semi-supervised clustering formulation. In their framework, must-link and cannot-link constraints were provided explicitly as part of the input along with the cost of violating these constraints. They gave an active
learning formulation for clustering in which an oracle answers whether two query points belong to
the same cluster or not, and gave a clustering algorithm using these queries. However, they work
with a different objective function and there is no discussion on theoretical bounds on the number
of queries. In contrast, in our work we consider the k-means objective function and provide bounds
on approximation guarantee, required number of adaptive queries, and the running time. Balcan
and Blum [BB08] proposed an interactive framework for clustering with split/merge queries. Given
a clustering C = {C1 , . . .}, a user provides feedback by specifying that some cluster Cl should be
split, or clusters Ci and Cj should be merged. Awasthi et al. [ABV14] studied a local interactive
algorithm for clustering with split and merge feedbacks. Voevodski et al. [VBR+ 14] considered one
versus all queries where the query answer for a point s ∈ X returns the distances between s and all points in
X. For a k-median instance satisfying (c, ε)-approximation stability property [BBG09], the authors
found a clustering close to the true clustering using only O(k) one versus all queries. Vikram and Dasgupta [VD16] designed an interactive Bayesian hierarchical clustering algorithm. Given dataset X,
the algorithm starts with a candidate hierarchy T , and an initially empty set C of constraints. The
algorithm queries user with a subtree T |S of hierarchy T restricted to constant sized set S ⊂ X of
leaves. User either accepts T |S or provides a counterexample triplet ({a, b}, c) which the algorithm
adds to its set of constraints C, and updates T . They consider both random and adaptive ways to
select S to query T |S .
Our Techniques We now give a brief outline of the new ideas needed for our results. Many algorithms
for the k-means problem proceed by iteratively finding approximations to the optimal centers. One
such popular algorithm is the k-means++ seeding algorithm [AV07]. In this algorithm, one builds
a set of potential centers iteratively. We start with a set C initialized to the empty set. At each
step, we sample a point with probability proportional to the square of the distance from C, and
add it to C. Arthur and Vassilvitskii [AV07] showed that if we continue this process till |C| reaches
k, then the corresponding k-means solution has expected cost O(log k) times the optimal k-means
cost. Aggarwal et al. [ADK09] showed that if we continue this process till |C| reaches βk, for some
constant β > 1, then the corresponding k-means solution (where we actually open all the centers
in C) has cost which is within constant factor of the optimal k-means cost with high probability.
Ideally, one would like to stop when size of C reaches k and obtain a constant factor approximation
guarantee. We know from previous works [AV07, BR13, BJA16] that this is not possible in the
classical (unsupervised) setting. In this work, we show that one can get such a result in the SSAC
framework. A high-level way of analysing the k-means++ seeding algorithm is as follows. We first
observe that if we randomly sample a point from a cluster, then the expected cost of assigning all
points of this cluster to the sampled point is within a constant factor of the cost of assigning all
the points to the mean of this cluster. Therefore, it suffices to select a point chosen uniformly at
random from each of the clusters. Suppose the set C contains such samples for the first i clusters
(of an optimal solution). If the other clusters are far from these i clusters, then it is likely that
the next point added to C belongs to a new cluster (and perhaps is close to a uniform sample).
However to make this more probable, one needs to add several points to C. Further, the number
of samples that needs to be added to C starts getting worse as the value of i increases. Therefore,
the algorithm needs to build C till its size becomes O(k log k). In the SSAC framework, we can
tell if the next point added in C belongs to a new cluster or not. Therefore, we can always ensure
that |C| does not exceed k. To make this idea work, we need to extend the induction argument of
Arthur and Vassilvitskii [AV07] – details are given in Section 2.
We now explain the ideas for the PTAS for k-means. We consider the special case of k = 2.
Let X1 and X2 denote the optimal clusters with X1 being the larger cluster. Inaba et al. [IKI94]
showed that if we randomly sample about O(1/ε) points from a cluster, and let µ′ denote the mean
of this subset of sampled points, then the cost of assigning all points in the cluster to µ′ is within
(1 + ε) of the cost of assigning all these points to their actual mean (whp). Therefore, it is enough to
get uniform samples of size about O(1/ε) from each of the clusters. Jaiswal et al. [JKS14] had the
following approach for obtaining a (1 + ε)-approximation algorithm for k-means (with running time
being nd · f (k, ε), where f is an exponential function of k/ε) – suppose we sample about O(1/ε2 )
points from the input, call this sample S. It is likely to contain at least O(1/ε) from X1 , but we
do not know which points in S are from X1 . Jaiswal et al. addressed this problem by cycling over
all subsets of S. In the SSAC model, we can directly partition S into S ∩ X1 and S ∩ X2 using
|S| same-cluster queries. Having obtained such a sample S, we can get a close approximation to
the mean of X1 . So assume for sake of simplicity that we know µ1 , the mean of X1 . Now we are
faced with the problem of obtaining a uniform sample from X2 . The next idea of Jaiswal et al.
is to sample points with probability proportional to square of distance from µ1 . This is known as
D 2 -sampling. Suppose we again sample about O(1/ε2 ) such points, call this sample S ′ . Assuming
that the two clusters are far enough (otherwise the problem only gets easier), they show that S ′
will contain about O(1/ε2 ) points from X2 (with good probability). Again, in the SSAC model, we
can find this subset by |S ′ | queries – call this set S ′′ . However, the problem is that S ′′ may not
represent a uniform sample from X2 . For any point e ∈ X2 , let pe denote the conditional probability
of sampling e given that a point from X2 is sampled when sampled using D 2 -sampling. They showed
pe is at least ε/m, where m denotes the size of X2 . In order for the sampling lemma of Inaba et al. [IKI94] to work, we cannot work with approximately uniform sampling. The final trick of Jaiswal et al. was to show that one can in fact get a uniform sample of size about O(ε|S ′′ |) = O(1/ε) from S ′′ . The idea is as follows – for every element e ∈ S ′′ , we retain it with probability ε/(pe · m) (which is at most 1), otherwise we remove it from S ′′ . It is not difficult to see that this gives a uniform sample
from X2 . The issue is that we do not know m. Jaiswal et al. again cycle over all subsets of S ′ –
we know that there is a (large enough) subset of S ′ which will behave like a uniform sample from
X2 . In the SSAC framework, we first identify the subset of S ′ which belongs to X2 , call this S ′′ (as
above). Now we prune some points from S ′′ such that the remaining points behave like a uniform
sample. This step is non-trivial because as indicated above, we do not know the value m. Instead,
we first show that pe lies between ε/m and 2/m for most of the points of X2 . Therefore, S ′′ is likely to contain such a nice point, call it v. Now, for every point e ∈ S ′′ , we retain it with probability ε · pv /(2 · pe ) (which we know is at most 1). This gives a uniform sample of sufficiently large size from X2 . For k
larger than 2, we generalize the above ideas using a non-trivial induction argument.
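A quick numeric illustration of the rejection step just described (with the retention probability ε·pv/(2·pe), as reconstructed above): after sampling e with probability pe and retaining it with that probability, every point ends up retained with the same overall probability ε·pv/2, i.e., uniformly. The following small Python simulation (all names and numbers are ours, purely illustrative) makes this visible:

```python
import random
from collections import Counter

def retained_sample(points, p, v, eps, trials=200_000):
    """Sample from `points` with probabilities p[e], then retain the sampled
    element e with probability eps * p[v] / (2 * p[e]).  The overall
    probability of keeping any fixed e is p[e] * eps*p[v]/(2*p[e]) = eps*p[v]/2,
    i.e. the same for every e (a uniform sample, conditioned on retention)."""
    kept = Counter()
    elems = list(points)
    weights = [p[e] for e in elems]
    for _ in range(trials):
        e = random.choices(elems, weights=weights)[0]
        if random.random() < eps * p[v] / (2 * p[e]):
            kept[e] += 1
    return kept

# Toy example: highly non-uniform sampling probabilities over 4 points.
p = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
eps = 0.4
v = "d"  # a "nice" low-probability point
print(retained_sample(p.keys(), p, v, eps))  # counts come out roughly equal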
2
k-means++ within SSAC framework
The k-means++ seeding algorithm, also known as the D 2 -sampling algorithm, is a simple sampling
procedure that samples k centers in k iterations. The description of this algorithm is given below.
The algorithm picks the first center randomly from the set X of points and after having picked
the first (i − 1) centers denoted by Ci−1 , it picks a point x ∈ X to be the ith center with probability
proportional to minc∈Ci−1 ||x − c||2 . The running time of k-means++ seeding algorithm is clearly
O(nkd). Arthur and Vassilvitskii [AV07] showed that this simple sampling procedure gives an
O(log k) approximation in expectation for any dataset. Within the SSAC framework where the
algorithm is allowed to make same-cluster queries, we can make a tiny addition to the k-means++
seeding algorithm to obtain a query algorithm that gives constant factor approximation guarantee
and makes only O(k2 log k) same-cluster queries. The description of the query algorithm is given
in Table 1 (see right). In iteration i > 1, instead of simply accepting the sampled point x as the
ith center (as done in k-means++ seeding algorithm), the sampled point x is accepted only if it
belongs to a cluster other than those to which centers in Ci−1 belong (if this does not happen,
k-means++(X, k)
- Randomly sample a point x ∈ X
- C ← {x}
- for i = 2 to k
  - Sample x ∈ X using distribution D defined as D(x) = Φ(C, {x})/Φ(C, X)
  - C ← C ∪ {x}
- return(C)

Query-k-means++(X, k)
- Randomly sample a point x ∈ X
- C ← {x}
- for i = 2 to k
  - for j = 1 to ⌈log k⌉
    - Sample x ∈ X using distribution D defined as D(x) = Φ(C, {x})/Φ(C, X)
    - if(NewCluster(C, x)){C ← C ∪ {x}; break}
- return(C)

NewCluster(C, x)
- If(∃c ∈ C s.t. SameCluster(c, x)) return(false)
- else return(true)

Table 1. k-means++ seeding algorithm (top) and its adaptation in the SSAC setting (middle), together with the helper procedure NewCluster (bottom)
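The following is a minimal, self-contained Python sketch of Query-k-means++ as listed in Table 1, with the SameCluster oracle simulated from ground-truth labels (in the SSAC setting the oracle would instead be a domain expert); the helper names and implementation details are ours:

```python
import numpy as np

def d2_sample(X, C, rng):
    """D^2-sampling: pick x in X with probability Phi(C,{x}) / Phi(C,X);
    with an empty center set this is just uniform sampling."""
    if not C:
        return rng.integers(len(X))
    d2 = np.min(((X[:, None, :] - np.array(C)[None, :, :]) ** 2).sum(axis=2), axis=1)
    return rng.choice(len(X), p=d2 / d2.sum())

def query_kmeans_pp(X, k, labels, seed=0):
    """Query-k-means++: accept a D^2-sampled point only if the SameCluster
    oracle says it lies in a so-far uncovered (optimal) cluster; retry up to
    ceil(log k) times per iteration."""
    rng = np.random.default_rng(seed)
    same_cluster = lambda i, j: labels[i] == labels[j]   # simulated oracle
    idx = [rng.integers(len(X))]
    for _ in range(2, k + 1):
        for _ in range(int(np.ceil(np.log2(k)))):
            cand = d2_sample(X, [X[i] for i in idx], rng)
            if all(not same_cluster(cand, i) for i in idx):   # NewCluster check
                idx.append(cand)
                break
    return X[idx]
```

Running this sketch on data with known labels lets one compare the cost Φ(C, X) of the returned centers against plain k-means++ seeding.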
the sampling is repeated for at most ⌈log k⌉ times). Here is the main result that we show for the
query-k-means++ algorithm.
Theorem 6. Let X ⊆ Rd be any dataset containing n points and k > 1 be a positive integer. Let
C denote the output of the algorithm Query-k-means++(X, k). Then
E[Φ(C, X)] ≤ 24 · ∆k (X),
where ∆k (X) denotes the optimal cost for this k-means instance. Furthermore, the algorithm makes
O(k2 log k) same-cluster queries and the running time of the algorithm is O(nkd + k log k log n +
k2 log k).
The bound on the number of same-cluster queries is trivial from the algorithm description. For the
running time, it takes O(nd) time to update the distribution D which is updated k times. This
accounts for the O(nkd) term in the running time. Sampling an element from a distribution D
takes O(log n) time (if we maintain the cumulative distribution etc.) and at most O(k log k) points
are sampled. Moreover, determining whether a sampled point belongs to an uncovered cluster takes
O(k) time. So, the overall running time of the algorithm is O(nkd+k log k log n+k 2 log k). We prove
the approximation guarantee in the remaining discussion. We will use the following terminology. Let
the optimal k clusters for dataset X be given as X1 , ..., Xk . For any i, ∆i(X) denotes the optimal cost of the i-means problem on dataset X. Given this, note that ∆k(X) = Σ_{i=1}^{k} ∆1(Xi). For any non-empty center set C, we say that a point x is sampled from dataset X using D 2 -sampling w.r.t. center set C if the sampling probability of x ∈ X is given by D(x) = Φ(C, {x})/Φ(C, X).
The proof of Theorem 6 will mostly follow O(log k)-approximation guarantee proof of k-means++
seeding by Arthur and Vassilvitskii [AV07]. The next two lemmas from [AV07] are crucially used
in the proof of approximation guarantee.
Lemma 1. Let A be any optimal cluster and let c denote a point sampled uniformly at random
from A. Then E[Φ({c}, A)] ≤ 2 · ∆1 (A).
Lemma 2. Let C be any arbitrary set of centers and let A be any optimal cluster. Let c be a point
sampled with D 2 -sampling with respect to the center set C. Then E[Φ(C ∪{c}, A)|c ∈ A] ≤ 8·∆1 (A).
The first lemma says that a randomly sampled center from X provides a good approximation
(in expectation) to the cost of the cluster to which it belongs. The second lemma says that for any
center set C, given that a center c that is D 2 -sampled from X w.r.t. C belong to an optimal cluster
A, the conditional expectation of the cost of the cluster A with respect to center set C ∪ {c} is at
most 8 times the optimal cost of cluster A. Using the above two lemmas, let us try to qualitatively
see why the k-means++ seeding algorithm behaves well. The first center belongs to some optimal
cluster A and from Lemma 1 we know that this center is good for this cluster. At the time the
ith center is D 2 -sampled, there may be some optimal clusters which are still costly with respect
to the center set Ci−1 . But then we can argue that it is likely that the ith sampled center c will
belong to one of these costly clusters, and conditioned on the center being from one such cluster
A, the cost of this cluster after adding c to the current center set is bounded using Lemma 2. The
formal proof of O(log k) approximation guarantee in [AV07] involves setting up a clever induction
argument. We give a similar induction based argument to prove Theorem 6. We prove the following
lemma (similar to Lemma 3.3 in [AV07]). We will need the following definitions: For any center set
C, an optimal cluster A is said to be “covered” if at least one point from A is in C, otherwise A is
said to be “uncovered”. Let T be a union of a subset of the optimal clusters, then we will use the
notation ΦOPT(T) := Σ_{Xi ⊆ T} ∆1(Xi).
Lemma 3. Let C ⊆ X be any set of centers such that the number of uncovered clusters w.r.t. C
is u > 0. Let Xu denote the set of points of the uncovered clusters and Xc denote set of the points
of the covered clusters. Let us run t iterations of the outer for-loop in Query-k-means++ algorithm
such that t ≤ u ≤ k. Let C ′ denote the resulting set of centers after running t iterations of the outer
for-loop. Then the following holds:
E[Φ(C ′ , X)] ≤ (Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + t/k) + ((u − t)/u) · Φ(C, Xu).    (1)
Proof. Let us begin by analysing what happens when starting with C, one iteration of the outer forloop in query-k-means++ is executed. The following two observations will be used in the induction
argument:
Observation 1: If Φ(C, Xc)/Φ(C, X) ≥ 1/2, then we have Φ(C, Xc)/(Φ(C, Xc) + Φ(C, Xu)) ≥ 1/2, which implies that Φ(C, Xu) ≤ Φ(C, Xc), and also Φ(C, X) ≤ 2 · Φ(C, Xc).
Observation 2: If Φ(C, Xc)/Φ(C, X) < 1/2, then the probability that no point will be added after one iteration is given by (Φ(C, Xc)/Φ(C, X))^⌈log k⌉ < (1/2)^log k = 1/k.
We will now proceed by induction. We show that if the statement holds for (t − 1, u) and
(t − 1, u − 1), then the statement holds for (t, u). In the basis step, we will show that the statement
holds for t = 0 and u > 0 and u = t = 1.
Basis step: Let us first prove the simple case of t = 0 and u > 0. In this case, C ′ = C. So, we
have E[Φ(C ′ , X)] = Φ(C, X) which is at most the RHS of (1). Consider the case when u = t = 1.
This means that there is one uncovered cluster and one iteration of the outer for-loop is executed.
If a center from the uncovered cluster is added, then E[Φ(C ′ , X)] ≤ Φ(C, Xc ) + 8 · ΦOP T (Xu )
and if no center is picked, then Φ(C ′ , X) = Φ(C, X). The probability of adding a center from the
uncovered cluster is given by p = 1 − (Φ(C, Xc)/Φ(C, X))^⌈log k⌉ . So, we get E[Φ(C ′ , X)] ≤ p · (Φ(C, Xc) + 8 · ΦOPT(Xu)) + (1 − p) · Φ(C, X). Note that this is upper bounded by the RHS of (1) by observing that 1 − p ≤ Φ(C, Xc)/Φ(C, X).
Inductive step: As stated earlier, we will assume that the statement holds for (t − 1, u) and
(t − 1, u − 1) and we will show that the statement holds for (t, u). Suppose p := Φ(C, Xc)/Φ(C, X) ≥ 1/2, then
Φ(C, X) ≤ 2 · Φ(C, Xc ) and so Φ(C ′ , X) ≤ Φ(C, X) ≤ 2 · Φ(C, Xc ) which is upper bounded by the
RHS of (1). So, the statement holds for (t, u) (without even using the induction assumption). So,
for the rest of the discussion, we will assume that p < 1/2. Let us break the remaining analysis
into two cases – (i) no center is added in the next iteration of the outer for-loop, and (ii) a center
is added. In case (i), u does not change, t decreases by 1, and the covered and uncovered clusters
remain the same after the iteration. So the contribution of this case to E[Φ(C ′ , X)] is at most
p^⌈log k⌉ · [(Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t + 1)/u) · Φ(C, Xu)]    (2)
Now, consider case (ii). Let A be any uncovered cluster w.r.t. center set C. For any point a ∈ A,
let pa denote the conditional probability of sampling a conditioned on sampling a point from A.
Also, let φa denote the cost of A given a is added as a center. That is, φa = Φ(C ∪ {a}, A). The
contribution of A to the expectation E[Φ(C ′ , X)] using the induction hypothesis is:
(1 − p^⌈log k⌉) · (Φ(C, A)/Φ(C, Xu)) · Σ_{a∈A} pa · [(Φ(C, Xc) + φa + 8 · ΦOPT(Xu) − 8 · ∆1(A)) · (2 + (t − 1)/k) + ((u − t)/(u − 1)) · (Φ(C, Xu) − Φ(C, A))]
This is at most
(1 − p^⌈log k⌉) · (Φ(C, A)/Φ(C, Xu)) · [(Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t)/(u − 1)) · (Φ(C, Xu) − Φ(C, A))]
The previous inequality follows from the fact that Σ_{a∈A} pa φa ≤ 8 · ∆1(A) from Lemma 2. Summing over all uncovered clusters, the overall contribution in case (ii) is at most:
(1 − p^⌈log k⌉) · [(Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t)/(u − 1)) · (Φ(C, Xu) − (1/u) · Φ(C, Xu))]
The above bound is obtained using the fact that Σ_{A is uncovered} Φ(C, A)² ≥ (1/u) · Φ(C, Xu)². So the contribution is at most
(1 − p^⌈log k⌉) · [(Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t)/u) · Φ(C, Xu)]    (3)
Combining inequalities (2) and (3), we get the following:
E[Φ(C ′ , X)] ≤ (Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t)/u) · Φ(C, Xu) + p^⌈log k⌉ · Φ(C, Xu)/u
= (Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t)/u) · Φ(C, Xu) + (Φ(C, Xc)/Φ(C, X))^⌈log k⌉ · Φ(C, Xu)/u
≤ (Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + (t − 1)/k) + ((u − t)/u) · Φ(C, Xu) + Φ(C, Xc)/(ku)
(using Observation 2, that is, p^⌈log k⌉ ≤ 1/k)
≤ (Φ(C, Xc) + 8 · ΦOPT(Xu)) · (2 + t/k) + ((u − t)/u) · Φ(C, Xu)
This completes the inductive argument and the proof. ⊓⊔
Let us now conclude the proof of Theorem 6 using the above lemma. Consider the center set C
before entering the outer for-loop. This contains a single center c chosen randomly from the dataset
X. Let c belong to some optimal cluster A. Let C ′ denote the center set after the execution of the
outer for-loop completes. Applying the above lemma with u = t = k − 1, we get:
E[Φ(C ′ , X)] ≤ (Φ(C, A) + 8 · ∆k(X) − 8 · ∆1(A)) · (2 + (k − 1)/k)
≤ 3 · (2 · ∆1(A) + 8 · ∆k(X) − 8 · ∆1(A)) (using Lemma 1)
≤ 24 · ∆k(X)
3
Query Approximation Algorithm (proof of Theorem 3)
As mentioned in the introduction, our query algorithm is based on the D 2 -sampling based algorithm
of Jaiswal et al. [JKS14, JKY15]. The algorithm in these works give a (1 + ε)-factor approximation
for arbitrary small ε > 0. The running time of these algorithms are of the form nd · f (k, ε), where
f is an exponential function of k/ε. We now show that it is possible to get a running time which
is polynomial in n, k, d, 1/ε in the SSAC model. The main ingredient in the design and analysis of
the sampling algorithm is the following lemma by Inaba et al. [IKI94].
Lemma 4 ([IKI94]). Let S be a set of points obtained by independently sampling M points uniformly at random with replacement from a point set X ⊂ Rd . Then for any δ > 0,
Pr[Φ({µ(S)}, X) ≤ (1 + 1/(δM)) · ∆1(X)] ≥ 1 − δ.
Here µ(S) denotes the geometric centroid of the set S. That is, µ(S) = (Σ_{s∈S} s)/|S|.
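Lemma 4 can also be probed empirically: sample M points uniformly with replacement, use their centroid as a 1-means center, and compare against ∆1(X). The sketch below (ours, purely illustrative) estimates the success probability, which should come out at least 1 − δ:

```python
import numpy as np

def inaba_check(X, M, delta, trials=2000, seed=0):
    """Empirically estimate Pr[ Phi({mu(S)}, X) <= (1 + 1/(delta*M)) * Delta_1(X) ]
    for S of M points sampled uniformly with replacement from X."""
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    opt = ((X - mu) ** 2).sum()                     # Delta_1(X)
    bound = (1.0 + 1.0 / (delta * M)) * opt
    hits = 0
    for _ in range(trials):
        S = X[rng.integers(len(X), size=M)]
        cost = ((X - S.mean(axis=0)) ** 2).sum()    # Phi({mu(S)}, X)
        hits += (cost <= bound)
    return hits / trials                            # should be >= 1 - delta

X = np.random.default_rng(1).normal(size=(500, 3))
print(inaba_check(X, M=20, delta=0.1))
```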
Our algorithm Query-k-means is described in Table 2. It maintains a set C of potential centers of
the clusters. In each iteration of step (3), it adds one more candidate center to the set C (whp), and
so, the algorithm stops when |C| reaches k. For sake of explanation, assume that optimal clusters
are X1 , X2 , . . . , Xk with means µ1 , . . . , µk respectively. Consider the ith iteration of step (3). At this
time |C|= i − 1, and it has good approximations to means of i − 1 clusters among X1 , . . . .Xk . Let us
call these clusters covered. In Step (3.1), it samples N points, each with probability proportional to
square of distance from C (D 2 -sampling). Now, it partitions this set, S, into S ∩ X1 , . . . , S ∩ Xk in
the procedure UncoveredClusters, and then picks the partition with the largest size such that the
corresponding optimal cluster Xj is not one of the (i − 1) covered clusters. Now, we would like to
get a uniform sample from Xj – recall that S ∩Xj does not represent a uniform sample. However, as
mentioned in the introduction, we need to find an element s of Xj for which the probability of being
sampled is small enough. Therefore, we pick the element in S ∩ Xj for which this probability is
smallest (and we will show that it has the desired properties). The procedure UncoveredCluster
returns this element s. Finally, we choose a subset T of S ∩ Xj in the procedure UniformSample.
This procedure is designed such that each element of Xj has the same probability of being in T .
In step (3.4), we check whether the multi-set T is of a desired minimum size. We will argue that
the probability of T not containing sufficient number of points is very small. If we have T of the
desired size, we take its mean and add it to C in Step (3.6).
We now formally prove the approximation guarantee of the Query-k-means algorithm.
Theorem 7. Let 0 < ε ≤ 1/2, k be any positive integer, and X ⊆ Rd . There exists an algorithm
that runs in time O(ndk9 /ε4 ) and with probability at least 1/4 outputs a center set C such that
Φ(C, X) ≤ (1 + ε) · ∆k (X). Moreover, the number of same-cluster queries used by the algorithm is
O(k9 /ε4 ).
Constants: N = (2^12 · k^3)/ε^2 , M = 64k/ε , L = (2^23 · k^2)/ε^4
Query-k-means(X, k, ε)
(1) R ← ∅
(2) C ← ∅
(3) for i = 1 to k
(3.1) D2-sample a multi-set S of N points from X with respect to center set C
(3.2) s ← UncoveredCluster(C, S, R)
(3.3) T ← UniformSample(X, C, s)
(3.4) If (|T| < M) continue
(3.5) R ← R ∪ {s}
(3.6) C ← C ∪ µ(T)
(4) return(C)

UncoveredCluster(C, S, R)
- For all i ∈ {1, ..., k}: Si ← ∅
- i ← 1
- For all y ∈ R: {Si ← y; i++}
- For all s ∈ S:
  - If (∃j, y s.t. y ∈ Sj & SameCluster(s, y))
    - Sj ← Sj ∪ {s}
  - else
    - Let i be any index s.t. Si is empty
    - Si ← {s}
- Let Si be the largest set such that i > |R|
- Let s ∈ Si be the element with smallest value of Φ(C, {s}) in Si
- return(s)

UniformSample(X, C, s)
- T ← ∅
- For i = 1 to L:
  - D2-sample a point x from X with respect to center set C
  - If (SameCluster(s, x))
    - With probability (ε/128) · Φ(C, {s})/Φ(C, {x}) add x in multi-set T
- return(T)

Table 2. Approximation algorithm for k-means (top). Note that µ(T) denotes the centroid of T and D2-sampling w.r.t. the empty center set C means just uniform sampling. The algorithm UniformSample(X, C, s) (bottom) returns a uniform sample of size Ω(1/ε) (w.h.p.) from the optimal cluster to which point s belongs.
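To make the role of the acceptance probability in UniformSample concrete, here is a minimal Python sketch of that sub-routine alone, with the SameCluster oracle simulated from ground-truth labels; the constants follow Table 2, but the implementation details and names are ours:

```python
import numpy as np

def uniform_sample(X, C, s_idx, labels, L, eps, seed=0):
    """UniformSample(X, C, s): D^2-sample L points w.r.t. the centers C and,
    whenever the sampled point x lies in the same optimal cluster as s,
    keep it with probability (eps/128) * Phi(C,{s}) / Phi(C,{x}).
    Since D^2-sampling picks x with probability proportional to Phi(C,{x}),
    every point of s's cluster is kept with the same overall probability,
    so T behaves like a uniform (multi-)sample from that cluster."""
    rng = np.random.default_rng(seed)
    d2 = np.min(((X[:, None, :] - np.asarray(C)[None, :, :]) ** 2).sum(axis=2), axis=1)
    probs = d2 / d2.sum()
    T = []
    for _ in range(L):
        x = rng.choice(len(X), p=probs)
        if labels[x] == labels[s_idx]:                       # SameCluster(s, x)
            if rng.random() < (eps / 128.0) * d2[s_idx] / d2[x]:
                T.append(x)
    return T
```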
Note that the success probability of the algorithm may be boosted by repeating it a constant
number of times. This will also prove our main theorem (that is, Theorem 3). We will assume that
the dataset X satisfies (k, ε)-irreducibility property defined next. We will later drop this assumption
using a simple argument and show that the result holds for all datasets. This property was also
used in some earlier works [KSS10, JKS14].
Definition 3 (Irreducibility). Let k be a positive integer and 0 < ε ≤ 1. A given dataset X ⊆ Rd
is said to be (k, ε)-irreducible iff
∆k−1 (X) ≥ (1 + ε) · ∆k (X).
Qualitatively, what the irreducibility assumption implies is that the optimal solution for the
(k − 1)-means problem does not give a (1+ ε)-approximation to the k-means problem. The following
useful lemmas are well known facts.
Lemma 5. For any dataset X ⊆ Rd and a point c ∈ Rd , we have:
Φ({c}, X) = Φ(µ(X), X) + |X|·||c − µ(X)||2 .
Lemma 6 (Approximate Triangle Inequality). For any three points p, q, r ∈ Rd , we have
||p − q||2 ≤ 2(||p − r||2 +||r − q||2 )
Let X1 , ..., Xk be optimal clusters of the dataset X for the k-means objective. Let µ1 , ..., µk
denote the corresponding optimal k centers. That is, ∀i, µi = µ(Xi). For all i, let mi = |Xi|. Also, for all i, let ri = (Σ_{x∈Xi} ||x − µi||²)/mi. The following useful lemma holds due to irreducibility.5
5 This is Lemma 4 from [JKS14]. We give the proof here for self-containment.
Lemma 7. For all 1 ≤ i < j ≤ k, ||µi − µj ||2 ≥ ε · (ri + rj ).
Proof. For the sake of contradiction, assume that ||µi − µj ||2 < ε · (ri + rj ). WLOG assume that
mi > mj . We have:
Φ({µi}, Xi ∪ Xj) = mi ri + mj rj + mj ||µi − µj||² (using Lemma 5)
≤ mi ri + mj rj + mj · ε · (ri + rj) (by the assumption)
≤ (1 + ε) · mi ri + (1 + ε) · mj rj (since mi > mj)
≤ (1 + ε) · Φ({µi, µj}, Xi ∪ Xj)
This implies that the center set {µ1, ..., µk} \ {µj} gives a (1 + ε)-approximation to the k-means objective, which contradicts the (k, ε)-irreducibility assumption on the data. ⊓⊔
Consider the algorithm Query-k-means in Table 2. Let Ci = {c1 , ..., ci } denote the set of centers
at the end of the ith iteration of the for loop. That is, Ci is the same as variable C at the end of
iteration i. We will prove Theorem 7 by inductively arguing that for every i, there are i distinct
clusters for which centers in Ci are good in some sense. Consider the following invariant:
P(i): There exists a set of i distinct clusters Xj1 , Xj2 , ..., Xji such that
∀r ∈ {1, ..., i}, Φ({cr}, Xjr) ≤ (1 + ε/16) · ∆1(Xjr).
Note that a trivial consequence of P(i) is Φ(Ci, Xj1 ∪ ... ∪ Xji) ≤ (1 + ε/16) · Σ_{r=1}^{i} ∆1(Xjr). We
will show that for all i, P (i) holds with probability at least (1 − 1/k)i . Note that the theorem
follows if P (k) holds with probability at least (1 − 1/k)k . We will proceed using induction. The
base case P (0) holds since C0 is the empty set. For the inductive step, assuming that P (i) holds
with probability at least (1 − 1/k)i for some arbitrary i ≥ 0, we will show that P (i + 1) holds with
probability at least (1 − 1/k)i+1 . We condition on the event P (i) (that is true with probability
at least (1 − 1/k)i ). Let Ci and Xj1 , ..., Xji be as guaranteed by the invariant P (i). For ease of
notation and without loss of generality, let us assume that the index jr is r. So, Ci gives a good
approximation w.r.t. points in the set X1 ∪ X2 ∪ .... ∪ Xi and these clusters may be thought of as
“covered” clusters (in the approximation sense). Suppose we D 2 -sample a point p w.r.t. center set
Ci . The probability that p belongs to some “uncovered cluster” Xr where r ∈ [i + 1, k] is given
as Φ(Ci, Xr)/Φ(Ci, X). If this quantity is small, then the points sampled using D 2 -sampling in subsequent
iterations may not be good representatives for the uncovered clusters. This may cause the analysis
to break down. However, we argue that since our data is (k, ε)-irreducible, this does not occur. 6
Lemma 8. Φ(Ci, Xi+1 ∪ ... ∪ Xk)/Φ(Ci, X) ≥ ε/4.
Proof. For the sake of contradiction, assume that the above statement does not hold. Then we
have:
Φ(Ci, X) = Φ(Ci, X1 ∪ ... ∪ Xi) + Φ(Ci, Xi+1 ∪ ... ∪ Xk)
≤ Φ(Ci, X1 ∪ ... ∪ Xi) + ((ε/4)/(1 − ε/4)) · Φ(Ci, X1 ∪ ... ∪ Xi) (using our assumption)
= (1/(1 − ε/4)) · Φ(Ci, X1 ∪ ... ∪ Xi)
≤ ((1 + ε/16)/(1 − ε/4)) · Σ_{j=1}^{i} ∆1(Xj) (using the invariant)
≤ (1 + ε) · Σ_{j=1}^{k} ∆1(Xj)
This contradicts the (k, ε)-irreducibility of X. ⊓⊔
6 This is Lemma 5 in [JKS14]. We give the proof for self-containment.
The following simple corollary of the above lemma will be used in the analysis later.
Corollary 1. There exists an index j ∈ {i + 1, ..., k} such that Φ(Ci, Xj)/Φ(Ci, X) ≥ ε/4k.
The above corollary says that there is an uncovered cluster which will have a non-negligible
representation in the set S that is sampled in iteration (i + 1) of the algorithm Query-k-means.
The next lemma shows that conditioned on sampling from an uncovered cluster l ∈ {i + 1, ..., k},
the probability of sampling a point x from Xl is at least ε/64 times its sampling probability if it were sampled uniformly from Xl (i.e., with probability at least (ε/64) · (1/ml)). 7
Lemma 9. For any l ∈ {i + 1, ..., k} and x ∈ Xl, Φ(Ci, {x})/Φ(Ci, Xl) ≥ (ε/64) · (1/ml).
Proof. Let t ∈ {1, ..., i} be the index such that x is closest to ct among all centers in Ci . We have:
Φ(Ci, Xl) = ml · rl + ml · ||µl − ct||² (using Lemma 5)
≤ ml · rl + 2ml · (||µl − µt||² + ||µt − ct||²) (using Lemma 6)
≤ ml · rl + 2ml · (||µl − µt||² + (ε/16) · rt) (using the invariant and Lemma 5)
Also, we have:
Φ(Ci, {x}) = ||x − ct||² ≥ ||x − µt||²/2 − ||µt − ct||² (using Lemma 6)
≥ ||µl − µt||²/8 − ||µt − ct||² (since ||x − µt|| ≥ ||µl − µt||/2)
≥ ||µl − µt||²/8 − (ε/16) · rt (using the invariant and Lemma 5)
≥ ||µl − µt||²/16 (using Lemma 7)
Combining the inequalities obtained above, we get the following:
Φ(Ci, {x})/Φ(Ci, Xl) ≥ (||µl − µt||²/16) / (ml · (rl + 2||µl − µt||² + (ε/8) · rt))
≥ (1/(16 · ml)) · 1/((1/ε) + 2 + (1/8)) ≥ (ε/64) · (1/ml)
This completes the proof of the lemma. ⊓⊔
With the above lemmas in place, let us now get back to the inductive step of the proof. Let
J ⊆ {i + 1, ..., k} denote the subset of indices (from the uncovered cluster indices) such that
∀j ∈ J, Φ(Ci, Xj)/Φ(Ci, X) ≥ ε/8k. For any index j ∈ J, let Yj ⊆ Xj denote the subset of points in Xj such that ∀y ∈ Yj, Φ(Ci, {y})/Φ(Ci, Xj) ≤ 2/mj. That is, Yj consists of all the points such that the conditional probability of sampling any point y in Yj, given that a point is sampled from Xj, is upper bounded by 2/mj. Note that from Lemma 9, the conditional probability of sampling a point x from Xj, given that a point is sampled from Xj, is lower bounded by (ε/64) · (1/mj). This gives the following simple and useful lemma.
7 This is Lemma 6 from [JKS14]. We give the proof for self-containment.
Lemma 10. For all j ∈ {i + 1, ..., k} the following two inequalities hold:
1. Φ(Ci, Yj)/Φ(Ci, X) ≥ (ε/128) · Φ(Ci, Xj)/Φ(Ci, X), and
2. For any y ∈ Yj and any x ∈ Xj, (ε/128) · Φ(Ci, {y}) ≤ Φ(Ci, {x}).
Proof. Inequality (1) follows from the fact that |Yj| ≥ mj/2, and Φ(Ci, {x})/Φ(Ci, Xj) ≥ (ε/64) · (1/mj) for all x ∈ Xj. Inequality (2) follows from the fact that for all x ∈ Xj, Φ(Ci, {x})/Φ(Ci, Xj) ≥ (ε/64) · (1/mj) and for all y ∈ Yj, Φ(Ci, {y})/Φ(Ci, Xj) ≤ 2/mj. ⊓⊔
Let us see the outline of our plan before continuing with our formal analysis. What we hope to
get in line (3.2) of the algorithm is a point s that belongs to one of the uncovered clusters with index
in the set J. That is, s belongs to an uncovered cluster that is likely to have a good representation
in the D 2 -sampled set S obtained in line (3.1). In addition to s belonging to Xj for some j ∈ J, we
would like s to belong to Yj . This is crucial for the uniform sampling in line (3.3) to succeed. We
will now show that the probability that the s returned in line (3.2) satisfies the above conditions is large.
Lemma 11. Let S denote the D 2 -sample obtained w.r.t. center set Ci in line (3.1) of the algorithm.
Pr[∃j ∈ J such that S does not contain any point from Yj] ≤ 1/4k.
Proof. We will first bound the probability for a fixed j ∈ J and then use the union bound. From property (1) of Lemma 10, we have that for any j ∈ J, Φ(Ci, Yj)/Φ(Ci, X) ≥ (ε/128) · (ε/8k) = ε²/(2^10 · k). Since the number of sampled points is N = (2^12 · k³)/ε², we get that the probability that S has no points from Yj is at most 1/(4k²). Finally, using the union bound, we get the statement of the lemma. ⊓⊔
Lemma 12. Let S denote the D 2 -sample obtained w.r.t. center set Ci in line (3.1) of the algorithm
and let Sj denote the representatives of Xj in S. Let max = arg max_{j∈{i+1,...,k}} |Sj|. Then Pr[max ∉ J] ≤ 1/4k.
Φ(C ,X )
j
ε
Proof. From Corollary 1, we know that there is an index j ∈ {i + 1, ..., k} such that Φ(Cii ,X)
.
≥ 4k
ε
Let α = N · 4k . The expected number of representatives from Xj in S is at least α. So, from
Chernoff bounds, we have:
Pr[|Sj |≤ 3α/4] ≤ e−α/32
On the other hand, for any r ∈ {i + 1, ..., k} \ J, the expected number of points in S from Xr is at
ε
· N = α/2. So, from Chernoff bounds, we have:
most 8k
Pr[|Sr |> 3α/4] ≤ e−α/24
So, the probability that there exists such an r is at most k · e−α/24 by union bound. Finally, the
probability that max ∈
/ J is at most Pr[|Sj |≤ 3α/4] + Pr[∃r ∈ {i + 1, ..., k} \ J||Sr |> 3α/4] which
12 3
1
⊔
⊓
is at most 4k due to our choice of N = (2 ε2)k .
1
From the previous two lemmas, we get that with probability at least (1 − 2k
), the s returned
in line (3.2) belongs to Yj for some j ∈ J. Finally, we will need the following claim to argue that
the set T returned in line (3.3) is a uniform sample from one of the sets Xj for j ∈ {i + 1, ..., k}.
Lemma 13. Let S denote the D 2 -sample obtained w.r.t. center set Ci in line (3.1) and s be the point returned in line (3.2) of the algorithm. Let j denote the index of the cluster to which s belongs. If j ∈ J and s ∈ Yj, then with probability at least (1 − 1/4k), T returned in line (3.3) is a uniform sample from Xj with size at least 64k/ε.
Proof. Consider the call to sub-routine UniformSample(X, Ci, s) with s as given in the statement of the lemma. If j is the index of the cluster to which s belongs, then j ∈ J and s ∈ Yj. Let us define L random variables Z1, ..., ZL, one for every iteration of the sub-routine. These random variables are defined as follows: for any r ∈ [1, L], if the sampled point x belongs to the same cluster as s and it is picked to be included in multi-set T, then Zr = x, otherwise Zr = ⊥. We first note that for any r and any x, y ∈ Xj, Pr[Zr = x] = Pr[Zr = y]. This is because for any x ∈ Xj, we have
Pr[Zr = x] = (Φ(Ci, {x})/Φ(Ci, X)) · (ε/128) · (Φ(Ci, {s})/Φ(Ci, {x})) = (ε/128) · Φ(Ci, {s})/Φ(Ci, X).
It is important to note that (ε/128) · Φ(Ci, {s})/Φ(Ci, {x}) ≤ 1 from property (2) of Lemma 10 and hence valid in the probability calculations above.
Let us now obtain a bound on the size of T. Let Tr = I(Zr) be the indicator variable that is 1 if Zr ≠ ⊥ and 0 otherwise. Using the fact that j ∈ J, we get that for any r:
E[Tr] = Pr[Tr = 1] = (ε/128) · (Σ_{x∈Xj} Φ(Ci, {s}))/Φ(Ci, X) ≥ (ε/128) · (ε/8k) · (ε/64) = ε³/(2^16 · k).
Given that L = (2^23 · k²)/ε⁴, applying Chernoff bounds, we get the following:
Pr[|T| ≥ 64k/ε] = 1 − Pr[|T| ≤ 64k/ε] ≥ 1 − 1/4k.
This completes the proof of the lemma.
⊔
⊓
Since a suitable s (as required by the above lemma) is obtained in line (3.2) with probability at least (1 − 1/2k), the probability that T obtained in line (3.3) is a uniform sample from some uncovered cluster Xj is at least (1 − 1/2k) · (1 − 1/4k). Finally, the probability that the centroid µ(T) of the multi-set T that is obtained is a good center for Xj is at least (1 − 1/4k) using Inaba's lemma. Combining
1
everything, we get that with probability at least (1 − k ) an uncovered cluster will be covered in the
ith iteration. This completes the inductive step and hence the approximation guarantee of (1 + ε)
holds for any dataset that satisfies the (k, ε)-irreducibility assumption. For the number of queries
and running time, note that every time sub-routine UncoveredCluster is called, it uses at most
kN same cluster queries. For the subroutine UniformSample, the number of same-cluster queries
made is L. So, the total number of queries is O(k(kN + L)) = O(k5 /ε4 ). More specifically, we have
proved the following theorem.
Theorem 8. Let 0 < ε ≤ 1/2, k be any positive integer, and X ⊆ Rd such that X is (k, ε)irrducible. Then Query-k-means(X, k, ε) runs in time O(ndk5 /ε4 ) and with probability at least
(1/4) outputs a center set C such that Φ(C, X) ≤ (1 + ε) · ∆k (X). Moreover, the number of samecluster queries used by Query-k-means(X, k, ε) is O(k5 /ε4 ).
To complete the proof of Theorem 7, we must remove the irreducibility assumption in the above
theorem. We do this by considering the following two cases:
ε
1. Dataset X is (k, (4+ε/2)k
)-irreducible.
ε
2. Dataset X is not (k, (4+ε/2)k )-irreducible.
In the former case, we can apply Theorem 8 to obtain Theorem 7. Now, consider the latter
ε
case. Let 1 < i ≤ k denote the largest index such that X is (i, (1+ε/2)k
)-irreducible, otherwise i = 1.
Then we have:
k−i
ε
ε
· ∆k (X).
· ∆k (X) ≤ 1 +
∆i (X) ≤ 1 +
(4 + ε/2)k
4
This means that a (1 + ε/4)-approximation for the i-means problem on the dataset X gives the
desired approximation for the k-means problem. Note that our approximation analysis works for the
i-means problem with respect to the algorithm being run only for i steps in line (3) (instead of k).
That is, the centers sampled in the first i iterations of the algorithm give a (1+ε/16)-approximation
for the i-means problem for any fixed i. This simple observation is sufficient for Theorem 7. Note
since Theorem 8 is used with value of the error parameter as O(ε/k), the bounds on the query and
running time get multiplied by a factor of k4 .
4
Query Lower Bound (proof of Theorem 4)
In this section, we will obtain a conditional lower bound on the number of same-cluster queries
assuming the Exponential Time Hypothesis (ETH). This hypothesis has been used to obtain lower
bounds in various different contexts (see [Man16] for reference). We start by stating the Exponential
Time Hypothesis (ETH).
Hypothesis 1 (Exponential Time Hypothesis (ETH)[IP01, IPZ01]): There does not exist
an algorithm that can decide whether any 3-SAT formula with m clauses is satisfiable with
running time 2o(m) .
Since we would like to obtain lower bounds in the approximation domain, we will need a gap
version of the above ETH hypothesis. The following version of the PCP theorem will be very useful
in obtaining a gap version of ETH.
Theorem 9 (Dinur’s PCP Theorem [Din07]). For some constants ε, d > 0, there exists a
polynomial time reduction that takes a 3-SAT formula ψ with m clauses and produces another
3-SAT formula φ with m′ = O(m polylog m) clauses such that:
– If ψ is satisfiable, then φ is satisfiable,
– if ψ is unsatisfiable, then val(φ) ≤ 1 − ε, and
– each variable of φ appears in at most d clauses.
Here val(φ) denotes the maximum fraction of clauses of φ that are satisfiable by any assignment.
The following new hypothesis follows from ETH and will be useful in our analysis.
Hypothesis 2: There exists constants ε, d > 0 such that the following holds: There does
not exist an algorithm that, given a 3-SAT formula ψ with m clauses with each variable
appearing in atmost d clauses, distinguishes whether ψ is satisfiable or val(ψ) ≤ (1 − ε),
Ω
runs in time 2
m
poly log m
.
The following simple lemma follows from Dinur’s PCP theorem given above.
Lemma 14. If Hypothesis 1 holds, then so does Hypothesis 2.
We now see a reduction from the gap version of 3-SAT to the gap version of the Vertex Cover
problem that will be used to argue the hardness of the k-means problem. The next result is a
standard reduction and can be found in a survey by Luca Trevisan [Tre04].
Lemma 15. Let ε, d > 0 be some constants. There is a polynomial time computable function
mapping 3-SAT instances ψ with m variables and where each variable appears in at most d clauses,
into graphs Gψ with 3m vertices and maximum degree O(d) such that if ψ is satisfiable, then Gψ
has a vertex cover of size at most 2m and if val(ψ) ≤ (1 − ε), then every vertex cover of Gψ has
size at least 2m(1 + ε/2).
We formulate the following new hypothesis that holds given that hypothesis 2 holds. Eventually,
we will chain all these hypothesis together.
Hypothesis 3: There exists constants ε, d > 0 such that the following holds: There does not
exist an algorithm that, given a graph G with n vertices and maximum degree d, distinguishes
between the case when G has a vertex cover of size at most
2n/3 and the case when G has
a vertex cover of size at least
2n
3
Ω
· (1 + ε), runs in time 2
n
poly log n
.
The following lemma is a simple implication of Lemma 15
Lemma 16. If Hypothesis 2 holds, then so does Hypothesis 3.
We are getting closer to the k-means problem that has a reduction from the vertex cover problem
on triangle-free graphs [ACKS15]. So, we will need reductions from vertex cover problem to vertex
cover problem on triangle-free graphs and then to the k-means problem. These two reductions are
given by Awasthi et al. [ACKS15].
Lemma 17 (Follows from Theorem 21 [ACKS15]). Let ε, d > 0 be some constants. There is
a polynomial-time computable function mapping any graph G = (V, E) with maximum degree d to
a triangle-free graph Ĝ = (V̂ , Ê) such that the following holds:
3 2
–
|V̂ |= poly(d,1/ε)· |V | and maximum
degree
of verticesin Ĝ is O(d /ε ), and
Ĝ)|
≤ 1 − |V C(
.
– 1 − |V C(G)|
≤ (1 + ε) · 1 − |V C(G)|
|V |
|V |
|V̂ |
Here V C(G) denote the size of the minimum vertex cover of G.
We can formulate the following hypothesis that will follow from Hypothesis 3 using the above
lemma.
Hypothesis 4: There exists constants ε, d > 0 such that the following holds: There does
not exist an algorithm that, given a triangle-free graph G with n vertices and maximum
2n
degree d, distinguishes between the case when G has a vertex cover of size at most
3 and
the case when G has a vertex cover of size at least
2n
3
Ω
· (1 + ε), runs in time 2
The next claim is a simple application of Lemma 17.
Lemma 18. If Hypothesis 3 holds, then so does Hypothesis 4.
n
poly log n
.
Finally, we use the reduction from the vertex cover problem in triangle-free graphs to the kmeans problem to obtain the hardness result for the k-means problem. We will use the following
reduction from Awasthi et al. [ACKS15].
Lemma 19 (Theorem 3 [ACKS15]). There is an efficient reduction from instances of Vertex
Cover (in triangle free graphs) to those of k-means that satisfies the following properties:
– if the Vertex Cover instance has value k, then the k-means instance has cost at most (m − k)
– if the Vertex Cover instance has value at least k(1 + ε), then the optimal k-means cost is at least
m − (1 − Ω(ε))k. Here ε is some fixed constant > 0.
Here m denotes the number of edges in the vertex cover instance.
The next hypothesis follows from Hypothesis 4 due to the above lemma.
Hypothesis 5: There exists constant c > 1 such that the following holds: There does not
exist an algorithm that gives
guarantee of c for the k-means problem that
an approximation
Ω
runs in time poly(n, d) · 2
k
poly log k
.
Claim. If Hypothesis 4 holds, then so does Hypothesis 5.
Now using Lemmas 14, 16, 18, and 4, get the following result.
Lemma 20. If the Exponential Time Hypothesis (ETH) holds then there exists a constant c > 1
such that any c-approximation
algorithm for the k-means problem cannot have running time better
Ω
than poly(n, d) · 2
k
poly log k
.
This proves Theorem 4 since if there is a query algorithm that runs in time poly(n, d, k) and
makes polyklog k same-cluster queries, then we can convert it to a non-query algorithm that runs in
k
time poly(n, d) · 2 poly log k in a brute-force manner by trying out all possible answers for the queries
and then picking the best k-means solution.
5
Query Approximation Algorithm with Faulty Oracle
In this section, we describe how to extend our approximation algorithm for k-means clustering
in the SSAC framework when the oracle is faulty. That is, the answer to the same-cluster query
may be incorrect. Let us denote the faulty same-cluster oracle as OE . We consider the following
error model: for a query with points u and v, the query answer OE (u, v) is wrong independently
with probability at most q that is strictly less than 1/2. More specifically, if u and v belong to the
same optimal cluster, then OE (u, v) = 1 with probability at least (1 − q) and OE (u, v) = 0 with
probability at most q. Similarly, if u and v belong to different optimal clusters, then OE (u, v) = 1
with probability at most q and OE (u, v) = 0 with probability at least (1 − q).
The modified algorithm giving (1 + ε)-approximation for k-means with faulty oracle OE is given
in Figure 3. Let X1 , . . . , Xk denote the k optimal clusters for the dataset X. Let C = {c1 , . . . , ci }
denote the set of i centers chosen by the algorithm at the end of iteration i. Let S denote the
sample obtained using D 2 -sampling in the (i + 1)st iteration. The key idea for an efficient (1 + ε)approximation algorithm for k-means in the SSAC framework with a perfect oracle was the following.
Given a sample S, we could compute using at most k|S| same-cluster queries the partition S1 , . . . , Sk
of S among the k optimal clusters such that Sj = S ∩ Xj for all j. In the following, we discuss how
we achieve this partitioning of S among k optimal clusters when the oracle OE is faulty.
We reduce the problem of finding the partitions of S among the optimal clusters to the problem
of recovering dense (graph) clusters in a stochastic block model (SBM). An instance of an SBM is
created as follows. Given any arbitrary partition V1 , . . . , Vk of a set V of vertices, an edge is added
between two vertices belonging to the same partition with probability at least (1 − q) and between
two vertices in different partitions with probability at most q. We first construct an instance I
of an SBM using the sample S. By querying the oracle OE with all pairs of points u, v in S, we
obtain a graph I on |S| vertices, where vertices in I correspond to the points in S, and an edge
exists in I between vertices u and v if OE (u, v) = 1. Since oracle OE errs with probability at most
q, for any u, v ∈ Sj for some j ∈ [k], there is an edge between u and v with probability at least
(1 − q). Similarly, there is an edge (u, v) ∈ I for any two points u ∈ Sy and v ∈ Sz , y 6= z belonging
to different optimal clusters with probability at most q. Note that the instance I created in this
manner would be an instance of an SBM. Since q < 1/2, this procedure, with high probability,
creates more edges between vertices belonging to the same partition than the number of edges
between vertices in different partitions. Intuitively, the partitions of S would correspond to the
dense (graph) clusters in SBM instance I, and if there were no errors, then each partition would
correspond to a clique in I. One way to figure out the partitions S1 , . . . , Sk would be to retrieve the
dense (graph) clusters from the instance I. Ailon et al. [ACX15] gave a randomized algorithm to
retrieve all large clusters of any SBM instance. Their algorithm on a graph of n vertices retrieves
√
all clusters of size at least n with high probability. Their main result in our context is given as
follows.
13
3
23
2
Constants: N = (2 ε2)k , M = 64k
, L = (2 ε4)k
ε
Faulty-Query-k-means(X, k, ε)
UncoveredCluster(C, S, J)
(1) J ← ∅
- For all i ∈ {1, ..., k}: Si ← ∅
(2) C ← ∅
-i←1
(3) for i = 1 to k
- For all y ∈ J: {Si ← y; i++}
(3.1) D2 -sample a multi-set S of N points
- T1 , . . . , Tl = PartitionSample(S) for l < k
from X with respect to center set C
- for j = 1, . . . , l
(3.2) s ← UncoveredCluster(C, S, J)
- if IsCovered(C, Tj ) is FALSE
(3.3) T ← UniformSample(X, C, s)
- if ∃t such that St = ∅, then St = Tj
(3.4) If (|T |< M ) continue
- Let Si be the largest set such that i > |J|
(3.5) J ← J ∪ {s}
- Let s ∈ Si be the element with smallest
(3.6) C ← C ∪ µ(T )
value of Φ(C, {s}) in Si
(4) return(C)
- return(s)
UniformSample(X, C, s)
-S←∅
- For i = 1 to L:
PartitionSample(S)
- D2 -sample point x ∈ X with respect to center set C - Construct SBM instance I by querying OE (s, t) ∀s, t ∈ S
- U = U ∪ {x}
- Run cluster recovery algorithm of Ailon et al. [ACX15] on I
- T1 , . . . , Tl = PartitionSample(U ) for l < k
- Return T1 , . . . , Tl for l < k
- for j = 1, . . . , l
- If (IsCovered(s, Tj ) is TRUE)
- ∀x ∈ Tj , with probability
in multi-set S
- return (S)
ε
128
·
Φ(C,{s})
Φ(C,{x})
add x IsCovered(C,U)
- for c ∈ C
– if for majority of u ∈ U , OE (c, u) = 1, Return TRUE
- Return FALSE
Table 3. Approximation algorithm for k-means (top-left frame) using faulty oracle. Note that µ(T ) denotes
the centroid of T and D2 -sampling w.r.t. empty center set C means just uniform sampling. The algorithm
UniformSample(X, C, s) (bottom-left) returns a uniform sample of size Ω(1/ε) (w.h.p.) from the optimal cluster
in which point s belongs.
Lemma 21 ([ACX15]). There exists a polynomial time algorithm that, given an instance of a
√
stochastic block model on n vertices, retrieves all clusters of size at least Ω( n) with high probability,
provided q < 1/2.
We use Lemma 21 to retrieve the large clusters from our SBM instance I. We also need to make
sure that the sample S is such that its overlap with at least one uncovered optimal cluster is large,
where an optimal cluster Sj for some j is uncovered
if C ∩ Sj = ∅. More formally, we would require
p
the following: ∃j ∈ [k] such that |Sj |≥ Ω( |S|), and Xj is uncovered by C. From Corollary 1,
given a set of centers C with |C|< k, there exists an uncovered cluster such that any point sampled
ε
using D 2 -sampling would belong to that uncovered cluster with probability at least 4k
. Therefore,
ε
2
in expectation, D -sample S would contain at least 4k |S| points from one such uncovered optimal
p
2
cluster. In order to ensure that this quantity is at least as large as |S|, we need |S|= Ω( 16k
ε2 ).
Our bounds for N and L, in the algorithm, for the size of D 2 -sample S satisfy this requirement
with high probability. This follows from a simple application of Chernoff bounds.
12 2
Lemma 22. For D 2 -sample S of size at least 2 ε2k , there is at least one partition Sj = S ∩ Xj
among the partitions returned by the sub-routine PartitionSample corresponding to an uncovered
1
cluster Xj with probability at least (1 − 16k
).
Proof. From Corollary 1, for any point p sampled using D 2 -sampling, the probability that point p
ε
. In expectation, the number of points sampled
belongs to some uncovered cluster Xj is at least 4k
10
ε|S|
2 k
from uncovered cluster Xj is E[|Sj |] = 4k = ε . Exact recovery using Lemma 21 requires |Sj | to
6
1
be at least 2 εk . Using Chernoff bounds, the probability of this event is at least (1 − 16k
).
⊔
⊓
Following Lemma 22, we condition on the event that there is at least one partition corresponding
to an uncovered cluster among the partitions returned by the sub-routine PartitionSample. Next,
we figure out using the sub-routine IsCovered which of the partitions returned by PartitionSample
are covered and which are uncovered. Let T1 , . . . , Tl be the partitions returned by PartitionSample
where l < k. Sub-routine IsCovered determines whether a cluster is covered or uncovered in the
following manner. For each j ∈ [l], we check whether Tj is covered by some c ∈ C. We query oracle
OE with pairs (v, c) for v ∈ Tj and c ∈ C. If majority of the query answers for some c ∈ C is 1, we
say cluster Tj is covered by C. If for all c ∈ C and some Tj , the majority of the query answers is
0, then we say Tj is uncovered by C. Using Chernoff bounds, we show that with high probability
uncovered clusters would be detected.
Lemma 23. With probability at least (1 −
correctly by the sub-routine IsCovered.
1
16k ),
all covered and uncovered clusters are detected
Proof. First, we figure out the probability that any partition Tj for j ∈ [l] is detected correctly
as covered or uncovered. Then we use union bound to bound the probability that all clusters
are detected correctly. Recall that each partition returned by PartitionSample has size at least
6
|Tj | ≥ 2 εk for j ∈ [l]. We first compute for one such partition Tj and some center c ∈ C, the
probability that majority of the queries OE (v, c) where v ∈ Tj are wrong. Since each query answer
is wrong independently with probability q < 1/2, in expectation the number of wrong query answers
would be q|Tj |. Using Chernoff bound, the probability that majority of the queries is wrong is at
most e−
1 2
26 k
(1− 2q
)
3ε
. The probability that the majority of the queries is wrong for at least one center
6
1 2
− 23εk (1− 2q
)
c ∈ C is at most ke
probability at least (1 −
6
. Again using union bound all clusters are detected correctly with
1 2
2 k
k2 e− 3ε (1− 2q ) )
≥ (1 −
1
16k ).
⊔
⊓
1
With probability at least (1 − 8k
), given a D 2 -sample S, we can figure out the largest uncovered optimal cluster using the sub-routines PartitionSample and IsCovered. The analysis of the
Algorithm 3 follows the analysis of Algorithm 2. For completeness, we compute the probability of
success, and the query complexity of the algorithm. Note that s in line (3.2) of the Algorithm 3
1
1
is chosen correctly with probability (1 − 4k
)(1 − 8k
). The uniform sample in line (3.3) is chosen
1
1
properly with probability (1 − 4k )(1 − 8k ). Since, given the uniform sample, success probability
1
), overall the probability of success becomes (1 − k1 ). For
using Inaba’s lemma is at least (1 − 4k
6
query complexity, we observe that PartitionSample makes O( kε8 ) same-cluster queries to oracle
4
OE , query complexity of IsCovered is O( kε4 ). Since PartitionSample is called at most k times,
7
total query complexity would be O( kε8 ). Note that these are bounds for dataset that satisfies (k, ε)irreducibility condition. For general dataset, we will use O(ε/k) as the error parameter. This causes
the number of same-cluster queries to be O(k15 /ε8 ).
6
Conclusion and Open Problems
This work explored the power of the SSAC framework defined by Ashtiani et al. [AKBD16] in the
approximation algorithms domain. We showed how same-cluster queries allowed us to convert the
popular k-means++ seeding algorithm into an algorithm that gives constant approximation for the
k-means problem instead of the O(log k) approximation guarantee of k-means++ in the absence of
such queries. Furthermore, we obtained an efficient (1+ε)-approximation algorithm for the k-means
problem within the SSAC framework. This is interesting because it is known that such an efficient
algorithm is not possible in the classical model unless P = NP.
Our results encourages us to formulate similar query models for other hard problems. If the
query model is reasonable (as is the SSAC framework for center-based clustering), then it may
be worthwhile exploring its powers and limitations as it may be another way of circumventing the
hardness of the problem. For instance, the problem closest to center-based clustering problems such
as k-means is the correlation clustering problem. The query model for this problem may be similar
to the SSAC framework. It will be interesting to see if same-cluster queries allows us to design
efficient algorithms and approximation algorithms for the correlation clustering problem for which
hardness results similar to that of k-means is known.
References
ABS12.
ABV14.
ACKS15.
ACX15.
ADK09.
AKBD16.
Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation stability.
Information Processing Letters, 112(1):49 – 54, 2012.
Pranjal Awasthi, Maria-Florina Balcan, and Konstantin Voevodski. Local algorithms for interactive
clustering. In ICML, pages 550–558, 2014.
Pranjal Awasthi, Moses Charikar, Ravishankar Krishnaswamy, and Ali Kemal Sinop. The Hardness
of Approximation of Euclidean k-Means. In Lars Arge and János Pach, editors, 31st International
Symposium on Computational Geometry (SoCG 2015), volume 34 of Leibniz International Proceedings
in Informatics (LIPIcs), pages 754–767, Dagstuhl, Germany, 2015. Schloss Dagstuhl–Leibniz-Zentrum
fuer Informatik.
Nir Ailon, Yudong Chen, and Huan Xu. Iterative and active graph clustering using trace norm minimization without cluster size constraints. Journal of Machine Learning Research, 16:455–490, 2015.
Ankit Aggarwal, Amit Deshpande, and Ravi Kannan. Adaptive sampling for k-means clustering. In
Irit Dinur, Klaus Jansen, Joseph Naor, and Jos Rolim, editors, Approximation, Randomization, and
Combinatorial Optimization. Algorithms and Techniques, volume 5687 of Lecture Notes in Computer
Science, pages 15–28. Springer Berlin Heidelberg, 2009.
Hassan Ashtiani, Shrinu Kushagra, and Shai Ben-David. Clustering with same-cluster queries. In NIPS.
2016.
ANFSW16. Sara Ahmadian, Ashkan Norouzi-Fard, Ola Svensson, and Justin Ward. Better guarantees for k-means
and euclidean k-median by primal-dual algorithms. arXiv preprint arXiv:1612.07925, 2016.
AV07.
David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Proceedings
of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, SODA ’07, pages 1027–1035,
Philadelphia, PA, USA, 2007. Society for Industrial and Applied Mathematics.
BB08.
Maria-Florina Balcan and Avrim Blum. Clustering with interactive feedback. In International Conference
on Algorithmic Learning Theory, pages 316–328. Springer Berlin Heidelberg, 2008.
BBG09.
Maria-Florina Balcan, Avrim Blum, and Anupam Gupta. Approximate clustering without the approximation. In Proc. ACM-SIAM Symposium on Discrete Algorithms, pages 1068–1077, 2009.
BBM04.
Sugato Basu, Arindam Banerjee, and Raymond J. Mooney. Active semi-supervision for pairwise constrained clustering. In Proceedings of the Fourth SIAM International Conference on Data Mining, Lake
Buena Vista, Florida, USA, April 22-24, 2004, pages 333–344, 2004.
BJA16.
Anup Bhattacharya, Ragesh Jaiswal, and Nir Ailon. Tight lower bound instances for k-means++ in two
dimensions. Theor. Comput. Sci., 634(C):55–66, June 2016.
BR13.
Tobias Brunsch and Heiko Rglin. A bad instance for k-means++. Theoretical Computer Science, 505:19
– 26, 2013.
CAKM16. Vincent Cohen-Addad, Philip N. Klein, and Claire Mathieu. Local search yields approximation schemes
for k-means and k-median in euclidean and minor-free metrics. 2016 IEEE 57th Annual Symposium on
Foundations of Computer Science (FOCS), 00:353–364, 2016.
Das08.
Sanjoy Dasgupta. The hardness of k-means clustering. Technical Report CS2008-0916, Department of
Computer Science and Engineering, University of California San Diego, 2008.
Din07.
Irit Dinur. The pcp theorem by gap amplification. J. ACM, 54(3), June 2007.
FMS07.
Dan Feldman, Morteza Monemizadeh, and Christian Sohler. A PTAS for k-means clustering based on
weak coresets. In Proceedings of the twenty-third annual symposium on Computational geometry, SCG
’07, pages 11–18, New York, NY, USA, 2007. ACM.
FRS16.
Zachary Friggstad, Mohsen Rezapour, and Mohammad R. Salavatipour. Local search yields a PTAS for
k-means in doubling metrics. 2016 IEEE 57th Annual Symposium on Foundations of Computer Science
(FOCS), 00:365–374, 2016.
IKI94.
Mary Inaba, Naoki Katoh, and Hiroshi Imai. Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering: (extended abstract). In Proceedings of the tenth annual symposium
on Computational geometry, SCG ’94, pages 332–339, New York, NY, USA, 1994. ACM.
IP01.
Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-sat. Journal of Computer and
System Sciences, 62(2):367 – 375, 2001.
IPZ01.
Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential
complexity? Journal of Computer and System Sciences, 63(4):512 – 530, 2001.
JKS14.
Ragesh Jaiswal, Amit Kumar, and Sandeep Sen. A simple D2 -sampling based PTAS for k-means and
other clustering problems. Algorithmica, 70(1):22–46, 2014.
JKY15.
Ragesh Jaiswal, Mehul Kumar, and Pulkit Yadav. Improved analysis of D2 -sampling based PTAS for
k-means and other clustering problems. Information Processing Letters, 115(2):100 – 103, 2015.
KSS10.
Amit Kumar, Yogish Sabharwal, and Sandeep Sen. Linear-time approximation schemes for clustering
problems in any dimensions. J. ACM, 57(2):5:1–5:32, February 2010.
LSW17.
Euiwoong Lee, Melanie Schmidt, and John Wright. Improved and simplified inapproximability for kmeans. Information Processing Letters, 120:40 – 43, 2017.
Man16.
Pasin Manurangsi. Almost-polynomial ratio ETH-hardness of approximating densest k-subgraph. CoRR,
abs/1611.05991, 2016.
MNV12.
Meena Mahajan, Prajakta Nimbhorkar, and Kasturi Varadarajan. The planar k-means problem is NPhard. Theor. Comput. Sci., 442:13–21, July 2012.
ORSS13.
Rafail Ostrovsky, Yuval Rabani, Leonard J. Schulman, and Chaitanya Swamy. The effectiveness of
lloyd-type methods for the k-means problem. J. ACM, 59(6):28:1–28:22, January 2013.
Tre04.
Luca Trevisan. Inapproximability of combinatorial optimization problems. CoRR, cs.CC/0409043, 2004.
Vat09.
Andrea Vattani. The hardness of k-means clustering in the plane. Technical report, Department of
Computer Science and Engineering, University of California San Diego, 2009.
VBR+ 14.
Konstantin Voevodski, Maria-Florina Balcan, Heiko Röglin, Shang-Hua Teng, and Yu Xia. Efficient
clustering with limited distance information. CoRR, abs/1408.2045, 2014.
VD16.
Sharad Vikram and Sanjoy Dasgupta. Interactive bayesian hierarchical clustering. In Proceedings of
the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June
19-24, 2016, pages 2081–2090, 2016.
| 1 |
New avenue to the Parton Distribution Functions:
Self-Organizing Maps
arXiv:0810.2598v2 [hep-ph] 2 Nov 2008
H. Honkanena,b,∗ , S. Liutia,†
a
Department of Physics, University of Virginia, P.O. Box 400714,
Charlottesville, VA 22904-4714, USA
b
Department of Physics and Astronomy, Iowa State University,
Ames, IA 50011, USA
J. Carnahanc , Y. Loitierec , P. R. Reynoldsc
c
Department of Computer Science, School of Engineering,
University of Virginia, P.O. Box 400740 Charlottesville, VA 22904-4740,
USA
Abstract
Neural network algorithms have been recently applied to construct Parton
Distribution Function (PDF) parametrizations which provide an alternative
to standard global fitting procedures. We propose a technique based on an
interactive neural network algorithm using Self-Organizing Maps (SOMs).
SOMs are a class of clustering algorithms based on competitive learning
among spatially-ordered neurons. Our SOMs are trained on selections of
stochastically generated PDF samples. The selection criterion for every optimization iteration is based on the features of the clustered PDFs. Our main
goal is to provide a fitting procedure that, at variance with the standard
neural network approaches, allows for an increased control of the systematic
bias by enabling user interaction in the various stages of the process.
PACS numbers: 13.60.Hb, 12.38.Bx, 84.35.+i
∗
†
[email protected]
[email protected]
1
Introduction
Modelling experimental data always introduces bias, in the form of either a
theoretical or systematical bias. The former is introduced by researchers with
the precise structure of the model they use, which invariably constrains the
form of the solutions. The latter form of bias is introduced by algorithms,
such as optimization algorithms, which may favour some results in ways
which are not justified by their objective functions, but rather depend on the
internal operation of the algorithm.
In this paper we concentrate on high energy hadronic interactions, which
are believed to be described by Quantum Chromodynamics (QCD). Because
of the properties of factorization and asymptotic freedom of the theory, the
cross sections for a wide number of hadronic reactions can be computed using perturbation theory, as convolutions of perturbatively calculable hard
scattering coefficients, with non perturbative Parton Distribution Functions
(PDFs) that parametrize the large distance hadronic structure. The extraction of the PDFs from experiment is inherently affected by a bias, which
ultimately dictates the accuracy with which the theoretical predictions can
be compared to the high precision measurements of experimental observables.
In particular, the form of bias introduced by PDFs will necessarily impact
the upcoming searches of physics beyond the Standard Model at the Large
Hadron Collider (LHC). This situation has in fact motivated an impressive
body of work, and continuous, ongoing efforts to both estimate and control
PDFs uncertainties.
Currently, the established method to obtain the PDFs is the global analysis,
a fitting procedure, where initial scale Q0 ∼ 1GeV ≤ Qmin
dat ansatze, as a
function of the momentum fraction x, for each parton flavour i in hadron h are
evolved to higher scales according to the perturbative QCD renormalization
group equations. All the available observables e.g. the proton structure
function, F2p (x, Q2 ), are composed of the candidate PDFs and comparison
with the data is made with the help of some statistical estimator such as the
global χ2 ,
2
χ =
Ne
XX
(Datai − Theori ) Vij−1 (Dataj − Theorj ) ,
expt. i,j=1
1
(1)
where the error matrix Vij consists of the statistical and uncorrelated systematic errors, as well as of the correlated systematic errors when available. The
parameters in the ansatze are then adjusted and the whole process repeated
until a global minimum has been found.
The modern PDF collaborations (CTEQ [1] and references within , MRST [2–
4], Alekhin [5, 6], Zeus [7] and H1 [8]) also provide error estimates for the PDF
sets. They all rely on some kind of variant of the Hessian method (see e.g.
[9] for details), which is based on a Taylor expansion of the global χ2 around
it’s minimum. When only the leading terms are kept, the displacement of
χ2 can be written in terms of Hessian matrix Hij , which consists of second
derivatives of χ2 with respect to the parameter displacements, evaluated at
the minimum. The error estimate for the parameters themselves, or for any
quantity that depends on those parameters, can then be obtained in terms
of the inverse of the Hessian matrix,
(∆X)2 = ∆χ2
X ∂X
∂yi
i,j
H −1
∂X
.
ij ∂y
j
(2)
For details of PDF uncertainty studies see e.g. Refs. [10, 11].
The global analysis combined with Hessian error estimation is a powerful
method, allowing for both extrapolation outside the kinematical range of the
data and extension to multivariable cases, such as nuclear PDFs (nPDFs)
[12–15]. In principle, when more data become available, the method could
also be applied to Generalized Parton Distributions (GPDs), for which only
model-dependent [16] or semi model-dependent [17, 18] solutions presently
exist.
However, there are uncertainties related to the method itself, that are difficult
to quantify, but may turn out to have a large effect. Choosing global χ2 as a
statistical estimator may not be adequate since the minimum of the global fit
may not correspond to a minima of the individual data sets, and as a result
the definition of ∆χ2 may be ambiguous. Estimates for the current major
global analyses are that ∆χ2 = 50 − 100 is needed to obtain a ∼ 90% confidence interval [1, 2]. In principle this problem could be avoided by using the
Lagrange multiplier method (see e.g.[19]), which does not assume quadratic
behaviour for the errors around the minimum, instead of the Hessian method,
2
but this is computationally more expensive solution. Introducing a functional
form at the initial scale necessarily introduces a parametrization dependence
bias and theoretical assumptions behind the fits, such as s, s̄, c quark content, details of the scale evolution (e.g. higher order perturbative corrections,
large/small x resummation), higher twists etc. as well as the data selection
and treatment, e.g. kinematical cuts, all reflect into the final result of the
analysis. Also, there may be systematical bias introduced by the optimization algorithm. The differences between the current global PDF sets tend to
be larger than the estimated uncertainties [20], and these differences again
translate to the predictions for the LHC observables, such as Higgs [21] or
W ± and Z production cross sections [1].
A new, fresh approach to the PDF fitting has recently been proposed by
NNPDF collaboration [22, 23] who have replaced a typical functional form
ansatz with a more complex standard neural network (NN) solution and the
Hessian method with Monte Carlo sampling of the data (see the references
within [23] for the nonsinglet PDF fit and the details of the Monte Carlo
sampling).
Figure 1: Schematic diagram of a feed-forward neural network, from [24].
Neural network can be described as a computing solution that consists of
interconnected processing elements, neurons, that work together to produce
an output function. In a typical feed-forward NN (see Fig. 1) the output is
given by the neurons in the last layer, as a non-linear function of the output
of all neurons in the previous layer, which in turn is a function of the output
of all neurons in the previous layer, and so on, starting from the first layer,
which receives the input. For a NN with L layers and nl neurons in each
PL−1
layer, the total number of the parameters is l=1
(nl nl+1 + nl+1 ).
3
In the beginning of the NNPDF fitting procedure a Monte Carlo sample
of replicas of the experimental data is generated by jittering the central
values of the data withing their errorbars using univariate Gaussian (or some
other distribution if desired) random numbers for each independent error
source. The number of the replicas is made so large that the Monte Carlo
set of replicas models faithfully the probability distribution of the original
data. For each replica a Genetic Algorithm (GA) fit is performed by first
setting the NN parameters for each parton flavour to be fitted randomly,
then making clones of the set of parameters, and mutating each of them
randomly (multiple mutations). After scale evolution the comparison with
the data is performed for all the clones, and the best clones are selected for
a source of new clones, and the process repeated until the minimum for the
χ2 has been found. Overfitting of the data is prevented by using only part of
the data in the minimizing procedure, and using the other part to monitor
the behaviour of the χ2 . When fitting PDFs one thus ends up with Nrep
PDF sets, each initial scale parton distribution parametrized by a different
NN. The quality of the global fit is then given by the χ2 computed from the
averages over the sample of trained neural networks. The mean value of the
parton distribution at the starting scale for a given value of x is found by
averaging over the replicas, and the uncertainty on this value is the variance
of the values given by the replicas.
The NNPDF method circumvents the problem of choosing a suitable ∆χ2 ,
and it relies on GA which works on a population of solutions for each MC
replica, thus having a lowered possibility of getting trapped in local minima.
NN parametrizations are also highly complex, with large number of parameters, and thus unbiased compared to the ansatze used in global fits. The
estimated uncertainties for NNPDF fits are larger than those of global fits,
possibly indicating that the global fit uncertainties may have been underestimated. It should, however, be pointed out that the MC sampling of the data
is not not tied to use of NNs, and it thus remains undetermined whether the
large uncertainties would persist if the MC sampling was used with a fixed
functional form. The complexity of NN results may also induce problems, especially when used in a purely automated fitting procedure. Since the effect
of modifying individual NN parameters is unknown, the result may exhibit
4
strange or unwanted behaviour in the extrapolation region, or in between
the data points if the data is sparse. In such a case, and in a case of incompatible data, the overfitting method is also unsafe to use. Implementation of
information not given directly by the data, such as nonperturbative models,
lattice calculations or knowledge from prior work in general, is also difficult
in this approach.
A possible method of estimating the PDF uncertainties could also be provided by Bayesian statistical analysis, as preliminarily studied in [25, 26] and
explained in [27], in which the errors for the PDF parameters, or for an
observable constructed from the PDFs, are first encapsulated in prior probabilities for an enlarged set of model parameters, and posterior distributions
are obtained using computational tools such as Markov Chain Monte Carlo.
Similar to NNPDF approach, this method allows for an inclusion of nonGaussian systematic errors for the data.
In this introductory paper we propose a new method which relies on the
use of Self-Organizing Maps (SOMs), a subtype of neural network. The
idea of our method is to create means for introducing “Researcher Insight”
instead of “Theoretical bias”. In other words, we want to give up fully
automated fitting procedure and eventually develop an interactive fitting
program which would allow us to “take the best of both worlds”, to combine
the best features of both the standard functional form approach and the
neural network approach. In this first step, we solely concentrate on single
variable functions, free proton PDFs, but discuss the extension of the model
to multivariable cases. In Section 2 we describe the general features of the
SOMs, in Sections 3 and 4 we present two PDF fitting algorithms relying
on the use of SOMs and finally in Section 5 we envision the possibilities the
SOM method has to offer.
2
Self-Organizing Maps
The SOM is a visualization algorithm which attempts to represent all the
available observations with optimal accuracy using a restricted set of models. The SOM was developed by T. Kohonen in the early 1980’s ([28], see
also [29]) to model biological brain functions, but has since then developed
5
into a powerful computational approach on it’s own right. Many fields of
science, such as statistics, signal processing, control theory, financial analyses, experimental physics, chemistry and medicine, have adopted the SOM
as a standard analytical tool. SOMs have been applied to texture discrimination, classification/pattern recognition, motion detection, genetic activity
mapping, drug discovery, cloud classification, and speech recognition, among
others. Also, a new application area is organization of very large document
collections. However, applications in particle physics have been scarce so
far, and mostly directed to improving the algorithms for background event
rejection [30–32].
SOM consists of nodes, map cells, which are all assigned spatial coordinates,
and the topology of the map is determined by a chosen distance metric Mmap .
Each cell i contains a map vector Vi , that is isomorphic to the data samples
used for training of the neural network. In the following we will concentrate
on a 2-dimensional rectangular lattice for simplicity. A natural choice for the
P
topology is then L1 (x, y) = 2i=1 |xi − yi |, which also has been proved [33]
to be an ideal choice for high-dimensional data, such as PDFs in our case.
The implementation of SOMs proceeds in three stages: 1) initialization of
the SOM, 2) training of the SOM and 3) associating the data samples with
a trained map, i.e. clustering. During the initialization the map vectors are
chosen such that each cell is set to contain an arbitrarily selected sample of
either the actual data to be clustered, or anything isomorphic to them (see
Fig. 2 for an example). The actual training data samples, which may be
e.g. subset or the whole set of the actual data, are then associated with map
vectors by minimizing a similarity metric Mdata . We choose Mdata = L1 . The
map vector each data sample becomes matched against, is then the most
similar one to the data sample among all the other map vectors. It may
happen that some map vectors do no not have any samples associated with
them, and some may actually have many.
During the training the map vectors are updated by averaging them with the
data samples that fell into the cells within a given decreasing neighbourhood,
see Fig. 3. This type of training which is based on rewarding the winning
node to become more like data, is called competitive learning. The initial
value of a map vector Vi at SOM cell i then changes during the course of
6
Figure 2: A 2D grid SOM which cells get randomly associated with the type of
data samples we would like to study, such as nonsinglet PDFs or observables. At
this stage each cell gets associated with only one curve, the map vector.
training as
Vi (t + 1) = Vi (t) (1 − w(t) Nj,i (t)) + Sj (t) w(t) Nj,i(t)
(3)
where now Vi (t + 1) is the contents of the SOM cell i after the data sample
Sj has been presented on the map. The neighbourhood, the radius, within
which the map vectors are updated is given by the function Nj,i (t), centered
on the winner cell j. Thus even the map vectors in those cells that didn’t
find a matching data sample are adjusted, rewarded, to become more like
2
data. Typically Nj,i (t) = e−Mmap (j,i) /r(t) , where r(t) is a monotonously decreasing radius sequence. In the beginning of the training the neighbourhood
may contain the whole map and in the end it just consists of the cell itself.
Moreover, the updating is also controlled by w(t), which is a monotonously
decreasing weight sequence in the range [0, 1].
As the training proceeds the neighbourhood function eventually causes the
data samples to be placed on a certain region of the map, where the neighbouring map vectors are becoming increasingly similar to each other, and
the weight sequence w(t) furthermore finetunes their position.
In the end on a properly trained SOM, cells that are topologically close to
each other will have map vectors which are similar to each other. In the final
phase the actual data is matched against the map vectors of the trained map,
and thus get distributed on the map according to the feature that was used
7
Figure 3: (Colour online) Each data sample Si is associated with the one map
vector Vi it is most similar to. As a reward for the match, the winning map vector,
as well as its neighbouring map vectors, get averaged with the data associated
with the winning cell.
as Mdata . Clusters with similar data now emerge as a result of unsupervised
learning.
For example, a map containing RGB colour triplets would initially have
colours randomly scattered around it, but during the course of training it
would evolve into patches of colour which smoothly blend with each other,
see Fig. 4. This local similarity property is the feature that makes SOM
suitable for visualization purposes, thus facilitating user interaction with the
data. Since each map vector now represent a class of similar objects, the SOM
is an ideal tool to visualize high-dimensional data, by projecting it onto a
low-dimensional map clustered according to some desired similar feature.
SOMs, however, also have disadvantages. Each SOM, though identical in
size and shape and containing same type of data, is different. The clusters
may also split in such a way that similar type of data can be found in several different places on the map. We are not aware of any mathematical
or computational means of detecting if and when the map is fully trained,
and whether there occurs splitting or not, other than actually computing the
8
Figure 4: (Colour online) SOM containing RGB colour triplets getting trained.
Adapted from Ref. [34]
similarities between the neighbouring cells and studying them.
In this work we use the so-called batch-version of the training, in which all
the training data samples are matched against the map vectors before the
training begins. The map vectors are then averaged with all the training
samples within the neighbourhood radius simultaneously. The procedure is
repeated Nstep (free parameter to choose) times such that in every training
step the same set of training data samples is associated with the evolving
map and in Eq.(3) t now counts training steps. When the map is trained,
the actual data is finally matched against the map vectors. In our study
our training data are always going to be the whole data we want to cluster,
and the last training step is thus the clustering stage. The benefit of the
batch training compared to the incremental training, described earlier, is
that the training is independent of the order in which the training samples
are introduced on the map.
3
MIXPDF algorithm
In this approach our aim is to both i) to be able to study the properties of
the PDFs in a model independent way and yet ii) to be able to implement
knowledge from the prior works on PDFs, and ultimately iii) to be able to
guide the fitting procedure interactively with the help of SOM properties.
At this stage it is important to distinguish between the experimental data
9
and the training data of the SOM. When we are referring to measured data
used in the PDF fitting, such as F2 data, we always call it experimental data.
The SOM training data in this study is going to be a collection of candidate
PDF sets, produced by us, or something composed of them. A PDF set in
the following will always mean a set of 8 curves, one for each independent
¯ s = s̄, c = c̄ and b = b̄ in this simplified
parton flavour f = (g, uv , dv , ū, d,
introductory study), that are properly normalized such that
XZ 1
dxxff /p (x, Q2 ) = 1,
(4)
f
0
and conserve baryon number and charge
Z 1
Z 1
2
dxfuv /p (x, Q ) = 2,
dxfdv /p (x, Q2 ) = 1.
0
(5)
0
In order to proceed we have to decide how to create our candidate PDF
sets, decide the details of the SOMs, details of the actual fitting algorithm,
experimental data selection and details of the scale evolution.
In this introductory paper our aim is not to provide a finalised SOMPDF set,
but rather to explore the possibilities and restrictions of the method we are
proposing. Therefore we refrain from using “all the possible experimental
data” as used in global analyses, but concentrate on DIS structure function
data from H1 [35], BCDMS [36, 37] and Zeus [38], which we use without additional kinematical cuts or normalization factors (except rejecting the data
points below our initial scale). The parameters for the DGLAP scale evolution were chosen to be those of CTEQ6 (CTEQ6L1 for lowest order (LO))
[39], the initial scale being Q0 = 1.3 GeV. In next-to-leading order (NLO)
case the evolution code was taken from [40] (QCDNUM17 beta release).
We will start now with a simple pedagogical example, which we call MIXPDF
algorithm, where we use some of the existing PDF sets as material for new
candidate PDFs. At first, we will choose CTEQ6 [39], CTEQ5 [41], MRST02
[2, 42], Alekhin [5] and GRV98 [43] sets and construct new PDF sets from
them such that at the initial scale each parton flavour in the range x =
[10−5 , 1] is randomly selected from one of these five sets (we set the heavy
flavours to be zero below their mass thresholds). The sumrules on this new
10
set are then imposed such that the original normalization of uv and dv are
preserved, but the rest of the flavours are scaled together so that Eq.(4)
is fulfilled. In this study we accept the <few% normalization error which
results from the fact that our x-range is not x = [0, 1]. From now on we
call these type of PDF sets database PDFs. We randomly initialize a small
5 × 5 map with these candidate database PDFs, such that each map vector
Vi consists of the PDF set itself, and of the observables F2p (x, Q20 ) derived
from it. Next we train the map with Nstep = 5 batch-training steps with
training data that consists of 100 database PDFs plus 5 original “mother”
PDF sets, which we will call init PDFs from now on. We choose the similarity
criterion to be the similarity of observables F2p (x, Q2 ) with Mdata = L1 . The
similarity is tested at a number of x-values (equidistant in logarithmic scale
up to x ∼ 0.2, and equidistant in linear scale above that) both at the initial
scale and at all the evolved scales where experimental data exist. On every
training, after the matching, all the observables (PDFs) of the map vectors
get averaged with the observables (PDFs, flavor by flavor) matched within
the neighbourhood according to Eq. (3). The resulting new averaged map
vector PDFs are rescaled again (such that uv and dv are scaled first) to obey
the sumrules. From now on we will call these type of PDF sets map PDFs.
The map PDFs are evolved and the observables at every experimental data
scale are computed and compared for similarity with the observables from
the training PDFs.
After the training we have a map with 25 map PDFs and the same 105 PDF
sets we used to train the map. The resulting LO SOM is shown in Fig. 5,
with just F2p (x, Q20 )’s of each cell shown for clarity. The red curves in the
figure are the map F2’s constructed from the map PDFs, black curves are
the database F2’s and green curves are the init F2’s constructed from the init
PDFs (CTEQ6, CTEQ5, MRST02, Alekhin and GRV98 parametrizations).
It is obviously difficult to judge visually just by looking at the curves whether
the map is good and fully trained. One hint about possible ordering may be
provided by the fact that the shape of the F2p curve must correlate with the
χ2 /N against the experimental data. The distribution of the χ2 /N values
(no correlated systematic errors are taken into account for simplicity) of the
map PDFs, shown in each cell , does indeed seem somewhat organized.
11
1
2
3
4
2.46
2.20
4.36
4.98
3.28
2.49
2.38
1.54
2.59
4.04
3.22
2.51
1.45
2.75
6.55
5.51
4.27
3.26
2.00
9.49
7.24
4.72
4.58
2.32
4
3
2
1
0
0
LO 2.44
1. iteration
-5
10 10
-4
-3
10 10
-2
10
-1
1
1.4
1.2
1.0
0.8
0.6
0.4
0.2
0.0
Figure 5: (Colour online) Trained 1. iteration LO map. Red curves are map F2p ’s,
green curves init F2p ’s and black curves rest of the database F2p ’s. The number in
each cell is the χ2 /N of the map PDF.
Table 1 lists the χ2 /N values the original init PDFs obtain within the MIXPDF framework described above. Comparison of these values with the values
in Fig. 5 reveals that some of the map PDFs, as also some of the database
PDFs, have gained a χ2 /N comparable to or better than that of the init
PDFs.
Inspired by the progress, we start a second fitting iteration by selecting the
5 best PDF sets from the 25+5+100 PDF sets of the first iteration as our
new init PDFs (which are now properly normalized after the first iteration) to
generate database PDFs for a whole new SOM. Since the best PDF candidate
from the first iteration is matched on this new map as an unmodified init
PDF, it is guaranteed that the χ2 /N as a function of the iteration either
decreases or remains the same. We keep repeating the iterations until the
χ2 /N saturates. Fig. 6 (Case 1) shows the χ2 /N as a function of iterations
for the best PDF on the trained map, for the worst PDF on the map and for
12
PDF‡
Alekhin
CTEQ6
CTEQ5
CTEQ4
MRST02
GRV98
LO χ2 /N
3.34
1.67
3.25
2.23
2.24
8.47
NLO χ2 /N
29.1
2.02
6.48
2.41
1.89
9.58
Table 1: χ2 /N for different MIXPDF input PDF sets against all the datasets
used (H1, ZEUS, BCDMS, N=709).
the worst of the 5 PDFs selected for the subsequent iteration as an init PDF.
The final χ2 /N of these runs are listed in Table 2 (first row) as Case 1 and
Fig. 7 shows these results (black solid line), together with original starting
sets at the initial scale (note the different scaling for gluons in LO and in
NLO figures). For 10 repeated LO runs we obtain χ¯2 /N=1.208 and σ=0.029.
Let us now analyze in more detail how the optimization proceeds. Figs. 8,9
show the LO maps also for the 2. and 3. iterations. On the first iteration the
init PDFs, the shapes of which were taken from existing parametrizations, fall
in the cells (0,1) (CTEQ5), (1,3) (Alekhin), (1,4) (MRST02), (2,3) (CTEQ6)
and (3,0) (GRV98), so the modern sets, Alekhin, MRST02 and CTEQ6,
group close to each other, i.e. the shapes of the observables they produce are
very similar, as expected. The best 5 PDFs selected as the 2. iteration init
PDFs also come from this neighbourhood, 3 of them from the cell (1,3) and
the other 2 from the cell (2,3). Two of these selected sets are map PDFs,
two are database PDFs and also the original init PDF CTEQ6 survived for
the 2. iteration.
At the end of the 2. iteration the init PDFs, which originated from the
neighbouring cells, are scattered in the cells (0,1), (0.3), (1,0) (CTEQ6),
‡
These are the χ2 /N for the initial scale PDF sets taken from the quoted parametrizations and evolved with CTEQ6 DGLAP settings, the heavy flavours were set to be zero
below their mass thresholds, no kinematical cuts or normalization factors for the experimental data were imposed, and no correlated systematic errors of the data were used to
compute the χ2 /N . We do not claim these values to describe the quality of the quoted
PDF sets.
13
4.0
4.0
LO
3.5
NLO
Case 1:
best of 5
worst of 5
worst of map
5 5, Nstep=5
3.0
Case 1:
Case 2:
3.0
Case 2:
2
2.0
best of 5
worst of 5
worst of map
2.5
2.0
1.5
1.5
1.0
1.0
0.5
0.5
0
1
2
3
4
5
6
7
8
Iteration
0
1
2
3
4
5
6
7
8
Iteration
Figure 6: (Colour online) χ2 /N of the MIXPDF runs as a function of the iteration.
(2,2) and (2,3) and the best 5 PDFs produced during this iteration are in
the cells (4,1) (database PDF), (4,0) (map PDF), (3,2) (map PDF), (2,2)
(database PDF) and (3,0) (map PDF).
After the 3. iteration the above best 5 PDFs of 2. iteration are in the cells
(2,0), (3,1), (0.2), (0.4) and (3,2) and the new best 5 PDFs produced are all
map PDFs with 4 of them in neighbouring cell pairs.
Map PDFs provide complicated linear combinations of the database PDFs
and obviously play an important role in the algorithm. The size of the map
dictates how much the neighbouring map vectors differ from each other.
Since the PDFs in the same cell are not required to be similar, only the
observables constructed from them are, a cell or a neighbourhood may in
principle contain a spread of PDFs with a spread of χ2 /N’s. However, since
our selection criteria for the init PDFs was based on the best χ2 /N only,
it is inevitable that the observables on the map become increasingly similar
as the iterations go by, and the χ2 /N flattens very fast as can be seen from
Fig. 6. As a consequence we quickly lose the variety in the shapes of the
PDFs as the iterations proceed, and on the final iteration all the PDFs on
the map end up being very similar.
14
/N
best of 5
worst of 5
worst of map
2.5
2
/N
3.5
best of 5
worst of 5
worst of map
5 5, Nstep=5
SOM
5x5
5x5
5x5
5x5
5x5
5x5
5x5
5x5
15x15
15x15
15x15
15x15
Nstep
5
5
5
5
5
5
10
40
5
5
5
5
# init
5
5
10
10
15
20
10
10
5
5
30
30
# database
100
100
100
100
100
100
100
100
900
900
900
900
Case
1
2
1
2
1
1
1
1
1
2
1
2
LO χ2 /N
1.19
1.37
1.16
1.49
1.16
1.17
1.16
1.20
1.22
1.31
1.16
1.25
NLO χ2 /N
1.28
1.44
1.25
1.43
1.45
1.30
1.25
l.53
Table 2: χ2 /N against all the datasets used (H1, ZEUS, BCDMS) for some
selected MIXPDF runs.
The MIXPDF algorithm obviously has several other weaknesses too. Among
them are how the result would change if we started with another first iteration
PDF selection, and what are the effects of changing the size of the map,
number of the database PDFs and init PDF sets and size of Nstep ? In general,
how much the final result depends on the choices that we make during the
fitting process?
Let us now study some of these questions a bit. Since we have chosen our
evolution settings to be those of CTEQ6’s, it necessarily becomes a favoured
set (although we don’t impose any kinematical cuts on the experimental
data). Therefore we performed another LO and NLO runs, with CTEQ6
now replaced with CTEQ4 [44]. The results of these runs are reported in
Table 2 (2. row) and in Fig. 6 (χ2 /N) and Fig. 7 (the PDFs) as Case 2 .
The Case 2 clearly produces worse results. Without an input from CTEQ6
we automatically lose all the low gluons at small-x -type of results in NLO,
for example.
Fig. 10 addresses the issue of choosing different Nstep , for LO Case 1 and
Case 2. The solid (dotted) lines show the best (worst) init PDF selected
[Figure 7 plot: MIXPDF xuV, xu and scaled xg at Q = 1.3 GeV, LO and NLO panels, together with the input sets CTEQ6, Alekhin, MRST02, GRV98, CTEQ5 and CTEQ4; curves for 5×5 and 15×15 maps with Nstep = 5, Cases 1 and 2.]
Figure 7: (Colour online) MIXPDF results together with the input PDF sets.
from each iteration for several Nstep selections. In Case 2 the best results
are obtained with small number of training steps, whereas Case 1 does not
seem to benefit from a longer SOM training. Keeping the stochastical nature
of the process in our minds, we may speculate that the seemingly opposite
behaviour for the Case 1 and Case 2 results from the fact that it is more
probable to produce a good set of database PDFs in Case 1 than in Case 2.
If the database is not so good to begin with, the longer training contaminates
all the map PDFs with the low quality part of the database PDFs.
Table 2 also showcases the best results from a variety of MIXPDF runs where
we have tried different combinations of SOM features. It is interesting to
notice that increasing the size of the map and database does not necessarily
lead to a better performance. Instead, the number of init PDFs used on later
iterations (always 5 for the first iteration) seems to be a key factor: it has
to be sufficiently large to preserve the variety of the database PDFs but
small enough to consist of PDFs of good quality. In our limited space of
database candidates (∼ 6^5 = 7776 possibilities) the optimal set of variables
[Figure 8 plot: the trained 2. iteration LO 5×5 map; cell χ2/N values range from 1.33 to 1.76, panel labelled "LO 1.41, 2. iteration".]
Figure 8: (Colour online) Trained 2. iteration LO map, curves and numbers as in
Fig.5.
for the MIXPDF run should not be impossible to map down.
The method we used to generate the sample data is very simple indeed. The
number of all the possible candidate database PDFs is not very large to begin
with, so the quality of the final results strongly depends on the quality of
the input for the SOM. Since the map PDFs are obtained by averaging with
the training samples, and the non-valence flavours are scaled by a common
factor when imposing the sumrules, the map PDFs tend to lie in between the
init PDFs. Therefore map PDFs with extreme shapes are never produced,
and thus never explored by the algorithm.
A method which relies on sampling existing parametrizations on a SOM is
inconvenient also because it is not readily applicable to multivariable cases.
For the case of the PDFs it is sufficient to have a value for each flavour
for a discrete set of x-values, but for multivariable cases, such as nPDFs,
or GPDs the task of keeping track of the grid of values for each flavour in
each x for several different values of the additional variables (e.g. A and Z
[Figure 9 plot: the trained 3. iteration LO 5×5 map; cell χ2/N values range from 1.25 to 1.42, panel labelled "LO 1.34, 3. iteration".]
Figure 9: (Colour online) Trained 3. iteration LO map, curves and numbers as in
Fig.5.
for nuclei, and the skewness, ξ and the squared 4-momentum transfer, t for
GPDs) is computationally expensive. In principle, SOM can keep track of
the interrelations of the map vectors, and knowing the parametrization for
the init PDFs, it would be possible to construct the parametrization for the
map PDFs. That would, however, lead to very complicated parametrizations,
and different nPDF, GPD etc. parametrizations are presently not even
measured or defined at the same initial scale.
Despite its problems, on a more basic level, MIXPDF does have the
desirable feature that it allows us to use the SOM as a part of the PDF optimization algorithm in such a way that we cluster our candidate PDFs on
the map, and select those PDFs which i) minimize a chosen fitness function,
e.g. χ2 , when compared against experimental data, and ii) have some desired
feature which can be visualized on the map or used as a clustering criterion.
Therefore, in the following Section, we keep building on this example.
[Figure 10 plot: LO χ2/N vs. iteration for 5×5 SOM runs, Cases 1 and 2, with Nstep = 5, 10, 20, 30 and 40; best of 5 init PDFs shown as solid lines, worst of 5 as dashed lines.]
Figure 10: (Colour online) LO χ2 /N for 5 × 5 SOM runs with different Nstep .
4 ENVPDF algorithm
Most of the problems with the MIXPDF algorithm originate from the need
to be able to generate the database PDFs in as unbiased a way as possible, and
at the same time to have a variety of PDF candidates available at every stage
of the fitting procedure. Yet, one needs to have control over the features of
the database PDFs that are created.
To accomplish this, we choose, at variance with the “conventional” PDFs
sets or NNPDFs, to give up the functional form of PDFs and rather to rely
on purely stochastical methods in generating the initial and training samples
of the PDFs. Our choice is a GA-type analysis, in which our parameters are
the values of PDFs at the initial scale for each flavour at each value of x
where the experimental data exist. To obtain control over the shape of the
PDFs we use some of the existing distributions to establish an initial range,
or envelope, within which we sample the database PDF values.
Again, we use the Case 1 and 2 PDF sets (CTEQ6, CTEQ5, CTEQ4,
MRST02, Alekhin and GRV98) as an initialization guideline. We construct
our initial PDF generator first to, for each flavour separately, select randomly either the range [0.5, 1], [1.0, 1.5] or [0.75, 1.25] times any of the Case
1 (or 2) PDF sets. Compared to the MIXPDF algorithm we are thus adding more
freedom to the scaling of the database PDFs. Next, the initial generators generate values for each x-point of the data§ using a uniform, instead of Gaussian, distribution
around the existing parametrizations, thus reducing direct bias from them.
Gaussian smoothing is applied to the resulting set of points, and the flavours
combined to form a PDF set such that the curve is linearly interpolated from
the discrete set of generated points.
The candidate PDF sets are then scaled to obey the sumrules as in the MIXPDF
algorithm. In order to obtain a reasonable selection of PDFs to start with,
we reject candidates which have χ2 /N > 10 (computed as in the MIXPDF algorithm). To further avoid direct bias from the Case 1 and 2 PDFs, we don't
include the init PDFs into the training set for the first iteration as we did in
the MIXPDF case. For an N × N SOM we choose the size of the database to be 4N².
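The first-iteration generator can be summarized in a short sketch. The following Python fragment is only an illustration under our own naming and data layout (reference sets as per-flavour arrays on the data x-grid); the sum-rule rescaling and the χ2/N cut are indicated only as comments, so this is not the authors' code.

```python
# Illustrative sketch of the first-iteration ENVPDF candidate generator.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
RANGES = [(0.5, 1.0), (1.0, 1.5), (0.75, 1.25)]   # envelope choices from the text

def make_candidate(reference_sets, smooth_sigma=1.0):
    """Generate one candidate PDF set: one curve per flavour on the data x-grid."""
    candidate = {}
    for flavour in reference_sets[0]:                      # e.g. 'uv', 'dv', 'g', ...
        ref = reference_sets[rng.integers(len(reference_sets))][flavour]
        lo, hi = RANGES[rng.integers(len(RANGES))]
        values = rng.uniform(lo * ref, hi * ref)           # uniform sampling inside the envelope
        values = gaussian_filter1d(values, smooth_sigma)   # Gaussian smoothing
        candidate[flavour] = np.maximum(values, 0.0)       # keep PDFs positive
    return candidate   # afterwards: rescale to sum rules, reject if chi2/N > 10

def build_database(reference_sets, n_cells):
    db, target = [], 4 * n_cells ** 2                      # database size 4*N^2
    while len(db) < target:
        db.append(make_candidate(reference_sets))
    return db
```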
During the later iterations we proceed as follows: At the end of each iteration
we pick from the trained N × N SOM the 2N best PDFs as the init PDFs. These
init PDFs are introduced into the training set alongside the database
PDFs, which are now constructed using each of the init PDFs in turn as a
center for a Gaussian random number generator, which assigns for all the
flavours at each x a value around that same init PDF, such that the 1-σ width of
the generator is given by the spread of the best PDFs in the topologically
nearest neighbouring cells. The objective of these generators is thus to refine a
good candidate PDF found in the previous iteration by jittering its values
within a range determined by the shape of other good candidate PDFs from
the previous iteration. The generated PDFs are then smoothed and scaled
to obey the sumrules. Sets with χ2 /N > 10 are always rejected. We learnt
from the MIXPDF algorithm that it is important to preserve the variety of
the PDF shapes on the map, so we also keep Norig copies of the first iteration
generators in our generator mix.
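As an illustration of the later-iteration generators just described, the sketch below (our own naming, not the authors' code) jitters one init PDF inside the spread of its topological neighbours; sum-rule rescaling and the χ2/N > 10 rejection are again left to comments.

```python
# Illustrative sketch of a later-iteration ENVPDF generator.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)

def refine_candidate(init_pdf, neighbour_pdfs, smooth_sigma=1.0):
    """Jitter one init PDF set within the spread of its neighbouring-cell best PDFs."""
    candidate = {}
    for flavour, centre in init_pdf.items():
        neigh = np.stack([p[flavour] for p in neighbour_pdfs])
        sigma = neigh.std(axis=0) + 1e-12          # 1-sigma width from neighbouring cells
        values = rng.normal(loc=centre, scale=sigma)
        values = gaussian_filter1d(values, smooth_sigma)
        candidate[flavour] = np.maximum(values, 0.0)
    return candidate                               # then rescale to sum rules, cut chi2/N > 10
```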
§ To ensure a reasonable large-x behaviour for the PDFs, we also generate with the same method values for them in a few x-points outside the range of the experimental data. We also require the PDFs, the gluons especially, to be positive for simplicity.
Table 3 lists results from a variety of such runs. The results do not seem
to be very sensitive to the number of SOM training steps, Nstep, but are
highly sensitive to the number of first iteration generators used in subsequent iterations. Although the generators can now in principle produce an
infinite number of different PDFs, the algorithm would not be able to radically change the shape of the database PDFs without introducing a random
element on the map. Setting Norig > 0 provides, through map PDFs, that
element, and keeps the algorithm from getting fixed to a local minimum. The
ENVPDF algorithm is now more independent from the initial selection of the
PDF sets, Case 1 or 2, than MIXPDF, since no values of e.g. the CTEQ6
set in the original generator are ever introduced on the map directly.
SOM     Nstep   Norig   Case   LO χ2/N
5x5     5       2       1      1.04
5x5     10      2       1      1.10
5x5     20      2       1      1.10
5x5     30      2       1      1.10
5x5     40      2       1      1.08
5x5     5       0       1      1.41
5x5     20      0       1      1.26
5x5     5       2       2      1.14
5x5     10      2       2      1.12
5x5     15      2       2      1.18
15x15   5       6       1      1.00
15x15   5       6       2      1.13
NLO χ2/N (for the runs repeated at NLO): 1.08, 1.25, 1.07, 1.18
Table 3: χ2 /N against all the datasets used (H1, ZEUS, BCDMS) for variety
of ENVPDF runs.
Fig. 11 shows the χ2 /N as a function of iteration for 5x5 LO and NLO,
both Case 1 and Case 2, runs, where Nstep = 5 and Norig = 2. Clearly
the ENVPDF runs take many times the number of iterations needed by the MIXPDF runs
for the χ2/N to level off, and they are therefore more costly in time.
With the ENVPDF algorithm, however, the χ2 /N keeps on slowly improving
even after all the mother PDFs from the same iteration are equally good fits.
For a larger 15x15 SOM the number of needed iterations remains as large.
[Figure 11 plot: χ2/N vs. iteration (0–50) of the ENVPDF runs, LO and NLO panels, 5×5 map with Nstep = 5, Cases 1 and 2; curves: best of 10 and worst of 10 init PDFs.]
Figure 11: (Colour online) χ2 /N of the ENVPDF runs as a function of the iteration.
Fig. 12 shows some of the Case 1 LO ENVPDF results at the initial scale
Q = 1.3 GeV (left panel), and evolved up to Q = 3 GeV (right panel).
The reference curves shown are also evolved as in MIXPDF. Although the
initial scale ENVPDF results appear wiggly, they smooth out soon because
of the additional well known effect of QCD evolution. In fact, the initial scale
curves could be made smoother by applying a stronger Gaussian smoothing,
but this is not necessary, as long as the starting scale is below the Qmin of
the data. The evolved curves preserve the initially set baryon number scaling
within 0.5% and momentum sumrule within 1.5% accuracy. Also, the results
obtained from a larger map tend to be smoother since the map PDFs get
averaged with a larger number of other PDFs. Studying the relation between
the redundant wiggliness of our initial scale PDFs and possible fitting of
statistical fluctuations of the experimental data is beyond the scope of this
paper. The NLO Case 1 and 2 results are presented in Fig. 13. The trend
of the results is clearly the same as in MIXPDF case, CTEQ6 is a favoured
set, and especially the PDFs with gluons similar to those of CTEQ6’s have
good χ2 /N.
We did not study the effect of modifying the width or the shape of the
envelope in detail here, but choosing the envelope to be wider or narrower
than the 1-σ width of the Gaussian generator seems to lead to both slower and poorer
convergence. Also, since we are clustering on the similarity of the observables,
the same cell may in theory contain the best PDF of the iteration and PDFs
which have χ2 /N as large as 10. Therefore the shape of the envelope should
be determined only by the curves with promising shapes.
[Figure 12 plot: LO ENVPDF xuV, xu and scaled xg at Q = 1.3 GeV and Q = 3.0 GeV together with CTEQ6, MRST02 and CTEQ4; 5×5 and 15×15 maps, Nstep = 5, Cases 1 and 2.]
Figure 12: (Colour online) LO ENVPDF results at the initial scale, and at Q = 3.0
GeV.
Next we want to study the PDF uncertainty using the unique means the
SOMs provide to us even for a case of PDFs without a functional form.
Since we have only used DIS data in this introductory study, we are only
able to explore the small-x uncertainty for now. Figs. 14 (LO) and 15 (NLO)
showcase, besides our best results, the spread of all the initial scale PDFs
with χ2 /N ≤ 1.2, that were obtained during a 5 × 5 (left panel) and 15 × 15
(right panel) SOM run. Since the number of such PDF sets is typically of the
order of thousands, we only plot the minimum and maximum of the bundle
[Figure 13 plot: NLO ENVPDF xuV, xu and scaled xg at Q = 1.3 GeV and Q = 3.0 GeV together with CTEQ6, MRST02 and CTEQ4; 5×5 and 15×15 maps, Nstep = 5, Cases 1 and 2.]
Figure 13: (Colour online) NLO ENVPDF results at the initial scale, and at
Q = 3.0 GeV.
of curves. Since the total number of experimental datapoints used is ∼ 710,
the spread ∆χ2/N ∼ 0.2 corresponds to ∆χ2 ∼ 140. Expectedly, the small-x gluons obtain the largest uncertainty for all the cases we studied. Even
though a larger SOM with a larger database might be expected to have more
variety in the shapes of the PDFs, the χ2 /N ≤ 1.2 spreads of the 5 × 5 and
15 × 15 SOMs are more or less equal sized (the apparent differences in sizes
at Q = Q0 even out when the curves are evolved). Both maps therefore end
up producing the same extreme shapes for the map PDFs although a larger
map has more subclasses for them. Remarkably then, a single SOM run can
provide a quick uncertainty estimate for a chosen ∆χ2 without performing a
separate error analysis.
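The band construction described here is simple to express in code; the following sketch (our own, with assumed array containers, not the authors' implementation) keeps every curve with χ2/N ≤ 1.2 and returns the pointwise envelope.

```python
# Sketch of the chi2/N <= 1.2 uncertainty band: keep accepted curves, plot min/max.
import numpy as np

def spread_band(pdf_curves, chi2_per_point, cut=1.2):
    """pdf_curves: array (n_sets, n_x); chi2_per_point: array (n_sets,)."""
    keep = np.asarray(chi2_per_point) <= cut
    accepted = np.asarray(pdf_curves)[keep]
    return accepted.min(axis=0), accepted.max(axis=0)   # lower/upper envelope
```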
Due to the stochastical nature of the ENVPDF algorithm, we may well also
study the combined results from several separate runs. It is especially important to verify the stability of our results, to show that the results are indeed
reproducible instead of lucky coincidences. Left panels of Figs 16 (LO) and
[Figure 14 plot: LO ENVPDF best result and the χ2/N ≤ 1.2 spread (xuV, xu, 0.25·xg) at Q = 1.3 GeV, for a 5×5 (left) and a 15×15 (right) map with Nstep = 5, Case 1, compared with CTEQ6 and MRST02.]
Figure 14: (Colour online) LO ENVPDF best result and the χ2 /N ≤ 1.2 spread
of results.
17 (NLO) present the best results, and the combined χ2 /N ≤ 1.2 spreads
for 10 repeated 5 × 5, Nstep = 5 runs at the initial scale. The average χ2 /N
and the standard deviation σ for these runs in LO (NLO) are 1.065 and
0.014 (1.122 and 0.029), corresponding to ∆χ2 ∼ 10 (20) for LO (NLO).
The right panels of the same Figs 16, 17 show the 10 best result curves and
the χ2 /N ≤ 1.2 spreads evolved up to Q = 3.0 GeV. Clearly the seemingly
large difference between the small-x gluon results at the initial scale is not
statistically significant, but smooths out when gluons are evolved. Thus the
initial scale wiggliness of the PDFs is mainly only a residual effect from our
method of generating them and not linked to the overtraining of the SOM,
and we refrain from studying cases where stronger initial scale smoothing
is applied. Therefore our simple method of producing the candidate PDFs
by jittering random numbers inside a predetermined envelope is surprisingly
stable when used together with a complicated PDF processing that SOMs
provide.
[Figure 15 plot: NLO ENVPDF best result and the χ2/N ≤ 1.2 spread (xuV, xu, 0.85·xg) at Q = 1.3 GeV, for a 5×5 (left) and a 15×15 (right) map with Nstep = 5, Case 1, compared with CTEQ6 and MRST02.]
Figure 15: (Colour online) NLO ENVPDF best result and the χ2 /N ≤ 1.2 spread
of results.
5 Future of the SOMPDFs
So far we have shown a relatively straightforward method of obtaining stochastically generated, parameter-free, PDFs, with an uncertainty estimate for a
desired ∆χ2 . On every iteration using our competitive learning algorithm,
the selection of the winning PDFs was based on the χ2 /N alone, and the
fitting procedure was fully automated. In our MIXPDF algorithm the SOMs
were used merely as a tool to create new combinations, map PDFs, of our input database. The ENVPDF algorithm also used the topology of the map to
determine the shape of the envelope, within which we sampled the database
PDFs.
We reiterate that our initial study was aimed at observing and recording the
behavior of the SOM as an optimization tool. Many of the features of our
results could not in fact be predicted based on general assumptions. The
proposed method can be extended much further than that. The automated
version of the algorithm could be set to sample a vector consisting of PDF
[Figure 16 plot: LO ENVPDF best results and combined χ2/N ≤ 1.2 spreads from 10 separate 5×5, Nstep = 5, Case 1 runs, at Q = 1.3 GeV and Q = 3.0 GeV, compared with CTEQ6 and MRST02.]
Figure 16: (Colour online) LO ENVPDF best results and the χ2 /N ≤ 1.2 spreads
of results from 10 separate runs.
parameters, instead of values of PDFs in each value of x of the data. That
would lead to smooth, continuous type of solutions, either along the lines
of global analyses, or NNPDFs using N SOMs for N Monte-Carlo sampled
replicas of the data. Since the solution would be required to stay within an
envelope of selected width and shape given by the map, no restrictions for
the parameters themselves would be required. For such a method, all the
existing error estimates, besides an uncertainty band produced by the map,
would be applicable as well.
What ultimately sets the SOM method apart from the standard global analyses or NNPDF method, however, are the clustering and visualization possibilities that it offers. Instead of setting Mdata = L1 and clustering according
to the similarity of the observables, it is possible to set the clustering criteria to be anything that can be mathematically quantified, e.g. the shape of
the gluons or the large-x behaviour of the PDFs. The desired feature of the
PDFs can then be projected out from the SOM. Moreover, by combining the
[Figure 17 plot: NLO ENVPDF best results and combined χ2/N ≤ 1.2 spreads from 10 separate 5×5, Nstep = 5, Case 1 runs, at Q = 1.3 GeV and Q = 3.0 GeV, compared with CTEQ6 and MRST02.]
Figure 17: (Colour online) NLO ENVPDF best results and the χ2 /N ≤ 1.2
spreads of results from 10 separate runs.
method with an interactive graphic user interface (GUI), it would be possible to change and control the shape and the width of the envelope as the
minimization proceeds, to guide the process by applying researcher insight at
various stages of the process. Furthermore, the uncertainty band produced
by the SOM as the run proceeds, could help the user to make decisions about
the next steps of the minimization. With GUI it would be e.g. possible to
constrain the extrapolation of the NN generated PDFs outside the x-range
of the data without explicitly introducing terms to ensure the correct small- and large-x behaviour as in the NNPDF method (see Eq. (87) in [23]). The selection of the best PDF candidates for the subsequent iteration could then
be made based on the user’s preferences instead of solely based on the χ2 /N.
That kind of method in turn could be extended to multivariable cases such
as nPDFs and even GPDs and other not so well-known cases, where the data
is too sparse for stochastically generated, parameter-free, PDFs.
Generally, any PDF fitting method involves a large number of flexible points,
“opportunities for adapting and fine tuning”, which act as a source for
both systematical and theoretical bias when fixed. Obvious optimization
method independent sources of theoretical bias are the various parameters
of the DGLAP equations, inclusion of extra sources of Q2 -dependence beyond DGLAP-type evolution and the data selection, affecting the coverage
of different kinematical regions. SOMs themselves, and different SOMPDF
algorithm variations naturally also introduce flexible points of their own. We
explored a little about the effects of choosing the size of the SOM and the
number of the batch training steps Nstep . There are also plenty of other
SOM properties that can be modified, such as the shape of the SOM itself.
We chose to use a rectangular lattice, but generally the SOM can take any
shape desired. For demanding visualisation purposes a hexagonal shape is
an excellent choice, since the meaning of the nearest neighbours is better
defined.
The SOMPDF method, supplemented with the use of a GUI, will allow us
to both qualitatively and quantitatively study the flexible points involved
in the PDFs fitting. More complex hadronic matrix elements, such as the
ones defining the GPDs, are natural candidates for future studies of cases
where the experimental data are not numerous enough to allow for a model
independent fitting, and the guidance and intuition of the user is therefore
irreplaceable. The method we are proposing is extremely open to user interaction, and the possibilities of such a method are largely unexplored.
Acknowledgements
We thank David Brogan for helping us to start this project. This work
was financially supported by the US National Science Foundation grant
no.0426971. HH was also supported by the U.S. Department of Energy,
grant no. DE-FG02-87ER40371. SL is supported by the U.S. Department of
Energy, grant no. DE-FG02-01ER41200.
References
[1] P. M. Nadolsky et al., arXiv:0802.0007 [hep-ph].
[2] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Eur.
Phys. J. C 23 (2002) 73 [arXiv:hep-ph/0110215].
[3] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Phys.
Lett. B 604 (2004) 61 [arXiv:hep-ph/0410230].
[4] A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Phys. Lett. B
652 (2007) 292 [arXiv:0706.0459 [hep-ph]].
[5] S. Alekhin, Phys. Rev. D 68 (2003) 014002 [arXiv:hep-ph/0211096].
[6] S. Alekhin, K. Melnikov and F. Petriello, Phys. Rev. D 74 (2006) 054033
[arXiv:hep-ph/0606237].
[7] S. Chekanov et al. [ZEUS Collaboration], Eur. Phys. J. C 42 (2005) 1
[arXiv:hep-ph/0503274].
[8] C. Adloff et al. [H1 Collaboration], Eur. Phys. J. C 30 (2003) 1
[arXiv:hep-ex/0304003].
[9] J. Pumplin et al., Phys. Rev. D 65 (2002) 014013 [arXiv:hep-ph/0101032].
[10] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Eur.
Phys. J. C 35 (2004) 325 [arXiv:hep-ph/0308087].
[11] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Eur.
Phys. J. C 28 (2003) 455 [arXiv:hep-ph/0211080].
[12] K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 0807 (2008) 102
[arXiv:0802.0139 [hep-ph]].
[13] K. J. Eskola, V. J. Kolhinen, H. Paukkunen and C. A. Salgado, JHEP
0705 (2007) 002 [arXiv:hep-ph/0703104].
[14] M. Hirai, S. Kumano and T. H. Nagai, Phys. Rev. C 70 (2004) 044905
[arXiv:hep-ph/0404093].
[15] M. Hirai, S. Kumano and T. H. Nagai, Phys. Rev. C 76 (2007) 065207
[arXiv:0709.3038 [hep-ph]].
[16] M. Vanderhaeghen, P. A. M. Guichon and M. Guidal, Phys. Rev. D 60
(1999) 094017 [arXiv:hep-ph/9905372].
[17] S. Ahmad, H. Honkanen, S. Liuti and S. K. Taneja, Phys. Rev. D 75
(2007) 094003 [arXiv:hep-ph/0611046].
[18] S. Ahmad, H. Honkanen, S. Liuti and S. K. Taneja, arXiv:0708.0268
[hep-ph].
[19] D. Stump et al., Phys. Rev. D 65 (2002) 014012 [arXiv:hep-ph/0101051].
[20] J. Pumplin, AIP Conf. Proc. 792, 50 (2005) [arXiv:hep-ph/0507093].
[21] A. Djouadi and S. Ferrag, Phys. Lett. B 586 (2004) 345 [arXiv:hep-ph/0310209].
[22] M. Ubiali, arXiv:0809.3716 [hep-ph].
[23] R. D. Ball et al., arXiv:0808.1231 [hep-ph].
[24] L. Del Debbio, S. Forte, J. I. Latorre, A. Piccione and J. Rojo [NNPDF
Collaboration], JHEP 0703 (2007) 039 [arXiv:hep-ph/0701127].
[25] W. T. Giele, S. A. Keller and D. A. Kosower, arXiv:hep-ph/0104052.
[26] G. Cowan, Prepared for 14th International Workshop on Deep Inelastic
Scattering (DIS 2006), Tsukuba, Japan, 20-24 Apr 2006
[27] G. D’Agostini, “Bayesian Reasoning in Data Analysis: A Critical Introduction”, World Scientific Publishing, Singapore (2003).
[28] T. Kohonen, Self-organizing Maps, Third Edition, Springer (2001).
[29] http://www.cis.hut.fi/projects/somtoolbox
[30] J. S. Lange, H. Freiesleben and P. Hermanowski, Nucl. Instrum. Meth.
A 389 (1997) 214.
[31] K. H. Becks, J. Drees, U. Flagmeyer and U. Muller, Nucl. Instrum.
Meth. A 426 (1999) 599.
[32] J. S. Lange, C. Fukunaga, M. Tanaka and A. Bozek, Nucl. Instrum.
Meth. A 420 (1999) 288.
[33] C. C. Aggarwal, A. Hinneburg and D. A. Keim, Proceedings of the 8th International Conference on Database Theory, pp. 420–434, January 04–06, 2001.
[34] http://www.ai-junkie.com/ann/som/som1.html and
http://davis.wpi.edu/~matt/courses/soms/
[35] C. Adloff et al. [H1 Collaboration], Eur. Phys. J. C 21 (2001) 33
[arXiv:hep-ex/0012053].
[36] A. C. Benvenuti et al. [BCDMS Collaboration], Phys. Lett. B 223 (1989)
485.
[37] A. C. Benvenuti et al. [BCDMS Collaboration], Phys. Lett. B 237 (1990)
592.
[38] S. Chekanov et al. [ZEUS Collaboration], Eur. Phys. J. C 21 (2001) 443
[arXiv:hep-ex/0105090].
[39] J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and
W. K. Tung, JHEP 0207 (2002) 012 [arXiv:hep-ph/0201195].
[40] http://www.nikhef.nl/~h24/qcdnum/
[41] H. L. Lai et al. [CTEQ Collaboration], Eur. Phys. J. C 12 (2000) 375
[arXiv:hep-ph/9903282].
[42] A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, Phys.
Lett. B 531 (2002) 216 [arXiv:hep-ph/0201127].
[43] M. Gluck, E. Reya and A. Vogt, Eur. Phys. J. C 5 (1998) 461 [arXiv:hep-ph/9806404].
[44] H. L. Lai et al., Phys. Rev. D 55, 1280 (1997) [arXiv:hep-ph/9606399].
| 9 |
Power Grid Simulation using Matrix Exponential Method
with Rational Krylov Subspaces
arXiv:1309.5333v3 [cs.CE] 14 Oct 2013
Hao Zhuang, Shih-Hung Weng, and Chung-Kuan Cheng
Department of Computer Science and Engineering, University of California, San Diego, CA, 92093, USA
[email protected], [email protected], [email protected]
Abstract— One well-adopted power grid simulation methodology is to
factorize the matrix once and perform only backward/forward substitutions
with a deliberately chosen step size along the simulation. Since the
required simulation time is usually long for the power grid design, the
costly factorization is amortized. However, such fixed step size cannot
exploit larger step size for the low frequency response in the power grid
to speedup the simulation. In this work, we utilize the matrix exponential
method with the rational Krylov subspace approximation to enable
adaptive step size in the power grid simulation. The kernel operation
in our method only demands one factorization and backward/forward
substitutions. Moreover, the rational Krylov subspace approximation
can relax the stiffness constraint of the previous works [12][13]. The
cheap computation of adaptivity in our method could exploit the long
low-frequency response in a power grid and significantly accelerate the
simulation. The experimental results show that our method achieves up
to 18X speedup over the trapezoidal method with fixed step size.
I. INTRODUCTION
Power grid simulation is an essential and computationally heavy
task during VLSI design. Given the current stimulus and the power grid
structure, designers could verify and predict the worst-case voltage
noise through the simulation before signing off their design. However,
with the huge size of modern designs, power grid simulation is a time-consuming process. Moreover, manifesting effects from the package
and the board requires a longer simulation time, e.g., up to a few
µs, which worsens the performance of the power grid simulation.
Therefore, an efficient power grid simulation is always a demand
from industry.
Conventionally, the power grid simulation is based on the trapezoidal method where the major computation is to solve a linear
system by either iterative approaches or direct methods. The iterative
methods usually suffer from the convergence problem because of the
ill-conditioned matrix from the power grid design. On the other hand,
the direct methods, i.e., Cholesky or LU factorizations, are more general for solving a linear system. Despite the huge memory demand
and computational effort, with a carefully chosen step size, the power
grid simulation could perform only one factorization at the beginning
while the rest of operations are just backward/forward substitutions.
Since a power grid design usually includes board and package
models, a long simulation time is required to manifest the lowfrequency response. Hence, the cost of expensive factorization can
be amortized by many faster backward/forward substitutions. Such
general factorization and fixed step size strategy[14][15][16][17] is
widely adopted in industry.
The matrix exponential method (MEXP) for the circuit simulation
has better accuracy and adaptivity because of the analytical formulation and the scaling invariant Krylov subspace approximation[12][13].
Unlike the fixed step size strategy, MEXP could dynamically adjust
the step size to exploit the low-frequency response of the power grid
without expensive computation. However, the step size in MEXP is
limited by the stiffness of circuit. This constraint would drag the
overall performance of MEXP for the power grid simulation.
In this work, we tailor MEXP using rational Krylov subspace for
the power grid simulation with adaptive time stepping. The rational
Krylov subspace uses (I − γA)−1 as the basis instead of A used in
the conventional Krylov subspaces, where I is an identity matrix and
γ is a predefined parameter. The rational basis limits the spectrum of a
circuit system, and emphasizes small magnitude eigenvalues, which
are important to exponential function, so that the exponential of a
matrix can be accurately approximated. As a result, MEXP with
rational Krylov subspace can enjoy benefits of the adaptivity and
the accuracy of MEXP. Even though the rational Krylov subspace
still needs to solve a linear system as the trapezoidal method does,
MEXP can factorize the matrix only once and then constructs the
rest of rational Krylov subspaces by backward/forward substitutions.
Therefore, MEXP can utilize its capability of adaptivity to accelerate
the simulation with the same kernel operations as the fixed step
size strategy. Overall, our MEXP enables adaptive time stepping
for the power grid simulation with only one LU factorization, and
allows scaling large step size without compromising the accuracy.
The experimental results demonstrate the effectiveness of MEXP
with adaptive step size. The industrial power grid designs can be
accelerated 17X on average compared to the trapezoidal method.
The rest of paper is organized as follows. Section II presents
the background of the power grid simulation and MEXP. Sections III and IV show the theoretical foundation of the rational
Krylov subspace and our adaptive step scheme for the power grid
simulation, respectively. Section V presents experimental results of
several industrial power grid designs. Finally, Section VI concludes
this paper.
II. PRELIMINARY
A. Power Grid Formulation
A power grid can be formulated as a system of differential
equations via modified nodal analysis as below:
C ẋ(t) = −G x(t) + B u(t),   (1)
where matrix C describes the capacitance and inductance, matrix
G represents the resistance and the incidence between voltages and
currents, and matrix B indicates locations of the independent sources.
Vector x(t) describes the nodal voltages and branch currents at time
t, and vector u(t) represents the corresponding supply voltage and
current sources associated to different blocks. In this work, we assume
those input sources are given and in the format of piece-wise linear.
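As a concrete illustration of this formulation (not taken from the paper), the following sketch assembles C, G and B for a toy three-node RC ladder with illustrative element values and evaluates the right-hand side of Eq. (1).

```python
# Minimal sketch of Eq. (1) for a hypothetical 3-node RC ladder with one current source.
import numpy as np

r, c = 1.0, 1e-12                       # illustrative element values
G = np.array([[ 1/r, -1/r,   0 ],
              [-1/r,  2/r, -1/r],
              [  0 , -1/r,  1/r]])      # nodal conductance matrix
C = np.diag([c, c, c])                  # node-to-ground capacitances
B = np.array([[1.0], [0.0], [0.0]])     # current source injected at node 1

def dxdt(x, u_t):
    # x'(t) = C^{-1} (-G x + B u(t)); C is diagonal here, so the solve is cheap
    return np.linalg.solve(C, -G @ x + B @ u_t)
```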
B. Matrix Exponential Method
MEXP[12][13] is based on the analytical solution of (1). With
initial solution from the DC analysis via direct[3] or iterative
approaches[7], the equation of MEXP from t to t+h can be expressed
as
x(t + h) = e^{Ah} x(t) + \int_0^h e^{A(h-\tau)} b(t + \tau)\, d\tau.   (2)
where A = −C−1 G, and b(t) = C−1 Bu(t), when C is not
singular. Assuming that input u(t) is piece-wise linear (PWL), we
integrate the last term in (2) analytically, turning the solution into the
sum of three terms associated with matrix exponential operators,
x(t + h) = e^{Ah} x(t) + (e^{Ah} - I) A^{-1} b(t) + (e^{Ah} - (Ah + I)) A^{-2} \frac{b(t+h) - b(t)}{h}.   (3)
Eqn. (3) has three matrix exponential terms, which are generally
referred as ϕ-functions of the zero, first and second order [6]. It
has been shown in [1, Theorem 2.1] that one can obtain the sum of
them in one shot by computing the exponential of a slightly larger
(n+p)×(n+p) matrix, where n is the dimension of A, and p is the
order of the ϕ-functions (p = 2 in (3)). Thus, (3) can be rewritten
into
x(t + h) = \begin{bmatrix} I_n & 0 \end{bmatrix} e^{\tilde{A}h} \begin{bmatrix} x(t) \\ e_2 \end{bmatrix},   (4)
with
\tilde{A} = \begin{bmatrix} A & W \\ 0 & J \end{bmatrix}, \quad W = \begin{bmatrix} \frac{b(t+h) - b(t)}{h} & b(t) \end{bmatrix}, \quad J = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.   (5)
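The augmented-matrix update of Eqns. (4)–(5) can be prototyped directly with a dense matrix exponential. The sketch below is illustrative only: it assumes a small, non-singular C so that A = −C^{-1}G and b(t) = C^{-1}Bu(t) can be formed explicitly, which is exactly what the rational Krylov formulation of Section III avoids.

```python
# Illustrative dense prototype of one MEXP step via the augmented matrix (Eqns. (4)-(5)).
import numpy as np
from scipy.linalg import expm

def mexp_step(C, G, B, x, u_t, u_th, h):
    A = -np.linalg.solve(C, G)
    b_t  = np.linalg.solve(C, B @ u_t)
    b_th = np.linalg.solve(C, B @ u_th)
    n = A.shape[0]
    W = np.column_stack([(b_th - b_t) / h, b_t])        # W = [(b(t+h)-b(t))/h, b(t)]
    J = np.array([[0.0, 1.0], [0.0, 0.0]])
    A_tilde = np.block([[A, W], [np.zeros((2, n)), J]])
    v = np.concatenate([x, [0.0, 1.0]])                  # [x(t); e_2]
    y = expm(A_tilde * h) @ v
    return y[:n]                                         # x(t+h) = [I_n 0] e^{A~h} [x(t); e_2]
```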
To keep the notation simple, we use v to represent [x(t), e_2]^T in
the rest of the paper. Note that the kernel computation of
MEXP is to derive the exponential of a matrix, i.e., eA v, which is
approximated by the Krylov subspace method in works [13][12]. The
Krylov subspace method has better scalability mainly from its sparse
matrix-vector product centric computation. However, such approximation is only better for those eigenvalues with small magnitude,
which means the maximum step size of MEXP in a stiff circuit has
to be constrained in order to maintain the accuracy of approximation.
In the following section, we will present how the rational basis could
relax the stiffness constraint.
III. MEXP WITH RATIONAL KRYLOV SUBSPACE
In [13][12], Eqn. (4) is calculated via the Krylov subspace method
using Arnoldi process. The subspace is defined as
K_m(\tilde{A}, v) = span\{v, \tilde{A}v, \ldots, \tilde{A}^{m-1} v\},   (6)
where v is an initial vector. The Arnoldi process approximates the
eigenvalues with large magnitude well. But when handling a stiff
circuit system, the formed matrix usually contains many eigenvalues with small magnitude. Besides, e^{\tilde{A}h} is mostly determined by the
eigenvalues with smallest magnitudes and their corresponding invariant subspaces. In this scenario, due to the existence of eigenvalues
with large magnitude in \tilde{A}, the Arnoldi process for Eqn. (6) requires
large m to capture the important eigenvalues (small magnitudes)
and invariant subspaces for the exponential operator. Therefore, the time
steps in MEXP have to be small enough to capture the important
eigenvalues. This suggests transforming the spectrum to intensify
those eigenvalues with small magnitudes and corresponding invariant
subspaces. We make such transformation based on the idea of rational
Krylov subspace method[5][11]. The details are presented in the
following subsections.
A. Rational Krylov Subspaces Approximation of MEXP
For the purpose of finding the eigenvalues with smallest magnitude
first, we use the preconditioner (I - \gamma\tilde{A})^{-1} instead of using \tilde{A}
directly. This is known as the rational Krylov subspace [5][11]. The
formula for the rational Krylov subspace is
K_m((I - \gamma\tilde{A})^{-1}, v) = span\{v, (I - \gamma\tilde{A})^{-1} v, \ldots, (I - \gamma\tilde{A})^{-(m-1)} v\},   (7)
where γ is a predefined parameter. The Arnoldi process constructs
Vm+1 and Hm+1,m , and the relationship is given by
(I - \gamma\tilde{A})^{-1} V_m = V_m H_{m,m} + h_{m+1,m} v_{m+1} e_m^T,   (8)
where em is the m-th unit vector. Matrix Hm,m is the first m ×
m square matrix of an upper Hessenberg matrix of Hm+1,m , and
hm+1,m is its last entry. Vm consists of [v1 , v2 , · · · , vm ] , and
v_{m+1} is its last vector. After re-arranging (8) and given a time step
h, the matrix exponential e^{\tilde{A}h} v can be calculated as
e^{\tilde{A}h} v \approx V_m V_m^T e^{\tilde{A}h} v = \|v\|_2 V_m V_m^T e^{\tilde{A}h} V_m e_1 = \|v\|_2 V_m e^{\alpha \tilde{H}_{m,m}} e_1,   (9)
where \tilde{H}_{m,m} = I - H_{m,m}^{-1} and \alpha = h/\gamma is the adjustable parameter for
control of adaptive time step size in Section IV. Note that in practice,
instead of computing (I − γA)−1 directly, we only need to solve
(C + γG)−1 Cv, which can be achieved by one LU factorization
at beginning. Then the construction of the following subspaces is by
backward/forward substitutions.
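A minimal sketch of this shift-and-invert Arnoldi step is given below, under our own naming: it factorizes C + γG once with a sparse LU and reuses it for every basis vector, then evaluates Eqn. (9) with the small m×m exponential. Sources are omitted here; they would enter through the augmented matrices of Eqn. (11) in the same way.

```python
# Sketch of the rational (shift-and-invert) Krylov approximation of e^{Ah} v,
# A = -C^{-1} G, using one factorization of (C + gamma*G).  C, G: scipy sparse.
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import splu

def rational_krylov_expm(C, G, v, h, gamma, m):
    lu = splu((C + gamma * G).tocsc())          # one factorization, reused below
    beta = np.linalg.norm(v)
    V = np.zeros((v.size, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = lu.solve(C @ V[:, j])               # (I - gamma*A)^{-1} v_j = (C + gamma*G)^{-1} C v_j
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    H_tilde = np.eye(m) - np.linalg.inv(H[:m, :m])   # H~ = I - H^{-1}
    alpha = h / gamma
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(alpha * H_tilde) @ e1)   # Eqn. (9)
```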
This strategy is also presented in [5][11]. Intuitively, the “shift-and-invert” operation would intensify the eigenvalues with small magnitudes and minify the eigenvalues with large magnitudes. By doing
so, the Arnoldi process could capture those eigenvalues important to
the exponential operator, which originally cannot be manifested with
small m in the conventional Krylov subspace. We would like to point
out that the error bound for Eqn. (9) no longer depends on \|\tilde{A}h\| as in [13]; it depends only on the first (smallest magnitude) eigenvalue of \tilde{A}.
We observe that large α provides less absolute error under the same
dimension m. An intuitive explanation is also given by [11], the larger
α combined with exponential operators, the relatively smaller portion
of the eigenvalues with smallest magnitude determine the final vector.
Within the assumption of piecewise linear in Eqn. (3), our method
can step forward as much as possible to accelerate simulation, and
still maintain the high accuracy. The sacrifice resides in the small
time step when more eigenvalues determine the final vector. So we
should choose an appropriate parameter γ or increase the order m
to balance accuracy and efficiency. Even though increasing
m results in more backward/forward substitutions, m is still quite
small in the power grid simulation. Therefore, it does not degrade
our method too much.
The formula of posterior error estimation is required for controlling
adaptive step size. We use the formula derived from [11],
err(m, \alpha) = \frac{\|v\|_2}{\gamma}\, h_{m+1,m}\, (I - \gamma\tilde{A})\, v_{m+1}\, e_m^T H_{m,m}^{-1} e^{\alpha\tilde{H}_{m,m}} e_1.   (10)
The formula provides a good approximation for the error trend with
respect to m and α in our numerical experiment.
B. Block LU factorization
In practical numerical implementation, in order to avoid direct
inversion of C to form A in Eqn. (7), the equation (C + γG)−1 C
is used. Correspondingly, for Eqn. (4) we use
(\tilde{C} - \gamma\tilde{G})^{-1} \tilde{C},   (11)
where
\tilde{C} = \begin{bmatrix} C & 0 \\ 0 & I \end{bmatrix}, \quad \tilde{G} = \begin{bmatrix} -G & \tilde{W} \\ 0 & J \end{bmatrix}, \quad \tilde{W} = \begin{bmatrix} \frac{Bu(t+h) - Bu(t)}{h} & Bu(t) \end{bmatrix}.
The Arnoldi process based on Eqn. (11) actually only requires solving for v_{k+1} given v_k. The linear system is expressed as
(\tilde{C} - \gamma\tilde{G})\, v_{k+1} = \tilde{C}\, v_k,   (12)
where v_k and v_{k+1} are the k-th and (k+1)-th basis vectors of the rational Krylov subspace. If \tilde{W} changes with the inputs during the simulation, the Arnoldi process has to factorize a matrix at every time step. However, it is obvious that the majority of \tilde{G} stays the same for this linear system. To take advantage of this property, a block LU factorization is devised here to avoid redundant calculation. The goal is to obtain the lower triangular L and the upper triangular U matrices:
\tilde{C} - \gamma\tilde{G} = LU.   (13)
At the beginning of simulation, after the LU factorization of C + \gamma G = L_{sub} U_{sub}, we obtain the lower triangular sub-matrix L_{sub} and the upper triangular sub-matrix U_{sub}. Then Eqn. (13) only needs updating via
L = \begin{bmatrix} L_{sub} & 0 \\ 0 & I \end{bmatrix}, \quad U = \begin{bmatrix} U_{sub} & -\gamma L_{sub}^{-1} \tilde{W} \\ 0 & I_J \end{bmatrix},   (14)
where I_J = I - \gamma J is an upper triangular matrix. Assume we have v_k; the following equations avoid the explicit operation L_{sub}^{-1} and construct the vector v_{k+1}: z_1 = [C, 0] v_k, z_2 = [0, I] v_k; y_2 = I_J^{-1} z_2, L_{sub} U_{sub} y_1 = z_1 + \gamma \tilde{W} y_2. Then we obtain v_{k+1} = [y_1, y_2]^T. By doing this, only one LU factorization is needed at the beginning of the simulation, with cheap updates of L and U at
each time step during transient simulation.
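The per-step solve can be written compactly as below (a sketch with our own naming): the sub-factorization is computed once, and each new basis vector then costs one pair of backward/forward substitutions plus a tiny 2×2 triangular solve.

```python
# Sketch of the block solve of Eqn. (14) and the z/y steps described above.
import numpy as np
from scipy.sparse.linalg import splu

def make_block_solver(C, G, gamma, J):
    lu_sub = splu((C + gamma * G).tocsc())      # L_sub U_sub, computed once
    I_J = np.eye(J.shape[0]) - gamma * J        # small upper-triangular block
    def step(v_k, W_tilde):
        n = C.shape[0]
        z1 = C @ v_k[:n]                        # z1 = [C 0] v_k
        z2 = v_k[n:]                            # z2 = [0 I] v_k
        y2 = np.linalg.solve(I_J, z2)
        y1 = lu_sub.solve(z1 + gamma * (W_tilde @ y2))
        return np.concatenate([y1, y2])         # v_{k+1} = [y1, y2]^T
    return step
```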
IV. ADAPTIVE TIME STEP CONTROL
The proposed MEXP can significantly benefit from the adaptive
time stepping because the rational Krylov subspace approximation
relaxes the stiffness constraint as well as preserves the scaling
invariant property. As a result, MEXP can effortlessly adjust the
step size to different scale, during the simulation. Such adaptivity is
particularly helpful in the power grid where the voltage noise includes
the high- to low-frequency responses from die, package and board.
Our adaptive step scheme is to step forward as much as possible
so that MEXP can quickly finish the simulation. With the insight
from Eqn. (9), MEXP can adjust α to calculate results of required
step sizes with only one Arnoldi process. However, even though the
rational Krylov subspace could scale robustly, the step size in MEXP
is restrained from input sources. As shown in Eqn. (3), MEXP has to
guarantee constant slope during a stepping, and hence the maximum
allowed step size h at every time instant is limited. Our scheme will
first determine h from inputs at time t and construct the rational
Krylov subspace from x(t). Then, x within interval [t, t + h] are
calculated through the step size scaling.
Algorithm 1 shows MEXP with adaptive step control. In order to
comply with the required accuracy during the simulation, the allowed
error err(m, α) at a certain time instant t is defined as err ≤ (E_{Tol}/T)·h,
where E_{Tol} is the error tolerance for the whole simulation process, T
is the simulation time span, h is the step size at time t, and err is the
posterior error of MEXP from Eqn. (10). Hence, when we construct
the rational Krylov subspace, we will increase m until the err(m, α)
satisfies the error tolerance. The complexity of MEXP with adaptive
time stepping is mainly determined by the total number of required
backward/forward substitutions during the simulation process. The
total number of substitution operations is \sum_{i=0}^{N} m_i, where N is the total
number of time steps and m_i is the required dimension of the rational Krylov
subspace at time step i. Compared to the trapezoidal method where
the number of substitution operations depends only on the fixed step
size, MEXP could use less substitution operations as long as the
maximum allowed step size h is much larger than the fixed step size.
Algorithm 1: MEXP with Adaptive Step Control
Input: C, G, B, u(t), γ, error tolerance E_{Tol} and simulation time T
Output: x(t)
1  t = 0; x(0) = DC analysis;
2  [L_{sub}, U_{sub}] = LU(C + γG);
3  while t ≤ T do
4      Compute maximum allowed step size h from u(t) and α = h/γ;
5      Construct H_{m,m}, V_{m,m}, err by the Arnoldi process and (10) until err(m, α) ≤ (E_{Tol}/T)·h;
6      Compute x(t + h) by (9);
7      t = t + h;
8  end
Our experiments in the following section demonstrate that it is usually
the case for the power grid simulation.
V. EXPERIMENTAL RESULTS
In this section, we compare performance of the power grid simulation by MEXP and the trapezoidal method (TR). MEXP with
adaptive step size control follows Algorithm 1. We predefine γ e.g.
10−10 here. and restrict the maximum allowed step size within 1ns
to have enough time instants to plot the figure. It is possible to have
more fine-grain time instants, e.g., 10ps, with only negligible cost
by adjusting α in Eqn. (9). TR is in fixed step size h in order to
minimize the cost of LU factorization. Both methods only perform
factorization once for transient simulation, and rest of operations is
mainly backward/forward substitution. We implement both methods
in MATLAB and use UMFPACK package for LU factorization. Note
that even though previous works[2][9] show that using iterative approach in TR could also achieve adaptive step control, long simulation
time span in power grid designs make direct method with fixed step
size more desirable[14][15][17]. The experiments are performed on
a Linux workstation with an Intel Core i7-920 2.67GHz CPU and
12GB memory. The power grid consists of four metal layers: M1,
M3, M6 and RDL. The physical parameters of each metal layer is
listed in Table I. The package is modeled as an RL series at each
C4 bump, and the board is modeled as a lumped RLC network. The
specification of each PDN design is listed in Table II where the size
of each design ranges from 45.7K to 7.40M.
TABLE I
Widths and pitches of metal layers in the PDN design (µm).
Layer   Pitch   Width
M1      2.5     0.2
M3      8.5     0.25
M6      30      4
RDL     400     30
TABLE II
Specifications of PDN designs.
Design   Area (mm2)   #R        #C        #L        #Nodes
D1       0.35^2       23221     15193     15193     45.7K
D2       1.40^2       348582    228952    228952    688K
D3       2.80^2       1468863   965540    965540    2.90M
D4       5.00^2       3748974   2467400   2464819   7.40M
In order to characterize a PDN design, designers can rely on the
simulation result of impulse response of the PDN design. Many
previous works[4][10] have proposed different PDN analysis based
on the impulse response. The nature of impulse response of the PDN
design, which contains low-, mid- and high-frequency components,
TABLE III
Simulation runtime of PDN designs.
                  TR (h = 10ps)          Our MEXP (γ = 10^{-10})
Design   DC(s)    LU(s)    Total         LU(s)    Total    Speedup
D1       0.710    0.670    44.9m         0.680    2.86m    15.7
D2       12.2     15.6     15.4h         15.5     54.6m    16.9
D3       69.6     91.6     76.9h         93.3     4.30h    17.9
D4       219      294      204h          299      11.3h    18.1
can significantly enjoy the adaptive step size in MEXP. We would
also like to mention that the impulse response based analysis is not
only for the PDN design, but also for worst-case eye opening analysis
in the high speed interconnect [8].
The impulse response can be derived from the simulation result of
a step input from 0V to 1V with a small transition time. Hence, we
inject a step input to each PDN design and compare the performance
of MEXP and TR. The transition time of the step input and the
simulation time span is 10ps and 1µs for observing both high- and
low-frequency responses. Table III shows the simulation runtime of
MEXP and TR where the fixed step size is set as 10ps to comply with
the transition time. In the table, “DC”, “LU” and “Time” indicate the
runtime for DC analysis, LU factorization and the overall simulation,
respectively. DC analysis is also via the LU factorization. We can also
adopt other techniques[14][15][16] to improve the performance of DC
analysis for both methods. It is noted that these cases are very stiff
and with singular matrix C. We do not use the method[12][13] on
the benchmarks, because it cannot handle the singular C in these
industrial PDN designs without regularization. It is worth pointing out
that even after regularization[13], the stiffness still causes large m
series for matrix exponential evaluation. For example, we construct a
simple RC mesh network with 2500 nodes. The extreme values of this
circuit are Cmin = 5.04 × 10−19 , Cmax = 1.00 × 10−15 , Gmin =
1.09 × 10−2 , and Gmax = 1.00 × 102 . The corresponding maximum
eigenvalue of −C^{-1}G is −1.88 × 10^9 and the minimum eigenvalue is
−3.98 × 10^{17}. The stiffness is Re(λ_min)/Re(λ_max) = 2.12 × 10^8. During
simulation of 1ns time span, with a fixed step size 10ps, MEXP based
on [12][13] costs average and peak dimensions of Krylov subspace
mavg = 115, and mpeak =264, respectively. Our MEXP uses rational
Krylov subspaces, which only need mavg =3.11, mpeak =10 and lead
to 224X speedup in total runtime.
In these test cases, our MEXP has significant speedup over TR
because it can adaptively exploit much larger step sizes to simulate
the design whereas TR can only march with 10ps time step for
whole 1µs time span. The average speedup is 17X. Fig. 1 shows
the simulation result of design D1 at a node on M1. As we can
see, the results of MEXP and TR are very close to the result of
HSPICE, which serves as our reference here. The errors of MEXP
and TR to HSPICE are 7.33×10−4 and 7.47×10−4 . This figure also
demonstrates that a PDN design has low-, mid- and high-frequency
response, so that a long simulation time span is necessary while
small time steps are required during the first 20 ns.
[Figure 1 plot: Voltage vs. Time (0–1 µs) at a node of design D1 for MEXP, TR and HSPICE, with an inset of the initial transient.]
Fig. 1. Result of D1
VI. CONCLUSION
For large scale power grid simulation, we propose an MEXP framework using two methods: rational Krylov subspace approximation and an adaptive time stepping technique. The former relaxes the stiffness constraint of [12][13]. The latter helps to adaptively exploit the low-, mid-, and high-frequency properties in the simulation of industrial PDN designs. In the time-consuming impulse response simulation, the proposed method achieves more than 15 times speedup on average over the widely-adopted fixed-step trapezoidal method.
VII. ACKNOWLEDGEMENTS
This work was supported by NSF CCF-1017864.
REFERENCES
[1] A. H. Al-Mohy and N. J. Higham. Computing the action of the matrix
exponential, with an application to exponential integrators. SIAM Journal
on Scientific Computing, 33(2):488–511, 2011.
[2] T.-H. Chen and C. C.-P. Chen. Efficient large-scale power grid analysis
based on preconditioned krylov-subspace iterative methods. In Proc.
Design Automation Conference, pages 559–562, 2001.
[3] T. A. Davis. Direct Method for Sparse Linear Systems. SIAM, 2006.
[4] P. Du, X. Hu, S.-H. Weng, A. Shayan, X. Chen, A. Ege Engin, and C. K.
Cheng. Worst-case noise prediction with non-zero current transition
times for early power distribution system verification. In Intl. Symposium
on Quality Electronic Design, pages 624–631, 2010.
[5] I. Moret and P. Novati. Rd-rational approximations of the matrix
exponential. BIT Numerical Mathematics, 44(3):595–615, 2004.
[6] J. Niesen and W. M. Wright. A Krylov subspace algorithm for evaluating
the ϕ-functions appearing in exponential integrators. ACM Trans. Math.
Software. in press.
[7] Y. Saad. Iteravite Methods for Sparse Linear Systems. SIAM, 2003.
[8] R. Shi, W. Yu, Y. Zhu, C. K. Cheng, and E. S. Kuh. Efficient and
accurate eye diagram prediction for high speed signaling. In Proc. Intl.
Conf. Computer-Aided Design, pages 655–661, 2008.
[9] H. Su, E. Acar, and S. R. Nassif. Power grid reduction based on algebraic
multigrid principles. In Proc. Design Automation Conference, pages
109–112, 2003.
[10] W. Tian, X.-T. Ling, and R.-W. Liu. Novel methods for circuit worst-case
tolerance analysis. IEEE Trans. on Circuits and Systems I: Fundamental
Theory and Applications, 43(4):272–278, 1996.
[11] J. van den Eshof and M. Hochbruck. Preconditioning lanczos approximations to the matrix exponential. SIAM Journal on Scientific Computing,
27(4):1438–1457, 2006.
[12] S.-H. Weng, Q. Chen, and C. K. Cheng. Circuit simulation using matrix
exponential method. In Intl. Conf. ASIC, pages 369–372, 2011.
[13] S.-H. Weng, Q. Chen, and C. K. Cheng. Time-domain analysis of largescale circuits by matrix exponential method with adaptive control. IEEE
Trans. Computer-Aided Design, 31(8):1180–1193, 2012.
[14] X. Xiong and J. Wang. Parallel forward and back substitution for
efficient power grid simulation. In Proc. Intl. Conf. Computer-Aided
Design, pages 660–663, 2012.
[15] J. Yang, Z. Li, Y. Cai, and Q. Zhou. Powerrush: Efficient transient
simulation for power grid analysis. In Proc. Intl. Conf. Computer-Aided
Design, pages 653–659, 2012.
[16] T. Yu, M. Wong, et al. Pgt solver: an efficient solver for power grid
transient analysis. In Proc. Intl. Conf. Computer-Aided Design, pages
647–652, 2012.
[17] M. Zhao, R. V. Panda, S. S. Sapatnekar, and D. Blaauw. Hierarchical
analysis of power distribution networks. IEEE Trans. Computer-Aided
Design, 21(2):159–168, 2002.
| 5 |
A Novel Weighted Distance Measure for Multi-Attributed Graph
Muhammad Abulaish, SMIEEE
Jahiruddin∗
Department of Computer Science
South Asian University
New Delhi-21, India
[email protected]
Department of Computer Science
Jamia Millia Islamia
New Delhi-25, India
[email protected]
arXiv:1801.07150v1 [cs.SI] 22 Jan 2018
ABSTRACT
Due to exponential growth of complex data, graph structure has
become increasingly important to model various entities and their
interactions, with many interesting applications including bioinformatics, social network analysis, etc. Depending on the complexity of the data, the underlying graph model can range from a simple
directed/undirected and/or weighted/un-weighted graph to a complex graph (aka multi-attributed graph) where vertices and edges are
labelled with multi-dimensional vectors. In this paper, we present
a novel weighted distance measure based on weighted Euclidean
norm which is defined as a function of both vertex and edge attributes, and it can be used for various graph analysis tasks including classification and cluster analysis. The proposed distance
measure has flexibility to increase/decrease the weightage of edge
labels while calculating the distance between vertex-pairs. We have
also proposed a MAGDist algorithm, which reads multi-attributed
graph stored in CSV files containing the list of vertex vectors and
edge vectors, and calculates the distance between each vertex-pair
using the proposed weighted distance measure. Finally, we have
proposed a multi-attributed similarity graph generation algorithm,
MAGSim, which reads the output of MAGDist algorithm and generates a similarity graph that can be analysed using classification
and clustering algorithms. The significance and accuracy of the
proposed distance measure and algorithms is evaluated on Iris and
Twitter data sets, and it is found that the similarity graph generated
by our proposed method yields better clustering results than the
existing similarity graph generation methods.
CCS CONCEPTS
• Information systems → Similarity measures; Data analytics; Clustering; • Human-centered computing → Social network
analysis;
KEYWORDS
Data mining, Clustering, Multi-attributed graph, Weighted distance
measure, Similarity measure
∗ To
whom correspondence should be made
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from [email protected].
Compute ’17, Bhopal, India
© 2017 ACM. 978-1-4503-5323-6/17/11. . . $15.00
DOI: 10.1145/3140107.3140114
ACM Reference format:
Muhammad Abulaish, SMIEEE and Jahiruddin. 2017. A Novel Weighted
Distance Measure for Multi-Attributed Graph. In Proceedings of 10th Annual ACM India Compute Conference, Bhopal, India, November 16–18, 2017
(Compute ’17), 9 pages.
DOI: 10.1145/3140107.3140114
1
INTRODUCTION
Due to the increasing popularity of the Internet and Web 2.0, the User-Generated Contents (UGCs) are growing exponentially. Since most
of the UGCs are not independent, rather linked, the graph data structure is being considered as a suitable mathematical tool to model
the inherent relationships among data. Simple linked data are generally modelled using simple graph G = (V , E), where V is the set
of vertices representing key concepts or entities and E ⊆ V × V is
the set of links between the vertices representing the relationships
between the concepts or entities. Depending on the nature of data
to be modelled, the graph G could be weighted/un-weighted or
directed/undirected, and it may have self-loops. However, there are
many complex data like online social networks, where an entity is
characterized by a set of features and multiple relationships exist
between an entity-pair. To model such data, the concept of multi-attributed graph can be used wherein each vertex is represented by
an n-dimensional vector and there may be multiple weighted edges
between each pair of vertices. In other words, in a multi-attributed
graph, both vertices and edges are assigned multiple labels. One of
the important tasks related to graph data analysis is to decompose
a given graph into multiple cohesive sub-graphs, called clusters,
based on some common properties. The clustering is an unsupervised learning process to identify the underlying structure (in the
form of clusters) of data, which is generally based on some similarity/distance measures between data elements. Graph clustering
is a special case of the clustering process which divides an input graph
into a number of connected components (sub-graphs) such that
intra-component edges are maximum and inter-component edges
are minimum. Each connected component is called a cluster (aka
community) [13]. Though graph clustering has received attention
of many researchers and a number of methods for graph clustering
has been proposed by various researchers [19], to the best of our
knowledge, the field of clustering multi-attributed graph is still
unexplored.
Since a similarity/distance measure is the key requirement for any clustering algorithm, in this paper we have proposed a novel weighted distance measure based on the weighted Euclidean norm to calculate the distance between the vertices of a multi-attributed graph. The proposed distance measure considers both vertex and edge weight-vectors, and it is flexible enough to assign different weightage to different edges and scale the overall edge-weight while computing the weighted distance between a vertex-pair. We have also proposed a MAGDist algorithm that reads the lists of vertex and edge vectors as two separate CSV files and calculates the distance between each vertex-pair using the proposed weighted distance measure. Finally, we have proposed a multi-attributed similarity graph generation algorithm, MAGSim, which reads the output of the MAGDist algorithm and generates a similarity graph. In other words, the MAGDist and MAGSim algorithms can be used to transform a multi-attributed graph into a simple weighted similarity graph, wherein a single weighted edge exists between each vertex-pair. Thereafter, the weighted similarity graph can be analysed using existing classification and clustering algorithms for varied purposes. The efficacy of the proposed distance measure and algorithms is tested over the well-known Iris data set and a Twitter data set related to three different events. The proposed similarity graph generation approach is compared with other existing similarity graph generation methods like the Gaussian kernel and k-nearest neighbours methods, and it is found that our proposed approach yields better clustering results in comparison to the existing methods. Moreover, the proposed distance measure can be applied to any graph data where both vertices and edges are multi-dimensional real vectors. In case the data is not linked, i.e., edges do not exist between the vertices, the proposed distance measure simply works as the Euclidean distance between the node vectors.
The rest of the paper is organized as follows. Section 2 presents
a brief review on various distance measures and graph clustering methods. Section 3 presents the formulation of our proposed
weighted distance measure for multi-attributed graph. It also presents
the founding mathematical concepts and formal descriptions of the
MAGDist and MAGSim algorithms. Section 4 presents the experimental setup and evaluation results. Finally, section 5 concludes
the paper with future directions of work.
2 RELATED WORK
Multi-attributed graph is used to model many complex problems,
mainly those in which objects or entities are characterized using a
set of features and linked together in different ways. For example,
an online social network user can be characterized using a set of
features like ID, display name, demographic information, interests,
etc. Similarly, two persons in an online social network may be associated through different relationships like friendship, kinship, common interests, common friends, etc. [4]. In [15], the authors
proposed a new principled framework for estimating graphs from
multi-attribute data. Their method estimates the partial canonical
correlations that naturally accommodate complex vertex features
instead of estimating the partial correlations. However, when the
vertices of a multi-attributed graph have different dimensions, it
is unclear how to combine the estimated graphs to obtain a single
Markov graph reflecting the structure of the underlying complex
multi-attributed data. In [14], Katenka and Kolaczyk proposed a
method for estimating association networks from multi-attribute
data using canonical correlation as a dependence measure between
two groups of attributes.
Clustering is an unsupervised learning technique which aims to
partition a data set into smaller groups in which elements of a group
are similar and that elements from different groups are dissimilar.
As a result, clustering has broad applications, including the analysis
of business data, spatial data, biological data, social network data,
and time series data [24], and a large number of researchers have
targeted clustering problem [2, 12, 17]. Since graph is a popular data
structure to model structural as well as contextual relationships of
data objects, recently graph data clustering is considered as one of
the interesting and challenging research problems [16, 22], and it
aims to decompose a large graph into different densely connected
components. Some of the applications of graph clustering are community detection in online social networks [5, 7, 8], social bot detection [10, 11], spammer detection [1, 3, 6, 9], functional relation identification in large protein-protein interaction networks, and so on. Generally, graph clustering methods are based on the concept of normalized cut, structural density, or modularity [16, 22]. However, like [20], graphs can also be partitioned on the basis of attribute similarity in such a way that vertices having similar attribute vectors are grouped together to form a cluster. Graph clustering and simple data clustering mainly differ in the way associations between objects are calculated. In graph clustering, the degree of association between a pair of objects is calculated as the closeness between the respective nodes of the graph, generally in terms of the number of direct links or paths between them, whereas in simple data clustering, the degree of association between a pair of objects is calculated in terms of a similarity/distance measure between the vector representations of the objects. Both topological structure and
vertex properties of a graph play an important role in many real
applications. For example, in a social graph, vertex properties can
be used to characterize network users, whereas edges can be used
to model different types of relationships between the users.
It can be seen from the discussions mentioned above that most of the graph analysis approaches have considered only one aspect of the graph structure and ignored the other aspects, due to which the generated clusters either have loose intra-cluster structures or reflect a random distribution of vertex properties within clusters. However, as stated in [24], "an ideal graph clustering should generate clusters which have a cohesive intra-cluster structure with homogeneous vertex properties, by balancing the structural and attribute similarities". Therefore, considering both vertex attributes and links for calculating the similarity/distance between the pairs of vertices in a graph is one of the basic requirements for clustering complex multi-attributed graphs. In this paper, we have proposed a
weighted distance function that can transform a multi-attributed
graph into a simple similarity graph on which existing graph clustering algorithms can be applied to identify densely connected
components.
3 PRELIMINARIES AND DISTANCE MEASURES
In this section, we present the mathematical basis and formulation
of the proposed weighted distance measures for multi-attributed
graph. Starting with the basic mathematical concepts in subsection
3.1, the formulation of the proposed distance function along with
an example is presented in the subsequent subsections.
3.1 Founding Mathematical Concepts
Since the Linear Algebra concepts inner product and norm generally form the basis for any distance function, we present a brief overview of these mathematical concepts in this section.

Inner Product: An inner product is a generalized form of the vector dot product to multiply vectors together in such a way that the end result is a scalar quantity. If $\vec{a} = (a_1, a_2, \ldots, a_n)^T$ and $\vec{b} = (b_1, b_2, \ldots, b_n)^T$ are two vectors in the vector space $\mathbb{R}^n$, then the inner product of $\vec{a}$ and $\vec{b}$ is denoted by $\langle\vec{a},\vec{b}\rangle$ and defined using equation (1) [18].
$$\langle\vec{a},\vec{b}\rangle = \vec{a}\cdot\vec{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{i=1}^{n} a_i b_i \qquad (1)$$
Alternatively, the inner product of $\vec{a}$ and $\vec{b}$ can be defined as a matrix product of the row vector $\vec{a}^T$ and the column vector $\vec{b}$ using equation (2).
$$\langle\vec{a},\vec{b}\rangle = \vec{a}^T\vec{b} = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n \qquad (2)$$
However, as stated in [18], an inner product must satisfy the following four basic properties, where $\vec{a}$, $\vec{b}$, and $\vec{c}$ belong to $\mathbb{R}^n$, and $n$ is a scalar quantity.
(1) $\langle\vec{a}+\vec{b},\vec{c}\rangle = \langle\vec{a},\vec{c}\rangle + \langle\vec{b},\vec{c}\rangle$
(2) $\langle n\vec{a},\vec{b}\rangle = n\langle\vec{a},\vec{b}\rangle$
(3) $\langle\vec{a},\vec{b}\rangle = \langle\vec{b},\vec{a}\rangle$
(4) $\langle\vec{a},\vec{a}\rangle > 0$ and $\langle\vec{a},\vec{a}\rangle = 0$ iff $\vec{a} = 0$

Weighted Inner Product: The weighted inner product is generally used to emphasize certain features (dimensions) through assigning a weight to each dimension of the vector space $\mathbb{R}^n$. If $d_1, d_2, \ldots, d_n \in \mathbb{R}$ are $n$ positive real numbers between 0 and 1 and $D$ is the corresponding diagonal matrix of order $n \times n$, then the weighted inner product of vectors $\vec{a}$ and $\vec{b}$ is defined by equation (3) [18]. It can be easily verified that the weighted inner product also satisfies the four basic properties of the inner product mentioned earlier in this section.
$$\langle\vec{a},\vec{b}\rangle_D = \vec{a}^T D \vec{b} = \sum_{i=1}^{n} d_i a_i b_i, \quad \text{where } D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix} \qquad (3)$$

Norm: In linear algebra and related areas of mathematics, a norm on the vector space $\mathbb{R}^n$ is a function used to assign a non-negative real number to each vector $\vec{a} \in \mathbb{R}^n$. Every inner product gives a norm that can be used to calculate the length of a vector. However, not every norm is derived from an inner product [18]. The norm of a vector $\vec{a} \in \mathbb{R}^n$ is denoted by $\|\vec{a}\|$ and can be defined using equation (4).
$$\|\vec{a}\| = \sqrt{\langle\vec{a},\vec{a}\rangle} \qquad (4)$$
As given in [18], if $\vec{a} = (a_1, a_2, \ldots, a_n)^T$ and $\vec{b} = (b_1, b_2, \ldots, b_n)^T$ are two vectors of the vector space $\mathbb{R}^n$ and $n > 0$ is a scalar quantity, then every norm satisfies the following three properties.
(1) $\|\vec{a}\| > 0$ with $\|\vec{a}\| = 0$ iff $\vec{a} = 0$ (positivity)
(2) $\|n\vec{a}\| = n\|\vec{a}\|$ (homogeneity)
(3) $\|\vec{a}+\vec{b}\| \le \|\vec{a}\| + \|\vec{b}\|$ (triangle inequality)

Some of the well-known norms defined in [21] are as follows:
(1) $L_1$-norm (aka Manhattan norm): The $L_1$-norm of a vector $\vec{a} \in \mathbb{R}^n$ is simply obtained by adding the absolute values of its components. Formally, the $L_1$-norm of the vector $\vec{a}$ is represented as $\|\vec{a}\|_1$ and it can be defined using equation (5).
$$\|\vec{a}\|_1 = \sum_{i=1}^{n} |a_i| \qquad (5)$$
(2) $L_2$-norm (aka Euclidean norm): The $L_2$-norm of a vector $\vec{a} \in \mathbb{R}^n$ is obtained by taking the positive square root of the sum of the squares of its components. Formally, the $L_2$-norm of a vector $\vec{a}$ is represented as $\|\vec{a}\|_2$ and it can be defined using equation (6).
$$\|\vec{a}\|_2 = \left(\sum_{i=1}^{n} a_i^2\right)^{1/2} \qquad (6)$$
(3) Infinity-norm (aka max-norm): The infinity-norm of a vector $\vec{a} \in \mathbb{R}^n$ is obtained as the maximum of the absolute values of the components of the vector. Formally, the infinity-norm of the vector $\vec{a}$ is represented as $\|\vec{a}\|_\infty$ and it can be defined using equation (7).
$$\|\vec{a}\|_\infty = \max_{i=1}^{n} |a_i| \qquad (7)$$
(4) $L_p$-norm: The $L_p$-norm of a vector $\vec{a} \in \mathbb{R}^n$ is the positive $p$-th root of the sum of the $p$-th powers of the absolute values of the components of the vector. Formally, the $L_p$-norm of the vector $\vec{a}$ is represented as $\|\vec{a}\|_p$ and it can be defined using equation (8).
$$\|\vec{a}\|_p = \left(\sum_{i=1}^{n} |a_i|^p\right)^{1/p} \qquad (8)$$
(5) Weighted Euclidean norm: The weighted Euclidean norm of a vector $\vec{a} \in \mathbb{R}^n$ is a special case of the Euclidean norm in which different dimensions of $\vec{a}$ can have different weights. If $A^T = \vec{a}^T$ is a row matrix of order $1 \times n$, and $D$ is a diagonal matrix of order $n \times n$ whose $i$-th diagonal element is the weight of the $i$-th dimension of the vector $\vec{a}$, then the weighted Euclidean norm of $\vec{a}$ is the positive square root of the product matrix $A^T D A$. In case $D$ is an identity matrix, this gives simply the Euclidean norm. Formally, for a given vector $\vec{a} \in \mathbb{R}^n$ and a weight (diagonal) matrix $D$ of order $n \times n$, the weighted Euclidean norm of $\vec{a}$ is represented as $\|\vec{a}\|_D$ and defined using equation (9).
$$\|\vec{a}\|_D = \left(A^T D A\right)^{1/2} = \left(\sum_{i=1}^{n} d_i a_i^2\right)^{1/2} \qquad (9)$$
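The weighted inner product (equation 3) and the weighted Euclidean norm (equation 9) are straightforward to evaluate with a diagonal weight matrix. The following minimal Python/NumPy sketch illustrates both on a small example; the function names and the sample values are our own illustration and not part of the paper.

```python
import numpy as np

def weighted_inner_product(a, b, d):
    """Weighted inner product <a, b>_D = a^T D b (eq. 3), with D = diag(d)."""
    return float(a @ np.diag(d) @ b)

def weighted_euclidean_norm(a, d):
    """Weighted Euclidean norm ||a||_D = sqrt(a^T D a) (eq. 9)."""
    return float(np.sqrt(a @ np.diag(d) @ a))

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])
d = np.array([0.5, 0.5])                     # per-dimension weights in (0, 1]
print(weighted_inner_product(a, b, d))       # 0.5*3*1 + 0.5*4*2 = 5.5
print(weighted_euclidean_norm(a, d))         # sqrt(0.5*9 + 0.5*16) ~ 3.54
```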
3.2 Multi-Attributed Graph
In this section, we present a formal definition of multi-attributed graph, in which both vertices and edges can be represented as multi-dimensional vectors. Mathematically, a multi-attributed graph can be defined as a quadruple $G_m = (V, E, L_v, L_e)$, where $V \neq \emptyset$ represents the set of vertices, $E \subseteq V \times V$ represents the set of edges, $L_v : V \to \mathbb{R}^n$ is a vertex-label function that maps each vertex to an n-dimensional real vector, and $L_e : E \to \mathbb{R}^m$ is an edge-label function that maps each edge to an m-dimensional real vector. Accordingly, a vertex $v \in V$ in a multi-attributed graph can be represented as an n-dimensional vector $\vec{v} = (v_1, v_2, \ldots, v_n)^T$ and an edge between a vertex-pair $(u, v)$ can be represented as an m-dimensional vector $\vec{e}(u, v) = (e_1(u, v), e_2(u, v), \ldots, e_m(u, v))^T$.
3.3 Proposed Weighted Distance Measure
In this section, we present our proposed weighted distance measure for multi-attributed graph that is based on the weighted Euclidean norm. If $u = \vec{u} = (u_1, u_2, \ldots, u_n)^T$ and $v = \vec{v} = (v_1, v_2, \ldots, v_n)^T$ are two vertices of a multi-attributed graph and $\vec{e}(u, v) = (e_1(u, v), e_2(u, v), \ldots, e_m(u, v))^T$ is a multi-labelled edge between $u$ and $v$, then the distance between $u$ and $v$ is denoted by $\Delta(u, v)$ and calculated using equation (10), where $\lambda$ is a scalar value (equation (11)), and $I$ is an identity matrix of order $n \times n$. The value of $\lambda$ depends on the aggregate weight, $\omega(u, v)$, of the edges between the vertex-pair $(u, v)$, which is calculated using equation (12). The value of $\gamma > 0$ is a user-supplied threshold, providing flexibility to tune the value of $\lambda$ for calculating the distance between a vertex-pair. In equation (12), $\alpha_i$ is a constant such that $\alpha_i \ge 0$ and $\sum_{i=1}^{m} \alpha_i = 1$.
$$\Delta(u, v) = \Delta(\vec{u}, \vec{v}) = \|\vec{u} - \vec{v}\|_{\lambda I} = \left[(u-v)^T (\lambda I) (u-v)\right]^{1/2} = \left[\lambda(u_1-v_1)^2 + \lambda(u_2-v_2)^2 + \cdots + \lambda(u_n-v_n)^2\right]^{1/2} = \sqrt{\lambda}\times\left(\sum_{i=1}^{n}(u_i-v_i)^2\right)^{1/2} \qquad (10)$$
$$\lambda = \frac{1}{(1 + \omega(u, v))^{\gamma}} \qquad (11)$$
$$\omega(u, v) = \alpha_1 e_1(u, v) + \alpha_2 e_2(u, v) + \cdots + \alpha_m e_m(u, v) = \sum_{i=1}^{m} \alpha_i e_i(u, v) \qquad (12)$$
It may be noted that the $\lambda$ function defined using equation (11) is a monotonically decreasing function, as proved in theorem 3.1, so that the distance between a vertex-pair could decrease with increasing ties between them, and vice-versa. The novelty of the proposed distance function lies in providing flexibility (i) to assign different weights to individual edges using $\alpha$, and (ii) to control the degree of monotonicity using $\gamma$, as shown in figure 1. It may also be noted that the value of $\lambda$ will always be 1 if either $\omega(u, v) = 0$ or $\gamma = 0$.
It should be noted that if there is no direct link (edge) between a vertex-pair $(u, v)$, then the value of $\omega(u, v)$ in equation (11) becomes zero, leading the value of $\lambda$ to 1. In this case, the value of $\Delta(u, v)$ is simply the Euclidean distance between the vertex-pair $(u, v)$, as proved in theorem 3.2.

Figure 1: Visualization of the monotonically decreasing nature of λ for varying γ and ω values

Theorem 3.1. The λ function defined in equation (11) is a monotonically decreasing function.
Proof. Let $\lambda = f(\omega)$ be a function of $\omega$, where $\omega$ is the aggregate weight of edges between a vertex-pair $(u, v)$.
Let $\omega_1, \omega_2 \ge 0$ such that $\omega_1 \le \omega_2$
$\Rightarrow (1 + \omega_1) \le (1 + \omega_2)$
$\Rightarrow (1 + \omega_1)^{\gamma} \le (1 + \omega_2)^{\gamma}$, where $\gamma \ge 1$
$\Rightarrow \frac{1}{(1+\omega_1)^{\gamma}} \ge \frac{1}{(1+\omega_2)^{\gamma}}$
$\Rightarrow f(\omega_1) \ge f(\omega_2)$
Hence, $\lambda$ is a monotonically decreasing function of $\omega$ for $\gamma \ge 1$. Similarly, it can be shown that $\lambda$ is a monotonically decreasing function of $\gamma$ for $\omega > 0$.
Let $\lambda = f(\gamma)$ be a function of $\gamma$. Let $\gamma_1, \gamma_2 \ge 1$ such that $\gamma_1 \le \gamma_2$
$\Rightarrow (1 + \omega)^{\gamma_1} \le (1 + \omega)^{\gamma_2}$, where $\omega > 0$
$\Rightarrow \frac{1}{(1+\omega)^{\gamma_1}} \ge \frac{1}{(1+\omega)^{\gamma_2}}$
$\Rightarrow f(\gamma_1) \ge f(\gamma_2)$
Hence, $\lambda$ is a monotonically decreasing function of $\gamma$ for $\omega > 0$.

Theorem 3.2. In a multi-attributed graph, if there is no edge between a vertex-pair then the distance between them is simply the Euclidean distance.
Proof. Let $u = \vec{u} = (u_1, u_2, \ldots, u_n)^T$ and $v = \vec{v} = (v_1, v_2, \ldots, v_n)^T$ be two vertices not connected by any edge. Since there is no edge between the vertex-pair $(u, v)$, the edge vector $\vec{e}(u, v) = (e_1(u, v), e_2(u, v), \ldots, e_m(u, v))^T = \vec{0}$.
$\Rightarrow e_1(u, v) = e_2(u, v) = \cdots = e_m(u, v) = 0$
Hence, $\omega(u, v) = \alpha_1 e_1(u, v) + \alpha_2 e_2(u, v) + \cdots + \alpha_m e_m(u, v) = \alpha_1\cdot 0 + \alpha_2\cdot 0 + \cdots + \alpha_m\cdot 0 = 0$. [using equation (12)]
Hence, $\lambda = \frac{1}{(1+\omega(u,v))^{\gamma}} = \frac{1}{(1+0)^{\gamma}} = 1$. [using equation (11)]
Finally, $\Delta(u, v) = \sqrt{\lambda}\times\left(\sum_{i=1}^{n}(u_i - v_i)^2\right)^{1/2} = \sqrt{1}\times\left(\sum_{i=1}^{n}(u_i - v_i)^2\right)^{1/2} = \left(\sum_{i=1}^{n}(u_i - v_i)^2\right)^{1/2}$, which is the Euclidean distance between the vertex-pair $(u, v)$. [using equation (10)]
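To make equations (10)-(12) concrete, the short Python sketch below (an illustration with our own function name, not part of the paper) computes Δ(u, v) for two vertices that are 10 units apart and joined by a single edge of weight 1.0: with γ = 1 the distance shrinks to about 7.07, with γ = 2 to 5.00, and with no edge it stays at the plain Euclidean value of 10, in line with the worked example around figure 2 below.

```python
import numpy as np

def delta(u, v, e, alpha, gamma):
    """Distance from eqs. (10)-(12): omega is the alpha-weighted sum of edge
    components, lambda = 1/(1+omega)^gamma, and Delta scales the Euclidean
    distance between the vertex vectors by sqrt(lambda)."""
    omega = float(np.dot(alpha, e))                            # eq. (12)
    lam = 1.0 / (1.0 + omega) ** gamma                         # eq. (11)
    return np.sqrt(lam) * np.linalg.norm(np.asarray(u) - np.asarray(v))  # eq. (10)

u, v = [0.0, 0.0], [10.0, 0.0]
print(delta(u, v, e=[1.0], alpha=[1.0], gamma=1))   # ~7.07
print(delta(u, v, e=[1.0], alpha=[1.0], gamma=2))   # 5.00
print(delta(u, v, e=[0.0], alpha=[1.0], gamma=1))   # 10.0 (no edge: plain Euclidean)
```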
Algorithm 1 presents a formal way to calculate the distance between all pairs of vertices of a given multi-attributed graph using MAGDist. The proposed MAGDist algorithm reads a multi-attributed graph using two separate CSV files – one containing the list of vertex vectors and the other containing the list of edge vectors – and produces the distance between each vertex-pair as a CSV file, wherein each tuple contains a vertex-pair and a distance value. We have also proposed the MAGSim algorithm to generate a similarity graph using the distance values calculated by the MAGDist algorithm. The proposed algorithm reads each vertex-pair and its distance value ⟨i, j, Δ(i, j)⟩ and calculates the similarity between the vertex-pair (i, j) using equation (13).
$$sim(i, j) = 1 - \frac{\Delta(i, j)}{\max_{x, y \in V}\{\Delta(x, y)\}} \qquad (13)$$

Algorithm 1: MAGDist(Lv, Le, α, γ)
// computing distance between all pairs of vertices of a multi-attributed graph
Input: CSV files Lv and Le containing the list of vertex vectors and edge vectors, respectively. A 1D array α[1..m], wherein αi ≥ 0 and Σ_{i=1}^{m} αi = 1, for calculating the linear combination of edge weights between a pair of vertices. A positive integer threshold γ representing the weightage of edges in distance calculation.
Output: A CSV file D containing the distance between each pair of vertices.
    nv ← vertexCount[Lv]                         // number of vertices
    n ← vertexDimCount[Lv]                       // vertex vector dimension
    m ← edgeDimCount[Le]                         // edge vector dimension
    V[nv][n] ← Lv                                // reading Lv into array V
    for each vertex-pair (i, j) ∈ Le do
        // calculating aggregate weight ω(i, j) of the edges between vertex-pair (i, j)
        ω(i, j) ← 0
        for k ← 1 to m do
            ω(i, j) ← ω(i, j) + α[k] × ek(i, j)  // [eqn. 12]
        end
        // calculating the value of the scalar quantity λ
        λ ← 1 / (1 + ω(i, j))^γ                  // [eqn. 11]
        // calculating distance Δ(i, j) between the vertex-pair (i, j)
        d ← 0
        for k ← 1 to n do
            d ← d + (V[i][k] − V[j][k])^2
        end
        Δ(i, j) ← √λ × √d                        // [eqn. 10]
        write tuple ⟨i, j, Δ(i, j)⟩ into D
    end

Algorithm 2: MAGSim(D)
// generating similarity graph corresponding to a multi-attributed graph
Input: A CSV file D containing vertex-pairs along with distances output by MAGDist.
Output: A CSV file Gs containing the edge-listed similarity graph.
    dmax ← getMaxDistance(D)
    for each tuple ⟨i, j, Δ(i, j)⟩ in D do
        sim(i, j) ← 1 − Δ(i, j) / dmax
        write tuple ⟨i, j, sim(i, j)⟩ into Gs
    end
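A minimal Python sketch of Algorithms 1 and 2 is given below. It assumes the vertex and edge vectors have already been loaded into in-memory dictionaries rather than CSV files, and the function names are illustrative rather than part of the original implementation.

```python
import numpy as np
from itertools import combinations

def magdist(vertex_vectors, edge_vectors, alpha, gamma):
    """Sketch of Algorithm 1: vertex_vectors maps vertex id -> n-dim vector,
    edge_vectors maps (i, j) -> m-dim edge vector, alpha is the m-dim weight
    vector (non-negative, summing to 1), gamma >= 1."""
    distances = {}
    for i, j in combinations(sorted(vertex_vectors), 2):
        e = edge_vectors.get((i, j), edge_vectors.get((j, i), np.zeros(len(alpha))))
        omega = float(np.dot(alpha, e))                        # eq. (12)
        lam = 1.0 / (1.0 + omega) ** gamma                     # eq. (11)
        diff = np.asarray(vertex_vectors[i]) - np.asarray(vertex_vectors[j])
        distances[(i, j)] = np.sqrt(lam) * np.linalg.norm(diff)  # eq. (10)
    return distances

def magsim(distances):
    """Sketch of Algorithm 2: turn distances into similarities, eq. (13)."""
    d_max = max(distances.values())
    return {pair: 1.0 - d / d_max for pair, d in distances.items()}
```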
Example: Figure 2 presents a simple case of a multi-attributed graph having four vertices, in which each vertex is represented as a two-dimensional feature vector and each edge is represented as a one-dimensional vector. In case there is no direct edge between a pair of vertices (e.g., v1 and v3), the corresponding edge vector is a zero vector. If we simply calculate the Euclidean distance between the vertex-pairs, then the distance between the vertex-pairs (v1, v2), (v2, v3), (v3, v4) and (v4, v1) is the same (10 units), whereas the distance calculated by the proposed MAGDist differs, based on the weight of the edges between them. Similarly, the Euclidean distance between the vertex-pairs (v1, v3) and (v2, v4) is the same (10√2), but the distance values calculated using MAGDist are different. The distance values between each vertex-pair of the multi-attributed graph calculated using MAGDist for γ = 1 and γ = 2 are shown in the D1 and D2 matrices of figure 2, respectively. It can be observed that giving more weightage to edges by increasing the value of γ reduces the distance between the respective vertex-pairs. Figure 3 presents a multi-attributed graph in which both vertex and edge labels are multi-dimensional vectors. For example, the edge vector corresponding to the edge connecting v2 and v3 is e(2, 3) = (0.36, 0.64)^T. If we simply calculate the Euclidean distance between the vertex-pairs, then the distance between the vertex-pairs (v1, v2), (v2, v3), (v3, v4) and (v4, v1) is the same (10 units), whereas the distance calculated by the proposed MAGDist differs, based on the weight of the edges between them. The distance values between each vertex-pair of the multi-attributed graph calculated using MAGDist for γ = 1 and γ = 2 are shown in the D1 and D2 matrices of figure 3, respectively. It can be observed from these two distance matrices too that giving more weightage to edges by increasing the value of γ reduces the distance between the respective vertex-pairs.
4 EXPERIMENTAL SETUP AND RESULTS
To establish the efficacy of the proposed MAGDist distance measure, we have considered the well-known Iris data set¹, which contains a total of 150 instances of three different categories of Iris flowers, 50 instances of each category. The Iris data set can be represented as a 150 × 5 data matrix, wherein each row represents an Iris flower, the first four columns represent four different attribute values in centimetres, and the 5th column represents the class label (category) of the Iris flower as Setosa, Virginica, or Versicolour. Table 1 shows a partial snapshot of the Iris data set. We model the Iris data set as a multi-attributed similarity graph in which each vertex v ∈ ℝ⁴ is a 4-dimensional vector representing a particular instance of the Iris
1 http://archive.ics.uci.edu/ml/datasets/Iris
Figure 2: A multi-attributed graph with vertices as multi-dimensional vectors, and distance matrices D1, D2 calculated using the MAGDist algorithm for γ = 1 and γ = 2, respectively

Figure 3: A multi-attributed graph with both vertices and edges as multi-dimensional vectors, and distance matrices D1, D2 calculated using the MAGDist algorithm for γ = 1 and γ = 2, respectively
Table 1: A partial view of the Iris data set

Sepal length  Sepal width  Petal length  Petal width  Species
5.1           3.5          1.4           0.2          Setosa
4.9           3.0          1.4           0.2          Setosa
4.7           3.2          1.3           0.2          Setosa
4.6           3.1          1.5           0.2          Setosa
5.0           3.6          1.4           0.2          Setosa
...           ...          ...           ...          ...
6.4           3.2          4.5           1.5          versicolor
6.9           3.1          4.9           1.5          versicolor
5.5           2.3          4.0           1.3          versicolor
6.5           2.8          4.6           1.5          versicolor
5.7           2.8          4.5           1.3          versicolor
...           ...          ...           ...          ...
6.3           3.3          6.0           2.5          virginica
5.8           2.7          5.1           1.9          virginica
7.1           3.0          5.9           2.1          virginica
6.3           2.9          5.6           1.8          virginica
6.5           3.0          5.8           2.2          virginica
flower. The edge (similarity) between a vertex-pair (u, v) is determined using equation (14) on the basis of the Gaussian kernel value defined in equation (15), where σ = 1 is a constant value [23].
$$e(u, v) = \begin{cases} \kappa_G(u, v) & \text{if } \kappa_G(u, v) \ge 0.55 \\ 0 & \text{otherwise} \end{cases} \qquad (14)$$
$$\kappa_G(u, v) = e^{-\frac{\|u - v\|^2}{2\sigma^2}} \qquad (15)$$
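A possible NumPy sketch of equations (14)-(15), assuming the Iris attributes are stored in an (N, 4) array and using our own illustrative function name, is shown below.

```python
import numpy as np

def gaussian_similarity_graph(X, sigma=1.0, threshold=0.55):
    """Edge weights per eqs. (14)-(15): the Gaussian kernel value is kept when
    it reaches the threshold, otherwise the edge weight is 0."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    kernel = np.exp(-sq_dists / (2.0 * sigma ** 2))        # eq. (15)
    edges = np.where(kernel >= threshold, kernel, 0.0)     # eq. (14)
    np.fill_diagonal(edges, 0.0)                            # drop self-loops, as in figure 4
    return edges
```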
The resulting multi-attributed Iris similarity graph is shown in
figure 4 (termed hereafter as G 1 for rest of the paper), which contains 150 vertices and 2957 edges. In this graph, the instances of
Setosa, Virginica, or Versicolour are shown using triangles, squares,
and circles, respectively. Although κ G (u, u) = 1.0 for all vertices,
we haven’t shown self-loops in this graph. The proposed MAGDist
algorithm is applied over G1 to calculate the distance between all vertex-pairs, and finally the MAGSim algorithm is applied to generate the Iris similarity graph (termed hereafter as G2 for the rest of the paper), which is
shown in figure 5. In [23], the authors have also used Gaussian kernel followed by the concept of nearest-neighbours to generate Iris
similarity graph (termed hereafter as G 3 for rest of the paper) and
applied Markov Clustering (MCL) to classify the instances of the
Iris data into three different categories. Therefore, in order to verify
the significance of our MAGDist and MAGSim algorithms, we have
also applied the MCL over the Iris similarity graphs G 1 and G 2 , and
present a comparative analysis of all clustering results. Figures 6
and 7 present the clustering results after applying MCL over the Iris
similarity graphs G 1 and G 2 , respectively. Table 2 presents the contingency table for the discovered clusters versus true Iris types from
all three different Iris similarity graphs. It can be observed from
this table that, in case of G 2 , only six instances of iris-versicolor
are wrongly grouped with iris-virginica in C 2 and three instances
of iris-virginica are wrongly grouped with iris-versicolor in C 3 ;
whereas in G 1 forty iris-versicolor instances are wrongly grouped
with iris-virginica in C 2 , and in G 3 , one instance of iris-versicolor
is wrongly grouped with iris-setosa in C 1 and fourteen instances of
iris-virginica are wrongly grouped with iris-versicolor in C 3 .
Figure 4: G 1 : Iris data graph modelled as a multi-attributed
graph in which vertices are 4-dimensional real vectors and
edges are the Gaussian similarity (≥ 0.55) between the respective vertices
Figure 6: Clustering results after applying MCL over the Iris
data graph G 1
Figure 7: Clustering results after applying MCL over the Iris
data graph G 2
Figure 5: G 2 : Iris data graph modelled as a multi-attributed
graph in which vertices are 4-dimensional real vectors and
edges are MAGSim similarity (≥ 0.80) calculated using equation 13
We have also analyzed the significance of different similarity
graph generation methods in terms of True Positive Rate (TPR) and
False Positive Rates (FPR) that are defined using equations 16 and
17, respectively. In these equations, TP is the True Positives, representing the number of correctly classified positive instances, FP is
the False Positives, representing the number of wrongly classified
negative instances, and P and N represent the total number of positive and negative instances, respectively in the data set. Table 3
presents TPR and FPR values for all three different similarity graphs,
showing best results for G 2 , which has been generated using our
proposed MAGDist and MAGSim algorithms.
Table 2: Contingency table for MCL clusters from similarity graphs generated by three different methods

                 |       G1        |       G2        |     G3 [23]
Clusters         | Set   Vir   Ver | Set   Vir   Ver | Set   Vir   Ver
C1 (triangle)    |  50     0     0 |  50     0     0 |  50     0     1
C2 (square)      |   0    50    40 |   0    47     6 |   0    36     0
C3 (circle)      |   0     0    10 |   0     3    44 |   0    14    49
$$TPR = \frac{TP}{P} \qquad (16) \qquad\qquad FPR = \frac{FP}{N} \qquad (17)$$
Table 3: Performance comparison of three different methods on Iris data set

                 |      G1       |      G2       |    G3 [23]
Clusters         | TPR    FPR    | TPR    FPR    | TPR    FPR
C1 (triangle)    | 1.00   0.00   | 1.00   0.00   | 1.00   0.01
C2 (square)      | 1.00   0.40   | 0.94   0.06   | 0.72   0.00
C3 (circle)      | 0.20   0.00   | 0.88   0.03   | 0.98   0.14
Average          | 0.73   0.13   | 0.94   0.03   | 0.90   0.05

4.1 Evaluation Results on Twitter Data Set
In order to illustrate the application of the proposed distance measure over a real multi-attributed graph, we have considered a Twitter data set of 300 tweets, comprising an equal number of tweets related to three different events – NoteBandi (NTB), RyanInternationalSchool (RIS), and Rohingya (ROH). The tweets are modelled as a multi-attributed graph, wherein each vertex represents a tweet as a 110-dimensional binary vector based on the top-110 key terms identified using tf-idf, and two different edges exist between a vertex-pair – one representing the degree of hashtag overlap and the other representing the tweet-time overlap. The similarity graph is generated using the MAGSim algorithm and shown in figure 8, wherein the instances of NoteBandi, RyanInternationalSchool, and Rohingya are represented using squares, triangles, and circles, respectively. Finally, MCL is applied over the similarity graph to group the tweets into different clusters, shown in figure 9. The evaluation of the obtained clusters is given in table 4. It can be seen from this table that only five instances of RyanInternationalSchool are wrongly clustered with Rohingya in C3.
Figure 8: Similarity graph generated from Twitter data set
(only edges having similarity value > 0.5 are shown)
5 CONCLUSION AND FUTURE WORKS
In this paper, we have proposed a novel weighted distance measure
based on weighted Euclidean norm that can be used to calculate
the distance between vertex-pairs of a multi-attributed graph containing multi-labelled vertices and multiple edges between a single
Figure 9: Clustering results obtained by applying MCL over
the similarity graph of the Twitter data set
Table 4: Evaluation results of the proposed method on Twitter data set

                 | Contingency Table   | Evaluation Results
Clusters         | NTB    RIS    ROH   | TPR     FPR
C1 (triangle)    |   0     95      0   | 1.00    0.00
C2 (square)      | 100      0      0   | 1.00    0.00
C3 (circle)      |   0      5    100   | 1.00    0.025
Average          |                     | 1.00    0.008
vertex-pair. The proposed distance measure considers both vertex
and edge weight-vectors, and it is flexible enough to assign different
weightage to different edges and scale the overall edge-weight while
computing the weighted distance between a vertex-pair. We have
also proposed a MAGDist algorithm that reads the lists of vertex and
edge vectors as two separate CSV files and calculates the distance
between each vertex-pairs using the proposed weighted distance
measure. Finally, we have proposed a multi-attributed similarity
graph generation algorithm, MAGSim, which reads the output produced by the MAGDist algorithm and generates a similarity graph,
which can be used by existing classification and clustering algorithms for various analysis tasks. Since the proposed MAGDist algorithm reads a multi-attributed graph as CSV files containing vertex and edge vectors, it can be scaled to handle large complex graphs (aka big graphs). Applying the proposed distance measure and algorithms to citation networks of research articles and to online social networks, to analyze them at different levels of granularity, is one of the future directions of work.
REFERENCES
[1] Muhammad Abulaish and Sajid Yousuf Bhat. 2015. Classifier Ensembles using
Structural Features for Spammer Detection in Online Social Networks. Foundations of Computing and Decision Sciences 40, 2 (2015), 89–105.
A Novel Weighted Distance Measure for Multi-Attributed Graph
[2] Rakesh Agrawal, Johannes Gehrke, Dimitrios Gunopulos, and Prabhakar Raghavan. 1998. Automatic Subspace Clustering of High Dimensional Data for Data
Mining Applications. In Proceedings of the ACM SIGMOD international conference
on Management of data. Seattle, Washington, USA, 94–105.
[3] Faraz Ahmed and Muhammad Abulaish. 2013. A Generic Statistical Approach
for Spam Detection in Online Social Networks. Computer Communications 36,
10-11 (2013), 1120–1129.
[4] Sajid Yousuf Bhat and Muhammad Abulaish. 2013. Analysis and Mining of
Online Social Networks - Emerging Trends and Challenges. WIREs Data Mining
and Knowledge Discovery 3, 6 (2013), 408–444.
[5] Sajid Yousuf Bhat and Muhammad Abulaish. 2014. HOCTracker: Tracking the
Evolution of Hierarchical and Overlapping Communities in Dynamic Social
Networks. IEEE Transactions on Knowledge and Data Engineering 27, 4 (2014),
1019–1032. DOI:http://dx.doi.org/10.1109/TKDE.2014.2349918
[6] Sajid Yousuf Bhat and Muhammad Abulaish. 2014. Using communities against
deception in online social networks. Computer Fraud & Security 2014, 2 (2014),
8–16.
[7] Sajid Yousuf Bhat and Muhammad Abulaish. 2015. OCMiner: A Density-Based
Overlapping Community Detection Method for Social Networks. Intelligent Data
Analysis 19, 4 (2015), 1–31.
[8] Sajid Yousuf Bhat and Muhammad Abulaish. 2017. A Unified Framework for
Community Structure Analysis in Dynamic Social Networks. In Hybrid Intelligence for Social Networks, Hema Banati, Siddhartha Bhattacharyya, Ashish Mani,
and Mario Koppen (Eds.). Springer, 1–21.
[9] Sajid Y. Bhat, Muhammad Abulaish, and Abdulrahman A. Mirza. 2014. Spammer Classification using Ensemble Methods over Structural Social Network
Features. In Proceedings of the 14th IEEE/WIC/ACM International Conference on
Web Intelligence (WI'14). Warsaw, Poland, 11–14.
[10] Mohd Fazil and Muhammad Abulaish. 2017. Identifying Active, Reactive, and
Inactive Targets of Socialbots in Twitter. In Proceedings of the IEEE/WIC/ACM
International Conference on Web Intelligence (WI'17), Leipzig, Germany. ACM,
573–580.
[11] Mohd Fazil and Muhammad Abulaish. 2017. Why a Socialbot is Effective in
Twitter? A Statistical Insight. In Proceedings of the 9th International Conference on
Communication Systems and Networks (COMSNETS), Social Networking Workshop,
Bengaluru, India. 562–567.
Compute ’17, November 16–18, 2017, Bhopal, India
[12] David Gibson, Jon Kleinberg, and Prabhakar Raghavan. 1998. Inferring Web
Communities from Link Topology. In Proceedings of the Ninth ACM Conference
on Hypertext and Hypermedia. ACM, New York, USA, 225–234.
[13] M. Girvan and M. E. J. Newman. 2002. Community structure in social and
biological networks. In Proceedings of the National Academy of Sciences. USA,
8271–8276.
[14] Natallia Katenka and Eric D. Kolaczyk. 2011. Multi-Attribute Networks and the
Impact of Partial Information on Inference and Characterization. The Annals of
Applied Statistics 6, 3 (2011), 1068–1094.
[15] Mladen Kolar, Han Liu, and Eric P. Xing. 2014. Graph Estimation From MultiAttribute Data. Journal of Machine Learning Research 15 (2014), 1713–1750.
http://jmlr.org/papers/v15/kolar14a.html
[16] M. E. Newman and M. Girvan. 2004. Finding and evaluating community structure
in networks. Physical Review E69, 026113 (Feb 2004).
[17] Raymond T. Ng and Jiawei Han. 1994. Efficient and Effective Clustering Methods
for Spatial Data Mining. In Proceedings of the 20th International Conference on
Very Large Data Bases. Santiago, Chile, 144–155.
[18] Peter J. Olver. 2008. Numerical Analysis Lecture Note. Retrieved on 18.03.2017
from http://www-users.math.umn.edu/~olver/num_/lnn.pdf.
[19] S. E. Schaeffer. 2007. Graph clustering. Computer Science Review 1, 1 (2007),
27–64. DOI:http://dx.doi.org/10.1016/j.cosrev.2007.05.001
[20] Yuanyuan Tian, Richard A. Hankins, and Jignesh M. Patel. 2008. Efficient Aggregation for Graph Summarization. In Proceedings of the ACM SIGMOD International
Conference on Management of Data. Vancouver, Canada, 567–580.
[21] E.W. Weisstein. 2002. Vector Norm. Wolfram MathWorld. Retrieved on 18.03.2017
from http://mathworld.wolfram.com/VectorNorm.html.
[22] Xiaowei Xu, Nurcan Yuruk, Zhidan Feng, and Thomas A. J. Schweiger. 2007.
SCAN: A Structural Clustering Algorithm for Networks. In Proceedings of the
13th ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining. San Jose, California, USA, 824–833.
[23] Mohammed J. Zaki and Wagner Meira Jr. 2014. Data Mining and Analysis:
Fundamental Concepts and Algorithms. Cambridge University Press, New York,
USA.
[24] Yang Zhou, Hong Cheng, and Jeffrey Xu Yu. 2009. Graph Clustering Based on
Structural/Attribute Similarities. VLDB Endowment 2, 1 (Aug 2009), 718–729.
DOI:http://dx.doi.org/10.14778/1687627.1687709
A Machine Learning Approach to Optimal Tikhonov
Regularization I: Affine Manifolds
Ernesto De Vito (corresponding author)
[email protected]
arXiv:1610.01952v2 [math.NA] 12 Oct 2017
DIMA, Università di Genova, Via Dodecaneso 35, Genova, Italy
Massimo Fornasier
[email protected]
Technische Universität München, Fakultät Mathematik, Boltzmannstrasse 3 D-85748,
Garching bei München, Germany
Valeriya Naumova
[email protected]
Simula Research Laboratory, Martin Linges vei 25, Fornebu, Norway
Abstract
Despite a variety of available techniques the issue of the proper regularization parameter
choice for inverse problems still remains one of the relevant challenges in the field. The
main difficulty lies in constructing a rule, allowing to compute the parameter from given
noisy data without relying either on any a priori knowledge of the solution or on the noise
level. In this paper we propose a novel method based on supervised machine learning to
approximate the high-dimensional function, mapping noisy data into a good approximation
to the optimal Tikhonov regularization parameter. Our assumptions are that solutions
of the inverse problem are statistically distributed in a concentrated manner on (lowerdimensional) linear subspaces and the noise is sub-gaussian. We show that the number of
previously observed examples for the supervised learning of the optimal parameter mapping
scales at most linearly with the dimension of the solution subspace. Then we also provide
explicit error bounds on the accuracy of the approximated parameter and the corresponding
regularization solution. Even though the results are more of theoretical nature, we present
a recipe for the practical implementation of the approach, we discuss its computational
complexity, and provide numerical experiments confirming the theoretical results. We also
outline interesting directions for future research with some preliminary results, confirming
their feasibility.
Keywords: Tikhonov regularization, parameter choice rule, sub-gaussian vectors, high
dimensional function approximations, concentration inequalities.
1. Introduction
In many practical problems, one cannot observe directly the quantities of most interest;
instead their values have to be inferred from their effects on observable quantities. When
this relationship between observable Y and the quantity of interest X is (approximately)
linear, as it is in surprisingly many cases, the situation can be modeled mathematically by
the equation
Y = AX
(1)
for A being a linear operator model. If A is a “nice”, easily invertible operator, and if the
data Y are noiseless and complete, then finding X is a trivial task. Often, however, the
mapping A is ill-conditioned or not invertible. Moreover, typically (1) is only an idealized
version, which completely neglects any presence of noise or disturbances; a more accurate
model is
Y = AX + η,
(2)
in which the data are corrupted by an (unknown) noise. In order to deal with this type of
reconstruction problem a regularization mechanism is required (Engl et al., 1996).
Regularization techniques attempt to incorporate as much as possible an (often vague) a
priori knowledge on the nature of the solution X. A well-known assumption which is often
used to regularize inverse problems is that the solution belongs to some ball of a suitable
Banach space.
Regularization theory has shown to play its major role for solving infinite dimensional
inverse problems. In this paper, however, we consider finite dimensional problems, since we
intend to use probabilistic techniques for which the Euclidean space is the most standard
setting. Accordingly, we assume the solution vector X ∈ Rd , the linear model A ∈ Rm×d ,
and the datum Y ∈ Rm . In the following we denote with kZk the Euclidean norm of a vector
Z ∈ RN . One of the most widely used regularization approaches is realized by minimizing
the following, so-called, Tikhonov functional
$$\min_{z\in\mathbb{R}^d} \|Az - Y\|^2 + \alpha\|z\|^2 \qquad (3)$$
with α ∈ (0, +∞). The regularized solution Z α := Z α (Y ) of such minimization procedure
is unique. In this context, the regularization scheme represents a trade-off between the
accuracy of fitting the data Y and the complexity of the solution, measured by a ball in
Rd with radius depending on the regularization parameter α. Therefore, the choice of the
regularization parameter α is very crucial to identify the best possible regularized solution,
which does not overfit the noise. This issue still remains one of the most delicate aspects
of this approach and other regularization schemes. Clearly the best possible parameter
minimizes the discrepancy between Z α and the solution X
$$\alpha^* = \arg\min_{\alpha\in(0,+\infty)} \|Z^{\alpha} - X\|.$$
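In practice, α* can be approximated for a given pair (X, Y) by scanning a finite grid of candidate values. The following NumPy sketch (the grid search and function names are our own illustrative choices, not prescribed by the paper) computes Z^α through the normal equations and returns the grid minimizer.

```python
import numpy as np

def tikhonov_solution(A, Y, alpha):
    """Regularized solution Z^alpha of (3): solve (A^T A + alpha I) z = A^T Y."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(d), A.T @ Y)

def optimal_alpha(A, X, Y, alphas):
    """Grid surrogate of alpha^* = argmin_alpha ||Z^alpha(Y) - X|| for one pair (X, Y)."""
    errors = [np.linalg.norm(tikhonov_solution(A, Y, a) - X) for a in alphas]
    return alphas[int(np.argmin(errors))]
```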
Unfortunately, we usually have neither access to the solution X nor to information about
the noise, for instance, we might not be aware of the noise level kηk. Hence, for determining
a possible good approximation to the optimal regularization parameter several approaches
have been proposed, which can be categorized into three classes
• A priori parameter choice rules based on the noise level and some known “smoothness”
of the solution encoded in terms, e.g., of the so-called source condition (Engl et al.,
1996);
• A posteriori parameter choice rules based on the datum Y and the noise level;
• A posteriori parameter choice rules based exclusively on the datum Y or, the so-called,
heuristic parameter choice rules.
For the latter two categories there are by now a multitude of approaches. Below we recall
the most used and relevant of them, indicating in square brackets their alternative names,
accepted in different scientific communities. In most cases, the names we provide are the
descriptive names originally given to the methods. However, in a few cases, there was no
original name, and, to achieve consistency in the naming, we have chosen an appropriate
one, reflecting the nature of the method. We mention, for instance, (transformed/modified)
discrepancy principle [Raus-Gfrerer rule, minimum bound method]; monotone error rule;
(fast/hardened) balancing principle also for white noise; quasi-optimality criterion; L-curve
method; modified discrepancy partner rule [Hanke-Raus rule]; extrapolated error method;
normalized cumulative periodogram method; residual method; generalized maximum likelihood; (robust/strong robust/modified) generalized cross-validation. Considering the large
number of available parameter choice methods, there are relatively few comparative studies
and we refer to (Bauer and Lukas, 2011) for a rather comprehensive discussion on their differences, pros and contra. One of the features which is common to most of the a posteriori
parameter choice rules is the need of solving (3) multiple times for different values of the
parameters α, often selected out of a conveniently pre-defined grid.
In this paper, we intend to study a novel, data-driven, regularization method, which also
yields approximations to the optimal parameter in Tikhonov regularization. After an offline learning phase, whose complexity scales at most algebraically with the dimensionality
of the problem, our method does not require any additional knowledge of the noise level; the
computation of a near-optimal regularization parameter can be performed very efficiently
by solving the regularization problem (3) only a moderate number of times; see Section 6 for a
discussion on the computational complexity. In particular cases, no solution of (3) is actually
needed, see Section 5. Not being based on the noise level, our approach fits into the class
of heuristic parameter choice rules (Kindermann, 2011). The approach aims at employing
the framework of supervised machine learning to the problem of approximating the highdimensional function, which maps noisy data into the corresponding optimal regularization
parameter. More precisely, we assume that we are allowed to see a certain number n of
examples of solutions Xi and corresponding noisy data Yi = AXi + ηi , for i = 1, . . . , n.
For all of these examples, we are clearly capable to compute the optimal regularization
parameters as in the following scheme
$$(X_1, Y_1) \to \alpha_1^* = \arg\min_{\alpha\in(0,+\infty)} \|Z^{\alpha}(Y_1) - X_1\|$$
$$(X_2, Y_2) \to \alpha_2^* = \arg\min_{\alpha\in(0,+\infty)} \|Z^{\alpha}(Y_2) - X_2\|$$
$$\vdots$$
$$(X_n, Y_n) \to \alpha_n^* = \arg\min_{\alpha\in(0,+\infty)} \|Z^{\alpha}(Y_n) - X_n\|$$
$$(??, Y) \to \bar\alpha$$
Denote by µ the joint distribution of the empirical samples $(Y_1, \alpha_1^*), \ldots, (Y_n, \alpha_n^*)$. Were its conditional distribution $\mu(\cdot \mid Y)$ with respect to the first variable Y very much concentrated (for instance, when $\int_0^{\infty} (\alpha - \bar\alpha)^q \, d\mu(\alpha \mid Y)$ is very small for q ≥ 1 and for variable Y), then we could design a proper regression function
$$R : Y \mapsto \bar\alpha := R(Y) = \int_0^{\infty} \alpha \, d\mu(\alpha \mid Y).$$
Such a mapping would allow us, to a given new datum Y (without given solution!), to
associate the corresponding parameter ᾱ not too far from the true optimal one α∗ , at least
with high probability. We illustrate schematically this theoretical framework in Figure 1.
Figure 1: Learning optimal regularization parameters from previously observed samples by
approximation of the regression function R.
At a first glance, this setting may seem quite hopeless. First of all, one should establish
the concentration of the conditional distribution generating α∗ given Y . Secondly, even
if we assume that the regression function R is very smooth, the vectors Y belong to the
space Rm and the number of observations n required to learn such a function need to scale
exponentially with the dimension m (Novak and Woźniakowski, 2009). It is clear that we
cannot address neither of the above issues in general. The only hope is that the solutions
are statistically distributed in a concentrated manner over smooth sets of lower dimension
h ≪ m and the noise has also a concentrated distribution, so that the corresponding data Y
are concentrated around lower-dimensional sets as well. And luckily these two assumptions
are to a certain extent realistic.
By now, the assumption that the possible solutions belong to a lower-dimensional set
of Rd has become an important prior for many signal and image processing tasks. For
instance, were solutions natural images, then it is known that images can be represented as
nearly-sparse coefficient vectors with respect to shearlets expansions (Kutyniok and Labate,
2012).
Hence, in this case the set of possible solutions can be stylized as a union of
lower-dimensional linear subspaces, consisting of sparse vectors (Mallat, 2009). In other
situations, it is known that the solution set can be stylized, at least locally, as a smooth
lower-dimensional nonlinear manifold V (Allard et al., 2012; Chen et al., 2013; Little et al.,
2017). Also in this case, at least locally, it is possible to approximate the solution set by
means of affine lower-dimensional sets, representing tangent spaces to the manifold. Hence,
the a priori knowledge that the solution is belonging to some special (often nonlinear) set
should also be taken into account when designing the regularization method.
In this paper, we want to show very rigorously how one can construct, from a relatively small number of previously observed examples, an approximation $\widehat R$ to the regression function R, which is mapping a noisy datum into a good approximation to the optimal
Tikhonov regularization parameter. To this end, we assume the solutions to be distributed
sub-gaussianly over a linear subspace V ⊂ Rd of dimension h ≪ m and the noise η to be
also sub-gaussian. The first statistical assumption is perhaps mostly technical to allow us
to provide rigorous estimates. Let us describe the method of computation as follows. We introduce the m × m noncentered covariance matrix built from the noisy measurements
$$\widehat\Sigma_n = \frac{1}{n}\sum_{i=1}^{n} Y_i \otimes Y_i,$$
and we denote by $\widehat\Pi_n$ the projection onto the vector space spanned by the first most relevant eigenvectors of $\widehat\Sigma_n$. Furthermore, we set $\widehat\alpha_n \in (0, +\infty)$ as the minimizer of
$$\min_{\alpha\in(0,+\infty)} \|Z^{\alpha} - A^{\dagger}\widehat\Pi_n Y\|^2,$$
where $A^{\dagger}$ is the pseudo-inverse. We define
$$\widehat R(Y) = \widehat\alpha_n$$
and we claim that this is actually a good approximation, up to noise level, to R as soon as n is large enough, without incurring the curse of dimensionality (i.e., without exponential dependency of the computational complexity on d). More precisely, we prove that, for a given τ > 0, with probability greater than $1 - 6e^{-\tau^2}$, we have that
$$\|Z^{\widehat\alpha_n} - X\| \le \|Z^{\alpha^*} - X\| + \frac{1}{\sigma_d}\, B(n, \tau, \sigma),$$
where $\sigma_d$ is the smallest positive singular value of A. Let us stress that $B(n, \tau, \sigma)$ gets actually small for small σ and for n = O(m × h) (see formula (33)), and $\widehat R(Y) = \widehat\alpha_n$ is σ-optimal. We provide an explicit expression for B in Proposition 6. In the special case where A = I we derive a bound on the difference between the learned parameter $\widehat\alpha_n$ and the optimal parameter $\alpha^*$, see Theorem 12, justifying even more precisely the approximation $\widehat R(Y) = \widehat\alpha_n \approx \alpha^* = R(Y)$.
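The rule just described can be prototyped in a few lines of NumPy. The sketch below is illustrative only: the grid of candidate parameters and the function name are our own assumptions. It forms the empirical covariance, projects the new datum onto its top-h eigenvectors, and returns the grid minimizer of the discrepancy to A⁺Π̂nY.

```python
import numpy as np

def learned_alpha(A, Ys, Y_new, h, alphas):
    """Sketch of the proposed rule: empirical covariance of the training data,
    projection of the new datum onto its top-h eigenvectors, then the alpha
    whose Tikhonov solution best matches the pseudo-inverse of the projection."""
    Sigma_n = sum(np.outer(Yi, Yi) for Yi in Ys) / len(Ys)      # hat Sigma_n
    eigvals, eigvecs = np.linalg.eigh(Sigma_n)
    U_h = eigvecs[:, np.argsort(eigvals)[::-1][:h]]             # top-h eigenvectors
    X_hat = np.linalg.pinv(A) @ (U_h @ (U_h.T @ Y_new))         # A^+ hat Pi_n Y
    d = A.shape[1]
    errors = [np.linalg.norm(
                  np.linalg.solve(A.T @ A + a * np.eye(d), A.T @ Y_new) - X_hat)
              for a in alphas]
    return alphas[int(np.argmin(errors))]                       # hat alpha_n on the grid
```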
The paper is organized as follows: After introducing some notation and problem set-up
in the next section, we provide the accuracy bounds on the learned estimators with respect
to their distribution dependent counterparts in Section 3. For the special case A = I we provide an explicit bound on the difference between the learned and the optimal regularization
parameter and discuss the amount of samples needed for an accurate learning in Section
4. We also exemplify the presented theoretical results with a few numerical illustrations.
Section 5 provides explicit formulas by means of numerical linearization for the parameter
learning. Section 6 offers a snapshot of the main contributions and presents a list of open
questions for future work. Finally, Appendix A and Appendix B contain some background
information on perturbation theory for compact operators, the sub-gaussian random variables, and proofs of some technical theorems, which are valuable for understanding the
scope of the paper.
2. Setting
This section presents some background material and sets the notation for the rest of the
work. First, we fix some notation. The Euclidean norm of a vector v is denoted by kvk
and the Euclidean scalar product between two vectors v, w by hv, wi. We denote with S d−1
the Euclidean unit sphere in Rd . If M is a matrix, M T denotes its transpose, M † the
pseudo-inverse, M †k = (M † )k and kM k its spectral norm. Furthermore, ker M and ran M
are the null space and the range of M respectively. For a square-matrix M , we use Tr(M )
to denote its trace. If v and w are vectors (possibly of different length), v ⊗ w is the rank
one matrix with entries (v ⊗ w)ij = vi wj .
Given a random vector ξ ∈ Rd , its noncentered covariance matrix is denoted by
Σξ = E[ξ ⊗ ξ],
which is a positive matrix satisfying the following property
ran Σξ = (ker Σξ )⊥ = span{x ∈ Rd | P[ξ ∈ B(x, r)] > 0 ∀r > 0},
(4)
here B(x, r) denotes the ball of radius r with the center at x. A random vector ξ is called
sub-gaussian if
$$\|\xi\|_{\psi_2} := \sup_{v\in S^{d-1}}\ \sup_{q\ge 1}\ q^{-1/2}\, \mathbb{E}\big[|\langle \xi, v\rangle|^q\big]^{1/q} < +\infty. \qquad (5)$$
The value kξkψ2 is called the sub-gaussian norm of ξ and the space of sub-gaussian vectors becomes a normed vector space (Vershynin, 2012). Appendix B reviews some basic
properties about sub-gaussian vectors.
We consider the following class of inverse problems.
Assumption 1 In the statistical linear inverse problem
Y = AX + σW,
the following conditions hold true:
a) A is an m × d-matrix with norm kAk = 1;
b) the signal X ∈ Rd is a sub-gaussian random vector with kXkψ2 = 1;
c) the noise W ∈ Rm is a sub-gaussian centered random vector independent of X with $\|W\|_{\psi_2} = 1/\sqrt{2}$ and with the noise level $0 < \sigma < \sqrt{2}$;
d) the covariance matrix $\Sigma_X$ of X has low rank, i.e., $\operatorname{rank}(\Sigma_X) = h \ll d$.
We add some comments on the above conditions. The normalisation assumptions on $\|A\|$, $\|X\|_{\psi_2}$ and $\|W\|_{\psi_2}$ are stated only to simplify the bounds. They can always be satisfied by rescaling A, X and W, and our results hold true by replacing σ with $\sqrt{2}\,\|W\|_{\psi_2}\|A\|^{-1}\|X\|_{\psi_2}^{-1}\sigma$.
The upper bound on σ reflects the intuition that σW is a small perturbation of the noiseless
problem.
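For numerical experiments one may generate data compatible with Assumption 1 as in the following NumPy sketch. It is illustrative only: it uses Gaussian vectors, which are sub-gaussian, and ignores the exact normalisations of the sub-gaussian norms; the function name and the seeding are our own choices.

```python
import numpy as np

def sample_problem(A, h, sigma, n, seed=0):
    """Draw n samples (X_i, Y_i) with X_i Gaussian on a random h-dimensional
    subspace V of R^d and Y_i = A X_i + sigma W_i (a sketch of Assumption 1)."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    basis, _ = np.linalg.qr(rng.standard_normal((d, h)))    # orthonormal basis of V
    X = rng.standard_normal((n, h)) @ basis.T                # rows of X lie in V
    W = rng.standard_normal((n, m))
    Y = X @ A.T + sigma * W
    return X, Y
```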
Condition d) means that X spans a low dimensional subspace of Rd . Indeed, by (4)
condition d) is equivalent to the fact that the vector space
V = ran ΣX = span{x ∈ Rd | P[X ∈ B(x, r)] > 0 for all r > 0}
(6)
is an h-dimensional subspace and h is the dimension of the minimal subspace containing X
with probability 1, i.e.,
$$h = \min_{K} \dim K,$$
where the minimum is taken over all subspaces K ⊂ Rd such that P[X ∈ K] = 1.
We write a . b if there exists an absolute constant C such that a ≤ Cb. By absolute
we mean that it holds for all the problems Y = AX + σW satisfying Assumption 1, in
particular, it is independent of d, m and h.
The datum Y depends only on the projection X † of X onto ker A⊥ and Z α as solutions
of (3) also belong to ker A⊥ . Therefore, we can always assume, without loss of generality,
for the rest of the paper that A is injective by replacing X with X † , which is a sub-gaussian
random vector, and Rd with ker A⊥ .
Since A is injective, rank(A) = d and we define the singular value decomposition of A
by (ui , vi , σi )di=1 , so that A = U DV T or
Avi = σi ui ,
i = 1, . . . , d,
where σ1 ≥ σ2 ≥ · · · ≥ σd > 0. Since kAk = 1, clearly σ1 = 1. Furthermore, let Q be the
projection onto the span{u1 , . . . , ud }, so that QA = A, and we have the decomposition
Q = AA† .
(7)
Recalling (6), since A is now assumed injective and
ΣAX = E[AX ⊗ AX] = AΣX AT ,
then
W = ran ΣAX = (ker ΣAX )⊥ = AV,
(8)
and, by condition d) in Assumption 1, we have as well dim W = h.
We denote by Π the projection onto W and by
p = max{i ∈ {1, . . . , d} | Πui 6= 0},
(9)
so that, with probability 1,
$$\Pi A X = AX \qquad\text{and}\qquad X = \sum_{i=1}^{p} \langle X, v_i\rangle\, v_i. \qquad (10)$$
Finally, the random vectors η = σW, AX, and Y are sub-gaussian and take values in Rm, W, and Rm, respectively, with
$$\|AX\|_{\psi_2} \le \|A^T\|\,\|X\|_{\psi_2} = 1, \qquad \|Y\|_{\psi_2} \le \|AX\|_{\psi_2} + \sigma\|W\|_{\psi_2} \le 2 \qquad (11)$$
since, by Assumption 1, $\|A\| = 1$ and $\sigma \le \sqrt{2}$.
For any t ∈ [0, 1] we set $Z^t$ as the solution of the minimization problem
$$\min_{z\in\mathbb{R}^d}\ t\|Az - Y\|^2 + (1-t)\|z\|^2, \qquad (12)$$
which is the solution of the Tikhonov functional
$$\min_{z\in\mathbb{R}^d}\ \|Az - Y\|^2 + \alpha\|z\|^2$$
with α = (1 − t)/t ∈ [0, +∞]. For t < 1, the solution is unique; for t = 1 the minimizer is not unique and we set $Z^1 = A^{\dagger}Y$. The explicit form of the solution of (12) is given by
$$Z^t = t(tA^TA + (1-t)I)^{-1}A^TY = \sum_{i=1}^{d} \frac{t\sigma_i}{t\sigma_i^2 + (1-t)}\,\langle Y, u_i\rangle\, v_i = \sum_{i=1}^{d}\left(\frac{t\sigma_i^2}{t\sigma_i^2+(1-t)}\,\langle X, v_i\rangle + \frac{t\sigma_i}{t\sigma_i^2+(1-t)}\,\langle\eta, u_i\rangle\right) v_i, \qquad (13)$$
which shows that Z t is also a sub-gaussian random vector.
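A direct NumPy transcription of the closed form (13), assuming an injective A and using an illustrative function name, reads as follows.

```python
import numpy as np

def z_t(A, Y, t):
    """Z^t from the closed form (13), computed through the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = t * s / (t * s**2 + (1.0 - t))     # t*sigma_i / (t*sigma_i^2 + (1-t))
    return Vt.T @ (coeff * (U.T @ Y))          # sum_i coeff_i <Y, u_i> v_i
```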
We seek for the optimal parameter t∗ ∈ [0, 1] that minimizes the reconstruction error
$$\min_{t\in[0,1]} \|Z^t - X\|^2. \qquad (14)$$
Since X is not known, the optimal parameter t∗ can not be computed. We assume that we
have at disposal a training set of n-independent noisy data
Y1 , . . . , Yn ,
where Yi = AXi + σWi , and each pair (Xi , Wi ) is distributed as (X, W ), for i = 1, . . . , n.
We introduce the m × m empirical covariance matrix
$$\widehat\Sigma_n = \frac{1}{n}\sum_{i=1}^{n} Y_i \otimes Y_i, \qquad (15)$$
and we denote by $\widehat\Pi_n$ the projection onto the vector space spanned by the first h eigenvectors of $\widehat\Sigma_n$, where the corresponding (repeated) eigenvalues are ordered in a nonincreasing way.
Remark 1 The well-posedness of the empirical realization $\widehat\Pi_n$ in terms of the spectral gap at the h-th eigenvalue will be given in Theorem 3, where we show that for n large enough the (h+1)-th eigenvalue is strictly smaller than the h-th eigenvalue.
We define the empirical estimators of X and η as
$$\widehat X = A^{\dagger}\widehat\Pi_n Y \qquad\text{and}\qquad \widehat\eta = (Y - \widehat\Pi_n Y), \qquad (16)$$
so that, by Equation (7),
$$A\widehat X + Q\widehat\eta = QY. \qquad (17)$$
Furthermore, we set $\widehat t_n \in [0, 1]$ as the minimizer of
$$\min_{t\in[0,1]} \|Z^t - \widehat X\|^2.$$
If $\widehat X$ is close to X, we expect that the solution $Z^{\widehat t_n}$ has a reconstruction error close to the minimum value. In the following sections, we study the statistical properties of $\widehat t_n$. However, we first provide some a priori information on the optimal regularization parameter $t^*$.
2.1 Distribution dependent quantities
We define the function $t \mapsto \|R(t)\|^2$, where
$$R(t) = Z^t - X, \qquad t \in [0,1],$$
is the reconstruction error vector. Clearly, the function $t \mapsto \|R(t)\|^2$ is continuous, so that a global minimizer $t^*$ always exists in the compact interval $[0,1]$.
Define for all $t \in [0,1]$ the $d \times d$ matrix
$$B(t) = tA^TA + (1-t)I = \sum_{i=1}^{d} (t\sigma_i^2 + 1 - t)\, v_i \otimes v_i,$$
which is invertible since $A$ is injective, and its inverse is
$$B(t)^{-1} = \sum_{i=1}^{d} \frac{1}{t\sigma_i^2 + 1 - t}\, v_i \otimes v_i.$$
Furthermore, $B(t)$ and $B(t)^{-1}$ are smooth functions of the parameter $t$ and
$$B'(t) = (A^TA - I), \qquad (B(t)^{-1})' = -B(t)^{-2}(A^TA - I). \qquad (18)$$
Since $Y = AX + \eta$, expression (13) gives
$$R(t) = tB(t)^{-1}A^TY - X = tB(t)^{-1}A^T(AX + \eta) - X = B(t)^{-1}(tA^TAX - B(t)X + tA^T\eta) = B(t)^{-1}(-(1-t)X + tA^T\eta).$$
Hence,
$$\|R(t)\|^2 = \|B(t)^{-1}(-(1-t)X + tA^T\eta)\|^2 = \sum_{i=1}^{d}\left( \frac{-(1-t)\xi_i + t\sigma_i\nu_i}{t\sigma_i^2 + (1-t)} \right)^2, \qquad (19)$$
where for all $i = 1, \dots, d$,
$$\xi_i = \langle X, v_i\rangle, \qquad \nu_i = \langle \eta, u_i\rangle.$$
In order to characterize $t^*$ we may want to seek it among the zeros of the following function:
$$H(t) = \frac{1}{2}\frac{d}{dt}\|Z^t - X\|^2 = \langle R(t), R'(t)\rangle.$$
Taking into account (18), the differentiation of (19) is given by
$$R'(t) = B(t)^{-1}A^TY - tB(t)^{-2}(A^TA - I)A^TY = B(t)^{-2}\big(B(t) - t(A^TA - I)\big)A^TY = B(t)^{-2}A^TY, \qquad (20)$$
so that
$$H(t) = \langle AB(t)^{-3}(-(1-t)X + tA^T\eta),\, AX + \eta\rangle
= \sum_{i=1}^{d}\sigma_i\,\frac{-(1-t)\xi_i + t\sigma_i\nu_i}{(t\sigma_i^2 + (1-t))^3}\,(\xi_i\sigma_i + \nu_i)
= \sum_{i=1}^{d}\sigma_i\xi_i(\xi_i\sigma_i + \nu_i)\,\frac{(\sigma_i\nu_i\xi_i^{-1} + 1)t - 1}{(1 - (1-\sigma_i^2)t)^3}
= \sum_{i=1}^{d}\sigma_i\alpha_i h_i(t), \qquad (21)$$
where $\alpha_i = \xi_i(\sigma_i\xi_i + \nu_i)$ and $h_i(t) = \dfrac{(\sigma_i\nu_i\xi_i^{-1} + 1)t - 1}{(1 - (1-\sigma_i^2)t)^3}$.
We observe that
a) if $t = 0$ (i.e., $\alpha = +\infty$), $B(0) = I$, then
$$H(0) = -\|AX\|^2 - \langle AX, \eta\rangle,$$
which is negative if $\|\Pi\eta\| \le \|AX\|$, i.e., for
$$\sigma \le \frac{\|AX\|}{\|\Pi W\|}.$$
Furthermore, by construction,
$$\mathbb{E}[H(0)] = -\operatorname{Tr}(\Sigma_{AX}) < 0;$$
b) if $t = 1$ (i.e., $\alpha = 0$), $B(1) = A^TA$ and
$$H(1) = \langle A(A^TA)^{-3}A^T\eta,\, AX + \eta\rangle = \|(AA^T)^\dagger\eta\|^2 + \langle (AA^T)^\dagger\eta, (A^T)^\dagger X\rangle,$$
which is positive if $\|(AA^T)^\dagger\eta\| \ge \|(A^T)^\dagger X\|$, for example, when
$$\sigma \ge \sigma_d\,\frac{\|X\|}{|\langle W, u_d\rangle|}.$$
Furthermore, by construction,
$$\mathbb{E}[H(1)] = \operatorname{Tr}(\Sigma_{(AA^T)^\dagger\eta}) > 0.$$
Hence, if the noise level satisfies
$$\sigma_d\,\frac{\|X\|}{|\langle W, u_d\rangle|} \le \sigma \le \frac{\|AX\|}{\|\Pi W\|},$$
the minimizer $t^*$ is in the open interval $(0,1)$ and it is a zero of $H(t)$. If $\sigma$ is too small, there is no need for regularization, since we are dealing with a finite dimensional problem. On the opposite side, if $\sigma$ is too big, the best solution is the trivial one, i.e., $Z^{t^*} = 0$.
2.2 Empirical quantities
We replace $X$ and $\eta$ with their empirical counterparts defined in (16). By Equation (17) and reasoning as in Equation (19), we obtain
$$\widehat{R}_n(t) = Z^t - \widehat{X} = tB(t)^{-1}A^TQY - \widehat{X} = B(t)^{-1}\big(-(1-t)\widehat{X} + tA^T\widehat{\eta}\big),$$
and
$$\|\widehat{R}_n(t)\|^2 = \|B(t)^{-1}(-(1-t)\widehat{X} + tA^T\widehat{\eta})\|^2 = \sum_{i=1}^{d}\left(\frac{-(1-t)\widehat{\xi}_i + t\sigma_i\widehat{\nu}_i}{t\sigma_i^2 + (1-t)}\right)^2, \qquad (22)$$
where for all $i = 1, \dots, d$,
$$\widehat{\xi}_i = \langle \widehat{X}, v_i\rangle \qquad \text{and} \qquad \widehat{\nu}_i = \langle \widehat{\eta}, u_i\rangle.$$
Clearly,
$$\widehat{R}_n'(t) = R'(t) = B(t)^{-2}A^TQY.$$
From (21), we get
$$\widehat{H}_n(t) = \langle \widehat{R}_n(t), \widehat{R}_n'(t)\rangle
= \langle B(t)^{-3}(-(1-t)\widehat{X} + tA^T\widehat{\eta}),\, A^TA\widehat{X} + A^T\widehat{\eta}\rangle \qquad (23)$$
$$= \sum_{i=1}^{d}\frac{-(1-t)\widehat{\xi}_i + t\sigma_i\widehat{\nu}_i}{(1 - (1-\sigma_i^2)t)^3}\,(\widehat{\xi}_i\sigma_i^2 + \sigma_i\widehat{\nu}_i)
= \sum_{i=1}^{d}\sigma_i\widehat{\alpha}_i\widehat{h}_i(t),$$
where $\widehat{\alpha}_i = \widehat{\xi}_i(\sigma_i\widehat{\xi}_i + \widehat{\nu}_i)$ and $\widehat{h}_i(t) = \dfrac{(\sigma_i\widehat{\nu}_i\widehat{\xi}_i^{-1} + 1)t - 1}{(1 - (1-\sigma_i^2)t)^3}$.
An alternative form in terms of $Y$ and $\widehat{\Pi}_n$, which can be useful as a different numerical implementation, is
$$\widehat{H}_n(t) = \langle B(t)^{-1}(tA^T(Y - \widehat{\Pi}_nY) - (1-t)A^\dagger\widehat{\Pi}_nY),\, B(t)^{-2}A^TY\rangle
= \langle tAA^T(Y - \widehat{\Pi}_nY) - (1-t)Q\widehat{\Pi}_nY,\, (tAA^T + (1-t)I)^{\dagger 3}QY\rangle
= \langle tAA^T(Y - \widehat{\Pi}_nY) - (1-t)\widehat{\Pi}_nY,\, (tAA^T + (1-t)I)^{\dagger 3}QY\rangle.$$
As for $t^*$, the minimizer $\widehat{t}_n$ of the function $t \mapsto \|\widehat{R}_n(t)\|^2$ always exists in $[0,1]$ and, for $\sigma$ in the range of interest, it is in the open interval $(0,1)$, so that it is a zero of the function $\widehat{H}_n(t)$.
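As a hedged illustration (ours, not the authors' implementation), the zero of $\widehat{H}_n$ in $(0,1)$ can be located by a standard bracketing root-finder once the empirical coefficients $\widehat{\xi}_i$ and $\widehat{\nu}_i$ are available; the sketch below assumes NumPy/SciPy and the SVD notation used above.

```python
import numpy as np
from scipy.optimize import brentq

def t_hat_root(s, xi_hat, nu_hat):
    """Locate the zero of H_hat_n(t) on (0, 1); s holds the singular values sigma_i (Eq. 23)."""
    def H(t):
        num = -(1.0 - t) * xi_hat + t * s * nu_hat
        den = (t * s**2 + (1.0 - t)) ** 3
        return float(np.sum(num * (xi_hat * s**2 + s * nu_hat) / den))
    if H(0.0) >= 0.0:                      # error already increasing at t = 0
        return 0.0
    if H(1.0) <= 0.0:                      # error still decreasing at t = 1
        return 1.0
    return brentq(H, 1e-12, 1.0 - 1e-12)   # sign change guaranteed by the two checks above
```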
3. Concentration inequalities
In this section, we bound the difference between the empirical estimators and their distribution dependent counterparts.
By (8) and item d) of Assumption 1, the covariance matrix ΣAX has rank h and, we
set λmin to be the smallest non-zero eigenvalue of ΣAX . Furthermore, we denote by ΠY
the projection from Rm onto the vector space spanned by the eigenvectors of ΣY with
corresponding eigenvalue greater than λmin /2.
The following proposition shows that ΠY is close to Π if the noise level is small enough.
Proposition 2 If $\sigma^2 < \lambda_{\min}/4$, then $\dim \operatorname{ran}\Pi_Y = h$ and
$$\|\Pi_Y - \Pi\| \le \frac{2\sigma^2}{\lambda_{\min}}. \qquad (24)$$
Proof Since $AX$ and $W$ are independent and $W$ has zero mean, then
$$\Sigma_Y = \Sigma_{AX} + \sigma^2\Sigma_W.$$
Since $\Sigma_W$ is a positive matrix and $W$ is a sub-gaussian vector satisfying (11), with the choice $q = 2$ in (5), we have
$$\|\Sigma_W\| = \sup_{v \in S^{m-1}}\langle\Sigma_Wv, v\rangle = \sup_{v \in S^{m-1}}\mathbb{E}[\langle W, v\rangle^2] \le 2\|W\|_{\psi_2}^2 = 1, \qquad (25)$$
so that $\|\Sigma_Y - \Sigma_{AX}\| \le \sigma^2 < \lambda_{\min}/4$.
We now apply Proposition 15 with $A = \Sigma_{AX}$, eigenvalues $(\alpha_j)_j$ and projections$^1$ $(P_j)_j$, and $B = \Sigma_Y$ with eigenvalues $(\beta_\ell)_\ell$ and projections $(Q_\ell)_\ell$. We choose $j$ such that $\alpha_j = \lambda_{\min}$, so that $\alpha_{j+1} = 0$, $P_j = \Pi$ and, by (8),
$$\dim\operatorname{ran}P_j = \dim\operatorname{ran}\Pi = \dim\operatorname{ran}\Sigma_{AX} = h.$$
Then there exists $\ell$ such that $\beta_{\ell+1} < \lambda_{\min}/2 < \beta_\ell$, so that $Q_\ell = \Pi_Y$ and it holds that $\dim\operatorname{ran}\Pi_Y = \dim\operatorname{ran}P_j = h$. Finally, (45) implies (24) since $\alpha_{j+1} = 0$.

1. In the statement of Proposition 15 the eigenvalues are counted without their multiplicity and ordered in a decreasing way, and each $P_j$ is the projection onto the vector space spanned by the eigenvectors with corresponding eigenvalue greater or equal than $\alpha_j$.
Recall that $\widehat{\Pi}_n$ is the projection onto the vector space spanned by the first $h$ eigenvectors of $\widehat{\Sigma}_n$ defined by (15).

Theorem 3 Given $\tau > 0$, with probability greater than $1 - 2e^{-\tau^2}$, $\widehat{\Pi}_n$ coincides with the projection onto the vector space spanned by the eigenvectors of $\widehat{\Sigma}_n$ with corresponding eigenvalue greater than $\lambda_{\min}/2$. Furthermore,
$$\|\widehat{\Pi}_n - \Pi\| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right), \qquad (26)$$
provided that
$$n \gtrsim (\sqrt{m} + \tau)^2\max\left\{\frac{64}{\lambda_{\min}^2}, 1\right\}, \qquad \sigma^2 < \frac{\lambda_{\min}}{8}. \qquad (27)$$
Proof We apply Theorem 20 with $\xi_i = Y_i$ and
$$\delta = C\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}}\right) \le \min\{1, \lambda_{\min}/8\} \le 1, \qquad (28)$$
by (25). Since $\delta^2 \le \delta$, with probability greater than $1 - 2e^{-\tau^2}$,
$$\|\widehat{\Sigma}_n - \Sigma_{AX}\| \le \|\widehat{\Sigma}_n - \Sigma_Y\| + \|\Sigma_Y - \Sigma_{AX}\| \le C\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}}\right) + \sigma^2 \le \frac{\lambda_{\min}}{8} + \frac{\lambda_{\min}}{8} = \frac{\lambda_{\min}}{4},$$
where the last inequality follows by (28).
As in the proof of Proposition 2, we apply Proposition 15 with $A = \Sigma_{AX}$, $B = \widehat{\Sigma}_n$ and $\alpha_j = \lambda_{\min}$ the smallest eigenvalue of $\Sigma_{AX}$, so that $P_j = \Pi$ and $\dim\operatorname{ran}P_j = h$. Then, there exists a unique eigenvalue $\beta_\ell$ of $\widehat{\Sigma}_n$ such that $\beta_{\ell+1} < \lambda_{\min}/2 < \beta_\ell$ and $\dim\operatorname{ran}Q_\ell = \dim\operatorname{ran}P_j = h$. Then $Q_\ell = \widehat{\Pi}_n$ and (45) implies (26). Note that the constant $C$ depends on $\|Y\|_{\psi_2} \le 2$ by (11), so that it becomes an absolute constant when considering the worst case $\|Y\|_{\psi_2} = 2$.
If $n$ and $\sigma$ satisfy (27), the above proposition shows that the empirical covariance matrix $\widehat{\Sigma}_n$ has a spectral gap around the value $\lambda_{\min}/2$ and the number of eigenvectors with corresponding eigenvalue greater than $\lambda_{\min}/2$ is precisely $h$, so that $\widehat{\Pi}_n$ is uniquely defined. Furthermore, the dimension $h$ can be estimated by observing spectral gaps in the singular value decomposition of $\widehat{\Sigma}_n$.
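As a hedged illustration of this gap-based selection (our own heuristic sketch, not prescribed by the paper; plain NumPy), one may pick $h$ at the position of the largest drop among the leading eigenvalues of $\widehat{\Sigma}_n$:

```python
import numpy as np

def estimate_h(Sigma_n, max_rank=None):
    """Pick h at the largest spectral gap of the empirical covariance (heuristic sketch)."""
    w = np.sort(np.linalg.eigvalsh(Sigma_n))[::-1]     # eigenvalues, nonincreasing
    k = max_rank or len(w) - 1
    gaps = w[:k] - w[1:k + 1]                           # consecutive gaps lambda_i - lambda_{i+1}
    return int(np.argmax(gaps)) + 1                     # h = position of the largest gap
```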
If the number $n$ of samples goes to infinity, bound (26) does not converge to zero due to the term proportional to the noise level $\sigma$. However, if the random noise $W$ is isotropic, we can improve the estimate.
Theorem 4 Assume that $\Sigma_W = \mathrm{Id}$. Given $\tau > 0$, with probability greater than $1 - 2e^{-\tau^2}$,
$$\|\widehat{\Pi}_n - \Pi\| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}}\right), \qquad (29)$$
provided that
$$n \gtrsim (\sqrt{m} + \tau)^2\max\left\{1, \frac{16}{\lambda_{\min}^2}\right\}, \qquad \sigma^2 < \frac{\lambda_{\min}}{2}. \qquad (30)$$
Proof As in the proof of Proposition 2, we have that
$$\Sigma_Y = \Sigma_{AX} + \sigma^2\Sigma_W = \Sigma_{AX} + \sigma^2\mathrm{Id},$$
where the last equality follows from the assumption that the noise is isotropic. Hence, the matrices $\Sigma_Y$ and $\Sigma_{AX}$ have the same eigenvectors, whereas the corresponding eigenvalues are shifted by $\sigma^2$. Taking into account that $\lambda_{\min}$ is the smallest non-zero eigenvalue of $\Sigma_{AX}$ and denoting by $(\alpha_j)_{j=1}^N$ the eigenvalues of $\Sigma_Y$, it follows that there exists $j = 1, \dots, N$ such that
$$\alpha_1 > \alpha_2 > \cdots > \alpha_j = \lambda_{\min} + \sigma^2, \qquad \alpha_{j+1} = \cdots = \alpha_N = \sigma^2.$$
Furthermore, denoting by $P_j$ the projection onto the vector space spanned by the eigenvectors with corresponding eigenvalue greater or equal than $\alpha_j$, it holds that $\Pi = P_j$. By assumption $\sigma^2 < \lambda_{\min}/2$, so that $\Pi_Y = P_j = \Pi$ and, hence, $\dim\operatorname{ran}P_j = h$.
As in the proof of Theorem 3, with probability $1 - 2e^{-\tau^2}$,
$$\|\widehat{\Sigma}_n - \Sigma_Y\| \le C\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}}\right) < \min\left\{1, \frac{\alpha_h - \alpha_{h+1}}{4}\right\} \le \frac{\lambda_{\min}}{4},$$
where $n$ is large enough, see (30). Then, there exists a unique eigenvalue $\beta_\ell$ of $\widehat{\Sigma}_n$ such that $\beta_{\ell+1} < \frac{\lambda_{\min}}{2} + \sigma^2 < \beta_\ell$ and $\dim\operatorname{ran}Q_\ell = \dim\operatorname{ran}P_j = h$. Then $Q_\ell = \widehat{\Pi}_n$ and (45) implies (29).
We need the following technical lemma.

Lemma 5 Given $\tau > 0$, with probability greater than $1 - 4e^{-\tau^2}$, simultaneously it holds that
$$\|X\| \lesssim (\sqrt{h} + \tau), \qquad \|Y\| \lesssim (\sqrt{h} + \sigma\sqrt{m} + \tau), \qquad \|\Pi W\| \lesssim (\sqrt{h} + \tau). \qquad (31)$$
Proof Since $X$ is a sub-gaussian random vector taking values in $\mathcal{V}$ with $h = \dim\mathcal{V}$, taking into account that $\|X\|_{\psi_2} = 1$, bound (50) gives
$$\|X\| \le 9(\sqrt{h} + \tau)$$
with probability greater than $1 - 2e^{-\tau^2}$. Since $W$ is a centered sub-gaussian random vector taking values in $\mathbb{R}^m$ and $\|W\|_{\psi_2} \le 1$, by (51),
$$\|W\| \le 16(\sqrt{m} + \tau)$$
with probability greater than $1 - e^{-\tau^2}$. Since $\|A\| = 1$ and
$$\|Y\| \le \|AX\| + \sigma\|W\| \le \|X\| + \sigma\|W\|,$$
the first two bounds in (31) hold true with probability greater than $1 - 3e^{-\tau^2}$. Since $\Pi W$ is a centered sub-gaussian random vector taking values in $\mathcal{W}$ with $h = \dim\mathcal{W}$, and $\|\Pi W\|_{\psi_2} \le 1$, by (51),
$$\|\Pi W\| \le 16(\sqrt{h} + \tau)$$
with probability greater than $1 - e^{-\tau^2}$.
As a consequence, we have the following bound.
Proposition 6 Given $\tau > 0$, if $n$ and $\sigma$ satisfy (27), then with probability greater than $1 - 6e^{-\tau^2}$,
$$\|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\| \lesssim B(n, \tau, \sigma), \qquad (32)$$
where
$$B(n, \tau, \sigma) = \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{hm}{n}} + \sigma\frac{m}{\sqrt{n}} + \sigma^2\sqrt{h} + \sigma^3\sqrt{m}\right) + \sigma\sqrt{h} + \tau\left(\frac{1}{\lambda_{\min}}\sqrt{\frac{m}{n}} + \sigma\Big(1 + \frac{1}{\lambda_{\min}}\sqrt{\frac{m}{n}}\Big) + \frac{\sigma^2}{\lambda_{\min}}\right) + \tau^2\frac{1}{\lambda_{\min}\sqrt{n}}. \qquad (33)$$
Proof Clearly,
$$\|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\| \le \|\Pi - \widehat{\Pi}_n\|\,\|Y\| + \sigma\|\Pi W\|.$$
If (27) holds true, bounds (26) and (31) imply
$$\|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right)(\sqrt{h} + \sigma\sqrt{m} + \tau) + \sigma(\sqrt{h} + \tau),$$
with probability greater than $1 - 6e^{-\tau^2}$. By developing the brackets and taking into account that $\sqrt{h} + \sqrt{m} \le 2\sqrt{m}$, (32) holds true.
Remark 7 Usually in machine learning, bounds of the type (32) are considered in terms of their expectation, e.g., with respect to $(X, Y)$. In our framework, this would amount to the following bound:
$$\mathbb{E}\big[\|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\| \,\big|\, Y_1, \dots, Y_n\big] \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right)(\sqrt{h} + \sigma\sqrt{m}) + \sigma\sqrt{h},$$
obtained by observing that $\mathbb{E}[\|Y\|] \le \mathbb{E}[\|A\|\,\|X\|] + \sigma\mathbb{E}[\|W\|]$,
$$\mathbb{E}[\|X\|]^2 \le \mathbb{E}[\|X\|^2] = \operatorname{Tr}(\Sigma_X) \le 2h\|X\|_{\psi_2}^2 \lesssim h,$$
and, by a similar computation, $\mathbb{E}[\|W\|] \lesssim \sqrt{m}$ and $\mathbb{E}[\|\Pi W\|] \lesssim \sqrt{h}$. Our bound (32) is much stronger and it holds in probability with respect to both the training set $Y_1, \dots, Y_n$ and the new pair $(X, Y)$.
Our first result is a direct consequence of the estimate (32).

Theorem 8 Given $\tau > 0$, with probability greater than $1 - 6e^{-\tau^2}$,
$$\|\widehat{X} - X\| \lesssim \frac{1}{\sigma_d}B(n, \tau, \sigma), \qquad \|Q\widehat{\eta} - Q\eta\| \lesssim B(n, \tau, \sigma),$$
$$\|Z^{\widehat{t}_n} - X\| - \|Z^{t^*} - X\| \lesssim \frac{1}{\sigma_d}B(n, \tau, \sigma), \qquad \sup_{0 \le t \le 1}\big|\,\|\widehat{R}_n(t)\| - \|R(t)\|\,\big| \lesssim \frac{1}{\sigma_d}B(n, \tau, \sigma),$$
provided that $n$ and $\sigma$ satisfy (27).
Proof By the first identity of (10),
$$X - \widehat{X} = A^\dagger\Pi AX - A^\dagger\widehat{\Pi}_n(AX + \eta) = A^\dagger(\Pi - \widehat{\Pi}_n)AX + A^\dagger(\Pi - \widehat{\Pi}_n)\eta - A^\dagger\Pi\eta = A^\dagger\big((\Pi - \widehat{\Pi}_n)Y - \Pi\eta\big), \qquad (34)$$
so that
$$\|X - \widehat{X}\| \le \frac{1}{\sigma_d}\|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\|.$$
An application of (32) to the previous estimate gives the first bound of the statement.
Similarly, we derive the second bound as follows. Equations (17), (7), and (34) yield
$$Q\eta - Q\widehat{\eta} = A(X - \widehat{X}) = Q\big((\Pi - \widehat{\Pi}_n)Y - \Pi\eta\big).$$
The other bounds follow by estimating them by multiples of $\|X - \widehat{X}\|$, as we show below. By definition of $\widehat{t}_n$,
$$\|Z^{\widehat{t}_n} - X\| \le \|Z^{\widehat{t}_n} - \widehat{X}\| + \|X - \widehat{X}\| \le \|Z^{t^*} - \widehat{X}\| + \|X - \widehat{X}\| \le \|Z^{t^*} - X\| + 2\|X - \widehat{X}\|.$$
Furthermore,
$$\widehat{R}_n(t) - R(t) = X - \widehat{X} = A^\dagger\big((\Pi - \widehat{\Pi}_n)Y - \Pi\eta\big), \qquad (35)$$
and the triangle inequality gives
$$\sup_{0 \le t \le 1}\big|\,\|\widehat{R}_n(t)\| - \|R(t)\|\,\big| \le \sup_{0 \le t \le 1}\|\widehat{R}_n(t) - R(t)\| = \|X - \widehat{X}\|.$$
All the remaining bounds in the statement of the theorem are now consequences of (32).
Remark 9 To justify and explain the consistency of the sampling strategy for approximation of the optimal regularization parameter $t^*$, let us assume that $n$ goes to infinity and $\sigma$ vanishes. Under this theoretical assumption, Theorem 8 shows that $\|\widehat{R}_n(t)\|$ converges uniformly to $\|R(t)\|$ with high probability. The uniform convergence implies the $\Gamma$-convergence (see Braides, 2001), and, since the domain $[0,1]$ is compact, Theorem 1.22 in Braides (2001) ensures that
$$\lim_{\substack{n \to +\infty \\ \sigma \to 0}}\Big(\inf_{0 \le t \le 1}\|\widehat{R}_n(t)\| - \inf_{0 \le t \le 1}\|R(t)\|\Big) = 0.$$
While the compactness given by the $\Gamma$-convergence guarantees the consistency of the approximation to an optimal parameter, it is much harder for arbitrary $A$ to provide an error bound depending on $n$. For the case $A = I$, in Section 4.4 we are able to establish very precise quantitative bounds with high probability.
Remark 10 Under the conditions of Theorem 8, for all $i = 1, \dots, d$ it holds as well that
$$|\xi_i - \widehat{\xi}_i| \lesssim \frac{1}{\sigma_i}B(n, \tau, \sigma), \qquad |\nu_i - \widehat{\nu}_i| \lesssim B(n, \tau, \sigma).$$
These bounds are a direct consequence of Theorem 8.
The following theorem is about the uniform approximation of the derivative function $H(t)$.

Theorem 11 Given $\tau > 0$, with probability greater than $1 - 10e^{-\tau^2}$,
$$\sup_{0 \le t \le 1}|\widehat{H}_n(t) - H(t)| \lesssim B(n, \tau, \sigma)\left(\frac{1}{\sigma_p^3}(\sqrt{h} + \tau) + \frac{\sigma}{\sigma_d^4}(\sqrt{d} + \tau)\right),$$
provided that $n$ and $\sigma$ satisfy (27), where $p$ is defined in (9).
Proof Equations (22) and (35) give
$$\widehat{H}_n(t) - H(t) = \langle\widehat{R}_n(t) - R(t), R'(t)\rangle = \langle(\Pi - \widehat{\Pi}_n)Y - \Pi\eta,\, (A^T)^\dagger B(t)^{-2}A^TY\rangle = \langle(\Pi - \widehat{\Pi}_n)Y - \Pi\eta,\, (tAA^T + (1-t)I)^{-2}QY\rangle,$$
where we observe that $tAA^T + (1-t)I$ is invertible on $\operatorname{ran}Q$ and $A^TY = A^TQY$. Hence,
$$|\widehat{H}_n(t) - H(t)| \le \|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\|\Big(\|(tAA^T + (1-t)I)^{-2}AX\| + \sigma\|(tAA^T + (1-t)I)^{-2}QW\|\Big).$$
Furthermore, recalling that $AX = \Pi AX$ and $\Pi u_i = 0$ for all $i > p$, (31) implies that
$$\|(tAA^T + (1-t)I)^{-2}AX\| \le \frac{\sigma_p\|X\|}{(t\sigma_p^2 + (1-t))^2} \lesssim \frac{1}{\sigma_p^3}(\sqrt{h} + \tau), \qquad \|(tAA^T + (1-t)I)^{-2}QW\| \le \frac{\|QW\|}{(t\sigma_d^2 + (1-t))^2} \lesssim \frac{1}{\sigma_d^4}(\sqrt{d} + \tau)$$
hold with probability greater than $1 - 4e^{-\tau^2}$. Bound (32) provides the desired claim.
The uniform approximation result of Theorem 11 allows us to claim that any $\widehat{t}_n \in [0,1]$ such that $\widehat{H}_n(\widehat{t}_n) = 0$ can be attempted as a proxy for the optimal parameter $t^*$, especially if it is the only root in the interval $(0,1)$.
Nevertheless, since $\widehat{H}_n$ is a sum of $d$ rational functions with polynomial numerator of degree 1 and polynomial denominator of degree 3, the computation of its zeros in $[0,1]$ is equivalent to the computation of the roots of a polynomial of degree $3(d-1) + 1 = 3d - 2$. The computation cannot be done analytically for $d > 2$, because it would require the solution of a polynomial equation of degree larger than 4. For $d > 2$, we are forced to use numerical methods, but this is not a major obstacle, as there are by now plenty of stable and reliable routines to perform such a task (for instance, Newton's method, or the numerical computation of the eigenvalues of the companion matrix, just to mention a few).
We provide below relatively simple numerical experiments to validate the theoretical results reported above. In Figure 2 we show optimal parameters $t^*$ and corresponding approximations $\widehat{t}_n$ (computed by numerical solution of the scalar nonlinear equation $\widehat{H}_n(t) = 0$ on $[0,1]$), for $n$ different data $Y = AX + \eta$. The agreement of the two parameters $t^*$ and $\widehat{t}_n$ is visually very convincing and their statistical (empirical) distributions reported in Figure 3 are also very close.
Figure 2: Optimal parameters $t^*$ and corresponding approximations $\widehat{t}_n$ ($n = 1000$) for 50 different new data $Y = AX + \eta$ for $X \in \mathbb{R}^d$ and $\eta \in \mathbb{R}^m$ Gaussian vectors, $d = 200$, $m = 60$ and $A \in \mathbb{R}^{60\times 200}$. Here we assumed that $X \in \mathcal{V}$ for $\mathcal{V} = \mathrm{span}\{e_1, \dots, e_5\}$. We designed the matrix in such a way that the spectrum is vanishing, i.e., $\sigma_{\min} \approx 0$. Here we considered as noise level $\sigma = 0.03$, so that the optimal parameter $t^*$ is rather concentrated around 0.7. The agreement of the two parameters $t^*$ and $\widehat{t}_n$ is visually very convincing.

In the next two sections we discuss special cases where we can provide even more precise statements and explicit bounds.
Figure 3: Empirical distribution of the optimal parameters $t^*$ (left) and the corresponding empirical distribution of the approximating parameters $\widehat{t}_n$ (right) for 1000 randomly generated data $Y = AX + \eta$ with the same noise level. The statistical agreement of the two parameters $t^*$ and $\widehat{t}_n$ is shown.
4. Toy examples
In the following we specialize the results of the previous sections to simple cases, which allow one to understand more precisely the behavior of the different estimators presented in Theorem 8.
4.1 Deterministic sparse signal and Bernoulli noise in two dimensions
From Theorem 8 one might conclude that the estimators $\widehat{X}$, $Z^{t^*}$, $Z^{\widehat{t}_n}$ all approximate $X$ with an error at most proportional to $B(n, \tau, \sigma)$. With this in mind, one might be induced to conclude that computing $\widehat{X}$ is a sufficient effort, with no need of considering the empirical estimator $Z^{\widehat{t}_n}$, hence no need for computing $\widehat{t}_n$. In this section we discuss precisely this issue in some detail on a concrete toy example, for which explicit computations are easily performed.
Let us consider a deterministic sparse vector $X = (1, 0)^T \in \mathbb{R}^2$ and a stochastic noise $W = (W_1, W_2)^T \in \mathbb{R}^2$ with Bernoulli entries, i.e., $W_i = \pm 1$ with probability $1/2$. This noise distribution is indeed sub-gaussian with zero mean. For the sake of simplicity, we now fix an orthogonal matrix $A \in \mathbb{R}^{2\times 2}$, which we express in terms of its vector-columns as follows: $A = (A_1 \mid A_2)$. Notice that $\|W\| = \sqrt{2}$ with probability 1 and $\|X\| = 1$. Hence, we may want to consider a noise level $\sigma \in [-1/\sqrt{2}, 1/\sqrt{2}]$. In view of the fact that the sign of $\sigma$ would simply produce a symmetrization of the signs of the components of the noise, we consider from now on only the cases where $W_1 \ge 0$. The other cases are obtained simply by changing the sign of $\sigma$.
We are given measurements
$$Y = AX + \sigma W.$$
While $X$ is deterministic (and this is just to simplify the computations), $Y$ is stochastic given the randomness of the noise. Hence, noticing that $Y = AX + \sigma W = A_1 + \sigma W$ and observing that $\mathbb{E}[W] = 0$, we obtain
$$\Sigma_Y = \int YY^T\, d\mathbb{P}(W) = \int (A_1 + \sigma W)(A_1 + \sigma W)^T\, d\mathbb{P}(W) = A_1A_1^T + \sigma^2 I.$$
It is also convenient to write $\Sigma_Y$ by means of its complete singular value decomposition:
$$\Sigma_Y = (A_1 \; A_2)\begin{pmatrix} 1 + \sigma^2 & 0 \\ 0 & \sigma^2 \end{pmatrix}\begin{pmatrix} A_1^T \\ A_2^T \end{pmatrix}.$$
4.2 Exact projection
The projection onto the first largest singular space of $\Sigma_Y$ is simply given by
$$\Pi\xi = \langle\xi, A_1\rangle A_1.$$
This implies that
$$\Pi Y = A_1 + \sigma\langle W, A_1\rangle A_1.$$
From now on, we denote
$$\sigma_i = \sigma\langle W, A_i\rangle, \qquad i = 1, 2.$$
Let us stress that these parameters indicate how strongly the particular noise instance $W$ is correlated with $\mathcal{W} = \mathrm{span}\{A_1 = AX\}$. If $|\sigma_1|$ is large, then there is strong concentration of the particular noise instance on $\mathcal{W}$ and necessarily $|\sigma_2|$ is relatively small. If, vice versa, $|\sigma_2|$ is large, then there is poor correlation of the particular noise instance with $\mathcal{W}$.
We observe now that, $A$ being orthogonal, it is invertible and
$$I = A^TA = A^{-1}A = (A^{-1}A_1 \; A^{-1}A_2).$$
This means that $A^{-1}A_1 = (1, 0)^T$ and $A^{-1}A_2 = (0, 1)^T$, and we compute explicitly
$$\bar{X} = A^{-1}\Pi Y = A^{-1}A_1 + \sigma_1 A^{-1}A_1 = (1 + \sigma_1, 0)^T.$$
It is important to notice that for $\sigma_1$ small, $\bar{X}$ is indeed a good proxy for $X$, and it does not belong to the set of possible Tikhonov regularizers, i.e., the minimizers of the functional
$$t\|AZ - Y\|_2^2 + (1-t)\|Z\|_2^2 \qquad (36)$$
for some $t \in [0,1]$. This proxy $\bar{X}$ for $X$ approximates it with an error exactly computed by
$$\|\bar{X} - X\| = |\sigma_1| \le \sqrt{2}\,|\sigma|. \qquad (37)$$
The minimizer of (36) is given by
$$Z^t = t(tA^TA + (1-t)I)^{-1}A^TY = t\big(t(A^T - A^{-1})A + I\big)^{-1}\begin{pmatrix} A_1^T \\ A_2^T \end{pmatrix}(A_1 + \sigma W).$$
In view of the orthogonality of $A$ we have $A^T = A^{-1}$, whence $t(A^T - A^{-1})A = 0$, and the simplification
$$Z^t = t\begin{pmatrix} A_1^T \\ A_2^T \end{pmatrix}(A_1 + \sigma W) = t(1 + \sigma_1, \sigma_2)^T.$$
Now it is not difficult to compute
$$R(t)^2 = \|Z^t - X\|_2^2 = (t(1+\sigma_1) - 1)^2 + t^2\sigma_2^2 = ((1+\sigma_1)^2 + \sigma_2^2)t^2 - 2t(1+\sigma_1) + 1.$$
This quantity is optimized for
$$t^* = \frac{1+\sigma_1}{(1+\sigma_1)^2 + \sigma_2^2}.$$
The direct substitution gives
$$R(t^*) = \frac{|\sigma_2|}{\sqrt{(1+\sigma_1)^2 + \sigma_2^2}}.$$
And now we notice that, according to (37), one can have either
$$\|\bar{X} - X\| = |\sigma_1| < \frac{|\sigma_2|}{\sqrt{(1+\sigma_1)^2 + \sigma_2^2}} = \|Z^{t^*} - X\|$$
or
$$\|\bar{X} - X\| = |\sigma_1| > \frac{|\sigma_2|}{\sqrt{(1+\sigma_1)^2 + \sigma_2^2}} = \|Z^{t^*} - X\|,$$
very much depending on $\sigma_i = \sigma\langle W, A_i\rangle$. Hence, despite the fact that $\bar{X}$ does not belong to the set of minimizers of (36), it can perfectly happen that it is a better proxy of $X$ than $Z^{t^*}$.
Recalling that $\bar{X} = (1+\sigma_1, 0)^T$ and $Z^t = t(1+\sigma_1, \sigma_2)^T$, let us now consider the error
$$\bar{R}(t)^2 = \|Z^t - \bar{X}\|^2 = ((1+\sigma_1)^2 + \sigma_2^2)t^2 - 2t(1+\sigma_1)^2 + (1+\sigma_1)^2.$$
This is now optimized for
$$\bar{t} = \frac{(1+\sigma_1)^2}{(1+\sigma_1)^2 + \sigma_2^2}.$$
Hence we have
$$R(\bar{t})^2 = \|Z^{\bar{t}} - X\|^2 = ((1+\sigma_1)^2 + \sigma_2^2)\bar{t}^2 - 2\bar{t}(1+\sigma_1) + 1 = \frac{\sigma_1^2(1+\sigma_1)^2 + \sigma_2^2}{(1+\sigma_1)^2 + \sigma_2^2},$$
or
$$R(\bar{t}) = \frac{\sqrt{\sigma_1^2(1+\sigma_1)^2 + \sigma_2^2}}{\sqrt{(1+\sigma_1)^2 + \sigma_2^2}}.$$
In this case we notice that, in view of the fact that $\sigma_1^2 \le 1$ for $|\sigma| \le 1/\sqrt{2}$, one can have only
$$\|\bar{X} - X\| = |\sigma_1| \le \frac{\sqrt{\sigma_1^2(1+\sigma_1)^2 + \sigma_2^2}}{\sqrt{(1+\sigma_1)^2 + \sigma_2^2}} = |\sigma_1|\,\frac{\sqrt{(1+\sigma_1)^2 + \sigma_2^2/\sigma_1^2}}{\sqrt{(1+\sigma_1)^2 + \sigma_2^2}} = \|Z^{\bar{t}} - X\|,$$
independently of $\sigma_i = \sigma\langle W, A_i\rangle$. The only way of reversing the inequality is by allowing a noise level $\sigma$ such that $\sigma_1^2 > 1$, which would mean that we have noise significantly larger than the signal.
4.2.1 Concrete examples
Let us make a few concrete examples. Let us recall our assumption that $\operatorname{sign}W_1 = +1$, while the sign of $W_2$ is kept arbitrary. Suppose that $A_1 = \frac{W}{\|W\|}$; then $\sigma_1 = \sigma\sqrt{2}$ and $\sigma_2 = 0$. In this case we get
$$\sigma\sqrt{2} = \|\bar{X} - X\| = \|Z^{\bar{t}} - X\|,$$
and $\bar{X} = Z^{\bar{t}}$. The other extreme case is given by $A_2 = \frac{W}{\|W\|}$; then $\sigma_1 = 0$ and $\sigma_2 = \pm\sigma\sqrt{2}$. In this case $\bar{X} = X$, while $R(\bar{t}) = R(t^*) = |\sigma|\sqrt{2/(1+2\sigma^2)} > 0$. An intermediate case is given precisely by $A = I$, which we shall investigate in more generality in Section 4.4. For this choice we have $\sigma_1 = \sigma$ and $\sigma_2 = \pm\sigma$. Notice that the sign of $\sigma_2$ does not matter in the expressions of $R(\bar{t})$ and $R(t^*)$ because it always appears squared. Instead, the sign of $\sigma_1$ matters and depends on the choice of the sign of $\sigma$. We obtain
$$R(t^*) = \frac{|\sigma|}{\sqrt{(1+\sigma)^2 + \sigma^2}}, \qquad R(\bar{t}) = \sqrt{\frac{\sigma^2(2 + \sigma(2+\sigma))}{1 + 2\sigma(1+\sigma)}},$$
and, of course,
$$\|\bar{X} - X\| = |\sigma|.$$
In this case, $R(t^*) \le \|\bar{X} - X\|$ if and only if $\sigma \ge 0$, while, unfortunately, $R(\bar{t}) > \|\bar{X} - X\|$ for all $|\sigma| < 1$ and the inequality gets reversed only if $|\sigma| \ge 1$.
Figure 4: Comparison of $\|\bar{X} - X\|$, $R(t^*)$, and $R(\bar{t})$ as a function of $\sigma$ for $\sigma_1 = \sigma$.
In Figure 5 we illustrate the case of σ1 = 1.2σ and in Figure 6 the case of σ1 = 0.7σ. Let
us now focus for a moment on the purely denoising case A = I. In this case, the projection
Π is actually the projection onto the subspace where X is defined. While X̄ belongs also to
Figure 5: Comparison of $\|\bar{X} - X\|$, $R(t^*)$, and $R(\bar{t})$ as a function of $\sigma$ for $\sigma_1 = 1.2\sigma$.
Figure 6: Comparison of $\|\bar{X} - X\|$, $R(t^*)$, and $R(\bar{t})$ as a function of $\sigma$ for $\sigma_1 = 0.7\sigma$.
that subspace, this is not true for $Z^t$ in general. Since we do dispose of (an approximation to) $\Pi$ when $A = I$, it would be better to compare
$$\|\bar{X} - X\|, \qquad \|\Pi Z^{t^*} - X\|, \qquad \|\Pi Z^{\bar{t}} - X\|$$
as functions of $\sigma$. In general, we have
$$R_\Pi(t)^2 = \|\Pi Z^t - X\|^2 = (1+\sigma_1)^2t^2 - 2t(1+\sigma_1) + 1,$$
and
$$R_\Pi(\bar{t}) = \frac{\sigma_2^2 - \sigma_1(\sigma_1+1)^2}{\sigma_2^2 + (\sigma_1+1)^2}, \qquad R_\Pi(t^*) = \frac{\sigma_2^2}{\sigma_2^2 + (\sigma_1+1)^2}.$$
The comparison of these quantities is reported in Figure 7 and one can observe that, at least for $\sigma \ge 0$, both $\Pi Z^{t^*}$ and $\Pi Z^{\bar{t}}$ are better approximations of $X$ than $\bar{X}$.
Figure 7: Comparison of $\|\bar{X} - X\|$, $R_\Pi(\bar{t})$, and $R_\Pi(t^*)$ as a function of $\sigma$ for $\sigma_1 = \sigma$.
4.3 Empirical projections
In concrete applications we do not dispose of $\Pi$ but only of its empirical proxy $\widehat{\Pi}_n$. According to Theorem 4 they are related by the approximation
$$\|\Pi - \widehat{\Pi}_n\| \le \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{m}{n}} + \frac{\tau}{\sqrt{n}}\right),$$
with high probability (depending on $\tau > 0$), where $m = 1$ in our example and $\lambda_{\min} = 1$ is the smallest nonzero eigenvalue of $\Sigma_{AX} = A_1A_1^T$. Hence, we compute
$$\widehat{X} = A^{-1}\widehat{\Pi}_nY,$$
and, since $A$ is an orthogonal matrix,
$$\|X - \widehat{X}\| = \|A^{-1}(\Pi - \widehat{\Pi}_n)Y\| = \|(\Pi - \widehat{\Pi}_n)Y\| \le \frac{1+\tau}{\sqrt{n}}\|A_1 + \sigma W\| \le \frac{1+\tau}{\sqrt{n}}\sqrt{1 + 2\sigma^2 + 2\sigma_1} \le \frac{1+\tau}{\sqrt{n}}(1 + \sqrt{2}\sigma).$$
Here we used
$$\|A_1 + \sigma W\|^2 = \|A_1\|^2 + \sigma^2\|W\|^2 + 2\sigma\langle A_1, W\rangle = 1 + 2\sigma^2 + 2\sigma_1.$$
In view of the approximation relationship above, we can now express
$$\widehat{X} = (1 + \sigma_1 + \epsilon_1, \epsilon_2)^T,$$
where $\sqrt{\epsilon_1^2 + \epsilon_2^2} \le \frac{1+\tau}{\sqrt{n}}(1 + \sqrt{2}\sigma)$. Then
$$\|X - \widehat{X}\| = \sqrt{(\sigma_1 + \epsilon_1)^2 + \epsilon_2^2}.$$
One also has
$$R_n(t)^2 = \|Z^t - \widehat{X}\|^2 = (t(1+\sigma_1) - (1+\sigma_1+\epsilon_1))^2 + (t\sigma_2 - \epsilon_2)^2 = t^2((1+\sigma_1)^2 + \sigma_2^2) - 2t(\sigma_2\epsilon_2 + (1+\sigma_1)\epsilon_1 + (1+\sigma_1)^2) + \mathrm{const}.$$
Hence, its optimizer is given by
$$\widehat{t}_n = \frac{\sigma_2\epsilon_2 + (1+\sigma_1)\epsilon_1 + (1+\sigma_1)^2}{(1+\sigma_1)^2 + \sigma_2^2}$$
and
$$R(\widehat{t}_n) = \|Z^{\widehat{t}_n} - X\| = \sqrt{\frac{\sigma_2^2(1+\epsilon_2)^2 + 2\epsilon_2(1+\sigma_1)(\epsilon_1+\sigma_1)\sigma_2 + (1+\sigma_1)^2(\epsilon_1+\sigma_1)^2}{(1+\sigma_1)^2 + \sigma_2^2}}.$$
4.3.1 Concrete example
We compare in Figure 8 the behavior of the difference of the errors
$$\|X - \widehat{X}\| - \|Z^{\widehat{t}_n} - X\|$$
depending on $\sigma$, for $\sigma_1 = 1.3\sigma$ and $n = 100$. In this case, the empirical estimator $Z^{\widehat{t}_n}$ is a significantly better approximation to $X$ than $\widehat{X}$ for all noise levels $\sigma \in [-0.07, 1/\sqrt{2}]$, while $\widehat{X}$ keeps being the best estimator, e.g., for $\sigma \in [-1/\sqrt{2}, -0.1]$.

Figure 8: The difference of the errors $\|X - \widehat{X}\| - \|Z^{\widehat{t}_n} - X\|$ as a function of $\sigma$ for $\sigma_1 = 1.3\sigma$. For $\sigma \in [-0.07, 1/\sqrt{2}]$ one has $\|X - \widehat{X}\| > \|Z^{\widehat{t}_n} - X\|$.
From these simple two-dimensional toy examples and related numerical experiments we deduce the following general principles:
- The performances of the estimators $\bar{X}$, $Z^{t^*}$, $Z^{\widehat{t}_n}$ depend very much on how the noise is distributed with respect to the subspace $\mathcal{W}$ to which $AX$ belongs, and on its sign; if the noise is not very much concentrated on such a subspace, then $\bar{X}$ tends to be a better estimator, see Fig. 6, while a concentrated noise would make $\bar{X}$ and $Z^{\widehat{t}_n}$ rather equivalent and $Z^{t^*}$ the best one for some ranges of noise, see Fig. 4 and Fig. 5.
- If the number $n$ of samples is not very large (in the example above we considered $n = 100$), so that the approximation $A^{-1}\widehat{\Pi}_nY = \widehat{X} \approx \bar{X} = A^{-1}\Pi Y$ would still be too rough, then the estimator $Z^{\widehat{t}_n}$ may beat $\widehat{X}$ for significant ranges of noise, independently of its instance correlation with $\mathcal{W}$, see Fig. 8.
Since it is a priori impossible to know precisely in which situations $\widehat{X}$ or $Z^{\widehat{t}_n}$ performs better as an estimator (as it depends on the correlation with $\mathcal{W}$ of the particular noise instance), in practice it is convenient to compute them both, especially when one does not have a large number $n$ of samples at disposal.
4.4 The case A = I
As yet one more example, this time in higher dimension, we consider the simple case where $m = d$ and $A = I$, so that $\mathcal{W} = \mathcal{V}$. In this case, we get that
$$R(t) = -(1-t)X + t\eta.$$
If $Y \neq 0$, an easy computation shows that the minimizer of the reconstruction error $\|R(t)\|^2$ is
$$t^* = t^*(Y, X) = \varphi\left(\frac{\langle Y, X\rangle}{\langle Y, Y\rangle}\right), \qquad (38)$$
where
$$\varphi(s) = \begin{cases} 0 & \text{if } s \le 0, \\ s & \text{if } 0 < s < 1, \\ 1 & \text{if } s \ge 1. \end{cases}$$
If $Y = 0$, the solution $Z^t$ does not depend on $t$, so that there is not a unique optimal parameter and we set $t^* = 0$.
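For illustration (a sketch we add, valid only in the $A = I$ setting just described; the function names are ours), the optimal parameter of (38) and its empirical counterpart used in the proof of Theorem 12 take one line each:

```python
import numpy as np

def phi(s):
    """Clip s to [0, 1], as in Eq. (38)."""
    return min(max(s, 0.0), 1.0)

def t_star_identity(Y, X):
    """Optimal parameter t* = phi(<Y, X> / <Y, Y>) for the denoising case A = I."""
    return 0.0 if not np.any(Y) else phi(float(Y @ X) / float(Y @ Y))

def t_hat_identity(Y, X_hat):
    """Empirical counterpart obtained by replacing X with X_hat (cf. the proof of Theorem 12)."""
    return 0.0 if not np.any(Y) else phi(float(Y @ X_hat) / float(Y @ Y))
```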
We further assume that $X$ is bounded away from 0 with high probability; more precisely,
$$\mathbb{P}[\|X\| < r] \le 2\exp\left(-\frac{1}{r^2}\right). \qquad (39)$$
This assumption is necessary to avoid that the noise is much bigger than the signal.
Theorem 12 Given $\tau \ge 1$, with probability greater than $1 - 6e^{-\tau^2}$,
$$|\widehat{t}_n - t^*| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{d}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right) + \sigma\ln\left(\frac{e}{\sigma}\right)(\sqrt{h} + \tau), \qquad (40)$$
provided that
$$n \gtrsim (\sqrt{d} + \tau)^2\max\left\{\frac{64}{\lambda_{\min}^2}, 1\right\}, \qquad (41a)$$
$$\sigma < \min\left\{\sqrt{\frac{\lambda_{\min}}{8}},\; e^{1-16\tau^2}\right\}. \qquad (41b)$$
Proof Without loss of generality, we assume that $\lambda_{\min} \le 8$. Furthermore, on the event $\{Y = 0\}$, by definition $t^* = \widehat{t}_n = 0$, so that we can further assume that $Y \neq 0$.
Since $\varphi$ is a Lipschitz continuous function with Lipschitz constant 1,
$$|\widehat{t}_n - t^*| \le \frac{|\langle Y, \widehat{X} - X\rangle|}{\|Y\|^2} \le \frac{\|(\Pi - \widehat{\Pi}_n)Y - \Pi\eta\|}{\|Y\|} \le \|\Pi - \widehat{\Pi}_n\| + \sigma\frac{\|\Pi W\|}{\|Y\|},$$
where the second inequality is a consequence of (34). Since (41a) and (41b) imply (27) and $m = d$, by (26) we get
$$\|\widehat{\Pi}_n - \Pi\| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{d}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right)$$
with probability greater than $1 - 2e^{-\tau^2}$. It is now convenient to denote the probability distribution of $X$ as $\rho_X$, i.e., $X \sim \rho_X$. For fixed $r > 0$, set
$$\Omega = \{\|X\| < r\} \cup \{2\sigma\langle X, W\rangle < -\|X\|^2/2\},$$
whose probability is bounded by
$$\mathbb{P}[\Omega] \le \mathbb{P}[\|X\| < r] + \mathbb{P}[4\sigma\langle X, W\rangle < -\|X\|^2, \|X\| \ge r]
= \mathbb{P}[\|X\| < r] + \int_{\|x\|\ge r}\mathbb{P}[4\sigma\langle x, W\rangle < -\|x\|^2]\,d\rho_X(x)
\le \mathbb{P}[\|X\| < r] + \int_{\|x\|\ge r}\exp\left(-\frac{\|x\|^2}{256\sigma^2}\right)d\rho_X(x)
\le \mathbb{P}[\|X\| < r] + \exp\left(-\frac{r^2}{256\sigma^2}\right),$$
where we use (46c) with $\xi = -W$ (and the fact that $W$ and $X$ are independent), $\tau = \|x\|/(16\sigma)$ and $\|W\|_{\psi_2} = 1/\sqrt{2}$. With the choice $r = 16\tau/\ln(e/\sigma)$, we obtain
$$\mathbb{P}[\Omega] \le \mathbb{P}[\|X\| < 16\tau/\ln(e/\sigma)] + \exp\left(-\frac{\tau^2}{\sigma^2\ln^2(e/\sigma)}\right) \le \mathbb{P}[\|X\| < 16\tau/\ln(e/\sigma)] + \exp(-\tau^2),$$
where $\sigma \mapsto \sigma\ln(e/\sigma)$ is an increasing positive function on $(0,1]$, so that it is bounded by 1. Furthermore, by (41b), i.e., $16\tau/\ln(e/\sigma) \le \frac{1}{\tau}$, we have
$$\mathbb{P}[\|X\| < 16\tau/\ln(e/\sigma)] \le \mathbb{P}\left[\|X\| < \frac{1}{\tau}\right] \le 2\exp(-\tau^2)$$
by Assumption (39). Hence it holds that $\mathbb{P}[\Omega] \le 3e^{-\tau^2}$.
On the event $\Omega^c$,
$$\|Y\|^2 = \|X\|^2 + 2\sigma\langle X, W\rangle + \sigma^2\|W\|^2 \ge \|X\|^2 + 2\sigma\langle X, W\rangle \ge \|X\|^2/2 \ge r^2/2 \simeq \tau^2/\ln^2(e/\sigma) \ge 1/\ln^2(e/\sigma),$$
since $\tau \ge 1$. Finally, (51) with $\xi = \Pi W \in \mathcal{W}$ yields
$$\|\Pi W\| \lesssim (\sqrt{h} + \tau)$$
with probability greater than $1 - \exp(-\tau^2)$. Taking into account the above estimates, we conclude with probability greater than $1 - 4\exp(-\tau^2)$ that
$$\frac{\|\Pi W\|}{\|Y\|} \lesssim \ln\left(\frac{e}{\sigma}\right)(\sqrt{h} + \tau).$$
Then, with probability greater than $1 - 6\exp(-\tau^2)$, we conclude the estimate
$$|\widehat{t}_n - t^*| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{d}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right) + \sigma\ln(e/\sigma)(\sqrt{h} + \tau).$$
Remark 13 The function $\ln(e/\sigma)$ can be replaced by any positive function $f(\sigma)$ such that $\sigma f(\sigma)$ is an infinitesimal function bounded by 1 on the interval $(0,1]$. The condition (41b) becomes
$$\sigma < \min\left\{\sqrt{\frac{\lambda_{\min}}{8}}, 1\right\}, \qquad f(\sigma) \ge 16\tau^2,$$
and, if $f$ is strictly decreasing,
$$\sigma < \min\left\{\sqrt{\frac{\lambda_{\min}}{8}},\; 1,\; f^{-1}(16\tau^2)\right\}.$$
Theorem 12 shows that if the number $n$ of examples is large enough and the noise level is small enough, the estimator $\widehat{t}_n$ is a good approximation of the optimal value $t^*$. Let us stress that the number $n$ of samples needed to achieve a good accuracy depends at most algebraically on the dimension $d$; more precisely, $n = O(d)$. Hence, in this case one does not incur the curse of dimensionality. Moreover, the second term of the error estimate (40) gets smaller for smaller dimensionality $h$.
Remark 14 If there exists an orthonormal basis $(e_i)_i$ of $\mathbb{R}^d$ such that the random variables $\langle W, e_1\rangle, \dots, \langle W, e_d\rangle$ are independent with $\mathbb{E}[\langle W, e_1\rangle^2] = 1$, then Rudelson and Vershynin (2013, Theorem 2.1) showed that $\|W\|$ concentrates around $\sqrt{d}$ with high probability. Reasoning as in the proof of Theorem 12, by replacing $\|X\|^2$ with $\sigma^2\|W\|^2$, with high probability it holds that
$$|\widehat{t}_n - t^*| \lesssim \frac{1}{\lambda_{\min}}\left(\sqrt{\frac{d}{n}} + \frac{\tau}{\sqrt{n}} + \sigma^2\right) + \frac{1}{\sqrt{d}}(\sqrt{h} + \tau)\frac{\tau}{\sqrt{d}},$$
without assuming condition (39).
In Figures 9–12, we show examples of numerical agreement between optimal and estimated regularization parameters. In this case, the agreement between the optimal parameter $t^*$ and the learned parameter $\widehat{t}_n$ is overwhelming.

Figure 9: Optimal parameters $t^*$ and corresponding approximations $\widehat{t}_n$ ($n = 1000$) for 50 different data $Y = X + \eta$ for $X$ and $\eta$ generated randomly with Gaussian distributions in $\mathbb{R}^d$ for $d = 1000$. We assume that $X \in \mathcal{V}$ for $\mathcal{V} = \mathrm{span}\{e_1, \dots, e_5\}$.
Figure 10: Empirical distribution of the optimal parameters $t^*$ (left) and the corresponding empirical distribution of the learned parameters $\widehat{t}_n$ (right) for 500 randomly generated data $Y = X + \eta$ with the same noise level.
5. An explicit formula by linearization
While it is not possible to solve the equation $H(t) \approx \widehat{H}_n(t) = 0$ by analytic methods for $d > 2$ in the general case, one might attempt a linearization of this equation in certain regimes. It is well known that the optimal Tikhonov regularization parameter $\alpha^* = (1-t^*)/t^*$ converges to 0 for vanishing noise level, and this means that $t^* = t^*(\sigma) \to 1$ as $\sigma \to 0$. Hence, if the matrix $A$ has a significant spectral gap, i.e., $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_d \gg 0$, and $\sigma \approx 0$ is small enough, then
$$\sigma_i \gg (1 - t^*), \qquad (42)$$
and in this case
$$\widehat{h}_i(t) = \frac{(\sigma_i\widehat{\nu}_i\widehat{\xi}_i^{-1} + 1)t - 1}{((1-t) + t\sigma_i^2)^3} \approx \frac{(\sigma_i\widehat{\nu}_i\widehat{\xi}_i^{-1} + 1)t - 1}{\sigma_i^6}, \qquad t \approx t^*.$$
The above linear approximation is equivalent to replacing $B(t)^{-1}$ with $B(1)^{-1} = (A^TA)^{-1}$, and Equation (23) is replaced by the following proxy (at least if $t \approx t^*$):
$$\widehat{H}_n^{\mathrm{lin}}(t) = \langle tAA^T(Y - \widehat{\Pi}_nY) - (1-t)\widehat{\Pi}_nY,\, (AA^T)^{\dagger 3}Y\rangle
= \langle A(A^TA)^{-3}(-(1-t)\widehat{X} + tA^T\widehat{\eta}),\, A\widehat{X} + \widehat{\eta}\rangle$$
$$= \langle A(A^TA)^{-3}(\widehat{X} + A^T\widehat{\eta}),\, A\widehat{X} + \widehat{\eta}\rangle\, t - \langle A(A^TA)^{-3}\widehat{X},\, A\widehat{X} + \widehat{\eta}\rangle
= \left(\sum_{i=1}^{d}\frac{\widehat{\alpha}_i}{\sigma_i^5}(\sigma_i\widehat{\nu}_i\widehat{\xi}_i^{-1} + 1)\right)t - \sum_{i=1}^{d}\frac{\widehat{\alpha}_i}{\sigma_i^5}.$$
The only zero of $\widehat{H}_n^{\mathrm{lin}}(t)$ is
$$\widehat{t}_n^{\,\mathrm{lin}} = \frac{\langle\widehat{\Pi}_nY,\, (AA^T)^{\dagger 3}Y\rangle}{\langle AA^T(Y - \widehat{\Pi}_nY) + \widehat{\Pi}_nY,\, (AA^T)^{\dagger 3}Y\rangle}
= 1 - \frac{\langle Y - \widehat{\Pi}_nY,\, (AA^T)^{\dagger 2}Y\rangle}{\langle AA^T(Y - \widehat{\Pi}_nY) + \widehat{\Pi}_nY,\, (AA^T)^{\dagger 3}Y\rangle}$$
$$= \frac{\langle A(A^TA)^{-3}\widehat{X},\, A\widehat{X} + \widehat{\eta}\rangle}{\langle A(A^TA)^{-3}(\widehat{X} + A^T\widehat{\eta}),\, A\widehat{X} + \widehat{\eta}\rangle}
= 1 - \frac{\langle(AA^T)^{\dagger 2}\widehat{\eta},\, A\widehat{X} + \widehat{\eta}\rangle}{\langle A(A^TA)^{-3}(\widehat{X} + A^T\widehat{\eta}),\, A\widehat{X} + \widehat{\eta}\rangle}$$
$$= \frac{\sum_{i=1}^{d}\sigma_i^{-5}\widehat{\alpha}_i}{\sum_{i=1}^{d}\sigma_i^{-5}\widehat{\alpha}_i(\sigma_i\widehat{\nu}_i\widehat{\xi}_i^{-1} + 1)}
= 1 - \frac{\sum_{i=1}^{d}\sigma_i^{-4}\widehat{\alpha}_i\widehat{\nu}_i\widehat{\xi}_i^{-1}}{\sum_{i=1}^{d}\sigma_i^{-5}\widehat{\alpha}_i(\sigma_i\widehat{\nu}_i\widehat{\xi}_i^{-1} + 1)}.$$
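As a rough illustration of this closed form (our own sketch, using the SVD coefficients $\widehat{\xi}_i = \langle\widehat{X}, v_i\rangle$ and $\widehat{\nu}_i = \langle\widehat{\eta}, u_i\rangle$ introduced earlier), $\widehat{t}_n^{\,\mathrm{lin}}$ can be computed in $O(d)$ once those coefficients are available:

```python
import numpy as np

def t_lin(s, xi_hat, nu_hat):
    """Closed-form linearized parameter: sum(alpha/s^5) / sum(alpha/s^5 * (s*nu/xi + 1))."""
    alpha = xi_hat * (s * xi_hat + nu_hat)
    numer = np.sum(alpha / s**5)
    denom = np.sum(alpha / s**5 * (s * nu_hat / xi_hat + 1.0))
    return float(np.clip(numer / denom, 0.0, 1.0))   # clip to [0, 1] as a safeguard (our choice)
```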
In Figure 11, we present the comparison between the optimal parameters $t^*$ and their approximations $\widehat{t}_n^{\,\mathrm{lin}}$. Despite the fact that the gap between $\sigma_d$ and $1 - t^*$ is not as large as requested in (42), the agreement between $t^*$ and $\widehat{t}_n^{\,\mathrm{lin}}$ remains rather satisfactory. In Figure 12, we report the empirical distributions of the parameters, showing essentially their agreement.
Figure 11: Optimal parameters $t^*$ and corresponding approximations $\widehat{t}_n$ ($n = 1000$) for 50 different data $Y = AX + \eta$ for $X$ and $\eta$ generated as for the experiment of Figure 2. Here we considered a noise level $\sigma = 0.006$, so that the optimal parameter $t^*$ can be very close to 0.5 and the minimal singular value of $A$ is $\sigma_d \approx 0.7$.
Figure 12: Empirical distribution of the optimal parameters $t^*$ (left) and the corresponding empirical distribution of the approximating parameters $\widehat{t}_n^{\,\mathrm{lin}}$ (right) for 1000 randomly generated data $Y = AX + \eta$ with the same higher noise level. The statistical agreement of the two parameters $t^*$ and $\widehat{t}_n^{\,\mathrm{lin}}$ is shown.
6. Conclusions and a glimpse of future directions
Motivated by the challenge of regularization and parameter choice/learning relevant to many inverse problems in real life, in this paper we presented a method to determine the parameter, based on the use of a supervised machine learning framework. Under the assumption that the solution of the inverse problem is distributed sub-gaussianly over a low-dimensional linear subspace $\mathcal{V}$ and the noise is also sub-gaussian, we provided a rigorous theoretical justification for the learning procedure of the function which maps given noisy data into an optimal Tikhonov regularization parameter. We also presented and discussed explicit bounds for special cases and provided techniques for the practical implementation of the method.
Classical empirical computations of Tikhonov regularization parameters, for instance the discrepancy principle, see, e.g., (Bauer and Lukas, 2011; Engl et al., 1996), may assume knowledge of the noise level $\sigma$ and use regularized Tikhonov solutions for different choices of $t$,
$$Z^t = t(tA^TA + (1-t)I)^{-1}A^TY,$$
to return some parameter $\widehat{t}_\sigma$ which ideally provides $\sigma$-optimal asymptotic behavior of $Z^{\widehat{t}_\sigma}$ for noise level $\sigma \to 0$. To be a bit more concrete, by using either bisection-type methods or by fixing a predetermined grid of points $T = \{t^{(j)} : j = 1, 2, \dots\}$, one searches for some $t^{(\bar{j})} \in T$ for which $\|AZ^{t^{(\bar{j})}} - Y\| \approx \sigma$ and sets $\widehat{t}_\sigma := t^{(\bar{j})}$. For each $t = t^{(j)}$ the computation of $Z^t$ has cost of order $O(d^2)$ as soon as one has precomputed the SVD of $A$. Most of the empirical estimators then require the computation of $N_{\mathrm{it}}$ instances of $Z^t$ for a total of $O(N_{\mathrm{it}}d^2)$ operations to return $\widehat{t}_\sigma$.
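For illustration only, a hedged sketch of the grid variant just described (assuming a known noise level and reusing the `tikhonov_solution` helper from the earlier sketch) is:

```python
import numpy as np

def discrepancy_parameter(A, Y, sigma, grid=None):
    """Pick the grid point whose residual ||A Z^t - Y|| is closest to the noise level sigma."""
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    residuals = [abs(np.linalg.norm(A @ tikhonov_solution(A, Y, t) - Y) - sigma) for t in grid]
    return float(grid[int(np.argmin(residuals))])
```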
Our approach does not assume knowledge of $\sigma$ and it fits into the class of heuristic parameter choice rules, see (Kindermann, 2011); it requires first to have precomputed the empirical projection $\widehat{\Pi}_n$ by computing one single $h$-truncated SVD$^2$ of the empirical covariance matrix $\widehat{\Sigma}_n$ of dimension $m \times m$, where $m$ is assumed to be significantly smaller than $d$. In fact, the complexity of standard methods for computing a truncated SVD is $O(hm^2)$. When one uses randomized algorithms one can reduce the complexity even to $O(\log(h)m^2)$, see (Halko et al., 2011). Then one needs to solve the optimization problem
$$\min_{t \in [0,1]}\|Z^t - \widehat{X}\|^2. \qquad (43)$$
This optimization is equivalent to finding numerically the roots in $(0,1)$ of the function $\widehat{H}_n(t)$, which can be performed in the general case and for $d > 2$ only by iterative algorithms. They also require sequential evaluations of $Z^t$ of individual cost $O(d^2)$, but they usually converge very fast and they are guaranteed by our results to return $Z^{\widehat{t}} \approx Z^{t^*}$. If we denote by $N_{\mathrm{it}}^\circ$ the number of iterations needed for the root finding with appropriate accuracy, we obtain a total complexity of $O(N_{\mathrm{it}}^\circ d^2)$. In the general case, our approach is going to be more advantageous with respect to classical methods if $N_{\mathrm{it}}^\circ \ll N_{\mathrm{it}}$. However, in the special case where the noise level is relatively small compared to the minimal positive singular value of $A$, e.g., $\sigma_d \gg \sigma$, our arguments in Section 5 suggest that one can approximate the relevant root of $\widehat{H}_n(t)$, and hence $t^*$, with the smaller cost of $O(d^2)$ and no need of several evaluations of $Z^t$. As a byproduct of our results we also showed that
$$\widehat{X} = A^\dagger\widehat{\Pi}_nY \qquad (44)$$
is also a good estimator, and this one actually requires only the computation of $\widehat{\Pi}_n$ and then the execution of the operations in formula (44) of cost $O(md)$ (if we assume the precomputation of the SVD of $A$).

2. One needs to compute $h$ singular values and singular vectors up to the first significant spectral gap.
Our current efforts are devoted to the extension of the analysis to the case where the underlying space $\mathcal{V}$ is actually a smooth lower-dimensional nonlinear manifold. This extension will be realized by first approximating the nonlinear manifold locally, on a proper decomposition, by means of affine spaces as proposed in (Chen et al., 2013), and then applying our presented results on those local linear approximations.
Another interesting future direction consists of extending the approach to sets $\mathcal{V}$ that are unions of linear subspaces, as in the case of solutions expressible sparsely with respect to certain dictionaries. In this situation, one would need to consider different regularization techniques and possibly non-convex non-smooth penalty quasi-norms.
To provide a first glimpse of the feasibility of the latter possible extension, we consider below the problem of image denoising. In particular, as a simple example of an image denoising algorithm, we consider wavelet shrinkage (Donoho and Johnstone, 1994): given a noisy image $Y = X + \sigma W$ (already expressed in wavelet coordinates), where $\sigma$ is the level of Gaussian noise, the denoised image is obtained by
$$Z^\alpha = S_\alpha(Y) = \arg\min_{Z}\|Z - Y\|^2 + 2\alpha\|Z\|_{\ell^1},$$
where $\|\cdot\|_{\ell^1}$ denotes the $\ell^1$ norm, which promotes a sparse representation of the image with respect to a wavelet decomposition. Here, as earlier, we are interested in learning the high-dimensional function mapping noisy images into their optimal shrinkage parameters, i.e., an optimal solution of $\|Z^\alpha - X\|^2 \to \min_\alpha$.
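The minimizer of this functional is the entrywise soft-thresholding of the wavelet coefficients; a minimal sketch (ours, plain NumPy) is:

```python
import numpy as np

def soft_threshold(Y, alpha):
    """S_alpha(Y): entrywise minimizer of ||Z - Y||^2 + 2*alpha*||Z||_1."""
    return np.sign(Y) * np.maximum(np.abs(Y) - alpha, 0.0)
```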
Employing a properly modified version of the procedure described in this paper, we
are obtaining very exciting and promising results, in particular that the optimal shrinkage
parameter α essentially depends nonlinearly on very few (actually 1 or 2) linear evaluations
of Y . This is not a new observation and it is a data-driven verification of the well-known
results of (Donoho and Johnstone, 1994) and (Chambolle et al., 1998), establishing that the
optimal parameter depends essentially on two meta-features of the noisy image, i.e., the
noise level and its Besov regularity. In Figure 13 and Figure 14 we present the numerical results for wavelet shrinkage, which show that our approach chooses a nearly optimal
parameter in terms of peak signal-to-noise ratio (PSNR) and visual quality of the denoising.
Acknowledgments
E. De Vito is a member of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità
e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
M. Fornasier acknowledges the financial support of the ERC-Starting Grant HDSPCONTR
“High-Dimensional Sparse Optimal Control”. V. Naumova acknowledges the support of
project “Function-driven Data Learning in High Dimension” (FunDaHD) funded by the
Research Council of Norway.
Appendix A. Perturbation result for compact operators
We recall the following perturbation result for compact operators in Hilbert spaces (Anselone,
1971) and (Zwald and Blanchard, 2006, Theorem 3), whose proof also holds without the
assumption that the spectrum is simple (see Rosasco et al., 2010, Theorem 20).
Figure 13: Numerical Experiments for Wavelet Shrinkage (panels: Ground truth; Noisy image; Reconstructed image with learned parameter; Reconstructed image with optimal parameter).
Proposition 15 Let $A$ and $B$ be two compact positive operators on a Hilbert space $\mathcal{H}$, and denote by $(\alpha_j)_{j=1}^N$ and $(\beta_\ell)_{\ell=1}^L$ the corresponding families of (distinct) strictly positive eigenvalues of $A$ and $B$, ordered in a decreasing way. For all $1 \le j \le N$, denote by $P_j$ (resp. $Q_\ell$ with $1 \le \ell \le L$) the projection onto the vector space spanned by the eigenvectors of $A$ (resp. $B$) whose eigenvalues are greater or equal than $\alpha_j$ (resp. $\beta_\ell$). Let $j \le N$ be such that $\|A - B\| < \frac{\alpha_j - \alpha_{j+1}}{4}$; then there exists $\ell \le L$ so that
$$\beta_{\ell+1} < \frac{\alpha_j + \alpha_{j+1}}{2} < \beta_\ell, \qquad \|Q_\ell - P_j\| \le \frac{2}{\alpha_j - \alpha_{j+1}}\|A - B\|, \qquad \dim Q_\ell\mathcal{H} = \dim P_j\mathcal{H}. \qquad (45)$$
If $A$ and $B$ are Hilbert-Schmidt, the operator norm in the above bound can be replaced by the Hilbert-Schmidt norm.
Figure 14: PSNR between the ground truth image and its noisy version (green line); the ground truth and its denoised version with the optimal parameter (blue line) and the learned parameter (red line). The results are presented for 60 random images.

In the above proposition, if $N$ or $L$ are finite, $\alpha_{N+1} = 0$ or $\beta_{L+1} = 0$. We note that, if the eigenvalues of $A$ or $B$ are not simple, in general $\ell \neq j$. However, the above result shows that, given $j = 1, \dots, N$, there exists a unique $\ell = 1, \dots, L$ such that $\dim Q_\ell\mathcal{H} = \dim P_j\mathcal{H}$ and $\beta_\ell$ is the smallest eigenvalue of $B$ greater than $(\alpha_j + \alpha_{j+1})/2$.
Appendix B. Sub-gaussian vectors
We recall some facts about sub-gaussian random vectors; we follow the presentation in (Vershynin, 2012), which provides the proofs of the main results in Lemma 5.5.

Proposition 16 Let $\xi$ be a sub-gaussian random vector in $\mathbb{R}^d$. Then, for all $\tau > 0$ and $v \in \mathbb{R}^d$,
$$\mathbb{P}[|\langle\xi, v\rangle| > 3\|\xi\|_{\psi_2}\|v\|\tau] \le 2\exp(-\tau^2). \qquad (46a)$$
Under the further assumption that $\xi$ is centered, then
$$\mathbb{E}[\exp(\tau\langle\xi, v\rangle)] \le \exp\big(8\tau^2\|\xi\|_{\psi_2}^2\big), \qquad (46b)$$
$$\mathbb{P}[\langle\xi, v\rangle > 4\sqrt{2}\|\xi\|_{\psi_2}\|v\|\tau] \le \exp(-\tau^2). \qquad (46c)$$
Proof We follow the idea in Vershynin (2012, Lemma 5.5) of explicitly computing the
constants. By rescaling ξ to ξ/kξkψ2 , we can assume that kξkψ2 = 1.
Let c > 0 be a small constant to be fixed. Given, v ∈ S d−1 , set χ = hξ, vi, which is a
real sub-gaussian vector. By Markov inequality,
P[|χ| > τ ] = P[c|χ|2 > cτ 2 ] = P[exp(c|χ|2 ) > exp(cτ 2 )]
2
≤ E[exp(c|χ|2 )]e−cτ .
By (5), we get that E[|χ|q ] ≤ q q/2 , so that
E[exp(c|χ|2 )] = 1 +
+∞ k
X
c
k=1
k!
E[|χ|2k ] ≤ 1 +
+∞
X
(2ck)k
k=1
k!
+∞
≤1+
1X
2c
(2ce)k = 1 +
,
e
1 − 2ce
k=1
where we use the estimate k! ≥ e(k/e)k for k ≥ 1. Setting c = 1/9, 1 +
2c
1−2ce
< 2, so that
P[|χ| > 3τ ] ≤ 2 exp(−τ 2 ),
so that (46a) is proven.
Assume now that E[ξ] = 0. By (5.8) in Vershynin (2012)
+∞
X
τ
E[exp( χ)] ≤ 1 +
e
k=2
|τ |
√
k
k
,
(47)
and, by (5.9) in Vershynin (2012),
2
exp τ C
2
+∞
X
C|τ | 2h
√
.
≥1+
h
h=1
(48)
If |τ | ≤ 1, fix an even k ≥ 2 so that k = 2h with h ≥ 1. The k-th and (k + 1)-th terms of
the series (47) is
k+1
|τ | k
|τ |
C|τ | k
|τ | k
|τ | 2h
2
√
+ √
≤ √
≤2 √
= 2h h √
,
C 2
k+1
k
k
2h
k
where the right hand side is the h-th term of the series (48) and the last inequality holds
true if C ≥ 1. Under this assumption
τ
|τ | ≤ 1.
(49)
E[exp( χ)] ≤ exp τ 2 C 2
e
If |τ | ≥ 1, fix an odd k ≥ 3, so that k = 2h − 1 with h ≥ 2, then the k-th and (k + 1)-th
terms of the series (47) is
!h
√
k+1
|τ | k+1
|τ | 2h
|τ |
2h
|τ | 2h
|τ | k
√
√
≤2 √
+ √
=
≤ √
,
C 2 (2h − 1)
k+1
k
k
h
h
where the right hand side is the h-th term of the series (48) and the inequality holds true
provided that
√
√
2 2
2h
2
=
,
C ≥ sup
3
h≥2 (2h − 1)
which holds true with the choice C = 1. Furthermore, the term of (47) with k = 2 is clearly
bounded by term of (48) with h = 1, so that we get
τ
E[exp( χ)] ≤ exp τ 2
e
|τ | ≥ 1.
Together with bound (49) with $C = 1$, the above bound gives (46b) since $e^2 < 8$.
Finally, reasoning as in the proof of (46a) and by using (46b)
P[χ > τ ] = P [exp(cχ) > exp(cτ )]
≤ E[exp(cχ)]e−cτ
≤ exp(8c2 − cτ ),
which takes the minimum at c = τ /16. Hence,
P[χ > τ ] = exp(−
τ2
).
32
Remark 17 Both (46a) and (46b) (for suitable constants instead of $\|\xi\|_{\psi_2}$) are sufficient conditions for sub-gaussianity, and (46b) implies that $\mathbb{E}[\xi] = 0$; see (Vershynin, 2012, Lemma 5.5).
The following proposition bounds the Euclidean norm of a sub-gaussian vector. The proof is standard and essentially based on the results in Vershynin (2012), but we were not able to find a precise reference. The centered case is done in Rigolet (2015).

Proposition 18 Let $\xi$ be a sub-gaussian random vector in $\mathbb{R}^d$. Given $\tau > 0$, with probability greater than $1 - 2e^{-\tau^2}$,
$$\|\xi\| \le 3\|\xi\|_{\psi_2}(\sqrt{7d} + 2\tau) \le 9\|\xi\|_{\psi_2}(\sqrt{d} + \tau). \qquad (50)$$
If $\mathbb{E}[\xi] = 0$, then with probability greater than $1 - e^{-\tau^2}$,
$$\|\xi\| \le 8\|\xi\|_{\psi_2}\left(\sqrt{\tfrac{7}{2}d} + 2\tau\right) \le 16\|\xi\|_{\psi_2}(\sqrt{d} + \tau). \qquad (51)$$
Proof As usual, we assume that $\|\xi\|_{\psi_2} = 1$. Let $\mathcal{N}$ be a $1/2$-net of $S^{d-1}$. Lemmas 5.2 and 5.3 in Vershynin (2012) give
$$\|\xi\| \le 2\max_{v\in\mathcal{N}}\langle\xi, v\rangle, \qquad |\mathcal{N}| \le 5^d.$$
For fixed $v \in \mathcal{N}$, (46a) gives that
$$\mathbb{P}[|\langle\xi, v\rangle| > 3t] \le 2\exp(-t^2).$$
By the union bound,
$$\mathbb{P}[\|\xi\| > 6t] \le \mathbb{P}[\max_{v\in\mathcal{N}}|\langle\xi, v\rangle| > 3t] \le 2|\mathcal{N}|\exp(-t^2) \le 2\exp(d\ln 5 - t^2).$$
Bound (50) follows with the choice $t = \tau + \sqrt{7d/4}$, taking into account that $t^2 - d\ln 5 > \tau^2$ since $\ln 5 < 7/4$.
Assume that $\xi$ is centered and use (46c) instead of (46a). Then
$$\mathbb{P}[\|\xi\| > 8\sqrt{2}t] \le \mathbb{P}[\max_{v\in\mathcal{N}}|\langle\xi, v\rangle| > 4\sqrt{2}t] \le |\mathcal{N}|\exp(-t^2) \le \exp(d\ln 5 - t^2).$$
As above, the choice $t = \tau + \sqrt{7d/4}$ provides the bound (51).
Remark 19 Compare with Theorem 1.19 in Rigolet (2015), noting that by (46b) the parameter σ in Definition 1.2 of Rigolet (2015) is bounded by 4kξkψ2 .
The following result is a concentration inequality for the second moment of a sub-gaussian random vector; see Theorem 5.39 and Remark 5.40 in Vershynin (2012) and footnote 20.

Theorem 20 Let $\xi$ be a sub-gaussian random vector in $\mathbb{R}^d$. Given a family $\xi_1, \dots, \xi_n$ of random vectors independent and identically distributed as $\xi$, then for $\tau > 0$,
$$\mathbb{P}\left[\left\|\frac{1}{n}\sum_{i=1}^{n}\xi_i\otimes\xi_i - \mathbb{E}[\xi\otimes\xi]\right\| > \max\{\delta, \delta^2\}\right] \le 2e^{-\tau^2}, \qquad \text{where} \quad \delta = C_\xi\left(\sqrt{\frac{d}{n}} + \frac{\tau}{\sqrt{n}}\right),$$
and $C_\xi$ is a constant depending only on the sub-gaussian norm $\|\xi\|_{\psi_2}$.
References
William K. Allard, Guangliang Chen, and Mauro Maggioni. Multi-scale geometric methods
for data sets. II: Geometric multi-resolution analysis. Appl. Comput. Harmon. Anal., 32
(3):435–462, 2012. ISSN 1063-5203. doi: 10.1016/j.acha.2011.08.001.
Philip M. Anselone. Collectively compact operator approximation theory and applications
to integral equations. Prentice-Hall Inc., Englewood Cliffs, N. J., 1971.
Frank Bauer and Mark A. Lukas. Comparing parameter choice methods for regularization
of ill-posed problems. Math. Comput. Simul., 81(9):1795–1841, 2011.
Andrea Braides. Gamma-convergence for beginners. Lecture notes, available online at http://www.mat.uniroma2.it/ braides/0001/dotting.html, 2001.
Antonin Chambolle, Ronald DeVore, Nam-yong Lee, and Bradley Lucier. Nonlinear wavelet
image processing: variational problems, compression, and noise removal through wavelet
shrinkage. IEEE Trans. Image Process., 7(3):319–335, 1998.
Guangliang Chen, Anna V. Little, and Mauro Maggioni. Multi-resolution geometric analysis
for data in high dimensions. In Excursions in harmonic analysis. Volume 1, pages 259–
285. New York, NY: Birkhäuser/Springer, 2013.
David Donoho and Iain Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81:425–55, 1994.
Heinz W. Engl, Martin Hanke, and Andreas Neubauer. Regularization of inverse problems.
Dordrecht: Kluwer Academic Publishers, 1996.
N. Halko, P.G. Martinsson, and J.A. Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53
(2):217–288, 2011. ISSN 0036-1445; 1095-7200/e. doi: 10.1137/090771806.
Stefan Kindermann. Convergence analysis of minimization-based noise level-free parameter
choice rules for linear ill-posed problems. ETNA, Electron. Trans. Numer. Anal., 38:
233–257, 2011. ISSN 1068-9613/e.
Gitta Kutyniok and Demetrio Labate, editors. Shearlets. Multiscale analysis for multivariate
data. Boston, MA: Birkhäuser, 2012.
Anna V. Little, Mauro Maggioni, and Lorenzo Rosasco. Multiscale geometric methods for
data sets. I: Multiscale SVD, noise and curvature. Appl. Comput. Harmon. Anal., 43(3):
504–567, 2017. ISSN 1063-5203. doi: 10.1016/j.acha.2015.09.009.
Stéphane Mallat. A wavelet tour of signal processing. The sparse way. 3rd ed. Amsterdam:
Elsevier/Academic Press, 3rd ed. edition, 2009. ISBN 978-0-12-374370-1/hbk.
Erich Novak and Henryk Woźniakowski. Optimal order of convergence and (in)tractability
of multivariate approximation of smooth functions. Constr. Approx., 30(3):457–473, 2009.
Philippe Rigolet. 18.S997: High dimensional statistics. Lecture notes. Available on line
http://www-math.mit.edu/ rigollet/PDFs/RigNotes15.pdf, 2015.
Lorenzo Rosasco, Mikhail Belkin, and Ernesto De Vito. On learning with integral operators.
J. Mach. Learn. Res., 11:905–934, 2010.
Mark Rudelson and Roman Vershynin. Hanson-Wright inequality and sub-Gaussian concentration. Electron. Commun. Probab., 18(82):1–9, 2013.
Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In
Compressed sensing, pages 210–268. Cambridge Univ. Press, Cambridge, 2012.
Laurent Zwald and Gilles Blanchard. On the convergence of eigenspaces in kernel principal
component analysis. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural
Information Processing Systems 18, pages 1649–1656. MIT Press, Cambridge, MA, 2006.
Accepted as a workshop contribution at ICLR 2015
PURINE: A BI-GRAPH BASED DEEP LEARNING FRAMEWORK
arXiv:1412.6249v5 [cs.NE] 16 Apr 2015
Min Lin1,2 , Shuo Li3 , Xuan Luo3 & Shuicheng Yan2
1. Graduate School of integrated Sciences and Engineering
2. Department of Electrical and Computer Engineering
National University of Singapore
3. Zhiyuan College, Shanghai Jiao Tong University
{linmin, eleyans}@nus.edu.sg
[email protected]
[email protected]
ABSTRACT
In this paper, we introduce a novel deep learning framework, termed Purine. In Purine, a deep network is expressed as a bipartite graph (bi-graph), which is composed of interconnected operators and data tensors. With the bi-graph abstraction, networks are easily solvable with an event-driven task dispatcher. We then demonstrate that different parallelism schemes over GPUs and/or CPUs on single or multiple PCs can be universally implemented by graph composition. This spares researchers from coding for various parallelization schemes, and the same dispatcher can be used for solving variant graphs. Scheduled by the task dispatcher, memory transfers are fully overlapped with other computations, which greatly reduces the communication overhead and helps us achieve approximately linear acceleration.
1 INTRODUCTION
The need for training deep neural networks on large-scale datasets has motivated several research works that aim to accelerate the training process by parallelising the training on multiple CPUs or GPUs. There are two different ways to parallelize the training: (1) model parallelism, where the model is distributed to different computing nodes (Sutskever et al., 2014); and (2) data parallelism, where different nodes train on different samples for the same model (Seide et al., 2014; Chilimbi et al., 2014). Some of the works even use a hybrid of them (Krizhevsky, 2014; Dean et al., 2012; Le, 2013). For data parallelism, there are also two schemes regarding communication between the peers: (1) the allreduce approach, where all updates from the peers are aggregated at the synchronization point and the averaged update is broadcast back to the peers (Seide et al., 2014; Krizhevsky, 2014); and (2) the parameter server approach, which handles the reads and writes of the parameters asynchronously (Dean et al., 2012; Le, 2013; Chilimbi et al., 2014). Efficient implementations of the various parallelization schemes described by previous works are non-trivial.
To facilitate the implementation of various parallelization schemes, we built a bigraph-based deep
learning framework called “Purine”. It is named “Purine”, which is an analog of caffeine in molecular structure, because we benefited a lot from the open source Caffe framework (Jia et al., 2014) in
our research and the math functions used in Purine are ported from Caffe.
2 BI-GRAPH ABSTRACTION
Purine abstracts the processing procedure of deep neural networks into directed bipartite graphs
(Bi-Graphs). The Bi-Graph contains two types of vertices, tensors and operators. Directed edges
are only between tensors and operators and there is no interconnection within tensors or operators.
Figure 1 illustrates the Bi-Graph for the convolution layer defined in Caffe.
All feed-forward neural nets can be represented by a directed acyclic bipartite graph, which can be
solved by a universal task dispatcher. There are several works that use similar abstractions. For
[Figure 1 diagram omitted: panel (a) Caffe Convolution Layer; panel (b) Bipartite Graph with tensor nodes (Bottom, Weight, Bias, Top and their gradients) and operator nodes (Conv, Add bias, Conv w.r.t. bottom, Conv w.r.t. weight, Bias gradient).]
Figure 1: (a) shows the convolution layer defined in Caffe together with its inputs and outputs. (b) is
the corresponding bipartite graph that describes the underlying computation inside the convolution
layer. There are two types of vertices in the Bi-Graph. Boxes represent data tensors and the circles
represent operators. Operators are functions of the incoming tensors and the results of the functions
are put in the outgoing tensors.
example, the dataflow graph in Dryad (Isard et al., 2007) and Pig Latin (Olston et al., 2008) are the
same as the Bi-Graph abstraction introduced in this paper. Graphlab (Low et al., 2010) proposed
a more general abstraction which is applicable to iterative algorithms. However, these systems are
designed for general problems and do not support GPU. Theano (Bergstra et al., 2010) compiles
math expressions and their symbolic differentiations into graphs for evaluation. Though it supports
GPU and is widely used for deep learning, the ability to parallelize over multiple GPUs and over
GPUs on different machines is not complete.
2.1 TASK DISPATCHER
Purine solves a Bi-Graph by scheduling its operators with an event-driven task dispatcher. Execution of an operator is triggered when all of its incoming tensors are ready. A
tensor is ready when all its incoming operators have completed computation. The computation of
the Bi-Graph starts from the sources of the graph and stops when all the sinks are reached. This
process is scheduled by the task dispatcher.
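These readiness rules are easy to state as a worklist algorithm. The sketch below is a single-threaded Python stand-in for Purine's event-driven dispatcher (the real dispatcher is multithreaded C++); it assumes the Tensor/Operator bookkeeping from the earlier Bi-Graph sketch, plus an optional compute() method on operators, and that all operators of the graph are passed in.

```python
from collections import deque

def run_graph(operators):
    """Toy stand-in for the task dispatcher: fire an operator once all of its
    incoming tensors are ready; a tensor is ready once all of its producing
    operators have finished. Execution starts at the sources and ends when
    every sink has been reached."""
    producers_left = {}  # tensor   -> number of producers that have not finished
    inputs_left = {}     # operator -> number of input tensors not yet ready
    ready = deque()

    for op in operators:
        for t in op.inputs + op.outputs:
            producers_left.setdefault(t, len(t.producers))
        inputs_left[op] = sum(1 for t in op.inputs if producers_left[t] > 0)
        if inputs_left[op] == 0:  # source operator: schedule immediately
            ready.append(op)

    while ready:
        op = ready.popleft()
        if hasattr(op, "compute"):
            op.compute()  # launch the kernel / memcpy for this operator
        for t in op.outputs:
            producers_left[t] -= 1
            if producers_left[t] == 0:  # tensor just became ready
                for consumer in t.consumers:
                    inputs_left[consumer] -= 1
                    if inputs_left[consumer] == 0:
                        ready.append(consumer)
```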
2.2 ITERATIONS
It has been argued in (Low et al., 2010) that directed acyclic graphs cannot effectively express iterative algorithms, as the graph structure would depend on the number of iterations. We overcome this by iterating graphs: because the task dispatcher waits until all the sinks of a graph are reached, the end of each graph run acts as a synchronization point. Thus parallelizable operations can be put in a single graph, while sequential tasks (iterations) are implemented by iterating graphs. A concrete example is shown in Figure 2 and sketched below.
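As a hedged illustration of the scheme in Figure 2, the snippet below uses plain Python functions as stand-ins for Purine graphs: running one "graph" to completion plays the role of the dispatcher reaching all sinks, i.e. the synchronization point between iterations.

```python
# SGD by graph iteration (cf. Figure 2): the DNN graph writes updated weights
# into a separate "new params" buffer so the graph stays acyclic, and a second
# tiny graph swaps the old and new buffers before the next iteration.
params = {"w": 5.0, "new_w": 0.0}

def run_dnn_graph(p, lr=0.1):
    grad = 2.0 * p["w"]              # toy gradient of f(w) = w^2
    p["new_w"] = p["w"] - lr * grad  # place the update in "new params"

def run_swap_graph(p):
    p["w"], p["new_w"] = p["new_w"], p["w"]  # swap the "new" and "old" buffers

for step in range(50):               # iterations = repeated runs of the two graphs
    run_dnn_graph(params)            # graph 1: all sinks reached => sync point
    run_swap_graph(params)           # graph 2: parameters updated for next step

print(params["w"])                   # converges toward the minimizer w = 0
```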
3 PARALLELIZATION
Parallelization of the Bi-Graph on a cluster of CPUs, GPUs, or a mix of both can easily be implemented by introducing a "location" property for the tensors and operators. The "location" property uniquely identifies the computation resource (CPU/GPU) on which a tensor/operator should be allocated. It comprises two fields: hostname and device id. In a multi-machine cluster, hostname identifies the machine that the vertex resides on. Device id specifies whether the tensor/operator should be allocated on the CPU or a GPU, and the ordinal of the GPU if there are multiple GPUs installed on a single machine. Besides the "location" property, another property, "thread", is assigned to operators because both CPUs and GPUs support multithreading. Operators with the same thread id are queued in the same thread, while those with different ids are parallelized whenever possible. It is up to the user to decide the assignment of the graph over the computation resources.

Figure 2: Implementation of SGD by graph iteration. Every iteration of SGD calculates a modification of the network parameters and updates the parameters before the next iteration. Since directly updating the network parameters would form a cyclic loop in the graph, the update is dissected into two parts. (a) The DNN graph calculates the updated parameters and places them in "new params". (b) The swap graph swaps the memory addresses of the "new" and "old" parameters. As a whole, SGD is implemented by iterating the two graphs as in (c).
3.1 COPY OPERATOR
In the multidevice setting, data located on one device are not directly accessible by operators on
another. Thus a special “Copy” operator is introduced to cross the boundary, connecting parts of the
Bi-Graph on individual devices. The Copy operators, just like other operators, are scheduled by the
task dispatcher. Therefore it is straightforward to overlap copy operations with other computation
tasks by assigning different threads to them.
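A small sketch of how the "location"/"thread" properties and the Copy operator could be modeled follows; the Python names are hypothetical illustrations, not Purine's actual C++ implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Location:
    hostname: str  # machine the vertex resides on
    device: str    # "cpu", or "gpu0", "gpu1", ... for multiple GPUs

@dataclass
class Vertex:
    name: str
    location: Location
    thread: int = 0  # operators sharing a thread id run sequentially

def insert_copy(tensor, dst_location, thread):
    """Bridge a device boundary: a tensor on one device is not directly
    accessible from another, so a Copy operator (scheduled like any other
    operator, ideally on its own thread so it overlaps with computation)
    produces a mirror tensor at the destination."""
    mirror = Vertex(f"{tensor.name}@{dst_location.device}", dst_location)
    copy_op = Vertex(f"copy_{tensor.name}", dst_location, thread)
    return copy_op, mirror

grad = Vertex("fc1_grad", Location("host-a", "gpu0"))
copy_op, grad_on_gpu1 = insert_copy(grad, Location("host-a", "gpu1"), thread=7)
```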
3.2 TASK DISPATCHER
In the case of a single machine with multiple devices, only one dispatcher process is launched. Operators are associated with their threads and scheduled by the global task dispatcher. In the case of multiple machines with multiple devices, an individual dispatcher process is launched on each of the machines. Copy operators that copy data from machine A to machine B are sinks on machine A and sources on machine B. This way, each machine only needs to schedule its own subgraph, and no global scheduling mechanism or communication between dispatchers is necessary.
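Assuming each tensor and operator carries the Location property from the previous sketch, the per-machine partitioning could look like the following (a sketch with a hypothetical helper name): a cross-machine Copy operator ends up in both subgraphs, acting as a sink of the sender's subgraph and a source of the receiver's.

```python
def partition_by_host(operators):
    """Split a Bi-Graph into one subgraph per machine so that each machine's
    dispatcher schedules only its own vertices. An operator touching tensors
    on two hosts (i.e. a machine-to-machine Copy) is placed in both subgraphs:
    a sink on the sending host and a source on the receiving host."""
    subgraphs = {}
    for op in operators:
        hosts = {t.location.hostname for t in op.inputs + op.outputs}
        hosts.add(op.location.hostname)
        for host in hosts:
            subgraphs.setdefault(host, []).append(op)
    return subgraphs
```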
3.3 MODEL PARALLELISM
We demonstrate how model parallelism can be implemented in Purine by taking a two-layer fully connected neural network as an example; the approach extends easily to deeper networks. As is shown in Figure 3, execution of the two-layer network can be divided into three sequential steps, labeled A, B, and C. To keep resources busy all the time, the network is replicated three times and executed in order.
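The staggered execution of Figure 3(b) is a plain software pipeline. The following sketch (illustration only, not Purine code) prints which replica's stage runs on which device at each time step.

```python
# Pipeline schedule for model parallelism (cf. Figure 3): the network is split
# into three sequential stages A, B, C pinned to different devices, and three
# replicas are interleaved so every device stays busy once the pipeline fills.
stages = ["A", "B", "C"]
replicas = 3

for step in range(6):
    busy = []
    for device, stage in enumerate(stages):
        if step >= device:  # the pipeline is still filling for later stages
            replica = (step - device) % replicas + 1
            busy.append(f"device {device}: {stage}{replica}")
    print(f"step {step}: " + ", ".join(busy))
```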
3.4 DATA PARALLELISM
Data parallelism is a simple and straightforward way to parallelize deep networks. In data parallelism, each computation peer keeps a replica of the deep network. The communication between peers can be either synchronous or asynchronous. In the synchronous case, the gradients from the peers are gathered by the parameter server; the updated parameters are calculated and copied back to all the peers.
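A minimal sketch of this synchronous scheme (plain NumPy, not Purine's API): gradients from all peers are gathered at the parameter server, averaged, applied, and the resulting parameters are copied back to every peer.

```python
import numpy as np

def synchronous_update(params, peer_gradients, lr=0.01):
    """One synchronous data-parallel step at the parameter server."""
    avg_grad = np.mean(peer_gradients, axis=0)   # aggregate updates from all peers
    new_params = params - lr * avg_grad          # apply the averaged update
    return new_params, [new_params.copy() for _ in peer_gradients]  # copy back

params = np.zeros(4)
peer_grads = [np.random.randn(4) for _ in range(3)]  # one gradient per peer
params, per_peer_copies = synchronous_update(params, peer_grads)
```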
A hybrid of data parallelism and model parallelism has previously been proposed by Krizhevsky
(2014) in which the convolution layers use data parallelism and fully connected layers use model
parallelism. This is based on the observation that the number of parameters is large and thus the
communication cost is big for fully connected layers. The hybrid approach greatly reduces the
communication cost. A different approach to reduce communication overhead is to overlap the data
transfer with computation. Double buffering is proposed by Seide et al. (2014) to break a minibatch in half and exchange the gradients of the first half while doing the computation of the second half.

Figure 3: Implementing model parallelism in Purine. (a) The two-layer network can be divided into three subgraphs which execute in sequence. (b) The network is replicated three times and executed in order.
With the scheduling of the task dispatcher in Purine, we propose a more straightforward way to hide
the communication overhead. We show that data parallelism is feasible even for fully connected
layers, especially when the network is very deep. Since the fully connected layers are usually at
the top of the neural networks, exchange of the parameter gradients can be overlapped with the
backward computation of the convolution layers. As is shown in Figure 4, exchange of gradients in
the higher layer can be overlapped with the computation of lower layers. Gradient exchange of lower
layers could be less of a problem because they usually have a much smaller number of parameters.
Figure 4: Overlapping communication with computation: the gradient exchange of the weights (green arrows) can overlap in time with the forward/backward computation of other layers.
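As a rough sketch of the overlap in Figure 4 (Python threads are used purely for illustration; in Purine the same effect comes from putting Copy operators on their own dispatcher threads), the gradient exchange of a higher layer is launched as soon as its backward pass finishes and runs concurrently with the backward passes of the lower layers.

```python
import threading

def backward(layer):
    pass  # stand-in for the backward computation of one layer

def exchange_gradients(layer):
    pass  # stand-in for the network transfer of this layer's gradients

layers = ["fc2", "fc1", "conv3", "conv2", "conv1"]  # top of the net to bottom
transfers = []
for layer in layers:
    backward(layer)                                  # compute this layer's gradients
    t = threading.Thread(target=exchange_gradients, args=(layer,))
    t.start()                                        # overlap exchange with lower layers' backward
    transfers.append(t)
for t in transfers:
    t.join()                                         # all gradients exchanged before the weight update
```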
4 RESULTS
We carried out experiments on the Purine framework with data parallelism on GoogLeNet (Szegedy et al., 2014). Data parallelism often results in larger batch sizes, which previous studies have shown to be unfavorable for SGD convergence. In this paper we ignore the possible change in convergence rate and instead study how much more data can be processed per unit time with the parallelization.
We compared the number of images processed per second for GoogLeNet with different numbers
of GPUs for data parallelism. The batch size we use is 128 per GPU. There are 3 GPUs installed on
each workstation, interconnected with 10 Gigabit Ethernet.
As is shown in Table 1, the speed increases linearly as more GPUs are added. The speed is higher than in the previous version of this paper because we upgraded the CUDNN library to version 2, which is faster than version 1.
Note that the machines are connected by 10 Gigabit Ethernet, and thus data on the GPUs need to go through CPU memory to be transferred over the network. Even with this limitation, the speedup is linear thanks to the overlapping of communication with computation.
Table 1: Number of images processed per second with an increasing number of GPUs. (GPU counts of 3 or fewer are tested on a single machine; runs with more than 3 GPUs span multiple machines interconnected with 10 Gigabit Ethernet.)

Number of GPUs       1       2       3       6       9        12
Images per second    112.2   222.6   336.8   673.7   1010.5   1383.7
We profiled GoogLeNet running with 4 GPUs on a single machine; the results are shown in Figure 5. The memory copy of model parameters between CPU and GPUs is fully overlapped with the computation in the backward pass. The only overhead is in the first layer of the network, which results in the gap between iterations.
It is favorable to have a small batch size in stochastic gradient descent. Regarding parallelism, however, it is more favorable to have a larger batch size and thus a higher computation-to-communication ratio. We explored different batch sizes to find the minimum batch size that still achieves near-linear speedup.
Figure 5: Profiling results of Purine. Memory copies (rows 1 and 2: memcpy CPU to GPU and memcpy GPU to CPU) are overlapped with computation (row 3: CUDA kernel calls for the forward and backward passes, from the start of the graph to its end). The only overhead is the memory copy of the first convolution layer, which results in the gap between iterations.
Table 2: Number of images per second with 12 GPUs and different batch sizes.
Batch size per GPU    128      64       56       48       40       32
Images per second     1383.7   1299.1   1292.3   1279.7   1230.2   1099.8
Acceleration ratio    12       11.26    11.20    11.09    10.66    9.53
Table 2 shows the processing speed with different batch sizes. The acceleration ratio is not reduced much with batch size 64 compared to 128, and we can still achieve a 9.53-fold acceleration with 12 GPUs when the batch size is set to 32.
REFERENCES
James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume
Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU
math expression compiler. In Proceedings of the Python for Scientific Computing Conference
(SciPy), June 2010. Oral Presentation.
Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project adam:
Building an efficient and scalable deep learning training system. In Proceedings of the 11th
USENIX conference on Operating Systems Design and Implementation, pages 571–582. USENIX
Association, 2014.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior,
Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in
Neural Information Processing Systems, pages 1223–1231, 2012.
Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: distributed data-parallel programs from sequential building blocks. In ACM SIGOPS Operating Systems Review, volume 41, pages 59–72. ACM, 2007.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM,
2014.
Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint
arXiv:1404.5997, 2014.
Quoc V Le. Building high-level features using large scale unsupervised learning. In Acoustics,
Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8595–
8598. IEEE, 2013.
Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M
Hellerstein. Graphlab: A new framework for parallel machine learning. arXiv preprint
arXiv:1006.4990, 2010.
Christopher Olston, Benjamin Reed, Utkarsh Srivastava, Ravi Kumar, and Andrew Tomkins. Pig
latin: a not-so-foreign language for data processing. In Proceedings of the 2008 ACM SIGMOD
international conference on Management of data, pages 1099–1110. ACM, 2008.
Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its
application to data-parallel distributed training of speech dnns. In Fifteenth Annual Conference of
the International Speech Communication Association, 2014.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions.
arXiv preprint arXiv:1409.4842, 2014.